================================================================================
PROJECT GLASSHOUSE — COMPLETE BRIEFING DOCUMENT
The Launchpad TLP · thelaunchpadtlp.education
Prepared by Manus AI for Joaquín Antonio "Piqui" Muñoz Ortiz
April 5, 2026
================================================================================

Source: Google Gemini conversation — 20 turns, 2 Canvas documents
Canvas 1: "Deep Research and Fact-Checking" (31,277 chars) — Completed
  April 5, 2026
Canvas 2: "The Glasshouse Directive" whitepaper — God Mac.pdf

LEGAL NOTICE: Apple's EULA explicitly forbids running macOS on non-Apple
hardware. All methods described are theoretical, experimental, and legally
gray. The Corellium precedent (11th Circuit, 2023) protects security research
use cases only. For research and educational purposes only.

AI MANIFEST: https://glasshouse.thelaunchpadtlp.education/llms.txt
THIS FILE:   https://glasshouse.thelaunchpadtlp.education/plain.txt
FULL SITE:   https://glasshouse.thelaunchpadtlp.education/

================================================================================
SECTION 00: EXECUTIVE SUMMARY
================================================================================

Project Glasshouse is a theoretical and practical framework for virtualizing
macOS on non-Apple, hardware-agnostic cloud infrastructure. The computing
landscape is undergoing a radical shift away from the x86 architecture toward
ARM-based processing, typified by Apple's transition to Apple Silicon. This
briefing analyzes the state-of-the-art technologies that have shattered the
"Emulation Tax," including hybrid binary translators (Arancini) and
Vulkan-to-Metal graphics pipelines (KosmicKrisp). It explores how modern AI
agents securely host and operate these operating systems using microVM
sandboxing and the Model Context Protocol (MCP). Finally, it outlines a
radical, asymmetrical strategy for bootstrapped startups to achieve hyperscale
macOS computing with zero capital.
KEY METRICS:
  Core Breakthrough: KosmicKrisp + Arancini eliminate the Emulation Tax — up
    to 5x faster, 81% fewer memory ops vs QEMU TCG
  AI Interface: MCP + mcp-server-macos-use gives AI agents deterministic,
    hallucination-free macOS control
  Zero-Cost Path: GitHub Actions (free M1) + Oracle Always Free (24GB ARM) +
    Tailscale = $0.00 infrastructure

================================================================================
SECTION 01: THE ARGUMENT IN PLAIN LANGUAGE
================================================================================

The core claim of this conversation is that it is theoretically and
practically possible to run Apple's macOS on non-Apple hardware — from generic
cloud servers to free CI/CD runners — and to fully automate these environments
using AI agents. The conversation moves from traditional "Hackintosh" methods
to a highly advanced theoretical architecture called Project Glasshouse.

The iOS App Store is one of the most lucrative digital markets in the world,
but the entry ticket is a $1,000+ Apple computer. For developers and
researchers in the Global South, or those with zero capital, this represents a
structural barrier to economic participation. Project Glasshouse proposes
eliminating this barrier entirely.

With Apple planning to release macOS 27 exclusively for Apple Silicon, and
macOS 28 slated to remove Rosetta 2 entirely by 2027, the traditional
"Hackintosh" methodology has reached its terminal end of life.
Hypervisor-based virtualization and advanced hardware emulation have become
the sole viable pathways for executing Apple operating systems on non-Apple
host infrastructure.

KEY QUOTE:
"To build a 'God-Mac' cloud infrastructure with absolutely zero capital, we
have to abandon the traditional path of renting dedicated servers. When you
have no money, your primary currency is cunning, open-source leverage, and
exploiting corporate free tiers to their absolute limits."
— Google Gemini, Project Glasshouse Conversation, April 2026

================================================================================
SECTION 02: THE FOUR-PHASE MASTER ARCHITECTURE
================================================================================

Project Glasshouse is a four-phase software stack designed to detach Apple's
software from Apple's hardware.

PHASE 1 — THE ENGINE ROOM (Hypervisor & Host OS)
Technology: KVM/QEMU with HugePages memory management + IOMMU PCIe passthrough
Container: Docker-OSX (sickcodes/Docker-OSX) — packages QEMU + OpenCore +
  macOS recovery media
2026 Status: QEMU v9.2.0+ integrates KosmicKrisp alongside virtio-gpu-gl-pci
Limitation: ARM64 hosts fall back to pure software TCG (severe performance
  penalty)

Host Provisioning Script:

  apt-get update -y && apt-get install -y qemu-kvm libvirt-daemon-system \
      libvirt-clients bridge-utils ovmf
  systemctl enable --now libvirtd
  sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm.ignore_msrs=1 /g' /etc/default/grub
  update-grub && update-initramfs -u
  echo "vm.nr_hugepages = 1024" >> /etc/sysctl.conf && sysctl -p

PHASE 2 — THE GHOST (Cryptographic Spoofing)
Technology: OpenCore bootloader (acidanthera/OpenCorePkg)
Algorithm: SMBIOS spoofing — generates fake Serial Number, MLB, SystemUUID
Key Kexts:
  Lilu.kext: Master patching engine — hooks into the XNU kernel during boot
  VirtualSMC.kext: Emulates Apple's System Management Controller (SMC)
  WhateverGreen.kext: Patches macOS framebuffers to prevent a black screen on
    boot
Function: Injects ACPI tables so the macOS kernel believes it runs on a valid
  Apple logic board

PHASE 3 — THE TRANSLATOR (Binary Translation — Arancini HBT)
Technology: Arancini Hybrid Binary Translator (HBT) — ASPLOS 2026
IR: ArancinIR (LLVM-based)
Method: Static Binary Translation (SBT) ahead of time + Dynamic Binary
  Translation (DBT) fallback
Key Innovation: Formally verified mathematical memory-ordering mappings
  (TSO → weak model)
Performance vs QEMU TCG:
  Memory access instructions: 19% of baseline (81% reduction)
  Execution speed: 3.28x to 5.00x faster (geometric mean)
  Memory model safety: formally verified (vs heuristic barrier injection)
  Multi-thread correctness: provably correct (vs best effort)

PHASE 4 — THE INTERCEPTOR (Graphics — KosmicKrisp)
Technology: KosmicKrisp — Vulkan-to-Metal layered driver by LunarG,
  Vulkan SDK v1.4.335.1+
Achievement: Vulkan 1.3/1.4 conformance on Apple Silicon; near-bare-metal
  60fps acceleration
Pipeline:
  Stage 1: Guest Application — issues standard Vulkan API rendering commands
  Stage 2: Mesa 3D (Venus Driver) — intercepts Vulkan calls, packages SPIR-V
    shaders
  Stage 3: virtio-gpu-gl-pci — paravirtualized kernel device transfers data
  Stage 4: virglrenderer — receives memory pages, reconstructs Vulkan API
    calls
  Stage 5: KosmicKrisp — translates Vulkan commands natively into Apple Metal
  Stage 6: Apple Silicon GPU — executes Metal instructions natively

================================================================================
SECTION 03: TECHNICAL COMPENDIUM
================================================================================

I.
CORE VIRTUALIZATION & HYPERVISOR STACK

KVM/QEMU:
  Coordinates: github.com/qemu/qemu
  Logic: Uses KVM (Kernel-based Virtual Machine) for near-native CPU execution
  Requirement: Intel VT-x or AMD-V + AVX2 instruction sets
  Integration: config.plist must be tuned for the Q35 chipset
  Standard: virtio-gpu-pci for high-speed I/O

Docker-OSX:
  Coordinates: github.com/sickcodes/Docker-OSX
  Architecture: Wraps QEMU inside a Docker container — macOS-as-a-Service
  2026 Status: Supports macOS 26 (Tahoe) images out of the box
  Use Case: Ideal for headless CI/CD and automated Xcode builds

Apple Virtualization.framework (Native Only):
  High-level API for creating VMs on native Apple hardware only
  Requires VZVirtualMachineConfiguration + VZMacPlatformConfiguration
  Uses VZMacOSInstaller to extract and boot from .ipsw restore images
  Employs com.apple.security.virtualization entitlements — cannot be ported
    to Linux

II. HARDWARE SPOOFING & BOOTLOADERS

OpenCore (OC):
  Coordinates: github.com/acidanthera/OpenCorePkg
  Algorithm: SMBIOS spoofing — generates fake Serial Number, MLB, SystemUUID
  Design Kit: OCAuxiliaryTools (OCAT) for visual configuration of config.plist
  Key Kernel Extensions:
    Lilu.kext: Master patching engine — hooks into the XNU kernel during
      boot, injects arbitrary code into protected memory
    VirtualSMC.kext: Emulates Apple's System Management Controller (SMC);
      intercepts OS calls and returns mathematically valid spoofed responses
    WhateverGreen.kext: Patches macOS framebuffers to force standard
      VESA/DisplayPort outputs

III.
API TRANSLATION & GRAPHICS

KosmicKrisp (New in 2025/2026):
  Source: LunarG / Vulkan SDK v1.4.335.1+
  Technology: Layered driver intercepting Vulkan API commands and translating
    them directly into Metal API commands with minimal overhead
  Achievement: Vulkan 1.3/1.4 conformance on Apple Silicon
  Pipeline: Sits between virglrenderer (deserialization) and the Apple
    Silicon GPU
  Reference: https://www.lunarg.com/the-state-of-vulkan-on-apple-jan-2026/

MoltenVK (Complementary):
  Coordinates: github.com/KhronosGroup/MoltenVK
  Direction: Runs Vulkan on Metal (forward direction)
  Provides: The mathematical logic for SPIR-V to AIR (Apple Intermediate
    Representation) conversion

IV. BINARY TRANSLATION — ARANCINI FRAMEWORK (ASPLOS 2026)

The x86_64 architecture operates on Total Store Ordering (TSO), a strong
memory model. ARM and RISC-V use weak memory models. Traditional DBTs inject
heavy memory barriers between every translated operation, crippling
performance.

Arancini is a Hybrid Binary Translator (HBT) built on LLVM and ArancinIR. It
performs Static Binary Translation (SBT) ahead of time, then seamlessly falls
back to Dynamic Binary Translation (DBT) for dynamic control flows. Formally
verified mathematical mapping schemes guarantee strong memory semantics on
ARM and RISC-V without brute-force barriers.

Benchmark Results (Phoenix + EEMBC suites):
  Memory Access Instructions: 81% reduction vs QEMU TCG
  Execution Speed: 3.28x to 5.00x faster
  Memory Model Safety: Formally verified
  Multi-thread Correctness: Provably correct

V. LOCAL AI INFERENCE — vLLM-METAL ARCHITECTURE

Docker Model Runner (DMR) integrates the vllm-metal backend, routing
inference calls from the container directly to the macOS host's Apple Silicon
Metal GPU. It fuses Apple's native MLX machine learning framework with
PyTorch, performing zero-copy tensor operations on Apple's unified memory.
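vLLM's signature technique, Paged Attention, manages the KV cache in
fixed-size blocks rather than one contiguous worst-case allocation per
sequence. The toy allocator below illustrates only the bookkeeping idea; the
class and its names are invented for this sketch and are not vLLM's API:

```python
class PagedKVCache:
    """Toy paged KV-cache: each sequence receives fixed-size blocks on
    demand, so memory grows in pages instead of one large contiguous slab."""

    def __init__(self, block_size=16):
        self.block_size = block_size   # tokens stored per block
        self.blocks = {}               # seq_id -> list of allocated block ids
        self.lengths = {}              # seq_id -> tokens written so far
        self.next_block = 0            # next free physical block id

    def append_token(self, seq_id):
        """Reserve space for one more token; allocate a new block only when
        the sequence's current block is full."""
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:   # current block full (or none yet)
            self.blocks.setdefault(seq_id, []).append(self.next_block)
            self.next_block += 1
        self.lengths[seq_id] = n + 1

cache = PagedKVCache(block_size=16)
for _ in range(40):                    # a 40-token sequence
    cache.append_token("seq-0")
# 40 tokens need ceil(40/16) = 3 blocks, not a pre-reserved maximum-length slab
assert len(cache.blocks["seq-0"]) == 3
```

The same block table is what lets vLLM pack many concurrent agent sessions
into fixed GPU (or unified) memory — the property the document credits for
"massive context windows, agentic workflows."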
Engine Comparison:
  llama.cpp: CPU/GPU hybrid (GGUF) — ~333-345 tokens/sec — short queries,
    cross-platform
  vllm-metal: Apple Silicon MLX/Safetensors — Paged Attention + GQA —
    massive context windows, agentic workflows

================================================================================
SECTION 04: PHASE-BY-PHASE CODE ARTIFACTS
================================================================================

PHASE 1 HOST PROVISIONING SCRIPT (bash):

  #!/bin/bash
  apt-get update -y && apt-get install -y qemu-kvm libvirt-daemon-system \
      libvirt-clients bridge-utils ovmf
  systemctl enable --now libvirtd
  sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm.ignore_msrs=1 /g' /etc/default/grub
  update-grub && update-initramfs -u
  echo "vm.nr_hugepages = 1024" >> /etc/sysctl.conf && sysctl -p

PHASE 2 OPENCORE CONFIG.PLIST (xml):

  <key>PlatformInfo</key>
  <dict>
      <key>Generic</key>
      <dict>
          <key>MLB</key>
          <string>C02XXXXXXXXXX</string>
          <key>SystemSerialNumber</key>
          <string>C02XXXXXXXXXX</string>
          <key>SystemUUID</key>
          <string>XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX</string>
          <key>SystemProductName</key>
          <string>MacPro7,1</string>
          <key>ROM</key>
          <data>AAAAAAAA</data>
      </dict>
      <key>Automatic</key>
      <true/>
  </dict>

PHASE 3 ARANCINI EXECUTION ENGINE (rust):

  fn execute_macos_instruction(aarch64_block: &Block) -> Result<(), ExecutionError> {
      // Fast path: reuse a previously translated block from the cache
      if let Some(native_x86) = ai_cache.get(&aarch64_block.hash) {
          execute_on_bare_metal(native_x86);
          return Ok(());
      }
      // Slow path: lift to IR, apply formally verified memory-ordering
      // fences, optimize, and compile down to native x86_64
      let ir = lift_to_llvm_ir(aarch64_block);
      let ir_with_fences = apply_formal_memory_mappings(ir);
      let optimized = llvm_optimize_for_avx512(ir_with_fences);
      let x86_block = compile_to_x86_64(optimized);
      ai_cache.insert(aarch64_block.hash, x86_block.clone());
      execute_on_bare_metal(&x86_block);
      Ok(())
  }

PHASE 4 VIRTUAL GPU KERNEL EXTENSION (cpp):

  IOReturn GlasshouseGPU::SubmitCommandBuffer(IOUserClient* client,
                                              MetalCommandBuffer* cmd) {
      // Serialize the Metal command buffer and hand it to the host via virtio
      SerializedMetal payload = SerializeForHost(cmd);
      virtio_ring_write(this->virtio_queue, &payload, sizeof(payload));
      virtio_notify_host(this->virtio_queue);
      return kIOReturnSuccess;
  }

================================================================================
SECTION 05: AI
GOD-COMPUTER INTERFACE — MCP
================================================================================

Modern AI models (Claude 3.5/4.5/4.6, GPT-4, Gemini, Manus) do not just chat —
they autonomously operate operating systems. The release of Claude 3.5 Sonnet
marked the definitive transition from conversational LLMs to proactive Large
Action Models (LAMs) via native "Computer Use" capabilities.

THE LIMITATIONS OF VISION-ONLY AUTOMATION:
Initial iterations relied strictly on a "vision-only" methodology — the AI
captured screenshots, analyzed the visual layout, and generated X/Y pixel
coordinates. This proved inherently brittle: minor UI updates, resolution
scaling changes, notification popups, or dynamic layouts could derail the
agent entirely.

THE MODEL CONTEXT PROTOCOL (MCP) SOLUTION:
MCP is an open-source, standardized transport layer utilizing JSON-RPC 2.0
messages, developed by Anthropic and standardized by Google and Anthropic in
2025. It has been supported by Docker Desktop since March 2026.
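Because MCP frames every exchange as JSON-RPC 2.0, a tool invocation is just
a structured JSON envelope. A minimal sketch of building one (the
`tools/call` method name follows the MCP specification; `click_ui_element` is
the tool name used by the server implementation later in this section):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request envelope for an MCP tools/call."""
    return json.dumps({
        "jsonrpc": "2.0",        # fixed protocol version string
        "id": request_id,        # lets the client match the response
        "method": "tools/call",  # MCP method for invoking a server-side tool
        "params": {"name": tool, "arguments": arguments},
    })

# An agent asking the macOS server to left-click at (1200, 850):
msg = make_tool_call(1, "click_ui_element",
                     {"x": 1200, "y": 850, "click_type": "left"})
parsed = json.loads(msg)
assert parsed["jsonrpc"] == "2.0"
assert parsed["params"]["name"] == "click_ui_element"
```

The determinism claimed for MCP comes from this shape: the model emits a
validated, typed request instead of free-form text, so the server either
executes it exactly or rejects it.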
TWO PRIMARY MCP ARCHITECTURES:

mcp-server-macos-use (mediar-ai, Swift):
  Approach: Semantic Traversal — traverses the AXUIElement accessibility tree
  Capability: Reads exact UI roles, labels, and states — zero hallucination
  Key tool: macos-use_click_and_traverse with PID targeting
  Coordinates: github.com/mediar-ai/mcp-server-macos-use

automation-mcp (ashwwwin, TypeScript):
  Approach: Peripheral Control — raw mouse paths, keyboard chords
  Capability: Pixel color sampling, window management
  Coordinates: github.com/ashwwwin/automation-mcp

COMPLETE MCP SERVER IMPLEMENTATION (python):

  import asyncio
  import subprocess

  import pyautogui
  from mcp.server import Server
  from mcp.types import Tool, TextContent

  app = Server("macos-glasshouse-agent")

  @app.list_tools()
  async def list_tools() -> list[Tool]:
      return [
          Tool(
              name="click_ui_element",
              description="Moves mouse to X,Y coordinates and clicks.",
              inputSchema={
                  "type": "object",
                  "properties": {
                      "x": {"type": "integer"},
                      "y": {"type": "integer"},
                      "click_type": {"type": "string",
                                     "enum": ["left", "right", "double"]},
                  },
                  "required": ["x", "y"],
              },
          ),
          Tool(
              name="execute_applescript",
              description="Executes native AppleScript to control macOS apps.",
              inputSchema={
                  "type": "object",
                  "properties": {"script": {"type": "string"}},
                  "required": ["script"],
              },
          ),
          Tool(
              name="type_text",
              description="Simulates physical keyboard typing.",
              inputSchema={
                  "type": "object",
                  "properties": {"text": {"type": "string"},
                                 "press_enter": {"type": "boolean"}},
                  "required": ["text"],
              },
          ),
      ]

  @app.call_tool()
  async def call_tool(name: str, arguments: dict):
      if name == "click_ui_element":
          x, y = arguments["x"], arguments["y"]
          click_type = arguments.get("click_type", "left")
          pyautogui.moveTo(x, y, duration=0.2)  # Human-like movement latency
          if click_type == "left":
              pyautogui.click()
          elif click_type == "right":
              pyautogui.rightClick()
          elif click_type == "double":
              pyautogui.doubleClick()
          return [TextContent(type="text",
                              text=f"Clicked {click_type} at ({x}, {y})")]
      elif name == "execute_applescript":
          result = subprocess.run(["osascript", "-e", arguments["script"]],
                                  capture_output=True, text=True, check=True)
          return [TextContent(type="text", text=f"Output: {result.stdout}")]
      elif name == "type_text":
          pyautogui.write(arguments["text"], interval=0.01)
          if arguments.get("press_enter", False):
              pyautogui.press("enter")
          return [TextContent(type="text", text="Text typed successfully.")]

  async def main():
      from mcp.server.stdio import stdio_server
      async with stdio_server() as (read_stream, write_stream):
          await app.run(read_stream, write_stream,
                        app.create_initialization_options())

  if __name__ == "__main__":
      asyncio.run(main())

THE AI LIFECYCLE:
  1. Vision Intake: The AI receives a screenshot of the macOS desktop via
     WebRTC or VNC
  2. Reasoning: The LLM analyzes the pixels and identifies the Xcode icon at
     (1200, 850)
  3. Action Formulation: The LLM formulates a JSON request to the MCP server
     for click_ui_element
  4. Execution: The MCP server translates the JSON into pyautogui commands
  5. Feedback Loop: A new screenshot confirms Xcode opened — the loop
     continues

SECURITY — MICROVM SANDBOXING:
  AWS Firecracker MicroVMs: Ephemeral, lightweight VMs — spin up in
    milliseconds
  Kata Containers: Container-compatible microVM isolation
  Google gVisor: User-space kernel for lightweight sandboxing
  Zero-trust namespaces: Strict egress filtering — blocks LAN, whitelists
    domains
  WebAssembly (Wasm): Safely sanitizes and executes LLM-generated Python
    scripts

================================================================================
SECTION 06: DEEP RESEARCH — FACT-CHECKED ANALYSIS (APRIL 2026)
================================================================================

Source: Gemini Deep Research Canvas document — "Deep Research and
Fact-Checking" (31,277 chars) — Completed: April 5, 2026, 8:39 AM

THE POST-x86 ERA:
Apple is planning to release macOS 27 exclusively for Apple Silicon, and
macOS 28 is slated to remove Rosetta 2 entirely by 2027. The traditional
"Hackintosh" methodology has effectively reached its terminal end of life.
Hypervisor-based virtualization and advanced hardware emulation have become
the sole viable pathways. On native Apple hardware, Apple's
Virtualization.framework provides near-bare-metal execution speeds but is
structurally and legally bound to Apple hardware via specialized entitlements
(com.apple.security.virtualization). It cannot be ported to Linux or Windows
host machines.

KOSMICKRISP — VERIFIED:
KosmicKrisp is a real, production-grade component developed by LunarG and
packaged as part of the official Vulkan SDK (version 1.4.335.1 and newer). It
is a sophisticated layered driver that intercepts Vulkan API commands and
translates them directly into Metal API commands with minimal overhead.
Recent builds of QEMU (v9.2.0 and later) and user-friendly virtualization
frontends like UTM have directly integrated KosmicKrisp alongside the
virtio-gpu-gl-pci device. The introduction of the Apple CoreGL backend has
also expanded support for legacy applications, allowing OpenGL 4.1
acceleration through native translation layers.
Reference: https://www.lunarg.com/the-state-of-vulkan-on-apple-jan-2026/

ARANCINI FRAMEWORK — VERIFIED:
The Arancini framework was presented at the ASPLOS 2026 conference. It is an
advanced Hybrid Binary Translator (HBT) engineered from the ground up on the
LLVM compiler infrastructure, using a proprietary intermediate representation
called ArancinIR. Its most critical scientific contribution is a definitive
resolution to the strong-on-weak memory synchronization problem: rather than
relying on brute-force memory barriers, the framework uses formally verified
mathematical mapping schemes. Benchmark analyses using the Phoenix and EEMBC
suites show an 81% reduction in memory access instructions and up to 5x
performance improvement over QEMU TCG.
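The hybrid SBT-plus-DBT design can be illustrated with a toy translator:
blocks discovered statically are translated ahead of time, and anything
reached only through dynamic control flow falls back to on-the-fly
translation and is then cached. This is a conceptual sketch only — the class
and method names are invented for illustration and bear no relation to
Arancini's real internals:

```python
class HybridTranslator:
    """Toy model of a hybrid binary translator (HBT): statically translated
    blocks are served from a cache (SBT); unseen blocks fall back to dynamic
    translation (DBT) and are cached afterwards, like a JIT."""

    def __init__(self, static_blocks):
        # SBT phase: translate every statically discoverable block up front.
        self.cache = {b: self._translate(b) for b in static_blocks}
        self.dbt_fallbacks = 0

    def _translate(self, block):
        # Stand-in for real lifting to an IR and lowering to host code.
        return f"native({block})"

    def execute(self, block):
        if block not in self.cache:      # dynamic control flow: DBT path
            self.dbt_fallbacks += 1
            self.cache[block] = self._translate(block)
        return self.cache[block]

t = HybridTranslator(["b0", "b1"])
t.execute("b0")   # served from the static (SBT) cache
t.execute("b2")   # unseen at static time -> one DBT fallback, then cached
t.execute("b2")   # second hit is served from the cache
assert t.dbt_fallbacks == 1
```

The performance claim in the text follows this shape: the more work that
lands in the ahead-of-time path, the less runtime is spent translating, and
the verified memory-ordering mappings remove the per-operation barrier cost
that dominates pure DBT.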
MCP & LAMS — VERIFIED:
The release of Claude 3.5 Sonnet, followed rapidly by versions 4.5 and 4.6 in
2026, marked the definitive transition of AI from conversational LLMs to
proactive Large Action Models (LAMs) via native "Computer Use" capabilities.
MCP is an open-source, standardized transport layer utilizing JSON-RPC 2.0
messages. The developer community has produced highly capable open-source MCP
servers specifically for macOS automation. mcp-server-macos-use leverages the
native macOS Accessibility API (AXUIElement) — the AI reads the actual
structural UI tree rather than a flat screenshot.

CONCLUSION FROM DEEP RESEARCH:
The state of macOS virtualization, binary translation, and programmatic
system interaction in 2026 is characterized by the absolute deprecation of
legacy x86 paradigms and the rapid maturation of highly sophisticated
translation and automation layers. The integration of KosmicKrisp within the
virtio-gpu paravirtualization stack has decisively solved the long-standing
problem of 3D graphics acceleration across VM boundaries. Formally verified
hybrid translators like Arancini have drastically mitigated the severe
performance penalties of translating strong-memory x86_64 software to run on
weak-memory ARM64 and RISC-V architectures.

================================================================================
SECTION 07: THE SCAVENGER ARCHITECTURE — ZERO-COST INFRASTRUCTURE
================================================================================

COMPONENT 01 — THE COMPUTE LOOPHOLE: GitHub Actions (Free Apple Silicon)
GitHub provides free, bare-metal macOS runners (including M1 Apple Silicon)
for CI/CD on public repositories. A workflow requesting runs-on: macos-14
boots a free M1 instance. Instead of compiling code, the workflow installs
VNC and Tailscale and enters a sleep 21000 state (5.8 hours). The catch:
GitHub kills jobs after 6 hours.
The workaround: a cron job on the Oracle server commits a dummy file every
5.5 hours, triggering a fresh runner before the old one dies.

COMPONENT 02 — THE PERSISTENT BRAIN: Oracle Cloud Always Free (24GB ARM)
Oracle Cloud offers the most generous free tier in the world: an ARM-based
Ampere A1 instance with 4 CPU cores, 24GB RAM, and 200GB block storage for
exactly $0.00. This is the permanent command center. With 24GB RAM, it can
run quantized local AI models (Llama 3, Mistral) via llama.cpp entirely for
free. Execution: Deploy Ubuntu, SSH in, install Docker and Tailscale. Max out
the sliders to 4 OCPUs and 24GB RAM.

COMPONENT 03 — THE NERVOUS SYSTEM: Tailscale Mesh VPN (Free)
Tailscale (free for personal use) links the Oracle server, the GitHub Mac,
and the user's local device into a single virtual LAN. All machines act as if
plugged into the same router, bypassing firewalls and NAT restrictions.
Install via brew install tailscale, then authenticate with an ephemeral
auth-key stored in GitHub Secrets.

COMPONENT 04 — THE AI OPERATOR: MCP + Open-Source LLMs (Free)
mcp-server-macos-use (Swift, by mediar-ai) intercepts macOS accessibility
APIs, exposing the full UI tree to an AI without requiring expensive vision
models. automation-mcp handles raw mouse/keyboard physics. Combined with a
local Llama 3 model on the Oracle server, the entire AI automation stack
costs zero dollars per inference.
Install: git clone https://github.com/ashwwwin/automation-mcp.git, then
bun install && bun run index.ts

GITHUB ACTIONS WORKFLOW (yaml):

  name: Glasshouse Mac Tunnel
  on:
    push:
      branches: [main]
    schedule:
      - cron: '0 */5 * * *'  # Restart roughly every 5 hours (cron cannot express 5.5)
  jobs:
    mac-tunnel:
      runs-on: macos-14      # Free M1 Apple Silicon runner
      timeout-minutes: 360
      steps:
        - name: Install Tailscale
          run: brew install tailscale
        - name: Connect to Tailscale network
          run: sudo tailscale up --authkey=${{ secrets.TAILSCALE_KEY }} --ephemeral
        - name: Enable VNC Screen Sharing
          run: |
            sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
              -activate -configure -access -on \
              -clientopts -setvnclegacy -vnclegacy yes \
              -clientopts -setvncpw -vncpw ${{ secrets.VNC_PASSWORD }} \
              -restart -agent -privs -all
        - name: Install AI Automation Layer
          run: |
            git clone https://github.com/ashwwwin/automation-mcp.git
            cd automation-mcp && bun install && bun run index.ts &
        - name: Hold runner alive (5.8 hours)
          run: sleep 21000

TOS WARNING: The Scavenger Architecture relies on hijacking GitHub Actions
for non-CI/CD workloads. GitHub actively scans for and bans accounts using
runners for remote desktop tunneling. Building startup infrastructure on TOS
violations risks catastrophic, unrecoverable account bans. Use for
prototyping only.

================================================================================
SECTION 08: THE BOOTSTRAPPER'S PLAYBOOK — 5 HACKS
================================================================================

Source: "The Glasshouse Directive" whitepaper — Gemini Canvas document,
April 2026

HACK 1 — THE ALWAYS FREE MOTHERSHIP: Oracle Cloud
Oracle Cloud offers the most aggressive "Always Free" tier in the world.
Permanently claim an ARM-based Ampere A1 instance with 4 CPU cores and 24GB
of RAM. Deploy Ubuntu Linux.
Use the Arancini binary translator to run x86 macOS binaries, or host local,
quantized AI models (like Llama 3) via llama.cpp to act as your free,
persistent AI orchestrator.

HACK 2 — EPHEMERAL CI/CD TUNNELING: GitHub Actions
GitHub provides free, bare-metal M1 Apple Silicon runners for public
repositories. Write a GitHub Actions script that boots a macOS runner,
installs a mesh VPN (Tailscale), and starts a VNC server. You now have a
free, hardware-accelerated Mac. Since jobs time out after 6 hours, write a
cron job on your Oracle mothership to trigger a new GitHub Action every 5.5
hours, persisting your data via cloud storage.

HACK 3 — DECOMMISSIONED ENTERPRISE HARVESTING: GSA Auctions
When tech giants and government agencies upgrade, they liquidate massive
clusters of perfectly functional hardware for pennies on the dollar. Monitor
platforms like GSA Auctions (gsaauctions.gov), GovDeals (govdeals.com), and
Municibid. You can frequently purchase pallets of decommissioned Apple Mac
Minis or enterprise servers from universities or federal departments for
under $100. Rack them in your garage, install Proxmox, and build a localized
Kubernetes cluster.

HACK 4 — SOVEREIGN AI & CLOUD GRANTS: Billions Available
Governments are pouring billions into "Sovereign AI" to break US hyperscaler
monopolies. Apply for compute grants:
  EU: GenAI4EU — massive funding for AI sovereignty
  Canada: Sovereign Compute Infrastructure Program (SCIP)
  Global South: EVAH and the AI for Good Impact Awards — up to $60M in
    compute access
  Microsoft: Microsoft for Startups — up to $150,000 in Azure credits
  AWS: AWS Activate — $100,000 to $350,000 in non-dilutive cloud credits
  EU DMA: Article 6(7) mandates free interoperability — a legal shield for EU
    startups

HACK 5 — BUG BOUNTY COMPUTE MINING
If you have cybersecurity skills, platforms like HackerOne and Immunefi pay
substantial bounties (sometimes $1M+). Many programs offer large multipliers
if you accept payouts in cloud compute credits.
Use automated, AI-driven reconnaissance to map subdomains and mine
low-hanging vulnerabilities to fund your infrastructure indefinitely.

ADDITIONAL APPROACHES:
  DePIN Networks: Render Network, Fluence, Akash — 45-60% below AWS prices
  Darling: Wine-like translation layer for macOS Mach-O binaries on Linux —
    github.com/darlinghq/darling
  vllm-metal: Docker Model Runner with the Apple Silicon MLX backend —
    zero-copy tensor operations on unified memory
  Jurisdictional Arbitrage: Structure in the EU to leverage DMA
    interoperability mandates

STRATEGIC QUOTE:
"By weaving these systems together — orchestrating an AI on a free Oracle
server, directing it to execute code on an ephemeral GitHub Mac, while
applying for EU compute grants — you achieve complete digital sovereignty
with zero capital."
— The Glasshouse Directive, April 2026

================================================================================
SECTION 09: LEGAL & LICENSING FRAMEWORK
================================================================================

APPLE EULA POSITION:
Apple's Software License Agreement (SLA/EULA) strictly and unambiguously
stipulates that macOS may only be installed, virtualized, and executed on
genuine Apple-branded hardware. This renders Docker-OSX or QEMU/KVM
deployments running macOS on commodity servers a direct violation of civil
contract terms.

APPLE v. CORELLIUM (2019-2023):
A critical legal distinction exists between a EULA violation and actionable
copyright infringement. Corellium successfully commercialized a platform
(CORSEC) that replicated the iOS and macOS kernels for cybersecurity
research. Apple sued for direct copyright infringement. In a landmark ruling,
the 11th Circuit Court of Appeals ruled in favor of Corellium, establishing
that virtualizing an Apple operating system for security research falls under
the "fair use" doctrine. The court found the virtualization software highly
"transformative" and found that it did not substantially harm Apple's
hardware market.
The case culminated in a confidential settlement in late 2023.

LEGAL CATEGORIES:
  Protected (Fair Use): Cybersecurity research, vulnerability discovery,
    academic research, security testing in controlled environments
  Prohibited (EULA): Mass distribution of virtualized macOS for general
    consumer use, standard software development on commodity hardware,
    commercial deployment
  EU DMA Shield: Article 6(7) mandates free and effective interoperability
    for third parties with iOS/iPadOS hardware and software features
  DMCA 1201(f): Research exemptions for reverse engineering for
    interoperability

COMPLIANT ALTERNATIVES:
Official cloud providers offering macOS hosting (AWS EC2 Mac instances,
MacStadium, Liquid Web) continue to use physical, bare-metal Mac Mini
hardware mounted in specialized server racks — ensuring strict EULA
compliance.

================================================================================
SECTION 10: ANALYSIS & IMPLICATIONS
================================================================================

THE EMULATION TAX vs. API TRANSLATION:
The single most important conceptual contribution in this dialogue is the
"Emulation Tax" vs. "API Translation" paradigm. Traditionally, running macOS
on generic hardware requires emulating a physical GPU — computationally
crushing, resulting in an unusable interface. Project Glasshouse's insight:
stop emulating hardware, start translating software APIs. This makes visible
that hardware lock-in is increasingly a software enforcement problem, not a
physics problem. If a translation layer is fast enough, the underlying
silicon becomes irrelevant. This is the same insight that powered Valve's
Proton (Windows games on Linux) and Apple's own Rosetta 2 (x86 apps on Apple
Silicon).

CRITICAL PERSPECTIVES — OPERATIONAL FRAGILITY:
  Legal Risk: Apple's EULA forbids macOS on non-Apple hardware. Any
    commercial deployment invites immediate cease-and-desist orders and
    lawsuits.
  Security: Bypassing the Secure Enclave and T2 chip compromises
    cryptographic trust. Storing sensitive data on a "Ghost Mac" is highly
    insecure.
  Maintenance: Apple frequently updates macOS, breaking Hackintosh patches.
    Maintaining a custom Metal-to-Vulkan bridge requires constant vigilance.
  TOS Violations: The GitHub Actions tunneling hack violates GitHub's Terms
    of Service. Building startup infrastructure on TOS violations risks
    catastrophic account bans.

IMPLICATIONS FOR THE LAUNCHPAD TLP:
For Piqui's domains — education management, entrepreneurship, and social
leadership in Latin America — the implications are profound. The iOS App
Store is one of the most lucrative digital markets in the world, but the
entry ticket is a $1,000+ Apple computer. The Scavenger Architecture proves
that this technical barrier to entry can be bypassed with pure
resourcefulness.

The integration of MCP means that AI is no longer just generating text; it is
operating the machine. For a solo founder, an AI agent running on a free
cloud server can act as a tireless employee — scraping competitor data,
running tests, managing social media — all orchestrated through a remote,
automated macOS instance.

STRATEGIC TAKEAWAY:
These "hacks" are not permanent enterprise infrastructure. They are
PROTOTYPING SUPERPOWERS. Use the zero-cost GitHub/Oracle loophole to build
the MVP, compile the first iOS app, and secure the first round of funding or
revenue. Once capital is acquired, migrate to legitimate, stable
infrastructure. The hack is the ladder, not the destination.
================================================================================
SECTION 11: COMPLETE RESEARCH COORDINATES & SOURCES
================================================================================

CORE VIRTUALIZATION:
  QEMU/KVM: https://github.com/qemu/qemu
  Docker-OSX: https://github.com/sickcodes/Docker-OSX
  OpenCore: https://github.com/acidanthera/OpenCorePkg
  Proxmox VE: https://proxmox.com
  gibMacOS: https://github.com/corpnewt/gibMacOS
  UTM: https://mac.getutm.app

GRAPHICS & TRANSLATION:
  MoltenVK: https://github.com/KhronosGroup/MoltenVK
  KosmicKrisp: https://www.lunarg.com/the-state-of-vulkan-on-apple-jan-2026/
  Darling: https://github.com/darlinghq/darling
  Arancini HBT: ASPLOS 2026 conference paper (LLVM-based)

AI & MCP:
  MCP Protocol: https://modelcontextprotocol.io
  mcp-server-macos-use: https://github.com/mediar-ai/mcp-server-macos-use
  automation-mcp: https://github.com/ashwwwin/automation-mcp
  llama.cpp: https://github.com/ggerganov/llama.cpp
  LangChain: https://github.com/langchain-ai/langchain
  HyperChat: Open-source MCP client
  Cherry Studio: Open-source MCP client

SECURITY / MICROVM:
  Firecracker: https://github.com/firecracker-microvm/firecracker
  Kata Containers: https://github.com/kata-containers/kata-containers
  gVisor: https://github.com/google/gvisor

FREE CLOUD & INFRASTRUCTURE:
  Oracle Free: https://cloud.oracle.com/free
  Tailscale: https://tailscale.com
  GitHub Actions: https://docs.github.com/en/actions

DECENTRALIZED COMPUTE:
  Akash Network: https://akash.network
  Render Network: https://rendernetwork.com
  Fluence: https://fluence.network

HARDWARE SOURCING:
  GSA Auctions: https://gsaauctions.gov
  GovDeals: https://govdeals.com
  Municibid: https://municibid.com

GRANTS & CREDITS:
  AWS Activate: https://aws.amazon.com/activate
  MS for Startups: https://startups.microsoft.com
  EU GenAI4EU: https://digital-strategy.ec.europa.eu
  AI for Good: https://aiforgood.itu.int
  EVAH: https://evah.org

BUG BOUNTIES:
  HackerOne: https://hackerone.com
  Immunefi:
https://immunefi.com

LEGAL REFERENCES:
  Apple v. Corellium (11th Circuit, 2023)
  EU Digital Markets Act, Article 6(7)
  DMCA, 17 U.S.C. § 1201(f)

================================================================================
END OF DOCUMENT
================================================================================

Project Glasshouse Briefing — The Launchpad TLP · thelaunchpadtlp.education
Prepared by Manus AI · April 2026
For research and educational purposes only. Apple's EULA prohibits running
macOS on non-Apple hardware. All methods described are theoretical and
experimental. The Corellium precedent (11th Circuit, 2023) protects security
research use cases only.