Apple dropped eGPU support entirely when it switched to Apple Silicon. The Intel Mac era had AMD eGPU support over Thunderbolt, but the transition to M-series chips cut that off with no replacement. For ML practitioners on Macs, this has been a real constraint: Apple Silicon’s unified memory is excellent for running models up to a certain size, but once you need more VRAM than your chip’s integrated GPU can address, you either hit a wall or keep a separate Linux box for bigger jobs. That constraint just changed. Tiny Corp, George Hotz’s AI startup, has secured Apple’s official cryptographic signature for its TinyGPU DriverKit extension, bringing Nvidia GPUs (Ampere and later) and AMD GPUs (RDNA3 and later) to Apple Silicon Macs over a standard Thunderbolt or USB4 connection.
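The "wall" is simple arithmetic: a model's weights have to fit in memory the GPU can address. A rough sketch of the footprint calculation (the 27B figure and quantization levels here are illustrative, not from any vendor spec):

```python
# Rough VRAM estimate for transformer inference: weights dominate the
# footprint. These are approximations for illustration, ignoring KV
# cache and activation overhead.

def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the model weights, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 27B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"27B @ {bits}-bit ≈ {weight_footprint_gb(27, bits):.1f} GB")

# 27B @ 16-bit ≈ 54.0 GB
# 27B @ 8-bit ≈ 27.0 GB
# 27B @ 4-bit ≈ 13.5 GB
```

Even at 4-bit, that is a tight fit on a base-configuration Mac mini's unified memory once the OS and KV cache take their share, which is exactly where a 24 GB discrete card starts to pay off.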
The implementation matters for adoption. Tiny Corp built this as a macOS DriverKit extension, which runs in userspace rather than as a kernel extension. That means no SIP bypass required — you enable it with a toggle in System Settings, not by disabling a security feature that your employer’s MDM profile would flag. Apple signing the extension is the other half of that story: it means this is an officially supported path, not a community workaround waiting to break with the next macOS update. The timeline from first proof-of-concept (AMD GPU over USB3 in May 2025) to official Apple approval (March 31, 2026) is about ten months.
The initial performance numbers are credible. A Mac mini M4 connected to a Radeon RX 7900 XTX over Thunderbolt achieved 18.5 tokens per second running Qwen 3.5 27B. That exceeds what the M4’s integrated GPU can do at that model size, which is the key threshold: the eGPU is actually useful for ML inference, not just for edge cases. The current driver requires Docker and is explicitly aimed at ML workloads rather than gaming or general graphics. Nvidia gaming support would require further driver work, and there is no indication Apple wants to go there. But for inference and fine-tuning workloads, the use case is clear.
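One way to see why 18.5 tok/s is plausible rather than suspicious: single-stream decode is typically memory-bound, so throughput is roughly bounded by memory bandwidth divided by the bytes read per token. The inputs below are assumptions, not reported figures: the benchmark's quantization is not stated, and 960 GB/s is the 7900 XTX's published peak bandwidth, not a sustained number.

```python
# Sanity check on the reported 18.5 tok/s. For memory-bound decode,
# each generated token streams (roughly) the full weight set once, so
# tokens/sec <= bandwidth / weight footprint. All inputs are assumed.

def decode_ceiling_tok_s(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper bound on tokens/sec if every token reads all weights once."""
    return bandwidth_gb_s / weights_gb

weights_gb = 27 * 4 / 8                      # 27B params at an assumed 4-bit ≈ 13.5 GB
ceiling = decode_ceiling_tok_s(960, weights_gb)
print(f"theoretical ceiling ≈ {ceiling:.0f} tok/s")            # ≈ 71 tok/s
print(f"reported 18.5 tok/s is {18.5 / ceiling:.0%} of ceiling")  # ≈ 26%
```

Landing at roughly a quarter of the naive ceiling is consistent with a young driver plus Thunderbolt transfer overhead: well below what the card could theoretically do, but clearly above what the integrated GPU manages, which is the comparison that matters.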
The broader significance is that this reopens a hardware choice that many ML practitioners had written off. A Mac as your primary development machine with an eGPU for heavier inference is now a viable setup without requiring a separate Linux box or cloud instances for every job over a certain size. Whether the community actually adopts this depends on Tiny Corp’s ongoing support and how Apple treats the extension in future macOS versions. Apple has historically been willing to quietly break community GPU workarounds, and the distinction between “we signed this once” and “we are committed to this as a supported capability” is not yet clear.
The most interesting part of this story is what it suggests about Tiny Corp’s direction. Building GPU drivers for general-purpose use across hardware and operating systems, alongside tinygrad, looks like a play to make GPU compute less dependent on the Nvidia-CUDA-Linux stack that currently dominates ML infrastructure. Whether that goes anywhere is a separate question, but this is a concrete step in that direction that is useful today regardless of where the longer-term project ends up.