Nathan Lambert’s piece on Interconnects makes a structural argument that is worth sitting with: the organisations currently releasing frontier-quality open models are doing so at a loss, and the incentive to keep doing that diminishes as training costs climb into the billions. His prediction is not that open models disappear, but that the distribution changes. Smaller, domain-specific, fine-tuneable models will proliferate. Truly frontier-capable open releases will come from fewer and fewer places.
The evidence he points to is behavioural, not speculative. Leadership departures at Qwen and AI2 reflect the tension between maintaining frontier research output and operating a profitable product. Both labs have commercial imperatives that compete directly with releasing their best work freely. Chinese startups including Moonshot AI, MiniMax, and Zhipu AI are financially fragile enough that sustained frontier open releases would consume capital they would rather deploy on revenue-generating products. Meta’s Llama programme, which defined the open frontier for two years, has become less aggressive as the compute demands of staying competitive have grown. Nvidia’s Nemotron initiative is a single company’s calculation, not a structural solution.
The consortium framing is Lambert’s answer to this problem. If no individual organisation has both the capital and the incentive to release frontier-quality open models indefinitely, then collective funding across organisations with complementary interests is the logical alternative. The precedent exists in other infrastructure sectors: standards bodies, shared research consortia, and open foundations funded by participants who benefit from a common resource they could not sustain individually.
For practitioners, the near-term implication is not panic but reorientation. Fine-tuneable models below the frontier are already competitive for the majority of production use cases, and that tier will remain well-supplied. The models worth planning around are those in the 7B–70B parameter range that can be adapted to domain-specific tasks, run efficiently at inference, and are not subject to the same economic pressures as frontier releases. The organisations with the most exposure are those whose infrastructure depends specifically on near-frontier open model availability, whether for competitive benchmarking or for capability parity with closed APIs.
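That adaptation path is concrete enough to sketch. Below is a minimal example of the workflow described above, using the Hugging Face transformers and peft libraries; the base model name is illustrative, and the configuration values are reasonable defaults rather than a recommendation:

```python
# A minimal sketch of adapting a sub-frontier open model with LoRA via peft.
# The checkpoint name is illustrative; substitute any fine-tuneable model
# in the 7B–70B range you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # illustrative choice, not an endorsement

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Low-rank adapters on the attention projections: only a small fraction of
# parameters train, which is what keeps domain adaptation cheap relative to
# pretraining.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

The point of the sketch is the economics: only the adapter weights train, so the marginal cost of domain adaptation stays small even as base models grow, which is exactly why this tier is insulated from frontier training costs.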
The assessment here is that this shift is already happening; Lambert is naming it rather than predicting it. Tracking which organisations are actually releasing frontier-capable weights, versus training frontier models for internal or commercial use, reveals the trend clearly. If you are building systems that assume continued access to near-frontier open weights on a competitive training schedule, that assumption is worth revisiting now.
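That tracking does not have to be anecdotal. Here is a minimal sketch of how you might watch release activity, assuming the huggingface_hub client; the organisation names are illustrative rather than a canonical watchlist:

```python
# A rough way to watch which organisations are still shipping open weights:
# poll the Hugging Face Hub for each org's most recent model releases.
# Org names below are illustrative, not a canonical list.
from huggingface_hub import HfApi

ORGS = ["meta-llama", "Qwen", "allenai", "mistralai", "nvidia"]

api = HfApi()
for org in ORGS:
    # Most recently updated repos first; limit keeps the poll cheap.
    models = api.list_models(author=org, sort="lastModified", direction=-1, limit=3)
    print(f"--- {org} ---")
    for m in models:
        # m.id is the repo id; downloads give a rough adoption signal.
        print(f"  {m.id}  (downloads: {m.downloads})")
```

Release cadence on the Hub is only a proxy: it does not distinguish frontier-capable weights from smaller releases, so pair it with capability benchmarks before treating any single organisation as a reliable supplier.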