The core of the dilemma lies not in the technology, but in the architecture of power. It is not simply that a few companies build large AI models; it is that they increasingly control the underlying terrain: compute capacity, proprietary datasets, foundational architectures. The field of AI is no longer defined by invention alone but by who controls the preconditions for scientific advancement. And those preconditions are consolidating into the hands of a small number of companies, with the consequence that the AI systems we rely on begin to reflect a narrowing of values, incentives, and visibility.
The only meaningful antidote to this creeping enclosure is what we might call “Civic AI”: not merely AI developed by or for the state, but systems engineered with the public in mind at every layer. Compute must not be a scarce resource auctioned off to the highest bidder. Datasets must not be the secretive spoils of web crawlers, harvested from our online footprints and low-paid labor. Models must not be sealed off from audit and adaptation by legal obfuscation. What is required is not just access but infrastructure: publicly provisioned, openly governed, and constructed with permanence in mind.
Some will balk at the idea of state intervention. And yes, many of us have legitimate grievances with slow, bureaucratic, or poorly maintained public systems. But without public investment there would be no roads, no railways, no electrical grids. Why, then, are we outsourcing the infrastructure of cognition, the models that will soon be capable of reasoning, generating, and deciding, to a cluster of firms driven by market logic rather than public mandate? It is a choice masquerading as inevitability. And it is one that must be reversed.