This is a consolidated (and long) essay on the AI Polyconflict, but recast as Cognitive Sovereignty, in the same vein as the earlier consolidated essay on Metabolic Sovereignty, which was a recast version of the essays on commodity shocks. No images or hyperlinks in the main body, just like last time.
Also, I am about to take a break from writing. The next essay on Habitability will be the last one for a bit.
Thinking for Ourselves
Before a society can decide anything - who governs, what to build, how to distribute the risks of collective life - it must be able to think for itself. Not merely to have thoughts, but to think in its own language, at its own pace, and to enact those thoughts in institutions and tools it controls, according to frameworks of understanding that reflect its own situation rather than someone else’s.
This capacity is what I mean by cognitive sovereignty.
Cognitive sovereignty is not isolationism. A society that refuses all external ideas, all foreign tools, all imported knowledge is merely isolated. Cognitive sovereignty is the capacity to refuse a cognitive dependency when that dependency becomes coercive. A society that borrows a tool and can put it down retains its cognitive sovereignty. A society that has organized its hospitals, its courts, and its civic administration around tools it does not control, and cannot put down without catastrophic cost, has lost cognitive sovereignty. Obligatory Rabindranath Tagore poem from Gitanjali:
Gitanjali 35
Where the mind is without fear and the head is held high;
Where knowledge is free;
Where the world has not been broken up into fragments by narrow domestic walls;
Where words come out from the depth of truth;
Where tireless striving stretches its arms towards perfection;
Where the clear stream of reason has not lost its way into the dreary desert sand of dead habit;
Where the mind is led forward by thee into ever-widening thought and action
Into that heaven of freedom, my Father, let my country awake.
There’s also Gandhi’s retort to Tagore:
I do not want my house to be walled in on all sides and my windows to be stuffed. I want the culture of all lands to be blown about my house as freely as possible. But I refuse to be blown off my feet by any.
Cognitive sovereignty is the precondition for every other kind of sovereignty, and I don’t mean this in an idealist sense, but in a thoroughly materialist sense. A state that cannot process its own intelligence cannot make independent foreign policy. A society whose epistemic frameworks were designed elsewhere will consistently misread its own situation. A democracy whose deliberative infrastructure has been colonized will make decisions calibrated to someone else’s values and call them its own.
Over the past year, I have been tracking polyconflicts, nexuses where material reality (the extraction of finite minerals, the generation of megawatts, the synthesis of chemicals, the flow of global capital) tangles with human systems, military strategy, and the deep structures of biology and geology. These concurrent, mutually reinforcing challenges overwhelm traditional modes of governance and strategic foresight. The ongoing Gulf War has pushed several into the open: helium, metformin, firewood. Each is its own story. But there is a polyconflict that underlies them all, one that targets cognitive sovereignty across every society outside the two empires building the system.
AI is the mother of all polyconflicts.
By AI I do not mean chatbots or image generators. I mean Augmented Intelligence - the way human institutions use computational systems to extend their cognitive capacities, sometimes with extraordinary results and sometimes with catastrophic stupidity. What AI is doing to cognitive sovereignty is not accidental. It is a form of extraction that’s best seen as cognitive imperialism: the enclosure of humanity’s collective intelligence into a rent-seeking asset controlled by a handful of corporations, the best known of which are in California and Shenzhen.
Cognitive imperialism is not a metaphor. It is a description of an economic arrangement already in operation: the conversion of humanity’s accumulated intelligence - its languages, legal traditions, cultural output, agricultural knowledge, scientific record, and institutional memory - into training sets and foundation models owned by a small number of corporations, which then charge the very societies whose collective output built the asset for the right to access it. This arrangement is underwritten by military force, by financial entanglement in the form of sovereign wealth fund investments and snapback clauses, by the material dependencies of the water-energy-compute loop, and by the comprador dynamics of local elites.
A lot like imperialism of the 19th and early 20th centuries. As with the essay on metabolic sovereignty, this essay is an attempt to weave together:
Political concepts: sovereignty and political technology
Computational concepts: the idea of a stack in particular
Cognitive concepts: distributed and 4E cognition
Metabolic concepts: energy-compute arbitrage
Metaphysical concepts: time in particular
Plus contemporary events:
The Gulf War, with its industrialization of lethality
The evolution of the Indian software industry, and its comprador nature
All in order to grasp aspects of our age in thought. What is at stake is the cognitive architecture through which humanity will govern itself (and perhaps the rest of the planet too) in the twenty-first century, and whether that architecture will be a commons or an enclosure.
Augmented Intelligence: The Framework
To understand what AI is doing in the world right now, you need two ideas that most people working on AI do not care about. The first comes from cognitive science: distributed cognition. The second comes from philosophy: political technology. When you put them together, you get a framework for understanding Augmented Intelligence - the actual thing we should be concerned about, as opposed to the science-fiction version of AI that dominates public debate.
In the early 1990s, the cognitive scientist Edwin Hutchins published Cognition in the Wild. Hutchins had spent years studying how the crew of a U.S. Navy ship navigates into port, and what he found upended a basic assumption of cognitive science: that thinking happens inside individual heads. It doesn’t, or at least, not only there. When a Navy crew brings a warship into San Diego harbor, the thinking is spread across the entire bridge team and their instruments. One sailor takes a bearing through an alidade, another plots it on a chart, a third relays headings to the helm. The navigational tools themselves - the charts, the compasses, the plotting instruments - carry centuries of accumulated mathematical knowledge built into their physical design. A sailor using a scale doesn’t need to know trigonometry; the tool does the trigonometry for him. The intelligence of the operation lives in the system: the people, their training, their tools, and the social organization that connects them.
Hutchins’ work was part of a broader movement now called 4E Cognition - the idea that human thinking is Embodied (grounded in the body), Embedded (situated in an environment), Enacted (emerging through action), and Extended (relying on tools and artifacts outside the skull). For 4E Cognition, the right unit of analysis for understanding cognition isn’t the individual mind. It’s the whole sociotechnical system.
Now the second idea. In Discipline and Punish, the philosopher Michel Foucault traced how modern institutions such as prisons, hospitals, schools, and above all, militaries, developed techniques for reshaping human bodies and minds into standardized, controllable components. He called this a political technology of the body. The key innovation of modern military discipline, starting in the seventeenth century, was to take the chaotic mass of infantry and reorganize it through relentless drill, precise timekeeping, and rigid spatial ordering. Every movement was standardized. Every soldier became interchangeable. The military camp became a machine for producing what Foucault called docile bodies - people whose physical capacities were simultaneously maximized and whose political autonomy was minimized.
Foucault’s point was that the military didn’t just use technology. The military was a technology - an artificial apparatus for organizing human beings into a productive, lethal machine. Long before anyone built a computer, the army was already a kind of computational system, processing information about terrain, logistics, and enemy positions through hierarchies of disciplined human processors.
We have always been artificial.
If Foucault argues that militaries are artificial systems - political technologies that format human beings into standardized components - and Hutchins shows that cognition in complex institutions is distributed across people, tools, and organizational structures, then combining their insights yields an unsettling conclusion: militaries have always been artificial intelligences. They are vast, distributed cognitive systems composed of disciplined human elements, material artifacts, and institutional procedures, all organized to process information and produce lethal outputs. The Roman legion was an artificial intelligence. The Napoleonic staff system was an artificial intelligence. What we now call AI - machine learning using neural networks - is only the latest cognitive artifact to be plugged into this historical apparatus.
These systems are simultaneously fearsome and stupid. Hutchins saw this on the Navy bridge. The distributed system could perform calculations beyond any individual crew member - but a misheard bearing, a misplotted line, and the whole system could produce catastrophically wrong outputs. Foucault saw the same thing from the other direction: discipline could produce armies of extraordinary effectiveness, but docile bodies stripped of individual judgment created institutions capable of grinding forward into disaster with mechanical persistence. Vietnam was a textbook case: Pentagon planners mistook body counts for strategic progress because their cognitive system could process what was quantifiable and was blind to everything else. Even the phrase ‘the best and the brightest’ carries a whiff of AI, doesn’t it?
This dual character of capability married to stupidity is the permanent condition of complex institutional cognition. Contemporary AI inherits it fully. When today’s large language models are trained, they consume the digital output of the entire internet, much of it shaped by the institutional logics, biases, and priorities of the military-industrial complex and its commercial descendants. Ted Chiang described large language models as blurry JPEGs of the web. Just as a compressed image preserves dominant patterns while losing detail, AI models capture the dominant patterns of their training data while discarding situated context and ethical nuance.
Note that I say ‘stupidity’ instead of ‘evil’ - I am assuming that most catastrophes in these polyconflicts are caused by error or apathy, not malice. That’s a flawed assumption, especially when you look at the statements of Israeli or American officials, so we might be transitioning into a state of exception, where the sovereign has suspended the rule of law, as opposed to executing it improperly.
These two frameworks give us the analytical lens. To apply it diagnostically in the real world, we need a third concept: cognitive metabolism. As with the physiological metabolism in the essay on metabolic sovereignty, every society running on augmented intelligence runs on three cognitive metabolisms.
The external cognitive metabolism: the flows of data, compute, models, and inference, and the semiconductor fabs, undersea cables, cloud platforms, training pipelines, and APIs through which inference is rented back to the world. In 4E terms, this is the Extended and Embedded layer, the tools and environments through which institutional cognition is distributed and sustained.
The bodily cognitive metabolism: the deliberative capacity of individuals and communities - the ability to reason from cultural context, transmit local knowledge, and think at one’s own pace. This is the Embodied and Enacted layer, the situated, practiced intelligence that gives distributed cognition its human texture.
The political cognitive metabolism: the institutional capacity that governs the relationship between the first two - the sovereignty to build public cognitive infrastructure and set the terms under which foreign cognitive artifacts enter domestic life. Foucault’s political technology is precisely this metabolism weaponized: the disciplinary apparatus that formats cognitive systems and entire polities into docile components of someone else’s design.
The diagnostic question for any society in the age of augmented intelligence is always: who controls these flows, and who is left at the mercy of those who do? Cognitive sovereignty, in metabolic terms, is the capacity to maintain all three of these flows under conditions a society controls. A society whose external cognitive metabolism runs through foreign stacks, whose bodily cognitive metabolism is eroded by dependency, and whose political cognitive metabolism has been captured by the logic of short-term efficiency has lost the preconditions for thinking for itself - regardless of what constitutions and policies say.
The Industrialization of Lethality
Now that we have a hammer, let’s look for some nails in the current Gulf War and its spillovers.
Throughout the history of conflict, technology has provided a tactical and strategic edge to military decision makers - from the horse to the longbow to precision-guided munitions. Today, technology is replacing the human durée - the lived time of ethical deliberation - with algorithmically compressed decision-making. The speeding up doesn’t have to be fully automated; a military target call center that must fulfill a daily quota of a thousand targets is almost as problematic as letting a fully autonomous system control lethality.
The Israel Defense Forces have effectively industrialized the kill chain. A system known as Lavender analyzes massive surveillance datasets to flag suspected militants based on behavioral patterns, identifying tens of thousands of potential targets with an estimated 10% error rate. Another system, Habsora, generates up to a hundred structural targets a day, compared to the roughly fifty per year that human analysts had previously produced. Where’s Daddy tracks the mobile phones of flagged individuals, alerting operators when targets enter private family residences. The human role in this chain has been compressed to roughly twenty seconds of confirmation per kill order. As one IDF source told the journalist Yuval Abraham: the logic was “quantity over quality.” With the emphasis on quantity comes the targeting of civilian homes and other places, for if there are 40,000 suspected militants, there are 40,000 family homes. International Humanitarian Law requires human judgment to weigh proportionality, distinction, and military necessity. What do you think are the chances that those three values get their due when a human operator spends twenty seconds assessing an AI-generated kill order?
The pattern is not confined to Gaza. The United States military’s Project Maven, built with private partners including Palantir, sifts through drone and satellite footage to classify targets, recommend weapons systems, and draft automated legal justifications for strikes. During Operation Epic Fury, Maven nominated over three thousand targets in a single week. One of those targets was the Shajareh Tayyebeh primary school in Minab, Iran; the resulting strike killed over 170 civilians, mostly children. The AI system relied on a Defense Intelligence Agency database that had not been updated to reflect the building’s conversion from an IRGC compound into a school.
Simultaneously, the physical dimension of warfare has shifted toward what military theorists call the Cheap Drone Paradox. Inexpensive, sub-$500 autonomous drones equipped with edge AI can now navigate, identify targets, and swarm without a continuous cloud connection, credibly threatening multi-billion-dollar naval assets. The Houthis demonstrated this when a drone navigated over 2,600 kilometers to strike Tel Aviv. In Ukraine, FPV drones and Russian Lancet munitions increasingly operate with autonomous terminal homing - when electronic warfare jams their radio links, they switch to onboard AI, recognizing targets by thermal signature.
The anthropologist Lucy Suchman drew a distinction between the European Navigator, who creates a rigid, detailed plan in advance and tries to force reality to conform to it, and the Trukese Navigator of Micronesia, who continuously adjusts to the waves, the stars, and the wind. The aircraft carrier is the European Navigator: centralized, top-down, enormously expensive. The drone swarm is the Trukese Navigator: situated, adaptive, distributed. Compute is pushed directly onto the device - the drone senses terrain, spots a thermal signature, coordinates with its peers, and enacts a strike.
The cheap drone paradox had its day during Operation Sindoor, when India and Pakistan came closest to a full-fledged war since 1971. The Line of Control has been completely transformed. India has deployed AI-based surveillance networks and integrated swarm drones into its command systems. Pakistan has accelerated its own drone ecosystem with Chinese assistance. Drones offer cheap precision without the political risks of a manned incursion, but they compress decision-making timelines too.
The compression of decision time is seen as a military advantage. When one side takes thirty minutes to deliberate over the legal and ethical implications of a strike and the other takes thirty seconds to execute an algorithmic recommendation, the deliberative side loses any kinetic advantage it might possess. AI guarantees a race to the bottom. The political technologies that constrained military leaders - chains of command, rules of engagement, international humanitarian law - were all designed for a world where human beings had time to think.
The Stack War
The kinetic conflicts in the Gulf, Ukraine, and along the India-Pakistan border are also Stack Wars, with the contest between the United States and China being the primary one. I don’t think the other rivalries are surrogates for this big one, but the superpower rivalry infiltrates these lesser conflicts for sure.
The United States projects power through what I call the Prediction Stack: a technological ecosystem built on cutting-edge semiconductor fabrication (Nvidia GPUs above all), massive centralized cloud computing platforms (AWS, Azure, Google Cloud), and proprietary foundation models (OpenAI, Anthropic, Google). This is Intelligence as a Service - a stack designed to control the cognitive heights and rent them to the world. Washington treats data centers, cloud access, and compute licensing as instruments of power projection in the same way it once treated military bases and aircraft carrier groups. The goal is to bind allied and client states into the American technology ecosystem while excluding Chinese influence.
The Microsoft-G42 deal is the clearest example. Microsoft invested $1.5 billion in the UAE’s sovereign AI holding company, G42, as the cornerstone of a broader $15.2 billion commitment to UAE digital infrastructure. This was not an ordinary business transaction. It was orchestrated with explicit U.S. government oversight, and in exchange for access to advanced American chips and deep integration onto the Azure platform, G42 was required to divest from Chinese technology firms, sell its stake in ByteDance, and strip Huawei equipment from its data centers. The deal includes American board representation, governance safeguards, and snapback clauses - provisions that automatically revoke compute access and degrade system capabilities if restricted vendors or personnel touch the stack.
You don’t need to invade the UAE or threaten it with military force. You build dependency into the cognitive infrastructure, and then you establish automatic penalties for deviation. The idea is to turn the UAE into a docile body within the American technological order - its digital capabilities maximized, its geopolitical autonomy constrained. If the UAE violates the terms, its AI systems degrade, its military platforms lose access to frontier models, its smart city infrastructure loses cloud support. The punishment is cognitive diminishment, not boots on the ground.
China’s approach looks very different. Rather than competing for dominance at the cognitive heights, China projects power through the Production Stack - an Industrial Nervous System. Through the Digital Silk Road, the technological arm of the Belt and Road Initiative, China provides developing nations and Gulf states with affordable 5G networks, cloud computing, surveillance technology, and smart city infrastructure. In the Gulf, COSCO and Chinese AI firms jointly developed the automation systems for Khalifa Port in Abu Dhabi. Huawei built the Smart City 3.0 platform for Yanbu Industrial City in Saudi Arabia. SenseTime established a $776 million joint venture with the Saudi sovereign wealth apparatus.
Notice where the cognition sits. The American stack concentrates intelligence in the cloud - in centralized data centers and proprietary models that client states must access remotely. The Chinese stack distributes intelligence into physical infrastructure - into ports, factories, power grids, and urban management systems. In Suchman’s terms, the American model is the European Navigator’s approach to geopolitics: a centralized plan projected onto the world. The Chinese model is closer to the Trukese Navigator: embedded, situated, responsive to local material conditions. Which, BTW, is the opposite of what we would expect from a federal, democratic state competing with a centralized one-party state. The American model disciplines through a gun to the head: revoke compute if you deviate. The Chinese model disciplines through dependency: once your ports and power grid run on Chinese AI, switching costs become prohibitive.
BTW, it’s not as if the US only has the prediction stack - its use of airpower, missiles and drones demonstrates high-end production capability as well; I am tempted to say the US has both the pen and the sword, while the Chinese have everything in the middle.
The Gulf states are not passive recipients. They are sophisticated actors playing a three-sided market by purchasing American AI targeting systems for border security, running enterprise software on Azure, relying on Palantir for intelligence analysis, while simultaneously operating their ports on Chinese-built automation, managing their smart cities through Huawei platforms, and training their petroleum engineers on SenseTime’s tools. This is metabolic hedging, and it works because the two stacks, despite being strategically opposed, are functionally complementary: the Prediction Stack handles the cognitive heights, the Production Stack handles the material base. But hedging is not sovereignty, for to rent your intelligence from two competing empires is to be cognitively dependent on both of them.
This is why the wealthiest Gulf states have begun pursuing what they call Sovereign AI. The UAE’s Technology Innovation Institute developed the Falcon family of large language models. Saudi Arabia developed ALLaM, and the UAE developed Jais - Arabic-first models designed to natively understand the linguistic nuances and cultural values of the Middle East, rather than importing the epistemic biases engineered in Silicon Valley. But the realistic ceiling is managed interdependence, not independence. No one outside the US and China has the semiconductor fabrication capacity or the training infrastructure to compete at the frontier. Not the Gulf, not India, not France.
The Brain Needs a Body
So far I have been talking about Augmented Intelligence as though it exists in the realm of strategy, software, and institutional design. It does. But it also exists in the material realm. Every inference token, every trained neural weight, every algorithmic decision has a material cost measured in watts, in liters, and in dollars. The cloud is made of silicon and copper, cooled by water, powered by hydrocarbons, and financed by sovereign wealth.
One of the things that distinguishes extended cognition from ordinary tool use is that the cognitive system becomes dependent on the artifact for its basic functioning. The navigator who uses a chart is cognitively different from the navigator who has memorized the stars; take the chart away and the first navigator is lost. Scale that to the stack: the Prediction and Production Stacks span continents and accomplish things no individual component could achieve alone, but both have enormous metabolic requirements. A typical hyperscale data center requires around 100 megawatts of continuous power, with gigawatt-scale facilities under construction. A single 100MW facility consumes approximately two million liters of water every day just to keep its processors from overheating. In the United States, data centers already account for roughly 4.4% of annual electricity consumption, and that figure is expected to triple.
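The metabolic scale is easier to feel with a quick back-of-envelope calculation. The sketch below uses only the figures quoted above (100 megawatts, two million liters a day); the rounding and the liters-per-kilowatt-hour framing are my own, not measured data.

```python
# Back-of-envelope sketch of the data center figures quoted above.
# Inputs are the essay's own round-number estimates, not measurements.

POWER_MW = 100                  # typical hyperscale facility, continuous draw
WATER_L_PER_DAY = 2_000_000     # cooling water, liters per day

energy_mwh_per_day = POWER_MW * 24                      # MWh consumed per day
water_per_kwh = WATER_L_PER_DAY / (energy_mwh_per_day * 1000)

print(f"Daily energy draw: {energy_mwh_per_day:,} MWh")
print(f"Implied water use: {water_per_kwh:.2f} liters per kWh")
```

Nothing here is precise; the point is simply that the cloud’s water bill scales linearly with its power bill, which is why the two flows cannot be governed separately.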
The two stacks have different metabolic vulnerabilities. The American Prediction Stack concentrates its demands in a small number of massive nodes. A kinetic strike on a data center or a cyberattack on a utility grid can degrade the cognitive system. The Chinese Production Stack distributes its demands across thousands of smaller installations embedded in industrial infrastructure. Harder to knock out in one blow, but harder to upgrade: the latest frontier model is available throughout the prediction stack at once, while you have to upgrade each production node separately.
The Gulf states sit on two of the metabolic inputs that both stacks need in vast quantities: energy and capital, so it makes sense for them to try becoming a compute hub for the world along with being its gas station. Western hyperscalers face severe domestic permitting constraints and grid bottlenecks. The Gulf has abundant cheap energy, buildable land, and governments that can approve infrastructure at speed. Data center capacity in the Gulf is expected to triple by 2030, from roughly one gigawatt to 3.3 gigawatts.
But these facilities operate in a climate where summer temperatures regularly exceed 45°C. They cannot be cooled by ambient air. They require massive quantities of water, and in the hyper-arid Middle East, that water must come from the sea. Data centers in Saudi Arabia alone are projected to consume 15 billion liters of water per year. If current growth holds, that figure could scale to 87.5 billion liters - roughly 4% of the country’s total water production. That water comes from desalination, which is itself energy-intensive: seawater reverse osmosis consumes 3.7 to 4.5 kilowatt-hours per cubic meter and emits over 3 kilograms of CO₂ per cubic meter produced. The Gulf states burn hydrocarbons to generate electricity. That electricity powers desalination plants that produce freshwater. That freshwater cools the data centers that run the AI. The AI manages the smart grids and logistics systems that distribute the power and water. What was a contingent historical arrangement - we have oil but no water, so we built desalination - is becoming a critical dependency for the global AI system.
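The desalination figures can be chained together the same way. This is a hedged sketch using only the numbers cited above (87.5 billion liters, 3.7 to 4.5 kWh and over 3 kg of CO₂ per cubic meter); the annualized totals are my arithmetic, not an official projection.

```python
# Rough arithmetic on the water-energy-compute loop, using only
# the projections cited in this essay.

WATER_L = 87.5e9            # projected annual data-center water demand, Saudi Arabia
KWH_PER_M3 = (3.7, 4.5)     # seawater reverse osmosis energy intensity range
CO2_KG_PER_M3 = 3           # emissions per cubic meter desalinated

water_m3 = WATER_L / 1000                               # liters -> cubic meters
energy_gwh = tuple(water_m3 * k / 1e6 for k in KWH_PER_M3)
co2_tonnes = water_m3 * CO2_KG_PER_M3 / 1000

print(f"Desalination energy: {energy_gwh[0]:.0f}-{energy_gwh[1]:.0f} GWh per year")
print(f"CO2 from desalination alone: {co2_tonnes:,.0f} tonnes per year")
```

Even the high end of that energy range is modest against a national grid; the fragility in the loop sits in the water and the emissions, not the watts.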
This tight coupling of water, energy, and compute makes the core cognitive systems of global AI physically vulnerable to kinetic conflict and instability, as Iran’s targeting of data centers has shown. But this material vulnerability is a symptom of a more pervasive form of extraction, one that echoes earlier imperial systems.
The Highest Stage of the Algorithm
In 1902, the British economist J.A. Hobson published a study of imperialism that identified its root cause as domestic inequality. The working classes could not afford to consume what they produced. Capital accumulated at the top with no profitable domestic outlet, so it went abroad - seeking new markets, raw materials, and investment opportunities in territories that could be politically controlled. Territorial imperialism was the military and political expression of an economic problem that could have been solved at home through redistribution, but wasn’t.
Lenin sharpened Hobson’s analysis in 1916. For Lenin, the key transition was from the export of manufactured goods to the export of finance capital. Banks, cartels, and monopolies no longer needed to sell things to foreign markets. They needed to invest in them - to export capital itself, which would generate returns through the control of foreign productive capacity. This was imperialism’s mature form a century ago: not armies conquering territory for its own sake, but finance capital dividing and redividing the world among monopolies, with armies following to enforce the arrangement.
I want to suggest that while finance remains important, inference might be supplanting it as the bleeding edge of imperial control, with the transition from the export of finance capital to the export of inference capital. The pattern echoes what Hobson and Lenin described, but the commodity being exported is cognition itself.
AI functions as the ultimate absorber of excess capital. At a time of mounting inequality, hyperscalers and tech monopolies are running out of internal markets and, like their Victorian counterparts, are pouring hundreds of billions into chips, energy, real estate, and cooling systems across the globe. Unlike physical railroads or oil wells, the raw material of AI - data and compute - is infinitely scalable in principle, which makes it a bottomless sink for stagnant wealth. But capital expenditure of this magnitude must yield a return. The technology must find new markets to monetize, and in the process uncover more data that feeds the next stage of expansion. The two competing stacks are establishing a system in which every nation on earth is encouraged - required? - to rent its intelligence through cloud platforms owned by a handful of corporations in California and, on the other side, to embed its physical infrastructure with AI systems manufactured in Shenzhen.
Foucault’s political technology was originally a theory of how institutions produce docile bodies - individuals whose physical capacities are maximized while their political autonomy is minimized. The Prediction Stack is a political technology of this kind, operating at civilizational scale. It produces docile bodies within docile polities. We have already seen it with social media, where disciplining at population scale is political reality across the world. When a nation’s hospital logistics run on a proprietary model owned by a Silicon Valley corporation, when its port management relies on cloud infrastructure controlled from Shenzhen, when its military targeting depends on algorithms licensed from American defense contractors - that nation’s institutional cognition has been infiltrated by an external disciplinary apparatus. Its cognitive capacities have been extended, and it can do things it couldn’t do before, so the leash is long. But it’s there.
Don’t get me wrong - these aren’t malicious developments, no men with cigars in a smoky room sniggering about how they will control the world. That’s not how disciplining operates.
But the deeper fear is not that the models are biased, or that their patterns are created elsewhere. The deeper fear is that the prediction provider is even more firmly in control when it localizes its prediction engine for your geography. You will see yourself in a mirror made in California, and be happy that you are seeing yourself in a sari, not a suit - but so what? The mirror is not yours, and the more you get used to seeing yourself in it, the less you can walk away from it. You occupy the map - its cognitive infrastructure, the tools, the platforms, the models, the optimization logics through which its institutions think - and the territory follows.
In 1835, when British power in India was ascendant, Thomas Babington Macaulay wrote his Minute on Indian Education:
I have never found one among them who could deny that a single shelf of a good European library was worth the whole native literature of India and Arabia... We must at present do our best to form a class who may be interpreters between us and the millions whom we govern; a class of persons, Indian in blood and colour, but English in taste, in opinions, in morals, and in intellect.
Macaulay was more successful than he thought, but his successes will be nothing compared to what AI will usher in.
He who controls the Stack controls the (mental) State.
The AI Polyconflict potentially represents a Great Enclosure of the human mind. Between the fifteenth and nineteenth centuries, the enclosure of the English commons forced subsistence farmers off shared land and into wage labor, creating the proletariat that powered the Industrial Revolution. Something similar is happening now. The collective intelligence of humanity - our aggregated data, our art, our social interactions, our institutional knowledge, our agricultural wisdom, our legal traditions, our languages - has been enclosed, extracted, and refined into a rent-seeking asset controlled by a small number of corporations. Large language models are, as Ted Chiang observed, lossy JPEGs of the internet. The enclosure is not incidental to the technology. It is the business model. Foundation model providers spend billions training on the cognitive commons - the open internet, digitized libraries, public datasets, the accumulated record of human thought and expression - and then charge access fees to the very societies whose collective output made the models possible.
The Defense of Durée
Behind every violation of cognitive sovereignty (should I call it cognitive colonialism?) is the compression of human deliberative time. The kill chain compressed from seventy-two hours to twenty-two seconds. The Stack War runs at algorithmic velocity, technology cycles outpacing the institutional wisdom needed to govern them. The water-energy-compute loop accelerates the material metabolism of the planet without pause for reflection on what is being built or for whom. AI encloses the cognitive commons before most societies have understood what has been taken. And then, at the granular level, the attention economy fragments the day into dopamine hits, while the gig economy reveals the working day one trip at a time, uncertainty managed by the algorithm rather than the worker.
The human durée - the lived time of ethical deliberation, cultural reflection, democratic consensus - is treated as inefficiency. What I have been calling cognitive imperialism is, at its deepest level, a colonization of time. It works not primarily by controlling what people think - though it does that too - but by controlling the pace at which thought can happen, the temporal texture of institutional life. The defense against this is Public Intelligence - cognitive architecture that is genuinely situated in local physical and cultural reality, embodied in local institutional life, enacted through locally owned tools and practices, extended through artifacts that communities actually control. It insists on the right to think at your own pace. I don’t have nostalgia for a slower world, or a return to a glorious past. The defense of durée is not a defense of prior eras. It is a defense of the capacity for self-governance in the present - the institutional ability to deliberate, contest, and choose, rather than to receive, comply, and optimize.
In this, I see the outlines of what ‘socialism’ might look like if it were to respond to today’s capitalism.
In metabolic terms, defending durée means defending the bodily cognitive metabolism - the situated, practiced intelligence that gives distributed cognition its human texture and its capacity for ethical judgment. Public Intelligence, then, is what political cognitive metabolism looks like when it is working: not a passive conduit for corporate cognitive artifacts, but the active institutional capacity to set the terms on which those artifacts enter domestic life.
The practical shape of Public Intelligence is already visible in the places where it has been built. UPI, India’s Unified Payments Interface, is not just a payments system. It is a demonstration that a society can own the cognitive infrastructure of a critical domain - financial transactions - without ceding that infrastructure to private platforms. ONDC, the Open Network for Digital Commerce, is an attempt to own the marketplace layer rather than rent it from Amazon. These are the application-layer wins that show how you can own the interface between your society and the underlying technology.
None of this requires building frontier foundation models from scratch - that ambition is, for now, beyond the reach of any nation outside the US and China. What it requires is the institutional capacity to refuse coercive dependency in critical domains, to say no to an API when the terms are extractive. That capacity to refuse a dependency when it becomes coercive distinguishes interdependence from indebtedness. Of all the markers of cognitive sovereignty, the one that matters most is agency over time. Individuals and societies who can think at their own pace retain the capacity to imagine a different world. At the individual level, this is the durée that the attention economy fragments and the gig economy erodes. At the collective level, it is the institutional capacity to deliberate before acting, to contest before complying, to set the terms of cognitive integration.
Public Intelligence is the instrument through which that agency can be built and defended - not because it is a superior technology, but because it represents a different relationship to time.


