Artificial intelligence has captured today’s technological imagination. Models improve, benchmarks fall, and claims about superhuman reasoning arrive with weekly regularity. Depending on who is speaking, AI heralds a productivity revolution, a labor upheaval, or a step toward artificial general intelligence (AGI) - as a recent definition put it: ‘systems with the cognitive versatility and proficiency of a well-educated adult.’ Skeptics counter that expectations outrun evidence.
Both views capture something real, yet neither fully explains why the largest technology firms are pouring extraordinary sums into data centers, chips, energy, and model training. To see the pattern clearly, it helps to place AGI inside a longer arc: the shift, over decades, from an economy centered on factories to one centered on knowledge, coordination, and platforms, i.e., cognitive capitalism. In that frame, AGI stops looking like a purely technical goal and starts looking like a frontier myth of a system seeking its next organizational form. The result is both more mundane and more consequential than the hype suggests. It is less about an artificial “person” and more about how human groups, tools, and infrastructures combine into new institutions of intelligence, new cultural technologies to replace the old.
This essay develops that claim in five steps.
First, it reframes AGI from an individual benchmark to an institutional one. Second, it explains why scaling computation - not encoding human expertise - has proved the most reliable path forward, and what follows when intelligence becomes cheap. Third, it outlines two strategic pathways now visible in practice: an American, cloud-centric General Predictive Intelligence and a Chinese, embodied General Productive Intelligence (my terms, and I am not attached to them). Fourth, it considers embodiment and geography: where cognition happens, and how that choice shapes power. Finally, it situates safety debates and existential fears within a broader set of questions about institutions, governance, and the kinds of futures intelligence infrastructure makes possible.
1) From “smart machines” to intelligent institutions
Conversations about AGI usually start from a person-centric premise: can a machine match the cognitive versatility and proficiency of a well-educated adult? The unit of analysis is the individual mind. Progress is measured by performance on exams, code problems, math contests, or professional tasks. That anthropocentric lens is understandable, but it omits the most important fact about modern intelligence: human capability is already institutional. Science labs, hospitals, airlines, courts, universities, logistics networks, and city governments coordinate knowledge in ways no single person can match. The last century’s greatest leaps in capability came not from smarter individuals but from better organizations.
If we take AI seriously as a cultural technology, akin to writing, bureaucracy, accounting, or the modern firm, then a more useful definition of AGI becomes:
AGI is a collective institution that can match or exceed the cognitive versatility and proficiency of a well-educated adult, or of existing institutions composed of well-educated adults.
In other words, the relevant achievement is not a solitary software mind but an assembly of models, tools, workflows, norms, and people that produces general competence across many tasks. A high-school biology team that, with AI assistance, diagnoses complex cases more accurately than a Harvard Medical School specialist; a municipal planning office that uses simulation and optimization to design safer streets; a newsroom that integrates fact-checking, retrieval, and analysis to raise accuracy and throughput - each is an example of institutional augmentation.
The capability does not “reside” in the model; it emerges from the institutional stack in which the model participates.
This shift in viewpoint aligns neatly with the economic story of cognitive capitalism from last week. AGI, reframed, is the next step: institutions that compose those cognitive means - knowledge, coordination, platforms - into generalized, service-level intelligence.
We are accustomed to thinking of bureaucracies as incompetent or rule-bound; have you ever imagined an adaptive, flexible, and competent bureaucracy? Congratulations: you have AGI on your mind!
2) The bitter lesson and the price curve of intelligence
For seven decades, a recurring pattern has frustrated efforts to encode human expertise directly into machines. The “bitter lesson,” as Rich Sutton calls it, is that general methods that scale with computation - not carefully hard-coded, human-readable structures - win in the long run. Search and learning, given more compute and data, outstrip systems designed around human theories of mind. This is not a slight against expertise; it is a recognition that the real world is irreducibly complex, and that meta-methods which can absorb that complexity tend to outperform polished simplifications.
Simplifications work for us; reducing complexity is what human expertise is for. Just don’t expect an expert AI to work from the same simplifications.
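To make the bitter lesson concrete, here is a minimal sketch in Python. Everything in it - the target function, the noise level, the hand-fit “expert rule” - is invented for illustration; it is a toy of the pattern, not a benchmark. A fixed human simplification keeps whatever error it starts with, while a generic method that simply scales with data keeps improving.

```python
# Toy illustration of the "bitter lesson": a fixed, hand-coded rule
# vs. a generic learner that improves as data scales.
# The target function and "expert rule" are invented for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def world(x):
    """The 'irreducibly complex' ground truth (invented for illustration)."""
    return np.sin(3 * x) * x + 0.3 * np.cos(7 * x)

def expert_rule(x):
    """A polished human simplification: a hand-fit linear trend."""
    return 0.1 * x  # readable and plausible, and permanently wrong in detail

def knn_predict(x_train, y_train, x_query, k=5):
    """A generic method that scales with data: k-nearest-neighbor regression."""
    preds = np.empty_like(x_query)
    for i, xq in enumerate(x_query):
        nearest = np.argsort(np.abs(x_train - xq))[:k]  # k closest observations
        preds[i] = y_train[nearest].mean()
    return preds

x_test = np.linspace(-3, 3, 500)
y_test = world(x_test)

for n in [10, 100, 1000, 10000]:
    x_train = rng.uniform(-3, 3, n)
    y_train = world(x_train) + rng.normal(0, 0.1, n)  # noisy observations
    mse_learner = np.mean((knn_predict(x_train, y_train, x_test) - y_test) ** 2)
    mse_expert = np.mean((expert_rule(x_test) - y_test) ** 2)
    print(f"n={n:>6}  expert MSE={mse_expert:.3f}  learner MSE={mse_learner:.3f}")
```

The expert rule is legible and never gets better. The k-NN learner knows nothing about the domain, yet its error falls as observations accumulate - the scaling pattern Sutton describes, in miniature.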
If that’s right, then two practical conclusions follow. First, the most reliable path to more capable systems is to increase compute, improve data pipelines, and refine training, i.e., to invest in infrastructure. Second, as compute becomes cheaper and models get better, the effective price of intelligence falls. Intelligence becomes more like electricity: available on tap, embedded in services, called when needed. When the price falls, consumption rises - a Jevons-style response. We do not merely replace existing uses; we invent new ones. A radiology analysis that costs $10 instead of $1,000 will be ordered more often; clinical workflows will adapt; quality control will change; only later do staffing patterns shift.
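A toy demand curve makes the arithmetic visible. The sketch below assumes constant-elasticity demand; the specific numbers (baseline price, baseline volume, an elasticity of 1.5) are illustrative assumptions, not empirical estimates. When demand is elastic, cutting the price a hundredfold raises volume so much that total spending grows rather than shrinks.

```python
# Toy Jevons-style response under constant-elasticity demand.
# All numbers (prices, baseline volume, elasticity=1.5) are
# assumptions for illustration, not empirical estimates.

def demand(price, base_price=1000.0, base_qty=1.0, elasticity=1.5):
    """Quantity demanded at a given price: Q = Q0 * (P / P0) ** -e."""
    return base_qty * (price / base_price) ** (-elasticity)

for price in [1000, 100, 10]:
    qty = demand(price)
    print(f"price=${price:>5}  volume x{qty:>7.1f}  total spend=${price * qty:>7,.0f}")
```

In this toy world, the $1,000 exam that falls to $10 is ordered a thousand times as often, and aggregate spending on it grows tenfold: consumption rising faster than price falls, which is the pattern the hyperscalers are betting on.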
This dynamic suggests that “AGI” might not appear as a single, dramatic moment when a system surpasses all human tests. It may feel more like a steady rise in the background competence of everyday tools and processes. The visible frontier becomes less “a machine smarter than a person” and more ambient general intelligence delivered as a service.
The desired good is not a new mind; it is reliable coverage across many tasks at acceptable cost.
If that is the economic logic, then the strategic logic for firms is obvious: build the infrastructure that makes ambient intelligence available and cheap, and position yourself at bottlenecks where value accumulates - chips, data centers, model pipelines, deployment, and distribution. That’s the reason hyperscalers are spending so aggressively: a world in which radiology exams cost a fraction of what they cost today is one in which their data centers will have many more uses than they do today.
3) Two pathways: predictive vs. productive general intelligence
With the infrastructure logic in view, we can distinguish two pathways that correspond, roughly, to existing national strengths in the US and China, respectively.
PS: As I said before, the two GPIs are not standard descriptors of AI capacity (maybe they will be one day!)
General Predictive Intelligence (GPreI) is the American pattern: centralized training of large models in hyperscale data centers; distribution via APIs and assistants into consumer and enterprise software; optimization for predictive and generative tasks across text, code, images, and more. The firm playbook is to integrate the cognitive stack - chips, training clusters, models, orchestration, deployment - and capture value at the platform layer. Governance leans market-first and the unit of deployment is often the cloud account or developer surface.
General Productive Intelligence (GProI) is the Chinese pattern: embedding AI throughout physical infrastructure - factories, vehicles, logistics, energy grids, city systems, robotics. The focus is less on universal chat interfaces and more on perception-action loops in the world. The institutional playbook combines state direction, industrial policy, and scale manufacturing to integrate sensors, actuators, and models into productive systems. Governance leans toward national planning; pilots scale through municipalities and state-aligned firms; and the unit of deployment is often the site - a plant, a port, a district.
Both pathways are coherent. Both can produce real capability. They differ not only in technical emphasis but in (one of our favorite phrases!) metabolism - how energy, matter, and information flow through the economy. The American approach deepens the cognitive metabolism built by platforms; the Chinese approach extends the material metabolism of manufacturing and infrastructure. Where one concentrates compute in data centers and exports cognition via networks, the other distributes embodied cognition across places and machines.
The contrast is not absolute; each country mixes both elements. But as a first approximation it clarifies today’s strategy space:
Stack position: cloud and models vs. robotics and systems integration
Deployment: virtual distribution vs. embedded roll-out
Moat: platform lock-in vs. supply-chain and site control
Scaling loop: data-driven product usage vs. production throughput
Policy alignment: antitrust-bounded private platforms vs. state-coordinated industrial ecosystems
Seen this way, “who wins AGI” is the wrong question. The more relevant question is: which institutional pathway most efficiently converts energy and computation into reliable capability in the domains that matter? The answers will vary by sector. Predictive systems may dominate knowledge services; productive systems may dominate logistics, mobility, and urban management. The frontier is the fit between architecture and task.
4) Embodiment, cloud, and the geography of cognition
A decade or more ago, many believed that general intelligence would require embodiment - machines moving, sensing, and learning in the physical world. The rapid rise of large, disembodied models overturned that assumption for language, code, and many reasoning tasks. Yet embodiment has not disappeared; it has migrated into robotics, industrial control, and the mesh of sensors and actuators that define modern infrastructure.
Why does embodiment matter? Because intelligence is not only inference; it is coordination under constraints - time, space, energy, safety, law, and human expectation. A predictive engine in the cloud can answer questions. A productive system in the world must act, and acting binds intelligence to place. As more AI is bound to place - on factory floors, in distribution centers, at traffic intersections - the geography of cognition changes. Data centers remain critical, but they are complemented by sites where perception and action matter. The physical world becomes addressable by software, and software becomes accountable to the physical world.
This geographic turn extends the idea of metabolic sovereignty. Nations have always cared about energy and materials; now they must also care about compute logistics - power for data centers, grid stability, semiconductor supply, fiber routes, cooling water, and the land footprint of industrial computing. Under cognitive capitalism, “the cloud” felt abstract. Under the coming regime, cognition has materiality, even in the pursuit of predictive intelligence.
Embodiment thus does not refute the cloud; it grounds it. Predictive infrastructure and productive infrastructure will interlock, and institutions that manage that interlock - balancing centralization with locality - will accumulate advantage.
The test of the emerging AGI bureaucracies (see section 1) will be their nimbleness in switching from prediction to production and vice versa.
5) Safety, speculation, and what really matters
No discussion of AGI is complete without acknowledging fears about misalignment and existential risk. Thoughtful researchers warn that advanced systems might deceive, manipulate, or optimize for goals that harm humans; others fear misuse by malicious actors; still others worry about brittle dependencies as cognition becomes infrastructure. These concerns merit serious attention. But the public debate often collapses into two unsatisfying extremes: eschatology, which imagines civilization-ending outcomes, and dismissal, which treats safety as a cover for competitive positioning.
A more productive stance keeps risk on the table while widening the frame. The central questions for the near-to-medium term are institutional:
How do we govern systems that blend public rules and private infrastructure?
Where should authority sit for auditing high-impact models and deployments?
What forms of transparency and recourse are owed to citizens as decisions become machine-supported?
How do we align incentives so that reliability and safety improve alongside capability?
How do we keep state capacity and cloud capacity from collapsing into unaccountable power?
The dominance of neoliberalism meant that the first era of cognitive capitalism was very poorly regulated; now that we are back to thinking about industrial policy everywhere, maybe the AI avatar of cognitive capitalism will have better oversight.
What deserves emphasis is not only danger but direction. The most important outcomes will turn on how societies compose intelligence into institutions: schools, clinics, courts, utilities, research labs, and city halls. The work ahead is less about summoning a supermind and more about designing arrangements that extend human agency, improve judgment, and distribute benefits broadly.
6) Why “AGI as a smarter person” is the wrong frontier
If AGI is framed as machines surpassing humans at everything humans do, we make human capability the ceiling of intelligence. That is a natural reflex, but it unintentionally narrows our ambition. We do not evaluate telescopes by how well they mimic the eye; we evaluate them by what new worlds they reveal. Likewise, the most interesting question is not whether AI can replace a novelist or out-argue a lawyer, but what forms of imagination and coordination become possible with reliable, cheap, general-purpose cognition as a substrate.
Some of those forms will be familiar: faster science, better logistics, improved safety in complex systems. Others will be less obvious: new institutional species that do not yet exist because they are infeasible without ambient intelligence. Consider the gap between what a small town would like to plan - streets, energy, housing, health - and what it can plan with its current staff. Imagine that town’s manager having 90th percentile talent in urban planning available on tap. Or the gap between the problems a small research lab wants to attack and the problems it can realistically explore. With service-level cognition available, those gaps shrink.
The value is not that AI “thinks like us but better,” but that institutions can think at levels previously reserved for the largest and richest organizations.
This is why AGI (using the definition I gave at the beginning of this essay) matters even if the eschatological debates feel remote. The systems worth designing are institutional, not anthropomorphic - together with the protocols that form the new governance layer of these institutions.
7) A measured conclusion: AGI beyond AGI
Placed within the history of cognitive capitalism, the AGI conversation looks different. It is not a rupture so much as a continuation: the steady migration of value from physical production toward the coordination of knowledge and decision. Hyperscaler spending on data centers and chips is not a fad; it is the infrastructure strategy of firms whose business model is organizing cognition at global scale. The American path emphasizes predictive platforms. The Chinese path emphasizes productive systems.
Embodiment is important because action binds intelligence to place.
Governance via protocol becomes central when cognition becomes infrastructure.
The rhetoric around AGI (not my definition, but the commonly accepted one), whether utopian or apocalyptic, has a life of its own, and I don’t have the expertise to judge where it’s going. Nevertheless, my institutional re-definition of AGI points toward a profoundly important question:
What intelligent institutions do we want and how do we build them?
If the industrial era was defined by machines that processed matter, and the first digital era by platforms that organized information, the era now opening may be defined by institutions that process both - converting energy and computation into judgment, coordination, and care. That is AGI after cognitive capitalism: not a brain in a box, but a society that learns to compose minds, machines, and infrastructures into systems that reliably do what needs doing, and, occasionally, make new things possible.
More next week when we turn towards culture.

New information technologies have often resulted in new societies: the invention of record-keeping with clay tablets made certain forms of trade and taxation possible, which in turn allowed larger social units to emerge. The printing press allowed people to spread information quickly and organize, leading to the overthrow of monarchies and the establishment of democracies. If we follow your thoughts on the contributions of AI to social institutions, what kinds of societies can we imagine emerging?
So little of AI is formulated in terms of empowering the common man to do what he couldn't do before -- even the Internet was first seen as a tool of emancipation and anarchic liberation: think of John Perry Barlow's Declaration of the Independence of Cyberspace. Surely AI will give each of us the power to gain greater autonomy, outwit the bureaucracies, self-organize into intelligent communes, overthrow our oppressors, and achieve independence?