In 1750, a scholar-official from the Yangtze Delta and an English gentleman enjoyed roughly comparable standards of living. A century later, that rough parity had been shattered: Western Europe had entered the age of the steam engine, while the Qing Empire suffered a devastating defeat in the Opium Wars. This is what historians call the "Great Divergence"—one of the most profound fractures in the history of human civilization. Today, we are witnessing the beginning of another divergence: computational power has replaced the steam engine, data has replaced coal, and the speed of this new divergence may far exceed our imagination.

I. Echoes of History: Understanding the First Great Divergence

Pomeranz's Revisionist Historiography

Before Kenneth Pomeranz published The Great Divergence in 2000, mainstream historiography broadly accepted "European exceptionalism"—the notion that the West had always possessed some unique institutional, cultural, or rationalist tradition that destined it to surpass the East. Max Weber's Protestant ethic, Douglass North's institutional economics, and David Landes's cultural determinism were all variants of this narrative.[1]

Pomeranz offered a radical revision: until around 1750, the Yangtze Delta and England were remarkably similar in market development, agricultural productivity, life expectancy, and consumption levels—and China even held a slight edge in certain respects.[2] So why did the Industrial Revolution occur in Britain rather than in Jiangnan? Pomeranz's answer was surprising: not because Britain was "more advanced," but because it was "luckier"—it possessed easily accessible coal deposits and "ghost acreage" from New World colonies that allowed it to break free from the ecological constraints of the Malthusian trap.[3]

Institutions, Geography, and Contingency

Of course, Pomeranz's argument is not the final word. Robert C. Allen, in Global Economic History, proposed the "high-wage economy" hypothesis: it was precisely because British wages were high and coal was cheap that innovations substituting machines for labor became profitable.[4] Joel Mokyr emphasized "the Enlightened Economy"—the unique knowledge culture of eighteenth-century Europe that enabled the systematic application of scientific knowledge to production.[5]

This scholarly debate remains unresolved, but one point commands consensus: the Great Divergence was not proof of civilizational superiority, but rather a complex interplay of institutions, resources, contingency, and path dependence. More importantly, once the divergence began, the gap became self-reinforcing—leaders commanded more resources for innovation while laggards were trapped in the predicament of perpetual catch-up. This "Matthew Effect" dynamic is the key to understanding the current AI divergence.[6]

II. Computational Hegemony: The Material Foundation of the New Divergence

From Coal to Chips

If coal was the "strategic resource" of the first Industrial Revolution, then compute is its equivalent in the AI era. The computational power required to train a large language model is growing at an exponential rate: GPT-3 consumed approximately 3,640 petaflop-days of compute, GPT-4 is estimated to have required more than ten times that, and the next generation of models may demand several times more still.[7] This means that only organizations with massive computational infrastructure can stand at the frontier of AI development.
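The petaflop-day figure above is just a unit conversion on GPT-3's reported total training compute (roughly 3.14 × 10²³ floating-point operations, the estimate published alongside the petaflop/s-day number), so the arithmetic can be checked in a few lines:

```python
# Unit-conversion check for the ~3,640 petaflop/s-day figure.
# Assumption: GPT-3's total training compute was ~3.14e23 FLOPs,
# the commonly cited estimate from the model's technical report.

total_flops = 3.14e23            # total floating-point operations (reported estimate)
pfday_in_flops = 1e15 * 86_400   # one petaflop/s sustained for 24 hours

pf_days = total_flops / pfday_in_flops
print(f"{pf_days:,.0f} petaflop/s-days")  # ≈ 3,634, in line with the ~3,640 cited
```

The same conversion makes the scale of successor models concrete: a tenfold increase would already exceed 36,000 petaflop/s-days, an amount of compute only a handful of organizations can field.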

The production of compute is highly concentrated. The world's most advanced chip manufacturing capability is held by a handful of companies: TSMC produces over 90% of the world's advanced-process chips, while the design of these chips is dominated by American firms such as NVIDIA and AMD.[8] This "chokepoint" supply chain structure has turned compute into a geopolitical bargaining chip—the U.S. export controls on chips to China in 2022 are a direct manifestation of this logic.[9]

Data Colonialism and Digital Enclosure

Beyond compute, data is AI's other core resource. But the distribution of data is equally uneven: the vast majority of global internet traffic flows through the servers of American tech giants. Google processes over 90% of the world's search queries, Meta holds the social data of billions of users, and Amazon controls the consumption trails of global e-commerce.[10]

Nick Couldry and Ulises Mejias have termed this phenomenon "data colonialism": just as nineteenth-century colonizers plundered land and labor, today's tech giants are extracting human behavioral data and converting it into private capital.[11] This new form of "enclosure" relegates countries of the Global South to the role of data suppliers rather than beneficiaries of data value.

The Energy Constraint

Energy is another material foundation of AI. The carbon footprint of training large models is staggering: one study estimated that training a GPT-3-scale model produces carbon emissions roughly five times those of an average car over its entire lifetime, manufacturing included.[12] As model scale continues to expand, energy demands will only grow. This means that regions with cheap, abundant energy—whether Texas's natural gas power or Nordic hydroelectric power—will hold an advantage in the AI race.

Ironically, this is remarkably similar to the first Great Divergence. Britain's coal and America's oil were once the material foundations of industrial hegemony. Today, whoever controls the trinity of compute, data, and energy controls the lifeline of the AI era. And the distribution of these resources is far from equal.

III. Social Structural Fractures: Starting with the Decline of the Middle Class

The Specter of Technological Unemployment

Every major technological revolution reshapes the social class structure. The Industrial Revolution eliminated artisans and created factory workers; the computer revolution eliminated typists and created programmers. What distinguishes the AI revolution is that it threatens not just the blue-collar workforce but the very heartland of the white-collar middle class.[13]

Economist David Autor's research shows that over the past three decades, the U.S. labor market has exhibited a clear "polarization" trend: high-skill, high-wage jobs have increased, as have low-skill, low-wage service jobs, but middle-tier routine jobs—accounting, administration, clerical work—are being replaced by automation.[14] The advent of AI may accelerate this trend: legal assistants, radiologists, financial analysts, translators—occupations once considered "knowledge work"—now face unprecedented disruption.

The Superstar Economy

Accompanying labor polarization is the rise of the "superstar economy." Economist Sherwin Rosen foresaw this trend as early as 1981: in "winner-take-all" markets, small differences in talent lead to enormous income disparities.[15] AI amplifies this effect: top-tier programmers, entrepreneurs, and AI researchers can leverage technology to create astonishing value, while mid-level knowledge workers risk being replaced by algorithms.

Erik Brynjolfsson and Andrew McAfee of MIT call this the central paradox of "the Second Machine Age": technological progress generates unprecedented wealth, yet the distribution of that wealth grows ever more unequal.[16] Their research shows that since the 1980s, U.S. productivity has continued to grow while median wages have barely budged—a phenomenon known as the "Great Decoupling."

The Devaluation of Educational Investment

For the middle class, education has long been the ladder of upward mobility. But in the AI era, that ladder may be collapsing. In the past, a university degree was the ticket to the middle class; today, many graduates find that the skills they acquired are being replaced by automation, while student debt weighs on them like a stone.[17]

The more fundamental question is: when AI can perform an ever-expanding range of cognitive tasks, what is the meaning of "education"? Is it learning specific knowledge and skills, or cultivating more fundamental capacities like creativity, critical thinking, and emotional intelligence? This is a question the educational system has yet to answer. Pierre Bourdieu observed that the educational system is a machine for reproducing social inequality;[18] in the AI era, this reproductive mechanism may continue in new forms—wealthy families can provide their children with AI education, coding training, and startup capital, while children from disadvantaged families are left behind.

IV. Geopolitical Realignment: The Rise of Techno-Nationalism

The Deep Logic of the US-China Tech War

Since 2018, the United States has waged a "tech war" against China that is not merely a trade dispute but a contest for future hegemony. From the Huawei ban to chip export controls, from the Entity List to the CHIPS and Science Act, the American objective is clear: to prevent China from achieving leadership in AI, semiconductors, and other critical domains.[19]

The rise of this "techno-nationalism" signals the end of the globalization era—at least in the high-tech sector. Henry Farrell and Abraham Newman's theory of "weaponized interdependence" argues that "choke points" in global supply chains have become instruments of geopolitical leverage.[20] TSMC's chips, ASML's lithography machines, and NVIDIA's GPUs are all such choke points—whoever controls them holds the lever of coercive power.

Digital Sovereignty and Tech Blocs

In this competition, the world is fracturing into distinct "tech blocs." The U.S.-led bloc includes its traditional allies: the EU, Japan, South Korea, Taiwan, and Australia. China, meanwhile, seeks to build its own technology ecosystem, from Huawei's HarmonyOS to domestically produced chips.[21] This trend toward "decoupling" or "de-risking" is reshaping the geography of global supply chains.

The EU is attempting to chart a "third way." From the General Data Protection Regulation (GDPR) to the Artificial Intelligence Act (AI Act), the EU emphasizes "digital sovereignty"—neither dependent on American tech giants nor accepting Chinese-style digital authoritarianism.[22] But the problem is that the EU lacks its own major tech platforms, and whether its role as a "regulatory superpower" can translate into genuine technological competitiveness remains an open question.

The Plight of the Global South

Caught in the crossfire of US-China competition, countries of the Global South face a particularly difficult situation. They lack the computational infrastructure, talent reserves, and capital investment needed to develop AI, yet they must choose between different "tech blocs." Many countries in Africa, Latin America, and Southeast Asia face the risk of "digital dependency"—they use American platforms and Chinese equipment while lacking core technological capabilities of their own.[23]

This is reminiscent of the nineteenth-century "Great Divergence": when Europe entered the industrial age, Asia, Africa, and Latin America became sources of raw materials and dumping grounds for manufactured goods. Today, will the Global South once again be reduced to "data colonies"—providing cheap data-labeling labor, consuming AI products, yet unable to share in the technological dividends? It is a deeply unsettling question.[24]

V. The Governance Vacuum: Who Will Set the Rules for AI?

Private Governance by Tech Giants

Before national governments could even grasp the implications of AI, tech giants were already exercising the power of "private governance" in practice. OpenAI decides what GPT can and cannot say, Google determines the ranking of search results, and Meta decides what content gets recommended or taken down. These decisions affect billions of people's access to information, yet they are subject to no democratic process.[25]

Frank Pasquale warned in The Black Box Society that algorithms are becoming a new "veil of power"—they shape the world we see without explaining their operating logic to us.[26] The rise of this "algorithmic governance" poses a fundamental challenge to traditional mechanisms of democratic accountability.

The Absence of International Governance

Nuclear weapons have the Non-Proliferation Treaty, chemical weapons have the Chemical Weapons Convention, but AI still lacks any binding international governance mechanism. Discussions at the United Nations have progressed slowly, and the major powers that control AI technology—the United States and China—have shown little interest in international norms.[27]

The dangers of this "governance vacuum" are multifaceted. In the military domain, the development of lethal autonomous weapons systems (LAWS) could trigger a new arms race; in the economic domain, AI-driven automation could exacerbate global inequality; in the political domain, deepfakes and information manipulation could erode the cognitive foundations of democratic societies.[28] Without effective international coordination, these risks will only continue to accumulate.

VI. History Does Not Repeat, But It Rhymes

Path Dependence and Institutional Inertia

Looking back at the first Great Divergence, the lesson we draw is that small differences in initial conditions, amplified through path dependence, can lead to radically different historical trajectories. Once Britain industrialized first, it possessed the capital and technology to further widen its lead, while lagging nations were trapped in the predicament of "catch-up"—forced to replicate the path of the leader under disadvantageous conditions.

The divergence of the AI era may follow a similar logic. Organizations that achieve early leadership in AI today—whether nations or corporations—will command more resources to invest in next-generation R&D. The models they train will generate data, which will be used to train even more powerful models, a self-reinforcing flywheel. Meanwhile, laggards face a "double catch-up": they must not only close the gap with existing technology but also keep pace with a constantly advancing technological frontier.[29]

Possible Divergence Scenarios

Over the next twenty years, the Great Divergence of the AI era may unfold along multiple axes:

  • Between nations: The gap widens between the few countries that control core AI technologies (the United States, China, and perhaps a handful of allies) and the majority of nations that depend on foreign technology.
  • Between corporations: A handful of tech giants monopolize AI infrastructure, while small and medium enterprises are reduced to vassals of the "application layer."
  • Within societies: A deepening divide emerges between knowledge elites who can harness AI and the middle class displaced by it.
  • Between generations: A growing chasm separates "digital natives" who grew up in an AI environment from older individuals left behind by technology.

These divergences are not inevitable destinies but the outcomes of policy choices. However, without early intervention, the natural evolution of markets will likely lead to greater inequality rather than shared prosperity.

Rethinking the Narrative of "Progress"

Finally, perhaps we need to reexamine the narrative of "progress" itself. The narrative of the first Great Divergence often equated industrialization with progress and Westernization with modernization. This narrative obscured the costs of industrialization: environmental destruction, colonial exploitation, and the alienation of labor.[30]

Similarly, today's mainstream narrative about AI—efficiency, innovation, convenience—may obscure certain fundamental questions: Whose interests does AI development serve? Does it strengthen or diminish human autonomy? How is the wealth it generates distributed? These questions have no simple answers, but if we fail to ask them, we can only passively accept a future arranged by others.

History does not simply repeat, but it does "rhyme." The Great Divergence of the eighteenth century shaped the world order for the next two hundred years. The AI divergence of the twenty-first century may similarly define the civilizational landscape of the future. We stand at a historical inflection point, and our choices—or inaction—will determine where this turning leads.