Geopolitics & Global Power: Medium-term

2028–2033 | Transformations underway, accelerating | Systems & Institutions


Current State

By 2028, the geopolitical landscape of AI will have evolved from the initial scramble of export controls and regulatory frameworks into a more structurally entrenched competition. The patterns established in 2024-2027 -- US-led technology denial, Chinese domestic mobilization, European regulatory leadership, and Global South marginalization -- will have matured into durable strategic postures. The medium-term period is where the consequences of these early choices compound, and where the window for course correction narrows significantly.

This section projects forward from the established trajectory, assessing how the 2026-2028 dynamics described in the short-term analysis evolve through 2033.

Key Drivers

1. The maturation of China's indigenous semiconductor ecosystem. By 2028-2030, the massive investments China has made in domestic chip fabrication will begin to yield results -- though the picture will be uneven. China is likely to achieve domestically produced AI chips at roughly the 5nm-equivalent performance level using advanced DUV multi-patterning or potentially early domestic EUV capabilities, though trailing TSMC/Samsung by 2-3 nodes. SMIC, Huawei's HiSilicon, and other domestic producers will supply chips that are "good enough" for many AI workloads, even if not competitive at the absolute frontier. The critical question is whether China can close the gap in chip manufacturing equipment -- the domain where ASML's monopoly on EUV lithography represents the hardest chokepoint to replicate.

2. The AI arms race enters a dangerous phase. By the late 2020s, AI will be deeply integrated into military systems across major powers. Autonomous drones, AI-driven cyber offensive and defensive operations, intelligence fusion systems, and AI-assisted command and control will be operational -- not experimental. The absence of international treaties governing military AI creates escalation risks analogous to the early nuclear age before arms control frameworks existed. Unlike nuclear weapons, AI systems can be deployed incrementally and ambiguously, making deterrence calculations far more complex.

3. Regulatory divergence solidifies into bloc formation. The EU AI Act will be fully operational, with enforcement actions and case law establishing its practical meaning. China will have refined its own AI regulatory apparatus, which already includes regulations on algorithmic recommendations (2022), deep synthesis/deepfakes (2023), and generative AI (2023). The US, having oscillated between executive orders and legislative inaction, will likely still lack comprehensive federal AI legislation, though sector-specific regulations (healthcare AI, financial AI, AI in hiring) may have accumulated. This creates three distinct regulatory ecosystems with limited interoperability.

4. AI-driven economic restructuring reshapes geopolitical power. As AI transforms productivity and economic output, the relative economic weight of nations will shift. Countries that successfully integrate AI across their economies will see GDP growth acceleration; those that do not will fall further behind. The McKinsey Global Institute estimated that AI could add $13-22 trillion to global economic output annually by 2030, but this value will be overwhelmingly concentrated in the US, China, and a handful of other advanced economies.

5. The "compute sovereignty" imperative. By 2028-2030, it will be evident that control over AI compute infrastructure -- data centers, chips, energy supply, cooling systems -- is as strategically important as control over oil was in the 20th century. Nations that lack sovereign compute capacity will be dependent on foreign cloud providers for their AI capabilities, creating a new form of strategic vulnerability. This realization will drive a wave of national compute infrastructure investments, sovereign cloud initiatives, and energy policy adjustments.

Projections

2028-2030: The semiconductor partial decoupling. China will achieve meaningful but incomplete semiconductor self-sufficiency. Its domestic chips will be capable of training large AI models, albeit less efficiently than Western equivalents. This partial decoupling has paradoxical effects: it reduces US leverage over China's AI development while simultaneously reducing Chinese dependence on the global semiconductor supply chain. The export control regime, having served its purpose of buying time, will face diminishing returns. The US will need to decide whether to escalate further (potentially targeting older process nodes, which would disrupt global electronics supply chains) or accept a new equilibrium.

2029-2031: AI-enabled surveillance and governance divergence. China's model of AI-enabled governance -- predictive policing, social credit systems, automated content moderation, and algorithmic bureaucracy -- will be mature and will be exported, through both commercial sales and development assistance, to nations across Africa, the Middle East, Central Asia, and Southeast Asia. An estimated 70-80 countries will have deployed some form of Chinese-origin AI surveillance technology by 2030, according to Carnegie Endowment projections extrapolated from 2024 data. This creates a growing bloc of nations whose digital governance infrastructure is interoperable with China's and dependent on Chinese technical support and standards.

2029-2032: The Global South AI divergence. A critical split will emerge within the developing world. A first tier -- India, Brazil, Indonesia, Vietnam, Nigeria, Kenya, and a few others -- will have developed meaningful domestic AI capabilities: local AI startups, government AI strategies, AI-focused university programs, and growing pools of AI-literate workers. These countries will have enough scale (population, data, capital, talent) to participate in the AI economy as more than passive consumers. A second tier, comprising most of sub-Saharan Africa, Central America, Central Asia, and small island states, will face deepening exclusion. Without sufficient compute infrastructure, training data in their languages, or the institutional capacity to govern AI, these nations risk becoming AI colonies -- their populations subject to AI systems designed elsewhere, optimized for other contexts, and controlled by foreign entities.

2030-2033: The question of international AI governance crystallizes. By the early 2030s, the accumulation of AI-related crises -- a major AI-assisted cyberattack, an autonomous weapons incident, a systemic AI failure in financial markets, or an AI-driven disinformation campaign that destabilizes a government -- will create political pressure for international governance that the current voluntary frameworks cannot absorb. The UN system will be under pressure to produce binding agreements, but the US-China rivalry will make consensus extraordinarily difficult. A more likely outcome is a patchwork of plurilateral agreements among like-minded nations, analogous to the Wassenaar Arrangement for dual-use technologies, rather than a universal treaty.

2031-2033: Energy and environmental geopolitics of AI. The massive energy demands of AI data centers will have become a significant geopolitical and environmental issue. The International Energy Agency projected that data center electricity consumption could double between 2022 and 2026; by the early 2030s, AI-specific energy demand could represent 3-5% of total electricity consumption in major AI-producing nations. This creates new geopolitical dynamics: nations with abundant clean energy (Iceland, Norway, Canada, the Gulf states with their solar potential) gain strategic advantage as AI compute hosts. The carbon footprint of AI becomes a factor in climate negotiations. Countries with energy surpluses gain leverage; those with energy deficits face another constraint on AI sovereignty.
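The scale implied by these figures can be checked with simple arithmetic. The sketch below uses the IEA doubling projection quoted above; the 2022 baseline (~460 TWh) and the ~4,000 TWh grid size (roughly US-scale annual generation) are illustrative assumptions, not figures from this report.

```python
# Back-of-envelope arithmetic behind the energy figures above.
# All inputs are either quoted in the text or labelled assumptions.

# IEA: data-centre electricity use "could double between 2022 and 2026".
base_2022_twh = 460                 # assumed 2022 baseline (IEA's ~460 TWh estimate)
doubled_2026_twh = 2 * base_2022_twh

# Implied compound annual growth rate over the four-year window
cagr = (doubled_2026_twh / base_2022_twh) ** (1 / 4) - 1    # ~18.9% per year

# "3-5% of total electricity consumption in major AI-producing nations":
# against a roughly US-scale grid (~4,000 TWh/yr, an assumption), that is
low_twh = 0.03 * 4000
high_twh = 0.05 * 4000
print(f"Implied growth: {cagr:.1%}/yr; "
      f"3-5% of a 4,000 TWh grid: {low_twh:.0f}-{high_twh:.0f} TWh")
```

Even the low end of that band (~120 TWh) exceeds the annual electricity consumption of many mid-sized countries, which is why compute hosting becomes a source of geopolitical leverage.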

Impact Assessment

Power redistribution:

  • The US retains frontier AI leadership but faces an increasingly capable Chinese AI ecosystem that is no longer dependent on US hardware. The US's primary advantage shifts from hardware control to ecosystem depth -- the combined strength of its tech companies, research universities, capital markets, and talent pipeline. However, if domestic political instability, restrictive immigration policy, or anti-tech regulation undermines these advantages, the lead could erode faster than expected.
  • China achieves "AI great power" status with a self-sustaining ecosystem, even if not at frontier parity. China's advantages -- scale of data, state-directed investment, speed of deployment, and willingness to accept lower accuracy thresholds for broader deployment -- allow it to lead in AI applications even if it trails in foundational research. The Belt and Road Digital Silk Road strategy gives China dominant AI infrastructure positions across much of the Global South.
  • The EU faces a critical reckoning. Its regulatory leadership has set global standards, but its failure to produce competitive frontier AI models or major AI platforms risks making it a rule-setter without market power -- analogous to setting building codes in a city where all the construction companies are foreign. The gap between European AI ambition and European AI capability may widen.
  • India emerges as the most significant swing state in AI geopolitics. With the world's largest population, a massive English-speaking talent pool, a growing domestic tech ecosystem, and strategic value to both the US and China, India's AI alignment choices will have outsized impact. India's ability to navigate between blocs while building domestic capability is the most consequential developing-country AI story of this period.

The digital colonialism risk materializes:

By 2030, the pattern of "AI colonialism" -- where advanced nations and their corporations deploy AI systems in developing countries, extracting data and economic value while exporting algorithmic decision-making -- will be a concrete reality, not a theoretical risk. Specific manifestations:

  • Agricultural AI deployed in Africa and South Asia by multinational firms captures farmer data and optimizes for commodity markets rather than local food security.
  • Financial AI (credit scoring, lending algorithms) trained on Global North data produces systematically biased outcomes when applied in Global South contexts.
  • Governance AI provided by Chinese or Western firms for tax administration, identity verification, and resource allocation embeds foreign assumptions about governance into sovereign state functions.
  • Language model dominance means AI systems that mediate information, education, and commerce operate primarily in English, Mandarin, and a handful of other major languages, marginalizing the linguistic and cultural contexts of billions of people.

Cross-Dimensional Effects

Security and conflict (Dimension): The medium-term period is when AI-enabled military capabilities move from enhancement to transformation. AI-powered autonomous weapons systems will be deployed -- initially in constrained roles (perimeter defense, drone swarms, mine clearance) but expanding toward more consequential applications. The risk of an "AI arms race" dynamic, where each side's deployments prompt the other to accelerate, is high. Critically, unlike nuclear weapons, there is no "AI non-proliferation" framework, and the dual-use nature of AI technology makes one nearly impossible to construct.

Digital divide (Dimension): The medium-term is when the AI divide between the Global North and South begins to have irreversible structural consequences. Countries that miss the 2028-2033 window for building AI capacity may be permanently relegated to consumer/colony status in the AI economy. The compounding nature of AI capabilities -- where early advantages in data, compute, and talent generate further advantages -- creates path dependencies that are extremely difficult to reverse.

Ethics and regulation (Dimension): The regulatory bloc formation (US permissive innovation, EU rights-based regulation, China state-directed control) creates governance gaps at the interfaces. AI systems that cross jurisdictional boundaries -- which most consequential AI systems do -- face regulatory arbitrage, enforcement gaps, and norm conflicts. The absence of mutual recognition frameworks for AI safety testing, auditing, and certification means that regulatory divergence becomes a trade barrier and a source of geopolitical friction.

Economic models (Dimension): The AI-driven economic transformation threatens the development model that propelled East Asian and South Asian growth over the past 50 years. The "manufacturing ladder" -- where countries climb from low-value to high-value manufacturing, accumulating capital and skills -- may be disrupted by AI-enabled automation that eliminates the labor cost advantage of developing nations. Similarly, the services outsourcing model (India, Philippines) faces an existential challenge. Countries that cannot substitute AI productivity gains for labor cost advantages will see their growth models collapse.

Education and training (Dimension): The talent competition intensifies as AI capabilities grow. By 2030, the estimated global shortfall of AI specialists (researchers, engineers, ethics experts, policy professionals) may reach several million. The countries and institutions that train this talent will have disproportionate influence over AI's development trajectory. Brain drain from the Global South accelerates as advanced nations compete to attract AI talent from developing countries, further weakening the latter's capacity to build sovereign AI capabilities.

Actionable Insights

For governments in advanced economies:

  • Prepare for the diminishing effectiveness of export controls as China achieves partial semiconductor self-sufficiency. Develop post-export-control strategies that emphasize sustained innovation advantage rather than denial.
  • Invest in international AI governance frameworks before a crisis forces hasty action. The window for proactive norm-setting (2026-2030) is narrow.
  • Address the energy demands of AI infrastructure through strategic energy policy. AI compute is not just a technology issue -- it is an energy, climate, and land-use issue.

For governments in the Global South:

  • Form regional AI coalitions to bargain collectively with both the US-led and China-led technology blocs. Individual small and medium nations have minimal leverage; coalitions can negotiate better terms.
  • Prioritize AI applications in sectors with the highest local impact -- agriculture, healthcare, education, governance -- over prestige projects in frontier research.
  • Develop data governance frameworks that ensure local data generates local value. The extractive model, where data flows out and algorithmic products flow in, must be resisted through policy, not just rhetoric.
  • Invest in AI literacy at scale. The most effective long-term strategy against digital colonialism is a population that can critically evaluate, adapt, and eventually build AI systems.

For international organizations:

  • Pursue plurilateral agreements among willing nations rather than waiting for universal consensus. A "coalition of the willing" approach to AI safety, military AI norms, and AI governance can establish standards that later expand.
  • Create AI capacity-building programs that transfer not just technology but institutional knowledge -- how to regulate AI, how to audit AI systems, how to build AI research institutions.
  • Develop shared AI safety testing and evaluation frameworks that can function across regulatory jurisdictions, reducing the risk of regulatory fragmentation becoming a vector for unsafe AI deployment.

For the private sector:

  • Multinational companies should build compliance infrastructure for three regulatory regimes simultaneously. The cost of retrofit is far higher than the cost of designing for multi-jurisdictional compliance from the start.
  • Assess geopolitical risk to AI supply chains on a 5-10 year horizon. The semiconductor supply chain disruption risk (Taiwan contingency, further export controls, trade conflicts) is not a tail risk -- it is a central planning assumption.
  • Companies deploying AI in the Global South should invest in local adaptation, local talent, and local data governance -- not only for ethical reasons but because locally adapted AI systems perform better and face lower regulatory and reputational risk.

Sources & Evidence

  1. CSIS -- "Choking Off China's Access to the Future of AI" -- analysis of semiconductor export control effectiveness and China's domestic responses. csis.org
  2. RAND Corporation -- Research on AI military applications, autonomous weapons governance, and escalation dynamics. rand.org
  3. IISS -- "AI and the Future of Warfare" -- multi-power military AI capabilities assessment. iiss.org
  4. Carnegie Endowment -- "AI and the Global South" -- analysis of AI surveillance exports and developing nation AI capacity gaps. carnegieendowment.org
  5. Brookings Institution -- Digital sovereignty analysis and the geopolitics of AI governance. brookings.edu
  6. EU AI Act -- Full regulatory text and implementation timeline. artificialintelligenceact.eu
  7. IMF -- Analysis of AI's differential economic impact across country income levels. imf.org
  8. UN AI Advisory Body -- Recommendations for international AI governance structures. un.org
  9. CFR -- Background on US-China technology competition trajectory. cfr.org
  10. Foreign Affairs -- Analysis of China's AI development strategy and trajectory. foreignaffairs.com
  11. Chatham House -- "AI, Geopolitics and Global Governance" -- assessment of international governance gaps. chathamhouse.org
  12. White House -- The Stargate Project announcement and US compute infrastructure strategy. whitehouse.gov