Security & Conflict: Medium-term

2028–2033 | Transformations underway, accelerating | Systems & Institutions

Current State

By 2028, the trends observable in 2026 have matured from emergent phenomena into structural features of the global security environment. Autonomous weapons systems are deployed in active conflicts. AI-driven cyber operations constitute a persistent, low-grade form of interstate conflict. Deepfake technology has fundamentally altered the information landscape, and surveillance capabilities powered by AI have expanded into domains previously considered private. The medium-term period (2028-2033) represents the phase where initial deployments scale, norms either crystallize or fail to form, and second-order consequences become visible.

LAWS normalization: By 2028, autonomous and semi-autonomous weapons have been used in at least several conflicts beyond the initial cases in Ukraine, Gaza, and Libya. Drone swarms -- coordinated groups of 50-500 autonomous UAVs capable of collaborative search, identification, and engagement -- have moved from demonstration to operational deployment by at least 3-5 nations. The Replicator Initiative and similar programs in China, Israel, and other nations have produced tens of thousands of attritable autonomous platforms. The unit cost of a basic autonomous attack drone has fallen below $2,000, creating proliferation dynamics similar to those seen with assault rifles and IEDs in earlier decades. Non-state actors, including insurgent groups and transnational criminal organizations, have acquired or improvised autonomous attack capabilities.

AI cyber warfare as persistent conflict: The distinction between "peacetime" and "wartime" cyber operations has dissolved. Major powers maintain continuous AI-driven cyber campaigns against adversary infrastructure -- pre-positioning access, mapping vulnerabilities, and conducting intelligence collection. AI systems discover zero-day vulnerabilities at a rate that overwhelms patch cycles. Automated exploit generation and deployment means that the window between vulnerability discovery and mass exploitation has compressed from weeks to hours or minutes. Critical infrastructure operators -- energy, water, healthcare, financial systems -- face a constant state of AI-assisted siege.

Post-truth consolidation: The information environment of 2028-2033 is characterized by what researchers have termed "epistemic collapse" in vulnerable populations and contexts. The combination of hyper-realistic deepfakes, AI-generated text indistinguishable from human writing, synthetic social media personas operated at scale, and AI-powered micro-targeting of disinformation has created an environment where large segments of populations cannot reliably distinguish authentic from fabricated content. Content provenance technologies (C2PA signed manifests, invisible watermarking, blockchain verification) exist but have not achieved universal adoption, leaving significant gaps exploitable by bad actors.
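The core idea behind provenance standards of this kind is to cryptographically bind a hash of the content to signed claims about its origin, so that any edit to either invalidates the binding. The sketch below illustrates that mechanism only; it is not the C2PA format (real C2PA manifests use X.509 certificate signatures embedded in the asset, whereas this toy uses a stdlib HMAC with an assumed shared key as a stand-in).

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a publisher's signing key (assumption)

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind a content hash and provenance claims together, then sign the pair.
    Real C2PA uses certificate-based signatures, not HMAC."""
    record = {"sha256": hashlib.sha256(content).hexdigest(), "claims": claims}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, manifest: dict) -> bool:
    """Tampering with either the content or the claims breaks verification."""
    body = {k: v for k, v in manifest.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["sig"])
            and hashlib.sha256(content).hexdigest() == manifest["sha256"])

video = b"raw video bytes"
m = make_manifest(video, {"capture_device": "cam-01", "captured": "2028-05-01"})
assert verify(video, m)          # authentic content passes
assert not verify(b"edited", m)  # any edit to the content breaks the binding
```

The "significant gaps" noted above follow directly from this design: verification only helps where capture devices sign content and platforms check signatures, so unsigned content remains unverifiable rather than provably fake.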

Surveillance industrialization: AI surveillance has moved from targeted deployment to mass-scale systems in authoritarian and hybrid regimes, and from contested expansion to routine use in many democracies. Real-time facial recognition in public spaces is deployed in major cities across 50+ countries. Predictive policing algorithms, despite documented bias issues, are used by law enforcement agencies in hundreds of jurisdictions. Private sector data collection, combined with government access mechanisms, creates surveillance capabilities that exceed anything previously imagined -- not through centralized totalitarian systems but through the aggregation of commercial data flows that governments can access through legal compulsion or partnership.

Key Drivers

1. AI capability acceleration: The AI systems of 2028-2033 are substantially more capable than those of 2025-2026. Multimodal models that can reason across text, imagery, video, sensor data, and code enable military and intelligence applications that were previously impossible. AI systems can analyze satellite imagery in real-time, correlate signals intelligence with human intelligence, generate operational plans, and control complex multi-domain operations. The gap between what AI can do and what humans can oversee widens.

2. Autonomous systems arms race: The proliferation of LAWS triggers classic arms race dynamics. Nations that do not develop autonomous weapons face strategic disadvantage against those that do. This creates a security dilemma where each nation's defensive investment appears threatening to others, driving further escalation. The absence of arms control agreements means there are no brakes on this cycle.

3. Economic incentive for cyber offense: The economics of AI-assisted cybercrime are overwhelmingly favorable for attackers. AI-generated phishing, automated vulnerability exploitation, and deepfake-assisted fraud scale at near-zero marginal cost. The global cost of cybercrime, estimated at $8-10 trillion annually by 2025, continues to escalate as AI tools lower barriers to entry for less-skilled criminal actors.
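The asymmetry described above can be made concrete with a back-of-envelope campaign model. Every number below is an assumed parameter chosen for illustration, not an empirical estimate; the point is only that when marginal cost per message approaches zero, even tiny conversion rates yield large returns.

```python
# Illustrative attacker economics -- all parameters are assumptions.
messages = 1_000_000        # AI-generated phishing messages sent
cost_per_message = 0.001    # dollars: near-zero marginal generation/delivery cost
conversion_rate = 0.0005    # assumed fraction of recipients defrauded
revenue_per_victim = 500.0  # assumed dollars per successful fraud

cost = messages * cost_per_message                         # total campaign cost
revenue = messages * conversion_rate * revenue_per_victim  # expected proceeds
print(f"cost=${cost:,.0f} revenue=${revenue:,.0f} ROI={revenue / cost:,.0f}x")
# → cost=$1,000 revenue=$250,000 ROI=250x
```

Under these assumptions a thousand-dollar campaign returns a quarter of a million; defenders, by contrast, must pay to inspect every message, which is the structural imbalance the driver describes.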

4. Erosion of deterrence frameworks: Traditional deterrence -- whether nuclear, conventional, or cyber -- relies on attribution, proportionality, and credible response. AI complicates all three. AI-launched cyberattacks can obscure attribution through sophisticated false-flag operations. The speed of autonomous weapons engagement may outpace proportionality calculations. And the proliferation of capabilities to non-state actors undermines state-centric deterrence models.

5. Regulatory fragmentation: By 2028-2033, the global regulatory landscape for AI in security is fragmented. The EU has implemented relatively robust civilian AI regulations but carved out national security exceptions. The US relies on executive orders and sector-specific guidance rather than comprehensive legislation. China has implemented AI regulations focused on domestic social control while aggressively deploying AI for military advantage. No international framework equivalent to arms control treaties governs AI in the security domain. This fragmentation creates regulatory arbitrage opportunities and undermines collective security.

Projections

Autonomous conflict escalation risk: The probability of an unintended escalation involving autonomous systems increases significantly in this period. Scenario examples include: autonomous defense systems from two nations engaging each other at machine speed without human authorization; an autonomous reconnaissance drone being shot down, triggering escalation cycles; or autonomous cyber systems launching retaliatory attacks based on misattributed initial attacks. The absence of communication channels, confidence-building measures, or agreed "rules of the road" for autonomous systems interaction makes such scenarios more dangerous than analogous Cold War nuclear close calls, which at least operated within established doctrinal frameworks.
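The machine-speed escalation scenario can be illustrated with a toy model: two autonomous response policies that each answer the adversary's last observed action one level higher. The response scale, ceiling, and "+1 retaliation" rule are all assumptions of the sketch, not a claim about any deployed doctrine; the point is that without a human pause or an explicit de-escalation rule, any nonzero trigger ratchets deterministically to maximum force.

```python
# Toy escalation spiral between two autonomous policies (all parameters assumed).
MAX_LEVEL = 10  # assumed ceiling of the response scale

def retaliate(observed: int) -> int:
    """Respond one level above the adversary's last action -- no human review."""
    return min(observed + 1, MAX_LEVEL)

a, b = 1, 0            # a misattributed sensor reading starts side A at level 1
history = [(a, b)]
while a < MAX_LEVEL or b < MAX_LEVEL:
    b = retaliate(a)   # side B's system answers at machine speed
    a = retaliate(b)   # side A's system answers before any human can intervene
    history.append((a, b))

print(history[:3], "->", history[-1])
# A single level-1 trigger reaches the ceiling in a handful of exchanges.
```

A communication channel or confidence-building measure of the kind the paragraph calls for would amount to breaking this loop: inserting a step where `retaliate` can return a lower level, or a delay long enough for human authorization.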

AI-enabled biological and chemical threats: By 2030-2033, AI systems will be capable of assisting in the design of novel biological agents and chemical compounds. While current AI models have some safeguards against providing synthesis instructions for dangerous pathogens, the proliferation of open-source models with fewer guardrails, combined with advancing capabilities in protein structure prediction and molecular design, creates emerging bioweapons risks. Intelligence agencies have flagged this as a growing concern.

Deepfake-driven diplomatic crises: At least one significant international incident triggered by deepfake content is probable in this period. This could involve fabricated footage of a military provocation, a forged diplomatic communication, or a synthetic audio recording of a head of state making inflammatory statements. The "fog of war" will increasingly include AI-generated information designed to mislead adversaries, allies, and domestic populations simultaneously.

Surveillance backlash and adaptation: In democratic nations, expanded AI surveillance will trigger significant legal and political backlash by 2030-2033. Court challenges to facial recognition, predictive policing, and mass data collection will reach supreme and constitutional courts. Some jurisdictions will impose meaningful restrictions; others will entrench surveillance capabilities through legislation. A patchwork of surveillance governance will emerge, with significant variation even within individual nations.

Cybersecurity workforce crisis: The cybersecurity workforce gap, already estimated at 3.5 million unfilled positions globally in 2024, will widen further as the attack surface expands. AI will partially compensate through automated threat detection and response, but the need for human expertise in strategic decision-making, incident response, and policy will outpace supply. This creates particular vulnerability for smaller nations, companies, and institutions that cannot compete for scarce talent.

Impact Assessment

Strategic stability deterioration: The medium-term period is likely the most dangerous for strategic stability since the early Cold War. Multiple new weapon categories (autonomous drones, AI-enabled cyber weapons, potentially AI-assisted bioweapons) are being deployed simultaneously without established norms, doctrines, or control regimes. The speed of AI-enabled conflict compresses decision-making timelines, increasing the risk of miscalculation. Unlike the Cold War, where two primary adversaries developed bilateral understanding, the multipolar AI arms race involves dozens of actors with varying capabilities, doctrines, and risk tolerances.

Civilian protection erosion: International humanitarian law (IHL) -- the laws of war -- was designed for human decision-makers. The principles of distinction (discriminating between combatants and civilians), proportionality (ensuring military advantage justifies collateral damage), and precaution (taking feasible measures to minimize civilian harm) all assume human judgment at critical decision points. As autonomous systems make or contribute to targeting decisions at scale and speed, the practical application of IHL degrades. Accountability gaps widen -- when an autonomous system kills civilians, the chain of responsibility (programmer, commander, manufacturer, algorithm) is legally ambiguous.

Democratic legitimacy under pressure: The combination of deepfake-polluted information environments, AI-powered surveillance, and autonomous security systems creates conditions where democratic accountability becomes harder to exercise. Citizens cannot hold governments accountable for AI-enabled operations they cannot see, understand, or verify. The secrecy inherent in national security applications of AI resists democratic oversight.

Personal security transformation: By 2030-2033, individuals face a security environment where their voice can be cloned from seconds of audio, their likeness can be placed in fabricated video, their personal data can be aggregated into detailed profiles for targeting (whether for advertising, manipulation, or physical threat), and their digital communications are subject to AI-powered interception and analysis. The concept of personal security expands from physical safety to include informational, digital, and epistemic security.

Cross-Dimensional Effects

Geopolitics: AI-enabled security competition becomes the defining feature of great power relations. The US-China rivalry centers increasingly on AI capability, with military applications as the most high-stakes domain. Middle powers (UK, France, India, Japan, South Korea, Australia, Israel) develop niche AI security capabilities and navigate alliance structures shaped by AI access and interoperability. Smaller nations face choices about whose AI ecosystem to join, with security implications.

Ethics and regulation: The medium-term period is where the success or failure of AI governance becomes apparent. If meaningful international norms on LAWS, cyber operations, and AI surveillance emerge by 2030-2033, the long-term trajectory may be manageable. If the governance vacuum persists, the risks compound. The gap between the pace of AI capability development and the pace of governance development is the critical variable.

Digital divide: AI security capabilities become a new axis of global inequality. Nations and populations without AI-powered cyber defenses, counter-surveillance tools, and deepfake detection capabilities are increasingly vulnerable. The security digital divide mirrors and reinforces the economic digital divide, with the most vulnerable populations least able to protect themselves.

Cultural identity: Sustained information warfare degrades social trust, cultural cohesion, and shared identity narratives. Societies that were already polarized experience deepening fragmentation as AI-powered disinformation exploits existing fault lines with unprecedented precision and scale. The concept of a shared public discourse -- essential for democratic culture -- becomes harder to sustain.

Actionable Insights

For governments and international institutions:

  • Establish an international AI security forum (analogous to the Nuclear Security Summit process) that brings together major AI powers for confidence-building measures, transparency agreements, and crisis communication protocols for autonomous systems incidents.
  • Develop and deploy national AI cyber defense capabilities at critical infrastructure scale. This requires public-private partnerships, mandatory security standards, and significant investment.
  • Create legal frameworks that assign accountability for autonomous weapons decisions. The ambiguity benefits no one in the long run -- not commanders, not manufacturers, not the public.
  • Invest in content authentication infrastructure (C2PA and similar standards) as a public utility, not just a private-sector initiative.

For defense and security establishments:

  • Maintain meaningful human control over lethal decisions. The competitive pressure to remove humans from the loop must be resisted not for sentimental reasons but because the risks of autonomous escalation exceed the tactical advantages of speed.
  • Develop doctrine for AI-enabled conflict that addresses escalation management, de-escalation protocols, and communication with adversary AI systems.
  • Red-team autonomous systems extensively before deployment. The failure modes of AI in adversarial environments are poorly understood and potentially catastrophic.

For civil society:

  • Build independent technical capacity to audit and evaluate military AI systems. Democratic oversight requires expertise that currently resides almost exclusively within governments and defense contractors.
  • Develop and promote digital literacy curricula that address the AI-transformed information environment. Media literacy designed for the pre-AI era is insufficient.
  • Advocate for transparency requirements that cover AI use in security and law enforcement, even where operational details must be classified.

For individuals and communities:

  • Develop resilience practices for the post-truth information environment: verify claims through multiple independent sources, be skeptical of emotionally charged content that appears suddenly, understand the limitations and capabilities of deepfake technology.
  • Engage with local governance on AI surveillance deployment. Many of the most consequential surveillance decisions are made at municipal and state/provincial levels with minimal public input.
  • Protect personal data as a security matter, not just a privacy preference. Data that seems innocuous can be aggregated into exploitable profiles.
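The aggregation risk in the last point is mechanical, not hypothetical: two datasets that are each harmless on their own can be joined on shared quasi-identifiers to re-identify "anonymous" records, in the style of well-known re-identification studies. The records below are fabricated for illustration.

```python
# A public roll with names, and an "anonymized" record stripped of names.
# Joining on the shared quasi-identifiers (zip, birthdate) re-identifies it.
# All records are fabricated for illustration.
public_roll = [
    {"name": "A. Rivera", "zip": "02139", "birthdate": "1990-03-14"},
    {"name": "B. Okafor", "zip": "02139", "birthdate": "1985-07-02"},
]
anon_records = [
    {"zip": "02139", "birthdate": "1990-03-14", "diagnosis": "condition-X"},
]

# Index the public data by quasi-identifier, then link the anonymous records.
index = {(p["zip"], p["birthdate"]): p["name"] for p in public_roll}
linked = [
    {"name": index[(r["zip"], r["birthdate"])], **r}
    for r in anon_records
    if (r["zip"], r["birthdate"]) in index
]
print(linked)  # the "anonymous" diagnosis now has a name attached
```

Each field leaked separately (a ZIP code here, a birth date there) widens the set of joins an adversary can perform, which is why the text frames data minimization as a security practice rather than a privacy preference.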

Sources & Evidence

  1. US DoD AI Strategy (2023) -- Framework for military AI adoption; Replicator Initiative for autonomous systems at scale. defense.gov
  2. SIPRI Emerging Military Technologies Research -- Stockholm International Peace Research Institute analysis of autonomous weapons proliferation and governance gaps. sipri.org
  3. IISS Military Balance 2024 -- International Institute for Strategic Studies annual assessment of global military capabilities including AI integration. iiss.org
  4. RAND Corporation -- Extensive research on autonomous weapons doctrine, AI escalation risks, and cybersecurity strategy. rand.org
  5. Carnegie Endowment for International Peace -- Research on AI catastrophic risk and governance frameworks for military AI. carnegieendowment.org
  6. Microsoft Digital Defense Report 2024 -- Documented AI-augmented nation-state cyber operations and evolving threat landscape. microsoft.com
  7. WEF Global Risks Report 2024 -- Ranked AI misinformation as top short-term global risk; assessed AI security risks over 2-10 year horizon. weforum.org
  8. Europol LLM Impact Assessment (2023) -- Analysis of how large language models enable cybercrime, fraud, and social engineering at scale. europol.europa.eu
  9. C2PA (Coalition for Content Provenance and Authenticity) -- Industry standard for content authentication and provenance tracking. c2pa.org
  10. Stop Killer Robots / Human Rights Watch -- Ongoing advocacy and documentation of LAWS proliferation and humanitarian impact. stopkillerrobots.org | hrw.org
  11. CISA AI Security Guidance -- US government framework for defending critical infrastructure against AI-enabled threats. cisa.gov
  12. Future of Humanity Institute, Oxford -- Research on AI governance, existential risk, and autonomous systems safety. fhi.ox.ac.uk