Security & Conflict: Long-term (2033-2046)
Current State
The long-term horizon for AI in security and conflict extends into a period of profound transformation in which the relationship among human agency, machine capability, and organized violence is fundamentally restructured. The trends that emerged in 2024-2028 and consolidated in 2028-2033 reach maturity, producing a security environment qualitatively different from any in human history. This analysis projects forward from the trajectories established in the short- and medium-term analyses, identifying the most probable developments, branching scenarios, and structural transformations.
By the early 2030s, the foundational elements are in place: autonomous weapons systems are normalized in military arsenals worldwide, AI-driven cyber operations constitute a continuous domain of interstate competition, the information environment is thoroughly polluted by synthetic content, and surveillance capabilities have expanded to the point where privacy, as a practical matter, has been redefined. The question for 2033-2046 is not whether these technologies will be pervasive -- they will be -- but whether humanity develops the institutional, legal, and social frameworks to manage them, or whether the security landscape degrades into ungovernable complexity.
Key Drivers
1. Artificial general intelligence (AGI) approach and arrival: The most consequential variable for long-term security is whether and when AI systems approach or achieve general-purpose intelligence. If AGI or near-AGI systems emerge by 2033-2040 (as some AI researchers and industry leaders project), the security implications are transformative. Such systems could independently develop novel military strategies, discover entirely new categories of weapons, conduct strategic deception campaigns indistinguishable from human diplomacy, and manage military operations of a complexity that exceeds any human commander's capacity. Even if full AGI does not arrive within this window, the continuing advancement of narrow AI systems across all security-relevant domains produces compounding effects.
2. Autonomous systems ecosystem maturation: By 2033-2046, autonomous systems are no longer individual weapons but integrated ecosystems. Military forces operate networks of autonomous platforms across air, sea, land, space, and cyber domains that coordinate through AI command-and-control systems. The "kill chain" -- the sequence from detecting a target to engaging it -- operates at machine speed with human oversight increasingly limited to strategic-level authorization rather than tactical-level decisions. The cost curves for autonomous weapons continue to decline, eventually reaching commodity pricing for basic systems.
3. Quantum computing intersection: The intersection of quantum computing and AI in the 2033-2040 timeframe has potentially revolutionary security implications. Quantum computers capable of breaking current public-key encryption would render existing communications security infrastructure obsolete, forcing a wholesale transition to post-quantum cryptography (the hybrid key-establishment pattern commonly proposed for that transition is sketched after this list of drivers). AI systems running on quantum hardware could achieve capabilities in optimization, pattern recognition, and code-breaking that are qualitatively beyond classical computation. Nations that achieve quantum advantage in AI-security applications could gain a decisive strategic edge.
4. Biotechnology convergence: The convergence of AI with biotechnology creates new categories of security threat. AI systems capable of designing novel pathogens, optimizing their transmissibility and lethality, and guiding synthesis pathways represent a biological weapons risk that dwarfs that posed by traditional bioweapons programs. The dual-use challenge is acute: the same AI capabilities that accelerate vaccine development, drug discovery, and pandemic preparedness also lower the barrier to biological weapons development. By 2040, desktop synthesis capabilities combined with AI design tools may make this threat accessible to small groups or even individuals.
5. Space domain AI competition: The militarization of space accelerates with AI-controlled satellite networks, autonomous space-based sensors, and potentially autonomous anti-satellite systems. AI-managed satellite constellations for intelligence, surveillance, and reconnaissance (ISR) provide persistent global coverage. Space-based AI systems for missile defense, communication jamming, and kinetic operations become operational capabilities for major powers. The lack of effective space arms control frameworks -- already evident in 2025 -- becomes a critical governance gap.
6. Climate security intersection: Climate change as a threat multiplier intersects with AI security capabilities. Resource scarcity, climate migration, and environmental disasters create conditions that increase conflict risk. AI systems are deployed for border surveillance, resource management, and conflict prediction in climate-stressed regions. The weaponization of climate data and modeling -- using AI to predict and exploit adversary vulnerabilities to climate impacts -- becomes a dimension of strategic competition.
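On the quantum transition noted in driver 3, the migration mechanics are worth making concrete. The widely recommended hedge is hybrid key establishment: derive each session key from both a classical exchange and a post-quantum key encapsulation, so that traffic recorded today stays protected unless both are eventually broken. Below is a minimal Python sketch of that pattern, using the `cryptography` package's X25519 for the classical half; the post-quantum half is a stubbed placeholder (an assumption for illustration, where a deployed system would use a standardized KEM such as ML-KEM):

```python
# Hybrid key-derivation sketch: classical X25519 plus a placeholder post-quantum
# KEM secret. The PQ half is stubbed with random bytes purely for illustration;
# a deployed system would substitute a standardized KEM such as ML-KEM.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def pq_kem_shared_secret() -> bytes:
    """Stand-in for a post-quantum KEM shared secret (assumption, not a real KEM)."""
    return os.urandom(32)


# Classical half: an ordinary X25519 Diffie-Hellman exchange (both parties
# simulated in one process for brevity).
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())

# Post-quantum half: stubbed shared secret standing in for a real encapsulation.
pq_secret = pq_kem_shared_secret()

# Combine both halves: the session key stays secret unless BOTH are broken.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-handshake-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```

The design point is the final key-derivation step: because the session key depends on the concatenation of both secrets, a future quantum break of X25519 alone does not retroactively expose communications.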
Projections
Scenario A -- Managed Competition (probability: 25-30%): International frameworks for AI in security emerge, analogous to (though less robust than) nuclear arms control. A convention on lethal autonomous weapons systems (LAWS) establishes norms including mandatory human control over lethal decisions above a certain threshold, prohibited categories of autonomous weapons (e.g., fully autonomous anti-personnel systems in urban environments), transparency and verification mechanisms, and prohibitions on AI-enabled autonomous weapons of mass destruction. Cyber norms mature through a series of crises that motivate agreement, resulting in binding commitments to refrain from AI-enabled attacks on civilian critical infrastructure, healthcare systems, and democratic processes. The information environment stabilizes partially through universal adoption of content authentication standards. This scenario produces a dangerous but manageable security environment.
Scenario B -- Fragmented Competition (probability: 40-45%): No comprehensive international framework emerges. Instead, a patchwork of bilateral agreements, regional arrangements, and informal norms provides partial constraint. Great powers maintain rough parity in AI military capabilities, creating a form of deterrence based on mutual vulnerability. Autonomous weapons are widely deployed but major escalatory incidents are avoided through luck as much as design. Cybersecurity remains a persistent challenge with periodic crises but without catastrophic infrastructure collapse. The information environment remains degraded but societies adapt through technological tools (authentication), institutional responses (verification organizations), and cultural shifts (widespread skepticism and verification norms). This is the most probable scenario -- messy, dangerous, but muddling through.
Scenario C -- Catastrophic Failure (probability: 15-20%): A major autonomous weapons incident causes mass casualties -- an autonomous swarm targeting the wrong population, an AI-launched cyberattack that cascades into lethal physical infrastructure failure, or an AI-driven escalation spiral between nuclear powers. This scenario could also manifest through AI-enabled biological weapons deployment or a deepfake-triggered military confrontation between major powers. The catastrophe may ultimately produce stronger governance frameworks (much as Hiroshima and Nagasaki motivated nuclear arms control), but at enormous human cost.
Scenario D -- Authoritarian AI Lock-in (probability: 10-15%): AI surveillance and autonomous enforcement capabilities enable authoritarian regimes to achieve unprecedented social control that becomes effectively irreversible. The combination of pervasive surveillance, predictive policing, AI-driven censorship, and autonomous enforcement creates a system that cannot be challenged from within. This scenario may apply to individual nations rather than globally, but the demonstration effect could influence governance models worldwide. Democratic nations may adopt elements of this approach under security justifications, creating hybrid systems that maintain democratic form while hollowing out democratic substance.
Impact Assessment
Transformation of warfare: By 2040-2046, the character of war has transformed more fundamentally than at any point since the introduction of nuclear weapons. Autonomous systems conduct the majority of tactical military operations. Human military personnel increasingly serve in oversight, strategic, and maintenance roles rather than direct combat. The ethical and legal frameworks governing warfare, developed over centuries of human conflict, face existential challenges. The distinction between combatant and civilian, central to international humanitarian law, becomes harder to enforce when autonomous systems make targeting decisions. The concept of surrender -- how does one surrender to a drone swarm? -- requires reconsideration. The psychological experience of war changes for both those who fight (increasingly through screens) and those who are targeted (by machines).
Permanent surveillance infrastructure: The surveillance infrastructure built during this period becomes a permanent feature of the built environment. Sensors embedded in urban infrastructure, satellite surveillance from orbit, communication metadata analysis, financial transaction monitoring, and biometric identification create a comprehensive record of human activity. The question shifts from whether this data exists to who controls it and under what constraints. In the best case, robust legal and institutional frameworks govern access. In the worst case, this infrastructure enables totalitarian control that previous authoritarian regimes could only dream of -- not through brutality but through omniscience.
Epistemic environment restructuring: The long-term impact of AI on the information environment may be the most consequential security effect, exceeding even autonomous weapons. If societies cannot maintain shared epistemological foundations -- common agreement on basic facts, trust in institutions that verify reality, and confidence that information can be authenticated -- then democratic governance, diplomatic communication, treaty verification, and public accountability all degrade. The 2033-2046 period determines whether content authentication technologies, institutional adaptations, and cultural norms restore sufficient epistemic security or whether the post-truth condition becomes permanent.
Non-state actor empowerment: The long-term proliferation of AI security capabilities to non-state actors -- terrorist groups, criminal organizations, ideological movements, individual actors -- fundamentally challenges the state monopoly on organized violence that has been the foundation of international order since Westphalia. When a small group can deploy autonomous weapons, conduct devastating cyberattacks, create mass-scale disinformation campaigns, and potentially develop biological agents with AI assistance, the asymmetry between state and non-state capabilities narrows. This does not eliminate state power but creates a more complex, less controllable security environment.
Human-machine command relationship: The ultimate long-term question is the role of human judgment in decisions about violence. The trajectory points toward decreasing human involvement as AI systems become faster, more capable, and more integrated into military operations. The philosophical and practical question -- can machines be entrusted with decisions about life and death? -- becomes not a hypothetical ethical exercise but an operational reality that requires continuous resolution. The 2033-2046 period likely produces a spectrum of approaches: some nations maintaining strict human control requirements, others delegating extensively to machines, and most occupying various positions between these poles.
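What a "strict human control requirement" means operationally can be made concrete. The following minimal Python sketch shows the kind of policy gate that Scenario A's threshold norm implies; all names, thresholds, and categories here are hypothetical illustrations, not drawn from any fielded system or treaty text. Prohibited categories are never automated, engagements above a severity threshold are held for human authorization, and only the remainder proceeds at machine speed:

```python
# Illustrative policy gate: an automated pipeline defers to a human above a
# configured threshold. All names and values are hypothetical; this is a
# governance sketch, not a description of any fielded system.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    AUTO_PROCEED = "auto_proceed"      # within delegated machine authority
    HOLD_FOR_HUMAN = "hold_for_human"  # exceeds the autonomy threshold
    ABORT = "abort"                    # prohibited category, never automated


@dataclass
class EngagementRequest:
    target_class: str             # hypothetical label, e.g. "radar_site"
    estimated_severity: float     # 0.0-1.0 modeled effect on persons/infrastructure
    in_prohibited_category: bool  # e.g. anti-personnel use in an urban environment


AUTONOMY_THRESHOLD = 0.3  # hypothetical value; in Scenario A this would be treaty-set


def gate(request: EngagementRequest) -> Decision:
    """Apply the human-control policy before any engagement step can run."""
    if request.in_prohibited_category:
        return Decision.ABORT
    if request.estimated_severity > AUTONOMY_THRESHOLD:
        return Decision.HOLD_FOR_HUMAN
    return Decision.AUTO_PROCEED


if __name__ == "__main__":
    print(gate(EngagementRequest("radar_site", 0.10, False)))  # AUTO_PROCEED
    print(gate(EngagementRequest("vehicle", 0.70, False)))     # HOLD_FOR_HUMAN
    print(gate(EngagementRequest("crowd", 0.90, True)))        # ABORT
```

Under such a gate, human involvement shifts from per-engagement tactical decisions to setting the threshold and adjudicating the held queue. The spectrum of national approaches described above reduces, in practice, to where that threshold is set and how broadly the prohibited categories are drawn -- which is also where any verification regime would need to focus.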
Cross-Dimensional Effects
Geopolitics: AI security capabilities become the primary determinant of geopolitical power by 2040. The traditional pillars of national power -- population, economic output, natural resources, conventional military forces -- are all mediated through AI capability. Nations that lead in AI security applications exercise disproportionate influence; those that lag face strategic irrelevance or dependence. Alliance structures reorganize around AI interoperability and capability sharing. The concept of sovereignty itself is tested by transnational AI threats (cyber operations, information warfare, autonomous weapons proliferation) that do not respect borders.
Ethics and regulation: The long-term ethical challenge is whether human moral frameworks can adapt to govern AI systems that operate beyond human comprehension and speed. Traditional ethics of war -- just war theory, international humanitarian law, the Geneva Conventions -- assumed human moral agents making decisions. The delegation of lethal decisions to machines requires either a fundamental extension of these frameworks or the development of entirely new ethical paradigms. The gap between ethical theory and operational practice is likely to remain wide, with philosophers and lawyers attempting to govern systems that have already been deployed and used.
Digital divide: The security digital divide becomes a matter of survival. Nations and populations without AI-powered defenses -- cyber, informational, kinetic -- are not merely disadvantaged but existentially vulnerable. This creates new dependencies: smaller nations must align with AI-capable powers for security, accepting reduced sovereignty in exchange for protection. Within nations, communities with access to AI security tools (identity verification, deepfake detection, cyber defense) are safer than those without, adding a security dimension to existing inequality.
Cultural identity: Over the 2033-2046 timeframe, the sustained impact of AI-driven information warfare reshapes cultural identity. Cultures that develop resilience to synthetic media and disinformation -- through education, institutional trust, and technological tools -- maintain coherence. Those that do not may experience deepening fragmentation, erosion of shared narratives, and vulnerability to external manipulation. The cultural dimension of security -- the ability of a society to maintain enough shared understanding to function collectively -- becomes recognized as a strategic asset requiring active cultivation and defense.
Actionable Insights
For the international community:
- Pursue a LAWS Convention with urgency, recognizing that the window for preemptive regulation is closing. Even imperfect agreements create norms that constrain behavior: the Chemical Weapons Convention and the Biological Weapons Convention, whatever their gaps, have established a stigma against use that shapes state behavior. A LAWS equivalent should aim for similar norm-setting, even if verification is challenging.
- Develop international crisis communication protocols for autonomous systems incidents. When AI systems from different nations interact in contested spaces (airspace, cyberspace, maritime zones), miscalculation risks require dedicated communication channels and de-escalation procedures.
- Invest in global public goods for information security: open-source deepfake detection tools, universal content authentication infrastructure, and support for independent media and fact-checking organizations, particularly in the Global South.
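To indicate what "universal content authentication infrastructure" involves mechanically: standards such as C2PA bind a signed provenance manifest to a piece of media, so that later alteration of either the content or its claimed origin becomes detectable. The deliberately simplified Python sketch below shows the core sign-and-verify step using Ed25519 from the `cryptography` package; it illustrates the principle only and is not the C2PA manifest format, which is considerably richer:

```python
# Simplified provenance-manifest sketch: sign (content hash + metadata), then
# verify on receipt. Illustrates the principle behind standards like C2PA;
# the real C2PA manifest format is considerably richer.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(content: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Sign a manifest binding the content hash to its provenance metadata."""
    payload = {"sha256": hashlib.sha256(content).hexdigest(), **metadata}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(body).hex()}


def verify_manifest(content: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Check that neither the content nor its claimed provenance was altered."""
    payload = manifest["payload"]
    if payload["sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content changed after signing
    body = json.dumps(payload, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), body)
        return True
    except InvalidSignature:
        return False  # manifest forged or tampered with


key = Ed25519PrivateKey.generate()
image = b"...media bytes..."  # stand-in for actual media content
manifest = make_manifest(image, {"producer": "Example Newsroom"}, key)

print(verify_manifest(image, manifest, key.public_key()))         # True
print(verify_manifest(image + b"x", manifest, key.public_key()))  # False
```

The cryptography is the easy part. The open problems -- who issues signing keys, who vouches for signers, and which institutions the public trusts to do either -- are institutional, which is why such tooling must be paired with support for independent media and verification organizations.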
For national governments:
- Conduct serious strategic reviews of the implications of autonomous systems for nuclear deterrence. The interaction between autonomous conventional weapons and nuclear weapons -- scenarios where conventional AI systems threaten nuclear command and control, or where autonomous systems inadvertently cross nuclear thresholds -- is insufficiently analyzed and potentially catastrophic.
- Develop legal frameworks that assign clear accountability for autonomous weapons decisions before, not after, a mass casualty incident. Retrospective legislation in response to tragedy is the worst-case approach.
- Invest in AI safety and alignment research as a national security priority, not merely a commercial interest. AI systems that behave unpredictably in adversarial environments pose risks to their operators as well as their targets.
- Build societal resilience against AI-enabled threats through education, institutional strengthening, and support for civil society organizations that can serve as checks on governmental and corporate AI deployment.
For the technology sector:
- Accept and internalize that AI security applications carry responsibilities beyond commercial interest. The companies building foundation models, autonomous systems components, and surveillance technologies are not neutral actors. Establishing and enforcing red lines -- capabilities that will not be developed or sold regardless of commercial demand -- is an ethical imperative.
- Invest in defensive AI capabilities as deliberately and ambitiously as in offensive ones. The commercial incentives favor offense (more customers, higher margins); the societal need favors defense.
- Develop and enforce robust export controls for AI security capabilities that account for the dual-use nature of the technology and the risk of proliferation to authoritarian regimes and non-state actors.
For civil society and researchers:
- Build independent capacity for monitoring, analyzing, and reporting on AI in security applications. Democratic accountability requires an informed public, which in turn requires independent expertise outside government and industry.
- Develop and test governance frameworks now, before the technologies are fully mature. Waiting until autonomous weapons cause a catastrophe to develop governance is the nuclear weapons playbook -- it works, eventually, but at enormous cost.
- Focus on resilience as well as prevention. Some AI security risks cannot be eliminated; societies must be able to absorb and recover from AI-enabled attacks, disinformation campaigns, and surveillance abuses.
Sources & Evidence
- US DoD AI Strategy (2023) -- Long-term framework for military AI integration across all domains. defense.gov
- SIPRI Emerging Military Technologies -- Research on autonomous weapons trends, proliferation trajectories, and governance options. sipri.org
- Carnegie Endowment -- AI and Catastrophic Risk -- Analysis of AI contributions to existential and catastrophic security risks, including autonomous weapons and biological threats. carnegieendowment.org
- CSIS -- AI and the Future of Conflict -- Center for Strategic and International Studies analysis of how AI transforms warfare, deterrence, and strategic competition. csis.org
- RAND Corporation -- Long-range research on autonomous weapons doctrine, AI escalation dynamics, and cybersecurity strategy. rand.org
- NTI -- Bio + AI Risks -- Nuclear Threat Initiative analysis of convergent biological and AI security risks. nti.org
- UN Secretary-General's New Agenda for Peace (2023) -- Framework for addressing emerging technology threats including autonomous weapons and cyber operations. un.org
- WEF Global Risks Report 2024 -- Long-term risk assessment including AI-enabled threats over the 10-year horizon. weforum.org
- Future of Humanity Institute, Oxford -- Research on long-term AI governance, existential risk, and autonomous systems alignment. fhi.ox.ac.uk
- IISS Military Balance -- Annual assessment of global military capabilities and technology integration trends. iiss.org
- Stop Killer Robots / Human Rights Watch -- Advocacy for LAWS regulation and documentation of autonomous weapons proliferation. stopkillerrobots.org | hrw.org
- C2PA -- Content authentication standard for combating deepfakes and synthetic media at scale. c2pa.org
- Brookings Institution -- Research on deepfakes, information warfare, and democratic resilience. brookings.edu