Security & Conflict: Short-term (2026-2028)
Current State
The integration of artificial intelligence into security, defense, and conflict domains has accelerated from experimental to operational across multiple fronts. By early 2026, AI is no longer a future consideration for military planners, intelligence agencies, law enforcement, and malicious actors -- it is an active element shaping the threat landscape and defensive posture of nations worldwide.
Lethal Autonomous Weapons Systems (LAWS): The debate over autonomous weapons has shifted from hypothetical to urgent. The US Department of Defense's 2023 AI Strategy committed to accelerating AI adoption across warfighting, with the Replicator Initiative aiming to deploy thousands of autonomous drones and systems by 2025-2026. The program prioritized small, attritable autonomous platforms -- drones that can operate in contested environments with reduced human oversight. Israel's extensive use of AI-assisted targeting systems in Gaza (2023-2024), reportedly including the "Lavender" and "Gospel" systems that generated target recommendations at scale, brought AI-driven targeting from theory to documented battlefield practice. These are decision-support tools rather than fully autonomous weapons, but the reported workflow -- tens of thousands of flagged targets, with human review times measured in seconds -- falls well short of the careful deliberation envisioned by international humanitarian law frameworks.
Turkey's Kargu-2 loitering munition, which a 2021 UN report suggested may have autonomously engaged targets in Libya, remains a landmark case. By 2025-2026, multiple nations including the US, China, Russia, Israel, South Korea, Turkey, and the UK are developing or deploying systems with increasing autonomy in target identification and engagement decisions. Over 30 countries now have active military AI programs.
AI-Powered Cyber Operations: The cybersecurity landscape has been transformed by AI on both offense and defense. Microsoft's Digital Defense Report (2024) documented a sharp increase in AI-augmented cyberattacks, including phishing campaigns generated by large language models that are significantly more convincing than traditional attempts. Nation-state actors -- notably groups affiliated with Russia, China, Iran, and North Korea -- have been observed using AI to accelerate vulnerability discovery, craft targeted social engineering attacks, and automate reconnaissance. Meanwhile, cybersecurity firms are deploying AI-powered threat detection, anomaly identification, and automated response systems. The offense-defense balance is tilting: AI lowers the skill barrier for attackers while defenders must protect ever-expanding attack surfaces.
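The defensive side mentioned above -- anomaly identification in particular -- often starts from simple statistical baselines before any learned model is involved. A minimal sketch (illustrative only; production systems use far richer features and trained detectors): flag events whose timing deviates sharply from a user's historical pattern using a z-score.

```python
import statistics

def anomaly_score(history: list[float], observation: float) -> float:
    """Z-score of a new observation against a historical baseline.

    `history` might hold, e.g., a user's typical daily login hours.
    Returns 0.0 when the history has no variance.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return 0.0
    return abs(observation - mean) / stdev

def is_anomalous(history: list[float], observation: float,
                 threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations out."""
    return anomaly_score(history, observation) > threshold

# A user who normally logs in around 09:00 suddenly appears at 03:00.
logins = [8.9, 9.1, 9.0, 8.8, 9.2, 9.0, 8.7, 9.3]
print(is_anomalous(logins, 3.0))   # True: flagged
print(is_anomalous(logins, 9.05))  # False: within normal variation
```

Real deployments layer many such signals (geolocation, device fingerprints, request patterns) and feed them into models, but the underlying logic -- score deviation from a learned baseline, alert past a threshold -- is the same.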
Deepfakes and Information Warfare: The 2024 global election cycle -- the largest in history, with over 4 billion people in countries holding elections -- served as a stress test for AI-generated disinformation. Deepfake audio and video appeared in elections across India, Indonesia, Bangladesh, Slovakia, the UK, and the US. In Slovakia, a fabricated audio recording of a liberal candidate allegedly discussing vote-rigging circulated days before the 2023 election, with insufficient time for effective debunking. By 2025-2026, the tools for creating photorealistic video deepfakes, voice clones, and synthetic text are widely accessible through consumer-grade applications. The marginal cost of producing disinformation has collapsed toward zero.
Surveillance Expansion: AI-powered surveillance has expanded dramatically. China's integrated surveillance infrastructure -- combining facial recognition, gait analysis, social media monitoring, and predictive policing algorithms -- remains the most extensive, but similar capabilities are spreading globally. Over 75 countries have adopted AI surveillance technologies, many purchasing systems from Chinese firms like Huawei, Hikvision, and Dahua, or from Western companies like Palantir, Clearview AI, and Cellebrite. In democratic nations, the deployment of AI surveillance often proceeds through law enforcement procurement without adequate legislative oversight.
Key Drivers
1. Great power competition: The US-China technological rivalry is the primary accelerant for military AI development. Both nations frame AI dominance as existential for national security. The 2023 US Executive Order on AI and the DoD AI Strategy explicitly position AI as critical to maintaining military advantage. China's military-civil fusion strategy integrates private AI research directly into defense applications. Russia, though technologically trailing, has invested in autonomous combat systems and AI-enabled electronic warfare.
2. Asymmetric advantage for non-state actors: AI democratizes capabilities previously reserved for well-resourced states. Non-state actors, terrorist organizations, and criminal networks can now access AI tools for reconnaissance, propaganda generation, deepfake creation, and cyber operations. Commercially available drones modified with AI-assisted targeting have appeared in conflicts involving non-state actors in Ukraine, Syria, and Myanmar.
3. Commercial AI dual-use proliferation: The same large language models that power customer service chatbots can generate convincing phishing emails, synthesize disinformation, or assist in planning attacks. Open-source AI models, once released, cannot be recalled. The dual-use nature of AI technology makes traditional arms control frameworks -- designed for physical weapons -- inadequate.
4. Governance vacuum: Despite years of discussion at the UN Convention on Certain Conventional Weapons (CCW), no binding international treaty regulates LAWS. The CCW process has been stalled by opposition from major military powers, particularly the US, Russia, and India. The EU AI Act (2024) addresses some civilian AI risks but does not cover military applications. There is no equivalent of the Chemical Weapons Convention or the Nuclear Non-Proliferation Treaty for AI weapons.
5. Speed of conflict decision-making: AI systems operate at machine speed. In cyber conflict, AI-launched attacks can execute in milliseconds. In air defense scenarios, the decision window for intercept may be seconds. This creates pressure to remove human decision-makers from the loop, not because commanders want to, but because the tempo of AI-enabled conflict may not permit human deliberation.
Projections
LAWS proliferation (2026-2028): At least 10-15 nations will deploy operationally autonomous weapons systems (with varying degrees of human oversight) during this period. Small autonomous drones capable of target identification and engagement will be the most common category. Swarming capabilities -- where dozens or hundreds of drones coordinate without individual human control -- will move from demonstration to deployment. The price point for basic autonomous attack drones will drop below $5,000 per unit, making them accessible to a wide range of state and non-state actors.
Cyber escalation: AI-generated zero-day exploits will increase in frequency as AI systems become capable of discovering and weaponizing software vulnerabilities faster than they can be patched. The average time between vulnerability discovery and exploit will compress from weeks to hours. Critical infrastructure -- power grids, water systems, financial networks, healthcare systems -- will face elevated risk. At least one major AI-assisted cyberattack on critical infrastructure in a G20 nation is probable within this window.
Deepfake normalization: By 2028, deepfake detection technology will lag behind generation capabilities. The "liar's dividend" -- where real footage can be dismissed as potentially fake -- will erode trust in video and audio evidence in legal proceedings, journalism, and public discourse. Election integrity frameworks will struggle to adapt.
Surveillance creep in democracies: Predictive policing, biometric surveillance at borders and public spaces, and AI-assisted intelligence analysis will expand in democratic nations, often justified by terrorism or immigration enforcement. Legal challenges will lag deployment by 1-3 years.
Impact Assessment
Military and strategic stability: The introduction of autonomous weapons creates new instability risks. Systems that operate faster than human decision-making can cause escalation spirals. The lack of established norms for AI in conflict (comparable to nuclear deterrence doctrine) means that miscalculation risks are elevated. The 2026-2028 period represents a particularly dangerous window before norms and guardrails are established.
Civilian harm: AI-assisted targeting systems risk increasing civilian casualties through algorithmic bias (misidentifying civilians as combatants), data quality issues (relying on flawed intelligence databases), and reduced human deliberation time. The documented pattern from Gaza suggests that AI can enable targeting at unprecedented scale while creating an illusion of precision.
Democratic erosion: The combination of deepfakes, AI-generated disinformation, and expanded surveillance threatens democratic processes. Citizens face an increasingly poisoned information environment where distinguishing truth from fabrication requires expertise and effort that most people cannot devote to every claim they encounter. Meanwhile, the surveillance tools ostensibly deployed for security can be repurposed for political control.
Individual security: AI-powered social engineering, voice cloning for fraud, and automated hacking tools mean that individual citizens face elevated personal security risks. Business email compromise attacks using AI-generated voice clones of executives have already resulted in multi-million-dollar fraud cases.
Cross-Dimensional Effects
Geopolitics: Military AI competition reinforces great power rivalry and risks arms race dynamics. AI-enabled cyber capabilities become a primary vector for interstate conflict below the threshold of kinetic war. Nations that lag in AI military capabilities face strategic vulnerability, driving further investment and competition.
Ethics and regulation: The security domain is where AI ethics frameworks face their most severe tests. The tension between military advantage and ethical constraints, between security and civil liberties, between innovation and precaution, is sharpest here. Regulatory efforts in the civilian AI space (EU AI Act, US executive orders) have largely carved out national security exceptions, creating a governance gap precisely where the stakes are highest.
Digital divide: AI security capabilities are unevenly distributed globally. Nations and populations without sophisticated cyber defenses are most vulnerable to AI-enabled attacks. The surveillance gap -- where powerful states can monitor weaker states and their citizens -- exacerbates existing power asymmetries.
Cultural identity: Information warfare targets cultural narratives, identities, and social cohesion. AI-generated disinformation is often designed to exploit cultural fault lines -- racial tensions, religious divisions, political polarization. The erosion of shared epistemological foundations (the ability to agree on what is real) threatens the cultural commons.
Actionable Insights
For governments and policymakers:
- Push urgently for international norms on LAWS, even if a binding treaty remains elusive. Confidence-building measures, transparency requirements, and codes of conduct can reduce risk in the near term.
- Invest in AI-powered cyber defense at least as aggressively as in offensive capabilities. Critical infrastructure protection must be elevated to a national security priority.
- Mandate transparency and impact assessments for AI surveillance deployments by law enforcement. Establish independent oversight bodies with technical expertise.
- Develop rapid-response frameworks for deepfake incidents during elections, including pre-agreements among media organizations and platforms for verification protocols.
For technology companies:
- Implement robust safeguards against misuse of AI tools for weapon guidance, target identification, and attack planning. Red-team products specifically for security-relevant misuse scenarios.
- Invest in content provenance technologies (C2PA, watermarking) to enable authentication of real media rather than solely detecting fakes.
- Cooperate with law enforcement on countering AI-enabled fraud and cyberattacks, while maintaining clear boundaries against facilitating mass surveillance.
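The provenance recommendation above can be illustrated with a toy signing scheme. This is not the C2PA API: real provenance standards use public-key signatures over structured manifests embedded in the media file. Here an HMAC with a hypothetical publisher key stands in for the signature, to show the core idea that authentication binds a signer to exact content, so any post-capture modification breaks verification.

```python
import hashlib
import hmac

# Hypothetical secret held by the capture device or publisher; real systems
# use asymmetric keys so anyone can verify without holding the signing key.
SIGNING_KEY = b"publisher-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag binding the signer to this exact content."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """True iff the content is byte-identical to what was signed."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"raw video frames..."
tag = sign_media(original)
print(verify_media(original, tag))            # True: authentic
print(verify_media(original + b"edit", tag))  # False: tampered
```

The asymmetry this creates is the point of "authenticating real media rather than detecting fakes": a verifier needs no deepfake detector, only a signature check, and the burden shifts to content that arrives without provenance.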
For civil society and individuals:
- Support and fund independent AI security research and journalism. Public understanding of these issues is critical for democratic accountability.
- Develop personal digital hygiene practices: multi-factor authentication, skepticism toward unsolicited communications (even if they sound like known contacts), and media literacy skills.
- Engage with policy processes. The decisions being made in 2026-2028 about autonomous weapons, surveillance, and information integrity will shape decades of security outcomes.
Sources & Evidence
- US DoD AI Strategy (2023) -- Committed to accelerating AI adoption for warfighting and decision advantage. Replicator Initiative targets autonomous system deployment. defense.gov
- Stop Killer Robots Campaign -- Coalition of 250+ NGOs in 70 countries advocating for regulation of autonomous weapons. Tracks LAWS development globally. stopkillerrobots.org
- Human Rights Watch -- Documented risks of autonomous weapons and advocated for preemptive ban. Reports on AI-assisted targeting in conflict zones. hrw.org
- UN CCW Group of Governmental Experts on LAWS -- Ongoing multilateral discussions since 2014; no binding agreement reached. Major military powers resist binding restrictions. documents.un.org
- Europol Report on LLM Impact on Law Enforcement (2023) -- Assessed how large language models enable fraud, social engineering, and cybercrime at scale. europol.europa.eu
- RAND Corporation -- Multiple studies on AI and national security, autonomous weapons risks, and information warfare. rand.org
- NIST AI Risk Management Framework -- US standards body framework for trustworthy AI, referenced in Executive Order 14110. nist.gov
- CISA AI Security Guidance -- US Cybersecurity and Infrastructure Security Agency guidance on AI risks to critical infrastructure. cisa.gov
- Microsoft Digital Defense Report 2024 -- Documented AI-augmented cyberattacks by nation-state actors and cybercriminal groups. microsoft.com
- Brookings Institution -- Analysis of deepfakes and international conflict, information warfare implications. brookings.edu
- Carnegie Endowment for International Peace -- Research on AI and catastrophic risk, including military applications. carnegieendowment.org
- WEF Global Risks Report 2024 -- Ranked AI-generated misinformation and disinformation as the top global risk for the 2024-2026 period. weforum.org