Ethics & Regulation: Short-term

2026–2028 | Impacts already visible or imminent | Systems & Institutions

Current State

The global regulatory landscape for artificial intelligence in early 2026 is defined by a fundamental asymmetry: AI capabilities are advancing at an exponential pace while governance frameworks remain fragmented, reactive, and often outpaced by the technology they seek to govern. Three major regulatory blocs -- the European Union, the United States, and China -- have adopted starkly different approaches, creating a patchwork of rules that multinational AI companies must navigate simultaneously.

The EU AI Act represents the most comprehensive binding AI regulation in the world. Adopted in March 2024 and entering into force in August 2024, the Act follows a risk-based classification framework. Prohibited AI practices (social scoring systems, real-time biometric surveillance without safeguards, manipulative subliminal techniques) became enforceable in February 2025. Obligations for general-purpose AI (GPAI) models took effect in August 2025, requiring providers to maintain technical documentation, comply with EU copyright law, and publish training data summaries. High-risk AI systems used in critical sectors -- employment, education, law enforcement, credit scoring, migration -- face the most demanding requirements, including conformity assessments, human oversight mandates, and bias testing, with enforcement for most high-risk categories beginning in August 2026 (AI embedded in regulated products follows in 2027). The European AI Office, established in 2024, is the central body for coordinating implementation, drafting codes of practice for GPAI providers, and supervising systemic-risk models.

The United States has pursued a markedly different path. The Biden administration's Executive Order on Safe, Secure, and Trustworthy AI (October 2023) established reporting requirements for frontier model developers, directed NIST to develop AI safety frameworks (the AI Risk Management Framework), and tasked federal agencies with sector-specific guidance. However, the order was rescinded in January 2025 by the incoming Trump administration, which favored a lighter regulatory approach emphasizing innovation and American AI leadership. As of early 2026, the US lacks comprehensive federal AI legislation. Instead, regulation has emerged through a patchwork of state-level laws (Colorado's AI Act, California's vetoed SB 1047 and subsequent proposals, Illinois and New York City algorithmic hiring laws), sector-specific actions by the FTC and EEOC, and voluntary industry commitments. Congress has introduced multiple bills -- including the SAFE Innovation Framework, the Algorithmic Accountability Act, and various AI disclosure requirements -- but none has passed both chambers.

China has moved faster than any major economy in deploying targeted AI regulations. The Interim Measures for the Management of Generative AI Services (August 2023) require providers to register algorithms, ensure training data legality, and align outputs with "core socialist values." Earlier regulations govern recommendation algorithms (2022) and deepfake/synthetic media (2023). China's approach is pragmatic and sector-specific rather than omnibus, and notably applies primarily to consumer-facing services while granting broader latitude for enterprise and research uses. The Cyberspace Administration of China (CAC) maintains a public algorithm registry where companies must file details of deployed AI systems.

Copyright and intellectual property have become a flashpoint. In the US, the Copyright Office ruled in 2023 that purely AI-generated works cannot receive copyright protection, while works with significant human authorship that incorporate AI tools may qualify. Major lawsuits -- New York Times v. OpenAI, Getty Images v. Stability AI, Authors Guild v. OpenAI, and multiple class-action suits by artists and musicians -- are proceeding through the courts, with no definitive appellate rulings as of early 2026. The EU AI Act's requirement that GPAI providers respect copyright and disclose training data summaries has created a de facto obligation to license training data or rely on the EU's text-and-data-mining exception, which is subject to rightsholder opt-outs, though enforcement mechanisms remain underdeveloped. Japan has maintained a notably permissive stance, holding that training AI on copyrighted material generally does not infringe copyright.

Key Drivers

1. The EU enforcement ramp-up: The August 2026 deadline for high-risk AI system compliance is the single most consequential near-term regulatory event globally. Companies deploying AI in hiring, credit decisions, education, and law enforcement within the EU must demonstrate conformity or face fines of up to 15 million euros or 3% of global annual turnover, whichever is higher (violations of the Act's outright prohibitions carry penalties of up to 35 million euros or 7%). This is forcing a global compliance industry into existence.

2. Deepfake and election integrity crises: The 2024 US and EU election cycles saw widespread AI-generated disinformation -- robocalls using cloned candidate voices, synthetic video content, AI-generated fake news articles. These incidents have intensified legislative urgency around AI-generated content labeling, watermarking, and provenance tracking. At least 15 US states passed or introduced deepfake-related legislation by early 2026.

3. Algorithmic harm litigation: A growing body of case law is establishing that companies deploying AI systems can be held liable for discriminatory outcomes. EEOC guidance (2023) clarified that employers using AI hiring tools remain liable under Title VII. Several high-profile lawsuits involving AI-driven denial of healthcare claims, loan applications, and housing access have created powerful incentives for algorithmic auditing.

4. Competitive regulatory arbitrage: Companies are actively choosing where to develop and deploy AI based on regulatory environments. The divergence between EU, US, and Asian approaches is creating what some analysts call a "regulatory trilemma" -- companies cannot simultaneously optimize for EU compliance, US innovation speed, and Chinese market access.

5. The copyright question as existential risk for AI business models: If courts rule that training on copyrighted data constitutes infringement without fair use protection, the financial exposure for major AI labs runs into the billions. This uncertainty is a first-order business risk and is driving intense lobbying and legislative activity.

Projections

EU AI Act implementation will be uneven and contested (2026-2028). The regulatory infrastructure required to enforce the Act -- notified bodies for conformity assessment, standardization efforts through CEN/CENELEC, sector-specific guidance from national authorities -- is still being built. Expect significant compliance gaps in the first 12-18 months of high-risk system enforcement. Small and medium enterprises will struggle disproportionately with compliance costs, potentially concentrating the European AI market among larger firms with compliance budgets. The European AI Office will issue its first enforcement actions by late 2026 or early 2027, likely targeting high-profile cases to establish deterrent credibility.

The US will not pass comprehensive federal AI legislation before 2028. Congressional dysfunction, lobbying by both industry (seeking preemption of state laws) and civil society (seeking stronger protections), and political polarization around technology regulation make omnibus legislation unlikely. Instead, the federal approach will continue to rely on executive action (varying with administration), FTC enforcement under existing authority (unfair and deceptive practices), and a growing patchwork of state laws. California's approach will have outsized influence as a de facto national standard, mirroring its role in data privacy via the CCPA/CPRA.

Copyright litigation will produce landmark but incomplete rulings. By 2028, at least one appellate court in the US will issue a significant ruling on AI training and fair use, but the Supreme Court is unlikely to have weighed in. The most probable outcome is a nuanced ruling that permits training on copyrighted data under some conditions (transformative use, non-competing outputs) while restricting it under others (direct reproduction, competing substitutes). This will create a complex licensing landscape rather than a clear bright-line rule.

China will expand its regulatory framework to cover AI agents and autonomous systems. As Chinese companies deploy increasingly autonomous AI systems in finance, logistics, and consumer services, expect new regulations addressing AI agent accountability, automated decision-making, and cross-border data flows for AI services.

Impact Assessment

On AI companies: Compliance costs are becoming a significant line item. Large AI labs (OpenAI, Anthropic, Google DeepMind, Meta) have established dedicated regulatory affairs teams numbering in the dozens to hundreds. Smaller startups face a disproportionate burden -- estimated compliance costs of $500,000-$2 million for EU AI Act conformity for a single high-risk application, which can be prohibitive for early-stage companies.

On deployment patterns: The regulatory patchwork is creating a "compliance-first" deployment strategy where companies launch AI products in permissive jurisdictions first, then adapt for regulated markets. This disadvantages EU consumers who may receive AI products later or with reduced functionality, while giving US and Asian markets early access.

On affected populations: Workers subject to AI-driven hiring decisions, individuals assessed by AI for credit or insurance, and communities exposed to predictive policing or content moderation algorithms remain inadequately protected in jurisdictions without strong enforcement. The gap between regulatory intent and operational reality is measured in real harms: biased hiring outcomes, wrongful denials of benefits, and discriminatory content moderation.

On innovation: Evidence from early EU AI Act compliance suggests a moderate innovation chill in high-risk categories -- some companies are choosing not to deploy certain AI applications in the EU market rather than bear compliance costs. However, the overall global AI innovation trajectory remains largely unaffected by regulation, as the vast majority of AI applications fall outside high-risk categories.

Cross-Dimensional Effects

Security & conflict: AI regulation intersects directly with national security applications. Military and intelligence AI systems are explicitly exempt from the EU AI Act and most civilian regulatory frameworks, creating a dual-use governance gap. The same facial recognition technology regulated for law enforcement is unregulated for military use.

Geopolitics: Regulatory divergence between the EU, US, and China is becoming a dimension of geopolitical competition. The EU is actively exporting its regulatory model (the "Brussels Effect"), while the US frames its lighter approach as essential for maintaining AI leadership against China. Trade agreements increasingly include AI governance provisions.

Digital divide: Compliance costs and regulatory complexity favor large, well-resourced companies over smaller players and developing-country firms. Countries in the Global South that lack regulatory capacity risk becoming passive recipients of AI systems designed for other markets' regulatory environments -- or having no regulation at all, making their populations testing grounds for unproven systems.

Cultural production: Copyright regulation directly determines whether AI can freely train on cultural works. The resolution of current litigation and legislation will fundamentally shape whether AI-generated content commoditizes cultural production or whether creators retain meaningful economic protection.

Economic models: Regulation shapes the speed and distribution of AI-driven economic transformation. Strict regulation may slow displacement but also slow productivity gains; light regulation accelerates both.

Actionable Insights

For policymakers:

  • Prioritize enforcement capacity over new legislation. The gap between existing rules and actual compliance is more damaging than the absence of new laws. Invest in technical expertise within regulatory agencies.
  • Establish mandatory incident reporting for high-risk AI systems, analogous to aviation or pharmaceutical adverse event reporting. Without data on AI harms, evidence-based regulation is impossible.
  • Coordinate internationally to prevent regulatory arbitrage from creating a "race to the bottom." The OECD AI Principles and the G7 Hiroshima AI Process provide starting frameworks, but both lack binding enforcement mechanisms.

For AI companies:

  • Treat EU AI Act compliance as a baseline global standard. Building compliance infrastructure now is cheaper than retrofitting when other jurisdictions adopt similar frameworks.
  • Proactively develop auditable AI systems. Documentation, bias testing, and human oversight mechanisms should be architected into systems from the design phase, not bolted on for compliance.
  • Engage constructively with copyright holders. Licensing agreements, revenue-sharing models, and opt-out mechanisms are more sustainable than litigation-driven outcomes.

For individuals and civil society:

  • Exercise existing rights. Many jurisdictions already grant rights to contest automated decisions (GDPR Article 22, ECOA in the US for credit decisions). These rights are underutilized.
  • Support transparency mandates. Disclosure that an AI system is making or influencing a decision is a necessary precondition for accountability.
  • Document AI-related harms. Individual incident reports build the evidence base for regulatory action and litigation.

Sources & Evidence

  1. EU AI Act (Regulation 2024/1689) -- Full text and implementation timeline for the world's first comprehensive AI regulation. Risk-based classification framework with graduated obligations. artificialintelligenceact.eu
  2. European Parliament AI Act Overview -- Summary of prohibited practices, high-risk categories, and enforcement mechanisms. europarl.europa.eu
  3. White House Blueprint for an AI Bill of Rights -- Non-binding framework outlining safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. whitehouse.gov
  4. NIST AI Risk Management Framework -- Voluntary framework for managing AI risks; foundation for US federal approach to AI safety standards. nist.gov
  5. China Interim Measures for Generative AI Services (2023) -- Translation and analysis of China's regulatory framework for generative AI, including algorithm registration and content alignment requirements. digichina.stanford.edu
  6. UK Pro-Innovation AI Regulation (2023) -- White paper outlining the UK's sector-specific, principles-based approach to AI regulation, delegating to existing regulators. gov.uk
  7. WIPO AI and IP Policy -- Analysis of AI's impact on intellectual property frameworks globally, including patent, copyright, and trade secret implications. wipo.int
  8. US Copyright Office AI Initiative -- Ongoing proceedings examining copyright implications of AI training data and AI-generated outputs. copyright.gov
  9. FTC AI Enforcement Actions -- Federal Trade Commission enforcement posture on AI, including actions against deceptive AI claims and algorithmic discrimination. ftc.gov
  10. OECD AI Governance Framework -- International principles and policy recommendations for trustworthy AI, adopted by 46 countries. oecd.org