Transatlantic Divergence in AI Regulation: The EU AI Act and the Fragmented U.S. Approach

I. Introduction

Artificial intelligence (AI) has shifted from a niche technology to critical infrastructure for economies, national security, and everyday life. Generative AI, in particular, has confronted regulators with new risks around misinformation, discrimination, surveillance, and systemic dependence on opaque models.

By the end of 2025, the European Union (EU) and the United States (US) have taken sharply different regulatory paths:

  • The EU AI Act is now in force with a phased implementation schedule and central oversight via the European AI Office, moving toward a single, risk-based regulatory framework across the bloc.

  • The US continues to lack a comprehensive federal AI statute. Instead, it relies on:

    • A sequence of executive orders and policy frameworks emphasizing innovation, national competitiveness, and First Amendment concerns; and

    • A rapidly expanding patchwork of state AI laws addressing deepfakes, discrimination, transparency, and “high‑risk” systems.

This white paper compares the EU and US AI regulatory landscapes as of late 2025, and offers predictions for 2026. It follows the overall analytical structure of the University of Chicago Business Law Review article “Comparing the EU AI Act to Proposed AI-Related Legislation in the US,” while updating for the current legal and political context.

II. The EU AI Act in 2025

A. Scope and Risk-Based Architecture

The EU AI Act is the world’s first comprehensive horizontal AI regulation. It applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or whose output affects individuals in the EU, regardless of where the provider is established.

The Act relies on a tiered risk framework:

  1. Prohibited AI practices (unacceptable risk), such as:

    • Social scoring systems (by public or private actors),

    • Certain manipulative or exploitative systems (e.g., those that exploit vulnerabilities),

    • Real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes, subject to narrow exceptions.

  2. High-risk AI systems, including:

    • Safety components of products covered by sectoral law (e.g., medical devices, vehicles),

    • Systems used in employment, credit, education, essential services, law enforcement, border management, and the administration of justice.

  3. Limited‑risk systems, with transparency obligations, such as:

    • Chatbots, which must disclose that users are interacting with a machine,

    • Deepfake generators, which must disclose that content is synthetic.

  4. Minimal‑risk systems, for which no additional obligations are imposed.

In parallel, the Act creates a separate regime for general-purpose AI (GPAI) models, including those that may pose “systemic risk” (e.g., very large frontier models).

B. 2025 Implementation Milestones

The AI Act formally entered into force in August 2024, with a phased rollout through 2027; 2025 was the first year in which binding obligations became enforceable.

Key 2025 milestones include:

  • February 2, 2025: Prohibited practices and AI literacy obligations

    • All bans on unacceptable‑risk AI practices became directly applicable.

    • AI literacy obligations (Article 4) began to apply, requiring providers and deployers to ensure adequate AI training and awareness among staff who operate AI systems on their behalf.

  • May–July 2025: Guidance and codes for general-purpose AI

    • Codes of practice for GPAI models (covering transparency, copyright compliance, and risk management) were drafted, finalized, and endorsed, largely under the guidance of the European AI Office.

  • August 2, 2025: GPAI obligations and governance structures

    • GPAI providers are now subject to core obligations (technical documentation, training data summaries, copyright safeguards, model evaluation, and, for “systemic‑risk” models, incident reporting and risk mitigation).

    • Member States:

      • Designated national competent authorities and market-surveillance authorities;

      • Adopted national penalty regimes for non-compliance.

    • The European AI Office became fully operational, coordinating enforcement for GPAI, issuing guidance, and organizing joint investigations with national regulators.

Through 2025, the EU has complemented the Act with policy initiatives such as an “Apply AI Strategy” to accelerate adoption in key sectors and support infrastructure for AI startups.

C. Enforcement and Practical Challenges

Despite clear timelines, the EU faces several implementation challenges:

  • Capacity constraints: Many Member States are still building in‑house expertise in AI auditing, technical assessments, and risk evaluation.

  • Consistency: Achieving uniform enforcement across 27 Member States remains a concern, echoing critiques of uneven GDPR enforcement.

  • GPAI complexity: Determining when a model constitutes a “systemic‑risk” GPAI and how to audit such models is technically and procedurally complex.

  • Standardization: Harmonized technical standards under the AI Act (for conformity assessment of high‑risk systems) are still being drafted and are unlikely to be complete before 2027, with some work extending beyond.

Nonetheless, by the end of 2025, the EU has a functioning legal and institutional framework, with the bulk of high‑risk obligations arriving in 2026.

III. The U.S. AI Regulatory Landscape in 2025

A. Lack of a Comprehensive Federal AI Statute

As of late 2025, the US still has no EU‑style, comprehensive AI law. Proposals from the prior Congress—such as Schumer’s SAFE Innovation Framework, the Blumenthal–Hawley Bipartisan Framework, and the National AI Commission Act—have not matured into binding federal law.

The 119th Congress (2025–2026) has focused mainly on:

  • R&D and infrastructure support (e.g., the CREATE AI Act for expanding the National AI Research Resource);

  • Targeted bills addressing issues such as:

    • Deepfakes and election integrity,

    • Child safety and CSAM involving generative AI,

    • AI in critical infrastructure or defense.

No omnibus “US AI Act” has passed. Instead, federal policy is dominated by executive actions and sectoral enforcement by agencies (FTC, CFPB, EEOC, FDA, etc., under existing authorities).

B. Executive Branch Policy in 2025

Federal AI policy has shifted from a predominantly risk‑regulation narrative (under earlier frameworks like EO 14110) to a more innovation‑ and competitiveness‑centric approach.

Key developments in 2025 include:

  • A series of Executive Orders emphasizing:

    • Reducing “regulatory barriers” to AI innovation;

    • Reviewing or rolling back prior guidance perceived as overly restrictive;

    • Re‑orienting federal AI use toward “viewpoint‑neutral” or “non‑ideological” systems, framed around First Amendment concerns.

  • A December 11, 2025 Executive Order on “Ensuring a National Policy Framework for Artificial Intelligence” that:

    • Directs the Department of Justice to challenge certain state AI laws deemed to unduly burden interstate commerce, compel particular content outcomes, or conflict with federal free‑speech principles;

    • Tasks the Department of Commerce with reviewing state AI laws within 90 days and identifying inconsistencies with national policy;

    • Signals support for eventual federal preemption of state AI laws, while carving out space for state laws on child safety and critical infrastructure;

    • Conditions some federal funds (e.g., in broadband or infrastructure) on state alignment with federal AI principles.

This federal approach prioritizes:

  • National competitiveness in AI against global rivals;

  • First Amendment protections, particularly for generative AI systems considered expressive tools;

  • Skepticism toward broad state‑level “AI fairness” or “AI safety” statutes that mandate particular model outcomes or extensive impact assessments.

C. The Proliferation of State AI Laws

In the absence of a federal AI statute, states have taken the lead.

By late 2025:

  • Dozens of states have enacted AI‑related laws, and nearly all have introduced bills covering deepfakes, discrimination, transparency, and sector-specific use cases.

  • Key themes include:

    1. Deepfakes and election integrity

      • Laws penalizing malicious deepfakes in political campaigns;

      • Mandatory disclaimers for AI‑generated political content close to election periods.

    2. Child safety and CSAM

      • Strict prohibitions on generative AI depicting minors in sexual contexts;

      • Expanded criminal liability for producing or distributing AI‑generated CSAM.

    3. High‑risk AI and discrimination

      • Colorado’s AI Act (SB 24‑205, taking effect in 2026) targets high‑risk AI systems that make consequential decisions, imposing duties to prevent algorithmic discrimination and requiring risk management and notice to affected individuals.

      • Other states explore similar frameworks focusing on employment, housing, credit, and essential services.

    4. AI transparency and safety governance for frontier models

      • California’s emerging AI safety laws (e.g., a “frontier model” transparency and safety statute effective 2026) require large‑scale model providers to conduct risk assessments and report certain safety metrics.

These state statutes vary widely in:

  • Scope (narrow deepfake bans vs broad AI impact assessment obligations);

  • Responsible entities (model providers, deployers, or both);

  • Enforcement mechanisms (private rights of action vs AG enforcement vs administrative oversight);

  • Standards (e.g., “reasonable care,” “foreseeable misuse,” non‑discrimination, or content‑neutral labeling).

The December 2025 federal Executive Order directly targets some of the more expansive state AI acts—especially those perceived as compelling particular speech outputs, or imposing broad, preemptive restrictions on model design—as inconsistent with federal policy. Litigation on federal preemption and constitutional grounds is expected in 2026.

IV. Structural Differences Between the EU and U.S. Approaches

A. Centralized, Ex Ante Regulation vs Fragmented, Ex Post Enforcement
  • EU:

    • Single, binding, horizontal statute (AI Act) with ex ante obligations for providers and deployers—conformity assessment, documentation, risk management, and post‑market monitoring.

    • Centralized oversight for GPAI models via the European AI Office, plus national supervisory authorities.

  • US:

    • No horizontal statute; instead:

      • Sectoral enforcement under existing law (FTC, CFPB, EEOC, HUD, HHS, etc.),

      • State‑level experiments (Colorado, California, Texas, New York, etc.), and

      • Executive‑branch policy signaling but limited hard law.

    • Emphasis on ex post enforcement (e.g., unfair/deceptive practices, discrimination after harm).

Implication: Companies face a single overarching AI regime when operating in the EU, but a patchwork of state laws plus agency enforcement in the US. Compliance posture in the EU therefore tilts toward systematic ex ante risk management; in the US, toward managing litigation and enforcement exposure.

B. Fundamental Rights and Risk vs Innovation and Speech
  • EU:

    • Grounded in fundamental rights (privacy, non‑discrimination, human dignity) and systemic risk to democracy and safety.

    • Bans on certain AI techniques and stringent controls on law‑enforcement biometric surveillance.

  • US:

    • Framed around:

      • Innovation leadership and national competitiveness;

      • First Amendment safeguards for AI‑generated content;

      • National security and critical infrastructure.

    • Rather than banning high‑risk AI categories ex ante, policy favors targeted constraints (e.g., deepfakes in elections, CSAM) and post‑hoc liability.

Implication: The EU is more willing to restrict or ban certain AI uses outright where they conflict with fundamental rights, while the US keeps a wider perimeter of “permissible experimentation,” constrained primarily by specific harms, speech doctrine, and sectoral laws.

C. General-Purpose AI and Systemic Risk
  • EU:

    • Distinct regime for GPAI with special duties for “systemic‑risk” models (e.g., incident reporting, model evaluation, and mitigation measures).

    • Codes of practice (2025) are crystallizing into enforceable obligations.

  • US:

    • No dedicated GPAI statute; frontier model providers operate under:

      • Soft‑law commitments (voluntary safety frameworks, industry alliances),

      • Potential sectoral liability if outputs cause harm,

      • State‑level laws where applicable (e.g., California frontier model safety reporting).

Implication: By 2026, large model providers will likely treat EU AI Act GPAI rules as the de facto global baseline for documentation, safety evaluation, and incident reporting—even if US law remains comparatively light‑touch.

D. Extraterritorial Reach and Conflict‑of‑Laws
  • EU AI Act applies extraterritorially to non‑EU providers whose systems are placed on the EU market or whose outputs affect EU individuals.

  • The US has no equivalent horizontal AI regulation, but does assert:

    • Extraterritorial enforcement of sectoral laws (e.g., FCPA-style theories for AI‑mediated misconduct, export controls on advanced AI chips),

    • Potential federal preemption of state AI laws, leading to domestic conflict.

Multinational enterprises must resolve potential conflicts where:

  • EU rules mandate specific disclosure, documentation, or risk controls, and

  • Certain US norms and state constitutional doctrines (e.g., compelled speech concerns) might resist such obligations on domestic deployments.

V. Strategic Compliance Considerations for 2025–2026

For organizations operating on both sides of the Atlantic, 2025–2026 is a critical design window for sustainable AI governance.

Key strategic moves include:

  1. Adopt “AI Act‑plus” as the global baseline

    • Treat EU AI Act obligations—especially for high‑risk systems and GPAI—as the default design standard even in the US.

    • This simplifies cross‑border operations and positions the organization for potential US convergence on risk‑based concepts over time.

  2. Map AI use cases to risk tiers and jurisdictional triggers

    • Conduct an AI inventory and classify systems:

      • Prohibited, high‑risk, limited‑risk, minimal‑risk (EU lens);

      • High‑impact / decision‑making vs low‑impact AI (US/state lens).

    • Identify where Colorado‑style or California‑style state laws impose additional obligations starting in 2026 (a classification sketch follows this list).

  3. Build dual‑track governance for the EU and the US

    • EU track:

      • Formal risk management, conformity assessment for high‑risk systems, and robust technical documentation aligned to anticipated harmonized standards.

    • US track:

      • Litigation‑aware processes (record‑keeping, legal review of model outputs, discrimination testing, and consumer‑protection compliance).

      • Monitoring of state‑level requirements and expected federal preemption litigation.

  4. Strengthen documentation and model evaluation for GPAI

    • Model cards, training data summaries (where feasible), safety evaluations, red‑team testing, and incident response plans.

    • These steps align closely with AI Act GPAI rules and emerging best practices, and position companies for regulatory inquiries in both regions (a documentation sketch also follows this list).

  5. Monitor litigation and guidance as de facto regulation

    • In the EU, enforcement cases and AI Office guidance will clarify the interpretation of “high‑risk” and “systemic‑risk”.

    • In the US, court decisions in challenges to state AI laws and federal actions (e.g., DOJ’s AI Litigation Task Force) will shape boundaries of permissible regulation and platform liability.
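
To make item 2 above concrete, the inventory-and-classification step can be prototyped as a simple data model. The Python sketch below is illustrative only and rests on invented assumptions: the tier names mirror the EU AI Act's taxonomy, but the purpose list, trigger rules, and field names (e.g., makes_consequential_decisions) are hypothetical placeholders, not legal criteria.

```python
# Illustrative AI-inventory classification (EU risk tiers + US state-law
# triggers). Tier names follow the AI Act; everything else is a placeholder.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class EUTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


@dataclass
class AISystem:
    name: str
    purpose: str                           # e.g., "resume screening"
    deployed_in_eu: bool                   # simplified EU-scope trigger
    makes_consequential_decisions: bool    # Colorado SB 24-205-style trigger
    generates_synthetic_media: bool        # deepfake-disclosure trigger
    state_deployments: list[str] = field(default_factory=list)


# Hypothetical shortlist standing in for the Act's high-risk use cases.
HIGH_RISK_PURPOSES = {"resume screening", "credit scoring", "exam proctoring"}


def classify_eu_tier(system: AISystem) -> Optional[EUTier]:
    """Rough EU-lens classification; None means out of (simplified) EU scope.

    Real scoping needs legal review: the Act also reaches providers abroad
    whose system outputs affect individuals in the EU.
    """
    if not system.deployed_in_eu:
        return None
    if system.purpose in HIGH_RISK_PURPOSES:
        return EUTier.HIGH_RISK
    if system.generates_synthetic_media:
        return EUTier.LIMITED_RISK  # transparency/labeling duties
    return EUTier.MINIMAL_RISK


def state_law_triggers(system: AISystem) -> list[str]:
    """Flag simplified 2026 state-law triggers (US lens)."""
    triggers = []
    if system.makes_consequential_decisions and "CO" in system.state_deployments:
        triggers.append("Colorado SB 24-205 duties (effective 2026)")
    if system.generates_synthetic_media:
        triggers.append("state deepfake-disclosure rules may apply")
    return triggers


if __name__ == "__main__":
    screener = AISystem(
        name="resume-ranker",
        purpose="resume screening",
        deployed_in_eu=True,
        makes_consequential_decisions=True,
        generates_synthetic_media=False,
        state_deployments=["CO", "CA"],
    )
    print(classify_eu_tier(screener))    # EUTier.HIGH_RISK
    print(state_law_triggers(screener))  # ['Colorado SB 24-205 duties ...']
```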
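
Likewise, the documentation artifacts listed in item 4 can be maintained as structured, machine-readable records rather than scattered documents. The following minimal sketch assumes hypothetical field names (no official AI Act template is implied); the design choice is that serializable records make it easier to answer regulator or auditor inquiries consistently in both regions.

```python
# Minimal GPAI documentation record, loosely mirroring the artifacts in
# item 4 (model card, training data summary, safety evaluations, incident
# contact). Field names are illustrative, not an official AI Act template.
import json
from dataclasses import dataclass, asdict, field


@dataclass
class SafetyEvaluation:
    name: str             # e.g., "red-team jailbreak suite"
    date_run: str         # ISO date of the evaluation run
    result_summary: str   # short, auditable summary of findings


@dataclass
class GPAIModelRecord:
    model_name: str
    version: str
    training_data_summary: str   # public summary, where feasible
    copyright_safeguards: str
    incident_contact: str
    evaluations: list[SafetyEvaluation] = field(default_factory=list)


record = GPAIModelRecord(
    model_name="example-gpai",   # hypothetical model
    version="2025.4",
    training_data_summary="Web text and licensed corpora; public summary published.",
    copyright_safeguards="Honors opt-outs; provenance filtering on ingestion.",
    incident_contact="ai-safety@example.com",
    evaluations=[
        SafetyEvaluation(
            name="red-team jailbreak suite",
            date_run="2025-11-01",
            result_summary="No critical failures; 3 medium findings mitigated.",
        )
    ],
)

# Serialize one consistent record for regulators, auditors, or internal review.
print(json.dumps(asdict(record), indent=2))
```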

VI. Predictions for 2026

A. EU: Consolidation and First Major Enforcement Actions
  1. High‑risk obligations become central

    • From August 2, 2026, most obligations for high‑risk AI systems will apply, including:

      • Risk management and mitigation plans;

      • Data quality and governance requirements;

      • Logging, transparency, and human oversight obligations;

      • Conformity assessment and CE‑marking for covered products.

    • Providers failing to meet these requirements face significant administrative fines.

  2. Early landmark cases

    • Expect several high‑profile enforcement actions:

      • Against providers of biometric identification or emotion‑recognition systems used in public spaces;

      • Against GPAI providers that fail to comply with transparency or systemic‑risk mitigation duties.

  3. Refinement through guidance and standards

    • The European AI Office and European standardization bodies will issue:

      • Detailed guidance on high‑risk use cases and GPAI evaluation;

      • Sector‑specific standards (e.g., health, transport, financial services).

    • De facto “best practices” will emerge quickly, particularly around documentation and model evaluation methods.

  4. Increased alignment with data and sectoral law

    • AI Act enforcement will increasingly intersect with:

      • GDPR (data minimization, lawful basis, profiling),

      • DSA/DMA (platform obligations and systemic risks),

      • Sectoral safety and consumer‑protection rules.

    • Organizations will see AI compliance as part of integrated digital‑regulation compliance rather than a standalone silo.

B. US: Toward a Federal Framework, but Through Conflict
  1. Federal preemption and constitutional litigation

    • 2026 is likely to see:

      • Litigation in which DOJ challenges broad state AI acts (e.g., Colorado‑style statutes or particularly intrusive frontier‑model laws) on Commerce Clause and First Amendment grounds;

      • States defending their role in consumer protection, non‑discrimination, and election integrity.

    • Courts may constrain the most expansive state frameworks while letting narrower, harm‑specific laws (child safety, fraud, CSAM, election deepfakes) stand.

  2. Incremental federal legislation

    • Full omnibus AI legislation remains unlikely in 2026, but targeted federal laws may advance:

      • Narrow federal deepfake and election‑related provisions;

      • Enhanced criminalization of AI‑generated CSAM and certain fraudulent uses;

      • Funding‑linked requirements for AI in critical infrastructure (e.g., risk‑management frameworks).

  3. Soft‑law frameworks and industry self‑governance

    • Federal agencies will continue to publish:

      • AI risk‑management frameworks (e.g., NIST‑style guidance),

      • Sectoral AI guidance (FTC, CFPB, EEOC, HHS, etc.).

    • Large AI providers will likely:

      • Adopt voluntary safety and transparency frameworks;

      • Use internal model evaluations and red‑teaming aligned with NIST and, de facto, AI Act‑inspired practices;

      • Publicize safety reports to preempt calls for stricter statutory regulation.

  4. Emergence of de facto standards via procurement

    • Federal and state procurement rules for AI systems (e.g., in defense, healthcare, and unemployment systems) may require adherence to specific risk‑management, transparency, and documentation standards.

    • For vendors, these standards will function as binding requirements, even without a comprehensive AI statute.

C. Convergence and Persistent Divergence
  1. Areas of convergence

    • Even without formal harmonization, we can expect:

      • Global norms around AI risk assessment, documentation, and incident response;

      • EU‑driven expectations for GPAI providers to carry over into US contracts and due‑diligence processes;

      • Growing market pressure for certifiable assurance about AI safety and fairness.

  2. Areas of persistent divergence

    • Speech and content regulation: The US will likely remain more protective of AI‑generated speech under the First Amendment, limiting direct regulation of outputs; the EU will remain more willing to restrict certain AI uses on fundamental‑rights grounds.

    • Biometrics and surveillance: The EU’s strict limitations on real‑time remote biometric identification will contrast with a more permissive and security‑oriented US approach, particularly in law enforcement and border contexts.

    • Regulatory philosophy: The EU will continue to stress ex ante, centralized, rights‑based regulation, while the US will prioritize innovation, competition, and ex post enforcement plus litigation.

VII. Conclusion

By late 2025, the EU and the US have crystallized two distinct models of AI governance:

  • The EU has moved from debate to implementation, with a comprehensive AI Act, an operational AI Office, and binding obligations on prohibited practices and GPAI providers. It is now preparing for 2026 as the year high‑risk obligations become central.

  • The US has embraced a deregulatory, innovation‑oriented federal stance coupled with a highly active state‑level legislative environment, now on a collision course with federal preemption efforts. Absent a comprehensive federal AI statute, AI governance in the US will continue to evolve through a mixture of executive policy, agency enforcement, state experimentation, and constitutional litigation.

For organizations deploying AI systems globally, 2025–2026 is the period to institutionalize robust AI governance—using the EU AI Act as a compliance anchor, while building sufficient flexibility to navigate US constitutional constraints and a fluid patchwork of state laws.

A proactive, “AI Act‑plus” compliance strategy—combining rigorous risk assessment, documentation, technical evaluation, and clear accountability structures—will not only reduce regulatory risk across jurisdictions but also support safer, more trustworthy AI adoption worldwide.

VIII. References

  1. European Commission. (2024). Regulatory framework for AI. Digital Strategy.

  2. European AI Office. (2025). Implementation Timeline and GPAI Codes of Practice. Artificial Intelligence Act Explorer.

  3. Gracias, S. (2024). Comparing the EU AI Act to Proposed AI-Related Legislation in the US. University of Chicago Business Law Review.

  4. The White House. (2025). Executive Order on Ensuring a National Policy Framework for Artificial Intelligence. Presidential Actions.

  5. National Conference of State Legislatures (NCSL). (2025). Artificial Intelligence 2025 Legislation. NCSL Technology and Communication.

  6. International Association of Privacy Professionals (IAPP). (2025). US State AI Legislation Reviewing the 2025 Session. IAPP News.

  7. Brookings Institution. (2025). How different states are approaching AI. Brookings Research.

  8. DLA Piper. (2025). Latest wave of obligations under the EU AI Act take effect. Insights.

  9. Sidley Austin LLP. (2025). Unpacking the December 11, 2025 Executive Order. News & Updates.

  10. U.S. Congress. (2025). H.R. 2385 - CREATE AI Act of 2025. Congress.gov.
