Policy Alert: New U.S. Executive Order on Artificial Intelligence – Aon Tips for Better Risk Capital Decisions

January 15, 2026


Better Informed

On December 11, 2025, U.S. President Donald J. Trump released a new federal Executive Order — the latest effort to aggressively promote U.S. AI leadership and move away from a fragmented, state-by-state regulatory model. At its core, the Executive Order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” does three things:

  1. Declares a national policy: The U.S. will pursue “minimally burdensome” AI regulation to sustain and enhance global AI dominance.
  2. Targets state-level AI rules: It creates tools and incentives to challenge or constrain state AI laws seen as “onerous,” especially those perceived as forcing “ideological bias” into models, requiring changes to “truthful outputs” of AI, or imposing extensive reporting and disclosure obligations.
  3. Sets the stage for a uniform federal framework: It directs the Administration to prepare legislation that preempts conflicting state AI laws, while carving out limited areas like child safety and state procurement.

The Administration has framed AI as central to U.S. national security and economic dominance, and explicitly rejects what it sees as over-regulation, particularly at the state level.

  • To create a policy environment that favors rapid AI deployment, experimentation and capital investment, the Department of Justice (DOJ) will, within the next 30 days, create an AI Litigation Task Force whose “sole responsibility” is to challenge state AI laws deemed inconsistent with the Executive Order’s policy.
  • Within 90 days, the Department of Commerce (Commerce) must publish an evaluation of state AI laws that flags “onerous” laws inconsistent with the Executive Order’s policy, including laws that require AI to alter truthful outputs or that compel disclosure and reporting in ways that may violate the First Amendment or other constitutional provisions.
  • Prospectively, the Order prevents states with “onerous” AI laws from accessing certain Broadband Equity, Access, and Deployment Program (BEAD) funds, essentially using federal funding to discourage state AI regulations that are inconsistent with the Executive Order. This could create near-term uncertainty in states that currently have stricter AI regulation.
  • To create a default national disclosure framework for certain AI systems, the Federal Communications Commission (FCC) has been directed to consider a federal reporting and disclosure standard for AI models that preempts conflicting state rules.
  • The Federal Trade Commission (FTC) will issue a policy statement explaining when state laws that require altering truthful AI outputs will be treated as preempted by federal law on unfair and deceptive practices.
  • Lastly, the Administration will draft legislation for a uniform federal AI policy that preempts state AI laws conflicting with the Executive Order’s policy. If enacted, organizations would face one primary federal AI regime for commercial use, plus limited, clearly scoped state rules in exempted areas such as child safety and public-sector AI.

For organizations with operations across the U.S., this is not just a legal development. It will influence the enterprise risk profile, control environment and insurability of AI-enabled activities over the next several years. A more uniform AI framework could reduce regulatory friction and compliance cost, but it also raises questions around governance, liability allocation and stakeholder trust in AI-driven decisions.

This new direction aligns with themes highlighted by Kevin Kalinich, Aon’s Intangible Assets Global Collaboration Leader, during his Congressional testimony on artificial intelligence1, which emphasized that national standards and clearly defined “safe harbors” can reduce legal uncertainty and encourage responsible innovation, similar to how some state and federal laws use the NIST AI Risk Management Framework and other benchmarks to guide risk mitigation. The Texas AI law2, for instance, will enable more confident use of AI because, among other things, it provides some protection for entities that comply with the NIST AI Risk Management Framework. Similarly, Florida recently rolled out a proposal for a “Citizen Bill of Rights for Artificial Intelligence” aimed at protecting consumers from the potential risks and harms of AI.3

Furthermore, under the McCarran-Ferguson Act, insurance regulation and taxation are delegated primarily to the states, not the federal government: the Act declares that state regulation of insurance is in the public interest and that state insurance laws are not superseded by federal law unless Congress acts specifically on insurance. The Act established the modern state-based system for regulating the insurance industry, which now extends to guidance on insurers’ use of AI. For example, the National Association of Insurance Commissioners (NAIC) issued the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in December 2023, which has been adopted by over half of the states.4

Better Advised

Artificial intelligence opportunities, perils and Risk Capital solutions are dynamic and fluid. What should organizations consider to maximize their AI return on investment and minimize their total cost of risk?

In the short term, organizations should expect greater volatility and legal uncertainty as new state laws continue to emerge5, the DOJ’s Task Force begins challenging selected laws, and federal agencies (FCC6, FTC7, Commerce8) begin to issue standards, evaluations and policy statements. Organizations will need to activate monitoring and scenario planning around which state obligations may be invalidated or narrowed, how rapidly federal preemption efforts progress, and how disclosure, transparency and reporting obligations change. In addition, the Executive Order is likely to face legal challenges as to its constitutionality, which may ultimately affect the Order’s scope and impact.

In the medium term — over the next several years — organizations may expect to see a convergence toward a federal-first AI regime, with fewer diverging state requirements to design around and a more predictable compliance architecture for multi-state operations. How the U.S. regime compares with AI rules elsewhere will shape strategic decisions about deploying AI globally. For example, organizations that operate globally must still comply with foreign regulations, such as the European Union Artificial Intelligence Act, and all other applicable laws.

If a federal framework emerges that:

  • Differentiates higher-risk vs. lower-risk AI uses
  • Embeds clear governance and technical control expectations
  • Potentially provides safe harbor-style protections for entities that align to recognized standards

then enterprises could benefit from:

  • Greater confidence in scaling AI from pilots to production, knowing that liability parameters and oversight expectations are more clearly defined.
  • Reduced compliance fragmentation and associated costs across states.
  • More consistent risk quantification and transfer, including better integration of AI risk into insurance and captive strategies.
  • Stronger enterprise-wide AI governance models, integrating legal, risk, cyber, operations and business units, in line with emerging best practices.

Better Decisions

This Executive Order marks a clear strategic choice by the federal government to promote U.S. AI leadership by reducing perceived state-level barriers and moving toward a more uniform, “minimally burdensome” regime. For organizations, this points toward faster AI adoption, fewer long-term regulatory inconsistencies across states, and a shift from heavy compliance and reporting obligations toward a less onerous regime. Organizations that:

  • Establish risk‑based AI governance
  • Align to emerging standards and safe harbor concepts
  • Integrate AI into enterprise-level risk, resilience and disclosure frameworks

will be better positioned to capture AI-driven growth while managing the evolving landscape of liability, reputation, and operational risk.

Boards, executive teams and risk leaders should consider the following cross-sector priorities:

1. Develop an Enterprise AI Use‑Case Inventory
  • Catalog AI systems and use cases across the organization, including those embedded in third-party tools.
  • Classify use cases by business criticality and risk, separating high impact decisions from lower risk automation.
2. Enhance AI Governance and Controls
  • Build or refine an AI governance framework that connects legal, compliance, cyber, data, operational risk and business owners.
  • Align policies and controls with recognized risk management frameworks likely to underpin safe harbors and emerging federal standards.
3. Review Contracts and Risk Transfer Arrangements
  • Revisit key vendor and partner agreements involving AI, data and models to ensure clarity on responsibilities, performance, security and compliance.
  • Assess how AI-related risks are addressed in existing insurance programs (e.g., cyber, E&O, D&O, media/tech liability, EPLI, general/product liability, and crime, such as deepfake-driven fraudulent funds transfers) and consider enhancements as the market for AI-specific coverages evolves.10 Compare the advantages and disadvantages of relying on assumed “silent AI” coverage under existing policies versus affirmative AI coverage, in the midst of growing AI-specific exclusions and sub-limits.9
4. Strengthen Transparency and Stakeholder Communication
  • Prepare for potential federal baselines on AI reporting and disclosures, including where AI materially influences customer outcomes or core operations.
  • Integrate AI into enterprise risk management and board reporting, focusing on how AI both enables growth and introduces new categories of risk.
  • Consider an AI Risk Diagnostic.

Aon’s Thought Leaders
  • Jesus Gonzalez
    Intangible Assets Global Collaboration Co-Leader
  • Daniel Serota
    Vice President of Government and Public Affairs


1 "Guardrails and Growth: AI’s Role in Capital and Insurance Markets," Before the U.S. Senate Committee on Banking, Housing, and Urban Affairs Subcommittee on Securities, Insurance, and Investment (Jul 30, 2025), https://www.banking.senate.gov/imo/media/doc/kalinich_testimony_7-30-25.pdf

2 Texas Responsible Artificial Intelligence Governance Act (TRAIGA) (HB 149), signed into law by Governor Greg Abbott; TRAIGA takes effect on January 1, 2026.

3 "Governor Ron DeSantis Announces Proposal for Citizen Bill of Rights for Artificial Intelligence" (Dec 4, 2025), https://www.flgov.com/eog/news/press/2025/governor-ron-desantis-announces-proposal-citizen-bill-rights-artificial

4 Implementation of NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers, National Association of Insurance Commissioners (last updated Aug. 5, 2025), https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-map-ai-model-bulletin.pdf.

5 US State-by-State AI Legislation Snapshot, Bryan Cave Leighton Paisner LLP (Dec. 16, 2025), https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html

6 "Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts," (Aug 8, 2024), https://docs.fcc.gov/public/attachments/FCC-24-84A1.pdf

7 Prepared and issued by Mark D Gray, Chief Artificial Intelligence Officer, "FTC Compliance Plan on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence," (Sept 2025), https://www.ftc.gov/system/files/ftc_gov/pdf/FTC-AI-Use-Policy.pdf

8 "Commerce Department Launches New Center to Guide AI Standards and Security," (Jun 2025), https://www.asme.org/government-relations/policy-impact/commerce-department-launches-new-center-to-guide-ai-standards-and-security

9 "Policyholders Should Negotiate to Limit AI Exclusions at Renewal," by Lynda Bennett, Alexander Corson, (Dec 2025), https://news.bloomberglaw.com/legal-exchange-insights-and-commentary/policyholders-should-negotiate-to-limit-ai-exclusions-at-renewal

10 Tracking the Evolution of AI Insurance Regulation | Fenwick, by Heidi Lawson, Faye Wang, Sarah Hopkins (Dec 11, 2025), https://www.fenwick.com/insights/publications/tracking-the-evolution-of-ai-insurance-regulation

General Disclaimer

This document is not intended to address any specific situation or to provide legal, regulatory, financial, or other advice. While care has been taken in the production of this document, Aon does not warrant, represent or guarantee the accuracy, adequacy, completeness or fitness for any purpose of the document or any part of it and can accept no liability for any loss incurred in any way by any person who may rely on it. Any recipient shall be responsible for the use to which it puts this document. This document has been compiled using information available to us up to its date of publication and is subject to any qualifications made in the document.

Terms of Use

The contents herein may not be reproduced, reused, reprinted or redistributed without the expressed written consent of Aon, unless otherwise authorized by Aon. To use information contained herein, please write to our team.
