05 Mar, 2026
The Trump administration is moving forward on identifying state AI laws deemed inconsistent with federal policy, setting the stage for potential litigation and funding restrictions that could reshape how companies navigate AI regulation across the US.
The Commerce Department is expected to publish its evaluation of state AI laws by March 11. The report, required under a December 2025 executive order, will serve as a road map for a Justice Department AI Litigation Task Force to challenge state laws the administration considers overly burdensome. The same order directs the Commerce Department to restrict access to billions in broadband funding for states that maintain laws flagged as "onerous."
The DOJ directed S&P Global Market Intelligence to a Jan. 9 memo establishing the AI Litigation Task Force and declined further comment.
Members of Congress are already pressing the Commerce Department on which laws to target. Reps. Gabe Evans (R-Colo.) and Nick Langworthy (R-N.Y.) wrote to Commerce Secretary Howard Lutnick on Feb. 19 urging the department to include Colorado's AI Act and New York's Responsible AI Safety and Education (RAISE) Act in the evaluation. But companies deploying AI across multiple jurisdictions remain caught between state laws that are fully enforceable and a federal preemption push whose scope and timeline are unclear.
"For a company that is in compliance with a state law, until you see an injunction by a district court, I think they should just continue to comply with it," said Ryan Thompson, counsel at Hogan Lovells who advises clients on AI regulatory issues.
Which laws face scrutiny
The executive order explicitly cites Colorado's AI Act, which regulates algorithmic decision-making in high-stakes contexts, as an example of a law that "may even force AI models to produce false results." Thompson said Colorado's law and Illinois' Biometric Information Privacy Act are among the most likely targets.
The Evans-Langworthy letter adds New York's RAISE Act to the list, arguing its broad applicability positions the state as a "de facto national regulator." The state law creates audit and transparency requirements for foundation models.
States enacted 145 AI-related laws in 2025, a more than 50% increase over the prior year, according to regulatory tracking firm Multistate.ai.
Spokespersons for the attorneys general of Colorado and Illinois did not respond to requests for comment.
Federal funding as leverage
The executive order goes beyond litigation, directing the Commerce Department to make states with flagged AI laws ineligible for nondeployment funds under the $42.45 billion Broadband Equity, Access and Deployment (BEAD) program.
As of the latest update from the National Telecommunications and Information Administration (NTIA), the bureau in the Commerce Department overseeing the BEAD program, 50 of 56 state and territory proposals have been approved, with roughly $21 billion being spent on BEAD deployments and $21 billion remaining for nondeployment purposes. Nondeployment projects can include things such as building AI infrastructure, streamlining permitting and pole attachment processes, expanding cybersecurity and telehealth, and improving broadband affordability.
On Feb. 10, Lutnick testified before the Senate Appropriations Subcommittee that the administration intends to spend BEAD nondeployment funds "according to the statute."
But when Sen. Chris Van Hollen (D-Md.) asked how the AI executive order aligns with existing BEAD law, Lutnick said he was not "sufficiently familiar to answer."
State lawmakers have estimated that most states will have hundreds of millions of dollars available under nondeployment funds, while Texas and North Carolina will have over $1 billion. Given the amount of funding involved, Hogan Lovells' Thompson said the BEAD lever is significant.
"We also do have this overlay of the use of federal funding to compel states to either not pass laws or to not enforce existing laws that are inconsistent with the priorities of the Trump administration," Thompson said.
An NTIA spokesperson directed S&P Global Market Intelligence to a Feb. 13 blog post on listening sessions for BEAD nondeployment funds and confirmed the agency is still working on guidance.
What companies should do
Bryan McGowan, a principal in KPMG's advisory practice who leads the firm's AI governance work, said most large companies are building comprehensive AI frameworks aligned to internationally recognized standards such as the National Institute of Standards and Technology's AI Risk Management Framework and the ISO 42001 certification rather than tailoring compliance to individual state laws.
KPMG has not significantly altered the governance framework it rolled out in 2023 even as thousands of regulatory changes have been proposed since, McGowan said, and the firm is advising clients to take a similar approach.
McGowan said KPMG's patented AI testing framework and related testing of client AI systems have found cases where third-party AI systems had guardrails in place that were "not functioning as intended," including risks around data exposure. "It's relatively straightforward to design an AI governance program and communicate related expectations," he said. "It's much more difficult to test and validate whether the systems are truly functioning as expected."
Anthony Habayeb, CEO of AI governance firm Monitaur, said the biggest mistake companies are making is delaying governance investments while waiting for regulatory clarity, noting that early research has begun to show correlations between governance spending and improvements in revenue and margins.
Hogan Lovells' Thompson said the enforcement environment is where the real shift is occurring, pointing to the Federal Trade Commission's September 2025 inquiry into major AI chatbot operators, including Alphabet Inc., Character Technologies Inc., Instagram LLC, Meta Platforms Inc., OpenAI OpCo LLC, Snap Inc. and X.AI LLC, and a bipartisan letter from 44 state attorneys general to leading AI firms raising concerns about child safety.
The executive order carves out child safety from preemption, but Thompson flagged algorithmic pricing, the use of AI in employment decisions and state data protection compliance as areas of concentrated enforcement activity.
"We're still waiting to see what the recommendations are with respect to state laws that the federal government has identified as inconsistent with the policy objectives of the executive order," Thompson said.