14 Oct, 2025

State legislators in Colorado delayed implementation of the state's AI law as they considered various revisions and amendments. Source: Brad McGinley Photography/Moment via Getty Images
As states experiment with various legislative AI frameworks, they are finding that regulating AI is challenging — a lesson that could eventually influence federal lawmakers.
Colorado's AI Act was signed into law in 2024 as the nation's first comprehensive state-level AI law. As originally crafted, the law regulates "high-risk" AI systems used to make "consequential decisions" in areas such as education, employment, lending, government services, healthcare, housing, insurance and legal services. The act requires developers and deployers of these systems to exercise reasonable care to prevent algorithmic discrimination and to provide transparency through consumer notices and disclosures. It imposes fines of $20,000 per violation but exempts businesses with fewer than 50 employees from risk assessment requirements.
After the law passed, industry stakeholders — particularly small and midsized technology firms — called for changes, raising concerns about the law's broad scope and unclear definitions. Lawmakers proposed amendments, including shifting liability to developers of AI systems rather than deployers and extending the small business exemption. Colorado lawmakers did not reach an agreement and voted this summer to delay implementation of the law by five months until June 30, 2026, to allow more time for revisions.
While the extension gives companies more time to prepare, many say it is hard to know what to prepare for because the law may still change significantly.
"The uncertainty is pretty jarring, and that's creating a lot of confusion inside businesses," said Andrew Gamino-Cheong, chief technology officer and co-founder of Trustible, an AI governance and compliance platform. "There are oftentimes some groups that are trying to push to kind of adopt better AI governance postures and better documentation to prepare for these things, but then they're getting mixed signals from regulators about what's going to come in."
A rocky start
Tyler Thompson, a Denver-based partner at global law firm Reed Smith, told S&P Global Market Intelligence that Colorado's AI Act could benefit from a more gradual transition. Staggered implementation dates would allow "some of the more burdensome and contentious parts of the law to take effect later than others."
"There also is just a clear need for cleanup and explanation: many definitions are vague and compliance requirements uncertain," Thompson said. "If nothing else, a concerted effort to provide clarity and explanation throughout the law could help businesses better understand what they will actually be dealing with."
In particular, the definition of "consequential decisions" has been debated. The current definition is intentionally broad, but an earlier version of the bill limited the definition to employment and public safety, curbing the number of businesses covered by the act.
Although the delay has caused confusion for companies seeking to maintain compliance in Colorado, it also presents an opportunity for the state to reexamine the scope and intent of the law and make changes that benefit businesses and consumers, said Jake Parker, senior director of government relations at the Security Industry Association.
"Colorado's delay presents a strategic window for businesses to adopt best practices from other states and to harmonize and implement internal governance, as well as conduct bias and transparency testing," said Katrina Rosseini, an expert in AI, cybersecurity and quantum computing. "Getting on the forefront of this is only going to benefit businesses, innovation and consumer protection and reputation."
Rosseini recommended harmonizing regulatory frameworks with other states, including California's transparency standards and Illinois' consent-based policies. She said she expects national guidelines from the National Institute of Standards and Technology and the Federal Trade Commission to follow state models.

RELATED COVERAGE:
Little tech braces for impact of state AI law patchwork
How the tech industry influenced California's latest AI legislation

A second take
Colorado is not the only state to find itself amending an earlier AI law. In 2024, Utah enacted the Utah Artificial Intelligence Policy Act (UAIP), which required all regulated businesses in the state to disclose the use of generative AI tools in customer interactions. The law also confirmed that those businesses could not avoid liability under Utah consumer protection law if the generative AI tools led to an unintentional violation. The law created the Office of Artificial Intelligence Policy to oversee an AI learning laboratory program, which grants participating companies exemptions from some of the law's provisions.
A year later, the state amended the law to narrow the scope of the disclosure requirement to cover only those customer interactions deemed "high-risk," including those involving the collection of sensitive personal information or the provision of personal recommendations for key services, like financial, legal, medical or mental health advice. The amendment also created a new enforcement safe harbor for companies that disclose at the beginning of and throughout a customer interaction that the user is engaging with AI.
The amendments, as well as the original learning lab, have been lauded by the technology industry.
"The UAIP's continued focus on promoting innovation is not only consistent with the Trump administration's federal AI policy to date ... but also mitigates the concern about stifling growth in the early stages of a transformative industry," lawyers from Davis Polk's IP litigation team wrote in a note.
Utah's learning lab is somewhat similar to a framework proposed by Sen. Ted Cruz, R-Texas, as part of his Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation (SANDBOX) Act. The bill directs the White House Office of Science and Technology Policy to establish a regulatory sandbox for AI developers, who can use the program to apply for waivers of or modifications to federal regulations as they test their AI products and services.
In both cases, the technology industry is collaborating with regulators rather than opposing them to produce better outcomes, a strategy that has proven successful for other industries, said Anthony Habayeb, CEO and co-founder of the AI governance platform Monitaur Inc.
Habayeb pointed to insurance regulators working through the National Association of Insurance Commissioners to establish a bulletin that guides insurance companies toward regulatory expectations for AI governance, which more than half of US states have adopted since 2023.
"Such a successful outcome was very much a result of years of intentional regulator and industry collaboration to mutually educate and consider the opportunities and challenges of AI," Habayeb said. "It is good to see Utah attempting to similarly bring stakeholders together as partners in AI's future."
Wide vs. narrow
A major question for states looking to regulate AI is whether to pursue comprehensive bills like Colorado's or narrower, industry-focused measures.
Some experts argue that broad protections are needed because the technology is rapidly becoming pervasive.
"If we don't have the right regulations that are going to protect the consumers, as well as the businesses, then are we really going to be ready for what's coming from a technological advancement angle, now that we're already on that trajectory?" Rosseini asked.
Others counter that these broader bills get bogged down in the difficulty of legally defining the wide range of potential harms that can result from deploying the technology.
Rather than attempting to regulate AI comprehensively, one approach may be to focus on "a distinct form of AI that has a demonstrative negative impact in a discrete area and set parameters," said Henry Noye, a Philadelphia-based partner in Obermayer's litigation department.
"And if that is successful, then you use that as a model in other industries," Noye said. "You don't catch the whale all at once, right?"