27 Mar, 2026

US Sen. Marsha Blackburn (R-Tenn.), third from the left, speaks as President Donald Trump listens in the Oval Office. Blackburn and the White House have issued separate legislative AI frameworks that contain key differences.
Republican proposals on AI released in March differ on key provisions, including copyright, liability and Section 230.
The Trump administration released a four-page legislative blueprint on March 20 calling on Congress to establish a single national AI standard that preempts state laws and takes a light-touch approach to regulation. The same week, Sen. Marsha Blackburn (R-Tenn.) released a discussion draft bill branded as the TRUMP AMERICA AI Act, presented as the legislative vehicle to carry out the president's mandate.
But the two documents differ significantly on key issues, raising questions about whether Congress can pass a national AI governance law in 2026.
"Outside of small exceptions, Blackburn's proposal is diametrically opposed to Trump's," said Neil Chilson, head of AI policy at the Abundance Institute and a former chief technologist at the Federal Trade Commission. "Blackburn's approach would place dozens of government veto points in front of developers, deployers and users. The Trump framework seeks to eliminate such barriers."
Conflict on copyright
The starkest conflict is on copyright. The proposal from Blackburn, a longtime ally of Nashville music labels and other rightsholders, would codify that using copyrighted works to train AI models is not fair use under the Copyright Act. The White House framework takes the opposite position, calling for AI to retain the ability to make fair use of materials used for training.
Blackburn, who is running for governor of Tennessee, said in a statement to S&P Global Market Intelligence that she would work to codify Trump's AI agenda while calling her proposal "the solution America needs."
Matthew Sag, a copyright scholar at Emory University School of Law, said the White House position is consistent with existing case law, pointing to two recent rulings from the Northern District of California, Kadrey v. Meta Platforms and Bartz v. Anthropic. In both cases, the court ruled in favor of AI developers on summary judgment motions, finding that the end product was "transformative" and not a direct replica of the original copyrighted work.
"Declaring that AI training was under no circumstances fair use would not stop the development of AI," Sag said. "It would simply cause talent, capital and development to head overseas to one of several jurisdictions with more hospitable copyright laws."
Mark Lemley, a law professor at Stanford University, identified a separate structural problem with the provision's scope.
"It might grandfather in the existing models, locking in an oligopoly that would control all future AI," Lemley said. "Even those models would be hard to improve, because they would have to be trained on a small and nonrepresentative subset of works in the public domain."
Duty of care
The liability structure presents a second fault line. Blackburn's proposal imposes a duty of care on AI chatbot developers and creates a federal cause of action for AI-related harms under Title VII, including restrictions on contractual liability waivers. The White House framework warns against open-ended liability and excessive litigation.
Peter Salib, an assistant professor at the University of Houston Law Center who studies AI governance and serves as co-director at the Center for Law and AI Risk, said Blackburn’s duty-of-care and liability framework mostly adapts existing tort and product-liability concepts to AI developers.
"All that stuff is the kind of thing that already is state law, not just for AI developers but for anyone who makes any product," Salib said. "So in that sense, you might think of it as a kind of light-touch way of regulating the AI industry."
Ryan Thompson, counsel at Hogan Lovells who advises clients on AI regulatory issues, said the vagueness of the standard creates compliance uncertainty for developers and others deploying AI tools.
"It's really hard for companies to ascertain on the front end what is required," Thompson said. "In more developed areas of law, you have court decisions, regulations that specify what the obligations are, industry guidelines and other things you can point to. In this space, it's much harder for a company developing or deploying these tools to assess what that duty is."
Blackburn's proposal would require the FTC to set rules establishing minimum reasonable safeguards.
Sunsetting Section 230
On Section 230, Blackburn's proposal would repeal the 1996 provision that shields platforms from liability for third-party content while simultaneously creating a minimum liability standard specific to AI.
Thompson said the combination amounts to a significant escalation of legal exposure for the industry.
"Repealing Section 230 on top of adding these new liability frameworks is something of a double whammy for the AI industry if this were to pass as proposed," he said.
The White House framework makes no mention of a Section 230 repeal, though Trump has previously called for its repeal or reform. The law states that no user or platform provider shall be treated as a publisher, and that service providers and users may not be held liable for voluntarily removing objectionable material in good faith. Trump and others have argued that by enabling fact-checking by online platforms, the law interferes with free speech and leads to the censorship of conservative voices.
Congressional outlook
Separate legislative frameworks are also expected from Senate Commerce Committee Chairman Ted Cruz (R-Texas) and Majority Leader John Thune (R-S.D.).
Cruz, in an earlier AI framework released in September 2025, emphasized regulatory sandboxes and light-touch regulation without the liability architecture in Blackburn's proposal. He welcomed the White House framework in a March 20 post on X.
"I look forward to working with the White House and members of the Commerce Committee to advance meaningful AI legislation," Cruz said.
Salib said the pace of AI development makes the timeline for any legislative response particularly urgent.
"If we do this sort of information gathering for a few years and then Congress takes a few more years to actually figure out what it wants to do, given political gridlock, I just think the ship is going to have sailed," Salib said.
Some states, including New York and California, have already begun requiring companies to draft and follow concrete safety plans for frontier models.
House Republican leaders — including Speaker Mike Johnson, Majority Leader Steve Scalise and Committee on Energy and Commerce Chairman Brett Guthrie — endorsed the White House framework. Democrats responded with the GUARDRAILS Act, introduced by Rep. Don Beyer (D-Va.) with a companion bill in the Senate from Sen. Brian Schatz (D-Hawaii) that would repeal President Trump's December executive order that called for a federal preemption of state AI laws.
Still, Thompson noted that Congress, which is not known for moving quickly, currently has other priorities.
"In an election year, with limited bipartisan agreement and a clear gap between the policy framework and this draft bill, the likelihood of passage is low," he said. "Narrower issue-specific proposals may have a better chance where there is alignment, but even those remain uncertain."