Working Group Proposes Revamping Colorado AI Act
Colorado’s legislature enacted one of the first comprehensive U.S. artificial intelligence (AI) laws in 2024: the Consumer Protections for Artificial Intelligence, better known as the Colorado AI Act. On Tuesday, a working group convened by Gov. Jared Polis proposed a near-total rewrite of that law. This is not a technical cleanup; it meaningfully changes how AI is regulated and where legal risk shows up.
The Colorado AI Act currently focuses on “high-risk AI” and imposes heavy governance requirements, impact assessments, and risk programs. The new proposal narrows the scope to automated tools that “materially influence” meaningful decisions and shifts compliance toward consumer notice, post-adverse-decision disclosures, and meaningful human review. In short, the proposal takes a more business-friendly approach to AI regulation. But can the bill cross the finish line?
What are the key provisions?
- Narrows scope to cover more significant uses of AI
- Shifts compliance burdens from pre-use assessments to post-adverse decisions
- Embraces significant rulemaking to build out core concepts
- Focuses developer risk on how developers sell or market their products; the contracts will matter
- Emphasizes execution (bright-line rules) over papering defenses (assessments and risk frameworks)
How did we get here?
In May 2024, Colorado became the first state to pass a comprehensive AI law. The law focuses on high-risk AI systems, meaning AI used to make consequential decisions, and requires companies to:
- Use reasonable care to avoid algorithmic discrimination in high-risk AI systems
- Share information about AI systems, including purpose, training, and known risks
- Conduct impact assessments for high-risk AI uses
- Maintain a risk-management program
- Provide consumers with detailed pre- and post-use notices and explanations
- Notify the attorney general of known or reasonably foreseeable algorithmic discrimination
The law was hailed as the start of a wave of AI legislation. But the tides quickly turned. Immediately after the bill became law, the governor and other leaders led calls to reform it. The legislature tried, twice, to no avail. In the most recent attempt, multiple competing substantive proposals died before the legislature eked out a last-minute amendment delaying the effective date to June 2026. Since then, a working group has been meeting to break the deadlock. That group just released its proposal.
What does the proposal scrap?
Compared to the Colorado AI Act, the proposal simplifies compliance by removing vague standards and AI-governance requirements.
- Duty of Care. Eliminates duty to use “reasonable care” to protect consumers from algorithmic discrimination.
- AI Governance. Removes obligation to conduct annual impact assessments and maintain a risk-management program.
- Algorithmic Discrimination. Removes all references to algorithmic discrimination and, in doing so, leaves policing that issue to existing anti-discrimination laws.
The proposal also removes affirmative defenses. But this change isn’t as significant as it appears, because the proposal adopts clearer, bright-line obligations that lessen the need for affirmative defenses. More on that below.
What does the proposal introduce?
The proposal adds more business-friendly provisions, giving businesses greater clarity as they try to understand and mitigate their potential liability.
- Cure Period. Adds a 90-day cure period with no sunset. But it applies only to civil penalties; the attorney general does not have to wait to pursue injunctive or equitable relief.
- Fault Allocation. Requires allocation of fault among developers (creators of AI systems) and deployers (users of the AI system) rather than joint and several liability.
- Developer Liability. Limits liability to claims arising from uses the developer intended, advertised, or otherwise marketed. The developer isn’t liable for a deployer’s rogue activity.
- Record Retention. Requires deployers to keep, for three years, any records reasonably required to show their compliance.
- Indemnification Limits. Invalidates any contract provision purporting to indemnify a party for liability arising from violations of the proposed bill.
What does the proposal modify?
The proposal also tweaks some existing provisions, giving companies extra time to prepare for a narrower law focused on notice and post-use rights.
- Scope. Limits application to cases where AI is a non-de minimis factor in a consequential decision and adds explicit carveouts for advertising, trivial or clerical uses, routine or low-stakes decisions, and more.
- Developer Disclosures. Reduces the required disclosures and mandates them only when the developer intended or marketed the AI to materially influence consequential decisions.
- Appeals. Reframes the right to appeal adverse decisions as a right to human review and bulks up what that review entails.
- Effective Date. Delays the effective date to January 1, 2027.
- Consumer Notices. Adds more flexibility for providing notice and shifts more disclosures to after an adverse decision.
- HIPAA Exemption. Expands carveout to cover most healthcare entities’ AI activities, provided they give general notice about their AI usage.
Where do we go from here?
A Colorado legislator must still file the proposal as a bill, though this is likely a formality. Even if competing bills emerge, the proposal stands a strong chance of succeeding where others have failed. Unlike prior attempts, Polis stated this proposal has buy-in from a range of stakeholders, including businesses, consumer advocates, hospitals, and schools, whose disagreements sank earlier amendments. But the clock is ticking; the legislative session ends on May 13.
A potential wildcard: President Donald Trump’s executive order directing the U.S. Department of Justice to name and shame state AI regulations, sue to block those laws, and cut funding to the states that enacted them. We expect that list any day. If Colorado’s law is targeted, it could add some chaos: legislators could argue that any replacement legislation will face uncertain court battles and imperil critical funding, making scrapping the law entirely the more prudent option.