Colorado Hits Reset on AI Regulation

Colorado (basically) has a new comprehensive artificial intelligence law, again. Yesterday, the Colorado legislature passed SB26-189, an overhaul of the Consumer Protections for Artificial Intelligence (the Colorado AI Act), and the governor announced he will sign the bill. This isn’t a technical cleanup—it is a substantive overhaul that meaningfully changes how AI is regulated and where legal risk shows up. We highlight the key points below and invite you to register for our webinar diving into the law at 12 p.m. MT on May 26.  

Overview

Colorado has significantly narrowed the scope of its AI regulation, abandoning the focus on "High-Risk AI" and burdensome governance requirements in favor of a business-friendly framework built on consumer notice and meaningful human review. For most businesses, this shifts the focus from internal risk programs to external-facing disclosures and contract negotiations. Compliance now means executing on bright-line rules rather than papering a defense through assessments and risk frameworks.

|  | Original Law | New Law |
| --- | --- | --- |
| Primary Scope | "High-Risk AI Systems" – AI tools that make, or are a "substantial factor" in, consequential decisions | "Covered ADMT" – AI tools that "materially influence" consequential decisions |
| Governance | Impact assessments and risk-management programs | None (but consider AI inventories to facilitate notice and consumer rights) |
| Rulemaking | Optional | Mandatory |
| Liability | Joint and several liability | Comparative fault; developer liability limited to intended, marketed, or contracted-for uses |
| Effective Date | June 30, 2026 | January 1, 2027 |

How did we get here?

In May 2024, Colorado became the first state to pass a comprehensive AI law, the Colorado AI Act. The law focused on high-risk AI systems, which are AI tools that make or are a substantial factor in making consequential decisions. Under that law, businesses were required to:

  • Use reasonable care to avoid algorithmic discrimination in high-risk AI systems
  • Share information about AI systems, including purpose, training, and known risks
  • Conduct impact assessments for high-risk AI uses
  • Maintain a risk-management program
  • Provide consumers with detailed pre- and post-use notices and explanations
  • Notify the attorney general of known or reasonably foreseeable algorithmic discrimination

The law was hailed as the start of a wave of AI legislation. But the tides quickly turned. Immediately after the bill became law, the governor and other leaders led calls to reform the Colorado AI Act. The legislature tried—twice—to no avail. Before the successful push this year, the most recent attempt saw multiple competing bills die before the legislature eked out a last-minute amendment to delay the effective date of the original law to June 30, 2026.

In October 2025, the governor created a working group that met for months to find a solution. In mid-March 2026, that group released a proposal that the legislature adopted nearly verbatim.

What was removed?

Compared to the original Colorado AI Act, the new law simplifies compliance by removing vague standards and AI-governance requirements.

  • Duty of Care. Eliminates duty to use “reasonable care” to protect consumers from algorithmic discrimination.
  • AI Governance. Removes obligation to conduct annual impact assessments and maintain a risk-management program.
  • Algorithmic Discrimination. Removes all references to algorithmic discrimination and, in doing so, leaves policing that issue to existing anti-discrimination laws.

The new law also removes affirmative defenses. But this isn’t as significant as it appears because the law adopts more clear-cut obligations that lessen the need for affirmative defenses. More on that below.

What is new?

The new law adds more business-friendly provisions, including clarity for businesses trying to understand and mitigate their potential liability.

  • Cure Period. Adds a 60-day cure period that sunsets on January 1, 2030. But it only applies to civil penalties; the attorney general does not have to wait to pursue injunctive or equitable relief.
  • Fault Allocation. Requires allocation of fault among developers (creators of AI systems) and deployers (users of the AI system) rather than joint and several liability.
  • Developer Liability. Limits developer liability to claims arising from uses the developer intended, advertised, or contracted for.
  • Record Retention. Requires deployers to keep for three years any records reasonably required to show their compliance.
  • Indemnification Limits. Invalidates any contract provision purporting to indemnify a party for liability arising from violations of the law.
  • HIPAA Obligations. Requires covered entities to disclose their “use of advanced technologies, including covered ADMT.” But the law does not define “advanced technologies,” a term that seems broader than the AI covered by the law.

What was changed?

The new law tweaks some of the existing provisions while giving businesses extra time to prepare for a framework more concerned with notice and consumer rights than substantive governance.

  • Scope. Narrows the law’s application to uses where AI is a non-de minimis factor affecting the outcome of a consequential decision, and adds explicit carveouts for advertising, trivial or clerical uses, routine or low-stakes decisions, and more.
  • Developer Disclosures. Reduces the number of required disclosures, simplifies the process by allowing the use of public release notes, and requires sharing details only when the developer intended or marketed the AI to materially influence consequential decisions.
  • Effective Date. Delays the effective date to January 1, 2027.
  • Rulemaking. Requires the attorney general to adopt rules by January 1, 2027, clarifying the requirements for consumer rights and disclosures after an adverse decision. The attorney general may, but is not required to, adopt rules on other aspects of the law. [The legislature suggested that it would like rules to clarify the definition of “materially influence,” including providing “presumptions, illustrative examples, and objective indicators.”]
  • Appeals. Reframes the right to appeal adverse decisions as a right to human review and bulks up what that review entails.
  • Consumer Notices. Adds more flexibility for providing notice and focuses more on disclosures after an adverse decision.
  • Correction Right. Limits the right to correct personal data to situations involving an adverse outcome from a consequential decision materially influenced by AI.
  • HIPAA Exemption. Expands carveout to generally include covered entities “doing business in Colorado” and their business associates in their work for the covered entity. But this exception does not apply when using AI in employment decisions or for certain financial aid decisions.

Where do we go from here?

The updated requirements will inevitably force businesses to reexamine and adjust their compliance programs. Many likely adopted procedures in anticipation of the original law’s June 30, 2026 effective date that are no longer required. Compliance teams may grapple with whether to keep those now-optional procedures as part of a good-governance program rather than out of legal necessity. Regardless, there are four critical action items for the new law:

  1. Audit AI Inventory. Assess current AI use cases against the “material influence” standard and the new carveouts for advertising and clerical tasks.
  2. Update Contracts. Ensure contracts with developers clearly define the AI activities, because developers are liable only for intended, advertised, or contracted-for uses.
  3. Review Record Retention. Confirm that there is adequate documentation to show compliance and that those materials are kept for at least three years.
  4. Monitor Rulemaking. Track the attorney general’s rulemaking on consumer rights and adverse-decision disclosures (which must be finalized by January 1, 2027) and watch for whether the attorney general decides to publish rules on other topics.

But all of that work ultimately could be for naught. There is still a pending lawsuit against the original law that the plaintiffs indicated may expand to cover the new version. And the Department of Justice may yet bring its own lawsuit to stop the law based on the Executive Order directing challenges to “onerous laws” that are inconsistent with a “minimally burdensome national policy framework for AI.”


Colorado Just Rewrote Its (and Your) AI Playbook

Time and Date: 12 p.m. Mountain Time, May 26, 2026

Join Josh Hansen and Camila Tobón for a webinar unpacking the overhaul of Colorado's AI Act, which moves the state's AI regulation from demanding governance requirements to a more business-friendly framework centered on consumer disclosures, human oversight, and clearer liability rules. You’ll leave with a practical understanding of the new law, a framework for assessing enforcement risks, and a concise roadmap for adapting your AI compliance program.

Register Today