Texas Legislature Passes Comprehensive AI Bill

Texas’ legislature passed a business-friendly “comprehensive” artificial intelligence (AI) bill, the Texas Responsible Artificial Intelligence Governance Act (House Bill 149). But this isn’t a Colorado AI Act copycat; the differences are so stark that it seems unfair to say the split merely parallels how Virginia and Colorado/Connecticut adopted competing privacy frameworks in 2021. The lingo may be the same, but the scope and obligations are drastically different. Texas diverged from the Colorado model by creating an AI sandbox with limited safe harbors for testing AI, imposing fewer (and much narrower) obligations and restrictions on companies, adding conditions on government uses of AI, and setting a relatively quick effective date (January 1, 2026).

Application

The bill protects only Texas residents acting in their personal or household capacity; it does not cover employees, B2B contacts, or job applicants. And it regulates only those who (1) develop or deploy AI in the state; (2) produce a product or service used by Texas residents; or (3) promote, advertise, or conduct business in the state.

AI Sandbox

Texas created an AI sandbox program to encourage AI innovation by limiting a company’s liability. The government will evaluate companies’ requests to test their AI in the state for up to 36 months, a period that may be extended for good cause. Companies in the program have immunity from enforcement actions and administrative penalties by state regulators (including the attorney general) concerning the approved AI, although that immunity does not extend to violations of the bill’s own requirements. And because the sandbox stops only some state action, companies still need to keep an eye on federal laws and on Texas laws with a private right of action.

Restrictions on Companies

Texas imposes narrower, more targeted restrictions on companies’ uses of AI than the Colorado law does. Rather than tracking Colorado’s focus on the use of AI in consequential decisions, Texas focuses on specific harms and ties violations to intentional conduct. This narrower approach also ditches many of the obligations Colorado adopted, such as imposing a duty of care, requiring various notices, and mandating impact assessments.

Physical Harms 

A company cannot develop or deploy AI in a way that “intentionally” seeks to “incite and encourage” individuals to harm themselves, hurt others, or engage in criminal activity. 

Constitutional Harms 

A company cannot develop or deploy AI with the “sole intent” for the AI to impair a person’s rights guaranteed under the United States Constitution. This isn’t a prohibition on intending for the AI to cause such harm; the harm just can’t be the only reason a company created or adopted the AI.

Pornography

The bill regulates the use of AI in connection with pornography. A company cannot develop or deploy AI with the “sole intent” of producing, assisting in producing, or distributing child sexual abuse material or deep-fake images/videos. Relatedly, a company cannot intentionally develop or distribute an AI that engages in text-based conversations that describe sexual conduct while impersonating a minor.

Unlawful Discrimination

A company cannot develop or deploy AI with the “intent” to unlawfully discriminate against a protected class (defined by reference to civil rights law) in violation of state or federal law. But the bill narrows this restriction in three ways. First, it requires more than disparate impact to prove intent. Second, it excludes federally insured financial institutions. Third, it carves out insurance companies (and those developing AI for such companies) that are subject to certain laws on unfair practices.

Notably, this provision sidesteps a common critique of other AI proposals: that regulating AI-related discrimination merely prohibits activity that is already illegal. The bill avoids that trap by focusing on intent (the why) rather than effect (the discrimination). This means a company could violate the prohibition without ever actually discriminating against a protected class.

Restrictions on Government
 
Unlike Colorado’s law, Texas’ bill restricts when and how state and local government may use AI. Specifically, the bill covers four topics:

  • Transparency. The government must disclose the use of AI that is intended to interact with a consumer (even if the AI’s presence is obvious).
  • Social Scoring. The government cannot use or deploy AI for social scoring (classifying people based on behavior/characteristics) if the scoring could infringe consumers’ rights or subject them to unjustified, unfavorable treatment.
  • Biometric Data. The government cannot develop or deploy AI to identify a consumer using biometric data.
  • Public Images/Data. The government needs consent to develop or deploy AI to identify a consumer using public data (such as online images) if collecting that data would infringe the consumer’s legal rights.

Preemption

The bill preempts all local regulation of AI, including regulation of activity that the bill itself does not cover.

Enforcement

There is no private right of action. The attorney general has exclusive enforcement authority, subject to a limited right for state agencies to impose additional sanctions on their licensees. An agency can do so only if (1) a court found the company liable for a violation and (2) the attorney general recommended enforcement action by the agency.

Investigation

The attorney general can issue civil investigative demands after receiving a complaint from the public. But it is unclear whether the attorney general can issue demands on its own initiative.

Right to Cure

The attorney general cannot bring a lawsuit without first providing notice and an opportunity to cure. There is a 60-day right to cure, which does not sunset. A company relying on that right must report to the attorney general that it cured the violation, provide supporting documentation of the cure, and update its internal policies to “reasonably prevent further violations” (not just the violation being cured).

Defenses and Safe Harbors

There are two notable liability carve-outs:

  • Third-Party Responsibility. A company is not responsible for violations if someone else uses the company’s AI in a manner prohibited by the bill.
  • Self-Discovery. A company is not liable for a violation that it discovers from (1) receiving feedback; (2) conducting testing, such as red-team testing; (3) following state-agency guidelines; or (4) performing a risk review, provided that the company “substantially complies” with a recognized AI-risk-management framework.

Penalties

The bill sets up a tiered penalty system with both a floor and a cap for civil penalties. The penalty range depends on the nature of the violation:

  • Curable. Minimum of $10,000 and maximum of $12,000
  • Not Curable. Minimum of $80,000 and maximum of $200,000
  • Ongoing. Minimum of $2,000 per day and maximum of $40,000 per day

The attorney general can also seek an injunction and recover attorney’s fees, court costs, and other investigative expenses. Additionally, a state agency can (subject to the limitations noted above) issue separate penalties—such as suspending a license or imposing a penalty of up to $100,000.

Next Steps

All eyes turn to Gov. Greg Abbott: he has until June 22 to sign the bill, veto it, or let it become law without his signature. If it becomes law, the provisions take effect on January 1, 2026. 

But this all could become moot depending on what Congress does with the reconciliation bill. Federal legislators are trying to hammer out a compromise on banning (or strongly discouraging) state AI regulation. The House adopted an outright ban, while the Senate proposal replaced the ban with a stick: states that regulate AI would lose federal funding for expanding broadband access.