Texas TRAIGA in Effect, Colorado AI Act Looms in June — Corporate Legal Teams Scramble
By Hector Herrera | April 19, 2026 | Legal
Texas's Responsible AI Governance Act (TRAIGA) is now law, effective January 1, 2026. Colorado's comprehensive AI Act follows in June. Companies with AI systems deployed in either state are inside a live compliance window—and legal teams are racing to catch up before the June deadline arrives.
What Happened
Two of the most significant state AI laws in the United States are either now in effect or imminent. According to JD Supra's April 2026 AI Legal Watch:
Texas TRAIGA (Responsible Artificial Intelligence Governance Act) took effect January 1, 2026. It bans specific harmful AI uses outright and imposes disclosure requirements when government agencies and healthcare providers deploy AI systems. Texas is a large market, and TRAIGA's healthcare and government provisions immediately affect a significant number of AI deployments.
Colorado's AI Act follows in June 2026, creating a compliance deadline that is now less than two months away. Colorado's law is broader—it establishes requirements for high-risk AI systems that make consequential decisions affecting individuals, including disclosure, documentation, and impact assessment obligations.
This confluence creates a tight, dual-deadline compliance environment that legal teams across industries have been tracking closely.
Context
The U.S. has no comprehensive federal AI law. Into that absence, states have been moving. Texas and Colorado represent two different approaches:
- Texas TRAIGA is primarily prohibitory and disclosure-focused: here are things you cannot do, and here is what you must tell people when AI is involved.
- Colorado's AI Act is more process-oriented: if your AI system makes high-risk decisions, you must document it, assess its impact, and give individuals avenues for appeal.
Both laws exist in direct tension with the White House's March 2026 National AI Policy Framework, which recommends Congress preempt state AI laws that impose "undue burdens." That preemption, if it happens, would be a major development—but it requires Congressional action that is far from guaranteed or imminent. Unless and until federal law preempts them, the Texas and Colorado laws apply as written.
The compliance challenge is compounded by the fact that many companies do not have a complete inventory of their AI systems—including those purchased from third-party vendors and embedded in enterprise software. You cannot comply with a disclosure or documentation requirement for AI systems you don't know you're running.
Details
Texas TRAIGA — Key Provisions:
- Bans specific harmful AI uses (deepfake-based fraud, certain manipulative AI practices)
- Requires government agencies to disclose when AI is used in public-facing decisions
- Requires healthcare providers to disclose AI use in clinical decision support
- Effective: January 1, 2026
Colorado AI Act — Key Provisions:
- Covers "high-risk AI systems" making consequential decisions (employment, credit, healthcare, housing, education, legal services)
- Requires impact assessments before deployment
- Requires disclosure to individuals when AI is used in decisions affecting them
- Requires mechanisms for individuals to appeal AI-influenced decisions
- Effective: June 2026
Companies most immediately affected: Any enterprise deploying AI in Texas healthcare, Texas government contracts, or Colorado-based or Colorado-directed operations across the high-risk categories.
Impact
For corporate legal and compliance teams: The Colorado June deadline is the immediate action item. Legal teams need to:
- Inventory all AI systems in use, including vendor-supplied AI embedded in enterprise software
- Classify each system against Colorado's "high-risk" definition
- Document deployment of systems that qualify
- Implement disclosure and appeal mechanisms for affected decisions
- Revise vendor agreements to ensure AI vendors are providing the documentation required for compliance
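The inventory-and-classify steps above are, in practice, a triage exercise. As a rough illustration (not legal advice), the sketch below sorts a hypothetical AI-system inventory against the consequential-decision categories the Colorado law names. The inventory format, field names, and the treatment of any domain match as "likely in scope" are illustrative assumptions, not a legal determination of high-risk status.

```python
# Hypothetical sketch: triaging an AI-system inventory against the
# consequential-decision categories named in Colorado's AI Act.
# Whether a system is actually "high-risk" is a legal judgment;
# this only flags candidates for counsel to review.

HIGH_RISK_CATEGORIES = {
    "employment", "credit", "healthcare",
    "housing", "education", "legal services",
}

def triage(inventory):
    """Split AI systems into likely-in-scope and out-of-scope buckets
    based on the decision domains each system touches."""
    in_scope, out_of_scope = [], []
    for system in inventory:
        domains = {d.lower() for d in system.get("decision_domains", [])}
        if domains & HIGH_RISK_CATEGORIES:
            in_scope.append(system)
        else:
            out_of_scope.append(system)
    return in_scope, out_of_scope

# Illustrative inventory entries, including a vendor-supplied tool.
inventory = [
    {"name": "resume-screener", "vendor": "ThirdPartyCo",
     "decision_domains": ["Employment"]},
    {"name": "email-autocomplete", "vendor": "SuiteVendor",
     "decision_domains": []},
]

flagged, cleared = triage(inventory)
print([s["name"] for s in flagged])   # ['resume-screener']
```

Even a toy pass like this makes the third-party problem concrete: the vendor-supplied resume screener is flagged, which is exactly the kind of embedded tool many organizations have never inventoried.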
This is not a theoretical exercise—it requires coordination across legal, IT, procurement, and the business units actually using AI tools. Most organizations underestimate how widely AI is embedded in their vendor software stack.
For AI vendors: Vendors who cannot provide documentation of their AI systems' behavior, training data, and performance characteristics are becoming a compliance liability for their enterprise customers. Expect enterprise procurement teams to add AI documentation requirements to vendor qualification processes. Vendors who don't meet those requirements will lose deals.
For the legal services sector itself: Law firms that advise corporate clients on AI compliance are facing high demand. The market for AI legal advisory services is growing rapidly—driven by exactly this state-level compliance wave. This is also a direct application of AI in legal services: tools that help legal teams audit AI deployments are being positioned as compliance infrastructure.
For the federal preemption debate: Every company that faces dual compliance requirements across Texas, Colorado, and other states with AI laws in the pipeline has a concrete business reason to support federal preemption. The compliance cost argument is real and will be used effectively in lobbying. Whether Congress responds to it in a way that preserves meaningful consumer protections—or simply eliminates them—is the critical policy question.
What to Watch
Colorado's June 2026 effective date is the immediate deadline. Watch for enforcement actions in the first six months after the law takes effect—the cases Colorado brings will define the practical scope of the law's requirements and signal to other states whether the compliance framework is working as intended. Also watch for the first federal preemption bill that actually advances through committee: that will clarify whether the state AI law landscape is temporary or permanent.
Hector Herrera covers legal and AI for NexChron.