Texas Breaks New Ground with Landmark AI Legislation: What TRAIGA Means for the Industry
Texas has officially joined the ranks of pioneering states in artificial intelligence regulation. On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA or the Act) into law, making Texas the third US state, after Colorado and Utah, to adopt a comprehensive artificial intelligence (AI) law. Set to take effect on January 1, 2026, TRAIGA represents a significant shift in how states approach AI governance and could reshape the regulatory landscape for AI companies nationwide.
A Unique Approach to AI Regulation
Unlike many proposed AI regulations that focus heavily on risk assessment and compliance burdens, Texas has crafted a law that balances innovation with protection. The Act aims to protect Texas consumers from the foreseeable risks associated with using AI systems and contains language promoting transparency, notice to consumers, and the responsible development and use of AI systems.
What sets TRAIGA apart is how much it changed during the legislative process. Lawmakers introduced TRAIGA in December 2024 (as HB 1709) with provisions to extensively regulate the use of “high-risk artificial intelligence systems,” but the Texas legislature significantly narrowed the scope of the now-enacted law. This refinement produced a more focused and practical approach to AI governance.
Key Provisions: Prohibited Practices Take Center Stage
The heart of TRAIGA lies in its clear prohibition of specific AI applications deemed harmful or unethical. Most notably, the Act imposes categorical restrictions on the development and deployment of AI systems for certain purposes, including behavioral manipulation, discrimination, the creation or distribution of child pornography and unlawful deepfakes, and infringement of constitutional rights.
The prohibited practices extend across several critical areas:
Behavioral Manipulation and Social Control: TRAIGA prohibits using AI to manipulate human behavior, assign a social score (by government entities), discriminate unlawfully, infringe on constitutional rights, or capture biometric data without consent.
Harmful Activities: AI systems cannot be developed or deployed to intentionally incite or encourage self-harm or criminal activity.
Healthcare Transparency: The law introduces specific requirements for healthcare applications. Under TRAIGA, healthcare providers must clearly disclose AI system use in treatment contexts, ensuring patient awareness and informed consent for AI-assisted medical decisions.
Government AI Use and Transparency Requirements
TRAIGA places particular emphasis on government use of AI systems, requiring unprecedented transparency. Government agencies are required to disclose to each consumer, before or at the time of interaction, that the consumer is interacting with AI (even if such disclosure would be obvious to a reasonable consumer).
This transparency requirement extends beyond simple disclosure. TRAIGA sets forth disclosure requirements for government entity AI developers and deployers, outlines prohibited uses of AI, and establishes civil penalties for violations.
Innovation-Friendly: The Regulatory Sandbox
Perhaps the most industry-friendly aspect of TRAIGA is its regulatory sandbox program. For the AI industry, the bill creates a regulatory “sandbox”: a controlled environment where developers can test AI systems without being penalized under certain state rules. This approach demonstrates Texas’s commitment to fostering innovation while maintaining oversight.
The sandbox isn’t a free-for-all, however. Prohibitions on manipulation, discrimination, and unlawful content remain in force. Sandbox participants must submit quarterly reports on system performance, risk mitigation, and stakeholder feedback.
Enforcement and Compliance
Enforcement authority is vested exclusively in the Texas attorney general, creating a centralized approach to compliance oversight. This singular enforcement mechanism could provide more predictable regulatory interpretation compared to multi-agency approaches seen in other jurisdictions.
TRAIGA also addresses privacy concerns by amending existing privacy laws to cover AI-specific issues, including important updates to biometric data handling under Texas’s Capture or Use of Biometric Identifier Act (CUBI). Most critically, TRAIGA introduces exemptions to CUBI for (1) developing and deploying AI systems that are not used to uniquely identify individuals and (2) developing and deploying AI systems used to prevent or respond to security incidents, identity theft, fraud, harassment, or other illegal activities.
Implications for the AI Industry
TRAIGA’s passage signals several important trends for AI companies:
State Leadership in AI Governance: With federal AI regulation still evolving, states like Texas, Colorado, and Utah are establishing the regulatory templates that may influence national policy. Companies operating across multiple states will need to navigate an increasingly complex patchwork of regulations.
Focus on Use Cases Over Technology: Rather than regulating AI technology broadly, TRAIGA focuses on specific prohibited applications. This approach provides clearer guidance for companies about what they cannot do, rather than imposing broad compliance burdens on all AI development.
Innovation-Safety Balance: The regulatory sandbox demonstrates that states are seeking ways to encourage AI innovation while maintaining appropriate safeguards. This balanced approach could become a model for other states.
Compliance Complexity: For national AI companies, TRAIGA adds another layer of state-specific compliance requirements. Companies will need to ensure their AI systems and practices comply with Texas’s prohibited use cases while operating in the state.
Transparency as a Baseline: The emphasis on disclosure, particularly for government AI use, suggests that transparency will be a fundamental requirement across jurisdictions.
Looking Ahead
TRAIGA represents a pragmatic middle ground in AI regulation: neither the hands-off approach some industry advocates prefer nor the comprehensive risk-based framework seen in the EU AI Act. As the law takes effect in January 2026, it will provide a real-world test of whether this focused, prohibition-based approach can effectively balance innovation with consumer protection.
For AI companies, TRAIGA underscores the importance of building compliance capabilities that can adapt to varying state requirements. The regulatory landscape for AI is clearly evolving from a federal vacuum to a state-by-state patchwork, and companies that can navigate this complexity while maintaining ethical AI practices will be best positioned for success.
The Texas approach may well influence other states considering AI legislation, making TRAIGA not just a local concern but a potential template for the future of AI governance in America. As we approach the law’s effective date, the AI industry will be watching closely to see how this unique regulatory experiment unfolds.
AI Disclaimer: This content was created with assistance from artificial intelligence technology. While content is based on factual information from the source material, readers should verify all details directly with the respective sources before making business decisions.