Colorado’s AI Law Is Still Coming — And Insurance Executives Need to Pay Attention

The deadline moved, but the stakes didn’t. Here’s what the Colorado AI Act means for carriers, wholesalers, and agents operating in the state.


Key Takeaways

  • Colorado’s SB 24-205, the first comprehensive state AI law in the U.S., was delayed from February 1 to June 30, 2026 — but its core requirements remain fully intact.
  • The law applies to anyone who develops or deploys “high-risk” AI systems that make or substantially influence consequential decisions affecting Colorado consumers.
  • For insurance, that means underwriting models, claims decisioning tools, pricing algorithms, and fraud detection systems are all potentially in scope.
  • Insurers already subject to the Colorado Division of Insurance’s regulations on predictive models may qualify for a compliance safe harbor, but claiming it requires documentation.
  • The 2026 legislative session is considering amendments, creating ongoing uncertainty. Executives should build toward the current law rather than waiting for a version that may or may not arrive.

What Just Happened — and Why It Matters

Colorado Governor Jared Polis signed SB 24-205 into law in May 2024, making Colorado the first state in the nation to attempt comprehensive regulation of AI systems used in high-stakes decisions. Originally set to take effect February 1, 2026, the law was delayed to June 30, 2026 after an August 2025 special legislative session failed to reach consensus on substantive amendments and settled for pushing back the date.

Do not read the delay as a retreat. As the American Bar Association noted in November 2025, nothing fundamental changed despite intense lobbying from over 150 industry representatives during the special session. The core framework survived intact: risk assessments, impact assessments, transparency requirements, and the duty of reasonable care all remain in place. The extra months exist to allow the 2026 regular legislative session to consider refinements — not to gut the law.

For insurance executives, the message is straightforward. You have roughly four months to get your AI governance house in order before enforcement begins. The question is not whether to prepare, but how.


What the Law Actually Requires

SB 24-205 draws a sharp distinction between two types of regulated entities.

Developers are companies that build or substantially modify AI systems. If your organization has an internal data science team creating proprietary underwriting or pricing models, you are a developer under this law. Developers must document their systems’ intended uses, known risks, and training data, and make that documentation available to the companies deploying their systems.

Deployers are companies that use AI systems to make or substantially influence consequential decisions affecting consumers. For most carriers, wholesalers, and larger agencies, this is the more relevant category. If your organization uses any AI-powered tool — whether built internally or purchased from a vendor — that influences an underwriting decision, a claims outcome, a pricing determination, or a coverage recommendation for a Colorado consumer, you are a deployer.

Deployer obligations under the law include:

A written risk management policy. You must establish and maintain a documented program that identifies, assesses, and mitigates known or foreseeable risks of algorithmic discrimination in your AI systems. The law specifically references the NIST AI Risk Management Framework and ISO/IEC 42001 as acceptable models.

Impact assessments. An initial impact assessment must be completed within 90 days of the law’s effective date, then repeated at least annually and within 90 days of any substantial modification to a covered AI system. The assessment must document the system’s purpose, intended use cases, potential for discriminatory outcomes, and steps taken to mitigate identified risks. (These deadlines, along with the incident-reporting clock below, are sketched in code after this list.)

Consumer disclosure. If you deploy an AI system that interacts directly with consumers, you must disclose that they are interacting with an AI. For most insurance applications — where AI is operating in the background rather than chatting with policyholders — this requirement has limited practical impact. But it becomes relevant if your organization uses AI-powered customer service tools.

Incident reporting. If you discover that a deployed AI system has caused algorithmic discrimination, you have 90 days to notify the Colorado Attorney General. This is not optional, and it is not triggered only by consumer complaints — your own internal audits can create this obligation.

Vendor accountability. This is where many organizations will be caught off guard. The law does not limit your obligations to AI systems you built yourself. If you are using a third-party vendor’s AI tools to make consequential decisions, you are still responsible for ensuring those tools comply. You need documentation from your vendors — and your contracts need to require it.
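To make the timing rules concrete, here is a minimal sketch in Python of how a compliance calendar might compute these deadlines. The 90-day and annual figures come from the obligations above; the effective date constant, the function names, and the structure are illustrative assumptions, not statutory language.

```python
from datetime import date, timedelta

# Figures taken from the deployer obligations above; confirm against the
# statute's final text before relying on them.
EFFECTIVE_DATE = date(2026, 6, 30)   # current effective date
WINDOW_90_DAYS = timedelta(days=90)

def initial_assessment_due() -> date:
    """Initial impact assessment: within 90 days of the effective date."""
    return EFFECTIVE_DATE + WINDOW_90_DAYS

def next_assessment_due(last_assessed: date,
                        modified_on: date | None = None) -> date:
    """Repeat at least annually, and within 90 days of any substantial
    modification -- whichever deadline arrives first."""
    annual = last_assessed + timedelta(days=365)
    if modified_on is not None:
        return min(annual, modified_on + WINDOW_90_DAYS)
    return annual

def ag_notification_due(discovered_on: date) -> date:
    """Notify the Colorado Attorney General within 90 days of discovering
    algorithmic discrimination -- internal audits can start this clock."""
    return discovered_on + WINDOW_90_DAYS
```

Trivial arithmetic, but that is the point: each obligation reduces to a date your organization must be able to compute, track per system, and defend.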


The Insurance-Specific Safe Harbor

Here is a detail that many general-purpose summaries of this law miss entirely, and it matters significantly for insurance organizations.

Colorado’s law includes a provision stating that an insurer — or a developer of AI used by an insurer — is in full compliance with SB 24-205 if the entity is already subject to the Colorado Division of Insurance’s regulations governing the use of external consumer data, algorithms, and predictive models. Those regulations, updated in recent years and codified under Colorado Regulation 10-1-1, impose their own requirements for bias testing, documentation, and ongoing model monitoring.

In plain terms: if you are already complying with the Division of Insurance’s AI regulations, you may not need to build a separate compliance program under SB 24-205. The insurance regulatory framework is treated as a functional equivalent.

This is genuinely good news for carriers who have already invested in AI governance under insurance-specific guidance. It is not, however, a free pass. You still need to document that your existing compliance program meets the standard. The safe harbor must be claimed, not assumed. If a consumer files a complaint or the Attorney General’s office opens an inquiry, you will need to demonstrate that your insurance regulatory compliance actually satisfies the law’s requirements.

Agencies that use AI tools but are not subject to Division of Insurance predictive model regulations — which is most of them — cannot rely on this safe harbor and should evaluate their exposure independently.
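Reduced to its decision logic, the safe-harbor analysis above looks roughly like the following sketch. This is an illustration of the reasoning, not legal advice, and the function and parameter names are invented for the example.

```python
def safe_harbor_posture(subject_to_doi_model_regs: bool,
                        doi_compliance_documented: bool) -> str:
    """Rough decision logic for the SB 24-205 insurance safe harbor,
    per the discussion above. Illustrative only."""
    if not subject_to_doi_model_regs:
        # Most agencies land here and must assess exposure on their own.
        return "no safe harbor: evaluate SB 24-205 obligations independently"
    if not doi_compliance_documented:
        return "potentially eligible: document DOI compliance before claiming it"
    return "safe harbor likely available: keep documentation current and accessible"
```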


The Bigger Picture: Colorado as the Bellwether

Colorado’s experience matters beyond its own borders for two reasons.

First, at least 17 other states introduced or advanced AI bills targeting insurance in 2025, according to Baker Tilly. Virginia’s legislature passed a bill closely mirroring Colorado’s approach, though the governor vetoed it. Connecticut advanced similar legislation. Pennsylvania has proposed requiring health insurers to publicly disclose AI tools used in claims decisions. The NAIC’s Big Data and Artificial Intelligence Working Group spent much of 2025 debating whether to develop a comprehensive AI model law that could create more uniform standards across states.

Second, Colorado’s law is still in play: the current legislative session is actively considering amendments, including narrowing the definition of “high-risk AI system,” adjusting deployer obligations, and expanding exemptions. The version that takes effect June 30 may look somewhat different from what was signed in 2024. Organizations that build their compliance programs around the current framework will be far better positioned to adapt than those that wait to see the final text.

There is also a federal dimension. The Trump administration’s December 2025 executive order on state AI regulation raised concerns about federal preemption of state AI laws, prompting the NAIC to issue a formal statement defending state-based insurance oversight. The NAIC noted that existing state frameworks “protect consumers, foster innovation, and allow flexibility essential in a rapidly changing world.” The federal-state tension is real, and it adds another layer of uncertainty to long-range planning. For now, state laws remain in effect. Build for compliance with those, and reassess as the federal picture clarifies.


What You Should Be Doing Right Now

The June 30 deadline provides a defined window, but the work required is not trivial. Here is a practical roadmap organized by role.

For carriers and wholesalers:

Start with an AI inventory. Document every AI or machine learning system your organization uses that touches a decision affecting Colorado policyholders or applicants. Underwriting models, claims triage tools, fraud scoring systems, pricing algorithms, and customer-facing chatbots all belong on this list. (A sketch of what one inventory entry might capture follows this list of steps.)

For each system, determine whether it qualifies as “high-risk” under the law’s definition. The threshold is whether the system makes or substantially influences a consequential decision — which, in insurance, covers most of the use cases listed above.

Assess your vendor contracts. If third-party AI vendors cannot provide documentation of their systems’ training data, intended use cases, and bias testing results, that is a compliance gap that needs to be addressed now, not in May.

Determine whether your existing Division of Insurance compliance program covers the safe harbor. If it does, document that determination explicitly and keep it accessible.

Build or formalize your risk management policy. If you already have AI governance documentation, review it against the NIST AI Risk Management Framework. If you do not have formal documentation, the June 30 deadline is a forcing function to create it.
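For teams starting from zero, here is a minimal sketch of what one inventory entry might capture, folding in the high-risk test and the vendor-documentation check from the steps above. Every field name and the required-documents list are assumptions for illustration; tailor them to your own systems and counsel’s guidance.

```python
from dataclasses import dataclass, field

# Decision types plausibly "consequential" in insurance, per the use cases
# discussed above (an assumption, not the statutory definition).
CONSEQUENTIAL_DECISIONS = {"underwriting", "pricing", "claims",
                           "fraud_scoring", "coverage_recommendation"}

# Documentation you should be able to produce for each third-party system.
REQUIRED_VENDOR_DOCS = {"training_data_summary", "intended_use_cases",
                        "bias_testing_results"}

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (illustrative fields)."""
    name: str                         # e.g., "claims triage model"
    decision_influenced: str          # e.g., "claims"
    affects_colorado_consumers: bool
    vendor: str | None = None         # None for in-house systems
    vendor_docs_on_hand: set[str] = field(default_factory=set)

def is_high_risk(record: AISystemRecord) -> bool:
    """Does the system make or substantially influence a consequential
    decision for Colorado consumers?"""
    return (record.affects_colorado_consumers
            and record.decision_influenced in CONSEQUENTIAL_DECISIONS)

def vendor_doc_gaps(record: AISystemRecord) -> set[str]:
    """Vendor documentation still missing for a third-party system."""
    if record.vendor is None:
        return set()
    return REQUIRED_VENDOR_DOCS - record.vendor_docs_on_hand
```

Run against a real inventory, is_high_risk flags which systems need impact assessments, and vendor_doc_gaps gives each vendor conversation a concrete agenda.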

For independent agents and agencies:

Most independent agencies do not develop AI systems, and the AI tools they use — AMS platforms, comparative raters, communication tools — typically do not rise to the level of making consequential decisions about Colorado consumers in the way the law contemplates. That said, as AI becomes more embedded in agency operations, this assessment will need to be revisited.

If your agency uses any AI tool that generates coverage recommendations, flags accounts for non-renewal, or automates any decision that directly affects a client’s coverage or pricing, it is worth a conversation with your E&O carrier and legal counsel about your exposure under this law.

The more immediate concern for agents is carrier compliance. As your carrier and MGA partners build AI governance programs, expect to see new data-sharing requirements, disclosure language, and vendor questionnaires flowing through the distribution chain. Understanding why those requests are coming will help you respond to them efficiently.


The Bottom Line

Colorado’s AI Act is not a distant regulatory hypothetical. It is a law with a June 30, 2026 effective date, enforceable by the state’s Attorney General, carrying penalties under Colorado’s Consumer Protection Act. The delay bought organizations additional time — not a reprieve.

For insurance executives, the path forward is clear. Inventory your AI systems, close your vendor documentation gaps, leverage the insurance regulatory safe harbor if it applies to you, and build a governance program that will hold up to scrutiny. The organizations that treat this as a compliance exercise to complete before a deadline will be adequately protected. The organizations that treat it as a foundation for responsible AI governance going forward will be better positioned as more states follow Colorado’s lead.

That second group is the one worth being in.


Sources