By James W. Moore | InsuranceIndustry.AI


Executive Summary / Key Takeaways

  • A noise audit led by Nobel laureate Daniel Kahneman found a median 55% variance among underwriters pricing identical risks at the same carrier, revealing governance gaps that predate AI entirely.
  • AI-driven systems can shift insurance governance from sample-based auditing and tacit judgment to continuous oversight and documented reasoning.
  • The NAIC’s 12-state AI Evaluation Tool pilot and Colorado’s AI Act are already building the regulatory infrastructure for evidence-based governance.
  • AI improves governance not by making better decisions, but by making decision-making observable, measurable, and controllable for the first time.
  • Observability alone isn’t governance. Insurers that treat AI as a productivity tool rather than governance infrastructure will amplify risk, not reduce it.

The 55% Problem

In 2015, Nobel laureate Daniel Kahneman and his colleagues conducted a noise audit inside a large insurance company. They presented identical case files to 48 experienced underwriters and asked each to set a premium. Management expected roughly 10% variance between high and low quotes.

The actual median variance was 55%.

One underwriter might price a risk at $9,500. Another, working from the same file at the same company, might quote $16,700. When the experiment was replicated at a second carrier, the variance approached 60%. Kahneman’s conclusion was blunt: the underwriters were, in a meaningful sense, wasting their time.
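
For readers who want to check the arithmetic, the headline figure in a noise audit is, roughly speaking, the median relative difference across pairs of quotes for the same case: the gap between two quotes divided by their average. A minimal sketch of that calculation, using the two quotes above (illustrative code, not material from the study):

```python
from itertools import combinations
from statistics import median

def noise_index(quotes):
    """Median relative difference across all pairs of quotes for the same risk.

    For each pair, divide the absolute gap by the pair's average;
    the median of those ratios is the headline 'noise' figure.
    """
    ratios = [abs(a - b) / ((a + b) / 2) for a, b in combinations(quotes, 2)]
    return median(ratios)

# The two quotes from the example above differ by roughly 55% of their average.
print(round(noise_index([9_500, 16_700]), 2))  # 0.55
```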

This wasn’t a technology failure. There was no AI involved. It was a governance failure, one that had been embedded in insurance operations for decades without anyone measuring it. The decisions were inconsistent, the reasoning was undocumented, and the variance was invisible to the executives overseeing the process.

That’s the starting point for a conversation the industry needs to have. Not whether AI creates governance problems, but whether the governance we’ve been relying on was ever as rigorous as we assumed.

Four Structural Shifts

After three articles examining how AI creates accountability gaps, vendor risk illusions, and autonomous decision-making challenges, it’s worth asking the inverse question: what if AI, properly governed, could close governance gaps the industry has tolerated for decades simply because there was no alternative?

The argument isn’t that AI makes decisions better. It’s that AI makes decision-making systems more observable, measurable, and controllable. That distinction matters. Here are four structural shifts that illustrate why.

From Sampling to Full Population Oversight

Traditional compliance relies on reviewing a small sample of files, often months after the decisions were made. An EY survey of insurance carriers found that the average compliance analytics program scored near Stage 2 on a four-stage maturity scale, characterized by point-in-time, episodic review. Most carriers are still looking in the rearview mirror.

AI-enabled compliance monitoring changes the denominator. Instead of auditing 2-5% of transactions after the fact, systems can evaluate every transaction in real time, flagging anomalies against policy, regulation, and precedent as they occur. One multi-state carrier reported reducing compliance-related audit findings by 70% in its first year after implementing real-time monitoring.
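
What “evaluate every transaction” looks like in practice varies by carrier and vendor, but the underlying pattern is straightforward: each decision passes through a set of machine-checkable rules, and anything that trips one is routed to a human. A minimal sketch of that pattern, with a hypothetical rate-deviation rule and threshold:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    state: str
    line_of_business: str
    quoted_premium: float
    filed_rate_premium: float  # premium implied by the filed rating plan

# Each rule returns a finding (string) or None. Rule names and thresholds are illustrative.
def rate_deviation_rule(t: Transaction) -> str | None:
    deviation = abs(t.quoted_premium - t.filed_rate_premium) / t.filed_rate_premium
    if deviation > 0.10:
        return f"Quote deviates {deviation:.0%} from the filed rate"
    return None

RULES: list[Callable[[Transaction], str | None]] = [rate_deviation_rule]

def review(t: Transaction) -> list[str]:
    """Run every rule against every transaction; return findings to route to a human."""
    return [finding for rule in RULES if (finding := rule(t)) is not None]
```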

This isn’t incremental improvement. It removes a fundamental constraint that governance has operated under since the industry began regulating itself.

From Tacit Judgment to Explicit Reasoning

Kahneman’s underwriting study didn’t just reveal variance. It revealed invisibility. The underwriters couldn’t explain their own reasoning in ways that would allow meaningful comparison, and neither could their managers. The decisions lived in individual judgment, shaped by experience, mood, caseload, and factors no one was tracking.

AI systems, when designed properly, force reasoning to be externalized. Every decision can be accompanied by a generated rationale. Inputs, outputs, and the logic connecting them can be logged automatically. Policy language can be explicitly referenced at the point of decision rather than assumed to have been internalized during training.

This creates something insurers have never truly had at scale: a complete, queryable record of why decisions were made. For carriers facing market conduct exams, that shift from reconstructing reasoning after the fact to documenting it in real time is significant.
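
One way to picture that record is as a structured event written for every automated or assisted decision: the inputs, the output, the policy language consulted, and the generated rationale, appended to a log that exam teams can later query. A minimal sketch, with illustrative field names rather than any standard schema:

```python
import datetime
import json
from dataclasses import asdict, dataclass, field

@dataclass
class DecisionRecord:
    decision_id: str
    decision_type: str            # e.g. "underwriting_quote", "claim_determination"
    inputs: dict                  # the facts the system was given
    output: dict                  # what it decided
    policy_references: list[str]  # guideline / regulation sections consulted
    rationale: str                # generated explanation for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record as one JSON line; a flat file stands in for a real audit store."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```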

From Static Rules to Adaptive Control

Traditional governance depends on training, procedure manuals, and post-hoc correction. An underwriting guideline is written, distributed, and then compliance hopes it gets applied consistently across every office, every state, and every line of business. It’s soft control in a hard regulatory environment.

AI allows governance rules to be embedded directly into workflows. Underwriting guidelines can be enforced at the point of decision. Claims handling rules can be checked in real time. Regulatory constraints can be applied dynamically based on jurisdiction.
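
A sketch of what “enforced at the point of decision” can mean in code: the guideline lives next to the decision logic, keyed by jurisdiction, and a quote that violates it never leaves the system. The jurisdictions and limits below are invented for illustration, not actual regulatory values:

```python
# Hypothetical jurisdiction-specific constraints, applied before a quote is released.
JURISDICTION_RULES = {
    "CO": {"max_rate_increase": 0.15},  # illustrative values only
    "NY": {"max_rate_increase": 0.10},
}

def enforce_guidelines(state: str, prior_premium: float, proposed_premium: float) -> float:
    """Cap the proposed premium at the jurisdiction's allowed increase, never silently."""
    rules = JURISDICTION_RULES.get(state, {"max_rate_increase": 0.10})
    cap = prior_premium * (1 + rules["max_rate_increase"])
    if proposed_premium > cap:
        # The constraint is applied at the moment the decision is made, and the override is visible.
        print(f"Proposed premium {proposed_premium:.2f} capped at {cap:.2f} for {state}")
        return cap
    return proposed_premium
```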

This matters now more than ever. Colorado’s AI Act takes effect June 30, 2026, requiring insurers to evaluate AI systems for discriminatory outcomes across protected classes and submit annual compliance reports. The NAIC’s Model Bulletin has been adopted in approximately 24 states. At least 17 states introduced or advanced insurance-related AI bills in 2025. The regulatory environment is moving toward continuous, evidence-based oversight, and static governance frameworks aren’t built for that pace.

From Reactive Audits to Predictive Governance

Most insurance governance is backward-looking: market conduct exams, post-incident reviews, annual audits. By the time a problem surfaces through traditional channels, it has often been compounding for months or years.

AI enables a different posture: detecting drift in underwriting patterns before it becomes systemic, identifying emerging bias in claims outcomes, and flagging operational risks in real time. McKinsey has described this as the industry’s shift from “detect and repair” to “predict and prevent.” Applied to governance specifically, it means intervention before systemic failure rather than investigation after the fact.
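
Drift detection does not have to be exotic. Even a plain statistical comparison of a recent window of decisions against a baseline period, segment by segment, can surface a shift long before a market conduct exam would. A minimal sketch using a simple z-score check (the metric and threshold are illustrative stand-ins for whatever a carrier actually uses):

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float], z_threshold: float = 3.0) -> bool:
    """Flag when the recent mean sits far outside the baseline distribution of decisions.

    `baseline` and `recent` might hold approval indicators (0/1) or premiums for one segment.
    A z-score on the recent mean is a deliberately simple stand-in for richer drift metrics.
    """
    base_mean, base_sd = mean(baseline), stdev(baseline)
    if base_sd == 0:
        return mean(recent) != base_mean
    z = abs(mean(recent) - base_mean) / (base_sd / len(recent) ** 0.5)
    return z > z_threshold
```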

The Regulator Signal

The NAIC’s AI Systems Evaluation Tool pilot, now underway across 12 states, may be the clearest signal of where governance expectations are heading. The tool asks insurers to document their AI inventory, governance frameworks, high-risk system details, and data inputs. It is explicitly designed to help regulators understand how companies use AI and to inform long-term standards for market conduct exams.

Read between the lines, and the implication is significant: regulators may come to prefer AI-governed environments precisely because they are more auditable, more consistent, and less dependent on individual judgment than the human-driven processes they replace. That’s not a prediction. It’s the logical consequence of a regulatory framework built around transparency, traceability, and documented reasoning.

Where This Breaks Down

None of this happens automatically. AI improves governance only if insurers treat it as governance infrastructure rather than a productivity tool.

Three failure modes are worth watching.

First, AI can centralize risk faster than it distributes control. A system that scales bad logic across an entire book of business does more damage than an inconsistent underwriter ever could.

Second, observability is not the same as understanding. A clean audit trail doesn’t guarantee sound decision-making. It guarantees documentation. Insurers that confuse the two will develop false confidence in systems they don’t fully comprehend.

Third, governance theater remains a real risk. Checking the boxes on an NAIC evaluation tool or filing a Colorado compliance report doesn’t mean the underlying governance is substantive. Documentation can become performative if there’s no independent validation behind it.

The Real Question

The uncomfortable reality is that AI hasn’t introduced governance problems so much as it has surfaced how shallow existing governance models were all along. The 55% underwriting variance Kahneman found wasn’t caused by AI. It was caused by humans operating in a system with no mechanism to detect it.

The question for insurance leaders isn’t whether AI is good or bad for governance. It’s whether they’ll use AI to scale control, or just scale decisions.


Action Items for Insurance Leaders

  1. Conduct a noise audit. Before debating AI governance frameworks, measure the consistency of your current human decision-making. You may find governance gaps that have nothing to do with technology.
  2. Evaluate your compliance monitoring maturity. If your organization is still relying on sample-based, after-the-fact auditing, AI-enabled continuous monitoring represents a structural upgrade worth prioritizing.
  3. Prepare for evidence-based regulation. The NAIC evaluation tool and Colorado’s AI Act signal a shift toward demonstrated compliance rather than documented policies. Build governance that can show how decisions are actually made, not just how they’re supposed to be made.
  4. Treat AI as governance infrastructure. Productivity gains are a byproduct. The strategic value of AI in governance is observability, consistency, and real-time control. Budget and design accordingly.

Sources

  1. Kahneman, D., Sibony, O., & Sunstein, C.R. (2021). Noise: A Flaw in Human Judgment. Insurance Thought Leadership review
  2. National Association of Insurance Commissioners. AI Systems Evaluation Tool Pilot (March–September 2026). NAIC AI Topic Page
  3. Fenwick & West LLP. “NAIC Expands AI Systems Evaluation Tool Pilot Program to 12 States.” March 2026. Fenwick Insights
  4. Colorado General Assembly. SB24-205, Consumer Protections for Artificial Intelligence. Colorado Legislature
  5. EY. “Four Key Steps to Insurance Compliance Risk Analytics.” April 2025. EY Insights
  6. Baker Tilly. “The Regulatory Implications of AI and ML for the Insurance Industry.” December 2025. Baker Tilly
  7. Workday. “3 Ways AI Has Changed the Insurance Industry.” November 2025. Workday Blog
  8. McKinsey & Company. “The Future of AI in the Insurance Industry.” July 2025. McKinsey
  9. Vertafore. “Shifting to Smart Compliance: AI in Insurance Regulation.” October 2025. Vertafore Blog
  10. Crowell & Moring LLP. “NAIC Intensifies AI Regulatory Focus.” March 2026. Crowell & Moring

 


Check out our three-part series on issues with AI in Insurance Governance 


 

AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.