By James W. Moore | InsuranceIndustry.AI
Executive Summary / Key Takeaways
- AI was deployed to improve efficiency and accuracy. Exposing decades of institutional inconsistency was not in the requirements document — but that’s what it’s doing.
- Inconsistency in underwriting and claims wasn’t just random noise. It had texture, direction, and relationships attached to it. AI makes all of that visible.
- Underwriting decisions aren’t limited to yes or no. Pricing and coverage terms can serve as quiet acceptance or quiet rejection — and AI logs both.
- When variance becomes a documented event rather than an invisible habit, the accountability architecture of an entire organization changes.
- This isn’t an argument against human judgment. It’s a reckoning with what happens when undocumented human judgment meets a system that never forgets.
The Auditor Nobody Hired
Nobody put “expose 40 years of institutional inconsistency” in the AI implementation RFP.
The requirements document called for faster processing, improved accuracy, better fraud detection, and reduced loss ratios. All reasonable objectives. All achievable. Carriers and agencies that have deployed AI in underwriting and claims are, by most accounts, hitting those targets.
But AI arrived with a side effect that nobody budgeted for: it remembers everything, it logs everything, and it has no interest in protecting anyone’s professional reputation. It doesn’t know about the longtime client relationship. It doesn’t know it’s a Friday afternoon. It doesn’t know that the underwriting manager prefers not to see certain questions asked.
It just logs the decision. And the next one. And the one after that.
Over time, those logs tell a story that the industry has never had to confront in quite this way before.
What Inconsistency Actually Looked Like
If you’ve spent time on an underwriting desk, you already know what this section is about.
Inconsistency wasn’t usually malfeasance. It was something more human and, in some ways, harder to address because of it. It was the commercial account that had been with the carrier for fifteen years, whose renewal got a second look at pricing when the numbers didn’t quite work. It was the claims adjuster who ran a little tighter on Fridays before a long weekend, or a little looser in December when the holiday slowdown meant fewer files to worry about. It was the underwriter who’d had a bad quarter and was writing more conservatively than the guidelines technically required.
And it extended further than most governance discussions acknowledge.
In insurance, a decision isn’t limited to approve or decline. Underwriting has always offered a third option that doesn’t appear on any form: price it out of the market, or price it into a deal. An underwriter who wants to write a piece of business finds ways to make the numbers work — broader coverage terms, a more favorable rate, a deductible structured to close. An underwriter who doesn’t want the account doesn’t have to decline it. They can quote $22,000 when the market is at $14,000, or attach conditions they know won’t be accepted, or offer coverage so narrow it functionally doesn’t solve the client’s problem.
The bias runs in the other direction, too, and this one has a direct revenue consequence.
Underwriter Joe wrote a restaurant account eight years ago that produced a record-setting loss. Underwriter Joe no longer writes restaurant accounts. He doesn’t announce this policy. He simply says no, or prices them out of the market, or attaches conditions he knows won’t fly, because his personal loss history with that class of business has calcified into an unofficial underwriting rule that exists nowhere in the guidelines and appears nowhere in the file notes.
The problem is that restaurants may be on the company’s current hot list of target SIC codes. The appetite statement says write them. The production goals say write them. Underwriter Joe’s invisible personal embargo says otherwise — and three other underwriters on the team have similar histories with similar classes for similar reasons.
The result is an appetite statement and an actual book running in opposite directions, with no mechanism to detect the gap. Until the logs accumulate enough data to show that certain underwriters are systematically underperforming on target classes, or that certain SIC codes are being priced 40% above market by specific individuals, while their colleagues price them at market. Experience is genuinely valuable. But experience filtered through one catastrophic loss can calcify into bias just as easily as it can sharpen into wisdom. AI doesn’t know the difference — but it will show you the pattern, and then it’s your job to ask the question.
These decisions lived entirely inside individual judgment. They were shaped by experience, relationships, competitive pressure, mood, and personal loss history that no compliance department was tracking. In the previous article in this series, we noted the noise audit reported by Nobel laureate Daniel Kahneman and his co-authors: experienced underwriters at the same carrier, pricing identical risks, produced premiums that differed by a median of 55%. That variance didn't come from nowhere. It came from here.
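To make that concrete, here is a minimal sketch of the kind of analysis an AI decision log enables, written in Python with pandas. The table layout, the column names, and the 25% flag threshold are all assumptions for illustration, not any carrier's actual schema:

```python
import pandas as pd

# Hypothetical quote log: one row per quote. Column names are illustrative.
# SIC 5812 is eating places (restaurants), the class in the Underwriter Joe example.
quotes = pd.DataFrame({
    "underwriter":    ["joe", "joe", "ana", "ana", "joe", "ana"],
    "sic_code":       ["5812", "5812", "5812", "5812", "1731", "1731"],
    "quoted_premium": [22000, 21500, 14200, 13900, 9100, 9000],
    "model_premium":  [14000, 14500, 14000, 14100, 9000, 8900],
})

# Deviation of each quote from the model's reference price.
quotes["deviation"] = quotes["quoted_premium"] / quotes["model_premium"] - 1.0

# Median deviation per underwriter per class. A persistent +50% on one
# class by one person is the invisible personal embargo made visible.
pattern = (
    quotes.groupby(["underwriter", "sic_code"])["deviation"]
    .median()
    .reset_index(name="median_deviation")
)

# Flag underwriter/class pairs priced well above the reference. The
# threshold is arbitrary here; a real program would calibrate it to the book.
flagged = pattern[pattern["median_deviation"] > 0.25]
print(flagged)
```

On this toy data, the only flagged pair is Joe on restaurants, which is exactly the shape of finding that no small file sample would ever surface.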
The Log That Doesn’t Forget
The mechanical difference AI introduces is straightforward, but its implications are not.
When a human underwriter makes an inconsistent call, it disappears into the file. The only way to surface it is to pull the file, compare it to a similar file, and hope someone has both the time and the mandate to make that comparison. Traditional compliance programs review somewhere between 2% and 5% of transactions after the fact. The other 95% to 98% disappear undisturbed.
When an AI-assisted system makes or supports a decision, that decision is logged, timestamped, and queryable. Every input, every output, every deviation from standard is a permanent record. When deviation from the model’s recommendation becomes a documented event, it can be analyzed across thousands of transactions — not to second-guess legitimate judgment, but to surface patterns that would otherwise never be visible.
A human underwriter who systematically prices accounts from certain geographies or segments more aggressively than guidelines require may never be flagged under traditional review. The same pattern across an AI-assisted workflow becomes a detectable anomaly — not necessarily in the first month, but over time, as the log accumulates.
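To make "documented event" concrete, here is a sketch of what a minimal decision record might contain. Every field name is an assumption chosen for illustration, not a reference to any vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One logged underwriting decision. All fields are illustrative."""
    decision_id: str
    underwriter: str
    sic_code: str
    model_recommendation: str    # e.g. "quote_at_14000"
    final_decision: str          # e.g. "quote_at_22000"
    override: bool               # True when the human deviated from the model
    override_reason: str | None  # reasoning captured at decision time, if any
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Once decisions look like this, governance questions become ordinary
# queries. Every override either carries a documented reason or it
# doesn't, and the undocumented ones are themselves a pattern.
def undocumented_overrides(log: list[DecisionRecord]) -> list[DecisionRecord]:
    return [r for r in log if r.override and not r.override_reason]
```

The point isn't the specific fields. It's that once a deviation is a record rather than a memory, it can be counted.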
As one of the comment threads that sparked this article put it: AI may not remove judgment entirely, but it makes inconsistency much harder to ignore.
What Experienced Judgment Actually Contributes
Before this piece gets filed in the “AI is coming for everyone” category, it’s worth being precise about what the argument is and isn’t.
Experienced underwriters bring something to risk assessment that no model replicates cleanly: the ability to read context that doesn’t fit in a data field. The management team behind a small commercial account. The way a contractor describes their safety practices in a conversation versus what their loss runs show. The instinct that a piece of business is being shopped for the wrong reasons. These are genuine inputs, and the better carriers are building AI systems that surface and document that kind of contextual reasoning rather than eliminate it.
The argument here isn’t that human judgment is worthless. It’s that undocumented human judgment — the kind that lives entirely inside an individual and leaves no trail — is now exposed in ways it never was before. The difference between a well-reasoned deviation from standard pricing and a relationship accommodation, or between a sound coverage restriction and a personal embargo rooted in a loss from eight years ago, will over time become visible in the data.
That’s uncomfortable. It’s also, on reflection, probably how it should work.
The Leadership Problem Nobody Is Talking About
There is a dimension of this conversation that has been largely absent from the industry’s AI governance discussion, and it sits squarely with management.
When AI makes variance visible, managers see things they used to miss — or used to be able to plausibly claim they missed. The underwriting supervisor who never quite noticed that one team member consistently priced large accounts from a particular segment above market can no longer maintain that posture. The compliance officer who reviewed the 3% sample and found nothing concerning now has access to the full population.
This changes the moral and operational position of anyone overseeing an AI-assisted team.
Regulators have noticed. The NAIC's Model Bulletin on insurers' use of AI systems, adopted in some form by more than two dozen states, places accountability explicitly at the executive level, not the model level or the vendor level. New York's DFS Circular Letter No. 7 (2024) requires insurers to document that their AI systems don't produce unfair or unlawful discrimination, with that documentation available for regulatory review. Colorado's AI Act (SB24-205), effective June 30, 2026, requires deployers of high-risk AI systems to complete impact assessments at least annually, covering the risk of discriminatory outcomes across protected classes.
The regulatory signal is consistent: ignorance is no longer a defensible posture for leadership. If the data is there and the tools exist to surface patterns, the expectation is that someone accountable reviewed them.
Inconsistency as a Choice You Now Have to Defend
Here is where the piece lands, and it’s worth stating plainly.
In a pre-AI environment, inconsistency was largely invisible and therefore largely uncontested. A pricing decision that deviated from guidelines by 30% in either direction might surface in a market conduct exam, or might not, depending on whether that file was in the 3% sample. An underwriting pattern that quietly steered certain risk profiles toward the door left no aggregate record.
In an AI-visible environment, deviation from the standard is a documented event. That doesn’t make it wrong. Sometimes the right call is to override the model, extend the relationship, or use judgment that the AI couldn’t access. But now you have to own it — explain it, document it, and be prepared to defend it if the pattern of those deviations draws scrutiny.
For carriers and leaders who have operated with sound judgment and genuine consistency, this is a net positive. Their decisions hold up under review, and AI-enabled governance infrastructure actually protects them by demonstrating that consistency at scale.
For those who benefited from the opacity of the old system — deliberately or not — the adjustment is going to be harder.
The question for insurance leaders isn’t whether AI is good or bad for human judgment. It’s whether the judgment your organization has been exercising is something you’d want documented.
Action Items for Insurance Leaders
- Audit your deviation patterns before someone else does. If your organization is using AI in underwriting or claims, pull the override and deviation data now. Understand what the patterns show before a market conduct exam does.
- Distinguish legitimate judgment from accommodating behavior. Build documentation practices that capture the reasoning behind model overrides at the time of decision, not reconstructed afterward. Good judgment should be able to explain itself.
- Brief your leadership team on the accountability shift. AI visibility is not just a technology issue. The expectation that executives can be held accountable for patterns in AI-assisted decisions is already embedded in NAIC guidance and state regulation. Make sure your C-suite understands what the logs contain.
- Review your soft accept/soft reject patterns. Pricing and coverage decisions that function as de facto approvals or denials without formal declinations deserve the same governance scrutiny as explicit underwriting decisions. AI will surface those patterns whether you look for them or not; a minimal sketch of that kind of check follows this list.
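As a starting point for that last review, here is a minimal sketch of a soft-reject screen, assuming a hypothetical quote extract joined to whatever market benchmark the carrier trusts. The column names and the 1.5x threshold are illustrative:

```python
import pandas as pd

# Hypothetical quote outcomes with a per-class market benchmark.
quotes = pd.DataFrame({
    "quote_id":       ["q1", "q2", "q3", "q4"],
    "status":         ["quoted", "quoted", "declined", "quoted"],
    "quoted_premium": [22000, 14200, None, 30500],
    "market_premium": [14000, 14000, 14000, 15000],
})

# A quote priced far above market with no formal declination can function
# as a rejection that never shows up in declination statistics.
quoted = quotes[quotes["status"] == "quoted"].copy()
quoted["price_to_market"] = quoted["quoted_premium"] / quoted["market_premium"]
soft_rejects = quoted[quoted["price_to_market"] > 1.5]  # threshold is arbitrary
print(soft_rejects[["quote_id", "price_to_market"]])
```

The same screen run per underwriter, per class, over time is what turns a quiet habit into a reviewable pattern.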
Sources
- Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. Little, Brown Spark.
- InsuranceIndustry.AI. "The Governance Problem AI Didn't Create (But Might Actually Fix)." April 2026.
- National Association of Insurance Commissioners. Model Bulletin: Use of Artificial Intelligence Systems by Insurers. December 2023.
- Buchanan Ingersoll & Rooney PC. "When Algorithms Underwrite: Insurance Regulators Demanding Explainable AI Systems." October 2025.
- Fenwick & West LLP. "Tracking the Evolution of AI Insurance Regulation." February 2026.
- Colorado General Assembly. SB24-205, Consumer Protections for Artificial Intelligence (2024).
- NAIC. "AI Systems Evaluation Tool Pilot." 2026.
AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.

