Your weekly analysis of AI developments in insurance.


Grant Thornton Finds Insurers Are Winning With AI and Failing at Governance Simultaneously

Grant Thornton's 2026 AI Impact Survey, published April 30, polled 100 insurance executives and found that AI is delivering real results — and that most carriers cannot prove their governance actually works. Fifty-two percent of respondents said AI has driven revenue growth. Sixty-two percent reported improved decision-making insights. Half said it has reduced costs.

The governance picture is considerably less flattering. Forty-four percent of insurance executives said governance or compliance challenges have contributed to AI project failure or underperformance. Sixty-one percent reported their boards have established AI governance policies — yet only 24% are very confident they could pass an independent AI governance review within 90 days. Grant Thornton’s conclusion: most insurers have policies on paper but lack the operational infrastructure to prove them.

“Without clear policies and tested controls, insurers are leaving their organizations open to risk with regulators and customers, fueling financial pressure that could ultimately erode product profitability,” the firm said.

A separate AM Best report released April 27 reinforces the pattern from a different angle. Nearly 60% of AM Best survey respondents expect AI to significantly transform their business models within one to three years — but the largest obstacles cited are data readiness, security and privacy, and integration with legacy systems. Sridhar Manyem, senior director of industry research and analytics at AM Best, noted that “AI systems can produce unreliable outputs when underlying data is of poor quality, fragmented across legacy systems, insufficiently governed or lacking appropriate context.”

Why This Matters for Insurance:

The combination of these two surveys captures a split that is becoming the defining story of 2026 for insurance AI: most carriers can point to measurable AI wins, and most carriers cannot demonstrate that their AI governance is functional under scrutiny. Those two facts can coexist for a while. They cannot coexist indefinitely once regulators, reinsurers, or plaintiffs begin asking to see the evidence.

The 24% figure deserves particular attention. Three-quarters of insurers surveyed by Grant Thornton are not confident they could survive a governance review in 90 days. That is not a gap in ambition — it is a gap in documentation, testing, and operational accountability. The articles in this publication’s AI governance series have argued that governance without evidence is policy theater. These two surveys are the industry-level confirmation.

For carriers at any stage of AI deployment, the practical implication is direct. The question is not whether AI governance policies exist. The question is whether the controls can be demonstrated in a short time window under adversarial conditions. That is the standard regulators, and increasingly litigants, will apply. Carriers that build toward provable governance now are not just managing risk — they are building a competitive position for the regulatory environment that is clearly coming.


Allstate Is Selling Policies With AI in Three States. The Earnings Call Quote Is Worth Reading Carefully.

Allstate’s Q1 2026 earnings call on April 30 contained a disclosure that deserves more attention than it received in the business press. CEO Tom Wilson confirmed that Allstate’s AI-powered sales system, part of the company’s Large Language Intelligent Ecosystem called ALLIE, is actively closing insurance policies in three states — not in a lab, not in a pilot, but in the live market.

Wilson framed it without fanfare: “Their AI can also just sell directly. And we’re live in the market doing that right now on a particular product. It’s more of a learning. But it’s doing it in three states, it’s closing policies. And so we’re just seeing what we learn from that.”

The context matters. Allstate reported first-quarter total revenue of $16.9 billion, up 3% year over year. The underlying combined ratio improved to 80.3%. Auto market share expanded in 29 states. By every conventional financial measure, Q1 was a strong quarter. The AI direct sales disclosure was dropped almost as an aside in an otherwise operational earnings call.

ALLIE is described by Allstate as a company-wide platform designed to harness both generative and agentic AI across customer engagement, sales, and claims processing. The direct sales deployment represents the agentic end of that spectrum: AI that initiates transactions, guides consumers through a sales process, and binds coverage without a human agent in the loop.

Why This Matters for Insurance:

This is a milestone that the industry needs to sit with rather than pass over. A major carrier is not just using AI to assist agents or improve back-office workflows. It is using AI to replace the agent in the transaction entirely, in a live regulatory environment, across multiple states. Allstate is characterizing this as a learning exercise. That framing is accurate and also somewhat understated. Learning at scale, in the market, with real policyholders means the data being gathered from those three states will shape Allstate’s AI sales strategy for years.

The implications run in two directions for the independent agency channel. The first is the straightforward competitive concern: if AI can close policies in three states today, the threshold for expansion is primarily regulatory and actuarial, not technical. The second is more nuanced. Wilson’s own framing on the same call acknowledged that many customers still want a person between them and their insurance decision. The quote directly after the AI disclosure: “It’ll help those agents who have good relationships with people improve their relationships.” Whether that framing reflects a genuine strategic commitment to the agent channel or represents a considered messaging choice during an earnings call is something agency leaders should watch closely over the next several quarters.


Erie’s CEO Says AI Should Strengthen the Human Touch. The Earnings Call Context Makes That Position More Interesting.

Erie Insurance CEO Tim NeCastro used the company’s Q1 2026 earnings call last week to articulate Erie’s AI philosophy in terms that will resonate with the independent agency community: “AI should strengthen that human touch and not replace it.”

The AM Best headline on the statement was straightforward: Erie CEO says AI is not intended to replace employees. The fuller earnings call picture adds useful context. Erie is embedding AI into workflows with the goal of streamlining processes while maintaining what NeCastro called a strong human element in service delivery. The company simultaneously reported strong Q1 results — the underlying combined ratio improved, and the Erie Secure Auto product is expanding into additional states following a successful Ohio pilot.

NeCastro is in the final months of a tenure that built Erie to nearly $13 billion in premiums. His public positioning on AI — measured, human-centered, agent-preserving — is consistent with Erie’s long-standing identity as a company that differentiates on service and its independent agent relationships.

Why This Matters for Insurance:

Reading the Erie and Allstate earnings calls side by side is instructive. Two major carriers, both reporting strong Q1 results, are using almost opposite public language about AI’s relationship to human judgment. Allstate is closing policies in three states with AI acting as the agent. Erie is publicly committed to AI as an amplifier of human service.

Neither position is wrong. They reflect genuinely different business models and distribution strategies. But for independent agents, the distinction is consequential. Erie’s explicit commitment to the human element is a defensible and currently differentiated position in the market. Whether it remains differentiated as AI direct sales prove out at scale, at Allstate and elsewhere, is a question that the next 12 to 18 months will begin to answer. Agents and agency owners who work with Erie have reason to pay attention to whether the philosophy behind NeCastro’s quote survives the leadership transition underway this year.


Cloudflare Publishes Its Enterprise MCP Architecture. The Security Risks It Documents Are Directly Relevant to Insurers.

Cloudflare published a detailed account on April 14 of how it has secured its own company-wide deployment of Model Context Protocol, the open standard that allows AI agents to connect to external systems, data sources, and tools. The post is framed as a reference architecture for other enterprises, and it describes problems that every carrier or agency deploying agentic AI will eventually encounter.

MCP is the infrastructure layer behind AI agents that actually do things: retrieving documents, updating records, querying databases, filing forms, and connecting to external APIs. When AI moves from answering questions to taking actions, MCP is typically how the connection is made. Cloudflare’s internal deployment has spread well beyond its engineering team — employees across product, sales, marketing, and finance are using agentic workflows powered by MCP.

The security concerns Cloudflare identifies are specific and worth naming: authorization sprawl (agents gaining access beyond their intended scope), prompt injection (malicious instructions embedded in content the agent retrieves), and supply chain risks (unvetted MCP servers that employees install locally without IT visibility). Their solution involves centralized governance of all MCP server deployments, company-approved servers only, mandatory authentication through their identity platform, audit logging on every tool call, and a shadow MCP detection capability to find unauthorized connections.
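To make those controls concrete, here is a minimal sketch, in Python, of two of them: routing every agent tool call through a company-approved server allowlist, and writing an audit record for every attempt. This is an illustration of the pattern, not Cloudflare's implementation; the server names, function, and log structure are hypothetical.

```python
# Illustrative sketch (not Cloudflare's code): gate every agent tool call
# through an approved-server allowlist and log each attempt. A blocked call
# doubles as a "shadow MCP" signal — an unapproved server was contacted.
import datetime

APPROVED_SERVERS = {"claims-db", "policy-search"}  # hypothetical approved MCP servers
audit_log = []  # in production, an append-only store outside the agent's reach

def call_tool(server: str, tool: str, args: dict) -> str:
    """Allow the call only if the server is approved; log every attempt."""
    allowed = server in APPROVED_SERVERS
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "server": server,
        "tool": tool,
        "args": args,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"unapproved MCP server: {server}")
    return f"executed {tool} on {server}"

print(call_tool("claims-db", "lookup_claim", {"id": "C-1001"}))
```

The design point is that the allowlist check and the audit write happen in one chokepoint the agent cannot bypass — which is why centralizing MCP deployments, rather than letting employees install servers locally, is the foundation of the rest of the controls.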

The cost reduction finding is operationally interesting. Their “Code Mode” architecture reduced token consumption by 94% when agents needed to explore available tools before executing tasks — a meaningful cost lever as agentic AI deployments scale.

Why This Matters for Insurance:

Most carriers and agencies thinking about AI governance today are focused on the generative layer: what the model says, how outputs are reviewed, and how hallucinations are controlled. The Cloudflare piece documents the next governance problem, which is already arriving at organizations that have moved to agentic AI: what the agent does, what systems it touches, and whether anyone knows when it acts outside its intended scope.

The ISACA research covered in the April 24 issue of AI Insights found that 59% of organizations could not quickly halt an AI system during a security incident. Cloudflare’s architecture is essentially a detailed engineering answer to that problem. For insurance IT leaders evaluating agentic AI deployments — in claims, in underwriting triage, in customer service — the specific categories Cloudflare documents (authorization sprawl, prompt injection, shadow MCP) are the categories that will appear in cyber underwriting questionnaires within 18 to 24 months. Getting familiar with the vocabulary and the controls now, rather than at renewal, is the more defensible position.



By James W. Moore

AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.