By James W. Moore
Key Takeaways
AI in insurance is shifting from tools that assist human decisions to autonomous agents that execute multi-step workflows independently. Most governance frameworks weren’t built for this.
The NAIC addressed agentic AI directly at its March 2026 Spring Meeting, flagging accountability gaps, cascading error risks, and the need to redesign governance frameworks.
OWASP published its first Top 10 for Agentic Applications, establishing the industry’s first formal taxonomy of autonomous AI risks.
The EU AI Act’s high-risk obligations take effect August 2, 2026, and explicitly name AI used for risk assessment and pricing in life and health insurance.
Microsoft’s launch of Agent 365 signals that AI agent governance has become a commercial product category, not an afterthought.
This is the third article in a series on AI governance in insurance. The first, “The AI Did It” Is Not a Defense, established that insurers cannot delegate accountability to the algorithm. The second, Why Vendor AI Doesn’t Transfer Risk (Even If Your Contract Says It Does), demonstrated that vendor contracts don’t move regulatory liability off your books. This third installment, When AI Starts Acting on Its Own: The Governance Gap Insurers Aren’t Ready For, addresses what happens when AI stops assisting and starts acting.
The first two articles in this series closed two escape hatches. You can’t blame the algorithm. You can’t hide behind your vendor’s contract. The insurer is accountable for AI outcomes, and that accountability doesn’t transfer.
Those arguments were built around a specific model of AI use: tools that inform human decisions. A predictive model scores a risk. A claims triage system recommends a routing. A fraud detection algorithm flags a file. In each case, a human reviews the output and makes the call.
That model is changing. And most insurer governance frameworks were designed for AI that informs decisions. They were not designed for AI that makes and executes them. That’s not a minor gap. It’s the central challenge facing insurance AI governance in 2026.
From Copilot to Colleague
The insurance industry is moving toward AI systems that don’t just recommend actions but execute them. In the technology world, these are called “agentic AI” systems, and the distinction from the AI tools most insurers are currently using is significant.
A traditional AI tool answers a question when asked. An agentic AI system plans a sequence of tasks, executes them across multiple systems, adapts when it encounters unexpected data, and completes the workflow with minimal or no human involvement at each step.
In practical terms, this means an AI agent in claims doesn’t just flag a file as potentially fraudulent. It pulls the policy, cross-references the loss history, checks external data sources, assesses the damage estimate against comparable claims, determines whether the claim meets straight-through processing criteria, and either routes it to an adjuster or initiates settlement. Autonomously.
In underwriting, it means an AI agent doesn’t just score a submission. It ingests the documentation, identifies missing information and requests it, builds a risk profile using internal guidelines and external data, runs pricing scenarios, checks regulatory compliance, and either generates a quote or escalates to a human underwriter for complex risks.
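The two workflows above share a shape: a chain of steps, each of which either continues autonomously or hands the file to a human. A minimal sketch of that pattern in Python (step names, the risk threshold, and the pricing formula are all invented for illustration, not any insurer's actual logic):

```python
# Hypothetical agentic underwriting pipeline: each step either returns an
# updated state or raises Escalate to route the file to a human underwriter.
class Escalate(Exception):
    """Signal that the workflow needs human review."""

def intake(state):
    # Identify missing documentation and record a follow-up action.
    if state.get("missing_docs"):
        state["actions"] = ["request_docs"]
    return state

def profile_risk(state):
    state["risk_score"] = 0.42  # placeholder for a real model call
    return state

def price(state):
    if state["risk_score"] > 0.8:  # complex risk: stop automating
        raise Escalate("risk score above autonomous threshold")
    state["quote"] = 1000 * (1 + state["risk_score"])  # toy pricing formula
    return state

def run_pipeline(state, steps=(intake, profile_risk, price)):
    for step in steps:
        try:
            state = step(state)
        except Escalate as exc:
            return {"status": "escalated", "reason": str(exc), "state": state}
    return {"status": "quoted", "state": state}
```

The governance-relevant design choice is that escalation is a first-class outcome of the pipeline, not an afterthought bolted onto one step.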
This isn’t theoretical. A Celent survey found that 22% of insurers plan to have an agentic AI solution in production by the end of 2026. The agentic AI insurance market is projected to grow from $5.76 billion in 2025 to $7.26 billion in 2026. IDC projects 1.3 billion AI agents in circulation across all industries by 2028.
The transition is real. Adoption isn’t the problem. Governance is.
The Governance Gap
Most insurer AI governance programs were built around the NAIC Model Bulletin framework adopted in December 2023. That framework is solid for its intended purpose: ensuring that AI systems used in decision-making comply with existing insurance laws, maintain documentation, undergo testing for bias and accuracy, and operate under senior management oversight.
But the Model Bulletin was written for a world where AI produces outputs and humans make decisions. When the AI itself becomes the decision-maker and the executor, several assumptions in that framework start to break down.
Accountability gets harder to assign. When a human underwriter uses an AI model’s risk score to inform a pricing decision, accountability is clear. The underwriter owns the decision. When a multi-agent system coordinates intake, risk profiling, pricing, compliance checking, and quote generation across five autonomous steps, who owns the outcome? The person who configured the system? The vendor who built the agents? The compliance officer who approved the workflow? The NAIC’s Big Data and Artificial Intelligence Working Group raised exactly this concern at its March 2026 Spring Meeting in San Diego, where attendees heard a presentation on agentic AI that highlighted the material risks associated with autonomous systems, including challenges in assigning accountability.
Errors can cascade. A traditional AI model that produces a bad output affects one decision. An agentic system that makes a bad determination in step two of a five-step workflow can compound that error through every subsequent step. If the intake agent misclassifies a submission, the risk profiling agent builds on that misclassification, the pricing agent quotes based on the wrong risk profile, and the compliance agent checks the wrong regulatory criteria. The final output can be materially wrong in ways that no single checkpoint would catch. The risk isn’t a single bad decision. It’s a flawed workflow executing thousands of times before anyone realizes it’s wrong. The NAIC Spring Meeting discussion specifically flagged cascading errors across multiple agents as a key risk.
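The compounding effect is easy to demonstrate: seed one upstream misclassification and every downstream step "succeeds" on bad data. A toy example (the line-of-business labels, rate table, and compliance rules are all hypothetical):

```python
# Illustrative only: a wrong classification in step one silently drives
# every downstream step's output, with no step-level failure to catch.
RATE_TABLE = {"personal_auto": 0.05, "commercial_auto": 0.12}  # made-up rates
COMPLIANCE_RULES = {
    "personal_auto": "state_personal_lines",
    "commercial_auto": "state_commercial_lines",
}

def classify(submission):
    # Buggy intake agent: misreads a commercial fleet as personal auto.
    return "personal_auto"

def price(line, insured_value):
    return insured_value * RATE_TABLE[line]

def compliance_check(line):
    return COMPLIANCE_RULES[line]

line = classify({"vehicles": 12, "use": "delivery fleet"})  # should be commercial_auto
premium = price(line, 500_000)   # 25,000.0 instead of the correct 60,000.0
rules = compliance_check(line)   # wrong rulebook applied
# Every step completed without error, yet the quote is underpriced by
# roughly 58% and the wrong regulatory criteria were checked.
```

No single checkpoint inside the chain sees the error, which is why agentic governance needs end-to-end validation rather than per-step checks alone.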
Audit trails become more complex. Documenting a single model’s inputs and outputs is manageable. Documenting the reasoning chain across multiple autonomous agents, each making decisions that feed the next, requires a fundamentally different approach to logging and explainability. Regulators conducting market conduct exams will expect to trace how an AI-driven decision was reached. With agentic systems, that trace is no longer a straight line.
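One practical answer is to have every agent step emit a structured record of who acted, on what inputs, with what output and rationale, so the full chain can be replayed for an examiner. A minimal sketch (the field names and claim data are assumptions, not any regulatory standard):

```python
import json
from datetime import datetime, timezone

def log_step(trail, agent, inputs, output, rationale):
    """Append one agent decision to the workflow's audit trail."""
    trail.append({
        "agent": agent,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "output": output,
        "rationale": rationale,
    })

trail = []
log_step(trail, "intake", {"claim_id": "C-1001"}, {"class": "auto_glass"},
         "matched loss description to glass-only damage codes")
log_step(trail, "settlement", {"class": "auto_glass", "estimate": 480},
         {"decision": "straight_through_pay"},
         "estimate below straight-through threshold for this class")

# The reasoning chain is now a serializable artifact, not a straight line
# reconstructed after the fact.
print(json.dumps(trail, indent=2))
```

The point of the structure is that each record links an agent to a rationale: the trace a market conduct examiner would ask for becomes a query over the trail rather than a forensic reconstruction.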
The Regulatory Landscape Is Already Responding
Regulators aren’t waiting for a major incident to address agentic AI. The signals are already in the market.
NAIC activity is intensifying. Beyond the Spring Meeting discussions, the NAIC’s AI Systems Evaluation Tool pilot launched in January 2026 and runs through September. Ten carriers are participating, and regulators plan to use the tool during market conduct examinations. The Fenwick law firm’s regulatory analysis noted that as agentic AI advances, it will continue to prompt regulatory concerns about transparency, bias, and accountability, requiring insurers to demonstrate ongoing human oversight.
The EU AI Act’s high-risk rules take effect August 2, 2026. For insurers with any European exposure, this is significant. The Act’s Annex III explicitly classifies AI systems used for risk assessment and pricing in life and health insurance as high-risk. High-risk systems require documented risk management processes, data governance, human oversight, transparency, and conformity assessments. These aren’t guidelines. They’re enforceable requirements with penalties up to €15 million or 3% of worldwide turnover.
Colorado’s AI Act takes effect June 30, 2026. As covered in detail in previous IIAI reporting, Colorado’s law imposes deployer obligations including written risk management policies, impact assessments, consumer disclosure, and incident reporting. The law applies to AI systems that make or substantially influence “consequential decisions” affecting consumers. Autonomous agents that execute underwriting or claims decisions without human review at each step almost certainly qualify.
OWASP published its first taxonomy of agentic AI risks. The OWASP Top 10 for Agentic Applications, released in December 2025, is the security community’s first formal framework for the risks that autonomous AI systems introduce. The list includes agent goal hijacking (where an attacker redirects an agent’s decision-making process), tool misuse (where agents use legitimate tools in unintended or harmful ways), identity and privilege abuse (where agents inherit or retain access they shouldn’t have), and cascading failures (where errors compound across multi-agent workflows). For an industry that handles sensitive personal and financial data, every item on this list has direct operational relevance.
The Market Is Building Governance Infrastructure
Perhaps the clearest signal that agentic AI governance has moved from theoretical to operational is the fact that technology vendors are now selling it as a product category.
Microsoft launched Agent 365 in March 2026, a platform designed specifically to observe, govern, and secure AI agents across an enterprise. Priced at $15 per user per month, it provides an agent registry (cataloging all agents operating in an organization), identity management for individual agents (each agent gets a unique identity in Microsoft Entra with conditional access policies and audit trails), and governance controls for monitoring agent behavior and enforcing compliance.
Microsoft also released an open-source Agent Governance Toolkit in early April 2026, a set of tools designed to bring runtime security governance to autonomous agents regardless of the framework they were built on. The toolkit maps its capabilities directly to all ten OWASP agentic AI risk categories.
The pricing tells a story. Microsoft 365 E7, the enterprise bundle that includes Agent 365, Copilot, and the full security stack, costs $99 per user per month. When governance gets its own price tag, it has officially stopped being optional.
For insurers, the vendor landscape doesn’t matter as much as the signal: the technology market has decided that AI agents need their own governance infrastructure, separate from and in addition to the governance you already have for AI models and data.
What This Means for Insurance Executives
The three articles in this series have built a straightforward argument:
You can’t blame the algorithm for bad outcomes. The insurer is accountable.
You can’t rely on your vendor’s contract to absorb regulatory risk. The contract doesn’t transfer the duty.
And now: you can’t govern autonomous AI agents with a framework designed for AI tools that assist human decisions. The governance has to evolve with the technology.
If your organization is exploring or deploying agentic AI, here are the questions that matter right now:
Does your AI governance framework distinguish between AI that advises and AI that acts? If your framework was built around the NAIC Model Bulletin’s approach to AI-assisted decision-making, it may not adequately address autonomous multi-step workflows. Review whether your policies, documentation requirements, and oversight mechanisms account for AI systems that execute decisions without human review at each step.
Can you trace an autonomous decision from start to finish? Regulators will expect it. The NAIC’s AI Evaluation Tool pilot is building the examination infrastructure right now. If an agent makes a claims determination or generates an underwriting decision through a multi-step workflow, you need the audit trail that shows what data was used, what decisions were made at each step, and why.
Do your AI agents have defined authority limits? The same way you set authority levels for adjusters and underwriters, AI agents need defined boundaries. What dollar thresholds trigger human review? What types of decisions require escalation? What happens when an agent encounters a scenario outside its defined scope? The OWASP framework calls this “least agency,” and it’s the autonomous-AI equivalent of the least-privilege principle your IT team already applies to system access.
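Authority limits can be enforced the same way adjuster authority is: as a hard check that runs before any agent action executes. A sketch of "least agency" as code, with the action list and dollar threshold invented for illustration:

```python
# Hypothetical authority policy for a claims settlement agent. Anything
# outside the policy escalates to a human rather than failing silently.
AUTHORITY = {
    "max_settlement": 5_000,  # dollars the agent may pay autonomously
    "allowed_actions": {"request_docs", "route_to_adjuster", "settle"},
}

def authorize(action, amount=0):
    """Return (allowed, reason); deny anything outside the agent's scope."""
    if action not in AUTHORITY["allowed_actions"]:
        return False, f"action '{action}' outside agent scope"
    if action == "settle" and amount > AUTHORITY["max_settlement"]:
        return False, (f"settlement ${amount:,} exceeds "
                       f"${AUTHORITY['max_settlement']:,} limit")
    return True, "within authority"

print(authorize("settle", 3_200))   # within authority
print(authorize("settle", 12_000))  # escalate: over dollar threshold
print(authorize("deny_claim"))      # escalate: action not in scope
```

Structurally this mirrors the least-privilege checks IT already applies to system access: the agent's capabilities are an explicit allowlist, and the default for anything unlisted is escalation.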
Is your vendor management keeping pace? If you’re using third-party AI agents, your vendor due diligence needs to include how those agents are governed, what identity and access controls are in place, and how the vendor handles the OWASP-identified risks. Your vendor contracts should require audit rights over agent behavior, not just model performance.
Are you monitoring the regulatory calendar? Colorado’s AI Act (June 30), the EU AI Act high-risk rules (August 2), and the NAIC Evaluation Tool pilot (concluding in September) all converge in 2026. If your governance framework isn’t ready, the timeline for getting it ready is measured in weeks, not quarters.
The Bottom Line
The insurance industry has always been good at governance. Reserving practices, underwriting authority levels, claims handling guidelines, market conduct compliance: insurers know how to build oversight frameworks that work.
The problem is timing. AI agents are being deployed faster than governance frameworks are being redesigned to control them.
The escape hatches are closed. The regulatory infrastructure is being built. The governance tooling now exists as a commercial product. The remaining question isn’t whether insurers can govern agentic AI. It’s whether they will close the gap before autonomous decisions start being made at scale.
Sources
Alston & Bird: “Key AI, Cybersecurity, and Privacy Takeaways from the NAIC 2026 Spring Meeting” (April 2, 2026) — https://www.alstonprivacy.com/key-ai-cybersecurity-and-privacy-takeaways-from-the-naic-2026-spring-meeting/
Fenwick: “Tracking the Evolution of AI Insurance Regulation” (December 2025) — https://www.fenwick.com/insights/publications/tracking-the-evolution-of-ai-insurance-regulation
InsuranceNewsNet: “NAIC’s 2026 AI Evaluation Pilot Moves Ahead as Industry Balks” (December 2025) — https://insurancenewsnet.com/innarticle/naic-regulators-prep-ai-evaluation-tool-for-use-in-2026-as-industry-balks
Crowell & Moring: “NAIC Intensifies AI Regulatory Focus: What Health Insurance Payors Need to Know” (March 2026) — https://www.crowell.com/en/insights/client-alerts/naic-intensifies-ai-regulatory-focus-what-health-insurance-payors-need-to-know
OWASP: “Top 10 for Agentic Applications for 2026” (December 2025) — https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/
EU Artificial Intelligence Act, Annex III: High-Risk AI Systems — https://artificialintelligenceact.eu/annex/3/
EU AI Act Overview: “Shaping Europe’s Digital Future” — https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Microsoft Security Blog: “Secure Agentic AI for Your Frontier Transformation” (March 9, 2026) — https://www.microsoft.com/en-us/security/blog/2026/03/09/secure-agentic-ai-for-your-frontier-transformation/
Microsoft Open Source Blog: “Introducing the Agent Governance Toolkit” (April 2, 2026) — https://opensource.microsoft.com/blog/2026/04/02/introducing-the-agent-governance-toolkit-open-source-runtime-security-for-ai-agents/
VentureBeat: “Microsoft Says Ungoverned AI Agents Could Become Corporate ‘Double Agents’” (March 9, 2026) — https://venturebeat.com/technology/microsoft-says-ungoverned-ai-agents-could-become-corporate-double-agents-its
InsureTech Trends: “5 Ways Agentic AI Is Transforming Insurance Underwriting in 2026” (March 2026) — https://insuretechtrends.com/5-ways-agentic-ai-is-transforming-insurance-underwriting-in-2026/
InsuranceNewsNet: “How Agentic AI Is Rewiring Insurance for 2026” (December 2025) — https://insurancenewsnet.com/innarticle/how-agentic-ai-is-rewiring-insurance-for-2026
AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.

