AI Insights: March 6, 2026

Your weekly analysis of AI developments in insurance.


BofA Says $15 Billion in Insurance Commissions Are in AI’s Crosshairs

Bank of America Global Research published a report this week estimating that more than $15 billion in insurance industry commissions are classified as “low complexity” and face a real risk of AI disintermediation. The warning arrives just weeks after insurance distribution stocks rebounded from their February sell-off, and it challenges the market’s conclusion that the AI threat has been priced in.

BofA’s analysts examined commission payments from just six carriers serving small business and personal lines: Travelers, Hartford, Progressive, Cincinnati Financial, Hanover, and Selective. From those six companies alone, they identified over $15 billion in commissions paid to independent agents in 2025 that skew heavily toward low-complexity transactions. Progressive alone paid more than $6 billion to independent agents last year. Travelers and Hartford paid roughly $3.35 billion and $1.25 billion, respectively, in segments dominated by personal lines and small commercial.

BofA’s core thesis is direct: digital agents built on large language models can perform a meaningful portion of the work currently handled by the 20,000 to 30,000 independent agents across the United States. Standard home and auto policies represent low-sophistication transactions where human agents add limited value, making direct-to-consumer digital channels a significant cost-saver for buyers.

The report also dismantles a popular counterargument. Some investors have compared the AI threat to the slow-to-materialize disruption of self-driving cars. BofA draws a sharp distinction. Autonomous vehicles require trillions in infrastructure investment and years of development. Deploying large language model chatbots is cheap, easy, and happening right now. Munich Re’s Next Insurance already offers an AI chatbot where customers can purchase and bind commercial policies directly, without a human agent.

Why This Matters:

The February sell-off and subsequent rally created a narrative that the market overreacted to the Insurify ChatGPT launch. BofA is arguing the opposite: the rally was the overreaction. The firm warns that an agency business currently perceived as having 3% to 7% organic revenue growth could see that slip to 1% to 5% as AI-native distribution channels take hold.

The “snowball effect” BofA identifies is particularly important. Years of tuck-in acquisitions have brought significant low-complexity, small-ticket business under the umbrellas of large brokers. That vulnerability is often hidden by limited public disclosures. And even large-case, complex commercial business could face pricing deflation as AI makes insurance markets more transparent for sophisticated corporate buyers.

Strategic Implications:

For independent agents, BofA’s analysis is not a reason to panic. It is a reason to get very honest about which parts of your book are defensible and which are not. A standard homeowners policy that renews annually without meaningful client interaction is the kind of business AI-native distribution will absorb. But the client who calls you after a covered loss, who trusts your judgment on umbrella limits, and who relies on your advice when their business outgrows its current program: that is the relationship AI cannot replicate.

The agents who will thrive through this transition are the ones who are deliberately migrating their value proposition from transaction processing to advisory services, and who are building the AI fluency to do the transactional work faster themselves rather than ceding it to competitors.


The Insurance Industry Is Stuck in “Pilot Purgatory” on AI Claims

A new report from Sedgwick, the global risk and claims management firm, paints a detailed picture of an industry that has embraced AI adoption in theory but cannot figure out how to make it work at scale. The report, titled “Future-Ready Property Claims: Leveraging Technology and AI for a Strategic Advantage,” was released March 3 and found that while 58% to 82% of carriers are using AI tools in some capacity, only 12% report fully mature AI capabilities and just 7% have achieved what Sedgwick calls scalable AI success.

Insurance Business America described the result bluntly: carriers are stuck in “pilot purgatory,” experimenting with AI but failing to graduate to enterprise-wide deployment. Nearly two-thirds of carriers acknowledge a gap between their AI ambitions and their actual capabilities, and 90% say AI needs to be orchestrated across their operations to deliver meaningful returns.

The report identifies fragmentation as the primary obstacle. Different tools and vendors support different parts of the claims process, and carriers’ data is often inconsistent, incomplete, or siloed across systems. Without integrated workflows and consistent data governance, AI initiatives fail to deliver the productivity gains executives expect.

Where AI has gained a foothold, the results are striking. Sedgwick found that intake automation has cut average property claims processing times from 10 days to 36 hours in some cases. AI-driven photo analysis has boosted claim handling efficiency by up to 54%. On low-severity claims, some carriers report 80% faster processing and 50% productivity gains in documentation. The report estimates that as much as 85% of straightforward claims could eventually be processed end-to-end with minimal human involvement.

The biggest drag on progress remains legacy technology. Research from Equisoft cited in the report shows that many claims systems still run on outdated languages like COBOL and were never designed for cloud-native applications or real-time data exchange.

Why This Matters:

The 7% figure is the number to sit with. Sedgwick is saying that out of the entire property insurance carrier landscape, fewer than one in ten have figured out how to make AI work across their claims operation at meaningful scale. Everyone else is running pilots, testing tools, and generating internal presentations about AI’s potential without crossing the threshold into production deployment that moves the needle on costs, cycle times, or customer experience.

A Bain & Company study painted a nearly identical picture, finding that 78% of P&C insurers had adopted generative AI in some form but only 4% had managed to scale it across their organizations. The consistency across research sources is itself a finding: this is not a measurement problem. It is an execution problem.

The market value of AI in insurance is projected to reach nearly $80 billion by 2032, up from roughly $10 billion in 2025. The carriers that capture their share of that value will be the ones that treat AI not as a technology project assigned to an innovation team but as a strategic transformation of how claims operations actually work.

Strategic Implications:

For carriers still running disconnected AI pilots across their claims organization, Sedgwick’s report is both a warning and a roadmap. The warning is that fragmented AI adoption is producing fragmented results. The roadmap is that the carriers achieving scalable success are the ones treating AI as an operational transformation, not a collection of standalone tools.

For agencies and brokers, the implications are different but equally important. The claims experience your clients receive is about to diverge sharply between carriers that have scaled AI in claims and carriers that have not. A carrier processing straightforward claims in 36 hours instead of 10 days is delivering a materially different product. That difference will increasingly factor into placement decisions, and agents who understand which carriers have made this transition will have a genuine competitive advantage.


States and the White House Are Headed for a Showdown on AI in Insurance

A bipartisan wave of state legislation governing AI in insurance is colliding with a White House executive order that seeks to preempt it. The result is a regulatory conflict that will shape AI governance in insurance for years, and carriers, agents, and insurtech companies need to understand what is at stake.

The scope of state activity is significant. At least four states (Arizona, Maryland, Nebraska, and Texas) enacted legislation last year restricting AI use in health insurance. Illinois and California passed similar measures the year before. New York has established rules requiring insurers to disclose when AI is used in decision-making and mandating regular algorithmic audits. Florida Governor Ron DeSantis proposed an AI Bill of Rights in his February 2026 State of the State address that includes restrictions on AI in claims processing and a provision allowing regulators to inspect algorithms. Bills are advancing or under consideration in Rhode Island, North Carolina, and several other states.

The federal pushback is equally direct. A December 2025 executive order seeks to preempt state AI regulation, describing AI development as a race with adversaries for supremacy and characterizing state regulation as an obstacle to innovation. The order proposes to sue states and restrict federal funding for those enacting what it calls “excessive” regulation.

Harvard Law School health policy scholar Carmel Shachar has publicly questioned whether the executive order is constitutional, noting that preemption authority generally rests with Congress, not the executive branch. Federal lawmakers have twice considered but declined to pass legislation barring states from regulating AI. The legal consensus appears to be that a court challenge to the executive order would have strong prospects for success.

The insurance industry itself is divided. Trade groups like the American Insurance Association have advocated for a federal framework, arguing that a patchwork of state regulations creates compliance complexity. But state regulators, particularly the NAIC (which declared AI governance a top 2026 priority just weeks ago), maintain that states are best positioned to protect their consumers.

Why This Matters:

This is not an abstract governance debate. It is a jurisdictional fight over who will write the rules that determine how your organization can use AI. The outcome has direct operational implications for every carrier, wholesaler, and agency deploying or planning to deploy AI tools.

The state-by-state approach creates real compliance costs for multi-state carriers. New York’s algorithmic audit requirements are different from Colorado’s AI Act provisions, which are different from Florida’s proposed AI Bill of Rights. A carrier operating in all three states needs governance infrastructure that satisfies each framework. That is expensive and complex.

But the alternative the executive order proposes, federal preemption with minimal regulation, creates its own risks. A regulatory vacuum invites the kind of AI deployment practices that generate consumer backlash, political intervention, and ultimately more restrictive regulation. The insurance industry has seen this pattern before with credit scoring and redlining. Insufficient early governance can lead to overcorrection later.

Strategic Implications:

For compliance and legal teams, the practical guidance is straightforward: build to the most stringent standard. If your AI governance program satisfies Colorado’s requirements, New York’s audit mandates, and the NAIC’s emerging examination framework, you will be compliant regardless of how the federal-state jurisdictional question resolves.

For agencies and brokers, the regulatory landscape affects your carrier partners more directly than it affects you, but it is not irrelevant. Carriers with robust AI governance will be able to deploy AI faster and with less regulatory friction. Carriers without it will face examination risk, enforcement actions, and potentially costly remediation. Understanding which carriers have invested in governance infrastructure is another dimension of the due diligence that sophisticated agents and brokers should be performing.

The constitutional question around the executive order may take years to resolve. The compliance obligations are here now.


AI in Health Insurance Is Drawing Fire from Every Direction

Health insurers’ use of AI to evaluate coverage requests, process prior authorization, and adjudicate claims is facing unprecedented scrutiny from researchers, regulators, and Congress simultaneously. A convergence of academic research, congressional hearings, and state legislative activity is building toward a regulatory reckoning that will reshape how AI is used in health insurance decisions.

A Stanford University study published in Health Affairs, authored by professor Michelle Mello and three colleagues, identified fundamental governance failures in how health insurers deploy AI. The researchers found that human reviewers at insurance companies often lack the time, expertise, and incentives to effectively review AI recommendations. The opacity of AI algorithms makes it difficult to understand why a particular determination was made, which in turn makes it hard for patients to challenge denials. AI tools frequently do not consider important contextual information, such as a patient’s social supports at home when assessing discharge timing. And algorithms trained on insurers’ historical coverage decisions risk locking in the flawed patterns of those past decisions.

The NAIC’s own survey found that 84% of health insurers use AI or machine learning across their product lines. Among large health insurers in 16 states, 37% reported using AI for prior authorization, 44% for claims adjudication, and 56% for utilization management broadly. That level of adoption, combined with limited governance, is exactly the combination that draws regulatory attention.

Congressional pressure is adding to the scrutiny. Last month, the House Ways and Means Committee brought executives from Cigna, UnitedHealth Group, and other major health insurers to testify about affordability and coverage practices. When pressed about AI use in denials, the executives either denied or avoided directly answering questions about deploying advanced technology to reject authorization requests.

Public opinion is not on the industry’s side. A Fox News poll found 63% of voters describe themselves as “very” or “extremely” concerned about AI, with majorities across both parties. A KFF survey documented widespread discontent with prior authorization practices even before AI entered the picture.

Why This Matters:

The health insurance AI story matters to the P&C industry for a specific reason: it is writing the regulatory playbook that will be applied to property, casualty, and commercial lines next. The governance frameworks being developed for health insurance AI (transparency requirements, algorithmic audit mandates, human oversight standards, and bias testing protocols) will migrate to P&C regulation. The NAIC’s examination tools being piloted this year are designed to work across all insurance lines, not just health.

The Stanford researchers’ finding that AI trained on historical decisions tends to perpetuate those decisions’ flaws is particularly relevant for P&C underwriting. If a carrier’s historical underwriting data contains patterns that correlate with protected classes (even unintentionally), an AI system trained on that data will reproduce those patterns at scale. The regulatory and legal exposure from that outcome is substantial.
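To make that exposure concrete, here is a minimal sketch of the kind of disparate-impact check carriers can run on model outputs. The data, group labels, and the four-fifths threshold (a heuristic borrowed from employment-discrimination practice) are illustrative assumptions, not a regulatory standard for insurance or any carrier’s actual methodology.

```python
# Hypothetical underwriting decisions: (group label, approved?).
# Group labels stand in for any protected-class proxy under review.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Return the approval rate for each group in the records."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Four-fifths heuristic: flag if the lowest group rate falls below
# 80% of the highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(rates)           # group B is approved at one-third the rate of group A
print(ratio < 0.8)     # True: this pattern would warrant investigation
```

A check like this does not prove discrimination, but if an AI system trained on historical decisions reproduces a gap of this size at scale, it is exactly the pattern that audit mandates like New York’s are designed to surface.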

Strategic Implications:

For carriers in any line, the health insurance AI experience is a preview. The questions Congress is asking health insurers today (“Do you use AI to deny claims? Does a human review every AI recommendation? Can you explain how the algorithm reached its decision?”) are the questions that will be asked of P&C carriers in the near future.

The carriers that build transparent, well-documented AI governance programs now will be positioned to answer those questions credibly. The ones that do not will find themselves in the same uncomfortable position as the health insurance executives who testified last month: unable to clearly explain their own AI deployment practices under public scrutiny.


The Bottom Line

This week’s stories share a pattern that should concern any insurance executive who has been treating AI preparation as a second-tier priority.

Bank of America has put a number on the disintermediation risk: $15 billion in commissions sitting in the path of AI-native distribution. Sedgwick has put a number on the execution gap: 7% of carriers have achieved scalable AI success, while the rest remain stuck in pilot programs that look impressive in board presentations but do not move operational metrics. The state-federal regulatory collision is creating a compliance landscape that rewards carriers with AI governance infrastructure and penalizes those without it. And the health insurance sector is demonstrating, in real time, what happens when AI deployment outpaces governance: congressional hearings, academic investigations, and state legislation that constrains future flexibility.

The thread connecting all four stories is the cost of waiting. Waiting to build AI capabilities means ceding ground to competitors and AI-native entrants who are already operating at scale. Waiting to build AI governance means absorbing regulatory risk that grows more expensive with each new state requirement. Waiting to articulate your value proposition in an AI-disrupted market means letting someone else define it for you.

The insurance industry does not lack awareness of AI’s importance. What it lacks, as Sedgwick’s 7% figure makes painfully clear, is execution. The transition from awareness to action is no longer optional. It is the competitive divide.

AI Insights appears every Friday, analyzing AI developments through an insurance lens. For deeper analysis of strategic implications, visit InsuranceIndustry.ai.

By James W. Moore


AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.