The Readiness Gap: What the 2026 ACT Tech Trends Report Reveals About Independent Agencies and AI

The Big “I” Agents Council for Technology just released its 2026 Tech Trends Report, and it confirms what many of us have suspected: independent agencies are enthusiastic about AI but dangerously underprepared to use it.

The numbers tell a striking story. According to ACT’s “Agent Tech & AI Trends Survey,” 68% of independent agencies say they’re “somewhat” or “very likely” to increase AI use in the next 12 months. But only 8% are currently using it regularly and strategically. That’s not a gap. That’s a chasm.

And the obstacles standing between intention and execution should concern every agency principal reading this.

Key Takeaways

  • 68% of agencies plan to increase AI use, but only 8% use it strategically today. The readiness gap isn’t about technology. It’s about processes, governance, and organizational discipline.
  • 56% of agencies have no written AI policy. In an era of accelerating cyber threats and E&O exposure, that’s an unacceptable vulnerability.
  • Agentic AI is arriving fast, but it needs guardrails. The ability of AI to execute multi-step workflows creates enormous efficiency potential and equally significant liability questions.
  • The real competitive divide isn’t adopters vs. non-adopters. It’s agencies with operational foundations vs. those without them.

The Process Problem

ACT’s report identifies organizational readiness as the single biggest barrier to effective AI adoption. Not cost. Not technology complexity. Readiness.

Casey Nelson of Catalyit put it bluntly in the report, describing the biggest tech trend affecting independent agents as a moment of panic: agencies realizing they have no processes.

That observation deserves attention. You can’t automate a workflow that doesn’t exist. You can’t deploy AI to follow business rules that have never been documented. And yet, as the report notes, agencies routinely discover that workflows exist only inside employees’ heads or vary widely from person to person.

This isn’t a new problem. But AI makes it urgent in a way it wasn’t before. When the only people executing your workflows were humans who’d been doing the job for years, undocumented processes were an inconvenience. When AI is executing those workflows, undocumented processes become a liability.

The Governance Vacuum

Perhaps the most alarming finding in the entire report: 56% of surveyed agencies have no written policy or guidance on staff use of AI tools. Nearly 44% reported relying on peer-to-peer training for new technology.

Meanwhile, the cybersecurity threat environment is intensifying. Financial services companies remain among the top four most cyber-attacked industries. The report notes that in 2026, hackers are expected to need an hour or less to go from initial infiltration to full capture of targeted data.

And then there’s what ACT calls “shadow deployment,” which is employees using AI tools on personal devices and bringing the results back into agency systems. This isn’t driven by malicious intent. It’s driven by productivity pressure. Staff members want to work faster, and if the agency hasn’t provided approved AI tools with clear guidelines, they’ll find their own.

The E&O implications here are significant. Agencies using consumer-grade AI tools to process client data, without considering privacy or compliance, are creating exposure they may not discover until a claim is filed.

Agentic AI: Promise and Peril

The report devotes considerable attention to agentic AI, and rightly so. Unlike generative AI, which responds to individual prompts, agentic AI can understand multi-step goals and execute them with minimal human intervention. It can review a customer’s insurance portfolio, financial status, and risk profile to generate a gap analysis. It can process a claim from the initial report of loss through to resolution. It can design a client presentation from proprietary and peer data.

The potential is real. So is the risk.

As the report correctly notes, agentic AI’s effectiveness depends entirely on the quality of the workflows and business rules it’s designed to execute. Users need to provide instructions that include which programs and data to use, what legal requirements apply, what cost parameters to follow, and what tone to use in outputs.

Here’s the question ACT raises but doesn’t fully answer: When agentic AI executes a workflow that results in a coverage determination or a binding decision, who owns that outcome? The report states that accountability must remain with licensed professionals. But operationally, what does that look like when the AI assembled the analysis, identified the coverage gaps, and drafted the recommendation?

This is the decision-ownership question the industry will need to wrestle with as agentic AI deployment accelerates. The NAIC’s Big Data and Artificial Intelligence Working Group has focused primarily on carriers so far. But agent-level AI use is coming into regulatory view, and it would be consistent for states to eventually require agencies to disclose AI use in consumer-facing interactions.

The Vendor Problem

ACT’s survey data on top tech challenges is revealing. “Keeping up with the pace of technology change” and “too many disconnected systems” ranked as the leading concerns. “Overwhelmed with vendor options” was also significant.

One Big “I” member captured the frustration, describing vendors who want to sell their tools while often leaving out crucial information, whether by design or because agencies don’t know what questions to ask.

The report predicts vendor consolidation is imminent, which is probably correct. But in the meantime, agencies need a framework for evaluating tools that goes beyond marketing claims. Does the solution automate something that should actually be automated? What does the vendor’s privacy policy say? What’s the realistic return on investment? Is it practical for your agency’s specific workflow?

These are straightforward questions. But without documented processes and clear governance, agencies struggle to answer them.

What the Report Gets Right

ACT’s framing of technology as a leadership conversation, not a back-office concern, reflects the cultural shift happening inside forward-thinking agencies. The report notes that even small and midsized agencies have begun creating hybrid roles like innovation leads and operations strategists. That’s encouraging.

The emphasis on the human element is also well-placed. No ACT interviewee framed AI as eliminating the agent’s role. Instead, they described a rebalancing of work, with back-office functions becoming highly automatable while producers and relationship managers remain critical. The report’s characterization of the future producer as a “strategic orchestrator” who blends human judgment with digital tools is exactly the right vision.

And the data on customer expectations reinforces the opportunity. A Vertafore study cited in the report found that 85% of insurance customers prefer agent help when buying insurance and 90% want help managing a policy. Customers don’t want agents replaced. They want agents who are faster, more informed, and more responsive. AI makes that possible.

The Path Forward

The ACT report paints a clear picture of where the independent agency channel stands: eager to adopt AI but lacking the operational foundations to do it well. The agencies that thrive in 2026 and beyond won’t be the ones with the most tools. They’ll be the ones with documented processes, written AI governance policies, deliberate vendor evaluation frameworks, and clear human-in-the-loop protocols for high-stakes decisions.

If you’re an agency principal or operations leader reading this and recognizing your own agency in these findings, you’re not alone. The 68% vs. 8% gap exists because moving from interest to strategic execution is genuinely hard, especially without a structured roadmap.

That’s exactly why we developed The Independent Insurance Agent’s AI Playbook. It’s a practical, step-by-step guide built specifically for independent agencies, covering everything from your first AI implementation to E&O risk management, data privacy frameworks, and the workflow documentation that this report identifies as the foundational barrier to adoption. It was written by an insurance professional, for insurance professionals.

Details on availability are coming soon. If you want to be among the first to know, follow InsuranceIndustry.AI on LinkedIn.


James W. Moore is the founder of InsuranceIndustry.AI, covering AI developments for insurance industry executives. With over 40 years of experience across carriers, agencies, and wholesale operations, he brings both deep insurance expertise and technology fluency to the industry’s most pressing questions.

AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.