The AI Talent Crisis in Insurance: Building Teams for 2026 and Beyond
A White Paper from InsuranceIndustry.ai
By James W. Moore | February 2026
Executive Summary
Insurance executives are making a $50 to $70 billion bet on artificial intelligence while systematically starving it of the one resource that determines success: skilled people. The result is an industry stuck in what BCG calls “pilot purgatory”—only 7% of insurers have successfully scaled AI beyond pilot programs, with talent gaps identified as the primary bottleneck. Meanwhile, AI leaders are pulling away with 6.1 times the total shareholder return of laggards over the past five years—a wider performance gap than in most other sectors.
The disconnect is striking. Ninety-two percent of insurance workers want generative AI skills training, yet only 4% of insurers are reskilling at the required scale. Human and organizational factors account for 70% of AI scaling challenges among insurers—the barriers are overwhelmingly about people, not technology. This talent crisis is compounding as the U.S. insurance industry faces the potential loss of 400,000 workers through attrition, with 50% of the current workforce projected to retire within the next 15 years.
The consequences are plain: expensive AI tools sit underused, pilot projects stall before they ever reach production, and the competitive gap between talent leaders and laggards widens every quarter.
This white paper provides insurance executives with a practical framework for assessing current talent readiness, building AI-capable teams at every level, and creating workforce strategies that convert AI investment into measurable business outcomes. The carriers, wholesalers, and agencies that master the people side of AI won’t just survive the industry’s transformation—they will lead it.
I. The Scope of the Insurance Talent Crisis
By the Numbers
The financial stakes are enormous and accelerating. The insurance analytics market is projected to grow from $13.29 billion in 2025 to $15.37 billion in 2026, reaching $31.76 billion by 2031—a 15.64% compound annual growth rate. McKinsey estimates that generative AI alone could unlock $50 to $70 billion in insurance industry revenue. Insurance AI spend is expected to grow by more than 25% in 2026.
Yet these massive technology investments are hitting a wall. According to BCG’s Build for the Future 2024 Global Study, about two-thirds of insurers remain stuck in the piloting stage. The 7% that have scaled are spending $25 million or more annually on AI, while most remain in siloed projects with budgets under $5 million. More broadly, McKinsey’s 2025 State of AI survey found that while 88% of organizations now use AI in at least one function, only about a third have managed to scale across the enterprise, and fewer than half can tie significant EBIT impact to those efforts.
The root cause isn’t the algorithms. BCG’s 10-20-70 rule tells the story: only 10% of AI’s value comes from algorithms, 20% from technology and data, and a full 70% comes from people and process transformation.
The Three-Part Talent Challenge
The insurance industry’s AI talent problem manifests in three interconnected ways, each creating friction that prevents technology investments from delivering returns.
The Skills Gap. The insurance workforce lacks the data scientists, ML engineers, and AI product managers needed to build and deploy solutions. At the same time, domain experts—veteran underwriters, claims professionals, actuaries—often don’t understand AI’s capabilities or limitations. IT teams lack insurance context, and executives struggle to evaluate vendor claims or set realistic expectations. Nearly 89% of insurers now prioritize critical thinking and problem-solving as essential future skills, followed closely by data literacy and AI proficiency. Yet only one in three insurers has a formal AI training program in place.
The Career Development Paradox. AI is automating the very tasks that have traditionally built insurance expertise. How do you develop underwriters when AI handles routine submissions? How do you train claims adjusters when AI triages first notice of loss (FNOL)? Entry-level job postings have fallen 29% since January 2024 worldwide. Deloitte’s 2025 Human Capital Trends found that two-thirds of hiring managers believe entry-level hires arrive underprepared, with AI accelerating this problem by reducing on-the-job training opportunities. Meanwhile, 30% of the insurance workforce will reach retirement age by 2030, and the Bureau of Labor Statistics projects approximately 21,500 job vacancies annually over the next decade in claims alone.
The Trust and Adoption Problem. Even when AI tools are deployed, they don’t get used without trust. BCG identifies cultural resistance—particularly the clash between AI’s probabilistic nature and insurance’s demand for actuarial precision—as a fundamental adoption barrier. McKinsey found that 48% of U.S. employees want formal generative AI training from their organizations, but the level of support employees receive falls well short of what they’re seeking. Poor change management creates “shelfware”—technology that was purchased but never effectively adopted.
Why This Matters Now
The window for competitive positioning is closing. McKinsey’s research shows that insurance AI leaders are already compounding their advantages—delivering 6.1 times the total shareholder return of laggards, with measurable impacts including 10 to 20% improvement in sales conversion rates, 10 to 15% premium growth, and 20 to 40% reduction in customer onboarding costs. As BCG concludes, “switching costs are low, and productivity gains are quickly realized. The industry’s new leaders will be the companies that recognize AI as a catalyst for fundamental business transformation.”
2026 represents the inflection point where the industry transitions from pilot programs to production-scale AI. The carriers, wholesalers, and agencies that solve the talent equation now will define the competitive landscape for the next decade.
II. The Two-Team Framework: The Lab and The Crowd
Most insurers make one of two strategic errors when building AI talent: they either try to make everyone an AI expert—diluting focus and wasting resources—or they keep AI locked inside a specialized technical team that never integrates with the business. Both approaches fail.
The most effective approach combines two complementary forces: The Lab (specialized AI capability) and The Crowd (organization-wide AI literacy).
The Lab: Specialized AI Capability
The Lab is your core AI team—typically 5 to 15 people for mid-sized insurers, though the exact size depends on your organization’s scale and ambitions. This team owns AI strategy, builds or customizes AI solutions, manages vendor relationships, and establishes governance frameworks.
Core roles in The Lab include:
The AI Product Manager translates business problems into AI opportunities, owns the product roadmap, and prioritizes use cases based on ROI and feasibility. This person speaks both insurance and technology fluently.
The Data Scientist/ML Engineer builds models, performs feature engineering, conducts experiments, and optimizes algorithms. At scale, you’ll need multiple specialists across different AI domains—natural language processing for claims notes, computer vision for property inspection, predictive analytics for underwriting.
The Data Engineer builds the infrastructure that makes AI possible, managing data pipelines, ensuring data quality, and creating the architecture for model deployment and monitoring.
The AI Ethics and Governance Lead ensures regulatory compliance, manages bias testing, creates documentation for regulators, and develops internal policies around AI use—a role that becomes more critical as nearly half of states have now adopted the NAIC Model Bulletin on insurers’ use of AI.
The AI Training and Change Management Specialist bridges The Lab and The Crowd, designing training programs, supporting adoption, and gathering feedback from end users.
Building The Lab presents three distinct challenges. First, you’re competing for scarce technical talent against tech companies, consultancies, and well-funded startups that can often offer higher compensation and more cutting-edge work. Second, insurance-specific AI expertise is even rarer than general AI skills—you need people who understand both transformers and combined ratios. Third, The Lab must maintain credibility with the business while pushing technical boundaries, requiring diplomatic skills alongside technical depth.
Practical strategies for building The Lab include:
Start with strategic hires in areas where internal development is most critical—typically an AI product manager who can set direction and a data engineer who can build infrastructure. Supplement specialized roles through partnerships with AI vendors, consulting firms, or academic institutions rather than trying to hire every skillset immediately.
Look for “translators”—people with insurance domain expertise who’ve developed an appetite for data and technology. A former underwriter with SQL skills may be more valuable than a PhD data scientist who’s never seen a policy. These hybrid professionals can move fluidly between The Lab and The Crowd.
Create a technical career ladder that allows AI specialists to advance without becoming managers. Many insurers lose technical talent because the only path to higher compensation runs through management roles that these specialists neither want nor are suited for.
Consider “acqui-hiring” by bringing in small AI consulting shops or insurtech teams that have already built insurance-specific AI capabilities. You’re buying a working team rather than assembling one person at a time.
The Crowd: Organization-Wide AI Literacy
While The Lab builds and deploys AI, The Crowd uses it. The Crowd is your entire organization—underwriters, claims handlers, agents, actuaries, customer service representatives, executives. These people won’t build neural networks, but they need to understand what AI can and cannot do, how to work alongside AI tools, and when to trust or question AI recommendations.
The Crowd needs three levels of AI literacy. At the foundational level, everyone should understand basic AI concepts—the difference between rules-based systems and machine learning, what training data means, why AI makes probabilistic rather than deterministic decisions, and what bias in AI systems looks like. At the functional level, people need hands-on skills with the AI tools they’ll actually use—how to prompt generative AI effectively, how to interpret model outputs, when to escalate edge cases to human review. At the strategic level, leaders need to evaluate AI investments, set realistic expectations, identify high-value use cases, and recognize when vendor claims don’t match reality.
Building The Crowd requires systemic training, not just workshops. Accenture’s research on insurance workforce preparation identifies three essential elements. First, role-specific AI training that shows underwriters how AI changes underwriting, not generic “Introduction to AI” courses that try to teach everyone the same content. Second, hands-on practice with low-stakes AI tools before deploying them in production workflows, giving people time to build confidence and understanding. Third, ongoing learning programs that keep pace with AI capabilities, since what’s true about AI in 2026 may be outdated by 2027.
The key insight: The Crowd doesn’t need to understand how AI works under the hood, but they must understand how to work effectively with AI systems. This is the difference between teaching someone the mathematics of internal combustion versus teaching them to drive. Your underwriters don’t need to understand backpropagation, but they absolutely need to know when an AI underwriting recommendation seems wrong and what to do about it.
Practical strategies for building The Crowd include:
Embed “AI champions” in each department—respected domain experts who receive advanced AI training and then support their colleagues. These champions become the first line of support, answering questions and building confidence.
Create internal AI sandboxes where employees can experiment with AI tools in low-risk environments. Let claims adjusters play with AI-assisted claim summaries on closed claims. Let underwriters practice with AI submission triage on historical data.
Celebrate “AI success stories” from frontline employees. When a claims handler uses AI to spot a fraud pattern or an underwriter uses AI to identify a profitable niche, publicize it. Social proof accelerates adoption.
Build AI literacy into performance reviews and promotion criteria. If you want people to develop AI skills, make it clear that these skills affect career progression.
How The Lab and The Crowd Work Together
The magic happens at the interface between The Lab and The Crowd. The Lab builds tools based on The Crowd’s feedback. The Crowd adopts tools more readily when The Lab involves them early. Together, they create a learning loop that compounds over time.
Successful insurers create formal structures for this collaboration. Regular “AI office hours” where domain experts can bring problems to The Lab. Cross-functional project teams that include both Lab members and Crowd representatives. Pilot programs that treat frontline employees as partners in refinement rather than passive recipients of technology.
The most important principle: The Lab’s job is not just to build AI systems but to build AI capacity across the organization. The Crowd’s job is not just to use AI tools but to help The Lab understand what tools would actually create value. When both teams embrace these responsibilities, AI adoption accelerates.
III. Evaluating AI Vendors: The People Question
Insurance executives evaluating AI vendors typically focus on technology capabilities—model accuracy, integration requirements, scalability. These matter, but they miss the determining factor in successful AI implementation: whether the vendor can help you build and support your AI-capable workforce.
When evaluating AI vendors, ask these questions about their approach to your people:
What training and onboarding does the vendor provide? Look for vendors offering role-specific training programs, not just generic product demos. The best vendors understand that their technology succeeds only when your people adopt it, and they invest accordingly in change management support.
How does the vendor support your internal AI capability development? Some vendors operate as black boxes, keeping all AI expertise in-house and making you dependent on them forever. Better vendors actively transfer knowledge, helping you build internal understanding so you can eventually extend or customize solutions independently.
What change management resources does the vendor provide? Implementation should include change management consulting, adoption tracking, and ongoing support—not just technical implementation. Ask for case studies showing how they’ve driven adoption in similar organizations.
How transparent are their AI systems? You need to explain AI decisions to regulators, customers, and internal skeptics. Vendors that can’t or won’t explain how their models reach conclusions create compliance risk and adoption barriers.
What’s their approach to your domain experts’ feedback? The best AI vendors treat your underwriters, claims professionals, and actuaries as partners in refinement, not obstacles to overcome. They should have formal processes for gathering domain expert input and incorporating it into model improvements.
Red flags in vendor conversations include:
Claims that their AI is “fully autonomous” and requires no human involvement. Insurance is a regulated, high-stakes domain where human judgment remains essential. Vendors who promise to eliminate your experts don’t understand your business.
Resistance to sharing information about training data or model architecture. Legitimate concerns about intellectual property are one thing, but vendors who refuse to discuss these topics at all may be hiding data quality problems or bias issues.
No clear path for your team to develop AI literacy. If the vendor’s business model depends on you never understanding how their technology works, you’re building dangerous dependency.
The most successful AI implementations involve vendors who see themselves as partners in building your AI capability, not just suppliers of a technology product. They recognize that their long-term success depends on your team’s ability to use, trust, and eventually extend their solutions.
IV. Regulatory Compliance and the New Talent Requirements
The regulatory landscape around insurance AI is evolving rapidly, creating new talent requirements that many insurers haven’t yet recognized. The NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, released in December 2023, has now been adopted by nearly half of U.S. states. This bulletin requires insurers to establish AI governance frameworks, conduct risk assessments, ensure third-party AI systems comply with insurance laws, and maintain detailed documentation of AI systems and their decision-making processes.
The EU AI Act, which began phased implementation in 2024, creates additional compliance requirements for insurers operating in European markets. Combined with state-level regulations and federal oversight that continues to evolve, insurers face a complex compliance environment that demands specialized expertise.
This regulatory complexity creates three new talent imperatives. First, you need people who can navigate the intersection of insurance regulation, data privacy law, and AI ethics—a rare combination of expertise that requires either developing internal specialists or partnering with law firms and consultancies that understand these issues deeply. Second, you need robust documentation and audit trails for every AI system, which requires data governance professionals who can maintain the paper trail regulators will demand. Third, you need ongoing monitoring and testing for bias, accuracy drift, and unintended consequences, which requires dedicated resources focused on AI system performance and fairness.
Practically, this means The Lab needs explicit regulatory compliance capabilities, not just technical AI expertise. The AI Ethics and Governance Lead role mentioned earlier isn’t optional—it’s a regulatory necessity. This person must understand insurance law, data protection requirements, and AI fairness principles while also being technical enough to evaluate model behavior and identify compliance risks.
The NAIC’s December 2025 statement on the AI Executive Order emphasizes the importance of regulatory coordination and consumer protection in AI deployment. As Fenwick’s analysis of the evolving regulatory landscape notes, insurers should expect continued regulatory scrutiny and requirements around transparency, fairness, and accountability in AI systems.
For many insurers, especially smaller carriers and regional players, building full in-house regulatory compliance expertise may not be practical. The alternative is strategic partnerships with compliance-focused consultancies, participation in industry working groups that share best practices, and careful vendor selection that prioritizes partners with strong regulatory track records. The cost of non-compliance—regulatory sanctions, reputation damage, class-action lawsuits—far exceeds the investment in compliance capabilities.
V. Building Entry-Level Talent Pipelines in an AI-Automated World
One of the most troubling aspects of AI’s impact on insurance is the disruption of traditional career development pathways. For decades, carriers have hired entry-level workers for routine tasks—data entry, basic underwriting decisions, initial claims review—that served as training grounds for more complex work. AI now automates many of these tasks, eliminating the first rungs of the career ladder.
The question becomes: how do you develop the senior underwriters and claims experts of 2035 when AI has automated away the junior roles of 2026?
Some insurers are responding by creating “AI-augmented apprenticeships.” Rather than having new hires spend two years processing straightforward claims to build judgment, they work alongside AI systems on complex claims from day one, with experienced mentors helping them understand when to trust AI recommendations and when to dig deeper. This accelerates development in some ways while creating risk in others—you’re compressing the learning curve, which means both faster advancement for talented people and faster failure for those who can’t keep up.
Other insurers are deliberately preserving “judgment development zones” where AI assistance is limited, forcing newer employees to develop core insurance instincts before leaning on AI tools. A carrier might route simple personal auto claims to AI while routing homeowners claims to junior adjusters, creating structured learning opportunities that build foundational skills.
Academia and industry partnerships are emerging as another solution. Insurance Thought Leadership’s analysis of next-generation talent attraction emphasizes the importance of university partnerships, internship programs that expose students to real AI implementations, and industry certifications that validate AI competency alongside traditional insurance expertise. Some insurers are sponsoring insurance analytics programs at universities, essentially creating a direct pipeline of graduates who understand both insurance fundamentals and modern AI capabilities.
The most forward-thinking approach: redesign entry-level roles around AI collaboration rather than trying to preserve pre-AI workflows. This means hiring for different skills—less emphasis on the ability to manually process high volumes of routine work, more emphasis on critical thinking, domain curiosity, and comfort with technology. It means creating training programs that teach AI literacy from day one rather than treating it as an advanced skill. It means accepting that the insurance professionals of the future will need shorter time to functional competency but longer time to genuine expertise.
The challenge is particularly acute for independent agencies, which lack the resources of large carriers but still need to develop talent. IA Magazine’s February 2026 analysis highlights how agencies are addressing the talent crisis through producer development programs that emphasize relationship skills and strategic advice rather than policy comparison—areas where human judgment remains valuable even as AI handles quoting and binding.
The uncomfortable truth: not every carrier or agency will solve this problem. Some will continue to rely on experienced professionals who developed their skills in the pre-AI era, accepting that they’re not building the next generation of expertise. When those veterans retire, these organizations will face a crisis. Others will invest now in developing AI-native insurance professionals, accepting short-term costs for long-term sustainability.
VI. Strategic Takeaways: A Phased Approach to Building AI Talent
Insurance executives reading this white paper face a practical question: where do we start? The talent challenges outlined above are real, but they don’t require solving all at once. The most successful approach is phased implementation that builds capability progressively while delivering measurable returns at each stage.
30-Day Priority: Assessment and Foundation
Your immediate focus should be honest assessment of current state and quick wins that build momentum. Conduct a talent inventory that maps your current workforce’s AI capabilities across both specialized technical skills and general AI literacy. Identify which roles will be most affected by AI in the next 12 to 24 months—these are your priority areas for capability building. Simultaneously, establish executive alignment on AI talent strategy, ensuring your C-suite understands that AI success depends on workforce development, not just technology procurement.
On the quick wins front, launch AI literacy training for executives first. If your leadership team doesn’t understand AI fundamentals, they can’t make informed decisions about AI investments or set realistic expectations. Identify low-hanging fruit for AI pilot projects that can demonstrate value while also serving as learning opportunities for both The Lab and The Crowd. Create an AI governance framework even before you have extensive AI deployments—the framework guides decision-making and prevents reactive policy creation later.
60-Day Development: Build Core Capabilities
With assessment complete and quick wins underway, focus on building foundational team structures and processes. If you don’t have core Lab roles filled—particularly an AI product manager who can prioritize use cases and a data engineer who can build infrastructure—make these hiring priorities. Don’t wait for perfect candidates; hire people with 70% of the required skills and commit to developing the rest.
Launch role-specific AI training for the departments most affected by current or planned AI implementations. If you’re deploying AI-assisted underwriting, your underwriting team needs hands-on training before go-live, not after. This training should be practical and job-specific—how this AI tool affects this workflow—rather than theoretical.
Establish formal feedback loops between The Lab and The Crowd. Create regular forums where domain experts can share what’s working and what’s not with AI tools. Build processes for capturing this feedback and incorporating it into system refinements. Many AI implementations fail not because the technology is inadequate but because there’s no mechanism for continuous improvement based on user experience.
Begin building relationships with AI vendors, consultancies, and academic institutions that can supplement your internal capabilities. You don’t need to hire every role—strategic partnerships can fill gaps more quickly and cost-effectively. Look for partners who prioritize knowledge transfer rather than creating dependency.
90-Day Transformation: Scale and Sustain
By this point, you should have pilot projects delivering measurable results, core Lab capabilities in place, and initial Crowd training completed. Now the focus shifts to scaling what works and building sustainable capability development.
Expand AI implementations from pilots to production, incorporating lessons learned from early deployments. This expansion requires not just technical scaling but organizational change management—ensuring that processes, policies, and performance metrics align with AI-augmented workflows. Create career pathways that value AI competency, building AI skills into job descriptions, performance reviews, and promotion criteria across the organization.
Develop internal AI training programs that can onboard new hires and upskill existing employees without relying entirely on external providers. These programs should evolve as your AI capabilities grow, ensuring your workforce keeps pace with technology advancement. Establish metrics that track AI adoption, capability development, and business outcomes, creating visibility into whether your talent strategy is working.
Perhaps most importantly, formalize your governance framework and compliance processes before regulatory scrutiny intensifies. With nearly half of states having adopted NAIC AI guidelines and federal regulation continuing to evolve, proactive compliance is both cheaper and safer than reactive scrambling.
Beyond 90 Days: Continuous Evolution
AI talent strategy isn’t a project with an endpoint—it’s an ongoing organizational capability that must evolve as technology advances, regulation changes, and competitive dynamics shift. The insurers that succeed will be those that treat AI talent development as a core strategic priority, not a one-time initiative. They will maintain The Lab as a center of excellence that pushes boundaries while simultaneously raising The Crowd’s baseline AI literacy year after year. They will view AI vendors as capability-building partners rather than just technology suppliers. They will create career pathways that attract young talent excited about working at the intersection of insurance and AI. Most importantly, they will recognize that AI is not replacing human judgment in insurance—it’s amplifying it for those who develop the skills to work alongside AI systems effectively.
Conclusion: The Window Is Closing
The insurance industry has reached a defining crossroads. The technology exists, the business case is proven, and the competitive advantages of AI adoption are measurable and substantial. What separates winners from losers isn’t access to algorithms—it’s the ability to build organizations that can effectively deploy, use, and continuously improve AI systems.
The talent crisis is real, but it’s also solvable. The strategies outlined in this white paper—The Lab and The Crowd framework, phased implementation, vendor evaluation focused on capability building, proactive regulatory compliance, reimagined career pathways—are accessible to insurers of all sizes and budgets.
The carriers, wholesalers, and agencies that master the people side of AI won’t just survive the industry’s transformation. They will lead it.
Sources
BCG, “Insurance Leads in AI Adoption. Now It’s Time to Scale,” September 2025. https://www.bcg.com/publications/2025/insurance-leads-ai-adoption-now-time-to-scale
BCG, “To Win with AI, Insurers Must Go Beyond the Algorithm,” October 2025. https://www.bcg.com/publications/2025/to-win-with-ai-insurers-must-go-beyond-algorithm
BCG, “The Widening AI Value Gap: Build for the Future 2025,” September 2025. https://media-publications.bcg.com/The-Widening-AI-Value-Gap-Sept-2025.pdf
McKinsey & Company, “The Future of AI for the Insurance Industry,” July 2025. https://www.mckinsey.com/industries/financial-services/our-insights/the-future-of-ai-in-the-insurance-industry
McKinsey & Company, “The State of AI in 2025: Agents, Innovation, and Transformation,” November 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
McKinsey & Company, “AI in Insurance: Understanding the Implications for Investors,” February 2026. https://www.mckinsey.com/industries/financial-services/our-insights/ai-in-insurance-understanding-the-implications-for-investors
Accenture Insurance Blog, “3 Ways to Prepare the Insurance Workforce for the Generative AI Era,” July 2025. https://insuranceblog.accenture.com/3-ways-to-prepare-the-insurance-workforce-for-the-generative-ai-era
Deloitte, “AI, Demographic Shifts, and Agility: Preparing for the Next Workforce Evolution,” August 2025. https://www.deloitte.com/us/en/insights/topics/talent/strategies-for-workforce-evolution.html
World Economic Forum, “Future of Jobs Report 2025,” January 2025. https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf
Workday, “How AI Can Help Solve the Insurance Industry’s Talent Crisis,” January 2024. https://blog.workday.com/en-us/how-ai-can-help-solve-insurance-industrys-talent-crisis.html
NAIC, “Model Bulletin on the Use of Artificial Intelligence Systems by Insurers,” December 2023. https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-ai-model-bulletin.pdf.pdf
NAIC, “Statement on AI Executive Order,” December 2025. https://content.naic.org/article/statement-national-association-insurance-commissioners-naic-ai-executive-order
Fenwick, “Tracking the Evolution of AI Insurance Regulation,” December 2025. https://www.fenwick.com/insights/publications/tracking-the-evolution-of-ai-insurance-regulation
Holland & Knight, “The Implications and Scope of the NAIC Model Bulletin on the Use of AI by Insurers,” May 2025. https://www.hklaw.com/en/insights/publications/2025/05/the-implications-and-scope-of-the-naic-model-bulletin
Quarles Law, “Nearly Half of States Have Now Adopted NAIC Model Bulletin,” March 2025. https://www.quarles.com/newsroom/publications/nearly-half-of-states-have-now-adopted-naic-model-bulletin-on-insurers-use-of-ai
Insurance Journal, “The Insurance Industry’s Talent Crunch: Attracting and Retaining Gen Z,” March 2025. https://www.insurancejournal.com/magazines/mag-features/2025/03/24/816425.htm
Insurance Thought Leadership, “Attracting Next-Generation Talent to Insurance,” December 2025. https://www.insurancethoughtleadership.com/talent-gap/attracting-next-generation-talent-insurance
IA Magazine, “How the Insurance Industry Is Tackling the Talent Crisis,” February 2026. https://www.iamagazine.com/2026/02/01/how-the-insurance-industry-is-tacking-the-talent-crisis/
Hanover Search, “6 Risk Factors Facing the Insurance Industry in 2025,” November 2025. https://www.hanoversearch.com/blog/top-risk-factors-facing-the-insurance-industry-in-2024/
Roots AI, “10 Insurance AI Predictions for 2026,” December 2025. https://www.roots.ai/blog/10-insurance-ai-predictions-2026-forecasting-shift-from-promise-performance
eMarketer, “BCG Study Examines AI Adoption Among Insurers,” December 2025. https://www.emarketer.com/content/insurers-stuck-ai-pilot-purgatory
The Talent Pool, “2026 Insurance Hiring Trends and Forecast,” January 2026. https://www.thetalentpool.ai/blogs/2026-insurance-hiring-trends-and-forecast/
Sonant AI, “Insurance Agency Talent Shortage,” 2025. https://www.sonant.ai/blog/insurance-agency-talent-shortage
AgentSync, “2026 Insurance Industry Predictions: AI Edition,” December 2025. https://agentsync.io/blog/technology/2026-insurance-industry-predictions-ai-edition
iNube Solutions, “The 5 AI Pitfalls That Insurers Didn’t See Coming Within the AI Rut in 2025,” 2025. https://inubesolutions.com/resource/the-5-ai-pitfalls-that-insurers-didnt-see-coming-within-the-ai-rut-in-2025/
James W. Moore is the founder of InsuranceIndustry.ai and AgencyEvolved, with over 40 years of experience across insurance carriers, agencies, and wholesalers. He holds a degree in Finance with a specialization in Insurance and brings a background in IT management, sales, marketing, and website development to his analysis of AI’s impact on the insurance industry.
Subscribe to the InsuranceIndustry.ai Weekly AI Insights Newsletter for ongoing coverage of AI developments that matter to insurance professionals.
AI Disclaimer: This white paper was created with assistance from artificial intelligence technology. While the content is based on factual information from the cited source material, readers should verify all statistics, regulatory details, and vendor claims directly with the original sources before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.