Your Best Underwriters Are Leaving. What Happens to What They Know?

AI Can Now Capture Expert Judgment Before It Walks Out the Door. The Question Is Whether Carriers Will Pay for It.

By James W. Moore | InsuranceIndustry.AI


Earlier this month, painter Michael Hafftka did something unusual. He published his entire life’s work as a structured AI dataset on Hugging Face, the open-source machine learning platform. Roughly 3,000 to 4,000 documented works, spanning fifty years, complete with metadata: medium, dimensions, provenance, collection history. His paintings hang in the Metropolitan Museum of Art, MoMA, SFMOMA, and the British Museum.

Hafftka’s reasoning was blunt: “I want my work to have a future and the future involves AI. I would rather engage with that on my own terms than wait for it to happen to me.”

He isn’t selling it. The dataset carries a noncommercial license. What he’s doing is more interesting than monetization. He’s turning a career’s worth of accumulated judgment into structured, retrievable, machine-readable knowledge. He’s making sure his expertise survives him.

The insurance industry should be paying attention. Not because carriers need to publish datasets on Hugging Face, but because they face a version of the same problem Hafftka solved, and most of them haven’t started working on it.

The Numbers Are Stark

The insurance industry’s demographic math has been discussed for years, but it hasn’t gotten any less alarming with repetition.

The Bureau of Labor Statistics estimates that over the next 15 years, 50 percent of the current insurance workforce will retire, leaving more than 400,000 positions unfilled. The median age of insurance industry employees sits at 44 to 45, above the national workforce median of 42.2. One in four insurance professionals is already 55 or older. According to one recent analysis, the ratio of retirement-age employees to young entrants in insurance stands at roughly six to one. The Jacobson Group/Aon Q1 2026 Insurance Labor Market Study confirms that technology, claims, and underwriting roles are expected to see the greatest demand growth, while actuarial, executive, and analytics positions remain the hardest to fill for the fifth consecutive survey.

In London, the picture is similarly acute. At specialty insurer Convex, Head of Talent & Growth Suzanne Bray recently noted that more than a quarter of underwriters are now over 50. “Following Covid, a lot of experienced professionals left the workforce, and at the same time, internship and grad programs were frozen. So, we’ve had a talent gap at both ends.”

There are early signals that AI itself may be accelerating the transition. The Jacobson/Aon Q1 2026 study found that 43 percent of carriers plan to maintain their current staff size this year, a 15-year high. Jeff Rieder, head of benchmarking at Aon’s Strategy and Technology Group, noted that companies may be “somewhat pausing on hiring plans to see just how artificial intelligence will be adopted within the organization.” Job openings in insurance and finance fell to 138,000 in December 2025, the lowest monthly level in a decade. If AI is reducing the need for some positions while retirements eliminate others, the window for knowledge transfer is narrowing from both directions.

This isn’t just a staffing problem. It’s a knowledge problem. When a senior underwriter with thirty years of specialty experience walks out the door, they take with them pattern recognition that no policy administration system has ever captured. They take the instinct that says “this submission looks clean but something feels wrong.” They take the memory of the last three times a particular class of risk blew up and what the early warning signs looked like.

Most Organizations Aren’t Even Trying

A 2025 APQC survey of 1,000 knowledge management professionals, conducted in partnership with eGain, found that organizations expect an average of 51 percent of their workforce to retire or leave within five years. That finding alone should concentrate attention. But the response data is worse.

A staggering 92 percent of organizations do not capture knowledge on a regular basis. Forty-one percent said they “rarely or never” even attempt to collect expertise from departing employees. Among those that do make the effort, 30 percent wait until the employee is literally heading for the exit to begin. Another 23 percent conduct last-minute elicitation interviews that are, by their own admission, rushed and shallow. Fifty-three percent of organizations still rely on manual capture and documentation methods for whatever knowledge transfer they do attempt.

Meanwhile, a Deloitte study estimated that Fortune 500 companies lose approximately $31.5 billion annually due to knowledge attrition, a figure expected to double by 2030. The Panopto Workplace Knowledge and Productivity Report found that a firm with 1,000 employees loses approximately $2.4 million per year in productivity due to knowledge inefficiencies alone. MIT research suggests that formal documentation represents only about 20 percent of what employees know. The remaining 80 percent exists as tacit knowledge: unwritten rules, contextual understanding, and experience refined through years of problem-solving.

For insurance, which runs on expert judgment more than almost any other industry, that 80 percent figure should be deeply unsettling.

What Convex Discovered About What Underwriters Actually Know

One of the most revealing recent experiments came from Convex, the London-based specialty insurer. The company partnered with a behavioral scientist to study what experienced underwriters actually do when preparing for client meetings. The underwriters themselves thought the process involved about five steps.

It turned out to be fifteen.

Fifteen nuanced, largely unconscious processes that veteran underwriters execute automatically, honed over decades of practice. The kind of knowledge that never appears in a procedures manual because the people who have it don’t realize they’re doing it. This is what cognitive scientists call “tacit knowledge,” and it’s the heart of the problem. Experts are often the worst people to ask about their own expertise, because so much of it has become invisible to them through years of repetition.

This is also why traditional knowledge transfer methods fail. You can’t capture thirty years of underwriting judgment in a two-week handover or a PowerPoint deck. The knowledge isn’t in the procedures. It’s in the exceptions to the procedures, the situations where the textbook answer is wrong, and the subtle cues that tell an experienced professional when to override the model.

The Apprenticeship Model and Its Limits

To understand why AI-facilitated knowledge capture matters, it helps to name what carriers actually do today. And what they do, in most cases, is run an informal apprenticeship system that hasn’t fundamentally changed in decades.

A junior underwriter gets hired. They’re assigned to shadow a senior underwriter, or placed on a team where they’re expected to absorb expertise through proximity and repetition. Over a period of years, they gradually take on more complex submissions, check their instincts against their mentor’s judgment, and slowly build the pattern recognition that separates a competent underwriter from an exceptional one. The BLS notes that beginning underwriters “typically work under the supervision of senior underwriters for up to 12 months” before working independently, and even then, it takes years more to develop genuine expertise in complex lines. The industry treats this as a knowledge transfer methodology. It’s not. It’s a hope.

The apprenticeship model has always been fragile. It depends on the senior underwriter being a willing and effective teacher, two qualities that don’t necessarily correlate with underwriting talent. It depends on the junior underwriter being assigned to the right mentor at the right time. It depends on sufficient overlap between the two careers, which the retirement wave is rapidly eliminating. And it depends on years of sustained, close-proximity interaction in an era when hybrid work, reorganizations, and M&A activity routinely disrupt those relationships. The Jacobson/Aon Q1 2026 study notes that 71 percent of carriers now expect most employees to work a hybrid schedule, making the old “sit next to Bob for three years” model even less viable.

Even when the model works perfectly, it’s fundamentally unscalable. One senior underwriter can meaningfully mentor one, maybe two junior colleagues at a time. When that senior underwriter retires, the knowledge transfer is only as complete as whatever the mentee happened to absorb over the preceding years. Whatever the mentee didn’t pick up is gone. Whatever the senior underwriter couldn’t articulate (remember: fifteen steps, not five) was never transmitted at all.

The industry has operated this way for so long that it feels like a natural law. But it’s worth stating plainly: carriers are running their most critical knowledge transfer process on an informal, unscalable system with no documentation, no quality control, and no fallback when it fails. They would never accept that lack of rigor in their actuarial models or their claims processes. They accept it in knowledge transfer because they’ve never had a better option.

Now they do.

It’s Not Just Underwriting

Underwriting gets the most attention in these discussions, but the same knowledge-loss risk runs across the insurance value chain.

Complex claims adjusting. A senior workers’ compensation adjuster knows which medical providers in a given region drive up costs, which attorneys signal litigation early, and which injury patterns predict prolonged claims. The BLS projects about 21,600 openings per year for claims adjusters over the coming decade, almost entirely from retirements and transfers. Each departure takes years of local market intelligence with it.

Reserving and actuarial judgment. Actuarial roles have been the hardest to fill in the industry for five consecutive Jacobson/Aon surveys. Senior actuaries know which lines develop adversely in specific conditions, which assumptions have historically proven too optimistic, and when to override model output based on market intelligence. The U.S. P&C industry reported $15.8 billion in adverse prior-year reserve development for casualty lines in 2024, the highest on record for those segments according to Milliman. The value of experienced reserving judgment is not theoretical.

Regulatory and compliance navigation. With 23 states plus Washington D.C. having adopted the NAIC’s AI Model Bulletin and a draft model law on third-party AI oversight anticipated in 2026, compliance expertise is becoming more specialized and more valuable. The professionals who know how to navigate multi-state regulatory frameworks and translate evolving AI governance requirements into operational reality represent another layer of institutional knowledge at risk.

Broker and distribution relationships. In specialty and wholesale markets, the relationship between an underwriter and their broker network is itself a form of institutional knowledge. Which brokers produce quality submissions? Which ones are reliable when losses develop? Which accounts have history that doesn’t appear in the data? This intelligence is almost entirely undocumented.

The pattern is the same in every case: critical knowledge lives in the heads of experienced professionals, transfers informally if it transfers at all, and disappears permanently when those professionals leave.

Enter the AI Interrogator

Here’s where the technology has genuinely changed the game.

Large language models are, it turns out, remarkably well-suited to the specific task of expert knowledge elicitation. They can conduct long, structured interviews. They can follow conversational threads wherever they lead. They can probe for edge cases, ask “what about when that doesn’t work?” and circle back to inconsistencies. They can do this for hours without fatigue, ego, or political sensitivity.

Imagine the scenario: a senior specialty underwriter, six months from retirement, sits down with an AI system for structured sessions over her final quarter. The system walks her through scenarios drawn from real submissions in her book of business. “You’re looking at a habitational risk in coastal Florida. The loss history is clean, but the roof is 15 years old and the insured just switched from an admitted carrier. Walk me through your thinking.”

She doesn’t give the textbook answer. She gives the real answer. The factors she weighs that aren’t in the rating model. The broker behaviors that make her skeptical. The combinations of characteristics that, individually, look fine but together signal trouble. The system captures not just what she decides, but why, and under what conditions her reasoning changes.

Or consider the claims side. A senior complex claims examiner, three months from retirement, works through a scenario: “You’ve got a workers’ comp claim, soft tissue back injury, six months in. The claimant’s treating physician is recommending surgery, the employer is pushing for an independent medical examination, and the claimant just retained counsel. What are you watching for?” The examiner walks through the signals that predict whether this claim settles for $40,000 or spirals to $400,000. She explains why certain attorney-physician combinations in certain jurisdictions change her reserve immediately. She identifies the behavioral patterns in claimant communication that, in her experience, indicate either genuine distress or strategic positioning. None of this is in the claims manual.

The result isn’t a replacement for the underwriter or the adjuster. It’s a structured corpus of expert reasoning that can inform training for junior underwriters, provide context for AI decision-support tools, or simply preserve institutional memory that would otherwise evaporate on her last day.

This isn’t speculative. The concept is already being operationalized in other industries. Sandia National Laboratories began a formal knowledge management pilot in 2019 to capture tacit knowledge from experts retiring from the Nuclear Energy Fuel Cycle program. The program used focus groups, multi-day workshops, four-hour deep-dive sessions, and structured interviews with retiring employees, all recorded, transcribed, tagged, and stored in a searchable archive. Within Sandia’s 180-person nuclear energy program, the internal knowledge archive receives about 2,000 visits per month. As senior manager Tito Bonano put it: “The expertise and experiences of people like myself…walks away when we leave the organization. We had to act to stop the bleeding of tacit knowledge.”

The International Atomic Energy Agency has published formal technical guidance on tacit knowledge retention for nuclear organizations, noting that approximately 55 percent of staff in some nuclear operations may retire within 15 years. The Tennessee Valley Authority assessed the expertise of more than 2,000 positions and built structured knowledge retention programs after concluding that 30 percent of its workforce (about 4,000 employees across its coal, hydroelectric, and nuclear facilities) could retire within five years.

If nuclear power plants and national laboratories consider this problem urgent enough to build formal programs around it, the insurance industry’s continued reliance on informal mentorship should be embarrassing.

The University of Vermont now offers a Knowledge Transfer and Succession Planning Certificate specifically addressing AI-driven strategies for expertise capture. The program focuses on what they call “legacy interviews,” structured conversations designed to capture how experts think, not just what they produce. The concept of AI-facilitated knowledge elicitation has moved from academic theory to practical methodology.

The Incentive Problem (and How to Solve It)

Technology isn’t actually the hard part. Incentives are.

Active employees often resist structured knowledge capture because it feels like they’re training their replacement. That instinct isn’t irrational. In many organizations, being the “go-to” person is a significant source of job security and professional identity. Documenting your expertise can feel like diminishing your value. Research on knowledge hoarding identifies several drivers: fear of losing status, internal competition, resentment toward the organization, and simple lack of time or tools.

But retiring employees are a different proposition entirely. Someone who has already decided to leave has a fundamentally different relationship to their institutional knowledge. They don’t need to protect it. Many, in fact, want to see it preserved. They’ve spent decades building expertise and they don’t want it to disappear.

The key is making it worth their while. Consider three compensation models:

The knowledge premium. Increase the retiring employee’s final-year compensation by 10 to 15 percent, contingent on participation in a structured knowledge extraction program during their last three to six months. For a senior commercial underwriter (average salary in the range of $110,000 to $130,000 according to Salary.com and ZipRecruiter data), that’s an additional $11,000 to $19,500. For a senior specialty underwriter at a major carrier, where total compensation can reach $150,000 to $200,000 or more, the premium could run $15,000 to $30,000.

The exit bonus. Offer an additional lump sum equivalent to three to six months’ salary, payable upon completion of a documented knowledge transfer program. This has the advantage of tying compensation directly to deliverables.

The early retirement sweetener. Waive one or more years of service requirements for full retirement benefits in exchange for participation in a knowledge capture program. This reframes the entire transaction. You’re not asking them to train their replacement. You’re offering them a better exit in exchange for something valuable.

The Cost-Benefit Math

Run the numbers from the carrier’s perspective. The economics of knowledge capture become clear quickly, even using conservative estimates.

What knowledge loss costs. A senior specialty underwriter’s accumulated judgment allows them to spot adverse selection, read broker behavior, and recognize submissions that look actuarially sound but carry hidden correlation risk. Consider a mid-size commercial lines carrier with a $500 million book of business and 40 senior underwriters approaching retirement within the next five years. If even a fraction of those departures result in degraded risk selection, the impact compounds rapidly. In commercial auto alone, where the industry posted a $4.9 billion underwriting loss in 2024 (its 14th consecutive year of losses), a single large adverse verdict can run into the millions. The U.S. P&C industry reported $15.8 billion in adverse prior-year reserve development for casualty lines in 2024, the highest on record. Much of this reflects exactly the kind of judgment failure that occurs when experienced professionals are replaced by less experienced ones working with incomplete information.

What a knowledge capture program costs. For a carrier with 40 senior professionals approaching retirement, a structured program might look like this (these are illustrative estimates based on the salary data and program structures discussed above):

  • Knowledge premiums for 40 retiring professionals at an average of $15,000 each: $600,000
  • AI-facilitated interview platform and administration (estimated): $100,000 to $200,000 annually
  • Program management (one dedicated FTE plus partial support): $150,000 to $200,000
  • Knowledge structuring, tagging, and integration into decision-support tools: $100,000 to $150,000 annually

Total estimated annual cost: roughly $950,000 to $1.15 million.

What the return looks like. Against that investment, the carrier needs to prevent only a handful of adverse outcomes that an experienced underwriter would have avoided. If the program’s captured knowledge helps junior underwriters avoid even two or three large losses per year that a less experienced professional would have written, the return is measured in multiples, not percentages. The Panopto research found that knowledge inefficiencies cost firms with 1,000 employees roughly $2.4 million annually. For a carrier with several thousand employees across underwriting, claims, and actuarial functions, the addressable cost of knowledge loss is almost certainly in the tens of millions.

These are estimates, not audited figures. Every carrier’s math will differ based on their size, specialty mix, and retirement profile. But the directional case is overwhelming: structured knowledge capture is cheap relative to the cost of the knowledge it preserves.
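For readers who want to stress-test the math with their own assumptions, the back-of-envelope model above can be sketched in a few lines. All figures are the article’s illustrative estimates, and the $500,000 average avoided loss is an assumption introduced here for the break-even calculation, not a benchmark:

```python
# Illustrative model of the knowledge capture program economics described
# above. Every number is an estimate from the article or a stated assumption;
# substitute your own carrier's figures.

def program_cost(
    n_retirees: int = 40,
    premium_per_retiree: float = 15_000,      # knowledge premium, per retiree
    platform=(100_000, 200_000),              # AI interview platform, annual
    management=(150_000, 200_000),            # one FTE plus partial support
    structuring=(100_000, 150_000),           # tagging / integration, annual
):
    """Return (low, high) estimated annual program cost."""
    fixed = n_retirees * premium_per_retiree
    low = fixed + platform[0] + management[0] + structuring[0]
    high = fixed + platform[1] + management[1] + structuring[1]
    return low, high

def break_even_losses(annual_cost: float, avg_avoided_loss: float = 500_000):
    """Avoided adverse outcomes per year needed to cover the program cost.

    The $500k average avoided loss is a hypothetical input, not industry data.
    """
    return annual_cost / avg_avoided_loss

low, high = program_cost()
print(f"Estimated annual cost: ${low:,.0f} to ${high:,.0f}")
print(f"Break-even at the high end: {break_even_losses(high):.1f} avoided losses per year")
```

At the article’s assumptions this reproduces the $950,000 to $1.15 million range, and shows that even a high-end program pays for itself if captured knowledge helps junior staff avoid roughly two to three large losses a year.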

Why Retirees Are Actually Better Sources

There’s a counterintuitive advantage to capturing knowledge from people who have already left or are about to leave. They have no political incentive to shade their answers.

Active employees manage up, protect territory, and navigate internal politics. A current underwriter might hesitate to say “I never trusted the numbers on that program because the broker is known for massaging loss runs.” A retiree has no such constraints. They can be blunt about which risks they always avoided and why, which relationships they considered unreliable, and which institutional assumptions they thought were wrong.

This candor makes the captured knowledge more valuable, not less. The unvarnished version of expert judgment, including its skepticism and biases, is precisely what’s hardest to transmit through formal channels and most useful to preserve.

Sandia’s knowledge management team found this dynamic at work in their nuclear energy program. They discovered that recording interviews with recently retired employees was particularly productive because retirees could speak freely about institutional dynamics, historical decision-making context, and lessons learned from failures in ways that current employees sometimes could not.

The Regulatory Dimension: Data Provenance as Compliance Asset

There’s an angle to knowledge capture that most carriers haven’t considered: regulatory compliance.

The NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in 2023, requiring insurers to implement written AI governance programs emphasizing transparency, fairness, and accountability. By late 2025, 23 states plus Washington D.C. had adopted the bulletin or similar guidance, with more following. The NAIC’s Big Data and AI Working Group has been actively exploring whether a comprehensive AI model law is necessary, with a draft model law on third-party AI oversight anticipated in 2026. Colorado’s AI Act, effective February 2026, will require insurers to follow governance and testing procedures to prevent unfair discrimination.

The core regulatory thrust is explainability. Regulators want to know how AI systems reach decisions, what data they were trained on, and whether the process can be audited. The NAIC Model Bulletin specifically requires “transparency and explainability” and the ability to explain “how inputs lead to specific outputs or decisions.” Insurers should expect regulatory examinations to include questions about AI governance frameworks, model documentation, and the provenance of training data.

This is where captured expert knowledge becomes a compliance asset, not just an operational one. A carrier that can demonstrate its AI decision-support tools were informed by structured, documented expert reasoning from its own senior professionals has a fundamentally different compliance posture than a carrier relying entirely on vendor models trained on undisclosed data. When a regulator asks “why did your system recommend declining this risk?” the carrier that can point to documented expert judgment as part of the model’s training has a credible answer. The carrier using a black-box vendor model does not.

As Fenwick’s regulatory analysis noted, insurers should prepare for “documentation of model origins and standards for explainability.” Expert knowledge capture, properly structured and documented, provides exactly this kind of audit trail.

What Other Industries Have Learned

Insurance is not the first industry to face this problem. It is, however, unusually late in addressing it.

Nuclear power. The International Atomic Energy Agency has been publishing formal guidance on tacit knowledge retention since the early 2000s, recognizing that nuclear plant safety depends on experienced operators whose judgment can’t be fully captured in procedures manuals. Sandia National Laboratories’ pilot program, built on ISO 30401:2018 standards and Knowledge Management Institute best practices, provides a practical template: focus groups to identify what knowledge matters most, structured workshops led by subject matter experts, deep-dive sessions on specific topics, and recorded interviews with departing and recently departed experts. The IAEA’s framework explicitly distinguishes between explicit knowledge (what can be written in a manual), implicit knowledge (what could be articulated if someone asked the right questions), and tacit knowledge (what the expert doesn’t even realize they know). All three require different capture methodologies.

Federal energy infrastructure. The Tennessee Valley Authority’s program assessed more than 2,000 positions for knowledge-loss risk after recognizing that 30 percent of its workforce could retire within five years. TVA’s key insight was that not all knowledge is equally worth capturing. They focused on knowledge that was both critical and undocumented, rather than attempting a comprehensive “brain dump.” As one program manager put it: “You don’t necessarily need to build the entire encyclopedia. What we do need is to find a few key pages and make sure we don’t lose those.”

Manufacturing and utilities. McKinsey has flagged that 57 percent of institutional knowledge in established “brown-stack” industries is at risk in the coming decade. The American Society for Training and Development found that 68 percent of companies in these sectors have no formal knowledge transfer program. eGain’s AI knowledge platform is already being deployed in facilities management and utilities, with one Fortune 100 manufacturer reporting that it had “spent years worrying about machinery depreciation while ignoring the depreciation of our knowledge assets.”

The insurance industry shares the key characteristics of these sectors: long-tenured workforces, expertise that takes decades to develop, high consequences for poor decisions, and knowledge that exists primarily in the heads of experienced professionals. The difference is that nuclear power, energy, and manufacturing have acknowledged the problem and built programs to address it. Most carriers have not.

The Strategic Disconnect

Carriers are collectively spending billions on AI initiatives. They are licensing vendor platforms, hiring data science teams, and building machine learning pipelines. The Jacobson Group/Aon Q1 2026 study confirms that technology roles remain the insurance industry’s top talent need, and companies are pausing hiring in other areas partly to assess how AI will improve various functions.

But what are they training these AI systems on? Structured data from policy administration systems, claims databases, and rating engines. Important, certainly. But it’s the 20 percent. It’s the documented, explicit knowledge. The 80 percent, the tacit knowledge that lives in the heads of experienced professionals, isn’t being captured at all. Carriers are building sophisticated AI tools and feeding them incomplete information. It’s like buying a Formula 1 car and fueling it with regular unleaded.

That disconnect should keep insurance executives awake at night.

The data provenance question makes it even more pointed. As AI systems take on larger roles in underwriting and claims decisions, the quality and origin of their training data becomes a competitive differentiator. A carrier whose AI tools are informed by structured, proprietary expert knowledge has a defensible advantage that can’t be replicated by licensing the same vendor platform everyone else uses. That proprietary knowledge base becomes a moat.

Meanwhile, a painter in New York figured this out on his own. Michael Hafftka understood that his most valuable contribution to the future wasn’t any single painting. It was the structured, documented, machine-readable record of fifty years of looking at the human figure. He didn’t wait for someone to scrape his work off the internet. He took control of the process.

Carriers could learn something from that.

What This Looks Like in Practice

A realistic implementation doesn’t require a massive technology investment. It requires a deliberate program with five components:

Identify the expertise at risk. Map your most experienced professionals against their retirement timelines. Focus first on specialty underwriting, complex claims, actuarial reserving, and any role where judgment materially affects outcomes. TVA’s approach offers a useful model: prioritize knowledge that is both critical to operations and currently undocumented. Not everything needs to be captured. The highest-value targets are the people whose departure would leave a gap that can’t be filled by reading a manual.

Design the capture process. Use AI-facilitated structured interviews built around real scenarios from the expert’s own book of business. The best knowledge elicitation isn’t abstract. It’s grounded in specific decisions the expert has actually made. Sandia’s methodology of combining focus groups, deep-dive workshops, and individual interviews offers a proven multi-format approach. Record reasoning, not just conclusions. Capture the conditions under which the expert would change their decision.

Create the right incentives. Compensation should reflect the genuine value of what’s being captured. An extra $15,000 to $30,000 for a retiring senior underwriter whose knowledge prevents a single large loss is the best return on investment in the building. Structure the incentive so participation feels like recognition, not extraction. Start knowledge capture six to twelve months before the planned retirement date, not in the final two weeks.

Make the knowledge actionable. Captured expertise has no value sitting in a transcript archive. It needs to flow into onboarding programs, decision-support tools, and eventually AI training data. Sandia’s approach of transcribing, tagging, and building a searchable archive with roughly 2,000 monthly visits from a 180-person team shows what engagement looks like when the knowledge is accessible and relevant. The goal isn’t documentation for its own sake. It’s institutional memory that compounds over time.

Document for compliance. Structure the capture program so that its outputs can serve as evidence of AI model provenance and governance when regulators come asking. Every structured interview, every documented decision rationale, every tagged knowledge artifact becomes part of an auditable trail that demonstrates how your organization’s AI systems are informed by genuine expert judgment. This turns a knowledge management initiative into a regulatory asset.
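The five components above imply a concrete shape for what gets stored per elicitation session. The sketch below is purely illustrative (not a vendor schema or the Sandia format): a minimal record that captures reasoning rather than just decisions, supports tagging and search, and keeps a pointer back to the source recording for the audit trail:

```python
# Hypothetical schema for one captured-knowledge artifact. Field names are
# illustrative assumptions; a production system would add access control,
# versioning, and full-text or vector search.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeArtifact:
    expert_id: str                  # HR or anonymized identifier
    role: str                       # e.g. "senior specialty underwriter"
    session_date: date
    scenario: str                   # the prompt the expert reasoned through
    reasoning: str                  # the captured rationale, not just the outcome
    decision: str
    conditions_to_reverse: list = field(default_factory=list)  # when judgment flips
    tags: list = field(default_factory=list)   # line, peril, geography, theme
    source_recording: str = ""      # pointer to transcript/audio for audit

def search(archive, tag):
    """Naive tag lookup, standing in for a real search index."""
    return [a for a in archive if tag in a.tags]

archive = [
    KnowledgeArtifact(
        expert_id="UW-017",
        role="senior specialty underwriter",
        session_date=date(2026, 3, 12),
        scenario="Coastal FL habitational; clean losses; 15-year roof; left admitted market",
        reasoning="Carrier switch plus roof age together signal adverse selection...",
        decision="Decline absent roof replacement",
        conditions_to_reverse=["documented roof replacement", "credible loss-control report"],
        tags=["habitational", "coastal-FL", "adverse-selection"],
        source_recording="transcripts/uw017-2026-03-12.txt",
    ),
]

print(len(search(archive, "coastal-FL")))  # → 1
```

The design choice worth noting is the `conditions_to_reverse` field: recording when an expert’s judgment would flip is what distinguishes captured reasoning from a mere decision log, and it is exactly the material that feeds training scenarios and decision-support tools later.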

The Window Is Closing

The insurance industry has perhaps three to five years before the combined effects of mass retirement and AI transformation make this problem exponentially harder to solve. Every month that passes without a structured knowledge capture program is another month of institutional expertise quietly disappearing.

The carriers who act now will have something their competitors cannot replicate: proprietary, structured datasets of expert judgment built from decades of real-world decision-making. When regulators ask “what did your AI model learn from, and can you prove it?” those carriers will have an answer. When the next hard market cycle reveals which companies actually understand their risks and which were just running algorithms on incomplete data, the difference will show up on the balance sheet.

Other industries have recognized this problem and built formal programs to address it. Nuclear power plants have ISO-certified knowledge management frameworks. National laboratories run structured elicitation programs with retiring scientists. Utilities have assessed thousands of positions for knowledge-loss risk. The insurance industry, which arguably depends on expert judgment more than any of these sectors, is still relying on informal mentorship and hope.

Michael Hafftka published fifty years of artistic judgment as structured data because he understood that the future would be shaped by what machines could learn from human expertise. Insurance carriers have the same opportunity, with far more at stake.

The question isn’t whether this knowledge has value. It’s whether you’re capturing it before the retirement cake and the gift card.


InsuranceIndustry.AI provides independent analysis of artificial intelligence’s impact on the insurance industry. Subscribe to our newsletter for weekly coverage of the trends reshaping insurance.


Sources:

  • APQC/eGain, “Navigating the Great Retirement with KM & AI” (2025 survey of 1,000 KM professionals): apqc.org
  • The Jacobson Group/Aon, Q1 2026 Semi-Annual U.S. Insurance Labor Market Study: jacobsononline.com
  • Insurance Journal, “Study: AI May Be Tempering Insurer Hiring” (March 2026): insurancejournal.com
  • InsuranceNewsNet, “Insurance Industry Retirement Exodus Creating a Talent Gap” (July 2025), including Convex/Suzanne Bray behavioral science findings: insurancenewsnet.com
  • Accenture Insurance Blog, “How to Address the Urgent Insurance Workforce Gap with Technology”: insuranceblog.accenture.com
  • Bureau of Labor Statistics, Occupational Outlook Handbook: Insurance Underwriters; Claims Adjusters, Appraisers, Examiners, and Investigators: bls.gov
  • MIT research on tacit vs. explicit knowledge distribution (80/20 framework), as cited in eGain, “Capturing Tacit Knowledge from the Great Retirement Cohort Using GenAI”: egain.com
  • Deloitte study on Fortune 500 knowledge attrition costs ($31.5 billion annually), as cited in eGain
  • Panopto, “Let Your Experts Retire — Not Their Expertise” (Workplace Knowledge and Productivity Report findings): panopto.com
  • Milliman, “U.S. Casualty Insurance 2024 Financial Results: What Kind of Market Are We In?”: milliman.com
  • Risk & Insurance, “Commercial Auto Insurance Losses Hit $4.9 Billion” (September 2025): riskandinsurance.com
  • Sandia National Laboratories, Knowledge Management Program for Nuclear Energy Fuel Cycle: sandia.gov
  • Sandia National Laboratories, “Retaining Knowledge of Nuclear Waste Management” (2021): newsreleases.sandia.gov
  • International Atomic Energy Agency, IAEA-TECDOC-1510, “Knowledge Management for Nuclear Industry Operating Organizations”: iaea.org
  • FCW, “Knowledge Retention Helps Agencies Retain Employees’ Expertise” (Tennessee Valley Authority program): fcw.com
  • NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (December 2023): naic.org
  • Fenwick, “Tracking the Evolution of AI Insurance Regulation” (December 2025): fenwick.com
  • Baker Tilly, “The Regulatory Implications of AI and ML for the Insurance Industry” (December 2025): bakertilly.com
  • Jonusgroup, “Insurance Talent: Why 1.4 Million Retirements Will Reshape the Industry”: jonusgroup.com
  • ZipRecruiter and Salary.com, Senior Insurance Underwriter and Senior Commercial Underwriter salary data (2025-2026)
  • University of Vermont, Knowledge Transfer and Succession Planning Certificate: learn.uvm.edu
  • Michael Hafftka, Catalog Raisonné Dataset: huggingface.co/datasets/Hafftka/michael-hafftka-catalog-raisonne

AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the source material, readers should verify all details, pricing, and features directly with the respective AI tool providers before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.