The ‘Lab’ and ‘Crowd’ Framework: Why Your AI Transformation Needs Both Technical Experts AND Everyone Else

Executive Summary

Insurance companies pursuing AI transformation face a persistent organizational challenge. They either create isolated teams of data scientists and AI specialists who struggle to gain traction, or they attempt to democratize AI across the entire workforce without the technical expertise to build effective solutions. Both approaches consistently fail to deliver meaningful business impact.

Recent research from leading consulting firms reveals a more successful organizational structure for AI implementation: the “Lab” and “Crowd” framework. This approach recognizes that AI transformation requires two distinct but interconnected groups. The “Lab” consists of technical specialists who rebuild processes and develop AI capabilities. The “Crowd” comprises employees across the organization who need sufficient AI fluency to integrate these tools into daily workflows.

Insurance companies that successfully implement both components are achieving genuine transformation. Those that focus on only one group remain stuck in pilot programs or see AI tools abandoned after initial deployment.

Key Takeaways:

  • Successful AI transformation requires both specialized technical teams (the Lab) and broad workforce AI literacy (the Crowd)
  • The Lab rebuilds processes and creates tools; the Crowd integrates them into business operations
  • Neither group alone delivers sustainable impact—transformation happens at their intersection
  • Insurance faces unique challenges including retiring expertise, talent shortages, and the need to maintain regulatory compliance
  • Companies must invest equally in building technical capabilities and developing organizational AI fluency

The Failed Patterns: Why One Without the Other Doesn’t Work

Insurance executives have witnessed two common failure patterns in AI implementation. Understanding why these approaches fall short illuminates why the Lab and Crowd framework offers a better path forward.

Pattern One: The Isolated Innovation Team

A carrier creates an “AI Center of Excellence” staffed with data scientists, machine learning engineers, and technology specialists. This team develops impressive AI models, generates pilot project success stories, and presents compelling demos to leadership. Yet six months later, operational impact remains marginal.

Why? The technical team operates in isolation from the business units whose work they’re trying to transform. Underwriters don’t trust the AI recommendations because they don’t understand how the models work. Claims adjusters resist using new tools that seem to add complexity rather than reduce it. Agency partners ignore AI-powered prospecting systems because no one explained how to integrate them into existing workflows.

According to BCG’s 2024 Global Build for the Future study, only 7% of insurance companies successfully scale AI enterprise-wide. The other 93% stay stuck in this pattern—technically proficient pilots that never achieve operational integration.

Pattern Two: Democratization Without Foundation

Alternatively, some companies pursue the opposite approach. Inspired by the consumer success of ChatGPT, they provide generative AI access to the entire workforce, conduct basic training sessions, and encourage experimentation. Everyone has access to AI tools, but few know what to do with them.

Underwriters generate claim summaries that contain hallucinated policy details. Customer service representatives use AI to draft responses that violate regulatory requirements. Actuaries build models on low-quality data without understanding the limitations. The result is scattered, inconsistent, and often problematic applications of AI technology.

As one senior partner at Bain noted, “Many CEOs view scaling AI as a hiring challenge. In reality, the talent to drive transformation often already exists within the business.” But that talent needs structure, technical support, and the right tools to be effective.

The Lab and Crowd Framework Explained

The more successful approach, documented in Bain’s Southeast Asia CEO guide to AI transformation and observed across insurance companies achieving genuine impact, involves deliberately building and connecting two distinct organizational capabilities.

The Lab: Technical Excellence and Process Reinvention

The Lab consists of specialized talent—data scientists, AI engineers, business analysts, and “translators” who bridge technical and business domains. This group has several critical responsibilities:

Rebuilding Core Processes: Rather than automating existing workflows, the Lab fundamentally redesigns how work gets done. When Aviva transformed its claims operation with over 80 AI models, the Lab didn’t just speed up existing processes. They reconceived the entire claims journey, creating what McKinsey calls a “double helix” approach where claims seamlessly switch between digital and human interaction based on complexity and customer needs.

Creating Production-Ready Tools: The Lab develops AI systems that actually work in insurance environments—handling real-world data quality issues, integrating with legacy systems, maintaining regulatory compliance, and providing explainable outputs. These aren’t research projects or proofs of concept. They’re production systems that can run at enterprise scale.

Establishing Technical Guardrails: For newer employees and those transitioning into AI-augmented roles, the Lab creates what Ken Bunn, VP of Claims at Builders Mutual Insurance, describes as “guardrails for our newer folks.” AI models trained on millions of historical claims help new adjusters assess severity and properly reserve specific claim types, essentially providing junior staff with access to decades of institutional knowledge.

Providing Continuous Improvement: The Lab monitors AI system performance, identifies model drift, addresses bias issues, and continuously refines outputs based on feedback from business users. At insurance companies experiencing success with AI, this isn’t a one-time deployment but an ongoing partnership between technical specialists and operational teams.
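One concrete way a Lab might watch for the model drift mentioned above is the population stability index (PSI), which compares the score distribution a model produced at deployment against what it produces on current data. A minimal sketch, with synthetic scores and a common rule-of-thumb threshold (the data and cutoff here are illustrative, not from any carrier):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a current one.

    Scores are bucketed into equal-width bins over the combined range;
    PSI sums (actual% - expected%) * ln(actual% / expected%) per bin.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each bucket's share slightly to avoid log(0).
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Claim-severity scores at deployment vs. scores observed this month.
baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8]
current  = [0.4, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]

value = psi(baseline, current)
# Common rule of thumb: PSI above ~0.25 signals drift worth investigating.
print(f"PSI = {value:.2f}", "— investigate" if value > 0.25 else "— stable")
```

In practice a Lab would run a check like this on a schedule for every production model, alerting the owning team when the threshold trips rather than waiting for business users to notice degraded outputs.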

The Crowd: Organizational AI Fluency

The Crowd comprises employees throughout the organization—underwriters, claims adjusters, agents, actuaries, customer service representatives, and operational staff. These are the people who actually use AI tools in their daily work. The Crowd needs different capabilities than the Lab:

Understanding AI Capabilities and Limitations: Employees need sufficient knowledge to recognize when AI can help, what it can’t do, and how to interpret its outputs. This doesn’t mean everyone becomes a data scientist, but they should understand concepts like confidence scores, training data limitations, and the possibility of biased outputs.

Prompt Engineering and Tool Usage: For generative AI applications, the Crowd needs practical skills in crafting effective prompts, validating outputs, and integrating AI-generated content into their workflows. An underwriter using an AI assistant needs to know how to frame questions to get useful risk assessments, not just how to access the tool.

Feedback and Improvement Loops: The Crowd provides the Lab with critical information about what works, what doesn’t, and where improvements are needed. This requires creating structured channels for feedback and making employees comfortable reporting issues without fear that they’ll be blamed for technology problems.

Change Management and Adoption: Perhaps most importantly, the Crowd includes champions who help other employees adopt new AI-augmented workflows. Peer-to-peer learning often proves more effective than formal training sessions, particularly in insurance where trust in expertise matters significantly.
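The prompt-framing skill described above can be made concrete with a reusable template that forces the user to supply context, constraints, and a desired output shape, rather than asking a vague question. A minimal sketch—the template wording and field names are hypothetical, not any vendor's format:

```python
RISK_ASSESSMENT_PROMPT = """\
You are assisting a commercial property underwriter.

Account: {account}
Occupancy: {occupancy}
TIV: {tiv}
Loss history (5 yr): {losses}

Task: List the top 3 risk factors for this account, each with a one-line
rationale, then state whether it fits appetite. If any required detail is
missing, ask for it instead of guessing. Do not invent policy terms.
"""

def build_prompt(account, occupancy, tiv, losses):
    """Fill the template. A vague prompt like 'is this risk OK?' tends to
    produce vague answers; pinning down context and output shape does not."""
    return RISK_ASSESSMENT_PROMPT.format(
        account=account, occupancy=occupancy, tiv=tiv, losses=losses
    )

print(build_prompt(
    account="Midtown Logistics LLC",
    occupancy="Warehouse, plastics storage",
    tiv="$12.4M",
    losses="2 claims, $380K total",
))
```

Templates like this are also a natural Lab–Crowd handoff point: the Lab maintains and versions the templates, while the Crowd learns when to use each one and how to judge the responses.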

Why Insurance Needs This Framework Now

Several industry-specific factors make the Lab and Crowd framework particularly urgent for insurance companies in 2025.

The Retirement Brain Drain

The U.S. Bureau of Labor Statistics projects that U.S. insurers could lose 400,000 workers through attrition by 2026, with 50% of the current insurance workforce retiring within 15 years. This “retirement brain drain” represents not just headcount loss but the departure of deep, nuanced knowledge developed over decades.

The Lab and Crowd framework addresses this challenge by using AI to capture and transfer institutional knowledge. As one industry observer noted, AI “enables new adjusters to benefit from models that have learned from millions of past claims,” allowing junior staff to access expertise that would have retired with departing employees.

However, this knowledge transfer only works when both the Lab and Crowd function effectively. The Lab must build systems that accurately encode expert judgment without perpetuating historical biases. The Crowd must trust and effectively use these systems while developing their own expertise. Neither can succeed alone.

The Talent Attraction Challenge

Millennials and Generation Z show limited interest in insurance careers, with 60% of people aged 14-22 perceiving the industry as “boring.” Yet these generations enter the workforce with high expectations for modern technology and career development opportunities.

Insurance companies using the Lab and Crowd framework can offer a more compelling value proposition to younger talent. Rather than entry-level positions involving repetitive manual data entry, new hires work with AI systems that handle routine tasks while they focus on developing judgment, client relationships, and strategic thinking.

Risk & Insurance magazine documented this transformation at several carriers. Underwriting assistants who once spent days on manual data entry now use intelligent document processing for extraction and validation. This frees them to shadow underwriters, participate in upskilling programs, and explore roles in analytics and claims. Many quickly progress into underwriting roles, aided by the time and capacity AI creates.

This upward mobility only happens when both the Lab creates tools that genuinely reduce tedious work AND the Crowd has pathways to develop higher-value skills.

The Complexity of Insurance Operations

Insurance involves intricate workflows spanning multiple systems, regulatory requirements, and decision points. Unlike industries where AI can deliver value through standalone applications, insurance typically requires coordinated changes across interconnected processes.

As Bain research notes in their analysis of successful AI transformations: “The software development lifecycle includes more than 40 discrete use cases. With less than half of developer time spent ‘hands on keyboard,’ copilots alone are insufficient. Meaningful productivity gains require coordinated changes across design, testing, code review, and planning.”

The same principle applies to insurance operations. Transforming claims processing isn’t just about automating damage assessment—it involves coordinating first notice of loss, assignment, investigation, estimation, approval, payment, and closure. Each step has different requirements, involves different stakeholders, and connects to different systems.

The Lab provides the technical architecture and coordination to rebuild these complex workflows. The Crowd provides the domain expertise and operational reality checks to ensure redesigned processes actually work in practice.

Building Your Lab: What It Takes

For insurance executives preparing to establish or strengthen their Lab capabilities, several elements prove critical to success.

The Right Mix of Talent

Successful Labs aren’t just collections of data scientists. They combine multiple specialties:

Data Scientists and ML Engineers: Obviously essential for building AI models, but they need insurance domain context to create genuinely useful systems rather than technically impressive but operationally irrelevant solutions.

Business Analysts and Translators: Perhaps the most critical role. These individuals understand both technical possibilities and business requirements. They translate between the Lab and the Crowd, ensuring technical solutions address real operational needs. At Aviva, translators were “involved at every step to ensure each new iteration and improvement in the 80+ AI models the analytics team built precisely reflected the needs of the claims teams.”

Insurance Domain Experts: Senior underwriters, claims professionals, or actuaries who bring deep knowledge of how insurance actually works. Their expertise prevents the Lab from building technically sound solutions that violate regulatory requirements, ignore market realities, or fail to account for edge cases that matter in insurance.

Change Management Specialists: Technical excellence means nothing if new tools aren’t adopted. The Lab needs people who understand organizational psychology, can design effective training programs, and can work with the Crowd to smooth implementation.

Proper Resourcing and Authority

Many insurance companies create AI teams but don’t give them the resources or authority to drive real change. Successful Labs need:

Access to Data: Not just historical claims data, but complete access to all relevant internal and external data sources. Data silos kill AI initiatives faster than any other single factor.

Authority to Redesign Processes: The Lab can’t just build tools to automate existing workflows. They need permission and support to fundamentally rethink how work gets done. This requires C-suite backing when the Lab’s recommendations challenge established practices.

Adequate Budget and Tools: Building production-ready AI systems for insurance requires significant investment in infrastructure, third-party data, model development tools, and computing resources. Under-resourced Labs produce proofs-of-concept that never scale.

Time Horizons That Match Reality: AI transformation takes years, not quarters. Labs need protection from pressure to show immediate ROI, particularly in the early phases when they’re establishing foundational capabilities.

Integration Points With Business Units

The Lab cannot operate in isolation from operational teams. Successful organizational structures create formal integration mechanisms:

Embedded Lab Members: Rather than all Lab members working in a separate location, some embed directly with business units. An AI engineer might work within the underwriting department, attending their meetings and understanding their daily challenges firsthand.

Regular Feedback Cycles: Structured processes for business units to provide input on what’s working, what isn’t, and what they need next. This isn’t annual surveys—it’s ongoing collaboration built into development sprints.

Joint Success Metrics: The Lab and business units should share accountability for business outcomes, not just technical metrics. If the claims department isn’t achieving its loss ratio targets, that’s the Lab’s problem too, not just a claims issue.

Developing Your Crowd: More Than Training Sessions

Building organizational AI fluency across the Crowd proves more challenging than creating the Lab. It requires sustained effort, multiple approaches, and genuine cultural change.

Tiered Learning Paths

Not everyone needs the same level of AI knowledge. Leading insurance companies are developing tiered learning programs:

AI Literacy for Everyone: Basic understanding of what AI is, how it’s being used at the company, and how to identify when AI might help with a task. This is mandatory for all employees and includes critical topics like data security, bias awareness, and knowing when to escalate issues.

Functional AI Fluency: Deeper training for specific roles on the AI tools they’ll actually use. Underwriters learn prompt engineering for AI underwriting assistants. Claims adjusters learn how to validate AI damage assessments. This is role-specific and hands-on, not theoretical.

Advanced Practitioners: Employees in each business unit who develop enough expertise to serve as local AI champions, provide peer support, and give feedback to the Lab. These individuals bridge the gap between pure technical specialists and daily users.

Learning in the Flow of Work

Traditional training programs—pulling employees off the job for classroom sessions—rarely deliver lasting behavior change. More effective approaches integrate learning into daily work:

AI-Powered Mentorship: Systems that provide just-in-time guidance when employees perform specific tasks. When a new claims adjuster encounters an unusual case, the AI can surface similar historical claims, suggest investigation approaches, and highlight potential red flags based on millions of previous examples.

Progressive Assistance: As Swiss Re describes in their upskilling approach, employees don’t become AI experts overnight. Initial implementations provide significant AI assistance and guidance. As users gain confidence and competence, the level of assistance gradually decreases while responsibility increases.

Communities of Practice: Creating forums—both digital and physical—where employees share tips, discuss challenges, and learn from each other’s experiences with AI tools. This peer-to-peer learning often proves more valuable than formal instruction.

Addressing the Trust Gap

Insurance professionals often express skepticism about AI recommendations, particularly in underwriting and claims where judgment and expertise matter. Building trust requires:

Explainability: The Crowd needs to understand not just what the AI recommends but why. “The model says decline this risk” isn’t sufficient. “The model identifies three red flags based on similar risks that resulted in loss ratios above 150%” provides actionable insight that builds confidence.

Human-in-the-Loop Design: Especially in early implementations, AI should augment human decision-making rather than replace it. Aviva’s “double helix” approach lets straightforward cases flow through automated processing while complex or sensitive cases default to human handling. This builds trust by demonstrating that AI isn’t trying to eliminate human judgment.

Track Records and Validation: Showing the Crowd evidence that AI recommendations actually work. If the AI underwriting assistant suggests declining certain risks, track those decisions and share results showing that AI-declined risks did indeed have adverse loss experience. Empirical validation overcomes skepticism more effectively than any amount of explanation about model sophistication.
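The explainability point above—“three red flags” rather than a bare decline—often comes down to translating per-feature score contributions into ranked, plain-language reason codes. A minimal sketch; the feature names, contribution values, and wording are all hypothetical:

```python
# Illustrative reason-code mapping; not from any specific carrier's model.
REASON_TEXT = {
    "roof_age":         "Roof age exceeds typical replacement cycle",
    "prior_losses":     "Loss frequency above peer book average",
    "protection_class": "Fire protection class weaker than appetite guideline",
    "occupancy":        "Occupancy associated with elevated severity",
}

def explain(contributions, top_n=3):
    """Turn per-feature score contributions (e.g. from a linear model or a
    SHAP-style attribution) into ranked, plain-language reason codes."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [
        f"{i}. {REASON_TEXT.get(name, name)} (contribution {value:+.2f})"
        for i, (name, value) in enumerate(ranked[:top_n], start=1)
    ]

# Contributions pushing one submission's score toward "decline".
scores = {"roof_age": 0.42, "prior_losses": 0.31,
          "protection_class": 0.12, "occupancy": 0.05}

for line in explain(scores):
    print(line)
```

The design choice that matters is keeping the reason text in business language maintained jointly with underwriters, so the output reads as an underwriting rationale rather than a model diagnostic.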

The Critical Intersection: Where Lab Meets Crowd

The real transformation doesn’t happen in the Lab or among the Crowd independently. It happens at their intersection—when technical capabilities and organizational adoption combine to fundamentally change how work gets done.

Case Study: Aviva’s Claims Transformation

Aviva’s claims operation transformation illustrates the Lab and Crowd framework in action. The company assembled a team of more than 50 data scientists, engineers, business leaders, change professionals, and translators. This Lab worked directly with Aviva’s claims teams, deploying translators to bridge between technologists and end users.

The Lab built over 80 AI models, but they didn’t dictate how claims professionals should use them. Instead, translators were “involved at every step to ensure each new iteration and improvement precisely reflected the needs of the claims teams.” Each new AI tool had to prove its worth—and Aviva wasn’t afraid to revert to non-digital means when that was preferable.

Simultaneously, Aviva invested heavily in developing their Crowd’s capabilities. Claims professionals learned to work with AI tools, understand their outputs, and provide feedback on what worked and what didn’t. The result wasn’t just implementing AI—it was creating a new operating model where AI and human expertise complemented each other.

The outcomes speak to the power of getting both the Lab and Crowd right: 23 days faster liability assessment for complex cases, 30% improvement in routing accuracy, 65% reduction in customer complaints, and £60 million in savings. More importantly, these gains are sustainable because both the technical infrastructure and organizational capabilities support ongoing improvement.

The Translator Role: Making the Connection

The most successful insurance AI implementations feature individuals who serve as translators between the Lab and the Crowd. These people understand enough about AI technology to communicate with data scientists, and enough about insurance operations to identify meaningful business applications.

At insurance companies, translators might be former underwriters who developed an interest in analytics, IT professionals who’ve worked closely with business units, or new hires specifically recruited to bridge this gap. Regardless of background, they share critical characteristics:

Bilingual Fluency: They can discuss model performance metrics with data scientists, then turn around and explain business implications to claims managers without getting lost in either conversation.

Problem Reframing: When a business unit requests “AI to speed up claims processing,” translators help unpack what that really means—which parts of claims processing create bottlenecks, what constraints exist, and what specific outcomes matter most.

Reality Testing: They push back on both sides. They tell the Lab when a proposed solution ignores operational reality. They tell the Crowd when their resistance to change is based on unfounded concerns rather than legitimate issues.

Organizations attempting AI transformation without sufficient translator capacity consistently struggle. The Lab builds technically impressive but operationally irrelevant tools. The Crowd resists adoption because they don’t understand the value proposition. Progress stalls.

Common Pitfalls and How to Avoid Them

Insurance companies implementing the Lab and Crowd framework encounter predictable challenges. Recognizing these pitfalls helps executives steer around them.

Pitfall #1: Treating AI as a Technology Initiative

When AI transformation reports to the CIO or exists purely as a technology initiative, it typically fails. As Bain research emphasizes, successful AI transformation must be business-led, with C-suite ownership and clear accountability for business outcomes.

The Lab should not be buried in IT. It needs to be a strategic initiative with direct access to business leadership. Similarly, developing the Crowd’s AI fluency isn’t an HR training issue—it’s a business transformation priority that requires executive attention and resources.

Pitfall #2: Underinvesting in the Crowd

Many insurance companies spend millions building AI capabilities (the Lab) but allocate minimal resources to developing organizational AI fluency (the Crowd). This imbalance dooms most implementations.

According to BCG research, insurers “underinvest in adoption, change management, and workforce upskilling.” The ratio of investment should be much closer to 50/50 between technical capabilities and organizational development than the 80/20 or 90/10 split commonly observed.

Pitfall #3: Moving Too Fast or Too Slow

Some insurance companies pursue aggressive AI transformation timelines, expecting enterprise-wide implementation in 12-18 months. Others move so cautiously that they’re still running pilots three years later while competitors pull ahead.

The right pace varies by organization, but successful approaches share common characteristics. They start with clearly defined, high-value use cases that can demonstrate results in 6-9 months. They run multiple initiatives in parallel rather than pursuing strictly sequential implementation. They scale aggressively once a capability proves successful rather than immediately moving to new use cases.

Pitfall #4: Ignoring Regulatory and Ethical Considerations

Insurance faces unique regulatory requirements around AI. Models must be explainable for rate filings. Decision-making can’t exhibit prohibited discrimination. Data usage must comply with privacy regulations. Customer-facing AI must meet specific disclosure requirements.

Both the Lab and the Crowd need strong grounding in these requirements. The Lab must build compliance into systems from the start, not treat it as an add-on. The Crowd needs training on regulatory constraints so they understand why certain AI applications aren’t permitted even if they’d be technically feasible.

Pitfall #5: Neglecting the Human Element

AI transformation creates anxiety. Will my job be eliminated? Will I be able to learn these new skills? Will I become obsolete if I don’t embrace AI quickly enough?

Successful implementation requires explicitly addressing these concerns. Leadership must clearly communicate which roles AI will genuinely eliminate (honest transparency matters), which roles will evolve significantly, and how the company will support employees through transitions. Companies that treat this as a “soft” issue that doesn’t require attention consistently see high resistance and limited adoption.

Measuring Success: Metrics for Lab and Crowd

How do you know if your Lab and Crowd framework is actually working? Leading insurance companies track different metrics for each component while also measuring the intersection.

Lab Metrics: Technical Excellence

Model Performance: Accuracy, precision, recall, and other relevant technical measures for each AI system. These metrics must meet industry standards and show improvement over time.

Time to Deployment: How long from concept to production-ready system? Successful Labs show steadily decreasing deployment timelines as they mature.

System Reliability: Uptime, error rates, and quality of outputs in production environments. AI systems that require constant manual intervention haven’t achieved genuine automation.

Integration Success: Ability to connect AI capabilities with existing systems and workflows without breaking critical operations.
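The precision and recall named above reduce to simple ratios over a confusion matrix. A minimal sketch for, say, a fraud-flagging model—the counts are made up for illustration:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP): of the claims we flagged, how many were truly
    fraudulent. Recall = TP/(TP+FN): of the truly fraudulent claims, how many
    we flagged. The trade-off between the two is what a Lab tracks over time."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical month of fraud-model output:
# 80 true hits, 20 false alarms, 40 missed cases.
p, r = precision_recall(tp=80, fp=20, fn=40)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.67
```

Which metric to weight more is a business call the Crowd should help make: low precision wastes adjusters' time on false alarms, while low recall lets fraudulent claims through.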

Crowd Metrics: Organizational Adoption

Active Usage Rates: Percentage of target users actually using AI tools regularly, not just having access to them. Low adoption rates signal problems with usefulness, training, or change management.

Proficiency Development: Measured through assessments, the Crowd should show increasing competence over time. Are employees who use AI tools becoming more effective, or are they just clicking buttons without understanding?

Feedback Quality: The Crowd should be providing increasingly sophisticated and valuable feedback to the Lab. Early feedback might be “this doesn’t work.” Mature feedback identifies specific scenarios, suggests improvements, and helps prioritize development efforts.

Reduced Resistance: Over time, resistance to new AI capabilities should decrease as trust builds and benefits become evident. Persistent high resistance suggests fundamental problems with the Lab-Crowd connection.

Intersection Metrics: Business Impact

Ultimately, success appears in business results, not technical or adoption metrics:

Process Efficiency Gains: But not just productivity—look for genuine business impact. Has claims processing improved loss ratios, not just reduced processing time?

Revenue Impact: Can you measure premium growth, retention improvements, or new market entry enabled by AI capabilities?

Competitive Position: Are you gaining ground on competitors? Winning accounts you previously couldn’t compete for? Operating in markets that weren’t previously viable?

Sustainable Improvement: Are gains one-time achievements or do they continue to improve over time as both Lab and Crowd capabilities mature?

Building Your Implementation Roadmap

For insurance executives ready to implement or strengthen their Lab and Crowd framework, a phased approach reduces risk while building momentum.

Phase 1: Assess Current State (Months 1-2)

Inventory Existing Capabilities: What technical expertise exists in your organization? What AI literacy does the Crowd currently possess? Where are the gaps?

Identify Quick Wins: What high-value, relatively low-complexity use cases could demonstrate early success? Claims triage? Application processing? Customer service automation?

Secure Executive Sponsorship: This cannot be delegated to middle management. C-suite commitment and accountability are essential for success.

Phase 2: Build Foundations (Months 3-6)

Establish the Core Lab: Hire or develop key roles, particularly translators who bridge technical and business domains. Don’t try to build the full team immediately—start with a core group and expand based on demand.

Launch Crowd Literacy Programs: Begin baseline AI literacy training for all employees while developing deeper functional training for roles that will use specific AI capabilities.

Create Governance Structures: Establish how AI initiatives will be prioritized, funded, resourced, and measured. Define clear accountability for business outcomes.

Phase 3: Pilot and Learn (Months 6-12)

Implement Initial Use Cases: Launch 2-3 targeted AI applications with close Lab-Crowd collaboration. Expect iteration and refinement based on real-world usage.

Develop Feedback Loops: Create structured mechanisms for the Crowd to provide input to the Lab. Make adjustments based on what you learn.

Measure and Share Results: Document both successes and failures. Be transparent about what’s working and what isn’t. This builds credibility and helps refine the approach.

Phase 4: Scale and Expand (Months 12-24)

Expand Successful Applications: Once capabilities prove effective, scale aggressively to other business units, regions, or product lines.

Build the Pipeline: The Lab should be working on the next generation of capabilities while supporting current implementations. The Crowd should be identifying new opportunities based on their experience with initial applications.

Institutionalize Practices: Move from special initiatives to standard operating procedures. AI literacy becomes part of onboarding. Lab-Crowd collaboration becomes how work gets done, not a special project.

The Path Forward

Insurance companies pursuing AI transformation face a critical strategic choice. They can continue pursuing failed approaches—isolated technical teams or undirected democratization—hoping this time will be different. Or they can deliberately build both Lab capabilities and Crowd AI fluency while creating the connections between them that enable genuine transformation.

The evidence strongly supports the latter path. Companies successfully scaling AI share common characteristics. They invest in specialized technical teams while simultaneously developing organizational AI literacy. They create translator roles that bridge the two groups. They measure success through business outcomes rather than technical metrics or training completion rates.

Most importantly, they recognize that AI transformation is fundamentally an organizational challenge that happens to involve technology, not a technology challenge that requires some change management on the side.

For insurance executives, this means approaching AI investment decisions differently. The question isn’t just “What AI capabilities do we need to build?” It’s also “How do we develop the organizational capacity to effectively use those capabilities?” Budget allocation, resource assignments, and success metrics should reflect both dimensions equally.

The Lab and Crowd framework provides a structure for making this happen. It’s not a silver bullet—implementation will be challenging and require sustained executive commitment over multiple years. But it offers a proven path to achieving AI transformation that actually delivers business value rather than creating collections of unused AI tools and demoralized technical teams.

As the insurance industry continues its rapid AI evolution, the companies getting both the Lab and Crowd right will pull steadily ahead of competitors still treating AI as purely a technology initiative. The time to begin building both capabilities is now.


Action Items for Insurance Executives

  1. Conduct a Lab and Crowd Assessment: Evaluate your current organizational structure against the framework. Do you have both specialized technical teams AND broad organizational AI literacy, or have you invested disproportionately in one dimension?

  2. Identify Your Translators: Who in your organization can effectively bridge between technical teams and business operations? Explicitly designate and empower these individuals rather than hoping the connection happens organically.

  3. Rebalance Investment: Review your AI budget allocation. If 80%+ goes to technology and technical teams with minimal investment in organizational AI literacy development, rebalance toward 50/50.

  4. Establish Integration Points: Create formal mechanisms for Lab and Crowd collaboration—not just occasional meetings but ongoing working relationships built into project structures.

  5. Measure Both Dimensions: Develop metrics that track both technical capabilities and organizational adoption, with ultimate accountability focused on business outcomes achieved at their intersection.

  6. Commit to the Journey: Recognize that building effective Lab and Crowd capabilities takes years, not quarters. Secure C-suite commitment to sustained investment even when immediate results are limited.


Sources

Bain & Company. “The Southeast Asia CEO’s Guide to AI Transformation.” November 2025. https://www.bain.com/insights/the-southeast-asia-ceos-guide-to-ai-transformation/

BCG. “Insurance Leads in AI Adoption. Now It’s Time to Scale.” September 2025. https://www.bcg.com/publications/2025/insurance-leads-ai-adoption-now-time-to-scale

BCG. “To Win with AI, Insurers Must Go Beyond the Algorithm.” October 2025. https://www.bcg.com/publications/2025/to-win-with-ai-insurers-must-go-beyond-algorithm

McKinsey Digital. “Aviva: Rewiring the Insurance Claims Journey with AI.” https://www.mckinsey.com/capabilities/mckinsey-digital/how-we-help-clients/rewired-in-action/aviva-rewiring-the-insurance-claims-journey-with-ai

Insurance Thought Leadership. “Leveraging AI to Upskill Employees.” April 2023. https://www.insurancethoughtleadership.com/ai-machine-learning/leveraging-ai-upskill-employees

Workday. “How AI Can Help Solve the Insurance Industry’s Talent Crisis.” January 2024. https://blog.workday.com/en-us/2024/how-ai-can-help-solve-insurance-industrys-talent-crisis.html

Swiss Re. “The Power of AI Upskilling in Insurance.” August 2024. https://www.swissre.com/risk-knowledge/advancing-societal-benefits-digitalisation/ai-upskilling-in-insurance.html

Risk & Insurance. “AI Is Reshaping Work, Agility and Growth in Insurance.” August 2025. https://riskandinsurance.com/ai-is-reshaping-work-agility-and-growth-in-insurance/

Eliot Partnership. “How Insurance Leaders Are Tackling Skill Gaps in 2025.” May 2025. https://eliotpartnership.com/news-insights/how-insurance-leaders-are-tackling-skill-gaps-in-2025/

Insurance Thought Leadership. “AI in Insurance: 2025 Predictions.” January 2025. https://www.insurancethoughtleadership.com/ai-machine-learning/ai-insurance-2025-predictions

McKinsey & Company. “Insurance 2030: The Impact of AI on the Future of Insurance.” March 2021. https://www.mckinsey.com/industries/financial-services/our-insights/insurance-2030-the-impact-of-ai-on-the-future-of-insurance

AI Disclaimer: This blog post was created with assistance from artificial intelligence technology. While the content is based on factual information from the cited source material, readers should verify details directly with the original sources before making business decisions. AI-generated content may not reflect the most current information, and individual results may vary. Always conduct your own research and due diligence before relying on information contained on this site.