
Insurers See AI Gains but 44% Report Governance Failures, Grant Thornton Survey Finds

Nearly half of insurance executives say governance failures have directly derailed AI projects — even as the sector deploys AI faster than its compliance infrastructure can keep up.



By Hector Herrera | May 4, 2026 | Vertical: Finance | Type: Data Research


Nearly half of insurance executives say governance or compliance failures have directly caused AI projects to underperform or fail outright — even as the sector accelerates AI deployment across underwriting, claims management, and risk modeling. A May 2026 Grant Thornton survey of insurance industry leaders puts the number at 44%, a figure that should alarm any executive treating AI as a pure efficiency play and governance as a second-order concern.

Insurance is one of the most regulated industries in the economy. That context makes the governance failure rate more significant, not less — these are organizations accustomed to compliance infrastructure, and nearly half of them are still reporting AI governance failures.

What the Survey Found

The Grant Thornton survey captures a sector in an uncomfortable position: deploying AI aggressively enough to generate meaningful efficiency gains, but doing so faster than the governance structures needed to manage it. The key findings reflect a pattern that appears across financial services broadly:

  • 44% of respondents said governance or compliance challenges directly contributed to AI project failure or underperformance
  • Underwriting and claims are the highest-deployment areas — both are also heavily regulated functions with significant consumer protection implications
  • Legal and compliance teams are the bottleneck, struggling to evaluate AI outputs and risk at the pace technology teams are deploying them

The tension is structural. AI development moves in weeks. Insurance regulatory cycles move in years. The gap between deployment speed and governance maturity is where the failures are happening.

Why Insurance Is a High-Stakes Test Case

Insurance AI is not a back-office productivity tool. The models being deployed are making or informing consequential decisions: whether to approve a claim, how to price a policy, which risk factors to weight. Errors in these systems translate directly into consumer harm — denied claims that should be paid, policies priced unfairly against protected classes, or fraud detection systems with discriminatory error rates.

Regulators in New York, California, and Colorado have already begun issuing guidance on AI use in insurance underwriting. The Colorado AI Act, which takes effect June 30, 2026, applies directly to insurance companies using algorithmic systems that affect Colorado residents — and requires documented impact assessments for high-risk AI applications. Insurers that haven't completed those assessments are now operating with significant legal exposure.

The Governance Gap in Practice

The 44% failure figure from Grant Thornton reflects several recurring problems in enterprise AI governance:

Model validation lag. AI models are often deployed before legal and compliance teams have fully assessed their risk profiles. Speed-to-deployment pressure overrides review cycles.

Documentation gaps. Regulators increasingly require explainability — an insurer needs to be able to explain why a claim was denied or a premium was increased. Many AI systems being deployed today cannot produce that explanation in a form that satisfies regulatory requirements.

Third-party model risk. Insurance companies are deploying AI from vendors, not just building internally. The governance frameworks for third-party AI risk are less developed than those for proprietary models, creating exposure that many organizations haven't fully mapped.

What Insurers Need to Do

The insurers getting this right are treating AI governance as product infrastructure, not a compliance checkbox. That means:

  1. Governance starts at procurement, not post-deployment review
  2. Legal and compliance review is integrated into the AI development cycle, not triggered after deployment
  3. Model documentation is maintained throughout the lifecycle, not created retroactively for regulators

The 56% of insurers not reporting governance failures aren't necessarily doing something exotic. They're typically the ones that built the review process into the deployment workflow from the beginning — and accepted the slower deployment pace that comes with it.

What to Watch

Colorado's June 30 deadline is the next forcing function. Watch for enforcement actions from the Colorado Attorney General's office in Q3 2026 as the first test cases under the new law emerge. Those cases will clarify what the governance documentation requirements actually mean in practice — and serve as a warning for insurers who treated the deadline as a suggestion.

Key Takeaways

  • 44% of insurance executives say governance or compliance failures directly caused AI projects to underperform or fail
  • Underwriting and claims are the highest-deployment areas — and both are heavily regulated, consumer-facing functions
  • Legal and compliance teams are the bottleneck, unable to review AI risk at deployment pace
  • Third-party vendor models carry governance exposure that many insurers haven't fully mapped
  • Governance starts at procurement and runs through the full model lifecycle, not post-deployment review


Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.

