Businesses want AI. They’re convinced AI is the future — even when the present isn’t quite there yet.

But AI carries risks. A Geneva Association survey in late 2025 shows more than 90% of businesses using AI want insurance against risks from bad AI output. Can insurers provide it?

The trouble with offering insurance against bad output from AI is that generative AI is a machine for getting things wrong. “Hallucinations” are part of how gen-AI works and cannot be completely mitigated. You can only safely use generative AI, and indeed machine learning in general, when errors are trivial and not load-bearing.

Insurers don't want to pick up AI's bill

Air Canada’s website chatbot promised a customer a bereavement travel discount that didn’t exist; Air Canada tried to argue that the bot was a separate entity “responsible for its own actions,” and not literally a function put in place by Air Canada on its own website. The customer filed a tribunal case and won, the tribunal ruling that “it makes no difference whether the information comes from a static page or a chatbot”. 

Solar contractor Wolf River Electric found itself losing customers, then discovered that Google’s AI Overview was claiming the company had been sued by the Minnesota Attorney General for deceptive sales practices, something that never happened; Wolf River is suing Google for defamation. Meta settled a similar case over a defamatory claim in an image generated by their Llama chatbot. Multiple other AI defamation suits are in progress. DPD’s AI chatbot answered with swearwords and disparaged the firm after a customer asked it to.

Insurers do not want to pay the bills for these errors. So insurers such as AIG, Great American and W.R. Berkley are increasingly asking regulators for permission simply not to cover AI in general business insurance, errors and omissions, or cyber insurance. Agents, chatbots, even machine learning. Any of it.

The worry for the insurance industry is the possibility that a major issue in a popular AI model could cause a flood of claims.

According to multiple reports, including from the law firm Hunton Andrews Kurth, the insurer Berkley wanted to exclude “any actual or alleged use, deployment, or development of artificial intelligence by any person or entity” — any content or communications using AI; failure to identify AI content from third parties; deficient policies on AI; or, indeed, business processes that touch AI in any manner at all.

Berkley's reported definition of AI seems written to cover both machine learning and generative AI:

any machine-based system that for explicit or implicit objectives infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, including without limitation, any system that can emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text, and other digital content.

Read expansively, this exclusion could extend to quite a lot of automated systems you might not normally think of as “AI”. You may need to ask your insurer about particular systems and get firm clarification on whether a policy with this exclusion would cover them.

You can, of course, already buy yourself some AI-specific coverage — at a price, with strong conditions.

But some may be looking to cash in

Insurers previously broke out cyberinsurance as a separate policy market — a good move for both the insurers and the insured, given UK cyberinsurance-specific payouts grew to £197 million in 2024, up from £59 million in 2023. AIG expects claims for AI issues will “likely increase over time”.

Armilla, working via Lloyd’s, offers AI coverage that is unlikely to pay out for a single incident. The insurer has to agree that the bot has “performed below initial expectations” — say, the error rate going from 5% to 15% — broadly, over a period of time, Armilla told the FT in May.
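A trigger like that is about sustained degradation, not one bad answer. As a minimal sketch — the threshold figures are the illustrative ones above, and the function and its logic are my assumption, not Armilla’s actual underwriting test:

```python
# Hypothetical sketch of a "performed below initial expectations" trigger:
# the claim only fires if the measured error rate stayed above a degraded
# threshold across the whole monitoring window, not for a single incident.
def coverage_triggered(error_rates, threshold=0.15):
    """True if every error-rate measurement in the window is at or above threshold."""
    return bool(error_rates) and all(r >= threshold for r in error_rates)

# One bad measurement among normal ones doesn't trigger a claim...
print(coverage_triggered([0.06, 0.18, 0.05]))        # False
# ...but a sustained slide from a 5% baseline to ~15%+ does.
print(coverage_triggered([0.16, 0.17, 0.15, 0.18]))  # True
```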

Some insurers add endorsements to cover specific AI issues, according to the FT’s reporting. A $5 million errors and omissions policy might specify a $25,000 sublimit for AI issues. QBE covers possible fines under the EU AI Act — but limited to 2.5% of the total policy limit. 
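Those caps make the real coverage much smaller than the headline policy limit. A quick sketch with the example figures above (the function names are mine, and the loss and fine amounts are hypothetical):

```python
# Illustrates how a fixed AI sublimit and a percentage cap shrink payouts,
# using the article's example figures: a $5m E&O policy, a $25,000 AI
# sublimit, and a fine cover capped at 2.5% of the policy limit.
def ai_sublimit_payout(loss, sublimit):
    """Payout on an AI claim, capped by the fixed sublimit."""
    return min(loss, sublimit)

def fine_cover_payout(policy_limit, fine, cap_rate=0.025):
    """Payout on a regulatory fine, capped at a percentage of the policy limit."""
    return min(fine, policy_limit * cap_rate)

policy_limit = 5_000_000  # $5m errors and omissions policy
ai_sublimit = 25_000      # $25,000 AI sublimit

# A hypothetical $200,000 AI-related loss recovers only the sublimit:
print(ai_sublimit_payout(200_000, ai_sublimit))       # 25000
# A hypothetical $1m EU AI Act fine recovers at most 2.5% of $5m:
print(fine_cover_payout(policy_limit, 1_000_000))     # 125000.0
```

So on those figures, the “$5 million” policy pays out at most $25,000 or $125,000 on the AI claims it does cover.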

Munich Re offers insurance and reinsurance on a broad range of AI errors, both from generative AI and older-style machine learning — though it stressed to the FT that the risks “will be reflected in the premium.”

Some insurers want to put into place stronger safety standards for chatbots, especially ones configured to take actions as AI agents, so as to make them more insurable.

Insure and the guardrails will come?

Munich Re is particularly keen on this approach. Michael von Gablenz of Munich Re told NBC: “When we’re looking at past technologies and their journey, insurance has played a major role in that, and I believe insurance can play the same role for AI.”

A lot of this is hedged with “could”: insurance could incentivise better guardrails from the AI vendors. That carries an implicit “could” of its own: that better guardrails are even possible, given that hallucinations are inherent to how large language models work and are not fixable.

The NCSC recently warned of the inherent danger of prompt injection attacks in generative AI, where there’s “a good chance prompt injection will never be properly mitigated in the same way” as historical vulnerabilities like SQL injection.

But there’s an emerging market for insurers to sell promises of future safety standards — much as AI companies sell promises of future effectiveness and efficiency.

Insurers are increasingly cautious about insuring the AI vendors themselves. OpenAI and Anthropic have conventional business insurance, but both have been sued repeatedly over claims that their AI models’ training violated copyright. Anthropic has already paid out $1.5 billion in one such claim. OpenAI is also being sued for wrongful death after ChatGPT gave a 16-year-old methods of suicide. Google settled a similar lawsuit involving Character.AI last month.

Aon plc told the FT that insurers as a sector “don’t yet have enough capacity for providers” or for “a systemic, correlated, aggregated risk”.

OpenAI has considered self-insuring via a ring-fenced “captive” subsidiary, though this is not in place as yet. Anthropic’s copyright settlement was paid out of funds from its investors.

At this point, AI risk insurance for businesses is a question for legal and accounting. Has there been a full risk assessment? Company executives can no longer say they didn’t know.

Sally forth into the exciting AI future — but be very aware precisely how covered you are.
