Beyond Privacy and Regulation: The Insurance Coverage Questions Raised by Artificial Intelligence

Artificial intelligence has rapidly become embedded in everyday business operations. Companies now deploy AI tools to interact with customers, generate reports, assist employees, and automate decision-making across a wide range of industries.

Much of the legal commentary surrounding artificial intelligence has focused on issues such as data privacy, intellectual property, and regulatory oversight. However, an equally important—and largely underexplored—question is beginning to emerge:

When artificial intelligence allegedly contributes to harm or damage, will insurance coverage respond?

A recent lawsuit involving Gemini, an artificial intelligence chatbot developed by Google, highlights the types of liability and insurance questions courts may soon confront.[1] According to the complaint, interactions with the chatbot allegedly contributed to a user’s psychological deterioration and eventual suicide. While the allegations remain unproven and will ultimately be tested through litigation, the lawsuit raises broader questions regarding how traditional tort principles—and insurance coverage doctrines—apply to emerging AI technologies.

Importantly, the legal issues raised by the Gemini lawsuit are not limited to large technology companies. Although only a small number of organizations develop foundational AI models, millions of businesses are now deploying AI tools in everyday operations—raising new questions about liability and, critically, whether their existing insurance programs will respond when something goes wrong.

As artificial intelligence becomes more widely deployed, courts will likely confront a range of insurance coverage questions that traditional liability policies were not designed to address. Key issues may include:

  • Whether harm or damage allegedly caused by AI constitutes an “occurrence” under CGL policies 
  • Whether suicide or self-inflicted harm breaks the chain of causation 
  • Whether assault, battery, or intentional act exclusions apply 
  • Whether AI systems should be treated as products, professional services, or informational content 
  • How courts should address multi-party AI liability chains involving different insurers 
  • How broadly new generative AI exclusions will be interpreted and applied 

AI Liability Is Moving From Theory to Litigation

For several years, legal scholars and regulators have debated how existing legal frameworks might apply to artificial intelligence. The Gemini lawsuit suggests that these questions are no longer theoretical.

As AI systems become integrated into customer service platforms, workplace tools, and decision-support systems, plaintiffs are increasingly likely to bring claims alleging that automated systems contributed to harm.

Potential theories of liability could include: 

  • Negligent design of AI systems 
  • Negligent deployment of AI tools 
  • Failure to implement safeguards 
  • Negligent reliance on AI-generated outputs 
  • Intellectual property disputes involving AI training data 

While these theories may resemble traditional tort claims, they also raise novel questions about how insurance policies respond to harms allegedly caused by automated technologies. 

AI Liability Scenarios Every Company Should Consider

While the Gemini lawsuit has attracted significant public attention, the liability issues it raises may arise in a wide range of everyday business contexts. Companies deploying AI technologies should consider several potential risk scenarios.

Negligent Deployment of AI Systems

A company deploys an AI chatbot to interact with customers or employees. The system provides inaccurate or harmful information, resulting in damage, injury, or financial loss. Potential claims may include negligent implementation, failure to adopt safeguards, or failure to monitor automated interactions.

Unauthorized Use of Workplace AI Tools

Many employers now authorize the use of AI tools on company devices. In remote work environments, however, those devices may be accessible to other members of the household. An employee’s child or other unauthorized user could interact with AI tools installed on a company laptop, potentially leading to harmful outcomes. Such situations may raise questions involving negligent supervision, workplace technology policies, and responsibility for the use of employer-issued devices.

Reliance on AI-Generated Information

Organizations increasingly rely on AI tools to generate reports, analyze data, and provide operational recommendations. If an AI system produces flawed information that is relied upon in critical decision-making—such as engineering calculations, financial analysis, or safety recommendations—companies may face allegations of negligent reliance on automated systems. In the construction industry, for example, a flawed AI-generated structural calculation or an erroneous project schedule could lead to design defects, project delays, accidents, and resulting bodily injury or property damage claims.

Autonomous AI Agents

Some companies are now developing AI agents capable of performing tasks autonomously, including communicating with customers, processing requests, or making operational decisions. If such systems provide incorrect information or take actions that result in harm, plaintiffs may argue that the company failed to properly supervise or control the system.

Multi-Party AI Liability

As with traditional products liability claims, AI-related lawsuits may involve multiple parties across a technology supply chain. Litigation may therefore include several defendants with different insurance programs, leading to complex coverage disputes across multiple carriers and policy types, as well as claims for contractual indemnity between the parties.

The AI Liability Chain

The Gemini lawsuit also highlights another emerging issue: AI-related claims may involve multiple parties across a technology ecosystem.

In many situations, potential defendants may include:  

  • The developer of the underlying AI model 
  • The platform provider integrating the technology 
  • The company deploying the AI system 
  • The end user interacting with the system 

This structure resembles traditional products liability litigation, where multiple entities in a distribution chain may be named as defendants. Because each entity may carry different insurance policies—including CGL, Technology E&O, professional liability, or cyber coverage—AI-related litigation may trigger complex disputes regarding which insurers must defend or indemnify the claims, as well as priority of coverage disputes among the triggered policies. 

The CGL Policy: Bodily Injury and the “Occurrence” Question

For many companies, the first place to look for insurance coverage is the Commercial General Liability (CGL) policy.

CGL policies typically cover damages because of “bodily injury” or “property damage” caused by an “occurrence,” generally defined as an accident—a concept rooted in fortuity, meaning an unexpected and unintended event.

In cases involving alleged harm linked to AI systems, insurers may argue that the injury did not arise from an accident but from the intentional conduct of the individual interacting with the system.

Policyholders, however, may contend that the relevant perspective is that of the insured. From the standpoint of an insured company deploying AI technology, any alleged harm caused by automated outputs would likely be unexpected and unintended—and therefore may qualify as an occurrence.

Courts have addressed similar issues in cases involving negligent supervision, negligent security, and other situations in which intentional acts by third parties allegedly resulted from negligent conduct by the insured.

Even where ultimate coverage remains disputed, allegations of negligent design or deployment may still trigger the insurer’s duty to defend, which is broader than the duty to indemnify. 

Suicide, Intervening Acts, and Intentional Conduct Exclusions 

Cases involving suicide raise additional legal and insurance questions. 

Insurers may argue that suicide constitutes an intervening or superseding act that breaks the causal chain between the insured’s conduct and the resulting injury. 

However, courts addressing negligence claims in other contexts—such as cases involving bullying, negligent supervision, or failure to protect vulnerable individuals—have sometimes held that suicide does not necessarily defeat liability where the insured’s alleged negligence contributed to the circumstances leading to the harm. 

Insurance coverage disputes may also implicate intentional act or assault and battery exclusions found in many liability policies. Insurers may argue that injuries resulting from intentional acts fall outside policy coverage. Policyholders, however, may contend that the underlying allegations concern negligence by the insured—such as negligent design, deployment, or supervision of AI systems. 

As with the occurrence analysis above, courts often distinguish between intentional conduct by third parties and negligence by the insured when evaluating such exclusions. Whether courts will apply similar reasoning in cases involving AI systems remains an open question. 

Technology E&O Coverage and the Bodily Injury/Property Damage Gap

Many companies also maintain Technology Errors & Omissions (Tech E&O) policies designed to cover liability arising from software failures or negligent technology services.

At first glance, claims involving alleged defects in AI systems might appear to fall squarely within Tech E&O coverage. However, these policies often focus primarily on economic losses—such as financial damages resulting from system outages or software malfunctions. Claims for bodily injury or property damage are generally excluded under such policies.  

This creates the possibility that insurers could dispute whether coverage belongs under the CGL policy or the Tech E&O policy—potentially leaving policyholders caught between competing coverage positions, with neither insurer willing to step forward.

New Exclusions Enter the Market: Generative AI Endorsements Across the Program

Even as policyholders work to establish coverage under existing policy language, a significant shift is already underway in the insurance market that threatens to close the door.

Carriers have begun issuing endorsements that specifically exclude claims arising out of generative artificial intelligence. The endorsements define “generative artificial intelligence” as “a machine-based learning system or model that is trained on data with the ability to create content or responses, including but not limited to text, images, audio, video, or code.” The operative exclusion language is straightforward: this insurance does not apply to claims, damages, or loss arising out of generative artificial intelligence. Such endorsements are now available on primary and excess lines of coverage (see, e.g., CX 34 12 01 26 or 14-E-1145 Ed. 01-2026). 

The practical reach of these exclusions is potentially significant, as they employ the broad “arising out of” standard. Insurers may seek to apply them to claims involving any aspect of AI or large language models, regardless of whether the AI component was central or merely incidental to the underlying harm.

These exclusions are likely to be adopted quickly and broadly—particularly at renewal. Policyholders should not assume that silence in a renewal submission means their existing coverage remains intact. A company that deploys an AI tool, relies on AI-generated outputs, or integrates any AI-powered technology into its operations should carefully review whether new exclusionary language has been added to its insurance program.

This development makes it essential for companies to engage their broker and coverage counsel now—before a claim arises and before renewal language is accepted without scrutiny. 

Conclusion

For companies adopting AI tools, the window to assess and address coverage gaps may be narrowing. Businesses should review their existing insurance programs with their broker to identify potential exposures and engage coverage counsel to evaluate policy language before claims arise. Waiting until renewal—or until a claim is filed—may mean the opportunity to negotiate meaningful protection has already passed.

[1] Gavalas v. Google LLC, No. 5:26‑cv‑01849‑EKL (N.D. Cal. filed Mar. 4, 2026).

*Janeen M. Thomas is Of Counsel at Saxe Doernberger & Vita, P.C., where she focuses on insurance coverage and risk transfer on behalf of corporate policyholders. She can be reached at jthomas@sdvlaw.com*