Generative AI: Emerging Risks and Insurance Market Trends

Q4 2023 Global Insurance Market Insights

This insight is part 03 of 03 in this Collection.

November 16, 2023 | 11 min read


Generative Artificial Intelligence (AI) – a class of AI that can create material such as images, music or text – is already a proven disruptor, and its adoption is growing at an explosive rate.

Key Takeaways
  1. The insurance market's understanding of generative AI-related risk is in a nascent stage.
  2. This developing form of AI will impact many lines of insurance, including Technology E&O/Cyber, Professional Liability, Media Liability and Employment Practices Liability, among others.
  3. AI presents significant opportunities but also introduces new risks. Organizations should work with experts to set policies and establish governance frameworks that align with regulatory requirements and industry standards.

As generative AI changes the way companies do business, it is creating new risks and new causes of loss that impact not only the companies themselves but also their business partners, such as third-party vendors and digital supply chains. Recent events and court cases highlight the developing forms of risk associated with generative AI, including copyright, trademark and patent infringement, discrimination, and defamation. Bearing in mind that there is an important difference in the risks – and risk management approaches – associated with model creation versus model usage, some examples in this emerging risk field include:

  • Data Privacy and Confidential Information

    The training of large language models (LLMs) like ChatGPT and Bard requires the digestion of vast amounts of data, which may – depending on how the model is trained – include sensitive data such as personal data or proprietary client data. Generally, LLMs do not have the ability to “unlearn”, meaning that, if sensitive information is input into these models, it is very difficult to remove or correct this information (an illustrative sketch of one common upstream safeguard follows this list).

    For example, an anonymous group of plaintiffs filed a suit in the United States against a leading AI company and a leading technology firm, alleging that the AI company misappropriated private and personal information belonging to millions of people by using publicly available data from the Internet to develop and train its generative AI tools. The plaintiffs allege that this use of their data constituted theft, misappropriation, and a violation of their privacy and property rights. The complaint also includes claims for violations of the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act (CFAA), various state consumer protection statutes, and a number of common law claims.

  • Unreliable Model Training

    Generative AI is only as good as the information on which it was trained. Data used in AI training may come from sources of questionable provenance or quality, which can lead to inaccurate or otherwise unreliable outputs. Plausible-sounding but entirely fictitious outputs can be generated.

    For example, it was reported that a New York lawyer asked a generative AI tool to write a brief for a dispute his firm was handling. The model invented authoritative-sounding – but non-existent – case law to support the brief. Ultimately, the court sanctioned the lawyer for the error-riddled filing.

  • Unintended AI Actions

    From hiring decisions to healthcare and loan application vetting, AI may reach incorrect conclusions or make flawed decisions and, where human oversight is ineffective, create risk for organizations either directly or through their subcontractors.

    For example, the US Equal Employment Opportunity Commission recently settled a case in which the defendant firm agreed to pay $365,000 to more than 200 job applicants who alleged age discrimination after being rejected by AI hiring software.

  • IP/Confidential Information/Trade Secrets

    As generative AI models bring forward new concepts, ideas and designs, they may have borrowed heavily from other sources without permission, and may infringe patents or embed others’ protected work product in their training.

    For example, a leading visual media company filed a complaint against an AI firm, alleging that the firm illegally used images from the media company’s library to train its own image-generation model, which would compete with the plaintiff’s business. The media company claimed copyright infringement, trademark infringement, trademark dilution and unfair competition, among other assertions. The case, which remains pending as of this writing, seeks damages and an order to destroy the models related to the allegations. The media company has since launched a competing image-generation model.
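
The data privacy exposure described in the first bullet above is often mitigated partly upstream, by scrubbing obvious personal identifiers from text before it enters a training or fine-tuning corpus. The sketch below is a minimal, hypothetical Python illustration of that idea: the regular expressions, placeholder tags and sample record are assumptions for illustration only, not a complete or recommended redaction pipeline.

```python
# Hypothetical pre-training scrub: redact obvious identifiers (emails,
# US-style phone numbers, SSN-like strings) before documents enter a
# fine-tuning corpus. Patterns and tags are illustrative assumptions.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a typed placeholder such as [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    record = "Contact Jane Roe at jane.roe@example.com or (555) 123-4567; SSN 123-45-6789."
    print(redact(record))
    # -> Contact Jane Roe at [EMAIL] or [PHONE]; SSN [SSN].
    # Note: the name "Jane Roe" is untouched; catching names requires
    # named-entity recognition and human review, not just pattern matching.
```

Pattern-based filters of this kind are only one layer of a broader data governance process that also covers consent, provenance and retention.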

Insurance Market Trends

The insurance market's understanding of generative AI-related risk is in a nascent stage. This developing form of AI will impact many lines of insurance, including Technology Errors and Omissions (E&O)/Cyber, Professional Liability, Media Liability and Employment Practices Liability, among others, depending on the AI’s use case. Insurance policies can potentially address artificial intelligence risk through affirmative coverage, through specific exclusions, or by remaining silent, which creates ambiguity.

Insurers are defining their strategies around this rapidly changing risk landscape including:

  • Clarifying coverage intent/addressing "silent AI coverage" through revised policy language related to AI risk.
  • Building out their underwriting requirements, which are not yet very robust. While underwriters are just beginning to ask questions, the process has the potential to become burdensome and prolonged given the many potential applications that could be created and deployed.
  • Developing creative AI products and solutions (e.g., a leading insurer has developed a product that provides a performance guarantee based on an AI risk assessment).
  • Expanding their technology-based talent competencies – either organically or through partnerships and/or acquisitions – to support underwriting and pricing through technical assessments and monitoring.

AI is likely to become the next big driver of earnings volatility for companies across the globe and is expected to become a top 20 risk within the next three years, according to our Global Risk Management Survey.

Managing AI Risk

While the productivity gains of generative AI are easily recognizable, organizations should take great care and conduct regular risk assessments as they embrace this new world. Aon suggests that organizations work with their Aon team as well as technology experts, attorneys and consultants to set policies and establish a governance framework that aligns with regulatory requirements and industry standards. With respect to an organization’s use of AI, some components of that framework may include:

  • Routine audits of your AI models to ensure that algorithms or data sets do not propagate unwanted bias (an illustrative check follows this list).
  • Ensuring an appropriate understanding of copyright ownership of AI-generated materials.
  • Developing and implementing this same framework into a mergers and acquisitions checklist.
  • Mitigating risk through B2B contractual limitations of liability, as well as vendor risk management.
  • Insertion of human control points to validate that the governance model used in the AI’s development aligns with legal and regulatory frameworks.
  • Conducting a legal, claims and insurance review and considering alternative risk transfer mechanisms in the event the insurance market begins to avoid these risks.
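
As a concrete illustration of the first item above – routine audits for unwanted bias – the sketch below applies the widely cited “four-fifths” adverse-impact rule of thumb to a hiring model’s decision log. This is a minimal, hypothetical Python example: the group labels, sample data and 0.8 threshold are assumptions for illustration only, and a real audit would be scoped with counsel and tailored to the applicable legal and regulatory framework.

```python
# Hypothetical adverse-impact check for an AI hiring model's decisions,
# using the "four-fifths" rule of thumb. Data and labels are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """Compute the selection (pass) rate for each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}, rates

if __name__ == "__main__":
    # (group, selected?) pairs, e.g. exported from the hiring tool's decision log.
    sample = ([("under_40", True)] * 45 + [("under_40", False)] * 55
              + [("40_plus", True)] * 20 + [("40_plus", False)] * 80)
    flags, rates = adverse_impact_flags(sample)
    for group, rate in rates.items():
        status = "REVIEW" if flags[group] else "ok"
        print(f"{group}: selection rate {rate:.0%} -> {status}")
```

A flag from a check like this is a prompt for human review and model remediation, not a legal determination.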

Bloomberg Research forecasts the generative AI market will grow to $1.3 trillion over the next 10 years – up from $40 billion in 2022. As firms race to share in that growth, they would do well to stay focused on the potential risks and issues that will arise along the journey.

$1.3T – The generative AI market could grow to $1.3 trillion over the next 10 years, up from $40 billion in 2022, according to Bloomberg Research.

General Disclaimer

This document is not intended to address any specific situation or to provide legal, regulatory, financial, or other advice. While care has been taken in the production of this document, Aon does not warrant, represent or guarantee the accuracy, adequacy, completeness or fitness for any purpose of the document or any part of it and can accept no liability for any loss incurred in any way by any person who may rely on it. Any recipient shall be responsible for the use to which it puts this document. This document has been compiled using information available to us up to its date of publication and is subject to any qualifications made in the document.

Terms of Use

The contents herein may not be reproduced, reused, reprinted or redistributed without the expressed written consent of Aon, unless otherwise authorized by Aon. To use information contained herein, please write to our team.
