The tables below discuss risks that are unique to AI. Risks arising from the combination of AI, data, and security breaches are intentionally excluded because they overlap with perils commonly covered by Cyber Insurance.
Performance Risks
Risks | Description | Examples |
---|---|---|
Risk of errors | An AI model/algorithm might produce false predictions or large over- or under-predictions for a future event. | Tesla’s Full Self-Driving software failed to work in some scenarios, such as complex intersections; Tesla had to recall 300,000 vehicles. [2023] |
Risk of unstable performance | The AI is not designed to deliver stable performance over time and thus disrupts operations. | Zillow failed to update its AI model (its iBuying algorithm) when the market changed; Zillow lost more than 300 million USD. [2021] |
Risk of bias | AI can discriminate against legally protected classes (race, color, gender, etc.). | iTutorGroup used an AI model that rejected applications from older candidates; iTutorGroup settled the case filed by the EEOC for more than 300,000 USD. [2023] <br> Workday Inc.’s AI systems and screening tools allegedly disqualify applicants who are Black, disabled, or over the age of 40 at a disproportionate rate, according to a lawsuit. [2023] |
Risk of opaqueness / black box | In opaque AI models, it is difficult to identify and trace vulnerabilities. | Mount Sinai’s AI detected high-risk patients with high accuracy from X-ray images, but when the AI was used outside Mount Sinai, its accuracy plummeted: the model had not learned clinically relevant information from the images and instead relied on metadata provided by the specific X-ray machines. [2018] |
Risk of spreading hate speech / harmful content | Generative AI models are susceptible to generating hateful content. | Microsoft’s Tay bot posted inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut the service down only 16 hours after launch. [2016] |
Risk of false / defamatory information / misinformation | Generative AI models are susceptible to generating factually wrong content. | ChatGPT allegedly accused an individual of embezzling funds, according to a lawsuit. [2023] <br> ChatGPT allegedly falsely accused a law professor of sexual assault, according to an opinion piece written by the professor. [2023] <br> The National Eating Disorders Association (NEDA) took down its artificial intelligence chatbot, “Tessa”, after reports that the chatbot was providing harmful advice. [2023] |
Control Risks
Risks | Description | Examples |
---|---|---|
Risk of AI misuse | The AI platform fails to put in place the guardrails needed to prevent misuse of its platform/technology. | An increase in AI-generated harmful content such as child sexual abuse material (CSAM). [2023] |
Economic Risks
Risks | Description | Examples |
---|---|---|
Liability risk | Flawed AI models might trigger large losses for business partners or customers, whether the company is a user or a developer of the model. | An automated trading platform allegedly lost 20 million USD, according to a lawsuit filed by an investor. [2017] |
Reputation risk | Flawed AI models cause bad outcomes that harm reputation (both first party and third party). | A factual mistake by Bard (Google’s AI) hurt its reputation; Alphabet’s stock fell by 7.7%, a loss of roughly 100 billion USD in market value. (first party) [2023] <br> Stability AI allegedly generated “bizarre or grotesque” images bearing Getty’s watermark, thereby damaging Getty’s reputation, according to the lawsuits Getty filed against Stability AI. (third party) [2023] |
Copyright violation risk | AI models trained on copyright-protected data, and generative AI models that produce plagiarized content, could expose the company to copyright-infringement claims. | Microsoft, GitHub, and OpenAI are being sued for allegedly violating copyright law by reproducing open-source code using AI. [2023] <br> Stability AI and Midjourney allegedly produced unauthorized derivative works using DeviantArt’s copyright-protected content. [2023] <br> Stability AI allegedly processed millions of copyright-protected images and associated metadata, according to a lawsuit filed by Getty in the UK. [2023] |
Data lineage risk | The data used to train an AI model might have been collected in a way that violates fair use. | The New York Times is considering legal action against OpenAI because the OpenAI-powered Bing search engine allegedly uses NY Times articles to answer users’ questions, reducing traffic to the NY Times website. [2023] |