AI Insurance might be more important than you think

Two of our board members are veterans in both AI and Insurance.

A topic often under-appreciated by the public, and sometimes by regulators themselves, is the power of the insurance industry to create helpful or harmful de facto regulation.

For the last two years we have been researching how Errors and Omissions or "Cyber Liability" insurance, a kind of liability insurance that most companies working with financial data, governments, and large corporate partners are required to carry, can be written to include or exclude AI uses.

The insurance industry is slow to respond, but the stakes are high. Through its normal operation, the industry must learn to sort mundane from high-risk uses of AI, and decide what transparency and visibility will let it do so accurately rather than superficially. Done poorly, this creates a moral hazard in which mundane actors subsidize careless risk-takers; done well, it gives us a functional, market-driven regulatory intervention that will likely outperform legislative regulators and be less susceptible to regulatory capture.

As a demonstration that this problem remains unsolved, both Microsoft and Google have taken the extraordinary step of indemnifying their own cloud customers against copyright infringement claims arising from AI use, choosing to drive business rather than wait for the insurance industry to create products.

You will hopefully hear more from our board on this topic soon.

In the meantime, here are some papers you may find of interest.

Some are from friends and community members, including faculty and former students of our ML Alignment and Theory Scholars program, to whom we provided a small grant.

Insuring Uninsurable Risks from AI: The State as Insurer of Last Resort
Many experts believe that AI systems will sooner or later pose uninsurable risks, including existential risks. This creates an extreme judgment-proof problem: few if any parties can be held accountable ex post in the event of such a catastrophe. This paper proposes a novel solution: a government-provided, mandatory indemnification program for AI developers. The program uses risk-priced indemnity fees to induce socially optimal levels of care. Risk-estimates are determined by surveying experts, including indemnified developers. The Bayesian Truth Serum mechanism is employed to incent honest and effortful responses. Compared to alternatives, this approach arguably better leverages all private information, and provides a clearer signal to indemnified developers regarding what risks they must mitigate to lower their fees. It’s recommended that collected fees be used to help fund the safety research developers need, employing a fund matching mechanism (Quadratic Financing) to induce an optimal supply of this public good. Under Quadratic Financing, safety research projects would compete for private contributions from developers, signaling how much each is to be supplemented with public funds.
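If you have not run into Quadratic Financing (quadratic funding) before, the fund-matching idea in that abstract is easy to sketch. The snippet below is our own illustration using the standard quadratic funding formula and made-up numbers, not code or figures from the paper: a project's total funding is the square of the sum of the square roots of the private contributions, and public funds make up the difference.

```python
# Illustrative sketch of a Quadratic Financing (quadratic funding) match.
# Uses the standard quadratic funding formula with toy numbers; nothing here
# is taken from the paper itself.

from math import sqrt

def quadratic_funding_match(contributions: list[float]) -> tuple[float, float]:
    """Return (total_funding, public_match) for one safety-research project."""
    total_funding = sum(sqrt(c) for c in contributions) ** 2
    public_match = total_funding - sum(contributions)
    return total_funding, public_match

# Sixteen developers giving 100 each draw a large match; one developer
# giving 1600 draws none, which is how broad support gets amplified.
print(quadratic_funding_match([100.0] * 16))  # (25600.0, 24000.0)
print(quadratic_funding_match([1600.0]))      # (1600.0, 0.0)
```

The point of the mechanism, as the abstract describes, is that projects with broad support across developers attract more public funds, which is the signal the authors want safety-research funding to follow.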

Liability and Insurance for Catastrophic Losses: the Nuclear Power Precedent and Lessons for AI
As AI systems become more autonomous and capable, experts warn of them potentially causing catastrophic losses. Drawing on the successful precedent set by the nuclear power industry, this paper argues that developers of frontier AI models should be assigned limited, strict, and exclusive third party liability for harms resulting from Critical AI Occurrences (CAIOs) - events that cause or easily could have caused catastrophic losses. Mandatory insurance for CAIO liability is recommended to overcome developers’ judgment-proofness, mitigate winner’s curse dynamics, and leverage insurers’ quasi-regulatory abilities. Based on theoretical arguments and observations from the analogous nuclear power context, insurers are expected to engage in a mix of causal risk-modeling, monitoring, lobbying for stricter regulation, and providing loss prevention guidance in the context of insuring against heavy-tail risks from AI. While not a substitute for regulation, clear liability assignment and mandatory insurance can help efficiently allocate resources to risk-modeling and safe design, facilitating future regulatory efforts.

Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence
The capabilities of artificial intelligence (AI) systems have improved markedly over the past decade. This rapid progress has brought greater attention to longs

Contract Design With Safety Inspections
We study the role of regulatory inspections in a contract design problem in which a principal interacts separately with multiple agents. Each agent’s hidden action includes a dimension that determines whether they undertake an extra costly step to adhere to safety protocols. The principal’s objective is to use payments combined with a limited budget for random inspections to incentivize agents towards safety-compliant actions that maximize the principal’s utility. We first focus on the single-agent setting with linear contracts and present an efficient algorithm that characterizes the optimal linear contract, which includes both payment and random inspection. We further investigate how the optimal contract changes as the inspection cost or the cost of adhering to safety protocols vary. Notably, we demonstrate that the agent’s compensation increases if either of these costs escalates. However, while the probability of inspection decreases with rising inspection costs, it demonstrates nonmonotonic behavior as a function of the safety action costs. Lastly, we explore the multi-agent setting, where the principal’s challenge is to determine the best distribution of inspection budgets among all agents. We propose an efficient approach based on dynamic programming to find an approximately optimal allocation of inspection budget across contracts. We also design a random sequential scheme to determine the inspector’s assignments, ensuring each agent is inspected at most once and at the desired probability. Finally, we present a case study illustrating that a mere difference in the cost of inspection across various agents can drive the principal’s decision to forego inspecting a significant fraction of them, concentrating its entire budget on those that are less costly to inspect.
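To give a feel for the incentive logic in that last abstract, here is a deliberately simplified toy model, not the paper's contract design or algorithm. It assumes the agent forfeits their payment if a random inspection catches them skipping the safety step, so compliance requires the inspection probability times the payment to cover the cost of following the safety protocol; the principal then trades higher pay against more frequent (and costly) inspection.

```python
# Toy illustration of the incentive trade-off behind "payment plus random
# inspection" contracts. This is NOT the paper's model; it is a minimal sketch
# under strong assumptions: the agent is paid `wage` but forfeits it if an
# inspection (probability p) catches a skipped safety step, and the principal
# pays `inspection_cost` scaled by the inspection probability.

def min_inspection_probability(wage: float, safety_cost: float) -> float:
    """Smallest p such that following the safety protocol is optimal.

    Shirking saves safety_cost but forfeits the wage with probability p,
    so compliance requires p * wage >= safety_cost.
    """
    assert wage >= safety_cost, "wage too low to ever induce compliance"
    return safety_cost / wage

def principal_cost(wage: float, safety_cost: float, inspection_cost: float) -> float:
    """Principal's expected outlay for inducing compliance at a given wage."""
    p = min_inspection_probability(wage, safety_cost)
    return wage + p * inspection_cost

# A higher wage lets the principal inspect less often, so the cheapest
# compliant contract balances pay against inspection spend.
for wage in (50.0, 100.0, 200.0):
    print(wage, principal_cost(wage, safety_cost=40.0, inspection_cost=300.0))
# -> 50.0 290.0, 100.0 220.0, 200.0 260.0
```

Even in this toy version the cheapest compliant contract balances wages against inspection spending; the paper's actual model is richer (the agent's action has more dimensions, multiple agents share an inspection budget, and the optimal inspection probability behaves nonmonotonically in the safety cost), but the basic tension is the same.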