Unpredictable Patterns #120: The varieties of AI-liability
A CDA 230 for AI, the many possible liability regimes and re-designing legal insurance futures
Dear reader!
Thank you for the comments on the data protection piece! A lot of good questions about the balance between reform, re-interpretation and implementation in practice. I continue to think reform is important, and more than before think that the law should allow for differentiation along technical dimensions. But there is plenty of reasonable disagreement here. This week saw an essay in Svenska Dagbladet marking 75 years of the Turing test, and a policy column at CEPA on the importance of not just protecting, but actively curating, growing and strengthening the public domain. There is also more coming soon on the book that the excellent David Skelton and I are penning - The Future Habit - so stay tuned. This week we discuss AI agents and liability, a tricky subject, but one that will become more and more important as agentic capabilities increase. Enjoy!
TL;DR
Why liability exists: it prices harm (Calabresi/Coase), spreads risk via insurance (~2% of US GDP), and surfaces information through litigation.
What happens without it: safety spending collapses, insurance dries up, losses land on households, regulators resort to blunt bans—slowing innovation.
When to impose it: early, rigid liability can choke immature tech; precedents like Section 230, Price-Anderson, and the Vaccine Fund show that time-limited safe harbours can buy learning time while still compensating victims.
Proposal: consider a conditional, sunset-clause safe harbour for agentic AI, with transparency + baseline insurance, to turn unknown risks into priceable ones and keep pace geopolitically.
Next steps: map the design space with a Zwicky table; explore servant/animal/child analogues and native “liability certificates” that travel with each agent.
Bottom line: legal innovation must sprint alongside technical progress—or Europe and the US will discover whose liability valve was set right only after the fact.
Agents and liability
Agents are becoming ever more capable across a number of dimensions: they can perform longer and more complex tasks, and they can perform them in parallel with other agents as well.
One of the core questions for the agentic AI shift is how we think about liability. The spectrum of views ranges from arguing that no change is needed from existing regimes, since liability law already deals with complex artefacts and systems, to arguing that we need entirely new rules and regimes.
The European Union recently withdrew its AI liability directive, fearing, it seems, that the added legal complexity could handicap Europe's attempts to ensure competitiveness in AI -- but it is far from obvious that this retraction is final (and other product liability rules will matter greatly as well). The US, more litigious than Europe, may well rely on existing tort law, but will need to examine complex causation chains and evolve new case law.
But before we get into the nitty-gritty of who should be liable, we should ask another question. Why do we have liability at all? What is the function that liability law performs in modern economies? Let's assume, for a second, that we had no tort or liability law at all -- what would the consequences be?
Liability law is the price tag that modern societies pin on harmful conduct. By making the injurer—not random victims—bear the expected cost of accidents, the legal system turns dangers into internalised prices and steers behaviour toward the social optimum. Guido Calabresi’s The Costs of Accidents and Ronald Coase’s “The Problem of Social Cost” laid out what is now the canonical formula: invest in precaution up to the point where the marginal cost of prevention equals the marginal reduction in expected harm. Without that background price, private actors would compare safety spending only with their own expected losses, chronically under-investing whenever harms spill onto others.
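To make that canonical condition concrete, here is a minimal formalization in the standard law-and-economics notation (my own restatement, not a quotation of Calabresi or Coase): the injurer chooses a level of care x at cost c(x), and an accident causing harm L occurs with probability p(x), which falls as care rises.

```latex
% Social problem: choose care x to minimise precaution cost plus expected harm.
\min_{x \ge 0} \; c(x) + p(x)\,L
\qquad \Rightarrow \qquad
c'(x^{*}) = -\,p'(x^{*})\,L
% Without liability the injurer weighs only a private loss l < L:
c'(\hat{x}) = -\,p'(\hat{x})\,l \quad \text{with} \quad \hat{x} < x^{*}
```

The second line is the no-liability case: the injurer weighs precaution only against a private loss smaller than L, and so stops short of the socially optimal level of care.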
Liability also smooths risk and shares losses. In practice most tort payouts are financed by insurers; the existence of a predictable liability baseline lets risk be pooled and premiums be set. In 2022, the U.S. tort system channelled roughly $529 billion—about 2 percent of GDP—through this compensation machinery. Abolish liability and these costs would not disappear; they would simply remain on the balance-sheets of injured households or fall on taxpayers.
A third, subtler function is information production. Litigation uncovers internal e-mails, engineering reports and expert testimony that regulators and competitors could never compel ex ante. Each public verdict updates the market’s knowledge of hidden hazards—think asbestos, thalidomide, or the Ford Pinto—and recalibrates incentives far beyond the parties to the suit. Liability proceedings also carry a normative charge: they name wrong-doers and vindicate rights, sustaining the public trust that allows strangers to transact without fear of unredressed harm.
Now, imagine a jurisdiction that repeals tort and product-liability law overnight. What happens?
Safety investment collapses. Producers now weigh safety spending only against the harm _they_ would suffer (reputational loss, warranty costs), not against the full social loss. Victims, realising this, over-invest in self-protection: double inefficiency.
Insurance becomes first-party only and far costlier. Because insurers can no longer recoup payouts from injurers (subrogation disappears), premiums must cover the entire accident burden. High-risk goods—from autonomous drones to synthetic-biology kits—would face razor-thin or non-existent insurance markets.
Aggregate losses land on households. The CDC estimates the economic cost of injury in the United States at $4.2 trillion in 2019, roughly 20 percent of GDP. With no liability channel, that full bill—medical costs, lost earnings, quality-of-life losses—sticks to victims or the public purse.
Markets for innovation shrink. Consumers mistrust novel products when the legal backstop vanishes; investors demand higher risk premia. Consider automobiles: even _with_ liability rules, 2019 U.S. motor-vehicle crashes imposed $340 billion in economic costs. Strip the liability backdrop and adoption of emerging mobility technologies would slow dramatically. Liability law done right, in effect, is innovation policy.
Regulation fills the vacuum, but crudely. To tame externalities, governments would default to command-and-control licences, design mandates and outright bans—tools far blunter than price-like liability. The historical analogue is the pre-workers’-compensation factory era, when riot, boycott and strict policing substituted for an absent civil remedy.
So, in a chart:
This is key, then, because if we discuss agentic AI and liability all of these considerations apply: we should aim for a regulatory regime that achieves these goals - and at the same time advances the technology.
The when of liability
There is another question here as well — and that is when you apply liability. When we discuss emerging technologies, this is especially sensitive, since a heavy liability regime at the very beginning of the introduction of the technology might stymie it severely, and so rob us of the possible benefits. For any technology, then, we need to ask both the question of why we care about liability, and when we believe that liability regimes should be applied.
The thing is that timing matters as much as allocation. In the very early life-cycle of a technology fixed costs are high, knowledge about externalities thin, and market adoption fragile. Dropping a full-blown strict-liability regime into that context can freeze experimentation: investors price the worst-case accident into capital costs, insurers either refuse cover or charge prohibitive premiums, and users shy away from unproven products. Law-and-economics scholars sometimes talk about a “liability valve” that regulators can tighten only once the industry’s learning curve has flattened and risk becomes estimable.
Safe-harbour precedents show why the valve can be useful.
Section 230 gave online platforms a broad exemption from publisher liability for user content; while controversial, it is still regarded as a key factor in the Internet’s early growth, lowering entry barriers for startups and allowing experimentation with recommendation algorithms and ad-supported models.
The Price-Anderson Act capped nuclear-incident liability and wrapped the sector in a federal indemnity pool; without it, private insurers would not have underwritten fission power in the 1950s, and the U.S. civilian reactor fleet would likely be far smaller.
The National Vaccine Injury Compensation Program shifts most tort claims to a no-fault fund; Congress adopted it in 1986 after ordinary liability had pushed several manufacturers out of the childhood-vaccine market.
These carve-outs were not meant to be permanent blank cheques. Price-Anderson comes up for periodic renewal, and Congress regularly revisits Section 230; in each case lawmakers re-examine whether the initial justification—capital scarcity, data paucity, systemic benefit—still outweighs the moral-hazard cost. And this is key, as the shadow any safe harbour casts is quite long, and waiting too long to normalise liability may skew the social outcomes the other way. This is why renewal and sunset mechanisms matter in regimes like these.
So, let’s ask the obvious question: should Congress put in place a moratorium on agentic AI liability, where developers and producers of the technology are exempt from liability? Or are there other liability exemptions that should be tried out in the AI space? An overwhelming policy majority rejects this idea completely and is more concerned with putting various duties and obligations on the frontier model companies - but that doesn’t mean that we can’t ask the question.
Do we need safe harbours for AI agents? What is the strongest case we can make here? Maybe something like this:
A time-limited, conditional safe harbour for AI agents is an economic accelerant: it suppresses the uninsurable tail-risk that otherwise inflates the cost of capital for early entrants, yet it still leaves room to impose full liability once empirical loss data exist. When uncertainty about frequency and severity of harm is high, investors and insurers treat the entire distribution as worst-case; startups cannot absorb those premiums and incumbents face powerful disincentives to release genuinely novel capabilities. A narrowly drawn immunity—contingent on basic transparency (model cards, incident logs) and baseline insurance to guarantee victim compensation—lets experimentation produce the very statistics policymakers need to calibrate an efficient long-run liability rule. The safe harbour thus converts a “black-box” externality into measurable, priceable risk without freezing the technology in its infancy.
Geopolitically, such a shield is a strategic lever in the race to set global AI norms. Jurisdictions that combine predictable, limited liability with auditable guard-rails will become magnets for talent, training compute, and venture finance, thereby shaping de-facto technical standards before slower actors can legislate. If rival blocs adopt permissive regimes while a cautious jurisdiction imposes strict liability from day one, the latter still imports external harms via transborder digital services but forfeits the jobs, intellectual property, and diplomatic weight that come from hosting the core innovation stack. A phased safe harbour aligns domestic welfare with international competitiveness: it gives local firms room to scale responsibly, supplies regulators with real-world safety telemetry, and provides a credible threat that full liability will attach the moment empirical thresholds—user base, incident count, or compute scale—indicate the technology has matured past its “unknown unknowns.”
Whether we agree with this idea or not, it is worth considering. My own preference would be to at least explore it, especially within the geopolitical logic we are now facing. And it would be quite the strategic surprise if Europe did this!
The varieties of liability
When we discuss AI liability we benefit from doing some initial morphological analysis in order to figure out what the variety of possible AI-liability rules looks like. Morphological analysis, or Zwicky analysis, simply means decomposing a phenomenon into its constituent dimensions, and then exploring variation within each of them. The goal is to describe the space of possible solutions for a problem, and not to settle too early on any specific idea. We have already asked the unpopular question of whether we need a CDA 230 or similar safe harbour for AI - a question that I have no doubt most policymakers would answer negatively - so we can now move on to explore possible liability regimes.
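To make the method concrete, here is a minimal sketch of a Zwicky-style enumeration in Python. The dimensions and options are my own illustrative guesses for an AI-liability regime, not the ones in the table below; the point is only that the morphological box is the Cartesian product of the options along each dimension.

```python
from itertools import product

# Illustrative dimensions and options (assumptions for this sketch, not the
# newsletter's actual table): each liability regime is one cell in the box.
dimensions = {
    "liable party": ["developer", "deployer", "user", "agent-held fund"],
    "standard": ["strict liability", "negligence", "no-fault compensation fund"],
    "timing": ["from day one", "after a sunset safe harbour", "capability-triggered"],
    "insurance": ["mandatory", "optional", "state-backed pool"],
}

# The morphological box is simply the Cartesian product of all options.
regimes = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]

print(f"{len(regimes)} candidate regimes in the box")   # 4 * 3 * 3 * 3 = 108
print(regimes[0])
```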
Here is a first Zwicky table that lays out the dimensions and possibilities:
Even with this first stab at laying out possible regimes, we see that there is quite a lot of choice in how to think about liability regimes — and this should encourage us to model them even more, and try to figure out if there is a way to map the different regimes across core values that we want to balance in different ways. Arguing in models (if you want to try a simple model, you can play around with this - and improve it!) in this way allows us to think about what we believe are the right targets to hit and discuss whether existing regimes are close or need to be complemented.
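The linked model is not reproduced here, but a toy version of "arguing in models" might look like the sketch below: score each candidate regime against the core values discussed earlier and see how different weightings reorder them. All numbers are placeholders to experiment with, not empirical claims.

```python
# Toy evaluation of liability regimes against core values. Scores (0-1) and
# weights are placeholders for experimentation, not empirical estimates.
VALUES = ["deterrence", "victim compensation", "information production", "innovation"]

def score(regime: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of a regime's value scores."""
    return sum(weights[v] * regime[v] for v in VALUES)

regimes = {
    "strict liability from day one": {
        "deterrence": 0.9, "victim compensation": 0.8,
        "information production": 0.7, "innovation": 0.3,
    },
    "sunset safe harbour + mandatory insurance": {
        "deterrence": 0.5, "victim compensation": 0.6,
        "information production": 0.6, "innovation": 0.9,
    },
}

weights = {"deterrence": 0.3, "victim compensation": 0.3,
           "information production": 0.2, "innovation": 0.2}

for name, regime in regimes.items():
    print(name, round(score(regime, weights), 2))
```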
If we map this to current US / EU regimes we get something like this in a first approximation.
Clearly, Europe is in a very different place than the US — even if we consider the emerging state laws. This provides us with a natural experiment as we enter the agentic economy; the development of agents in Europe and the US will show whether liability regimes matter or not — and whether the balance each has struck is better or worse. My bet is that Europe needs to rebalance, and I think it could start with things like re-examining the exorbitant fines and the significant uncertainty that attaches to them; it seems a small thing, but the threat of a percentage of global turnover, with no clear indication of when it would be imposed, casts a wide shadow of uncertainty over possible outcomes.
Liability, insurance and innovation
For agents, we may also want to explore entirely new frameworks. The idea that an agent is a product, or even a service, seems simplistic.
Classical vicarious-liability doctrines—master/servant, parent/child, owner/animal—already embody the intuition that law can impute wrongful conduct to someone who empowered (or failed to restrain) a semi-autonomous actor. Borrowing from those doctrines yields at least three workable archetypes:
Servant model – The principal is strictly liable for acts “within the scope of delegation,” but gains a negligence defence for rogue behaviour that could not reasonably be anticipated.
Animal-owner model – A rebuttable presumption: if the agent shows a known dangerous propensity (e.g., jailbreak bias), the keeper is strictly liable; otherwise only upon negligence.
Child model – Graduated liability that tightens as the agent’s capability matures, mirroring how parental responsibility converts to the child’s own responsibility at adulthood.
Each archetype encourages different investments—oversight tools, capability gating, or post-market monitoring—and courts are already culturally comfortable applying them.
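As a rough illustration of how differently the three archetypes allocate risk, here is a toy encoding of the decision rules sketched above. The predicates and tiers are my own simplifications of the doctrines, not statements of settled law.

```python
from enum import Enum

class Standard(Enum):
    STRICT = "strict liability"
    NEGLIGENCE = "negligence"
    NONE = "no principal liability"

def servant_model(within_scope_of_delegation: bool, reasonably_foreseeable: bool) -> Standard:
    # Strict liability within the scope of delegation; outside it, a negligence
    # standard with a defence for genuinely unforeseeable rogue behaviour.
    if within_scope_of_delegation:
        return Standard.STRICT
    return Standard.NEGLIGENCE if reasonably_foreseeable else Standard.NONE

def animal_owner_model(known_dangerous_propensity: bool) -> Standard:
    # Rebuttable presumption: a known dangerous propensity (e.g. jailbreak bias)
    # triggers strict liability; otherwise liability only upon negligence.
    return Standard.STRICT if known_dangerous_propensity else Standard.NEGLIGENCE

def child_model(capability_tier: int, max_tier: int = 5) -> float:
    # Graduated liability: the principal's share of responsibility shrinks as
    # the agent's capability matures, mirroring parental responsibility.
    return max(0.0, 1.0 - capability_tier / max_tier)

print(servant_model(True, False))        # Standard.STRICT
print(animal_owner_model(False))         # Standard.NEGLIGENCE
print(round(child_model(4), 2))          # 0.2 -- most liability has shifted to the agent side
```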
We should also explore embedding insurance as a native feature, not an afterthought. Every agent instance could carry a cryptographically signed liability certificate pointing to (i) the active insurance policy, (ii) the underwriter’s rating, and (iii) a hashed incident ledger. This flips conventional insurance from an opaque balance-sheet item into a public trust signal: higher-risk or poorly audited agents would pay steeper premia, instantly visible to integrators and end-users. Insurers, in turn, obtain continuous telemetry from the agent, letting them re-price coverage dynamically—much like usage-based car insurance. Regulators could even mandate a minimum cover indexed to the agent’s autonomy tier, ensuring a floor for victim compensation without smothering entry - and creating a standard as well.
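What could such a certificate look like in practice? A minimal sketch of the data it might carry is below. The schema, field names and hashing choice are assumptions for illustration; a real scheme would attach an actual cryptographic signature from the insurer (for instance Ed25519) rather than a placeholder field.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class LiabilityCertificate:
    agent_id: str
    policy_id: str               # (i) the active insurance policy
    underwriter_rating: str      # (ii) the underwriter's rating, e.g. "A+"
    incident_ledger_hash: str    # (iii) hash of the agent's incident ledger
    autonomy_tier: int           # used to index the mandated minimum cover
    minimum_cover_eur: int
    signature: str = ""          # placeholder -- a real insurer signature goes here

    def payload(self) -> bytes:
        """Canonical bytes the insurer would sign."""
        data = asdict(self)
        data.pop("signature")
        return json.dumps(data, sort_keys=True).encode()

def hash_ledger(incidents: list[dict]) -> str:
    """Commit to the incident ledger so integrators can verify it without reading it."""
    return hashlib.sha256(json.dumps(incidents, sort_keys=True).encode()).hexdigest()

ledger = [{"date": "2025-03-01", "severity": "low", "resolved": True}]
cert = LiabilityCertificate(
    agent_id="agent-0001",
    policy_id="POL-2025-0001",
    underwriter_rating="A",
    incident_ledger_hash=hash_ledger(ledger),
    autonomy_tier=3,
    minimum_cover_eur=1_000_000,
)
print(cert.payload()[:60], "...")
```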
In addition to this, legislators could design incentives around renewal and robustness. Let’s say we make the certificate renewable every six or twelve months, contingent on (a) fresh red-team reports, (b) incident frequency below a published threshold, and (c) proof of patched vulnerabilities. An agent that loses cover would be unable to transact with APIs or payment networks that require a valid certificate—creating a market-driven kill-switch without blanket bans. Over time, a statistical map of incidents versus premium differentials would guide legislators toward the liability allocation that actually minimises social cost - in real time.
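A toy version of that renewal-and-gating logic is sketched below; the six-month validity window, incident threshold and gating rule are placeholder choices for illustration, not a proposal for specific numbers.

```python
from datetime import date, timedelta

CERT_VALIDITY = timedelta(days=180)   # illustrative six-month renewal window
INCIDENT_THRESHOLD = 0.01             # illustrative incidents-per-task threshold

def certificate_valid(issued: date,
                      fresh_red_team_report: bool,
                      incident_rate: float,
                      open_vulnerabilities: int,
                      today: date) -> bool:
    """Renewal conditions (a)-(c) from the text, plus a hard expiry."""
    return (today - issued <= CERT_VALIDITY            # not expired
            and fresh_red_team_report                  # (a) fresh red-team report
            and incident_rate < INCIDENT_THRESHOLD     # (b) incident frequency below threshold
            and open_vulnerabilities == 0)             # (c) known vulnerabilities patched

def gate_request(cert_ok: bool, request: str) -> str:
    # Market-driven kill-switch: APIs and payment networks refuse agents
    # that cannot present a valid certificate -- no blanket ban required.
    return f"accepted: {request}" if cert_ok else "rejected: no valid liability certificate"

ok = certificate_valid(date(2025, 1, 15), True, 0.002, 0, today=date(2025, 6, 1))
print(gate_request(ok, "initiate_payment"))
```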
There are many other ways of doing this as well, of course - but the key argument here is that we can do so much more now, with the technology we have, and we have to be careful not to get stuck in the frozen accidents of earlier legal regimes. We should actively pursue legal innovation on top of the technical, and seek institutional arrangements that are truly novel.
Thank you for reading!
Nicklas