In February 2024, the Civil Resolution Tribunal of British Columbia, Canada, issued a landmark ruling in AI legal history: Air Canada was required to honor a bereavement discount that its customer service chatbot had promised to passenger Jake Moffatt—even though the discount policy did not actually exist.[1] Air Canada argued that the chatbot was a "separate legal entity" and that its statements did not represent the company's position. The tribunal rejected this argument, ruling that a company is liable for all outputs of AI systems it deploys—regardless of whether those outputs are accurate.
This case may seem straightforward, but it touches on one of the most fundamental legal questions of the AI era: when an AI Agent autonomously makes decisions, interacts with humans, or even causes harm, who should bear legal responsibility? This is no academic abstraction. As Agentic AI moves from the laboratory to commercial deployment, AI systems are autonomously performing increasingly complex tasks: booking flights, managing investment portfolios, drafting legal documents, and even assisting with medical diagnoses. Every autonomous decision carries latent legal risks regarding liability attribution.
Drawing on my past experience conducting technology governance research at Cambridge University and my current role leading Meta Intelligence in deploying AI systems for enterprises, I have come to deeply appreciate that AI liability is not merely a technical legal issue—it is an institutional design challenge involving efficiency, fairness, and innovation incentives.
I. The Triple Dilemma of Traditional Legal Frameworks
To understand the difficulty of the AI liability problem, one must first understand why existing legal frameworks cannot be directly applied.
The dilemma of Agency Law. Traditional agency law governs the legal relationship between a "principal" and an "agent"—the agent acts within the scope of the principal's authorization, and the legal effects of those actions are attributed to the principal.[2] On the surface, AI Agents share structural similarities with human agents: an enterprise (principal) deploys an AI Agent (agent) to perform tasks on its behalf. However, the core premise of agency law is that the agent possesses legal capacity—the ability to understand the legal implications of its actions and bear responsibility for them. AI lacks this capacity. More fundamentally, agency law presupposes that the agent's actions can be reasonably foreseen and controlled by the principal—but for AI systems using deep learning, the specific outputs in particular situations are often unpredictable even to the developers themselves.[3] In the Air Canada case, the airline's defense strategy of claiming the chatbot was a "separate entity" was precisely an attempt to exploit this gap in agency law—if AI is not a qualified agent, should the enterprise still be liable for its actions? The tribunal's answer was unequivocal: yes. But this answer was based more on consumer protection policy considerations than on a theoretical reconstruction of agency law.
The dilemma of Product Liability Law. Traditional product liability law classifies products as tangible goods and determines the manufacturer's strict liability based on "manufacturing defects," "design defects," or "warning defects."[4] AI systems challenge multiple premises of this framework. First, AI is software, not a tangible good—whether the "product" definition in most product liability laws encompasses pure software remains unresolved across jurisdictions. Second, AI systems continuously learn—an AI model that has been fine-tuned or retrained after deployment may exhibit behavioral characteristics significantly different from those at the time of release, making it difficult to pinpoint the moment of "defect." Third, AI behavior is context-dependent—the same model may produce entirely different outputs with different inputs and different contexts, rendering the criteria for "design defect" ambiguous.[5]
The dilemma of Negligence Law. The core elements of a negligence tort are that the defendant breached a "duty of reasonable care," causing foreseeable harm to the plaintiff.[6] In the AI context, each element becomes blurred. What is the standard for the "duty of reasonable care"? Industry best practices? The lowest hallucination rate achievable with current technology? An accuracy rate comparable to human professionals? What does "foreseeability" mean in statistical learning systems? Developers can foresee that a model "will sometimes err," but cannot predict in which specific situation it will make what specific error.[7] And how is "causation" established in complex AI decision chains? When an AI Agent makes a final decision based on the cascaded outputs of multiple sub-modules, which link's "error" is the legal cause of the harm?
II. Law and Economics Analysis: Designing Optimal Liability Rules
Law and Economics provides a normative analytical framework for the institutional design of AI liability. Guido Calabresi, in his classic work The Costs of Accidents, proposed that the goal of an accident liability system should be to "minimize the total social cost of accidents"—including the harm from accidents themselves, the cost of preventing accidents, and the administrative cost of processing accidents.[8]
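Calabresi's objective admits a standard textbook formalization (the notation here is mine, not Calabresi's): let $x$ be the level of precaution, $c(x)$ its cost, $p(x)$ the probability of an accident causing harm $H$, and $A$ the administrative cost of the liability system. The socially optimal care level solves

```latex
x^{*} \;=\; \arg\min_{x} \;\bigl[\, p(x)\,H \;+\; c(x) \;+\; A \,\bigr],
\qquad\text{with first-order condition}\qquad
c'(x^{*}) \;=\; -\,p'(x^{*})\,H .
```

In words: precaution should be increased until its marginal cost equals the marginal reduction in expected harm. The "least-cost avoider" discussed below is simply the party for whom this minimized total is smallest.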
Under this framework, optimal AI liability rules should satisfy three conditions. First, create correct prevention incentives—liability should be allocated to the party best able to prevent harm at the lowest cost (the "least-cost avoider"). In the AI supply chain, the least-cost avoider is typically the AI system's developer and deployer, rather than the end user—because the former possess the technical capability to improve system safety, while the latter generally lack the ability to understand and modify AI behavior.[9]
Second, do not excessively inhibit innovation. If liability rules are overly strict—for example, imposing absolute strict liability for all AI errors—companies may abandon the development and deployment of socially valuable AI applications. Steven Shavell's analysis shows that when the injurer's activity level is variable, strict liability is superior to negligence, because it simultaneously incentivizes the injurer to adopt both optimal precaution levels and optimal activity levels.[10] However, this conclusion must be carefully applied in the AI context—if strict liability causes AI medical diagnostic systems to exit the market due to liability risk, society may lose lives that the technology could otherwise have saved.
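Shavell's activity-level argument can be made concrete with a small numerical sketch. All functional forms and figures below are invented for illustration (they are not Shavell's): a care cost and an expected-harm function per unit of activity, plus a benefit function with diminishing returns. The point the numbers illustrate is structural: a negligent-but-compliant injurer pays nothing for residual harm, so it expands activity beyond the social optimum; a strictly liable injurer internalizes that harm and does not.

```python
# Illustrative sketch (hypothetical numbers) of Shavell's comparison of
# strict liability vs negligence when the injurer's activity level varies.
# Per unit of activity: care cost c(x) = x, expected harm h(x) = 4 / (1 + x).
# Private benefit of activity a: b(a) = 10a - a^2 (diminishing returns).

def care_cost(x):
    return x

def expected_harm(x):
    return 4 / (1 + x)

# Optimal care minimizes per-unit total cost c(x) + h(x); grid search.
cares = [i / 100 for i in range(0, 500)]
x_star = min(cares, key=lambda x: care_cost(x) + expected_harm(x))

def argmax(f):
    acts = [i / 100 for i in range(0, 1001)]
    return max(acts, key=f)

# Strict liability: the injurer bears harm, so it maximizes
# b(a) - a * (c(x*) + h(x*)) and chooses the socially optimal activity level.
a_strict = argmax(lambda a: 10 * a - a ** 2
                  - a * (care_cost(x_star) + expected_harm(x_star)))

# Negligence: an injurer meeting the due-care standard x* pays no damages,
# so it maximizes b(a) - a * c(x*) and its activity level is excessive.
a_neglig = argmax(lambda a: 10 * a - a ** 2 - a * care_cost(x_star))

print(x_star, a_strict, a_neglig)
print(a_strict < a_neglig)  # negligence yields excess activity
```

With these numbers, optimal care is x* = 1; the strictly liable injurer chooses an activity level of 3.5, while the negligent-but-compliant injurer chooses 4.5. The gap between the two is exactly the "optimal activity level" margin on which Shavell's result turns.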
Third, reduce information asymmetry. The internal workings of AI systems are typically opaque to victims—they cannot know the model's training data, architectural choices, or known limitations. This information asymmetry makes it nearly impossible for victims under a traditional negligence regime to prove the developer's "negligence."[11] The EU's AI Liability Directive proposal addresses this problem by introducing a "reversal of burden of proof" and a "right to disclosure of information"—allowing victims to request that AI system providers disclose relevant technical information, and under specific conditions, to presume that causation is established, with the burden shifting to the defendant to rebut.
From a game theory perspective, the design of AI liability rules can be modeled as a "mechanism design" problem—the designer (legislator) needs to devise a set of rules such that, under these rules, the self-interested behavior of all participants (developers, deployers, users) leads to socially optimal outcomes. The optimal mechanism should be "incentive compatible"—meaning that each party, while pursuing its own interests, naturally takes actions that benefit overall social welfare.
III. The EU Model: The AI Liability Directive and the Revised Product Liability Directive
The EU is at the global forefront of institutional construction for AI liability, having put forward two complementary legislative proposals.
The AI Liability Directive (AILD) was proposed in September 2022, aiming to adapt non-contractual civil liability rules to AI systems.[12] Its core innovations include two mechanisms. First, a "rebuttable presumption of causation"—when the plaintiff can demonstrate that the defendant failed to comply with specific duties of care (such as those stipulated by the EU AI Act), and there is damage related to AI output, causation is presumed to be established unless the defendant can rebut it. This substantially lowers the evidentiary burden for victims. Second, a "right to disclosure"—victims may apply to the court to require providers or users of high-risk AI systems to disclose technical evidence related to the damage. This right directly addresses the information asymmetry created by the "black box" nature of AI systems.
The revised Product Liability Directive (PLD) is equally far-reaching. The new PLD, adopted in 2024, explicitly includes "software" in the definition of "product"—encompassing AI systems and AI-driven services.[13] This means that AI software developers will, like traditional manufacturers, bear strict liability for defects in their products (without the need to prove negligence). More importantly, the new PLD expands the definition of "product defect" to include defects arising from continuous learning after the product has been placed on the market—directly addressing the "dynamic evolution" characteristic of AI systems.
The combined effect of these two pieces of legislation is the establishment of a "dual-track system": for high-risk AI systems (as defined by the EU AI Act), strict liability (through the new PLD) combined with reversal of burden of proof (through the AILD); for other AI systems, negligence liability equipped with disclosure mechanisms.[14] This design reflects a risk-tiered governance philosophy—the stringency of liability is proportional to the risk level of the AI system.
IV. Autonomous Driving: The Frontier Battleground of Liability Attribution
Autonomous driving is the most concrete and urgent application scenario for AI liability. The six-level classification of driving automation defined by SAE International (Level 0 through Level 5) is not only a technical standard—it is also a spectrum of liability transfer.[15]
At Level 2 (partial automation, such as Tesla Autopilot), the human driver must continuously supervise system operation—the liable party is clearly the human driver. But as automation increases to Level 3 (conditional automation) and above, humans transition from "drivers" to "passengers," and the center of liability inevitably shifts from individuals to the system's developers and deployers.[16]
Different countries have adopted different legislative strategies. Germany amended its Road Traffic Act (StVG) in 2021, becoming the first country in the world to establish a legal framework for Level 4 autonomous driving. The legislation requires Level 4 vehicles to be equipped with a "technical supervisor" and established a dedicated compensation fund of up to 10 million euros.[17] The United Kingdom passed the Automated Vehicles Act in 2024, introducing the concept of an "Authorised Self-Driving Entity" (ASDE)—the ASDE bears primary civil liability for accidents occurring in self-driving mode, replacing traditional driver liability.[18] The United States presents a fragmented landscape—there is no unified federal legislation, with each state formulating its own rules, and the standards in California, Arizona, and Texas all differ.
Tesla's cases provide a window into AI liability disputes. Between 2023 and 2025, multiple fatal accidents in the United States involving Tesla's Autopilot and "Full Self-Driving" (FSD) triggered extensive litigation. Tesla's defense strategy is that FSD is merely a Level 2 driver assistance system, and the human driver must be ready to take over at all times—therefore, the liability lies with the human. But plaintiffs' attorneys counter that Tesla's marketing language ("Full Self-Driving") created a reasonable expectation that "the system can drive autonomously," constituting a "warning defect" under product liability law.[19] These cases reveal a broader tension: AI companies tend to exaggerate their systems' autonomous capabilities during marketing but emphasize the necessity of human oversight when facing liability—this "double standard of liability deflection" is an issue that AI liability governance must squarely address.
V. AI Liability Insurance: Market Mechanisms for Risk Sharing
The effective operation of a legal liability framework requires the support of the insurance market. If AI developers and deployers face potential liabilities that are uninsurable, they may choose not to develop or deploy socially valuable AI systems—representing a social loss.
AI liability insurance is a rapidly developing emerging market. Between 2024 and 2025, insurance giants such as Lloyd's of London, AXA, and Munich Re have launched dedicated AI liability insurance products.[20] However, actuarial science for AI systems faces unique challenges: traditional insurance actuarial methods rely on extensive historical loss data to estimate risk probabilities and loss distributions—but the rapid evolution of AI technology means that historical data has limited predictive value. Moreover, AI's "tail risks" (low-probability, high-impact events)—such as large-scale automated trading system failures or chain autonomous vehicle accidents—may exceed the underwriting capacity of traditional insurance.
A possible institutional innovation is an "AI Damage Compensation Fund"—similar to environmental pollution damage compensation funds or nuclear accident compensation funds.[21] Participants in the AI supply chain (developers, deployers) would contribute to the fund in proportion, and the fund would be used to compensate for AI-caused damages. This collectivized risk-sharing mechanism can address large-scale damage problems that individual companies cannot bear, while creating correct prevention incentives through differentiated contribution rates (based on the risk level of AI systems). The 10 million euro compensation fund in Germany's autonomous driving legislation can be seen as a pioneering experiment in this direction.
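The differentiated-contribution idea can be sketched numerically. Everything below is hypothetical: the tier names, risk weights, and base rate are invented for illustration and are not drawn from any actual scheme (including the German fund). The design point is that contributions scale with both exposure and risk tier, so pooling does not erode the prevention incentive the way a flat rate would.

```python
# Hypothetical sketch of risk-weighted contributions to a shared
# AI damage compensation fund. Tier names, weights, and the base rate
# are invented for illustration only.

RISK_WEIGHTS = {"minimal": 0.5, "limited": 1.0, "high": 3.0}

def contribution(revenue_eur: float, tier: str, base_rate: float = 0.001) -> float:
    """Contribution = revenue x base rate x risk weight for the tier."""
    return revenue_eur * base_rate * RISK_WEIGHTS[tier]

# On the same revenue, a high-risk deployer pays six times the
# minimal-tier contribution (weight 3.0 vs 0.5).
print(contribution(10_000_000, "high"))
print(contribution(10_000_000, "minimal"))
```

Under these assumed numbers, the high-risk deployer contributes roughly EUR 30,000 per year against roughly EUR 5,000 for the minimal tier; lowering a system's risk classification directly lowers its rate, which is the prevention incentive the text describes.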
VI. Taiwan's Institutional Construction: From the Basic Act to Implementation Rules
Taiwan's Artificial Intelligence Basic Act (2025) established fundamental principles such as "human-centric," "safe and trustworthy," and "fair and transparent," but significant gaps remain in the specific institutional design of AI liability.[22]
Currently, the legal handling of AI-related damages in Taiwan relies primarily on three existing frameworks: Civil Code tort provisions (Article 184 for negligence liability, Article 191-1 for manufacturer liability), the Consumer Protection Act (Article 7 for enterprise operator liability), and sector-specific legislation (such as the Medical Care Act and the Financial Consumer Protection Act).[23] However, as the preceding analysis has revealed, these frameworks all face difficulties in application when confronted with the unique characteristics of AI systems.
I believe Taiwan should develop dedicated AI liability implementation rules under the framework of the Artificial Intelligence Basic Act. My specific recommendations are as follows:
First, adopt a risk-tiered liability model. Reference the EU's dual-track design: apply strict liability or presumed negligence for high-risk AI systems (such as medical diagnostics, credit assessment, and autonomous driving); maintain negligence liability for general AI systems but strengthen evidentiary mechanisms. The risk tiers should be aligned with Taiwan's future AI risk classification framework.
Second, establish evidentiary assistance mechanisms for AI damage claims. Reference the EU AILD's "right to disclosure" design, allowing victims in litigation to require AI system providers to disclose technical information related to the damage—including the sources of training data, known limitations of the model, and the decision logic of outputs (to the extent technically feasible).
Third, explore AI liability insurance and compensation fund systems. For high-risk AI applications (such as autonomous driving), require providers to purchase a minimum level of liability insurance or contribute to a compensation fund. This both safeguards victims' right to seek compensation and provides the AI industry with a predictable risk management framework.
VII. Conclusion: Liability as the Infrastructure of Trust
The construction of an AI liability framework is, at its essence, an answer to a question of social contract: under what conditions are we willing to delegate decision-making authority to autonomous AI systems? The answer is inevitably not "unconditional trust," nor "complete rejection"—but rather "conditional trust, safeguarded by appropriate liability mechanisms."
A well-designed AI liability framework should simultaneously serve three goals: protect victims—ensuring that individuals harmed by AI systems can obtain reasonable compensation; incentivize safety—driving AI developers and deployers to invest appropriate resources in preventing harm; and promote innovation—not inhibiting the development and deployment of socially valuable AI applications through excessive liability risk. Tensions exist among these three goals, and any liability rule represents an attempt to find a dynamic equilibrium among them.
Returning to the lesson of the Air Canada case: companies cannot enjoy the efficiency gains of AI automation on one hand while attempting to attribute AI errors to a "separate system" on the other. The decision to deploy AI brings benefits, and it must also carry corresponding liability—this is a fundamental principle of the market economy, and it does not change simply because the decision-maker has shifted from a human to an algorithm. In an era of rapid expansion of the AI Agent economy, establishing a clear, fair, and enforceable liability framework is not an obstacle to innovation—it is the infrastructure of trust upon which innovation depends.
References
- Moffatt v. Air Canada (2024). Civil Resolution Tribunal, British Columbia. Decision No. 2024 BCCRT 149. canlii.org
- Restatement (Third) of Agency. (2006). American Law Institute.
- Chopra, S. & White, L. F. (2011). A Legal Theory for Autonomous Artificial Agents. University of Michigan Press.
- Restatement (Third) of Torts: Products Liability. (1998). American Law Institute. §§ 1-2.
- Selbst, A. D. (2020). Negligence and AI's Human Users. Boston University Law Review, 100(4), 1315–1376. bu.edu
- Prosser, W. L. (1971). Handbook of the Law of Torts (4th ed.). West Publishing.
- Scherer, M. U. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29(2), 353–400. jolt.law.harvard.edu
- Calabresi, G. (1970). The Costs of Accidents: A Legal and Economic Analysis. Yale University Press.
- Shavell, S. (2004). Foundations of Economic Analysis of Law. Harvard University Press.
- Shavell, S. (1980). Strict Liability versus Negligence. The Journal of Legal Studies, 9(1), 1–25. doi.org
- Buiten, M. C. (2019). Towards Intelligent Regulation of Artificial Intelligence. European Journal of Risk Regulation, 10(1), 41–59. doi.org
- European Commission. (2022). Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). COM(2022) 496 final. eur-lex.europa.eu
- European Parliament and Council. (2024). Directive (EU) 2024/2853 on liability for defective products (Revised Product Liability Directive). eur-lex.europa.eu
- Wendehorst, C. (2023). Liability for AI-Based Products and Services. In The Cambridge Handbook of Responsible Artificial Intelligence. Cambridge University Press.
- SAE International. (2021). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE J3016. sae.org
- Geistfeld, M. A. (2017). A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation. California Law Review, 105(6), 1611–1694. doi.org
- Bundesministerium für Digitales und Verkehr. (2021). Gesetz zum autonomen Fahren (Autonomous Driving Act). BGBl. I S. 3108. bmdv.bund.de
- UK Parliament. (2024). Automated Vehicles Act 2024. legislation.gov.uk
- Reuters. (2025). Tesla faces lawsuits over Autopilot and Full Self-Driving crashes. reuters.com
- Lloyd's of London. (2024). AI Risk: The Insurance Opportunity. lloyds.com
- Vladeck, D. C. (2014). Machines without Principals: Liability Rules and Artificial Intelligence. Washington Law Review, 89(1), 117–150. digitalcommons.law.uw.edu
- Executive Yuan. (2025). Artificial Intelligence Basic Act.
- Laws and Regulations Database. Civil Code, Consumer Protection Act. law.moj.gov.tw