Is Artificial Intelligence a ‘Product’? Products Liability Implications for AI-Based Products


As the physical products we use become increasingly complex, traditional products liability frameworks may not always provide remedies for the harm that can result from using novel product types.


Danny Tobey, MD, JD
Partner
Global co-chair and chair, Americas AI and Data Analytics practice
DLA Piper

The emergence of artificial intelligence (AI) is likely to have an impact on nearly every facet of daily life, and the products we use every day are no exception. From AI-powered self-driving cars to automated dosing pumps that rely on AI analysis of past usage, products are increasingly incorporating AI. But what happens when the self-driving car fails to avoid a preventable accident? Or when the pump administers an incorrect dose to an individual patient? As the physical products we use become increasingly complex, traditional products liability frameworks may not always provide remedies for the harm that can result from using these novel product types.

The basic premise underlying products liability in the United States is that entities in a product’s distribution chain may be liable when a defective product harms an end user. The framework centers on attributing responsibility to the entities or individuals that make or distribute harmful products, and it has led to three core theories of liability: defective design, manufacturing defects, and inadequate warnings.


Christopher G. Campbell
Partner
Chair, product liability and mass torts practice
Chair, Atlanta office and D&I committee
DLA Piper

Historically, U.S. courts have defined products for purposes of products liability as tangible personal property, following the approach outlined in the Second and Third Restatements of Torts. For this reason, courts have traditionally determined that intangibles like software are not products for purposes of products liability. For example, in a 2022 case in the Western District of Washington, the court held that an online video game was not a “product” giving rise to traditional products liability claims, and in a 2020 case, the Third Circuit held that AI software likewise is not a product.

Over the past few years, however, litigants have attempted to expand the scope of what constitutes a product to encompass intangible technological systems. Some courts have been persuaded by this argument and have become more willing to consider things like software and social media to be products for purposes of products liability claims. For example, in 2014 the California Court of Appeal held that drug distribution software may be considered a product, and that same year a federal court in the Western District of Louisiana held that software used with physical implants may be considered a product.


Kelsey Tavares
Associate
DLA Piper

Taking a somewhat novel approach, in 2023 a federal court in the Northern District of California, ruling on a motion to dismiss in the social media adolescent addiction and personal injury products liability MDL, declined to decide that social media and other intangibles are categorically products. Instead, the court held that whether products liability claims based on such software or platforms may proceed depends on how they function and whether the functionalities that caused the alleged harm are analogous to tangible personal property, in which case they should be considered products. Interestingly, this approach diverged from that taken in the related state court coordinated social media proceeding pending in Los Angeles County Superior Court, in which the court categorically determined that social media platforms “are not ‘products’ for purposes of product liability claims.”

Taken to its logical conclusion, the precedent now being generated on whether various technology platforms qualify as products could apply to artificial intelligence, particularly as manufacturers increasingly integrate AI into consumer-facing products. Unlike traditional software, however, a distinguishing characteristic of AI is its capacity for learning and adaptation. While traditional software simply implements human-designed algorithms, AI can create its own. This brings into play not only issues of input inaccuracies and errors, but also issues like algorithmic bias and discrimination, which may be unique to AI-based products and even less amenable to existing products liability frameworks.

In recognition of the paucity of case law on AI and civil liability, in October 2024 the American Law Institute launched a project to develop Principles of the Law, Civil Liability for Artificial Intelligence.1 The project is intended to analyze common-law AI liability topics and generate principles to help guide legal best practices on the “core problem of physical harms (bodily injury and property damage).” The project’s principles are anticipated to provide much-needed guidance for courts, legislatures, and private actors as to how AI-based products may fit within existing products liability frameworks in the U.S.

One possibility is that the United States will adopt an approach similar to the one recently taken by the European Union. The EU’s Artificial Intelligence Act entered into force in August 2024 and is the first comprehensive legal framework for the use and development of AI. Accompanying that Act is the new 2024 European Union Product Liability Directive2 (PLD), Directive (EU) 2024/2853, which sets forth liability and compensation mechanisms for individuals harmed by software and AI systems.

The PLD specifically recognizes that “products in the digital age can be tangible or intangible.” As such, it considers software, applications, and AI systems to be “products” that may give rise to products liability. Further, in recognition of the increasing complexity of modern “products,” including those incorporating AI, the PLD provides a rebuttable presumption that a product is defective and that it caused a plaintiff’s injuries where two conditions are met: (i) the claimant would face “excessive difficulties” in “proving the defectiveness of the product or the causal link between its defectiveness and the damage, or both,” particularly due to the technical or scientific complexity of the product at issue, and (ii) the claimant demonstrates it is “likely that the product is defective or that there is a causal link between the defectiveness of the product and the damage, or both.”

For life-sustaining medical devices in particular, the PLD establishes that courts may find such products defective “without establishing [their] actual defectiveness, where [they] belong to the same production series as a product already proven to be defective.” Thus, a manufacturer may not only face liability under the EU’s products liability regime for its AI-based medical device because the device is now considered a product, but the device also may be presumed to be defective and to have caused a claimant’s injuries, provided certain conditions are met.

It is unclear whether the United States will take as expansive an approach as the European Union, particularly as to the defect and causation presumptions. What is clear, however, is that the case law on whether AI-based products fit within the bounds of current U.S. products liability regimes is in flux, and as it evolves, additional and novel liability exposure is likely to develop for the various stakeholders involved in AI-based product supply chains.

SOURCES

  1. ALI Launches Principles of the Law, Civil Liability for Artificial Intelligence. The American Law Institute. October 22, 2024. https://www.ali.org/news/articles/ali-launches-principles-law-civil-liability-artificial-intelligence
  2. 2024 European Union Product Liability Directive. European Parliament. March 12, 2024. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0132_EN.html