
Artificial Intelligence & Product Liability | McCarter & English, LLP


As federal agencies and states grapple with regulating artificial intelligence (AI) to enhance its safety profile, and as businesses race to adopt AI for myriad purposes, it is important to recognize that a general safety framework already exists in the form of product liability laws. Notably, many industry experts have opined that AI systems are “black boxes” whose decision logic cannot be fully traced, even by their own creators.[1] As manufacturers and sales and distribution entities embrace AI and incorporate it into their products and services, they should establish policies, procedures, and processes designed to limit personal injury and property damage (and the related exposure) caused by dangerous defects in products that incorporate AI. Product liability is a complex area of state law related to, but distinct from, the legal concepts of negligence, breach of warranty, and strict liability in tort (liability without fault). Product liability may be governed by statute, case law, or both. As such, the laws and rules surrounding product liability are not uniform and can vary significantly from one jurisdiction to another. Despite these jurisdictional differences, however, common principles underlie product liability and provide a general roadmap for formulating policies and procedures that can help limit exposure for businesses in the chain of distribution of products that incorporate AI.[2]

The fundamental theory of product liability is that a manufacturer, seller, or other person in a product’s chain of distribution is liable for damages when a “product” is sold to an end user or consumer in a defective, unreasonably dangerous condition that causes physical harm to the person or their property. In the case of AI, depending upon its use context (both intended and unintended), product liability becomes an area requiring careful consideration. Generally, if an AI-incorporating offering is regulated by the applicable law (e.g., is a “product” as defined by statute or case law), there are three (3) bases that may give rise to product liability: manufacturing defects, design defects (which are often difficult to distinguish from manufacturing defects), and warning defects.

Although laws vary by jurisdiction, a manufacturing defect is generally the presence of a dangerous nonconformity that deviates from product specifications, or a dangerous post-manufacturing product modification by a party in the chain of distribution. In some jurisdictions, the manufacturer or seller is “strictly liable” for damages caused by a manufacturing defect, meaning the manufacturer or seller is liable to the end user even if it was not negligent. For AI, mitigating manufacturing defects is intrinsically challenging, especially when incorporating or using third-party AI. AI models and their underlying data sets can be opaque and generally cannot be interrogated at a logical instruction level to ensure that an AI system will do what it is designed to do under all circumstances for which it is designed. Unlike tangible products or traditional software employing Boolean logic, it is not feasible to inspect an AI instantiation against its design specifications because of the system’s complexity, high-dimensional topology, probabilistic algorithms, transformations, and modularized or tokenized data constructs. Instead, model suitability is assessed behaviorally, by the level of accuracy of its output. The inability to inspect for conformity is magnified when general-purpose or deep-learning AI is employed to perform interpretive or perceptive functions that involve near-real-time decision making with immediate effects or consequences in the real world. Unforeseen edge-case conditions or novel situations, the breadth of user adoption, and the type of activity may radically increase the magnitude of a business’s risk exposure. The opacity of these systems also raises the question whether an AI component that misbehaves in an unexpected way should be considered a design defect rather than a manufacturing defect.
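By way of illustration only, the short Python sketch below shows what such behavioral assessment might look like in practice: rather than inspecting the model’s internal logic, the business gates release on the model’s accuracy over a curated set of known cases. The names (EvalCase, acceptance_test) and the 99 percent threshold are hypothetical assumptions, not a standard.

    # Hypothetical sketch: behavioral acceptance testing of an AI component.
    # Because the model's internal logic cannot be inspected line by line,
    # suitability is judged by how it performs on a held-out set of known cases.

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class EvalCase:
        inputs: dict        # the scenario presented to the model
        expected: str       # the outcome a safe, conforming system should produce

    def acceptance_test(predict: Callable[[dict], str],
                        cases: Sequence[EvalCase],
                        required_accuracy: float = 0.99) -> bool:
        """Return True only if the model meets the assumed accuracy threshold."""
        correct = sum(1 for c in cases if predict(c.inputs) == c.expected)
        accuracy = correct / len(cases)
        print(f"accuracy: {accuracy:.3f} over {len(cases)} cases")
        return accuracy >= required_accuracy

    # A release gate might refuse to ship a model version that fails the test:
    # if not acceptance_test(model.predict, evaluation_cases):
    #     raise RuntimeError("Model does not meet behavioral specification; do not release.")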

A design defect is generally a design aspect that makes a product unreasonably dangerous. Even if a product is manufactured to specification, it may nonetheless be defective and unreasonably dangerous because of the way it is used, could be used, operates, or functions. Some courts treat design defects as a species of negligence. Others do not: unlike negligence, where the manufacturer or seller may be exculpated if it did not intend and could not reasonably foresee a use that caused harm, the threshold is instead what the end user or consumer reasonably expected. Design defect liability is a complex and often subjective area, and expert testimony is frequently required to show whether the product could have been designed in a safer way. Given that AI is a relatively new product “component” being developed, marketed, and adopted while we are still discovering its capabilities and limitations, it remains an open question whether, for example, a deep neural network should be considered inherently dangerous as a matter of design: that is, whether its internal behavior (how it operates, as compared with a conventional stepwise transformation process) is sufficiently unknowable and uncontrollable to render it unreasonably or inherently dangerous, or at least unreasonably or inherently dangerous for certain types of use. On the one hand, exhaustive testing coupled with monitoring and adequate safety controls may be sufficient to mitigate black-box deficiencies. On the other, because many AI systems exhibit plasticity (i.e., weightings, transformation functions, and topological relationships between layers can change with additional information or feedback), their adaptivity takes on an amorphous, shape-shifting quality.

Many businesses are likely to employ third-party AI rather than invent their own. Downstream licensees are therefore more likely to adapt AI systems by training them on private data or by changing various model parameters for specific uses. Doing so bears similarity to product modifications made by the distributor of a physically manufactured product: it can expose the modifying party to design or manufacturing defect liability as a sub-tier manufacturer, while providing upstream entities with the defense that the defect originated down the chain. Further, powerful AI systems allow end users to apply AI-powered systems or products in ways unforeseen or unintended by their developers or distributors. Businesses adopting AI systems may therefore find themselves inviting more risk exposure than is apparent on the surface. The ability to control use and instruction becomes another important consideration in product design and leads to the third prong of product liability: failure-to-warn defects.
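As a purely illustrative practice, keeping an auditable record of each downstream modification can help establish where in the chain of distribution a change was introduced. The Python sketch below assumes a simple JSON-lines log and hypothetical field names; it is a sketch of the concept, not a prescribed format.

    # Hypothetical sketch: recording downstream modifications to a licensed AI model.
    # An auditable record of what was changed, when, and with what data can help show
    # whether an alleged defect originated upstream or down the chain.

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_modification(base_model_id: str,
                            changed_parameters: dict,
                            training_data_path: str,
                            log_path: str = "model_modifications.jsonl") -> dict:
        """Append one modification record to a simple JSON-lines audit log."""
        with open(training_data_path, "rb") as f:
            data_fingerprint = hashlib.sha256(f.read()).hexdigest()
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "base_model_id": base_model_id,
            "changed_parameters": changed_parameters,
            "training_data_sha256": data_fingerprint,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return entry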

Failure-to-warn defects arise when a product lacks appropriate instructions or warnings to enable an end user to avoid using the product in an unreasonably dangerous way. Again, because AI can be used in myriad ways, it is challenging to anticipate potential uses and warn users sufficiently. Generally, the more general-purpose and powerful the AI, the greater the likelihood that the system will be adapted or applied in unforeseeable ways. Ensuring proper user instruction and limiting an AI’s use through license terms, functional governors, and exception monitoring all need to be considered. One area that requires particular attention is the role UI/UX plays in failure to warn: the quality, clarity, and conspicuousness of instruction, system-state, action, and confirmation messaging become highly important. This is especially true where human validation is used as a failsafe mechanism in high-risk systems; such a failsafe is only as effective as the UI/UX, the machine-to-human communication, and a human’s ability to take appropriate action without delay, confusion, or mistake. In this regard, “paper” solutions, such as relying on references to online acceptable use policies or user instructions, may not be sufficient in themselves, and holistic system design becomes an important factor in risk mitigation.
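The following is a minimal Python sketch, offered only as an illustration, of two of the controls mentioned above: a functional governor that blocks uses outside a product’s intended scope, and a plainly worded human confirmation step before a high-risk action. The action names, wording, and function names are hypothetical assumptions.

    # Hypothetical sketch: a "functional governor" plus a human-in-the-loop
    # confirmation for high-risk actions.

    from typing import Callable

    ALLOWED_ACTIONS = {"draft_report", "schedule_followup"}   # uses permitted by license/design
    HIGH_RISK_ACTIONS = {"schedule_followup"}                  # actions requiring human sign-off

    def governed_execute(action: str, details: str,
                         confirm: Callable[[str], str] = input) -> str:
        # Functional governor: refuse actions outside the intended, warned-about scope.
        if action not in ALLOWED_ACTIONS:
            return f"BLOCKED: '{action}' is outside this system's intended use."
        # Failsafe: state the system's intent plainly and require an explicit,
        # unambiguous confirmation before acting.
        if action in HIGH_RISK_ACTIONS:
            answer = confirm(f"The system will now: {details}. Type YES to proceed: ")
            if answer.strip() != "YES":
                return "CANCELLED: no explicit confirmation received."
        return f"EXECUTED: {action}"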

Overall, businesses should appreciate that AI-based product liability litigation will undoubtedly be extremely complex due to the black-box nature of these systems. Businesses using AI in their product and service offerings need to develop a thorough risk management framework (RMF) with governance policies, procedures, and processes that protect against the many internal and external risks AI presents. An RMF is a complex, multi-domain endeavor that includes security, data and privacy protection, licensing, insurance, indemnification, regulatory compliance, intellectual property, and a host of other considerations. However, product design, verification and validation testing, controlled in-market testing, monitoring, and remediation serve as the backbone of a sound risk mitigation framework. Product design informed by the principles of product liability law will help businesses limit unexpected exposures.
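As one illustrative element of such in-market monitoring, the Python sketch below logs every AI output and escalates low-confidence results for human review and remediation. The confidence threshold and field names are assumptions chosen for illustration, not a standard.

    # Hypothetical sketch: in-market exception monitoring that records every output
    # and flags low-confidence results for human review and remediation.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai_monitoring")

    CONFIDENCE_FLOOR = 0.80  # assumed threshold below which an output is escalated

    def monitor_output(request_id: str, output_text: str, confidence: float) -> bool:
        """Record the output; return True if it needs human review."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "request_id": request_id,
            "confidence": confidence,
            "output": output_text,
        }
        log.info(json.dumps(record))
        if confidence < CONFIDENCE_FLOOR:
            log.warning("Escalating request %s for human review (confidence %.2f)",
                        request_id, confidence)
            return True
        return False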


[1] See, e.g., Blouin, L., “AI’s Mysterious ‘Black Box’ Problem, Explained,” University of Michigan-Dearborn News (Mar. 6, 2023) (link: https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained). However, Anthropic claims to have made progress decoding the AI black box problem. See Roose, K., “A.I.’s Black Boxes Just Got a Little Less Mysterious,” New York Times (May 21, 2024) (link: https://www.nytimes.com/2024/05/21/technology/ai-language-models-anthropic.html).

[2] See Restatement (Second) of Torts § 402A (1965), Special Liability of Seller of Product for Physical Harm to User or Consumer.
