Introduction
Artificial Intelligence (AI) development is advancing rapidly, and more and more AI-based applications and products are reaching the market. The development of AI has also led to the creation of smart robots, which have the capability to learn and make independent decisions. These high-tech smart robots are unpredictable, which entails risks, among other things, for third parties. This raises the questions: What rules apply if an AI-based product causes personal injury or property damage? Who can potentially be held responsible for the damage, and can product liability be claimed at all? This article discusses and explains some basic questions and issues related to non-contractual product liability for damage caused by AI products.
A brief digression about AI and smart robots
AI is advanced and powerful software that can be used for many purposes across many different businesses. In simplified terms, AI technology is software that pursues the goals it has been programmed and instructed to achieve by analyzing data and statistics and then taking the measures that the AI system considers best suited to achieving those goals.
For example, an AI system such as the software in an automated vehicle assesses whether an object in front of the car is a human being about to cross the car’s trajectory and then decides whether the vehicle should stop. In simplified terms, machine learning and deep learning mean that an AI system learns on its own and improves its own ability to achieve the goals it has been programmed and instructed to achieve. In this way, the AI system can learn to achieve complex goals without being hindered by humans’ preconceived opinions or by the limits of humans’ ability to process and analyze large amounts of information. Because such systems teach themselves and constantly improve their ability to achieve their goals, the hope is that AI systems will be able to act independently without human supervision and find “new” and “creative” solutions to problems that we humans have not yet been able to solve. These kinds of AI products are called “smart robots”. The European Parliament’s Draft Report on Civil Law Rules on Robotics has proposed defining a smart robot as one whose autonomy is established by its interconnectivity with the environment (potentially through the use of sensors) and its ability to adapt its actions to changes in that environment.
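To make the idea of “learning from data” a little more concrete, the sketch below is a minimal, purely illustrative example in Python (using numpy). The feature values, labels, and the pedestrian-classification framing are invented for this example and are not taken from any real system; it simply shows a tiny model improving its ability to achieve a goal by adjusting its internal parameters against examples, rather than by following rules written out by a programmer.

```python
# Illustrative only: a toy "learning" system that improves its ability to meet
# a goal (classifying an obstacle) by adjusting parameters against examples.
# The feature values and labels are invented for this sketch.
import numpy as np

# Each row: [apparent height, movement speed]; label 1 = pedestrian, 0 = not.
features = np.array([[1.7, 1.4], [1.6, 1.1], [0.4, 0.0], [0.3, 0.1]])
labels = np.array([1, 1, 0, 0])

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for _ in range(1000):                         # repeated exposure to the examples
    scores = features @ weights + bias
    predictions = 1 / (1 + np.exp(-scores))   # predicted probability of "pedestrian"
    error = predictions - labels
    weights -= learning_rate * features.T @ error / len(labels)  # adjust parameters
    bias -= learning_rate * error.mean()

# After training, the system applies what it has "learned" to a new observation.
new_object = np.array([1.65, 1.3])
probability = 1 / (1 + np.exp(-(new_object @ weights + bias)))
print(f"Estimated probability that the object is a pedestrian: {probability:.2f}")
```

Real systems differ enormously in scale and technique, but the underlying pattern, parameters adjusted against data so that the system becomes better at its goal, is the same.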
The technology in smart robots is very complex, and in many cases it can be difficult to understand exactly how a smart robot has made a decision or reached a conclusion. Smart robots are often described as a “black box” because it can be difficult to see and understand how the system works. It can also be difficult to predict how a smart robot will act when exposed to the real world or to scenarios it has not been developed or trained for. One consequence of this is that it can be difficult to predict what damage a smart robot may cause, as well as how such damage may arise. It can also lead to major problems if a smart robot actually causes damage. For example, if a fully automated vehicle hits a human, it can be difficult to investigate how and why the smart robot made a certain decision. These problems, in turn, make the assessment of damages more difficult under the current Product Liability Directive.
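The “black box” point can be illustrated in the simplest possible terms with the sketch below (again purely illustrative and vastly simpler than a real deep-learning system; the network size, the input, and the randomly generated weights are arbitrary stand-ins for the millions of parameters a real system would learn from data). The record of what such a system “knows” is just a collection of numbers, which does not translate into a human-readable explanation of why a particular decision was taken.

```python
# Illustrative only: even a tiny neural network reaches its decisions through
# layers of numeric weights. The weights below are randomly generated as a
# stand-in for parameters a real system would learn from data.
import numpy as np

rng = np.random.default_rng(0)
hidden_weights = rng.normal(size=(2, 8))
output_weights = rng.normal(size=8)

def decide(observation):
    hidden = np.tanh(observation @ hidden_weights)  # intermediate representation
    return float(hidden @ output_weights)           # a single decision score

# The decision is just a number (e.g. "brake" if positive) ...
print(decide(np.array([1.65, 1.3])))
# ... and the system's entire "knowledge" is raw numbers, not reasons.
print(hidden_weights)
```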
There are plenty of examples of AI technology not working as expected, sometimes with few consequences, but sometimes with very serious ones. Examples include Microsoft’s chatbot Tay, Knight Capital’s trading algorithm, Amazon’s recruitment algorithm, the Boeing 737 MAX flight-control software, and reports of racial bias in automated vehicles’ object-detection systems. When it comes to handling damage caused by so-called smart robots, there are as yet no precedents, since no fully automated AI products have been introduced to the market.
About product liability
Product liability law is one of the civil law instruments contributing to the overall goal of protecting victims of damage caused by products (consumers, but not only consumers) in modern industrial societies. Typically, product liability law gives the victim of a defective product the right to claim compensation for the damage suffered under a special regime that complements the general system of tort law.
In the EU legal system, product liability law is harmonized by the Product Liability Directive (PLD), which was conceived as a technology-neutral, horizontal instrument applying to all types of products and which has essentially not been amended since 1985. The PLD introduced a “strict” liability system, whereby the victim does not need to prove fault on the part of the producer. To establish a claim under the PLD, the victim needs to prove the damage, the defect in the product, and the causal link between defect and damage (Art. 4). Unless the producer proves the existence of a circumstance excluding his liability (Art. 7), the producer must compensate the victim pursuant to the terms and conditions of the PLD, without prejudice to other contractual or non-contractual remedies under national law.
AI and product liability – two basic questions
Who can be held liable according to the product liability legislation?
The first question that arises if an AI system causes personal injury or property damage is who can be held responsible for the damage. An AI system is software and therefore lacks legal capacity. In other words, AI systems cannot own property or have rights, debts, or obligations. Therefore, if an AI system causes damage, the system itself cannot be punished or held responsible. However, according to the PLD, “The producer shall be liable for damage caused by a defect in his product.” The directive is also considered to cover software associated with a product, even though the term “product” refers to “movables”.
In principle, the question of who is the “producer” of an AI product may require a thorough analysis of the underlying agreements and legislation. However, when the PLD is applied, a claimant does not need to concern himself with that issue. The injured party is not limited to claiming against the “producer” but may demand compensation from, for example, the person who has presented the product as his own by “putting his name, trade mark or other distinguishing feature on the product”, the person who imported the product “into the Community” and, if the producer cannot be identified, anyone who has “put the product into circulation”. When an individual has purchased the AI product from a country outside the EU, liability issues can become much more difficult.
What is covered by the product liability?
Product liability covers personal injury and damage to property intended for private use, but excludes damage to the product itself. This entails two relevant limitations on liability. First, the “private use” criterion means that commercial vehicles fall outside the product liability system. Second, damage to the product itself is not compensated: the owner of an automated vehicle will, for example, not receive compensation for damage to the vehicle itself. Furthermore, product liability presupposes that the product had a safety defect, meaning that it did not provide the safety a person is entitled to expect, taking into account, among other things, the product information provided by the producer.
However, the PLD exempts the producer from liability where he proves that “the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered” (the so-called development risk defence), although Member States may derogate from this exemption.
Piñera del Olmo comment
In cases where AI systems cause personal injury or property damage, the producer is strictly liable. However, for strict liability to be established, the damage must have been foreseeable; if it was not possible for the manufacturer to foresee the damage, there is no strict liability. This means there is a risk of a liability vacuum. Since an AI system may be fully autonomous or far removed from human decision-making, it will become more difficult to establish proximity and foreseeability. Furthermore, such cases are likely to involve complicated and competing expert evidence on whether an AI system functioned as it should have done.
On a more general note, two conflicting interests stand against each other. On the one hand, the public has an interest in not suffering personal injury or property damage and, where AI systems cause such damage, in being compensated for it. On the other hand, those who develop, train, and use AI systems have an interest in not being held responsible for damage that they could not foresee or did not have the opportunity to prevent. How to balance these two interests is the big question that must be decided in cases where AI systems cause personal injury or property damage. As was the case with the GDPR, Piñera del Olmo sees an opportunity here to set a new regulatory gold standard for the world.
Article by Lovisa Lundin
Disclaimer: The information in this article is not definitive. For up-to-date legal advice relevant to your case, please contact our lawyers.
Piñera del Olmo
Granada del Penedès 10, entlo
08006 Barcelona
Phone: +34 93 514 39 97
Fax: +34 93 127 07 66
Email: rpinera@pineradelolmo.com