Artificial Intelligence (“AI”) is commonly associated with progress, growth and technological improvement. It has the potential to transform businesses, but what if something goes wrong? Who will be liable when an AI system causes harm? Should the AI system itself be made liable for its output? Or should we look to the producer of the system, or even the user?
In this article we will touch upon (i) the current AI liability framework in the EU, (ii) the new proposal for an EU regulation on AI, and (iii) the importance of agreements.
1. Current legal framework in the EU
As with many technologies, the contrast between the rapid development of AI systems and the rather slow process of building a legal framework to regulate AI is significant.
Today, there is no specific liability regime in relation to AI in the EU. Therefore, in most EU Member States, the law of tort will regulate questions of AI liability. However, the law of tort is largely non-harmonized, with the exception of the liability of product manufacturers for defective products, as governed by Directive 85/374/EEC on liability for defective products.
While most liability regimes in the EU Member States do ensure basic protection for persons harmed or adversely affected by the operation of AI, the allocation of liability may be inadequate or inefficient.
2. New EU Proposal for a regulation laying down harmonized rules on AI
The EU legislator has acknowledged these deficiencies and has launched a series of initiatives since 2017. As part of this process, the European Commission published a Proposal for a Regulation laying down harmonized rules on AI (the “Artificial Intelligence Act” or “AIA”) on 21 April 2021.
This regulation aims to regulate the use of AI in accordance with the level of risk the AI system presents to fundamental rights and safety. The AIA introduces a sophisticated ‘product safety regime’ constructed around a set of four risk categories:
(i) an “unacceptable risk”, such as live remote biometric identification systems (e.g. facial recognition) used in publicly accessible spaces for law enforcement purposes;
(ii) a “high risk”, such as systems for recruitment or medical purposes;
(iii) a “limited or low risk”, such as chatbots presenting certain risks of manipulation;
(iv) a “minimal or no risk”, such as AI-enabled video games or spam filters.
For each risk level, specific rules apply:
(i) AI systems presenting an “unacceptable risk” are banned from the EU, because they are regarded as violating fundamental rights and may cause serious harm;
(ii) AI systems presenting a “high risk” are subject to strict requirements, such as:
• the establishment of a risk management system consisting of a continuous iterative process run throughout the entire lifecycle of the AI system; and
• the integration of record-keeping capabilities into the AI system.
(iii) AI systems presenting a “limited or low risk” are either subject to transparency requirements such as certain information duties to natural persons interacting with an AI system, or are not regulated at all.
(iv) AI systems presenting a “minimal or no risk” are permitted without restrictions. However, the European Commission encourages stakeholders to set up codes of conduct for such low-risk AI systems.
Consequently, the new proposal introduces an entirely new body of regulatory obligations for providers, users, manufacturers, importers and distributors of AI systems. AI stakeholders will therefore need to set up compliance tools to ensure that their systems meet the legal requirements, somewhat similar to the obligations imposed by the General Data Protection Regulation (GDPR).
The AIA is a first legislative step towards a better AI framework. It pursues the twin objectives of (i) addressing the risks associated with specific AI systems and (ii) promoting the uptake of AI. It is therefore mostly focused on ensuring that AI systems are safe and respect existing law on fundamental rights and Union values. It does not, however, resolve our basic question: who will be liable if anything goes wrong with an AI system?
The answer to this question is expected soon: in its 2021 “Coordinated Plan on AI”, the European Commission announced that it will propose further EU measures adapting the liability framework to the challenges of new technologies, including AI, in the course of 2022.
The expert group appointed by the EC has already provided some interesting guidance on the division of responsibilities:
– A manufacturer of an AI system should be liable for damage caused by defects in its products, even if the defect was caused by changes made to the product under the producer’s control after it was placed on the market.
– An operator of an AI system that carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.
– A service provider ensuring the technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI. This should be taken into account in determining who primarily operates the technology.
– A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor and maintain the technology in use, and should be liable for breach of those duties if at fault. In addition, a person using a technology with a certain degree of autonomy should not be less accountable for ensuing harm than if the harm had been caused by a human auxiliary.
In addition, the EC has already clarified in its report on liability for Artificial Intelligence and other emerging digital technologies that there is currently no need to give AI legal personality in order to allocate liability for the harm it causes.
3. Importance of clear agreements
While the AIA and the preparatory legislative work at EU level are a good starting point for clarifying some of the liability aspects associated with AI systems, certain issues at the civil law level currently remain unanswered and may continue to give rise to disputes between parties collaborating on the development and use of AI systems.
This is where contracts come in to provide answers. As long as there is no clear legal framework or settled case law, it is crucial for parties to AI projects to clearly define their mutual liabilities in an agreement, in order to avoid liability disputes down the line.
The Gevers AI team will be happy to assist you in drafting such agreements and with any questions you may have. Please contact us at email@example.com or reach out directly to one of our specialized attorneys.
1. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29–33.
2. The European Commission also identified the liability of the controller and/or processor for any damage caused by processing to the data subject, as governed by article 82 of the General Data Protection Regulation (GDPR), and the liability of undertakings infringing competition law, governed by Directive 2014/104/EU on liability for infringing competition law.
3. For a complete overview, see A European approach to artificial intelligence | Shaping Europe’s digital future (europa.eu).
4. Coordinated Plan on Artificial Intelligence 2021 Review, p. 33.
5. Expert Group New Technologies Formation, “Report on liability for Artificial Intelligence and other emerging digital technologies”, Publications Office of the European Union, 2019, doi:10.2838/573689, p. 3–4.
6. Expert Group New Technologies Formation, “Report on liability for Artificial Intelligence and other emerging digital technologies”, Publications Office of the European Union, 2019, doi:10.2838/573689.