Digital transformation and the technical innovations that accompany it require adapted standards of consumer protection. With regard to Artificial Intelligence (AI), the EU Commission has now presented a proposal for a Directive laying down special liability rules for damage caused by AI systems. The proposal pursues the goal of giving victims easy access to compensation.
As early as spring 2020, the EU Commission used the White Paper on AI to point not only to the manifold possible applications and benefits, but also to the challenges and potential harms of AI. The use of AI can result in a number of fundamental rights violations, such as breaches of data protection rules, discrimination in automated application processes, or damage to health caused by AI-supported medical technology. According to the EU Commission, the law on Artificial Intelligence proposed the following year should also contribute to protecting people's fundamental rights and security, even though, from AK's point of view, there is considerable room for improvement if consumers are to be protected effectively. The objective of the now presented proposal for a Directive on AI liability is to harmonise liability rules for AI systems, in particular to simplify the enforcement of compensation claims by victims.
The proposal’s two core elements
The draft proposes the introduction of two concrete liability rules:
On the one hand, it provides for the introduction of a so-called presumption of causality, which is intended to make it easier for victims to meet the burden of proof. The claimant (often a consumer) must still provide evidence that an AI user has breached an obligation, for example a due diligence obligation under the law on Artificial Intelligence, and that this breach can reasonably be considered to have influenced the output generated by the AI system. In that case, the court may presume that the damage was caused by the breach, so that the person who breached the obligation is liable. However, this presumption of causality is rebuttable: the defendant can defeat it by providing evidence that the damage was not caused by the breach of obligation.
On the other hand, victims shall gain easier access to evidence in connection with AI systems. In specific terms, victims may apply to the court for the disclosure of information on high-risk AI systems, such as medical devices. This is intended to help victims identify liable persons and the actual cause of the damage.
Consumer protection not guaranteed
Even though the EU Commission's Directive proposal names consumer protection as a major guiding theme, the liability rules remain completely inadequate. On the contrary, the proposed rules amount rather to a protection programme for developers and users of AI systems. On closer inspection, neither of the proposal's two core elements, the presumption of causality and facilitated access to evidence, is suitable to simplify the enforcement of compensation claims. The requirements on the burden of proof remain too high, considering that victims must prove both a breach of obligation by an AI producer or user and the connection between the AI output and the damage. And as if that were not enough, a lack of diligence gives rise to liability only if it breaches a rule whose very purpose is to prevent the kind of damage that occurred. The sole concession to thoroughly overwhelmed consumers: the EU Commission brought itself to accept a legal presumption of a causal link between the fault and the AI output. However, in the case of high-risk AI, this modest relief only applies if the defendant company cannot demonstrate that the claimant has reasonable access to "sufficient evidence and expertise". In the case of non-high-risk AI, the relaxation of the burden of proof only applies if, in the court's opinion, it would be "excessively difficult" for the claimant to provide evidence.
A court application for access to evidence must also first pass a proportionality assessment and may be legally challenged by the defendant companies. This all but guarantees a lengthy legal battle before a final, enforceable decision is reached.
The EU Commission's inadequate proposals are all the more remarkable given that, in the public consultation, only non-SMEs, i.e. large companies, came out against no-fault liability for certain AI-supported technologies. Against this background, the question arises why the EU Commission did not follow the majority's call for no-fault liability. From the point of view of consumer protection, only one conclusion can be drawn: back to square one.