With its proposal for a regulation on Artificial Intelligence (AI), the European Commission seeks to create a legal framework for the use of and engagement with AI systems. But are the proposed rules sufficient to actually provide adequate protection and transparency, and to clarify questions of responsibility?
Artificial Intelligence (AI) is increasingly finding its way into all areas of life. Advancing digitalisation does not happen without the growing use of so-called AI technologies. However, as much as these primarily self-learning systems (machine learning) are able to facilitate and enable many things, they do not come without risks. It is therefore essential to create a legal framework that protects against these risks, upholds fundamental and civil rights and liberties, and, above all, creates transparency about the impact these systems have.
The presentation of the White Paper on AI in 2020, and the consultation associated with it, already set a discussion process in motion, which is now, with the proposed regulation, taking the next step. However, given that too many questions remain open, the proposal must be seen as an intermediate step and not as the end of the discussion.
Even though the present proposal for a regulation does not ignore the fact that AI systems may harbour a multitude of risks, it nevertheless limits itself to regulating "high-risk" AI: applications that may entail the death of a person, serious damage to health or damage to property. What will actually count as "high-risk" in practice, however, remains too vague owing to narrowly drawn requirements, and in the worst case the category is far too restrictive. In the end, it is irrelevant to aggrieved employees, citizens and consumers whether their loss resulted from high-risk or merely risky AI. In either case, they expect state regulation in the form of loss prevention through prior vetting, transparency, and rights of appeal.
Beyond that, the proposal lacks provisions on rights of appeal and information, as well as a right to co-determine the extent to which AI systems are used in relation to oneself. Particularly problematic is the fact that not only responsibility for, but also control over, an AI system's compliance with the regulation is to be left primarily to the producers themselves. As a result, many high-risk systems are withdrawn from systematic control by independent third parties. This is a highly unsatisfactory solution, especially if one has set oneself the goal of increasing the trust of the European population in Artificial Intelligence.
The proposal is particularly disappointing with regard to working life. AI systems are already constant companions in many people's everyday work. They often support employees in doing their jobs, but the constant flow of data generated "along the way" also opens up many opportunities for surveillance and control.
A right to co- and self-determination for employees who are confronted with AI in their work processes is therefore urgently required. This means involving employees and their representatives in the introduction of AI, accompanied by information and control rights for employees and works councils during the use of such systems. This applies all the more because early and comprehensive involvement of employees in the introduction of AI in a company often results in better, more efficient and more broadly accepted solutions, and is therefore in the interest of all participants. The draft regulation, however, largely ignores these aspects.
Hence, there is still much room for improvement if the regulation that eventually comes into effect is to create an adequate legal framework for Artificial Intelligence.