In its White Paper on Artificial Intelligence (AI) of early 2020, the EU Commission stated the maxim that AI must be trustworthy. As correct as this diagnosis is, the legal instruments the Commission relies on in its proposal for a regulation, published in April 2021, are weak. The proposal protects affected consumers only inadequately: obligations are to apply to just a few high-risk AI applications, while voluntary self-regulation is deemed sufficient for everything else.
Unfortunately, in its proposal for a regulation on AI, the EU Commission trivialises the risks associated with non-covered sectors by merely recommending non-binding sectoral codes. When using “smart” consumer goods or digital services, consumers are often confronted with a lack of transparency, violations of fundamental rights, discrimination and behavioural manipulation by AI. Consumers are rarely able to prove damaging automated decisions or to take action against them: consider insurance companies that let AI determine individual risks, providers of voice assistants that give algorithmically controlled, manipulative recommendations, or online shops that set individualised prices via AI and thereby discriminate against certain target groups. None of these cases is classified as high-risk within the meaning of the Commission’s proposal for a regulation; yet they are cases in which those concerned will probably draw the short straw with regard to transparent information, self-determination and a successful defence against disadvantage and discrimination.
Decision-making based on purely statistical assumptions and prognoses raises elementary issues of ethics, democracy and human rights. Where decisions are prepared or taken by algorithms, this inevitably affects people in their diversity, which cannot be captured statistically. Without stricter dos and don’ts, people would become guinea pigs in countless AI test fields; they would be classified in a discriminatory manner, discriminated against, and their objections would be ignored. According to the philosopher Richard David Precht (“Künstliche Intelligenz und der Sinn des Lebens”/Artificial Intelligence and the Meaning of Life), the profane goal of almost all AI is to “gain more control and to generate bigger profits; be it through medical or military technology, more efficient production, lower costs or even more information on citizens or customers.”
The Austrian Chamber of Labour strongly advocates improving the protection of consumers against the undermining of their fundamental and civil rights and against other risks of harm arising from self-learning analysis software. This requires:
- Binding rules not only for high-risk AI. A graded, mandatory legal framework is necessary for all AI risk categories; voluntary self-regulation is not suited to protecting consumer rights or building trust.
- Enshrined rights for all affected persons, whose needs the proposal does not consider at all. These include, among others, the right to information, the right to self-determination, i.e. the possibility to reject AI analyses and decisions based on personal data, and the right to appeal.
- A ban, without exception, on socially undesirable AI systems, instead of patchy bans covering only a few varieties of “social scoring” (the assessment of a person’s social behaviour by authorities to determine their trustworthiness), biometric remote surveillance and behavioural manipulation.
- A specific naming of the risks that producers and users must eliminate. Risks to safety, health and fundamental rights have to be minimised. So far, however, neither a general ban on discrimination has been enshrined, nor has it been standardised under what conditions, risk-free or risky, AI may be placed on the market.
- AI certification, without exception, by independent authorities instead of self-certification by producers.
- Verifiability of all AI decisions, services and products that are not banned outright, and comprehensible explanations for those concerned. Only then will people be able to recognise discrimination, behavioural manipulation or fraud and take action against them.
- Institutional involvement of representatives of those concerned in decisions, weighing all interests, on the (in)admissibility of specific AI applications.
- Collective means of appeal for those concerned, established through rights of legal standing.