AK EUROPA, together with the European Consumer Organisation (BEUC) and the Austrian Federal Ministry of Social Affairs, Health, Care and Consumer Protection (BMSGPK), organised a webinar on consumer protection as an integral part of trustworthy Artificial Intelligence.
Artificial Intelligence (AI) is influencing ever more sectors of daily life, for example in credit analysis, mobility or microtargeting in online advertising. In April 2021, the EU Commission proposed a European AI Act, which is intended to contribute to the development and use of trustworthy AI systems. Against the background of the ongoing negotiations in the Council and the EU Parliament, the webinar was an opportunity to highlight significant deficits of the Commission's proposal and the need for improvement from a consumer perspective.
In her opening remarks, Maria Reiffenstein (BMSGPK) stated that the horizontal framework of the AI Act had been anticipated with high expectations. From the consumers' and end users' point of view, however, the proposal is disappointing, as it does not require algorithms to be disclosed to them. Moreover, most consumer-relevant AI systems have not been classified as "high risk" applications. A study by Professor Wendehorst, commissioned by the BMSGPK, drew up concrete recommendations on how to improve the Commission's proposal in the interests of consumers.
Walter Peissl (ITA), author of the AK study “Artificial Intelligence – Explainability and Transparency”, pointed out that AI is being used ever more frequently, yet the Commission’s proposal largely ignores consumers as a target group. Transparency is vital for building trust, as AI needs to be understandable for users. According to the author of the study, it would be important to encourage a broad debate, both on substance and in society, and to broaden the AI definition of the Commission’s proposal. There is also a need for a transparent set of criteria that considers transparency across a wide range of dimensions and contains a right to information. Furthermore, AI systems that compromise fundamental rights and freedoms, democracy or ethical principles should be banned.
Christiane Wendehorst (University of Vienna), presenting her study “The Proposal for an Artificial Intelligence Act from a consumer policy perspective”, emphasised that the list of prohibited AI practices is too narrow. The definition in the Commission’s proposal should be expanded in a range of areas and should, among other things, take into account whether people are vulnerable because of their economic and social situation. Wendehorst also demanded that the AI Act include binding prohibitions on total surveillance and the processing of brain data. The Act should also incorporate additional obligations regarding the right of individuals to an independent audit of specific decisions. New obligations are needed to improve enforcement against the systemic AI risks posed by very large online platforms.
For Ursula Pachl (BEUC), the main problem from a consumer’s point of view lies in the EU Commission’s internal market approach, which effectively excludes civil society and consumers from the proposal. Hence, it is the responsibility of the EU Parliament and the Council to remedy this situation and to correct the fundamental orientation of the proposal. Pachl also pointed out that horizontal consumer legislation is not specific enough to regulate AI. For this reason, clarifications and the enshrining of consumer rights in the AI Act are urgently required. Furthermore, the Commission’s proposal should include a general prohibition on social scoring and on the use of facial recognition by private actors.
Daniela Zimmer (AK Wien) emphasised that algorithms decide on the fate of individuals and on who gets something and who does not (a contract, an apprenticeship, state benefits etc.). AI penetrates many areas and calculates solutions based on statistical correlations without giving any reasons. However, decisions without comprehensible reasoning are profoundly undemocratic. This should be the starting point for imposing strict rules on AI and aligning it with the public interest. Information on automated decisions has to make sense to the people affected so that they can question and challenge these decisions. Furthermore, the EU AI Act would require stricter bans on constitutionally unacceptable AI (such as emotion recognition), independent certification, regulatory approval of high-risk AI products and easily accessible legal protection for disadvantaged individuals. The Commission’s proposal forgot about consumers at the end of the AI value chain – an omission that urgently needs to be fixed.