Hardly any technological development is advancing as rapidly as Artificial Intelligence. It is also playing a role that should not be underestimated in connection with the COVID-19 crisis. However, despite all the advantages the technology has to offer, it is vital that employees, consumers and the environment are adequately protected against the dangers it poses.
Right from the start, applications using Artificial Intelligence (AI) have supported the fight against the pandemic. More precisely, AI has been embedded in diagnostics and has, among other things, been used in the search for a vaccine and in observing and predicting the spread of the virus – but also for monitoring the imposed lockdowns and restrictions on outdoor activities.
White Paper on Artificial Intelligence
In order to meet the challenges that accompany the application of AI, the EU Commission published its White Paper on Artificial Intelligence as early as February. In it, the Commission advocates a “risk-based” approach, according to which only those applications would be regulated that, in the Commission’s opinion, are used in particularly sensitive areas and could have particularly serious consequences. A possible ban on biometric identification technologies (e.g. facial recognition), although originally intended in the Commission’s working papers, did not find its way into the White Paper. The Chamber of Labour (AK) wants to use the debate on the White Paper and the related consultation to point out the importance of an ambitious level of protection.
“Risk-based” approach not sufficient
Even if the discussion at European level is to be welcomed in principle and the technology undoubtedly holds huge potential, the risks linked to AI must not be glossed over. From the Chamber of Labour’s point of view, a “risk-based” approach is not enough to adequately protect consumers, employees and their rights as well as the environment. Hence, the application of AI in these areas has to be classified as particularly sensitive.
In particular, employees must be protected against applications that affect their rights or working conditions. When AI is deployed, they must be involved, as must the social partners. Applications with particularly far-reaching effects on working conditions and employees’ rights, for example in personnel decisions, should be banned entirely. The impact of AI in the work environment has to be strictly evaluated, and accompanying measures regarding training and development are essential.
The protection of consumers requires a graded and mandatory legal framework for all applications. Loopholes in the General Data Protection Regulation (GDPR) have to be closed and the regulation enforced effectively. If decisions, services or products are based on algorithms, those algorithms must be explainable and verifiable. Relevant examples are algorithmically controlled recommendations – e.g. by language assistants such as Alexa – which carry a high risk of manipulation, or automated decisions on consumers’ creditworthiness. The use of AI-based facial recognition should be explicitly banned.
Science and research, too, must not be exempt from data protection requirements, and the people affected have to be involved when decisions are made on the (in)admissibility of AI applications. Apart from that, the AK would welcome greater consideration of environment and climate protection as well as resource conservation. Here, AI should be used primarily for public transport and sustainable traffic concepts.
Vestager in dialogue with the EU Parliament
On 23 June 2020, Executive Vice-President and Digital Commissioner Margrethe Vestager came to the European Parliament’s Committee on Legal Affairs (JURI) for a dialogue. She once again emphasised that the transition towards a digital society would remain the focus of her work and that the Recovery Plan would be guided by this accordingly. In reply to questions, Vestager again referred to the risk-based approach of the White Paper: only if clear guidelines were set would it be possible to gain the necessary trust of the population. Following the exchange with Vestager, the Legal Affairs Committee debated the ethical aspects of using AI. Whilst the need for a clear and legally uniform definition of AI was emphasised several times, opinions differed widely on the creation of a European Agency for Artificial Intelligence and on the scope of a possible regulation.
Dangers
The example of facial recognition shows which specific dangers AI can harbour: the technology is (still) immature and might, moreover, reinforce racist tendencies, also in the context of criminal prosecution. In reaction to the violent death of George Floyd and the subsequent protests, many technology corporations announced that they would no longer supply US police forces with facial recognition software. In addition, the European Trade Union Confederation (ETUC) worries that the use of AI could lead to more gender inequality and urges suitable efforts to ensure that women are not discriminated against. It is therefore absolutely clear that the debate on how to handle Artificial Intelligence must continue in greater depth – and with the involvement of employees and the social partners.
Further information:
AK EUROPA Position Paper: Artificial Intelligence – a European Concept for Excellence and Trust.
AK EUROPA: The COVID-19 crisis as an accelerator for digitalization
AK EUROPA: Artificial Intelligence and other challenges of the digital age