On 8 April 2019, the European Commission published a strategy for the further development of the Artificial Intelligence sector. The Commission emphasises that ethical and legal issues will be the focus of further research and development, with the aim of creating a technology that enjoys people's trust.
Artificial Intelligence (AI) is widely regarded as the defining technology of the future, and it is already gaining more and more influence over our lives. Systems with the ability to learn are used in a wide range of sectors: in medicine to support diagnoses, for example, but also in automatic face recognition, in computer games, and to help pupils learn their subjects. In the view of many, Artificial Intelligence is the key technology of the 21st century; for quite some time, therefore, a race has been under way to lead AI research and thereby gain a competitive advantage over other countries. So far, the USA, with its research institutes and software companies, has been regarded as the leader of the “AI revolution”; in recent years, however, China has decisively entered the sector on the basis of its proclaimed “New Generation Artificial Intelligence Development Plan” and, backed by huge investments, aims to become an Artificial Intelligence superpower by 2030. China also enjoys a rather dubious competitive advantage owing to its problematic data protection rules, as the relatively unrestricted collection, use and processing of data benefits Chinese companies and research institutions.
At the moment, the European Union is trailing behind in this technology sector, and the Member States have not yet developed a common roadmap for the future. Plans have so far been presented only at national level, by Finland, France and the United Kingdom. Now, however, the European Commission is following suit and has presented a strategy to position the EU.
EU wants “human-centric” Artificial Intelligence
With the presented strategy, the Commission pursues the goal of driving forward the development of Artificial Intelligence in Europe and building trust in this new type of technology. The planned approach is intended to create “human-centric” Artificial Intelligence that increases people's wellbeing. At the same time, this ethical orientation is meant to strengthen citizens' trust in digital development and thereby create a competitive advantage for European AI companies. With this strategy, the Commission wants European industry to become a pioneer with a distinctive, globally trusted brand in this field. A high-level expert group has been working on suitable ethical guidelines.
Ethical guidelines and next steps
The group of experts has formulated the following seven ethical core requirements, which should be considered in further developments in Europe's Artificial Intelligence sector:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
In a next step, the ethical guidelines will be tested in practice and evaluated in detail by the group of experts, involving stakeholders from the private and public sectors, with regard to their practicality and the feasibility of implementing the requirements. Beyond that, in view of the global interdependencies surrounding Artificial Intelligence, the Commission will also work internationally towards a consensus on human-centric AI.
From a consumer perspective, the ethical core requirements address important issues that must also be considered when developing services and products based on Artificial Intelligence. In such innovations, the focus should be on users' wellbeing; this requires a clear framework that protects consumers and guarantees sensible and careful handling of data. In addition, the question of legal responsibility for the use of Artificial Intelligence and automated systems must also be placed at the centre of development.