The 7 EU rules for the ethical development of Artificial Intelligence
The European Union has convened over 50 experts to draw up 7 guidelines that governments and companies will have to follow in order to deploy their Artificial Intelligence systems on the Old Continent.
The European Union has just published a series of guidelines that companies and governments should follow to develop "ethical" solutions based on Artificial Intelligence. The new rules aim to tackle the problems expected to arise in the near future as these technologies are integrated into sensitive sectors such as health care, education and consumer applications.
For example: if an AI system is able to diagnose an advanced disease in a patient, the new guidelines impose a general procedure to ensure that the software is not influenced by information about race or gender, that it does not override the objections of a doctor or other human operator, and that the patient is given the opportunity to receive explanations and information from a human being.
To address the problems that could arise from the adoption of Artificial Intelligence technologies in sensitive sectors, the European Union put together a group of 52 experts, who compiled seven requirements that every Artificial Intelligence system should meet in the coming years. Here they are, freely translated from the note issued by the EU:
- Human control and supervision: Artificial Intelligence systems should contribute to a fair society, supporting human beings and fundamental rights rather than diminishing, limiting or misleading their autonomy. Human autonomy must not be sacrificed in favor of total automation, and people should not be manipulated or coerced by AI systems.
- Robustness and security: Artificial Intelligence systems must rely on secure and dependable algorithms, reliable enough to handle errors or inconsistencies on their own during every phase of their life cycle. The system must also be minimally vulnerable to external attacks.
- Privacy and data ownership: citizens should have full control over their data, and data concerning them should not be used to harm or discriminate against them.
- Transparency: government agencies and companies should ensure the traceability of the data processed by Artificial Intelligence systems. Algorithms should be accessible to third parties, and humans should be able to "follow and track" the decisions made by the software.
- Diversity, non-discrimination and equity: in their choices, Artificial Intelligence systems should consider the full range of human abilities and skills, ensuring accessibility regardless of age, gender or race.
- Social and environmental well-being: Artificial Intelligence systems should be used to enable positive change in society, increasing sustainability and ecological responsibility.
- Responsibility: mechanisms should be put in place to ensure accountability for the actions taken by Artificial Intelligence systems. Their potential negative impacts should be understood and reported before deployment.
Some of the EU's requirements are admittedly abstract and subjective, such as the notion of "positive social change," but others are more concrete and interesting: the EU explicitly proposes government supervision of AI systems, along with disclosure of the data and the methods used to train them, all to combat the spread of potentially biased automated systems.
These are not legally binding rules, but guidelines meant to steer the future choices of European governments on the deployment of Artificial Intelligence systems, all to protect the digital (and other) rights of the citizens of the Old Continent.
The European Union wants to establish itself among the leaders of the AI revolution, but it is clearly at a disadvantage today compared to the USA and China, which currently play a far more prominent role in AI research and development.