The European Union (“EU”) has proposed a uniform legal framework ‘in particular for the development, marketing and use of artificial intelligence in conformity with EU values’. This is to be known as the Artificial Intelligence (“AI”) Act, and its main aim is to ensure the ‘development of human-centred, sustainable, safe, ethical and trustworthy artificial intelligence’. The proposed text for the AI Act was debated on the 13th of June 2023 and, on the 14th of June 2023, was adopted by the European Parliament.
The AI Act promotes the creation of regulatory sandboxes by national authorities, in which AI systems can be tested in a controlled environment before being offered to the public. The EU is also placing an emphasis on AI literacy and will promote measures for the development of training in this emerging field.
In a joint report, the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs set out the main points to be included in the AI Act.
The report states that all AI systems will need to comply with the following general principles:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Diversity, non-discrimination and fairness; and
- Social and environmental well-being.
There have been fears that AI will be used for malicious ends and, as such, the proposed AI Act classifies AI systems according to the risk they pose. The proposed text differentiates between unacceptable risk, high risk, and low or minimal risk. Title II of the proposed text provides a list of prohibited AI practices, which includes the use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, and the use of emotion recognition systems in law enforcement, border management, the workplace or educational institutions.
Where an AI system is classified as high-risk, the EU has set out certain requirements with which the system must comply. Among these is a risk management system ensuring that all risks are identified and mitigated. Furthermore, data must be collected in an appropriate manner, and all records must be kept up to date and stored securely. Users of the AI system must be kept informed of the way the system works, and there must be complete transparency as regards the intended purpose of the AI system, its level of accuracy, any known or foreseeable circumstance affecting the high-risk AI system, its performance as regards the persons on whom the system is to be used, and even the specifications for the input data.
Above all, the EU makes it clear that it wants to ensure human oversight. Indeed, Article 14 of the proposed text states that ‘high-risk AI systems shall be designed in such a way… that they can be effectively overseen by natural persons during the period in which the AI system is in use’. To this end, the proposal establishes a European Artificial Intelligence Office, an independent body of the Union, as well as a database for all high-risk AI systems.