EP’s Amendments to the AI Act
On 14 June 2023, the European Parliament adopted amendments to the legislative proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
The MEPs expanded the list of banned intrusive and discriminatory uses of AI to include the following:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception being law enforcement for the prosecution of serious crimes and only after judicial authorisation;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- AI systems aiming to detect emotions, physical features, or physiological features (e.g. facial expressions, movements, pulse frequency, or voice) when they are used in law enforcement, border management, the workplace, and educational institutions;
- Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases.
AI systems intended to influence the outcome of an election or referendum or the voting behaviour of natural persons should be classified as high-risk AI systems. AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise, and structure political campaigns from an administrative and logistical point of view, are not included in this high-risk classification. By contrast, AI systems intended for biometric identification of natural persons and AI systems intended to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems (with the exception of those prohibited under this Regulation), should be classified as high-risk, according to the Parliamentarians.
The MEPs also added that, in light of the rapid pace of technological development and potential changes in the use of AI systems, the list of high-risk areas and use cases in Annex III should be subject to ongoing review by means of regular assessments.
Providers of foundation models will be required to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before their release onto the EU market. Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio data, and video data (“generative AI”) and providers who remodel a foundation model into a generative AI system shall additionally comply with the transparency obligations outlined in the Regulation. They must also train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of illegal content. They further have to make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
The MEPs also added the following obligations for users of high-risk AI systems, to the extent they exercise control over the high-risk AI system:
- Implement human oversight according to the requirements laid down in this Regulation;
- Ensure that the natural persons assigned to carry out human oversight of the high-risk AI systems are competent, properly qualified and trained, and have the necessary resources to ensure the effective oversight of the AI system in accordance with draft Art. 14 AI Act;
- Ensure that relevant and appropriate robustness and cybersecurity measures are regularly monitored for effectiveness and are regularly adjusted or updated.
With regard to the vote in the Parliament, co-rapporteur Brando Benifei said: “All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council.”
The EP's amendments to the AI Act take account of proposals made at the Conference on the Future of Europe (→ eucrim 2/2022 84-85). These proposals included ensuring human oversight of AI-related processes; making full use of the potential of trustworthy AI; and using AI and translation technologies to overcome language barriers. The text will now be debated in trilogue negotiations between the EP, the Council, and the Commission. The AI Act is one of the priorities of the Spanish Council Presidency, which started on 1 July 2023. The aim is to reach a political agreement by the end of the year.