Council's Common Position on Artificial Intelligence Act
On 6 December 2022, the Council adopted its common position (general approach) on the Artificial Intelligence Act. The Commission presented the draft regulation on 21 April 2021 (→ eucrim 2/2021, 77), which followed up on the Commission’s White Paper on AI from 2020 (→ eucrim 1/2020, 8-9). The aim of the Artificial Intelligence Act (AIA) is to turn Europe into a global hub for trustworthy artificial intelligence (AI) and to balance the benefits of AI against the numerous risks its use can entail.
In its common position, the Council narrowed down the definition of an AI system to systems developed through machine learning approaches and logic- and knowledge-based approaches. With this narrowed-down definition, the Council wanted to draw a clearer distinction between AI and simpler software systems.
Regarding the prohibition of AI practices, the Council decided to extend the prohibition on using AI for social scoring to private actors. Additionally, the provision that prohibits the deployment of AI systems exploiting the weaknesses of a specific group of persons has been expanded to include persons who are vulnerable because of their social or economic situations. As for the use of "real-time" remote biometric identification systems in publicly accessible spaces by law enforcement authorities, the compromise text clarified the objectives according to which such use is considered strictly necessary for law enforcement purposes; law enforcement authorities should therefore be allowed to use such systems as an exception.
In order to prevent AI systems that are not expected to seriously violate fundamental rights or pose other significant hazards from being classified as high risk, the compromise proposal now includes an additional horizontal layer on top of the high-risk classification made in Annex III. When categorizing AI systems as high risk, it should also be taken into consideration how significant the output of the AI system is in relation to the relevant action or decision to be made. The significance of the output of an AI system is assessed based on whether or not it is purely accessory in respect of the relevant action or decision to be taken.
Many of the requirements involving high-risk AI systems, as provided in Chapter 2 of Title III of the proposal, have been clarified and adjusted in such a way that they are more technically feasible and less burdensome for stakeholders to comply with. In view of the fact that AI systems are developed and distributed through complex value chains, the compromise text includes changes amending the allocation of responsibilities and roles.
The Council also narrowed the scope of the proposed AI Act, including the provisions relating to law enforcement authorities, by explicitly excluding national security, defence, and military purposes. The Council further clarified that the AI Act should not apply to AI systems (and their outputs) used for the sole purpose of research and development, nor impose obligations on persons using AI for non-professional purposes.
A number of amendments have been made to the rules governing the use of AI systems for law enforcement in order to take into account the unique characteristics of law enforcement agencies. Notably, some of the related definitions in Art. 3, such as "remote biometric identification system" and "real-time remote biometric identification system", have been fine-tuned in order to make clear which situations fall under the related prohibition and high-risk use case and which situations do not. The compromise proposal also includes additional changes that, under the right conditions, are intended to guarantee a suitable degree of flexibility in the use of high-risk AI systems by law enforcement authorities and take into account the necessity of maintaining the confidentiality of sensitive operational data in connection with their operations.
In order to simplify the compliance framework for the AI Act, the compromise text contains a number of clarifications and simplifications to the provisions on the conformity assessment procedures. It also substantially modifies the provisions on the AI Board, with the objectives of ensuring its greater autonomy and strengthening its role in the governance architecture for the AIA.
The compromise text further includes a number of changes that increase transparency with regard to the use of high-risk AI systems. It specifies that a natural or legal person who has reason to believe that an infringement of the provisions of the AI Act has occurred may make a complaint to the relevant market surveillance authority and reasonably expect such a complaint to be handled according to the dedicated procedures of that authority.
Next steps: once the EP has agreed on its position, trilogue negotiations can start.