Guidelines on Prohibited AI Practices
17 March 2025 // Preprint Issue 1/2025
Dr. Anna Pingen

On 4 February 2025, the European Commission published non-binding Guidelines on Prohibited Artificial Intelligence (AI) Practices, pursuant to Art. 5 of Regulation (EU) 2024/1689 (AI Act), which entered into force on 1 August 2024. The AI Act is the EU’s flagship legislation on artificial intelligence, introducing a risk-based framework for AI governance that categorizes AI systems into four levels of risk: unacceptable, high, limited, and minimal. The Guidelines are intended to promote the consistent, effective, and uniform application of the prohibitions set out in the Act and to assist stakeholders in interpreting and operationalizing its provisions.

The Commission clarified that AI practices falling under the unacceptable risk category are those that contravene fundamental rights protected under Union law. These include, inter alia, the use of subliminal techniques and manipulative strategies that materially distort user behavior, AI systems exploiting vulnerabilities related to age or disability, social scoring systems based on personal characteristics or behavior, and the use of predictive AI to infer criminal risk solely through profiling. The Guidelines also addressed the prohibition of untargeted scraping of facial images from online sources for biometric identification purposes as well as the use of emotion recognition systems in workplaces or educational settings—except under narrowly defined safety or medical exceptions.

Particular attention was given to real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. While this practice is generally prohibited, the Guidelines acknowledged limited exceptions subject to strict legal and procedural safeguards. The Guidelines further covered the prohibition of biometric categorization systems that infer sensitive attributes, such as political opinions or sexual orientation, unless demonstrably justified under Union law.

The Guidelines elaborated on both the material and personal scope of the prohibitions. They distinguished between providers and deployers of AI systems and outlined cases in which the AI Act does not apply, such as military applications, national security contexts, and scientific research. Emphasis was placed on ensuring that these exclusions are interpreted narrowly so as not to undermine the protective aims of the AI Act.

In terms of enforcement, the Guidelines reiterated that national market surveillance authorities bear primary responsibility for monitoring compliance. For breaches of Art. 5, the AI Act empowers these authorities to impose significant administrative fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
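As a purely illustrative aid, the minimal Python sketch below shows how the "whichever is higher" fine ceiling operates arithmetically; the function and variable names are hypothetical and the sketch does not reflect how authorities actually calculate fines in individual cases.

    # Illustrative sketch only: the upper bound of the fine range for an
    # Art. 5 infringement, assuming the "whichever is higher" rule.
    # Names are hypothetical, not taken from the AI Act or the Guidelines.

    FIXED_CAP_EUR = 35_000_000   # EUR 35 million
    TURNOVER_SHARE = 0.07        # 7% of total worldwide annual turnover

    def max_fine_ceiling(annual_worldwide_turnover_eur: float) -> float:
        """Return the maximum possible fine for an Art. 5 breach."""
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_worldwide_turnover_eur)

    # Example: an undertaking with EUR 1 billion in worldwide turnover faces
    # a ceiling of EUR 70 million, since 7% of turnover exceeds EUR 35 million.
    print(max_fine_ceiling(1_000_000_000))  # 70000000.0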

The Guidelines concluded that the prohibited AI practices outlined in Art. 5 pose a significant threat to fundamental rights such as autonomy, privacy, non-discrimination, and human dignity. Accordingly, the Commission recommended a case-by-case approach to interpretation and enforcement, stressing the importance of contextual analysis and the precautionary principle. The Guidelines also underscored the need for institutional coordination, both across Member States and within EU bodies, facilitated by the AI Board, in order to foster coherent implementation.
