Assessing Fundamental Rights Risks in High-Risk AI Systems
At the beginning of January 2026, the European Union Agency for Fundamental Rights (FRA) published a report focusing on the key provisions of the AI Act (→eucrim 2/2024, 92-93) and how the Act can be used to ensure the effective protection of fundamental rights. Based on interviews with AI developers, vendors, and users, the FRA examined the challenges associated with the use of AI in critical domains, including education, employment, migration, law enforcement, and public (social) benefits.
The report covers the following aspects:
- Defining AI;
- Explaining what constitutes a high-risk AI system under the AI Act;
- Outlining how to classify such systems in practice;
- Describing how to assess and evaluate the fundamental rights risks posed by high-risk AI systems;
- Setting out mitigation measures.
The report also focuses on law enforcement, notably by:
- Identifying several issues relevant to law enforcement authorities regarding the deployment of high-risk AI systems under the EU AI Act;
- Highlighting that AI systems used in law enforcement – such as those for risk assessment, profiling, crime analytics, or evidence evaluation – can significantly affect fundamental rights due to the power imbalance between authorities and individuals and the potential consequences for liberty, due process, and privacy.
FRA's key observations include the following:
- Many organisations lack structured methods to assess fundamental rights risks beyond data protection and non-discrimination;
- There is limited consideration of rights such as the presumption of innocence, access to remedies, and fair trial guarantees;
- Mitigation practices are fragmented and often rely heavily on human oversight, which may be ineffective if operators over-rely on AI outputs or fail to detect errors;
- Broad interpretations of exemptions or “filters” for high-risk classification could allow law-enforcement-related AI systems with substantial rights impacts to circumvent stricter safeguards, creating potential loopholes in protection.
Overall, the report concludes that addressing fundamental rights risks in AI requires practical guidance, effective oversight, and collaboration among all relevant stakeholders. It notes that self-assessments by AI providers and deployers are often insufficient due to knowledge gaps and the potential for minimal compliance, and it points to a need for enhanced support, research, and resources to identify, evaluate, and mitigate risks. The report emphasises that the proper implementation of the AI Act, supported by guidance, stakeholder cooperation, and empowered oversight, is essential to safeguarding fundamental rights while promoting responsible innovation and public trust.