Secretive Security AI Agenda Sparks Concern Over Civil Liberties
13 May 2025 // Preprint Issue 1/2025
Dr. Anna Pingen

The European Union is facing renewed criticism over its secretive development of artificial intelligence (AI) tools for policing, border control, and criminal justice, following a report published by Statewatch in April 2025: "Automating Authority". The report reveals that the EU and its Member States have quietly expanded efforts to deploy so-called “security AI” technologies. According to the authors, this development occurred largely outside public scrutiny, despite its far-reaching implications for privacy and civil liberties. They warn of serious threats to human rights, democratic oversight, and accountability.

When the EU adopted the landmark Artificial Intelligence Act in 2024 to regulate high-risk AI systems and uphold fundamental rights, the law included sweeping exemptions for security-related uses. Among the most concerning were a full exemption until at least 2031 for high-risk AI already used by public authorities, as well as carve-outs for biometric surveillance, profiling, and data categorisation by law enforcement.

Documents obtained via access to information requests have revealed how internal EU bodies, including the European Clearing Board and eu-LISA, have worked to weaken safeguards and lay the institutional groundwork for security AI deployment. Consultancy firm Deloitte, for instance, reportedly drafted initial plans for a “centre of excellence” at eu-LISA, although that proposal was later shelved.

The report also sheds light on the technical infrastructure being built to support security AI systems. The Security Data Space for Innovation (SDSI), an EU-funded project, was found to be mapping types of police-held data—including photos, audio data, and scraped web content—for use in AI training. Europol’s parallel initiative included developing an AI “sandbox” to test tools like voice analysis and facial recognition in a controlled environment. Statewatch cautions that these developments risk entrenching bias in policing, especially when AI systems are trained on flawed or discriminatory datasets.

According to Romain Lanneau, co-author of the report, EU police and migration authorities would effectively self-assess the legality of their experiments with highly intrusive technologies. This engenders risks such as violations of freedom of expression, the right to asylum, and the principle of non-discrimination. Against the backdrop of growing concerns about AI governance and the influence of far-right actors in Europe, the report calls for robust democratic oversight and an urgent public debate on the future of “security AI” in the EU.
