Statewatch: The EU Security AI Complex Must Be Questioned
In April 2025, Statewatch published a report entitled "Automating Authority - Artificial Intelligence in European Police and Border Regimes". The report examines how police, border and criminal justice agencies use AI tools in their daily work and how the EU is creating new infrastructure for "security AI".
In the first section, the report scrutinizes the EU's AI Act (→eucrim 2/2024, 92-93). It argues that the Act achieves two main things in the field of security: (1) it establishes the conditions for the increased development and use of security AI systems, and (2) it ensures that those systems are subject to extremely limited accountability, oversight and transparency measures.
In the subsequent section, the report looks into AI projects and activities launched by eu-LISA, Europol, Frontex, Eurojust and the EU Asylum Agency. It finds that these bodies work with a wide variety of AI technologies, ranging from facial recognition to machine learning and "predictive" technologies.
In the last section, the report explores the infrastructure required for the development of the "EU security AI complex". It considers two types of infrastructure in this context: institutional and technical.
Annexes provide information on high-risk systems under the AI Act, the registration items required in the EU database of high-risk AI systems, and the AI technologies and techniques of interest to EU policing, migration and criminal justice institutions and agencies.
In its conclusions, Statewatch calls for the EU security AI complex to be questioned. It criticises the AI Act for providing an extremely limited framework for the oversight and accountability of security AI. In addition, the law is confusing and unclear, which will necessitate clarification through jurisprudence. The new infrastructure being established to embed security AI in EU policy and practice is secretive, complex and confusing. Even basic transparency measures are lacking, which poses risks to democracy.