Comparative Report Calls for Ban on "Predictive Policing" Systems
At the end of June 2025, civil liberties organisation Statewatch published a report that aggregates in-depth research conducted in Belgium, France, Germany, and Spain on the use and operation of automated decision-making systems and databases for "prediction", profiling and risk assessments in policing.
In each country, researchers documented the "predictive" systems in use, how they work, the outputs they produce, and the impacts these have on people, groups, and communities. The Statewatch report summarises the findings on the following four categories of "systems": location-focused systems, person-focused systems, AI video surveillance, and databases. For each category, the report provides key examples of the systems in use in the respective countries, their purpose, the data used, and their outcomes and impact. In the final section, the report outlines key concerns and infringements of individual rights, including discrimination, criminalisation, lack of transparency and accountability, and unlawfulness.
According to Statewatch, "the report demonstrated a clear trend of police forces increasingly implementing 'predictive', profiling, and other data-driven decision-making systems. These are often acquired from surveillance tech companies, including companies that have faced criticism for their involvement with the Israeli state." In conclusion, Statewatch and its partner organisations call for a ban on the "predictive" systems under scrutiny, because:
- Their use leads to racial and socio-economic profiling, discrimination and criminalisation;
- They result in unjust and discriminatory consequences;
- Their use is deliberately secretive and opaque, meaning that people are not aware of their use and are thus unable to challenge their outputs.
The full report is available in English, German, French, Spanish and Dutch.
Civil society organisations have recently been increasingly vocal in their support for restrictions or prohibitions on the development and deployment of "predictive policing" tools. They argue, inter alia, that these tools violate the EU's AI Act (→eucrim 1/2025, 35 and eucrim 1/2022, 12). See also the report by the EU Agency for Fundamental Rights "Bias in Algorithms" (→eucrim 1/2023, 12-13).