45 Civil Society Organisations Call for Prohibition of Predictive and Profiling AI Systems in Law Enforcement and Criminal Justice
45 Civil society organisations issued a call for the prohibition of predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act (→ eucrim 2/2021, 77). They argue that the use of these systems will lead to the following problems:
- An increase in discrimination, surveillance, and over-policing: The organisations claim that the law enforcement and criminal justice data used to create, train, and operate AI systems is often biased and will therefore reinforce the discrimination, surveillance, and over-policing of racialised people, communities, and geographic areas.
- A violation of the right to liberty, the right to a fair trial, and the presumption of innocence: Predictive profiling and risk assessment AI systems in the area of law enforcement and criminal justice will lead to individuals and groups being profiled as criminals before they have even carried out the alleged acts for which they are being profiled. Serious criminal justice and civil outcomes and punishments, including deprivations of liberty, may therefore occur even before the individuals or groups concerned have acted criminally.
- A violation of the right to an effective remedy, a lack of transparency, and problems with accountability: Individuals affected by decisions made by these systems should be made aware of their use and informed about clear and effective criminal procedure routes by which to challenge the use of these systems.
Against this background, the civil society organisations stressed that such systems must be included as a "prohibited AI practice" in Article 5 of the planned Artificial Intelligence Act.