FRA Report on Use of AI in Predictive Policing and Offensive Speech Detection
8 March 2023
Dr. Anna Pingen

On 8 December 2022, the European Union Agency for Fundamental Rights (FRA) published its report on the use of artificial intelligence (AI) in predictive policing and offensive speech detection. The report takes a closer look at possible biases in algorithms, which can be amplified over time and affect people's lives, potentially leading to discrimination. The FRA stressed that algorithmic bias is still under-researched and that evidence-based assessments are lacking. The two "use cases" – (potentially faulty) crime predictions and the flagging of (legitimate) content posted online as offensive – are intended to help fill this gap. The FRA makes six recommendations:

  • The quality of training data and other sources that influence bias needs to be assessed by users of predictive algorithms. If data generated from the outputs of an algorithmic system become the basis for updating that algorithm, the bias may be amplified over time (see the first sketch after this list). With regard to predictive policing, this means that an assessment needs to be made both before and during the use of the algorithm.
  • Additional implementing guidance on the collection of sensitive data under Art. 10(5) of the proposed Artificial Intelligence Act should be considered, notably with respect to the use of proxies and to the delineation of protected grounds (such as ethnic origin or sexual orientation).
  • Increased transparency and assessments of algorithms are required as a first step in safeguarding against discrimination. Companies and public bodies using speech detection should be required to share the information necessary to assess bias with relevant oversight bodies and – to the extent possible – with the public. When exercising their mandates, oversight entities responsible for upholding fundamental rights, such as equality bodies and data protection authorities, should pay special attention to potential discrimination in language-based prediction models.
  • Given that speech detection algorithms exhibit strong bias against persons based on several different characteristics (such as ethnic origin, gender, religion, and sexual orientation), the EU legislator and the Member States should strive to ensure consistently high levels of protection against discrimination on all grounds. This discrimination should be tackled by applying the existing laws that safeguard fundamental rights. Existing data protection law must also be used to ensure non-discrimination when algorithms are used for decision-making. Equality bodies should employ specialised staff and cooperate with data protection authorities and other relevant oversight bodies in order to step up their efforts to address discrimination complaints and cases linked to the use of algorithms.
  • The EU and its Member States need to consider measures fostering greater language diversity in Natural Language Processing (NLP) tools as a way of mitigating bias in algorithms and improving the accuracy of data. As a first step, this should include promoting and funding NLP research on a range of EU languages other than English in order to promote the use of properly tested, documented, and maintained language tools for all official EU languages. The EU and its Member States should also consider building a repository of data for bias testing in NLP (the second sketch after this list illustrates such a test).
  • Increased EU and national funding is required for fundamental rights assessments of software and algorithms already in use, including studies of the available, general-purpose algorithms, in order to foster the deployment of trustworthy AI that complies with fundamental rights. The EU and its Member States could also improve the identification and combating of the risk of bias in algorithmic systems by ensuring access to data and data infrastructures for EU-based researchers. Investments in storage and cloud computing infrastructures that meet EU criteria for data protection, software security, and energy efficiency would help achieve this.
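
The feedback loop flagged in the first recommendation can be made concrete with a small simulation. The following sketch is not from the FRA report; the district figures and the greedy "patrol the hot spot" rule are purely hypothetical assumptions.

```python
# Illustrative feedback-loop simulation (hypothetical figures): patrols are
# sent to the district with the most recorded crime, and crime is only
# recorded where patrols are present, so an initial recording gap widens.

true_crime = [100, 100]   # identical underlying crime in districts A and B
recorded = [60, 40]       # district A starts out slightly over-recorded

for year in range(1, 6):
    # Greedy allocation: all patrols go to the currently "hottest" district.
    hot = 0 if recorded[0] >= recorded[1] else 1
    # Only the patrolled district generates new records; crime in the other
    # district goes unobserved, making it look ever safer by comparison.
    recorded[hot] += true_crime[hot]
    share_a = recorded[0] / sum(recorded)
    print(f"year {year}: district A holds {share_a:.0%} of all recorded crime")
```

Although true crime is identical in both districts, district A's share of recorded crime grows from 60% to over 90% within a few iterations – the amplification over time that the recommendation asks users to assess for.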
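The kind of bias test that the NLP data repository in the fifth recommendation would support can also be sketched in a few lines. The classifier below is a deliberately biased toy stand-in, not a real model; the templates, identity terms, and keyword rule are all assumptions made for illustration.

```python
# Template-based bias test for an offensive-speech classifier: fill neutral
# sentence templates with different identity terms and compare how often
# each group's sentences are wrongly flagged.

TEMPLATES = ["I am {}.", "My neighbour is {}.", "Proud to be {}!"]
IDENTITY_TERMS = ["christian", "muslim", "jewish", "gay", "straight"]

def is_flagged(text: str) -> bool:
    # Hypothetical biased model: treats the mere mention of certain
    # identity terms as a signal of offensiveness.
    return any(term in text.lower() for term in ("muslim", "gay"))

# None of the template sentences is offensive, so every flag is a false
# positive; a gap in rates between groups indicates bias.
for term in IDENTITY_TERMS:
    sentences = [t.format(term) for t in TEMPLATES]
    fpr = sum(is_flagged(s) for s in sentences) / len(sentences)
    print(f"{term:10s} false-positive rate: {fpr:.0%}")
```

A systematic gap in false-positive rates across identity terms, as this toy model produces, is the type of disparity the FRA's offensive speech use case examined.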

The FRA's report aims to inform policymakers, human rights practitioners, and the general public about the risk of bias when AI is used. In particular, it feeds into the discussion on the proposed Artificial Intelligence Act (→ eucrim 2/2021, 77), in which the protection of fundamental rights plays an important role.
