EP-Studies on Algorithmic Decision-Making
17 June 2019
Thomas Wahl

The European Parliamentary Research Service published two studies dealing with algorithms used in systems to support decision-making. The studies were designed to provide a basis for future debates in the European Parliament on the issue of algorithmic decision-making systems.

The first study, “Understanding algorithmic decision-making: Opportunities and challenges,” focuses on the technical aspects of algorithmic decision systems (“ADS”) and explores the benefits and risks of ADS for individuals, the public sector, and the private sector. The study also includes examples of ADS in criminal justice, e.g., predictive policing, risk assessments for recidivism, and the use of ADS in sentencing. In conclusion, the study puts forward various options for policymakers and the public to take precautionary measures that address the challenges raised. These options include:

  • Developing and disseminating knowledge about ADS;
  • Publicly debating the benefits and risks of ADS;
  • Adapting legislation to enhance the accountability of ADS;
  • Developing tools to enhance the accountability of ADS;
  • Effectively validating and monitoring measures for ADS.

The second study develops policy options for a governance framework for algorithmic accountability and transparency. It analyses the social, technological, and regulatory challenges posed by algorithmic systems. The study, inter alia, deals with algorithm-based decision-making in the US criminal justice system as an example of algorithmic fairness – in the view of the authors, algorithmic fairness is a guiding principle for transparency and accountability.

As regards governance frameworks, the study explains a number of fundamental approaches to technology governance, provides a detailed analysis of several categories of governance options, and reviews specific proposals for the governance of algorithmic systems as discussed in the existing literature. The study breaks down the assessments into four policy options:

  • Awareness raising: education, watchdogs, and whistleblowers;
  • Accountability in public-sector use of algorithmic decision-making;
  • Regulatory oversight and legal liability in the private sector;
  • Global dimension of algorithmic governance.

Each option addresses a different aspect of algorithmic transparency and accountability and includes concrete recommendations for policymakers.
