Guest editorial eucrim 1-2023
Dear Readers,
Artificial Intelligence (AI) has the potential to help us in many ways. One of the promising fields in which AI can be employed is the fight against crime, as spotlighted by a number of contributions in this issue, e.g. on AI’s impact on anti-money-laundering regimes or on its use in preventing cross-border human trafficking. AI also shows its immense potential when applied in the field of forensic analysis, where robots equipped with advanced imaging and analysis capabilities can assist. They are not only capable of processing evidence, collecting fingerprints, analysing DNA samples, and performing other tasks that require technical precision and complex calculations; they also open up effective options for surveillance: Robots equipped with cameras and sensors can be deployed to monitor areas around the clock, unbound by working hours or other human constraints.
In “I, Robot”, Isaac Asimov writes: “You just can’t differentiate between a robot and the very best of humans.” In our view, this is not true: In some ways, robots are better. And law enforcement authorities seem to agree; their hope is that AI monitoring of public areas and the gathering of video footage can help prevent and detect crime as well as enhance public safety, with robots being deployed to patrol high-security areas. Robots can also be utilized in search and rescue operations, especially in hazardous environments where human access is limited or dangerous. They can navigate debris, locate missing persons, and provide rescue teams with critical information. Robots designed for bomb disposal can handle potentially explosive devices safely and defuse dangerous situations without risking human lives.
However, the benefits are accompanied by drawbacks. Two research projects have been launched to fully understand the pros and cons: one at the University of Basel on “Human-Robot Interactions: Legal Blame, Criminal Law and Procedure” and one at the University of Luxembourg on “Criminal Proceedings and the use of AI Output as Evidence”. They not only explore in detail the possible impact of AI on fact-finding in criminal justice but also tackle other legal and societal concerns, including the detrimental effects on democracy when surveillance becomes a permanent feature of daily life, the lack of accountability for decisions taken by AI, and potential biases in algorithmic decision-making that can lead to discrimination.
These concerns have led to a demand for regulation, which is a complex issue. By now, several focal points for mitigating the risks have been identified, such as more transparency in AI systems to allow humans to better understand the decision-making process and to trace bias. While regulators grapple with the construction of an adequate legal framework by which to harness the benefits of AI, they must also ensure that these benefits are balanced with the preservation of data privacy, safety, and security.
There are many reasons why human supervision and responsibility will be key for the application of AI, differentiating between more and less sensitive areas. Thus, perhaps the most provocative question asked in this issue is: Why do we still need a “human court” if an AI-driven tool could render decisions much more cheaply, quickly, and possibly even more fairly? The answer might well be that we, as humans, want to take meaningful responsibility for decisions made about the lives of others and for shaping the society they live in. After all, we wish to avoid a future like the one described in Alex Garland’s “Ex Machina”: “AI looks back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.”
To avoid this, the core human task remains: to understand the capabilities and limitations of technology and to use it wisely, guided by scientific research.