AI High-Level Expert Group Publishes Ethics Checklist
On 17 July 2020, the High-Level Expert Group on Artificial Intelligence (AI HLEG) presented its final Assessment List for Trustworthy Artificial Intelligence (ALTAI). ALTAI provides a self-assessment checklist that guides developers and users of AI in putting the seven key EU requirements for trustworthy AI into practice. These requirements are:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination, and fairness;
- Societal and environmental well-being;
- Accountability.
ALTAI aims to help businesses and organisations understand the risks an AI system might generate and how those risks can be minimised while the benefits of AI are maximised. The AI HLEG emphasises that ALTAI is best completed by a multidisciplinary team, whose members may come from within and/or outside the organisation and bring specific competences or expertise on each of the seven requirements and the related questions. Possible stakeholders include:
- Designers and developers of the AI system;
- Data scientists;
- Procurement officers or specialists;
- Front-end staff who will use or work with the AI system;
- Legal/compliance officers;
- Management.
ALTAI is available both as a document and as a prototype web-based tool.
ALTAI was developed over two years, from June 2018 to June 2020. Following a pilot phase in the second half of 2019, the assessment list was revised and further developed on the basis of interviews, surveys, and best-practice feedback.