On 17 December 2024, the European Data Protection Board (EDPB) published its Opinion on certain data protection aspects related to the processing of personal data in the context of Artificial Intelligence (AI) models. The Opinion follows a request addressed to the EDPB by the Irish data protection supervisory authority pursuant to Art. 64(2) GDPR.

The Opinion addresses the following questions:

  • When and how can an AI model be considered "anonymous"?
  • How can controllers demonstrate the appropriateness of legitimate interest as a legal basis in the development phase?
  • How can controllers demonstrate the appropriateness of legitimate interest as a legal basis in the deployment phase?
  • What are the consequences of the unlawful processing of personal data in the development phase of an AI model on the subsequent processing or operation of the AI model?

In response to the first question, the EDPB confirms that AI models trained with personal data cannot automatically be considered anonymous; the anonymity of an AI model must therefore be assessed by the competent supervisory authority on a case-by-case basis. The Opinion provides a list of methods that controllers may use to demonstrate anonymity, which supervisory authorities can draw on when assessing a controller's claim of anonymity.

Regarding the second and third questions, the Opinion reiterates that there is no hierarchy between the legal bases provided by the GDPR and that it is up to data controllers to identify the appropriate legal basis for their processing activities. When relying on legitimate interest, controllers should apply the three-step test developed for this legal basis under the GDPR: (1) identify a legitimate interest, (2) demonstrate that the processing is necessary to pursue it, and (3) balance that interest against the rights and freedoms of the data subjects. The EDPB provides further guidance on how the three-step test should be applied in the context of AI models. With regard to the third step (the balancing test), the Opinion particularly highlights the role of data subjects' reasonable expectations and the importance of taking the context of the processing into account.

With regard to the fourth question, the EDPB emphasizes that supervisory authorities enjoy discretionary powers to assess any possible infringement(s) and to choose appropriate, necessary, and proportionate measures, taking into account the circumstances of each individual case. These discretionary powers vary depending on the scenario at hand, i.e., whether the personal data retained in the AI model are subsequently processed, lawfully or unlawfully, by the same controller or by another controller.

Author

Cornelia Riehle LL.M., Deputy Head of Section, Criminal Law, Academy of European Law (ERA)