Commission: 6th Evaluation of Code of Conduct on Countering Illegal Hate Speech Online
17 November 2021
Dr. Anna Pingen

On 7 October 2021, the European Commission released the results of its sixth evaluation of the Code of Conduct on countering illegal hate speech online. Since the Code of Conduct was introduced on 31 May 2016 by the European Commission and four major IT companies (Facebook, Microsoft, Twitter, and YouTube) (→ see also eucrim 2/2016, p. 76), other IT companies have joined it, including Instagram, Google+, Snapchat, Dailymotion, Jeuxvideo.com, and TikTok. LinkedIn joined on 24 June 2021.

Each monitoring exercise was carried out following a commonly agreed methodology, which makes it possible to compare the results over time. The sixth exercise was carried out over a period of six weeks (from 1 March to 14 April 2021) by 35 organisations, which reported on the outcomes of a total sample of 4,543 notifications from 22 Member States. The report indicates that, although the average share of notifications reviewed within 24 hours remains high (81%), it has decreased compared to 2020 (90.4%); the average removal rate was also lower than in 2019 and 2020.

Regarding the assessment of notifications of illegal hate speech, the Code of Conduct prescribes that the majority of notifications be assessed within 24 hours. The report noted that the IT companies assessed 81% of the notifications in less than 24 hours and a further 10.1% in less than 48 hours; 8.1% were assessed in less than a week, and 0.8% took more than a week.

Compared to 2019 and 2020, the IT companies had a lower removal rate for notified content: 62.5% of the content notified to them was removed, while 37.5% remained online. The report notes that removal rates varied depending on the severity of the hateful content. On average, 69% of content calling for murder or violence against specific groups was removed, while content using defamatory words or pictures to name certain groups was removed in 55% of cases. Twitter and Instagram improved their removal rates compared to 2020, whereas Facebook and YouTube removed a smaller share of notified content than in 2020.

The IT companies provided feedback on notifications in fewer cases than in the previous monitoring exercise (60.3%, down from 67.1%). The most commonly reported grounds for hate speech in this monitoring exercise were sexual orientation (18.2%) and xenophobia, including anti-migrant hatred (18%), followed by anti-gypsyism (12.5%).

In conclusion, the Commission calls upon the IT companies to reinforce their dialogue with trusted flaggers and civil society organisations in order to address the gaps in reviewing notifications, taking action, and providing feedback to users. The Commission also advocates more binding rules on these matters, as foreseen in the proposed Digital Services Act (→ eucrim 4/2020, pp. 273-274).
