Commission Launches AI Act Whistleblower Tool
At the end of November 2025, the European Commission launched a dedicated whistleblower tool to support the enforcement of the EU Artificial Intelligence Act. The secure reporting channel was set up within the European AI Office, the Commission’s centre of expertise for AI governance, and allows individuals to report suspected breaches of the AI Act confidentially and, if desired, anonymously.
The tool is designed for individuals professionally connected to providers of general-purpose AI models or certain AI systems, including employees, contractors, shareholders, and members of management bodies. Reports can be submitted in any official EU language and supported by relevant documentation. A secure inbox system enables two-way communication with the AI Office while preserving the reporter's anonymity.
According to the Commission, the tool aims to facilitate the early detection of potential violations that could endanger fundamental rights, health, safety, or public trust. The AI Office has committed to strict confidentiality standards, including certified encryption mechanisms and restricted internal access to reports. Whistleblowers receive confirmation of receipt within seven working days and are to be informed within fourteen working days whether the AI Office is competent to handle the case. Feedback on follow-up measures is to be provided within three months or, in exceptional circumstances, six months.
The Commission clarified that, until 2 August 2026, legal protection against retaliation under the EU Whistleblower Directive will not automatically apply to reports concerning infringements of the AI Act. During this interim period, confidentiality serves as the primary safeguard. From that date onwards, reports relating to AI Act breaches will fall within the Directive's scope. In certain cases involving product safety, consumer protection, privacy, or information security, whistleblowers can already benefit from existing protection under EU law.
The launch of the tool was presented as part of the broader implementation of the AI Act, which seeks to promote trustworthy AI while addressing systemic risks associated with high-risk AI systems and general-purpose AI models. The measure strengthens the enforcement architecture by providing an additional channel for detecting non-compliance within the emerging EU AI governance framework.