STeP Talks 2026

13 February 2026
Blowing the whistle: Is there a “right to warn” about advanced AI risks?
The development of advanced AI models and systems involves high potential impact, complex technical detail, and private-sector progress shielded by secrecy and non-disclosure agreements. In certain instances, whistleblowers’ disclosures have led to the timely identification of serious AI safety-related risks and harms. Current and former employees at frontier AI labs have issued public warnings, highlighting the urgent need for robust protections to ensure accountability. In particular, they advocate for a “right to warn” about AI risks without retaliation. This underscores the pressing need, on the one hand, to nurture and exercise professional ethical responsibility and, on the other, to establish a normative framework that supports this responsibility in the name of the public interest.
In this talk, Mando Rachovitsa discusses the following questions/issues:
- Do AI researchers have an ethical responsibility to warn about AI risks and to credibly attest to their employers’ claims about such risks in AI R&D?
- What is the rationale underpinning the value of whistleblower protection in connection with AI safety? AI safety scholars and practitioners highlight whistleblower protection as part of advanced AI risk governance and risk management, and as a mechanism for verifying AI developers’ and deployers’ transparency about AI risks.
- Do AI companies and research labs (e.g., OpenAI, Anthropic, Google DeepMind, Microsoft and Meta) have whistleblowing policies and, if so, what do these policies provide?
- Does the EU regulatory framework address whistleblowing concerns in AI R&D? We will explore the relevance of the EU AI Act and the Code of Practice for General-Purpose AI Models (Safety and Security Chapter), the EU Whistleblowing Directive and the AI Whistleblowing Channel recently launched by the EU AI Office, and the recent California Senate Bill No. 53 (Transparency in Frontier Artificial Intelligence Act).
Join us for this STeP Talk on 13 February from 15.00 to 16.30, online or on-site in Groningen (Bakker-Nortzaal, ground floor, Röling Building, Oude Boteringestraat 18), by registering here!

Mando Rachovitsa
Mando Rachovitsa is an Associate Professor in Human Rights Law at the School of Law, University of Nottingham. She is also the Deputy Director of the Human Rights Law Centre. Her expertise lies in the area of Human Rights and Technology Law. Mando has written on the human rights assessment of the use of new technologies, including encryption, digital ID systems, and how human rights law may inform the design and implementation of Internet standards. Her latest research focuses on the intersection of human rights law and technologies, including advanced AI and AI safety.

Ida Varošanec
Ida Varošanec works as an Assistant Professor of Technology Law at the Faculty of Law, University of Groningen. Her research focuses on AI regulation, IP law, transparency, technology law, and public policy. She will be moderating this STeP Talk.
Each month, the Security, Technology and ePrivacy research group organizes an exciting workshop on a wide variety of topics related to tech law. Together with a speaker from the STeP research group, an invited speaker from a different institution will shed light on current developments in the field of law and technology. We will cover a broad range of topics, from the digital aspects of the energy transition to AI-generated art and intellectual property. Join us, in Groningen or online, by registering for the STeP Talks!
We look forward to seeing you there!
For any questions, please contact us via step-talks@step-rug.nl