
Transparency and secrecy as a spectrum in AI-assisted trustworthy public decision-making

EU Legislator Walking a Tightrope
PhD ceremony: I. (Ida) Varosanec, LLM
When: December 19, 2024
Start: 11:00
Supervisors: prof. G.P. (Jeanne) Mifsud Bonnici, prof. dr. S.H. Ranchordás
Co-supervisor: dr. N.E. (Nynke) Vellinga
Where: Academy building RUG
Faculty: Law

The legislator often walks a tightrope. This PhD dissertation explores the complexity of balancing transparency and secrecy in the use of artificial intelligence (AI) systems within the public sector, particularly when these technologies affect individual rights and freedoms. AI can make public services more efficient, but when decisions are made by or with the help of algorithms, citizens and oversight bodies often struggle to understand the rationale behind them, due to secrecy concerns linked to intellectual property (IP) and confidentiality.

Although transparency is the currency of trust, trust necessitates a dose of secrecy. This research examines how current EU laws applicable to AI try to reconcile transparency – needed for accountability and public trust – with the confidentiality that protects proprietary AI technologies. It introduces a transparency-secrecy spectrum, where transparency fosters trust but cannot always be absolute due to privacy, security, and business interests. The study shows that whereas laws like the EU AI Act strive to ensure ‘sufficient transparency’, they fall short of fully balancing these needs, leaving public bodies with limited guidance on what should be disclosed.

Through an analysis of legal frameworks, the dissertation proposes that transparency in AI could be enhanced with technical tools (e.g. explainability methods), tailored laws, and ethical standards. Such an approach could help public authorities use AI responsibly, maintaining public trust while respecting the confidentiality required by AI developers. The findings suggest that a holistic approach, combining transparency and secrecy, can support ethical and trustworthy AI in governance.