09:30  Registration

10:00  Welcome Address by Ramses Wessel, Vice Dean and Professor of European Law, University of Groningen, and Radina (Adi) Stoykova, Assistant Professor in Technology Law, University of Groningen

10:00–12:00  Panel 1: Artificial Intelligence in Criminal Justice
Chair: Glen Thodé, Assistant Professor of Criminal Law, University of Groningen

- AI in Law Enforcement: Academic Alarm and Regulatory Solutions, by Paul de Hert, Professor of Law, Vrije Universiteit Brussels

- AI in Blue: Europol's Approach to AI Regulation, by Daniel Drewer, Data Protection Officer, Europol
  Europol's organizational approach to AI implementation, highlighting key issues with AI in the heavily regulated law enforcement context. Topics include the use of machine learning for processing Sky.ecc/EncroChat intercepts, the adoption of an LLM office assistant, and AI in forensic analysis.

- Responsible AI in Criminal Justice? Let's First Talk About Data, by Donatella Casaburo, CiTiP, Belgium
  This presentation addresses the completeness of the datasets used to train, test, and validate AI systems in criminal justice, and the legal and operational impacts of using incomplete datasets.

12:00–12:30  Poster pitches

12:30–13:30  Lunch

13:30–15:00  Panel 2: Artificial Intelligence in Law Enforcement
Chair: Xavier Tracol, Senior Legal Officer, Eurojust

- AIWitness: Designing Responsible AI for Witness Interviews, by Dr. Laura Peters, Associate Professor in Criminal Law, University of Groningen
  AIWitness explores how AI can be designed to support witness interviews in criminal investigations. The project, a collaboration between the University of Groningen, Capgemini, and ScottyAI, aims to enhance the speed, consistency, and qualitative value of witness interviews while safeguarding reliability and fundamental rights. AIWitness brings together legal, behavioural, societal, and technical expertise from academia and practice within a multidisciplinary development framework. This presentation introduces the project, its objectives, and the methodological approach guiding the development of an AI-supported interview environment in the context of criminal procedure.

- The Human in the Loop, or in the Shadow? Compliance-by-Design and the Threshold of High-Risk AI, by Anana Postoaca, Swedish Police Authority
  Who really decides in modern policing: the officer or the system? This talk follows a quiet fault line in the AI Act: high-risk classification can hinge not only on what an AI system does, but on whether human oversight is truly possible or only appears to be. Through biometrics, investigative transcription, and AI-generated case summaries, it examines how transparency-by-design, XAI, and AI literacy determine whether humans can genuinely question and override AI outputs. The takeaway for deployers is unsettling: even AI systems classified as low-risk may need AI Act-style safeguards from the start to keep their intended use from drifting into high-risk in practice.

- Navigating the Legal Foundations for Training and Validating AI Systems, by Kelly Vink, Netherlands Police
  AI systems are revolutionizing policing and are increasingly becoming a necessary instrument for analyzing large amounts of seized data and the complex criminal structures that themselves use AI systems. But with great power comes great responsibility: how do we ensure that these AI systems are trained and validated on the correct legal grounds? This presentation first addresses the necessity of training and validating AI systems with representative datasets. It then zooms in on how data scientists generally obtain these datasets, and concludes with the legal challenges involved and how the Netherlands Police intends to address them.

15:00–15:15  Coffee break

15:15–17:30  Panel 3: AI for Evidence: Case Studies
Chair: Rolf Hoving, Professor by Special Appointment of Supervision of Governmental Criminal Proceedings

- Recidivism Risk Prediction in Catalonia: Context, Accuracy, and Equity, by Carlos Castillo, Professor in Data Science, Universitat Pompeu Fabra, Spain
  This talk summarizes research on the RisCanvi recidivism risk prediction instrument used in Catalonia since 2009, focusing on the extent to which it leads to better decisions, the accuracy of its predictions, and whether it is equally accurate across different subgroups.

- Detecting Deception: Ensuring Authenticity of Image Evidence in the Era of AI, by Lea Uhlenbrock, PhD Candidate, Friedrich Alexander University, Germany
  Deepfakes have rapidly advanced in quality and are beginning to affect criminal investigations and court proceedings. This presentation examines current technical challenges in deepfake detection, including generalisability, reliability, and explainability, and how AI-based detection methods can be responsibly integrated into criminal justice systems under evolving European AI regulation.

- AI-Mediated Evidence, Admissibility, and the Crisis of Epistemic Authority, by Dr. Qin (Sky) Ma, Post-doctoral Researcher, Max Planck Institute for the Study of Crime, Security and Law, Germany
  This paper examines AI-mediated evidence through recent doctrinal reform efforts, focusing on proposed Federal Rules of Evidence 707 and 901(c). It argues that current reforms remain constrained by formalist logic inherited from Daubert, treating AI systems as neutral evidentiary tools while obscuring a deeper transformation of fact-finding. The paper shows how epistemic authority increasingly shifts from judicial decision-makers to opaque algorithmic processes, challenging the court's institutional role as the authoritative arbiter of truth.

17:30  Concluding remarks