Jantina Tammes School of Digital Society, Technology and AI, part of the University of Groningen

AI ADRIFT: Between Regulation and Implementation of AI in Criminal Procedure

Report by Dr. Radina (Adi) Stoykova1
07 April 2026

Artificial intelligence is no longer a distant prospect for criminal justice — it is already processing intercepted communications, summarising witness interviews, predicting recidivism, and authenticating evidence in court. Yet current AI systems cannot explain their outputs, be cross-examined, or produce reliable results across different cases. European and national criminal procedures are designed for humans, not algorithms, while AI regulation tends to focus on technological aspects rather than fair trial principles. On 16 March 2026, the University of Groningen brought together legal scholars, law enforcement, data scientists, and policymakers to ask the question that matters most: how can AI systems contribute to the protection of human rights and the reliability of digital evidence in criminal procedure?

Regulation: Accountability Tools or Modernized Criminal Law? 

The adequacy of the current legislative framework depends on how well the AI Act2 and the Law Enforcement Directive (LED)3 are integrated into criminal procedure. There is little research on whether law enforcement authorities utilise the existing accountability tools under the data protection regime – such as the prohibition of automated decision-making (Art. 11 LED), the right to notification (Art. 13 LED), and data protection by design (Art. 20 LED) – or on how these tools can be implemented for AI technology. The AI Act itself acknowledges that the use of AI in criminal procedure can introduce ‘a significant degree of power imbalance’ (rec. 59 AI Act), yet permits broad AI police powers for biometric identification, emotion recognition, profiling, and offending and re-offending risk assessment, which are not necessarily in compliance with the data protection regime. The AI Act's right to explanation (Art. 86) may also prove insufficient as a safeguard in criminal proceedings, where the rights to challenge and cross-examine evidence are required. Bridging these gaps will ultimately fall to national criminal law reforms.

At the institutional level, Europol's growing role as a European AI hub for operational and forensic purposes raises further questions about mandate, capacity, and data governance. It also raises the question of the AI Act's impact on national criminal justice systems, cross-border cooperation in investigations, and the fair trial principle.

Implementation: Operational Needs vs. Legal Gaps

Law enforcement practitioners consider AI an operational necessity given the volumes of data involved in modern investigations. Yet the AI Act is widely seen as insufficiently tailored to the law enforcement context and disconnected from the LED. Criminal procedure is better suited to address accountability concerns, but has yet to be updated to regulate AI technology.

Several practical tensions can be highlighted. First, there is no clear legal basis for processing personal data to train AI tools for law enforcement purposes. An international group of law enforcement experts has proposed recognising ‘scientific research for law enforcement purposes’ as a distinct basis – requiring judicial authorisation and strict separation from operational use – for inclusion in the Omnibus proposal. Second, investigative data is by nature messy, unstructured, and inconsistently labelled across agencies, which hinders the development of high-quality datasets for AI. Efficient training currently depends on publicly scraped data, yet the AI Act bans only ‘untargeted scraping of facial images’ (Art. 5(1)(e)) without providing clear rules on other types of scraping or their compliance with data protection rules. Synthetic and historic datasets, meanwhile, are insufficiently representative and risk reinforcing historical biases – producing problems even in technically accurate systems. Third, transferring investigative and forensic expertise into AI models remains underdeveloped, while plain data alone is insufficient.

Beyond data, there is the question of human control. When officers default to following AI  outputs, their role shrinks from decision-maker to rubber-stamper. Even low-risk systems — such as case summary tools — can subtly influence investigators to focus on particular facts or  suspects. Risk classifications that seem appropriate at deployment can drift toward high-risk or  prohibited territory through accumulated use, without anyone noticing. This calls for epistemic  control: ensuring officers can understand and challenge AI outputs, and applying high-risk  safeguards even to seemingly low-risk tasks. It also illustrates the value of multidisciplinary  teams in developing AI tools for law enforcement. 

Evidence: When Algorithms Enter the Courtroom 

A final major concern is to avoid a transfer of epistemic authority from judges and law  enforcement to algorithms. 

Recidivism risk prediction tools illustrate the problem clearly. Tools like Catalonia's RisCanvi are designed for use by expert teams trained to interpret them, yet in practice they reach judges and prosecutors who lack that context. The mismatch undermines the tool's utility even where its predictions are statistically more accurate than human assessments. The fact that RisCanvi has been in use since 2009 is a reminder that AI Act compliance is not only a forward-looking challenge – many legacy systems already embedded in criminal justice will equally need to undergo audits, meet transparency requirements, and pass procedural checks.

Deepfake detection in evidence procedure presents a different but equally serious challenge. Traditional image and video forensics such as metadata analysis fail because social media compression strips identifying features, making AI-based detection necessary. In the evidence context, AI deepfake detectors might fall under the high-risk category (Art. 6(2) in conj. Annex III (6)(c)), while deepfake generation currently remains a low-risk system. This is particularly problematic since AI detectors rely on noise patterns and must be trained on fakes produced by the specific generators they aim to detect – creating a critical dependency between detector and generator. These models perform well but are notoriously difficult to explain and validate.
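To make the idea of ‘noise patterns’ concrete, the toy sketch below computes a high-frequency residual by subtracting each pixel's local neighbourhood mean. The function and the 3x3 neighbourhood are purely illustrative choices, a crude stand-in for the learned high-pass filters real detectors apply, and do not represent any particular tool's method.

```python
# Illustrative sketch only: many deepfake detectors inspect high-frequency
# "noise residuals" rather than image content. This toy function (a crude
# stand-in for learned high-pass filters) subtracts each pixel's 3x3
# neighbourhood mean from the pixel itself.
def noise_residual(image):
    """image: 2-D list of greyscale values; returns a residual of the same shape."""
    h, w = len(image), len(image[0])
    residual = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighbourhood, clamped at the image borders
            vals = [image[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            residual[y][x] = image[y][x] - sum(vals) / len(vals)
    return residual

# Smooth regions leave a zero residual; generator artefacts would not.
flat = [[10.0] * 4 for _ in range(4)]
print(max(abs(v) for row in noise_residual(flat) for v in row))  # prints 0.0
```

The generator dependency noted above shows up exactly here: a detector trained on one generator's residual statistics may not recognise the artefacts left by another.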

Visualising decision features through tools like heatmaps can help address the explainability gap. Crucially, understanding when a detector fails matters as much as when it succeeds: falsely labelling a deepfake as real can be detrimental in criminal proceedings. The harms of deepfakes are more widespread than the AI Act or judicial authorities recognise. The judicial system needs frameworks to evaluate the reliability of such detection methods before deepfake evidence claims overwhelm it.
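As a minimal illustration of the heatmap idea, the helper below rescales an arbitrary grid of per-region relevance scores into [0, 1] so it could be rendered as an overlay on the examined image. Real explainability tools (Grad-CAM and similar) derive such scores from model internals; this sketch assumes they are given.

```python
# Minimal sketch of heatmap-style visualisation: rescale a grid of per-region
# relevance scores to [0, 1] for rendering as an overlay. The scores are
# assumed inputs here; real tools compute them from model gradients.
def to_heatmap(scores):
    flat = [v for row in scores for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # uniform evidence: nothing to highlight
        return [[0.0 for _ in row] for row in scores]
    return [[(v - lo) / (hi - lo) for v in row] for row in scores]

print(to_heatmap([[0.0, 2.0], [1.0, 4.0]]))  # prints [[0.0, 0.5], [0.25, 1.0]]
```

The normalisation step is where failure analysis matters: a heatmap that highlights nothing (uniform scores) tells the court the detector found no localisable evidence, which is itself relevant information.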

In the US context, the new Federal Rule of Evidence 707 on machine-generated evidence codifies an existing expert evidence standard, requiring not only that a method be tested, but that its reliable application in each particular case be assessed. The rule is broad, however, and provides no specific guidance on machine learning technology – leaving significant gaps.

Conclusion 

Where, in the words of Damaska, forensic science once required expert witnesses to translate the silent testimony of instruments4, AI introduces a deeper problem: the instrument no longer merely measures – it prioritizes, predicts, and classifies. Common sense and conventional standards of proof now compete with algorithmic outputs in establishing factual questions for legal decisions. The probative significance of a recidivism score, a deepfake detection result, or an AI-generated case summary cannot be assessed the way a fingerprint match or DNA profile can. The logic is buried in layers of training data, model architecture, and statistical inference that resist plain explanation even for those who built the system.

The tensions discussed — between investigative efficiency and fundamental rights, between  predictive power and equal treatment, between the pace of technological deployment and the  pace of legal adaptation — are genuine dilemmas. Moving forward requires clear legal bases,  operational rules on dataset use and reuse, and meaningful judicial oversight for large-scale  data processing in law enforcement. 

1 This report is a reflection on the Seminar ‘Regulation of AI in Criminal Justice’, which was organized by Dr. Stoykova and sponsored by the Jantina Tammes School of Digital Society, Technology and AI, with participants: Prof. Paul de Hert (Vrije Universiteit Brussels), Daniel Drewer (Europol), Donatella Casaburo (CiTiP Belgium), Laura Peters (RUG), Anana Postoaca (Swedish Police Authority), Kelly Vinck (Netherlands Police), Carlos Castillo (Universitat Pompeu Fabra, Spain), Lea Uhlenbrock (FAU, Germany), Qin (Sky) Ma (MPI, Germany).

2 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance), OJ L, 2024/1689.

3 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, OJ L 119/89.

4 Mirjan Damaska, Evidence Law Adrift (Yale University Press 2008) 143–144.

Last modified: 07 April 2026, 12.01 p.m.