
Council speech on the AI Network

Date: 20 November 2025

Speech by Marcello Seri, University Council member for the Science Faction

University Council 337


The need for a centralized AI governance structure within our university is undeniable. As artificial intelligence and the legal framework around it continue to develop, it is important to establish a coherent process to guide and coordinate our efforts. It is therefore natural to think of a centralized point of governance for the use of AI across all faculties, aimed at coordinating efforts and ensuring consistency, clarity, and compliance in our initiatives.

To this end, we do need a governance officer and a team of experts, chosen for their expertise and, crucially, for their ability to collaborate across disciplines, acting as bridges across ethical, legal, social, and technical boundaries.

We also need clearly defined policies that are straightforward to implement at the program level. These should be designed to support and enhance our research and educational activities, not to hinder them. They should provide a clear framework within which our faculty and students can operate, ensuring that we are all working towards the same goals and standards.

This should be accompanied by expert technical support and, importantly, free AI literacy for staff and students. We appreciate that the latter is already well integrated as part of the initiative.

While we are thankful for the updates that were sent on Friday after the technical meetings, the proposed governance structure still has several weak points that should be addressed and worked out in the proposal if we are to achieve our objectives. These were discussed thoroughly with the proposal's proponents on Tuesday, but I would like to spell them out in more detail here as well.

First, with centralization and new administrative structures there is a risk, if not a tradition, of creating new, unnecessary, and often heavy bureaucracy: collections of forms and red tape that overprotect the administration while getting in the way of researchers and educators, at times duplicating efforts and information. This must be absolutely avoided. A prime way to make sure this does not happen is to keep a direct and continuous open line of feedback with the users of the office. We will come back to this later, as feedback evaluation and accountability are important issues in their own right. In this respect, the AI Council should be, from the beginning, an integral part of the design of the policy and of the center itself, to guarantee that the execution is aligned with our strategic goals and with our needs. What better sounding board than the people dealing with the relevant struggles on a daily basis?

Second. "AI" is an overinflated term, and at our university there are many different types of AI and machine learning that are not part of generative AI. At the same time, and understandably, a big focus of this proposal is on the latter. We should make absolutely sure that AI researchers from the first category (machine learning outside the current hype around generative AI) participate in the oversight of the AI Council, and ensure that policies and processes do not act as one-size-fits-all regulation but clearly target what is needed while leaving everyone else in peace. I think this should be explicitly stated in the document.

Third, the AI Council should include expert staff members at different seniority levels and from the different relevant faculties and services. This diversity is crucial for ensuring that all challenges and opportunities are carefully considered. We would go as far as saying that the Council should have the authority to approve major decisions and to ensure accountability and transparency. Without this authority, the Council risks becoming a toothless body, unable to effect real change or provide meaningful oversight.

Coming back to the feedback: to ensure agility and adaptability, there should be regular reviews and updates of policies and practices. Not only to keep up with technological advancements, but, more importantly, to ensure that our governance structure remains effective and does not consolidate into a rigid, unresponsive, bureaucratic body.

The evaluations should include anonymous feedback from all researchers who have interacted or are interacting with the center, providing a comprehensive understanding of what is working and what is not, and informing the necessary adjustments. We would suggest conducting evaluations under the oversight of the Council, on an annual basis at first, using clearly defined metrics: for instance, the number of AI projects implemented, stakeholder satisfaction, and compliance rates. But these are just examples; the AI Council would know better what makes sense here.

A last worry concerns the budget and its distribution. Even in the minimal scenario of the initial research plan, the majority of the costs come from tooling and infrastructure, not from administrative positions. Yet the current plan contains no clear strategy for establishing the funding streams needed to support these costs, now or in the future. Without such a plan, we risk creating a center that offers expertise but not the actual tools that faculties need, and that requires a constant stream of external funding even for small teaching-related experimentation. It should be an explicit part of the plan to periodically reevaluate opportunities to provide structural funding in this direction.

Similarly, the expertise from JTS, BI, NLP, and TAG should be recognized, used, and valorized: properly rewarded or accounted for in the budget. Rewarded not in kind at the faculties' expense, but as actual administrative, scientific, or technical effort. Let's make it count as actual work. This is particularly important for the people joining the Council. We have plenty of researchers with real-life expertise in the AI Act and other regulations at national and international levels. We should not forget them when we move to the implementation phase, and we should give them reasons to be actively involved.

We know these are a lot of points. But that is also why our recommendation is to give us some time to integrate them with other comments from the council, forward them to you, and await an update that takes them into account before giving our advice on the proposal, postponing this discussion to the next council meeting.

Finally, given the financial figures and strategic implications, we should carefully evaluate whether it would be better to put the proposal up for consent instead of advice.

Tags: Speech, AI, Governance