University of Groningen - Zernike Institute (ZIAM) News

Advent calendar - December 21st - Madison Cotteret

21 December 2025

In the Zernike Institute Advent Calendar, we are presenting 24 short spotlights in December. In these specials, we highlight PhD students, postdocs, support staff and technicians of our research groups and teams, providing a glimpse into their typical day at work. In Episode 21, meet Madison Cotteret, postdoc in the Bio-Inspired Circuits and Systems group of Prof. Elisabetta Chicca and part of CogniGron.

Madison Cotteret

Like many researchers within CogniGron, I believe that taking closer inspiration from the brain may be the key to enabling AI that is not only more efficient, but more capable than the state of the art. Rather than training deep artificial neural networks (ANNs) with gradient descent, the so-called neuromorphic approach is to study networks of time-continuous neurons which communicate only very sparsely in time, known as spiking neural networks (SNNs).
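For readers unfamiliar with spiking neurons, the "time-continuous, sparsely communicating" dynamics mentioned above can be sketched with a toy leaky integrate-and-fire (LIF) model. This is a generic textbook model; the parameter values below are arbitrary illustrative choices, not those of any particular neuromorphic system:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy illustration of
# time-continuous membrane dynamics with sparse spike output.
# All parameter values are arbitrary choices for illustration.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    v = 0.0          # membrane potential
    spikes = []
    for i_in in input_current:
        v += (dt / tau) * (-v + i_in)   # leaky integration of the input
        if v >= v_thresh:               # threshold crossing: emit a spike
            spikes.append(1)
            v = v_reset                 # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant drive produces only occasional spikes: output is sparse in time.
spike_train = simulate_lif([1.5] * 1000)
print(sum(spike_train), "spikes in", len(spike_train), "timesteps")
```

Even though the input is constant at every timestep, the neuron communicates only at the rare moments its membrane potential crosses threshold, which is what makes spiking communication sparse.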

However, I’m lucky in that I can conduct research relevant to both ANNs and neuromorphic computing by studying a shared topic: neurosymbolic methods. From the deep learning side, there is growing evidence that LLMs fail to efficiently apply learned symbolic relationships to new domains. One need only ask a colleague for their favourite example of ChatGPT failing at supposedly solved tasks. Not long ago, the then-billion-parameter models could reliably add together 2- or 3-digit integers (AGI achieved?), but not any longer (perhaps not).

Neuromorphic computing has more immediate issues to address, however. While we’ve become pretty good as a field at designing chips with millions of biologically-plausible neurons and synapses, training such large SNNs to perform useful functions remains a headache, even before contending with nonidealities such as noise and parameter mismatch. Neurosymbolic approaches offer us a way forward, by bridging high-level symbolic algorithms with their implementation in spiking neurons. More on that here.

My day(s) thus consist of trying to understand what computation can be efficiently performed using neuro-vector-symbolic methods. One very large outstanding challenge, for example, is how to meaningfully integrate them with learning. Then, coming up with ways to map these networks to spiking neuromorphic hardware, such as Intel’s digital SNN chip Loihi (proudly brandished in the picture), or our group’s home-grown analogue SNN chip Texel (suggestively placed upon desk).
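As a rough illustration of what "vector-symbolic" means in the abstract (independent of any mapping onto Loihi or Texel), here is a toy binding-and-unbinding sketch with random bipolar hypervectors. The symbol names and dimensionality are made up for illustration:

```python
import random

random.seed(0)   # fixed seed so the toy example is reproducible
D = 10_000       # hypervector dimensionality (illustrative choice)

def random_hv():
    """A random bipolar hypervector, representing one symbol."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Elementwise multiplication binds two symbols into a role-filler pair."""
    return [x * y for x, y in zip(a, b)]

def similarity(a, b):
    """Normalised dot product: ~1 for identical vectors, ~0 for unrelated ones."""
    return sum(x * y for x, y in zip(a, b)) / D

colour, red = random_hv(), random_hv()
pair = bind(colour, red)           # encode the pair colour = red
recovered = bind(pair, colour)     # binding is its own inverse for bipolar HVs
print(similarity(recovered, red))  # close to 1.0: "red" is recovered
print(similarity(pair, red))       # close to 0.0: the bound pair hides "red"
```

The appeal of such representations for neuromorphic hardware is that every operation is a simple elementwise or inner-product computation, tolerant of the noise and mismatch mentioned above, since similarity degrades gracefully rather than catastrophically.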

At lunchtime, I can often be found partaking in a civilised yet robust scientific discussion about the latest posting to the papers_please group chat, which every research group could benefit from. Afterwards, you may even hear the shuffling of playing cards, as we delve into a fresh round of Sheep Showdown, a board game developed by our very own technician, available in all worthy game stores. Past 5 pm, however, you’re more likely to find me on the squash court, or beginning an 8-hour Civilisation binge.

See all Advent Calendar items 2025 here!

Last modified: 20 December 2025, 12.48 a.m.