CogniGron@Work Blog

From Bricks to Blueprints: Building the Structure Neuromorphic Computing Needs 

Date: 9 December 2025
Dr. Madison Cotteret

Building neuromorphic computing architectures requires many disciplines working together: new hardware components, methods for local computation, new materials, spiking neurons, and more. An outstanding fundamental challenge, however, is how these components should be put together to solve larger tasks. As Dr. Madison Cotteret, a CogniGron researcher in the Bio-Inspired Circuits and Systems group, explains, what works at the level of a few neurons does not automatically scale to billions. Imagine trying to build a house while everyone is perfecting cement or designing new types of bricks, but no one has yet drawn the plans. We have remarkable ingredients; the missing piece is the blueprint for combining them into larger systems.

Why not just use deep learning? 

Artificial intelligence has made huge progress through deep learning, so why not apply the same methods here? The problem is that they do not carry over easily to spiking neural networks. Training is computationally expensive, and the algorithms are not biologically plausible: they do not reflect how real brains learn. Neuromorphic chips also bring their own difficulties. They are inherently noisy and difficult to model, so trying to optimize them directly is messy and unreliable. Instead of starting with low-level wiring, Madison argues, we need new mathematical approaches that provide structure at a higher level.

The vector-symbolic shortcut 

This is where vector-symbolic architectures (VSAs) come in. VSAs represent symbols and their relationships as high-dimensional vectors. The idea is to start with abstract, symbolic structure and then translate it into the activity of large populations of neurons. Think of neurons as the raw material. VSAs provide a way to arrange them so that meaning emerges. For example, the classic binding problem—how to represent that one feature belongs to another—can be handled in VSAs by combining vectors in ways that keep track of structure. A colour and a shape do not get confused, because their vectors can be bound together and later unpacked. 
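To make binding concrete, here is a minimal sketch in Python using the MAP scheme (bipolar vectors, element-wise multiplication for binding), one common VSA family; the dimensionality and the red/circle codebook are illustrative choices, not details taken from the research itself.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimension: independent random vectors are nearly orthogonal

def rand_vec():
    """A random bipolar hypervector representing an atomic symbol."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

red, blue, circle, square = (rand_vec() for _ in range(4))

# Binding by element-wise multiplication; bundling by addition.
# "A red circle and a blue square", packed into one vector:
scene = red * circle + blue * square

# Query: which shape is red? Multiplication is self-inverse here,
# so binding with `red` again unbinds it.
query = red * scene
for name, vec in [("circle", circle), ("square", square)]:
    print(name, round(cosine(query, vec), 3))
# circle scores high (about 0.7), square near 0: the features stay separate
```

The leftover cross-term (red bound with the blue-square pair) behaves like random noise in high dimensions, which is why the wrong answer scores near zero and colours and shapes do not get confused.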

Does it work in practice? 

Madison shows that it does. Symbolic structures such as finite state machines, which are fundamental in computer science, can be mapped directly into networks using VSAs and attractor dynamics. These networks can remember states, respond to inputs, and be implemented across different types of neuromorphic hardware without retraining. Another advantage is robustness. Because information is distributed across many neurons, the system can tolerate noise and variability in individual components. This makes it well suited for real hardware, where imperfections are unavoidable. 
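As a rough illustration of the idea (not the paper's actual construction), a whole transition table can be compiled into a single hypervector of bound (state, input, next-state) triples; a cleanup step, played in the real system by the attractor dynamics, snaps the noisy result back to a known state. The turnstile machine below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

def rand_vec():
    return rng.choice([-1, 1], size=D)

# A two-state turnstile: LOCKED/UNLOCKED, with inputs COIN/PUSH.
states = {name: rand_vec() for name in ("LOCKED", "UNLOCKED")}
inputs = {name: rand_vec() for name in ("COIN", "PUSH")}

table = {("LOCKED", "COIN"): "UNLOCKED",
         ("LOCKED", "PUSH"): "LOCKED",
         ("UNLOCKED", "COIN"): "UNLOCKED",
         ("UNLOCKED", "PUSH"): "LOCKED"}

# Compile the whole table into ONE vector: a superposition of bound triples.
T = sum(states[s] * inputs[i] * states[n] for (s, i), n in table.items())

def cleanup(noisy):
    """Snap a noisy vector onto the closest stored state.
    (In the attractor-network version, recurrent dynamics do this.)"""
    return max(states, key=lambda name: states[name] @ noisy)

def step(state_name, input_name):
    # Unbinding with the current (state, input) pair leaves the next
    # state plus cross-terms that look like noise.
    noisy_next = T * (states[state_name] * inputs[input_name])
    return cleanup(noisy_next)

s = "LOCKED"
for symbol in ["PUSH", "COIN", "PUSH", "PUSH"]:
    s = step(s, symbol)
    print(symbol, "->", s)   # LOCKED, UNLOCKED, LOCKED, LOCKED
```

Because every state is spread across thousands of components, flipping a fraction of them barely moves the dot products in the cleanup step, which is the distributed robustness the paragraph above describes.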

From discrete to continuous thought 

Brains do not only work with discrete states. They also compute with continuous variables such as space, time, and evidence. Madison’s research extends the same vector approach to continuous structures by embedding smooth manifolds, like circles or spheres, into networks. This makes it possible to represent processes such as spatial reasoning in a principled way. 
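One way to embed a circle, sketched below under the assumption of fractional power encoding with complex phasor vectors (the dimension and frequency range are illustrative), is to give each vector component a random integer frequency: the encoding is then periodic in the angle, similarity falls off smoothly with angular distance, and binding two encodings adds their angles.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 2048

# Integer frequencies make the encoding periodic: enc(theta + 2*pi) == enc(theta),
# so the set of encodings traces out a circle embedded in C^D.
freqs = rng.integers(-10, 11, size=D)

def encode(theta):
    """Phasor hypervector for an angle on the circle."""
    return np.exp(1j * freqs * theta)

def similarity(a, b):
    return (a @ b.conj()).real / D

ref = encode(0.0)
for theta in [0.0, 0.1, 0.5, 1.5, np.pi, 2 * np.pi]:
    print(f"theta={theta:.2f}  sim={similarity(encode(theta), ref):+.3f}")
# Similarity decays smoothly away from theta=0 and returns to 1.0 at
# 2*pi: the representation "knows" it lives on a circle.

# Binding (element-wise multiplication) adds angles, i.e. moves along the manifold:
assert np.allclose(encode(0.3) * encode(0.4), encode(0.7))
```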

The big picture: architecting the future 

Madison’s work does not claim to have solved general neuromorphic computation. Instead, it provides one of the first general frameworks for designing large-scale spiking systems in a way that remains scalable and hardware-friendly. By moving from wiring neurons one by one to working with symbolic structures that can be compiled into neural form, VSAs point toward a path where our bricks can finally be assembled into larger architectures.


We have strong building materials. Now we are learning how to design the house.
