
Classification of event-based spatio-temporal patterns with spiking neural networks

Type and duration:

Bachelor project, 2-4 months.


Humans are able to capture sound and spot words in it thanks to a very important organ in the ear: the cochlea. The cochlea mechanically decomposes sound into its frequency components and communicates the amplitude of each frequency to the auditory nerve (much like a Fourier transform!) using spikes. However, how meaning is extracted from this representation is still unclear. One possibility is that the brain uses the delays between different frequency channels to decode the temporal pattern of the sound. In this project, we will use the Time Difference Encoder (TDE) [1] to extract the delays between frequency channels, and then use a Tempotron to classify the resulting spatio-temporal patterns.
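The delay-extraction idea can be sketched with a toy TDE: a facilitatory spike on one frequency channel opens an exponentially decaying gain window, and a later trigger spike on another channel produces an output whose magnitude encodes the inter-channel delay (shorter delay, larger output). The function name, parameters, and dynamics below are illustrative simplifications, not the exact neuron model of Milde et al. [1]:

```python
import numpy as np

def tde_response(fac_times, trig_times, tau=5.0, gain=10.0):
    """Toy Time Difference Encoder (TDE) sketch.

    A facilitatory spike opens a gain window that decays with time
    constant `tau`; each trigger spike reads out the remaining gain,
    so the output magnitude encodes the delay between the channels.
    All names and parameter values are illustrative assumptions.
    """
    outputs = []
    for t in trig_times:
        # most recent facilitatory spike preceding this trigger spike
        earlier = [f for f in fac_times if f <= t]
        if not earlier:
            continue  # no facilitation yet: the TDE stays silent
        dt = t - max(earlier)
        outputs.append((t, gain * np.exp(-dt / tau)))
    return outputs
```

Note the direction selectivity: swapping the facilitatory and trigger channels gives a different response, which is why a TDE pair encodes the sign as well as the size of the delay.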


  • Understand how the Tempotron [2] model learns event-based spatio-temporal patterns using the SpikingJelly [3] framework for spiking neural networks.
  • Experiment with the Tempotron on the N-TIDIGITS [4] dataset of spoken digits recorded with an artificial cochlea.
  • Add a TDE layer and quantify its impact on the keyword spotting accuracy while minimizing the number of required TDEs for hardware efficiency.
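As background for the first task, the Tempotron's decision and learning rules [2] can be sketched in plain Python, independently of the SpikingJelly implementation: the neuron sums weighted postsynaptic potentials (PSPs) elicited by input spikes, classifies by whether the voltage crosses a threshold, and on an error nudges each weight in proportion to the PSP it contributed at the time of maximum voltage. All names, parameter values, and the time grid below are illustrative assumptions:

```python
import numpy as np

def psp_kernel(s, tau=15.0, tau_s=3.75):
    """Difference-of-exponentials PSP kernel, normalized to peak at 1."""
    s = np.asarray(s, dtype=float)
    # time of the kernel peak, used to normalize the maximum to 1
    s_peak = (tau * tau_s / (tau - tau_s)) * np.log(tau / tau_s)
    v0 = 1.0 / (np.exp(-s_peak / tau) - np.exp(-s_peak / tau_s))
    return v0 * np.where(s > 0, np.exp(-s / tau) - np.exp(-s / tau_s), 0.0)

def voltage(weights, spike_trains, t):
    """Membrane voltage at time t: weighted sum of PSPs from all input spikes."""
    return sum(w * psp_kernel(t - np.asarray(times)).sum()
               for w, times in zip(weights, spike_trains))

def train_step(weights, spike_trains, label, theta=1.0, lr=0.01):
    """One Tempotron update: on error, adjust weights by the PSP each
    input contributed at the time of the voltage maximum."""
    t_grid = np.arange(0.0, 100.0, 0.5)
    v = [voltage(weights, spike_trains, t) for t in t_grid]
    t_max = t_grid[int(np.argmax(v))]
    fired = max(v) >= theta
    if fired != label:  # misclassified: potentiate (+) or depress (-)
        sign = 1.0 if label else -1.0
        for i, times in enumerate(spike_trains):
            weights[i] += sign * lr * psp_kernel(t_max - np.asarray(times)).sum()
    return weights, fired
```

Because the update uses only the voltage maximum, the Tempotron learns spike-timing-based decisions without needing a teacher spike time, which is what makes it suited to the spatio-temporal patterns produced by the TDE layer.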

Required skills:

Python programming, basic machine learning knowledge, strong interest in computational and cognitive neurosciences.

Contact person:

Lyes Khacef


  1. M. B. Milde, O. J. N. Bertrand, H. Ramachandran, M. Egelhaaf, E. Chicca; Spiking Elementary Motion Detector in Neuromorphic Systems, Neural Computation, 2018.
  2. R. Gütig, H. Sompolinsky; The tempotron: a neuron that learns spike timing–based decisions, Nature Neuroscience, 2006.
  3. SpikingJelly framework, Tempotron documentation.
  4. N-TIDIGITS Cochlea Spikes Dataset.
Last modified: 01 April 2021, 09:15 a.m.