A key function of brains is the abstraction and maintenance of information from the environment for later use. Neurons in association cortex play an important role in this process: during learning these neurons become tuned to relevant features, and they represent information that is required later as a persistent elevation of their activity. It is, however, not well understood how these neurons acquire their task-relevant tuning.
Here we present a biologically plausible neural network model based on reinforcement learning that explains how neurons learn to represent task-relevant information in delayed response tasks. The model generalizes the Attention-Gated Reinforcement Learning (AGREL) model of Roelfsema and van Ooyen (2005) to the temporal domain: an attention-based feedback signal from the motor layer to earlier processing layers is combined with a novel memory mechanism to solve the structural and temporal credit-assignment problems. We show that, on average, the resulting weight updates are equivalent to those of a variant of the error-backpropagation algorithm.
The model can explain how neurons in lateral intraparietal cortex (LIP) learn to represent task-relevant information in 1) a memory (anti)saccade task, 2) an orientation discrimination task and 3) a probabilistic classification task. Comparisons with experimental results from animals trained on these same tasks show that the model neurons learn representations that are similar to those observed in biological neurons.
This is joint work with Pieter Roelfsema and Sander Bohte.