
CogniGron Seminar: Alberto Riminucci (CNR Bologna, Italy) - "Charge and spin transport in molecular spin valves and their application in neuromorphic computing"

When: Fri 10-05-2019, 15:00 - 16:00
Where: 5161.0151 (Bernoulliborg)

Molecular spin valves use molecular materials, such as tris(8-hydroxyquinoline)aluminium (Alq3), to transport spin and charge between two ferromagnetic electrodes that provide a spin-polarized current. The hallmark of a spin valve is that its electrical resistance depends on the relative orientation of the magnetization of the two ferromagnetic electrodes. This change in resistance is known as magnetoresistance.
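The magnetoresistance is conventionally quantified as the relative change in resistance between the antiparallel and parallel magnetization configurations (a standard definition, not specific to this talk):

```latex
\mathrm{MR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}}
```

where R_AP and R_P are the device resistances with the electrode magnetizations antiparallel and parallel, respectively.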

These devices also have a memristive behaviour: their electrical resistance can be changed, in a reversible and non-volatile fashion, by the application of a voltage. Interestingly, there is an interplay between the memristive behaviour and the magnetoresistance. This is intimately linked to the most intriguing property of these devices: they show no spin precession.

Spin precession occurs when electron spins are free to rotate under the action of an applied magnetic field. This is expected in charge transport in molecular materials, in which charges hop between molecules and are virtually unaffected by local magnetic fields. Nevertheless, spin precession was not observed in Alq3, and this is interpreted as being due to the presence of local magnetic order.
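For reference, a free electron spin in a transverse magnetic field B precesses at the Larmor frequency (a textbook relation, not taken from the abstract):

```latex
\omega_L = \frac{g \mu_B B}{\hbar}
```

where g is the electron g-factor and μ_B the Bohr magneton; the absence of this precession in Alq3 devices is what points to local magnetic order.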

In addition to being interesting for their basic physics, these devices can find a number of applications. In particular, they can serve as synapses for neuromorphic computing and can be employed in several types of neural networks, from a simple single-layer perceptron to spiking neural networks. The presence of magnetoresistance adds a unique tool to effect parallel, selective changes to synaptic weights.

We tested their effectiveness in the learning process of a neural network. We considered reward-based learning in an actor-critic framework; that is, the model consists of two networks. The actor network learns a policy, i.e. a suitable action (output) y(x) for the current input x. The critic network learns the expected reward R(x) for this input and is used to improve the learning of the actor network. In both cases, the nonlinear update has a significant advantage in terms of the speed at which the performance goal of a mean reward of 0.95 is reached.
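The actor-critic scheme described above can be sketched in a toy setting. Everything below is an illustrative assumption — the binary task, the logistic actor, the linear critic, and the learning rate are not from the talk — and shows only the generic structure: the critic learns the expected reward for each input, and its prediction error serves as the advantage signal for the actor's policy update.

```python
import math
import random

random.seed(0)

# Toy task (assumption): for input x in {0, 1}, the rewarded action is y = x.
def reward(x, y):
    return 1.0 if y == x else 0.0

w = [0.0, 0.0]   # actor weights (bias, input) for a logistic policy
v = [0.0, 0.0]   # critic weights (bias, input) for a linear value estimate
lr = 0.5         # learning rate (illustrative choice)

def policy(x):
    """Probability of choosing action y = 1 given input x."""
    z = w[0] + w[1] * x
    return 1.0 / (1.0 + math.exp(-z))

mean_r = 0.0
for t in range(2000):
    x = random.randint(0, 1)
    p = policy(x)
    y = 1 if random.random() < p else 0
    r = reward(x, y)

    # Critic: move the value estimate V(x) toward the observed reward.
    V = v[0] + v[1] * x
    delta = r - V            # prediction error = advantage signal
    v[0] += lr * delta
    v[1] += lr * delta * x

    # Actor: policy-gradient step, scaled by the critic's advantage.
    grad = y - p             # d log p(y|x) / dz for the logistic policy
    w[0] += lr * delta * grad
    w[1] += lr * delta * grad * x

    # Running mean reward, the performance measure used in the talk.
    mean_r = 0.99 * mean_r + 0.01 * r
```

After training, the policy is near-deterministic on both inputs and the running mean reward approaches the sort of performance goal mentioned above; how quickly that goal is reached is exactly where the talk compares linear and nonlinear weight updates.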