GEFIFES Seminar: Herbert Jaeger (RUG) - "An introduction to reservoir computing - A bridge between neuroscience and machine learning"
When: Thu 05-09-2019, 16:00-17:00
Recurrent neural networks (RNNs) have recently become widely used in the "deep learning" field of machine learning, especially for speech and language processing tasks. However, such deep learning setups are computationally very expensive, require very large volumes of training data, and need high-precision numerical processing, all of which are "no-go" conditions for biological brains.
Reservoir Computing (RC) is an alternative machine learning approach for RNNs which is in many respects complementary to deep learning, and closer to biological brains. In RC, a large, random, possibly low-precision and noisy RNN is used as a nonlinear excitable medium, called the "reservoir", which is driven by an input signal. The reservoir itself is not adapted or trained. Instead, only a "readout" mechanism is trained, which assembles the desired output signal from the large variety of random, excited signals within the reservoir. This simple principle is surprisingly powerful, and it represents one of the few genuine bridging concepts between computational neuroscience and machine learning. Recently, RC has become popular in research that aims at useful computation on the basis of unconventional hardware: hardware that is non-digital, stochastic, low-precision, and time-varying, just like biological brains.
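The reservoir/readout principle described above can be illustrated with a minimal echo state network sketch in NumPy. All specifics here are illustrative assumptions, not from the talk: the reservoir size, spectral-radius scaling, the sine-wave one-step prediction task, and the use of ridge regression for the readout.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, washout = 1, 200, 100  # illustrative sizes, not from the talk

# Fixed random reservoir: input and recurrent weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Toy task (assumed): predict the next sample of a sine wave.
t = np.arange(1200)
u = np.sin(2 * np.pi * t / 50).reshape(-1, 1)
inputs, targets = u[:-1], u[1:]

# Drive the reservoir with the input signal and collect its states.
x = np.zeros(n_res)
states = []
for u_t in inputs:
    x = np.tanh(W @ x + W_in @ u_t)
    states.append(x.copy())
X = np.array(states)[washout:]  # discard the initial transient
Y = targets[washout:]

# Train only the linear readout, here via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y)
print(f"one-step prediction NRMSE: {nrmse:.4f}")
```

Note how the code mirrors the text: the random matrices `W` and `W_in` form the untrained "reservoir", and only `W_out`, the readout, is fitted to recombine the excited reservoir signals into the desired output.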