
Herke van Hoof - Robot Learning and Explorations with Realistic Sensor Input

When: Mon 21-03-2016, 15:00 - 16:00
Where: 5161.0222

To act autonomously in unstructured environments with realistic sensor input, robots have to be able to adapt to unforeseen conditions. This adaptation requires the robot to learn from its own actions and their sensory effects -- often with weak or non-existent supervision. I will present two possible learning methods that are tailored to such situations.

Learning about unknown environments without supervision is a challenging task. In such situations, a robot can only learn through sensory feedback obtained through interaction with its environment. In our bottom-up, probabilistic approach, the robot tries to segment the objects in its environment through non-parametric clustering based on observed movement, with minimal prior knowledge. Information-theoretic principles can be used to autonomously select actions that maximise the expected information gain, and thus the learning speed.
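To illustrate the information-theoretic action-selection idea in the abstract above, the sketch below scores candidate actions by their expected reduction in the entropy of a discrete belief (e.g. over candidate segmentations) and picks the most informative one. The discretisation, likelihood tables, and toy data are assumptions for illustration, not the model presented in the talk.

```python
# Minimal sketch of expected-information-gain action selection (illustrative;
# hypotheses, actions, and likelihoods here are assumed toy quantities).
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(belief, likelihood):
    """Expected entropy reduction of `belief` after observing an outcome.

    belief:     (H,) prior over hypotheses (e.g. candidate segmentations)
    likelihood: (H, O) p(observation | hypothesis) for one candidate action
    """
    # Predictive distribution over observations under the current belief.
    predictive = belief @ likelihood                      # (O,)
    # Posterior over hypotheses for each possible observation (Bayes' rule).
    posterior = belief[:, None] * likelihood              # (H, O)
    posterior /= posterior.sum(axis=0, keepdims=True)
    # Expected posterior entropy, weighted by how likely each observation is.
    expected_posterior_entropy = sum(
        predictive[o] * entropy(posterior[:, o])
        for o in range(likelihood.shape[1])
    )
    return entropy(belief) - expected_posterior_entropy

def select_action(belief, likelihoods):
    """Pick the action whose observation is expected to be most informative."""
    gains = [expected_information_gain(belief, lik) for lik in likelihoods]
    return int(np.argmax(gains)), gains

# Toy example: 3 hypotheses, 2 candidate pushes, binary "did the object move?" outcome.
belief = np.array([0.5, 0.3, 0.2])
likelihoods = [
    np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]),  # action 0
    np.array([[0.6, 0.4], [0.5, 0.5], [0.4, 0.6]]),  # action 1
]
best, gains = select_action(belief, likelihoods)
print("expected gains:", np.round(gains, 3), "-> choose action", best)
```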

In the reinforcement learning paradigm, on the other hand, feedback is available in the form of reward signals. These weak reward signals contrast with rich sensory data that, even for simple tasks, are often non-linear and high-dimensional. Sensory data can be leveraged to learn a system model, but in high-dimensional sensory spaces this step often requires manually designing features. We propose a robot reinforcement learning algorithm with a non-parametric learned model, value function, and policy that can deal with high-dimensional state representations. As such, the algorithm is well suited to real-robot tactile and visual sensors.
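As a rough illustration of how a non-parametric value function can operate directly on high-dimensional sensory states without hand-designed features, the sketch below fits a kernel-ridge value estimate to sampled transitions via bootstrapped targets. This is a generic kernel-regression example under assumed toy data and hyperparameters, not the specific algorithm presented in the talk.

```python
# Illustrative sketch: kernel-based (non-parametric) value-function fitting on
# raw high-dimensional states. Data, kernel choice, and hyperparameters are
# assumptions for this example only.
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    """Gaussian kernel matrix between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def fit_value_function(states, rewards, next_states, gamma=0.95,
                       bandwidth=1.0, reg=1e-3, sweeps=50):
    """Fitted value estimation with a kernel-ridge regressor.

    The value function is represented as V(s) = k(s, states) @ alpha, so the
    kernel works on raw states and no manual feature design is needed.
    """
    K = rbf_kernel(states, states, bandwidth)
    K_next = rbf_kernel(next_states, states, bandwidth)
    n = len(states)
    alpha = np.zeros(n)
    for _ in range(sweeps):
        # Bootstrapped targets: r + gamma * V(s') under the current estimate.
        targets = rewards + gamma * K_next @ alpha
        # Kernel ridge regression onto the targets.
        alpha = np.linalg.solve(K + reg * np.eye(n), targets)
    return lambda query: rbf_kernel(np.atleast_2d(query), states, bandwidth) @ alpha

# Toy data standing in for high-dimensional tactile/visual observations.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 50))                  # 50-dimensional "sensor" states
next_states = states + 0.1 * rng.normal(size=states.shape)
rewards = -np.linalg.norm(states[:, :2], axis=1)     # reward depends on 2 latent dims
V = fit_value_function(states, rewards, next_states)
print("V at a new state:", V(rng.normal(size=50)))
```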
