Bernoulli Institute, Autonomous Perceptive Systems Research

PhD project: Continuous learning in robot navigation using virtual categorization and reinforcement learning


Name: Amir Shantia

Supervisors:
prof. dr. L.R.B. (Lambert) Schomaker
dr. M.A. (Marco) Wiering

Summary of PhD project:

We expect that many robots will assist people in their daily lives in the future (e.g., 20 years from now). In this project we focus on robots that can be effective assistants to humans in their own homes. A robot could help to tidy a room, search for wanted objects, fetch an item for its owner, etc. To arrive at this intelligent robot behaviour, we propose the development of novel algorithms in the fields of machine vision, machine learning, and robotics.

The robot is located in a physical environment and therefore has to navigate to particular goal locations within it. In some cases, such as searching for an object, there is no known goal location, and the robot has to explore its environment as efficiently as possible. In other cases, the robot must fetch an object that is at a standard location, so it should be able to navigate from its dynamic and only partially known initial position to the goal position.

Currently, robotic mapping and localization methods are mostly dominated by a combination of spatial alignment of sensory inputs, loop-closure detection, and a global fine-tuning step. This requires either expensive depth-sensing systems or fast computational hardware at run-time to produce a 2D or 3D map of the environment that is suitable for navigation.
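To illustrate the global fine-tuning step mentioned above, the following toy sketch (not from the project itself; all values are invented) optimizes a one-dimensional pose graph: odometry measurements drift, a loop closure constrains the final pose, and gradient descent on the squared residuals redistributes the accumulated error over the trajectory.

```python
import numpy as np

# Toy 1-D pose graph: five poses along a loop of true length 4.0.
# Each odometry step is really 1.0, but the measured steps drift to 1.1.
odom = np.array([1.1, 1.1, 1.1, 1.1])
x = np.concatenate(([0.0], np.cumsum(odom)))  # initial estimate from raw odometry

LOOP = 4.0  # loop-closure measurement: x[4] - x[0] should be 4.0

# Gradient descent on the sum of squared residuals
# (odometry terms plus one loop-closure term); x[0] is held fixed as anchor.
for _ in range(2000):
    r_odom = x[1:] - x[:-1] - odom          # odometry residuals
    r_loop = x[-1] - x[0] - LOOP            # loop-closure residual
    g = np.zeros_like(x)
    g[1:] += 2 * r_odom                     # d/dx[i+1] of each odometry term
    g[:-1] -= 2 * r_odom                    # d/dx[i] of each odometry term
    g[-1] += 2 * r_loop                     # loop closure touches first/last pose
    g[0] -= 2 * r_loop
    g[0] = 0.0                              # anchor the first pose
    x = x - 0.05 * g

# The 0.4 of accumulated drift is now spread over odometry and loop terms:
# every step shrinks from 1.1 toward ~1.02 and the final pose lands near 4.08.
```

Real SLAM back-ends (e.g. pose-graph optimizers) do the same minimization in 2D/3D with rotations and robust solvers; the sketch only shows the error-redistribution idea.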

In this project, we adopt a visual approach, using image information to tackle robot localization and navigation problems. We compare the effects of different feature vectors (e.g. plain images, edge and color information) and multiple machine learning techniques (e.g. deep neural networks, support vector machines) on the precision of the localization results.
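To make the classification view of visual localization concrete, here is a minimal sketch, not the project's actual pipeline: synthetic camera "views" of rooms are described by color histograms and localized with a nearest-centroid classifier. The room names, colors, and image sizes are invented for illustration; the project compares richer features and stronger learners.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(img, bins=8):
    """Concatenated per-channel histograms as a simple image feature vector."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def make_room_image(mean_color):
    """Synthetic 32x32 RGB 'camera view' with a dominant room color."""
    img = rng.normal(mean_color, 20.0, size=(32, 32, 3))
    return np.clip(img, 0, 255)

rooms = {"kitchen": (200, 60, 60), "hallway": (60, 200, 60)}

# Training: average the histogram of 20 views per room (one centroid per place).
centroids = {name: np.mean([color_histogram(make_room_image(c))
                            for _ in range(20)], axis=0)
             for name, c in rooms.items()}

def localize(img):
    """Predict the place whose centroid is closest to the image's histogram."""
    h = color_histogram(img)
    return min(centroids, key=lambda n: np.linalg.norm(h - centroids[n]))

print(localize(make_room_image(rooms["kitchen"])))  # → kitchen
```

Swapping the nearest-centroid step for an SVM or a deep network, and the histogram for edge features or raw pixels, gives exactly the kind of comparison described above.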

After this step, the robot can plan its navigation to a goal either using a location-based approach (e.g. A* methods) or by attempting to build a visual path toward the goal using reinforcement learning techniques. The experiments are first performed in a 3D simulation environment and, after success, are tested on the real robot in different environments.
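For the location-based branch, a standard A* planner over a 2D occupancy grid is the usual baseline. The sketch below (grid, start, and goal are invented for illustration) uses 4-connectivity and a Manhattan-distance heuristic, which is admissible for unit-cost grid moves.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()              # tiebreaker so the heap never compares nodes
    frontier = [(h(start), 0, next(tie), start, None)]
    parent = {}                          # expanded node -> predecessor
    best_g = {start: 0}
    while frontier:
        _, g, _, node, par = heapq.heappop(frontier)
        if node in parent:
            continue                     # already expanded at lower cost
        parent[node] = par
        if node == goal:                 # reconstruct the path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0
                    and g + 1 < best_g.get((nr, nc), float("inf"))):
                best_g[(nr, nc)] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, next(tie), (nr, nc), node))
    return None                          # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = a_star(grid, (0, 0), (3, 3))  # 7 cells: the 6-step optimum around the walls
```

The reinforcement-learning branch replaces this explicit map-and-search step with a learned policy over visual states, which is what the project investigates.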

This navigation block will be used in combination with other capabilities of the robot (e.g. human tracking and interaction, object recognition, manipulation) which are developed and maintained by the Cognitive Robotics Laboratory (CRL) team during the project.

Last modified: 26 January 2024, 3:42 p.m.