Lifelong Interactive Robot Learning
Despite considerable progress in robotics, computer vision, and machine learning, service robots do not yet live among humans to assist with daily tasks. The underlying reason is that robots are usually painstakingly programmed and trained in advance to perform specific tasks in a specific way. The knowledge of such robots is therefore fixed after the training phase, and any change in the environment requires complicated, time-consuming, and expensive reprogramming by expert users.
In our group, we focus on “Lifelong Interactive Robot Learning” to make robots capable of learning in an open-ended fashion by interacting with non-expert users. In this line of research, apart from robot self-learning, non-expert human users can interactively guide the process of experience acquisition by teaching new concepts or by correcting insufficient or erroneous ones. This way, the robot continually learns how to help humans in various household and industrial tasks, adapts to user preferences, and learns from user input without the need for reprogramming. We have been developing this theme along six specific research directions:
1. Lifelong Interactive Robot Learning: In human-centric environments, no matter how extensive the training data used for batch learning, a robot will always encounter new objects. Therefore, the robot should be able to learn about new objects and tasks continually, on-site, from a few training examples, by interacting with non-expert users.
2. Robot Perception and Perceptual Learning: We are interested in attaining a 3D understanding of the world around us. To assist humans in various tasks, a robot needs to know which kinds of objects exist in a scene and where they are. Each of these questions remains challenging because of the demand for accurate, real-time responses, the endless number of possible categories, large variations in object appearance, and concept drift. Therefore, apart from batch learning, the robot should be able to learn new object categories online from very few training examples, supported by human-in-the-loop feedback.
3. Object Grasping and Manipulation: A service robot must be able to grasp and manipulate objects in different situations to interact with the environment and with human users. Object grasping and manipulation is among the most challenging tasks in robotics and requires knowledge from different fields. We are interested in fundamental research in object-agnostic grasping, affordance detection, task-informed grasping, and object manipulation.
4. Dual-Arm Manipulation: For humans, doing something with "one hand behind the back" is seen as a challenge. Similarly, a robot can often perform a task with a single gripper, but doing so may be more difficult, and some tasks are impossible with one arm alone. A dual-arm robot offers the manipulability and maneuverability necessary to accomplish a range of everyday tasks, such as dishwashing, hammering, and screwing. We are interested in efficient imitation learning, collaborative manipulation, and large-object manipulation.
5. Dynamic Robot Motion Planning: We are interested in attaining fully reactive manipulation functionalities in a closed-loop manner. In particular, service robots should be able to quickly plan a collision-free trajectory to manipulate a target object to a desired goal pose. Reactive robotic systems have to continuously check whether they are at risk of collision, while planners must check every configuration the robot may attempt to use.
6. Exploiting Multimodality: A service robot may sense the world through different modalities that provide visual, haptic, or auditory cues about the environment. In this vein, we are interested in exploiting multimodality to learn better representations and improve the robot's performance.
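As a concrete illustration of the on-site, few-example learning in directions 1 and 2, the sketch below shows a minimal instance-based learner. All names (`OpenEndedLearner`, `teach`, `correct`) and the feature vectors are illustrative assumptions, not our actual system: the robot stores a few feature vectors per category, classifies by the nearest stored instance, and folds user corrections back into memory, so new categories can be added at any time without retraining.

```python
import numpy as np

class OpenEndedLearner:
    """Minimal instance-based, open-ended category learner (a sketch)."""

    def __init__(self):
        self.memory = {}  # category name -> list of stored feature vectors

    def teach(self, category, feature):
        # A non-expert user shows an object view and names its category.
        self.memory.setdefault(category, []).append(np.asarray(feature, dtype=float))

    def classify(self, feature):
        # Predict the category of the nearest stored instance.
        feature = np.asarray(feature, dtype=float)
        best, best_dist = None, np.inf
        for category, instances in self.memory.items():
            for instance in instances:
                dist = np.linalg.norm(feature - instance)
                if dist < best_dist:
                    best, best_dist = category, dist
        return best

    def correct(self, category, feature):
        # User feedback after a wrong prediction becomes a new example.
        self.teach(category, feature)

learner = OpenEndedLearner()
learner.teach("mug", [1.0, 0.0])     # toy features standing in for shape descriptors
learner.teach("plate", [0.0, 1.0])
print(learner.classify([0.9, 0.1]))  # → mug
```

Because learning is just storing labelled instances, teaching a new category or correcting a mistake costs one memory update rather than a retraining run.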
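Object-agnostic grasping (direction 3) can be hinted at with a simple geometric sketch; the function below is an illustrative assumption, not a full grasp planner. It aligns a parallel gripper's closing direction with the object's minor principal axis, found by PCA on the object's point cloud, and approaches at the centroid, i.e. it grasps the object across its narrowest extent.

```python
import numpy as np

def principal_axis_grasp(points):
    """Illustrative object-agnostic grasp heuristic (a sketch):
    return a grasp point (the centroid) and a gripper closing
    direction (the minor principal axis of the point cloud)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # PCA via the eigendecomposition of the covariance matrix.
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order, so the first
    # eigenvector spans the narrowest extent of the object: a good
    # closing direction for a parallel gripper.
    closing_dir = eigvecs[:, 0]
    return centroid, closing_dir

# Toy 2D "point cloud" of an elongated object lying along the x-axis:
box = [[0.0, 0.0], [1.0, 0.01], [2.0, -0.01], [3.0, 0.0], [4.0, 0.02]]
centroid, closing = principal_axis_grasp(box)
# closing is approximately the y-axis: the object's narrow side
```

A real pipeline would additionally check gripper width against the object's extent and reachability of the resulting pose; the sketch only shows the geometric core of the heuristic.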
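The continuous collision checking behind direction 5 can be sketched as follows; the function name and the sphere-obstacle model are illustrative assumptions. Each control cycle, every configuration along the candidate trajectory is tested against the current obstacle estimates, and the robot replans as soon as any test fails.

```python
import numpy as np

def path_is_collision_free(waypoints, obstacles, robot_radius=0.1):
    """Illustrative sketch of the per-cycle check in a reactive planner:
    obstacles are modelled as spheres (center, radius), and every
    waypoint of the candidate trajectory is tested against all of them."""
    for q in waypoints:
        q = np.asarray(q, dtype=float)
        for center, radius in obstacles:
            if np.linalg.norm(q - np.asarray(center, dtype=float)) < radius + robot_radius:
                return False  # this configuration is in collision
    return True

# In a closed loop, the same check is repeated as obstacles move:
path = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
obstacles = [((0.5, 0.05), 0.05)]
print(path_is_collision_free(path, obstacles))  # → False: the middle waypoint is too close
```

In practice the check runs over the robot's full geometry in configuration space rather than a single point with a safety radius, but the closed-loop structure — check, and replan on failure — is the same.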
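Finally, a minimal sketch of feature-level fusion for direction 6 (the function and the normalization choice are illustrative assumptions): each modality's feature vector is L2-normalized so that no single modality dominates by scale, and the results are concatenated into one joint representation.

```python
import numpy as np

def fuse_modalities(features):
    """Illustrative late feature-level fusion (a sketch): L2-normalize
    each modality's feature vector, then concatenate them all."""
    normalized = []
    for f in features:
        f = np.asarray(f, dtype=float)
        norm = np.linalg.norm(f)
        normalized.append(f / norm if norm > 0 else f)
    return np.concatenate(normalized)

visual = [3.0, 4.0]  # toy stand-in for a visual shape descriptor
haptic = [0.0, 2.0]  # toy stand-in for a stiffness/texture reading
joint = fuse_modalities([visual, haptic])
print(joint)  # a 4-dimensional joint representation
```

The joint vector can then be fed to any downstream learner; more elaborate schemes learn the fusion weights instead of fixing them by normalization.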
Last modified: 17 November 2021, 3:48 p.m.