00:00 This wheeled-legged robot performs hybrid motions that combine the advantages of wheeled and legged locomotion.
00:07 Recently, the robot learned how to stand up.
00:11 Its front legs can now act as arms to fulfill manipulation tasks, such as opening a door or moving a package.
00:19 In this work, we present a curiosity-driven reinforcement learning approach to achieve such behaviors.
00:26 Both learning-based and non-learning-based methods usually require extensive task-specific engineering, for example in the form of reward shaping.
00:42 We overcome this limitation by defining a single sparse, task-specific reward that is given only when the task is achieved.
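As a minimal sketch of such a sparse reward, the agent receives a reward of one only at success and zero everywhere else. The success criterion below (`door_angle` exceeding an `OPEN_THRESHOLD`) is an illustrative assumption, not the exact condition used in the work.

```python
OPEN_THRESHOLD = 1.0  # radians; hypothetical "door is open" criterion


def sparse_task_reward(door_angle: float) -> float:
    """Return 1.0 only when the task (opening the door) is achieved,
    and 0.0 otherwise -- no shaping terms anywhere else."""
    return 1.0 if door_angle >= OPEN_THRESHOLD else 0.0
```

Because the reward carries no gradient-like shaping signal, naive exploration rarely stumbles on it, which is exactly the problem the curiosity mechanism below addresses.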
00:48 The resulting control policies show a high level of repeatability.
00:55 To be able to discover this sparse reward, the agent needs to explore its environment.
01:00 Making the agent curious intrinsically motivates exploration.
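One common way to make an agent "curious" is to reward the prediction error of a learned forward model: states the agent cannot yet predict are interesting. The sketch below uses that prediction-error formulation as an assumption; the narration does not specify the exact curiosity objective used here.

```python
import numpy as np


def curiosity_reward(predicted_next_state, actual_next_state):
    """Intrinsic reward proportional to the forward model's squared
    prediction error over the curiosity state (an assumed, common
    formulation -- not necessarily the paper's exact one)."""
    err = np.asarray(actual_next_state) - np.asarray(predicted_next_state)
    return float(0.5 * np.dot(err, err))
```

During training, this intrinsic reward would be added to the sparse task reward, so the agent is driven to interact with whatever it cannot yet predict.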
01:04 We define a state to focus the agent's curiosity on the object of interest.
01:09 For the door-opening task, the curiosity state is simply defined as the door's position and velocity, as well as the distance between the robot and the door.
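The curiosity state described in the narration can be assembled in a few lines. The function and variable names below are illustrative assumptions; only the contents (door position, door velocity, robot-door distance) come from the source.

```python
import numpy as np


def door_curiosity_state(door_pos, door_vel, robot_pos, door_handle_pos):
    """Curiosity state for the door-opening task: the door's position
    and velocity plus the distance between the robot and the door.
    Curiosity is computed over this state only, focusing exploration
    on the object of interest."""
    dist = float(np.linalg.norm(np.asarray(robot_pos) - np.asarray(door_handle_pos)))
    return np.array([door_pos, door_vel, dist])
```

Swapping in a different task, such as moving a package, would only require replacing this state definition with the corresponding object quantities.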
01:17 As a result, the robot starts to play with the door until the task is achieved.
01:22 Note that the curiosity state is the only thing we have to change to achieve a different task, such as manipulating a package.
01:36 We employ a simple perception system relying on a single camera for all task-specific observations.
01:50 We simulate the camera's field of view during training to enable active tracking of visual markers.