Affordance Learning from Human Demonstrations
The work carried out on this project is part of the EU FP7 DARWIN Project, whose aim is to create a robotic assembly system able, in the general case, to assemble an object, finding a solution on its own if necessary. The robot will be able to reason, given the layout of the objects in the scene, and decide which object best fits the current step while pursuing the final assembly goal. In fulfilling a goal, it is fundamental to understand how to grasp an object (“Grasp Affordance”) and how best to use an object (“Object Affordance”). For example, an umbrella affords protection from rain, reaching distant objects, and, as an extension of the arm, pressing a button; each of these three usages requires a different grasp affordance to be properly fulfilled. The main focus of KCL’s work is to extract and exploit object and grasp affordances, also taking advantage of human demonstrations.
- Provide a framework for teaching the robot how to grasp and how to use objects based on demonstrations.
- Extract and exploit object affordances to accomplish complex tasks, such as building a stack or tidying up a desk.
- Study how human grasping skills can be transferred to robots.
- Explore how different shapes can trigger different grasp affordances.
- Learn assembly tasks from human demonstrations.
Learning to grasp from kinaesthetic demonstrations
Preliminary results in the context of Grasp Affordances have been achieved using kinaesthetic teaching of grasp synergies. After kinaesthetic demonstrations performed on the humanoid robot iCub at the Italian Institute of Technology, the robot is able to learn from demonstrations how to perform a successful grasp. In contrast to traditional grasping techniques, which require complex computations to derive geometry-dependent grasp postures, we developed a grasping controller based on a grasp policy and an enveloping phase. A grasp policy is a matrix of the robotic hand’s joint encoder readings collected during a grasping demonstration on a simple shape. This matrix can be reduced, using a mathematical decomposition, to 2 or 3 components, compressing the high dimensionality of the robotic hand into a few variables. The posture is then generalised to different, more complex shapes by kinematically enveloping the digits around the object. The videos below show the kinaesthetic teaching phase and the results of the grasping algorithm in a simulated environment.
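The two steps above can be sketched in code. This is a minimal illustration, not the project’s implementation: it assumes a PCA-style decomposition (one common choice of “mathematical decomposition”) of a matrix of demonstrated joint postures, and the joint count, demonstration data, and `in_contact` test are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical demonstration data: each row is one recorded hand posture
# (joint encoder readings; 9 actuated finger joints is an assumed number).
rng = np.random.default_rng(0)
n_demos, n_joints = 50, 9
demos = rng.random((n_demos, n_joints))

# --- Step 1: reduce the grasp-policy matrix to a few synergies ---
mean_posture = demos.mean(axis=0)
centered = demos - mean_posture
# SVD of the centred matrix gives principal components; keep the first
# 2-3 as grasp synergies, reducing the hand to a few variables.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
n_synergies = 3
synergies = vt[:n_synergies]            # shape (n_synergies, n_joints)

# Any demonstrated posture is then approximated by just 3 coefficients.
coeffs = centered[0] @ synergies.T      # project one demo onto the synergies
approx_posture = mean_posture + coeffs @ synergies

# --- Step 2: kinematic enveloping (sketch) ---
def envelope(posture, in_contact, step=0.02, max_angle=1.0):
    """Close the joints along the synergy posture until each digit
    touches the object. `in_contact` is a hypothetical per-joint
    contact test returning a boolean array."""
    angles = np.zeros_like(posture)
    for _ in range(int(max_angle / step)):
        moving = ~in_contact(angles)
        if not moving.any():            # every digit has made contact
            break
        angles[moving] = np.minimum(
            angles[moving] + step * posture[moving], max_angle)
    return angles
```

With a contact test that fires once a joint passes some closure angle, `envelope` stops each digit independently, which is what lets a posture learned on a simple shape wrap around a more complex one.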
Study of attention in detecting and optimizing vision for robotic assemblies
The second aim of KCL in the DARWIN project is to investigate how the usage of an object can influence attention and vice versa. Attention is required to direct the grasping process towards the objects of interest and to optimise conventional computer vision algorithms.
This work is currently in progress.