Visuospatial skills allow people to visually perceive objects and the spatial relationships among them. This video demonstrates a novel machine learning approach that allows a robot to learn simple visuospatial skills for performing object reconfiguration tasks. The main advantage of this approach is that the robot can learn from a single demonstration and generalize the skill to new initial configurations. This work was presented at the International Conference on Intelligent Robots and Systems (IROS 2013) in Tokyo, Japan, in November 2013.
We present a novel robot learning approach based on visual perception that allows a robot to acquire new skills by observing a demonstration from a tutor. Unlike most existing learning-from-demonstration approaches, which focus on the demonstrated trajectories, our approach focuses on achieving a desired goal configuration of objects relative to one another. Visual perception captures the object’s context for each demonstrated action. This context is the basis of the visuospatial representation and implicitly encodes the relative positioning of the object with respect to multiple other objects simultaneously. The proposed approach is capable of learning and generalizing multi-operation skills from a single demonstration, while requiring minimal a priori knowledge about the environment. The learned skills comprise a sequence of operations that aim to achieve the desired goal configuration using the given objects. We illustrate the capabilities of our approach with three object reconfiguration tasks on a Barrett WAM robot.
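To give a rough intuition for what a relative, goal-centric context might look like, here is a purely illustrative sketch (not the implementation from the paper, and all function names are hypothetical): an object's context is taken to be its offsets to the other objects in the demonstrated goal state, and those offsets are reused in a new scene to compute where the object should be placed.

```python
import numpy as np

def capture_context(obj_pos, other_positions):
    """Illustrative context: the object's offset to every other object
    in the demonstrated goal configuration."""
    return np.array([p - obj_pos for p in other_positions])

def reproduce_target(demo_context, other_positions):
    """Place the object in a new scene so its offsets best match the
    demonstrated context (simple average of the candidate placements)."""
    # Each demonstrated offset implies one candidate target position.
    candidates = np.array(other_positions) - demo_context
    return candidates.mean(axis=0)

# Demonstration: object at the origin, two landmarks nearby.
ctx = capture_context(np.array([0.0, 0.0]),
                      [np.array([1.0, 0.0]), np.array([0.0, 1.0])])

# New initial configuration: the landmarks have moved; compute where
# to place the object to recreate the demonstrated relative layout.
target = reproduce_target(ctx, [np.array([2.0, 1.0]), np.array([1.0, 2.0])])
```

Because the representation is relative rather than trajectory-based, a single demonstration generalizes to any new arrangement of the surrounding objects, which is the property the abstract emphasizes.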
Link to publication:
S. Ahmadzadeh, P. Kormushev, D. Caldwell, “Visuospatial Skill Learning for Object Reconfiguration Tasks,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 3-8 November 2013.