The video presents experiments in which a humanoid robot is subjected to external pushes and recovers stability by adjusting its step placement and duration.
It starts by showing the effectiveness of the feedback controller during stepping in place, then presents how the developed algorithm regenerates the step placement and duration to regain stability after lateral pushes, and concludes by showing how the algorithm performs during forward locomotion.
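The footstep-adjustment idea can be illustrated with the classic capture-point relation from the Linear Inverted Pendulum Model. This is a simplified stand-in for the paper's optimization of both placement and timing, with illustrative numbers:

```python
import math

# Capture-point-style step adjustment sketch (illustrative values,
# not the paper's optimizer): after a lateral push, placing the next
# footstep at the instantaneous capture point brings the CoM to rest
# over the new foothold.
g, z = 9.81, 0.8                  # gravity, CoM height (assumed)
omega = math.sqrt(g / z)          # LIPM natural frequency

def capture_point(com, com_vel):
    """Lateral capture point: where to step so the CoM comes to rest."""
    return com + com_vel / omega

# A push adds lateral CoM velocity; the nominal step is no longer enough.
com, com_vel = 0.0, 0.4           # m, m/s right after the push
step = capture_point(com, com_vel)
print(round(step, 3))             # 0.114 -> step wider to absorb the push
```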
Citation:
Przemyslaw Kryczka, Petar Kormushev, Nikos Tsagarakis, Darwin G. Caldwell, “Online Regeneration of Bipedal Walking Gait Optimizing Footstep Placement and Timing”, In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 2015.
The video demonstrates a novel concept for kinematic-free control of a robot arm. It implements an encoderless robot controller that does not rely on any joint angle information or estimation and does not require any prior knowledge about the robot kinematics or dynamics.
The approach works by generating actuation primitives and perceiving their effect on the robot’s end-effector using an external camera, thereby building a local kinodynamic model of the robot.
The experiments with this proof-of-concept controller show that it can successfully control the position of the robot's end-effector. More importantly, it can adapt even to drastic changes in the robot kinematics, such as 100% elongation of a link, a 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links.
The proposed control approach looks promising and has many potential applications not only for the control of existing robots, but also for new robot designs.
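A minimal sketch of the underlying idea, with a made-up 2-DOF linear response matrix standing in for the real robot and camera. All names and numbers here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" response of the robot (unknown to the
# controller): end-effector displacement = J_true @ actuation.
J_true = np.array([[0.8, -0.3],
                   [0.2,  0.9]])

def observe_displacement(actuation):
    """Stand-in for the external camera: the observed end-effector
    displacement caused by one actuation primitive."""
    return J_true @ actuation + rng.normal(0, 1e-3, size=2)

# 1. Probe the robot with small random actuation primitives.
primitives = rng.normal(0, 0.1, size=(10, 2))
displacements = np.array([observe_displacement(a) for a in primitives])

# 2. Fit a local linear kinodynamic model from camera observations
#    alone -- no encoders, no prior kinematic model.
Jt_est, *_ = np.linalg.lstsq(primitives, displacements, rcond=None)
J_est = Jt_est.T

# 3. Use the learned model to choose the actuation that produces a
#    desired end-effector displacement.
desired = np.array([0.05, -0.02])
actuation = np.linalg.solve(J_est, desired)
achieved = observe_displacement(actuation)
print(np.allclose(achieved, desired, atol=0.02))
```

If the kinematics change (say, a link is elongated), re-probing simply yields a new local model, which is why this kind of controller can adapt without being told anything about the change.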
Citation:
Petar Kormushev, Yiannis Demiris, Darwin G. Caldwell, “Kinematic-free Position Control of a 2-DOF Planar Robot Arm”, In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 2015.
WALK-MAN is a humanoid robot developed by the Italian Institute of Technology and the University of Pisa, within the European-funded project WALK-MAN (www.walk-man.eu). The project is a four-year research programme which started in October 2013 and aims to develop a humanoid robot for disaster-response operations.
The first prototype of the WALK-MAN robot will participate in the DARPA Robotics Challenge finals in June 2015, but it will be further developed in both hardware and software in order to validate the project results in realistic scenarios, also in consultation with civil-defence bodies. The technologies developed within the WALK-MAN project also have a wide range of other applications, including industrial manufacturing, co-worker robots, and inspection and maintenance robots in dangerous workspaces, and may be provided to others on request.
The robot's perception system includes torque sensing, end-effector force/torque sensors, and a head module equipped with a stereo vision system and a rotating 3D laser scanner, whose posture is controlled by a 2-DOF neck chain. Extra RGB-D and colour cameras mounted at fixed orientations provide additional coverage of the locomotion and manipulation space. IMU sensors at the head and the pelvis provide the necessary inertial/orientation sensing of the body and head frames. Protective soft covers mounted along the body permit the robot to withstand impacts, including those occurring during falls. The software interface of the robot is based on YARP (www.yarp.it) and ROS (www.ros.org).
The video shows a hopping robot that uses a bungee cord in the knee for energy-efficient continuous hopping.
We investigate how the passive and active compliance of the leg can help to absorb the shock of the landing impact and protect the harmonic drives, which are among the most fragile parts of a robot.
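The effect of leg compliance on impact loads can be sketched with a 1-DOF joint under an impedance law. The masses, gains, and impact velocity below are illustrative, not the paper's model:

```python
def landing_peak_torque(k, d, m=5.0, v0=-1.0, dt=1e-4, t_end=0.5):
    """Simulate a 1-DOF joint after landing impact under an active
    impedance law tau = -k*q - d*qd, starting from impact velocity v0.
    Returns the peak magnitude of the joint torque (what the
    harmonic drive must transmit)."""
    q, qd = 0.0, v0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        tau = -k * q - d * qd          # impedance control torque
        qdd = tau / m                  # simplified joint dynamics
        qd += qdd * dt                 # semi-implicit Euler step
        q += qd * dt
        peak = max(peak, abs(tau))
    return peak

stiff = landing_peak_torque(k=2000.0, d=10.0)
compliant = landing_peak_torque(k=200.0, d=30.0)
print(stiff > compliant)  # lower stiffness absorbs the impact more gently
```

The compliant setting trades a larger joint deflection for a much lower peak torque, which is the protective effect the paper investigates.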
Citation:
Houman Dallali, Petar Kormushev, Nikolaos Tsagarakis, Darwin G. Caldwell, “Can Active Impedance Protect Robots from Landing Impact?”, In Proc. IEEE Intl Conf. on Humanoid Robots (Humanoids 2014), Madrid, Spain, 2014.
This is the video accompanying our IROS 2014 paper, “Haptic Exploration of Unknown Surfaces with Discontinuities”. We used a KUKA LWR (Lightweight Robot) arm equipped with a 6-axis force/torque sensor and a rolling pin at the end-effector. The work will be presented in September at the IROS 2014 conference in Chicago, USA.
Video credit: Rodrigo Jamisola and Petar Kormushev
The so-called “visuospatial skills” allow people to visually perceive objects and the spatial relationships among them. This video demonstrates a novel machine learning approach that allows a robot to learn simple visuospatial skills for performing object reconfiguration tasks. The main advantage of this approach is that the robot can learn from a single demonstration, and can generalize the skill to new initial configurations. The results from this research work were presented at the International Conference on Intelligent Robots and Systems (IROS 2013) in Tokyo, Japan in November 2013.
Abstract:
We present a novel robot learning approach based on visual perception that allows a robot to acquire new skills by observing a demonstration from a tutor. Unlike most existing learning-from-demonstration approaches, where the focus is placed on the trajectories, our approach focuses on achieving a desired goal configuration of objects relative to one another. It is based on visual perception that captures the object's context for each demonstrated action. This context is the basis of the visuospatial representation and implicitly encodes the relative positioning of the object with respect to multiple other objects simultaneously. The proposed approach is capable of learning and generalizing multi-operation skills from a single demonstration, while requiring minimal a priori knowledge about the environment. The learned skills comprise a sequence of operations that aim to achieve the desired goal configuration using the given objects. We illustrate the capabilities of our approach using three object reconfiguration tasks with a Barrett WAM robot.
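The goal-centric (rather than trajectory-centric) encoding can be sketched in a few lines. The object names and coordinates are invented for illustration, and the real system extracts them from vision:

```python
import numpy as np

# One demonstration: 2D object positions before and after the
# tutor's action (values are illustrative).
demo_after = {"cup": np.array([0.45, 0.40]),
              "plate": np.array([0.50, 0.40])}

def learn_relative_goal(after, moved, reference):
    """Encode the goal as the moved object's offset relative to a
    reference object, instead of as an absolute trajectory."""
    return after[moved] - after[reference]

def reproduce(new_scene, reference, offset):
    """Place the moved object at the learned relative offset,
    whatever the new initial configuration looks like."""
    return new_scene[reference] + offset

offset = learn_relative_goal(demo_after, "cup", "plate")

# New configuration: the plate is somewhere else entirely.
new_scene = {"cup": np.array([0.0, 0.0]),
             "plate": np.array([0.8, 0.2])}
target = reproduce(new_scene, "plate", offset)
print(target)  # the cup's goal moves with the plate
```

Because only the relative configuration is stored, a single demonstration generalizes to arbitrary new placements of the reference objects.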
Citation:
S. Ahmadzadeh, P. Kormushev, D. Caldwell, “Visuospatial Skill Learning for Object Reconfiguration Tasks,” in Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 3-8 Nov 2013.
The video shows the humanoid robot WABIAN-2R walking with dynamically generated gait. Two scenarios are demonstrated: (1) sudden stopping and reversing, and (2) sudden step change to avoid an obstacle. The walking gait is dynamically generated using a hybrid gait pattern generator capable of rapid and dynamically consistent pattern regeneration.
Abstract:
We propose a two-stage gait pattern generation scheme for full-scale humanoid robots that considers the dynamics of the system throughout the process. The first stage is responsible for generating the preliminary motion reference, such as the step position, timing, and trajectory of the Center of Mass (CoM), while the second stage serves as a dynamics filter and modifies the initial references to make the pattern stable on a full-scale multi-degree-of-freedom humanoid robot. The approach allows the use of simple, easy-to-use models for motion generation, while the dynamics filtering ensures that the pattern is safe to execute on the real-world humanoid robot. The paper describes the approaches used in the two stages, as well as experimental results proving the effectiveness of the method. The fast calculation time and the use of the system's dynamic state as initial conditions for pattern generation make it a good candidate for a real-time gait pattern generator.
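The two-stage idea can be illustrated with the Linear Inverted Pendulum Model: a simple model generates a CoM reference, and a second pass checks (or would correct) the implied Zero Moment Point. The trajectory, foot size, and CoM height below are illustrative, and the real second stage is a full dynamics filter rather than a feasibility check:

```python
import numpy as np

g, z_com = 9.81, 0.8     # gravity, constant CoM height (LIPM assumption)
dt = 0.01
t = np.arange(0.0, 2.0, dt)

# Stage 1: a preliminary CoM reference from a simple model
# (illustrative: a smooth lateral sway between footholds).
com = 0.03 * np.sin(2 * np.pi * 0.5 * t)

# ZMP implied by that CoM motion under the Linear Inverted Pendulum
# Model: p = x - (z / g) * x_ddot
com_dd = np.gradient(np.gradient(com, dt), dt)
zmp = com - (z_com / g) * com_dd

# Stage 2 (dynamics filter, reduced here to a feasibility check):
# the ZMP must stay inside the support polygon, e.g. a +/- 8 cm foot.
support = 0.08
print(np.all(np.abs(zmp) <= support))  # True -> pattern is feasible
```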
Citation:
Przemyslaw Kryczka, Petar Kormushev, Kenji Hashimoto, Hun-ok Lim, Nikolaos Tsagarakis, Darwin G. Caldwell and Atsuo Takanishi. Hybrid gait pattern generator capable of rapid and dynamically consistent pattern regeneration. Proc. URAI 2013.
The video shows a KUKA robot that learns how to grasp and turn a valve autonomously. The robot learns not only how to achieve the goal of the task, but also how to react to different disturbances during the task execution. For example, the robot learns a reactive behavior that allows it to pause and resume the task in response to the changes of the uncertainty in the valve position. This helps the robot to avoid collision with the valve, and improves the reliability and robustness of the task execution.
The setup of this experiment comprises a KUKA LWR (Lightweight Robot) arm, an OptiTrack motion-capture system, and a T-bar valve with an adjustable friction level.
The initial task demonstration and reproduction phases are performed with kinesthetic teaching. The reactive behavior is implemented using a Reactive Fuzzy Decision Maker (RFDM).
The valve-turning task is challenging, especially if the valve moves dynamically. A similar valve-turning task is also included in the DARPA Robotics Challenge (DRC). However, in that challenge the valves are fixed, while here the valve is moving, which makes the task even more difficult to accomplish.
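The pause-and-resume behavior can be sketched as a minimal fuzzy rule base mapping position uncertainty to execution speed. The membership functions and thresholds below are invented for illustration, not the RFDM from the paper:

```python
def reaction_speed(uncertainty):
    """Minimal reactive decision rule in the spirit of a fuzzy
    decision maker: map the valve-position uncertainty
    (0 = certain, 1 = very uncertain) to an execution speed scale
    (0 = pause, 1 = full speed) by blending two fuzzy sets."""
    # Triangular/ramp memberships for "low" and "high" uncertainty.
    low = max(0.0, 1.0 - uncertainty / 0.5)      # peaks at u = 0
    high = max(0.0, (uncertainty - 0.3) / 0.7)   # rises after u = 0.3
    # Defuzzify: weighted average of the rule outputs
    # "low -> full speed (1.0)" and "high -> pause (0.0)".
    return low / (low + high) if (low + high) > 0 else 0.0

print(reaction_speed(0.05))  # near full speed
print(reaction_speed(0.9))   # effectively paused
```

Intermediate uncertainty yields intermediate speeds, so the robot slows down smoothly instead of switching abruptly between moving and pausing.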
Citation:
Seyed Reza Ahmadzadeh, Petar Kormushev and Darwin G. Caldwell. Autonomous Robotic Valve Turning: A Hierarchical Learning Approach. IEEE Intl. Conf. on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, 2013.
We present a learning-based approach for minimizing the electric energy consumption during walking of a passively-compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory which efficiently exploits the robot's passive compliance. To do this, we propose a reinforcement learning method which evolves the policy parameterization dynamically during the learning process and thus manages to find better policies faster than with a fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN, where it achieves a significant energy reduction.
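The core of an evolving parameterization can be sketched as a knot-based policy whose resolution is doubled mid-learning without changing the trajectory it currently represents. The knot values are illustrative, not the paper's policy:

```python
import numpy as np

# The policy is a trajectory (e.g. CoM height over the gait phase)
# represented by interpolation knots; during learning the knot count
# grows, adding resolution where a coarse policy has plateaued.
t_coarse = np.linspace(0, 1, 4)
policy = np.array([0.55, 0.50, 0.52, 0.55])   # illustrative knot values

def evolve(t_old, values):
    """Double the parameterization resolution while exactly
    preserving the piecewise-linear trajectory it represents."""
    t_new = np.linspace(0, 1, 2 * len(t_old) - 1)
    return t_new, np.interp(t_new, t_old, values)

t_fine, policy_fine = evolve(t_coarse, policy)

# The finer policy reproduces the coarse one at the old knots...
print(np.allclose(np.interp(t_coarse, t_fine, policy_fine), policy))
# ...but now exposes more parameters for further refinement.
print(len(policy_fine))  # 7
```

Starting coarse keeps the search space small early on, and evolving the parameterization later allows fine-grained improvements that a fixed coarse policy could never express.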
A Japanese humanoid robot (Fujitsu HOAP-2) learns to clean a whiteboard through upper-body kinesthetic teaching combined with full-body balance control. The research is from an Italian-Japanese collaboration between the Italian Institute of Technology and Tokyo City University.
We present an integrated approach allowing a free-standing humanoid robot to acquire new motor skills by kinesthetic teaching. The proposed full-body control method simultaneously controls the upper and lower body of the robot with different control strategies. Imitation learning is used for training the upper body of the humanoid robot via kinesthetic teaching, while the Reaction Null Space method is used to keep the robot balanced. During demonstration, a force/torque sensor records the exerted forces, and during reproduction, a hybrid position/force controller applies the learned trajectories, in terms of both positions and forces, at the end-effector. The proposed method is tested on a 25-DOF Fujitsu HOAP-2 humanoid robot with a surface-cleaning task.
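The hybrid position/force control idea can be sketched with a selection matrix that splits the Cartesian axes into position-controlled and force-controlled directions. The gains, frames, and setpoints below are illustrative assumptions, not the paper's controller:

```python
import numpy as np

# Selection matrix S: x and y are position-controlled (tracking the
# learned cleaning motion), z is force-controlled (pressing on the
# whiteboard with the learned normal force).
S = np.diag([1.0, 1.0, 0.0])
Kp, Kf = 100.0, 0.05          # illustrative position / force gains

def hybrid_control(x, x_des, f, f_des):
    """Command a Cartesian velocity combining position tracking on
    the selected axes and force tracking on the remaining ones."""
    v_pos = Kp * (x_des - x)                  # position error term
    v_frc = Kf * (f_des - f)                  # force error term
    return S @ v_pos + (np.eye(3) - S) @ v_frc

v = hybrid_control(np.array([0.0, 0.0, 0.30]),   # current pose
                   np.array([0.1, 0.0, 0.30]),   # learned pose target
                   np.array([0.0, 0.0, 2.0]),    # measured force (N)
                   np.array([0.0, 0.0, 5.0]))    # learned force target
print(v)  # slides along x while pushing harder along z
```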
This research was presented at the International Conference on Robotics and Automation (ICRA) in May 2011 in Shanghai, China.
Humanoid robot iCub learns the skill of archery. After being instructed how to hold the bow and release the arrow, the robot learns by itself to aim and shoot arrows at the target. It learns to hit the center of the target in only 8 trials.
The learning algorithm, called ARCHER (Augmented Reward Chained Regression), was developed and optimized specifically for problems like archery training, which have a smooth solution space and prior knowledge about the goal to be achieved. In the case of archery, we know that hitting the center corresponds to the maximum reward we can get. Using this prior information about the task, we can view the position of the arrow's tip as an augmented reward. ARCHER uses a chained local regression process that iteratively estimates new policy parameters which have a higher probability of achieving the goal of the task, based on the experience so far. An advantage of ARCHER over other learning algorithms is that it makes use of richer feedback information about the result of a rollout.
For the archery training, the ARCHER algorithm is used to modulate and coordinate the motion of the two hands, while an inverse kinematics controller is used for the motion of the arms. After every rollout, the image processing component automatically recognizes where the arrow hits the target, and this result is sent as feedback to the ARCHER algorithm. The image recognition is based on Gaussian Mixture Models for color-based detection of the target and the arrow's tip.
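The chained-regression idea can be sketched on a toy problem: a local linear model is fit from policy parameters to arrow landing positions, then inverted toward the target center. The response matrix, probes, and noise level are invented stand-ins for the real robot:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical archery stand-in: two policy parameters (aiming
# angles) map, through a relation unknown to the learner, to where
# the arrow lands relative to the target center.
def shoot(params):
    A = np.array([[1.2, 0.1], [-0.2, 0.9]])   # unknown to the learner
    return A @ params + rng.normal(0, 0.01, size=2)

goal = np.zeros(2)                  # hitting the center = max reward
params = np.array([0.4, -0.3])      # initial (poor) aim
trials = [(params.copy(), shoot(params))]
probes = [np.array([0.2, 0.0]), np.array([0.0, 0.2])]

for i in range(7):
    if i < 2:
        params = trials[0][0] + probes[i]     # exploratory rollouts
    else:
        # Chained local regression over the experience so far:
        # fit y ~ M p + b, then solve M p + b = goal for new params.
        P = np.array([p for p, _ in trials])
        Y = np.array([y for _, y in trials])
        X = np.hstack([P, np.ones((len(P), 1))])
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        M, b = W[:2].T, W[2]
        params, *_ = np.linalg.lstsq(M, goal - b, rcond=None)
    trials.append((params.copy(), shoot(params)))

error = np.linalg.norm(trials[-1][1] - goal)
print(error < 0.1)  # near the center within 8 rollouts
```

Because each rollout's landing position (not just a scalar reward) feeds the regression, very few trials are needed, which mirrors the 8-trial result in the video.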
The experiments are performed on the 53-DOF humanoid robot iCub. The distance between the robot and the target is 3.5 m, and the height of the robot is 104 cm.
This research was presented at the Humanoids 2010 conference in December 2010 in the USA.
Robot learns to flip pancakes! I am teaching a Barrett WAM robot to flip pancakes:
The video shows a 7-DOF Barrett WAM manipulator learning to flip pancakes by reinforcement learning.
The motion is encoded in a mixture of basis force fields through an extension of Dynamic Movement Primitives (DMP) that represents the synergies across the different variables through stiffness matrices. An Inverse Dynamics controller with variable stiffness is used for reproduction.
The skill is first demonstrated via kinesthetic teaching, and then refined using the Policy learning by Weighting Exploration with the Returns (PoWER) algorithm. After 50 trials, the robot learns that the first part of the task requires a stiff behavior to throw the pancake into the air, while the second part requires the hand to be compliant in order to catch the pancake without having it bounce off the pan.
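The reward-weighted update at the heart of PoWER can be sketched on a one-parameter toy problem. The reward function, optimum, and decay schedule are invented for illustration, not the pancake task's actual policy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the refinement stage: the "policy" is a single
# parameter (say, a stiffness level) and the unknown optimum is 0.7.
def reward(theta):
    return np.exp(-10.0 * (theta - 0.7) ** 2)

theta = 0.2     # initial policy, e.g. from the demonstration
sigma = 0.1     # exploration magnitude, decayed over the trials

for _ in range(50):                              # 50 trials, as in the video
    eps = rng.normal(0.0, sigma, size=10)        # exploratory perturbations
    R = np.array([reward(theta + e) for e in eps])
    # PoWER-style update: reward-weighted average of the exploration,
    # so high-reward perturbations pull the policy toward them.
    theta += np.sum(R * eps) / (np.sum(R) + 1e-9)
    sigma *= 0.97                                # gradually reduce exploration

print(abs(theta - 0.7) < 0.1)  # converges near the optimum
```

Because the update is a weighted average of tried perturbations rather than a gradient step, no learning rate has to be tuned, which is one reason PoWER works well for refining demonstrated skills.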