WALK-MAN robot

WALK-MAN is a humanoid robot developed by the Italian Institute of Technology and the University of Pisa in Italy, within the European-funded project WALK-MAN (www.walk-man.eu). The project is a four-year research programme which started in October 2013 and aims to develop a humanoid robot for disaster-response operations.

WALK-MAN is an acronym for “Whole Body Adaptive Locomotion and Manipulation”, underlining its main research goal: enhancing the capabilities of existing humanoid robots so that they can operate in emergency situations, assisting or replacing humans in damaged civil sites, including buildings such as factories, offices and houses. In such scenarios, the WALK-MAN robot will demonstrate human-like locomotion, balance and manipulation capabilities. To reach these targets, the WALK-MAN design and implementation rely on high-performance actuation systems, a compliant body and soft, underactuated hands, taking advantage of recent developments in mechanical design, actuation and materials.

The first prototype of the WALK-MAN robot will participate in the DARPA Robotics Challenge finals in June 2015, but it will be developed further, both in hardware and in software, in order to validate the project results in realistic scenarios, in consultation with civil-defence bodies. The technologies developed within the WALK-MAN project also have a wide range of other applications, including industrial manufacturing, co-worker robots, and inspection and maintenance robots for dangerous workspaces, and may be provided to others on request.

Technical details

The prototype WALK-MAN platform is an adult-size humanoid with a height of 1.85 m, an arm span of 2 m and a weight of 118 kg. The robot is fully power-autonomous, electrically powered by a 2 kWh battery unit. Its body has 33 degrees of freedom (DOF), actuated by high-power electric motors that are all equipped with intrinsic elasticity, giving the robot superior physical interaction capabilities.

The robot's perception system includes torque sensing, end-effector force/torque (F/T) sensors, and a head module equipped with a stereo vision system and a rotating 3D laser scanner, the posture of which is controlled by a 2-DOF neck chain. Extra RGB-D and colour cameras mounted at fixed orientations provide additional coverage of the locomotion and manipulation space. IMU sensors at the head and the pelvis provide the necessary inertial/orientation sensing of the body and head frames. Protective soft covers mounted along the body permit the robot to withstand impacts, including those occurring during falls. The software interface of the robot is based on YARP (www.yarp.it) and ROS (www.ros.org).
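
For readers curious how such a sensor stream might be read in practice, here is a minimal sketch using the YARP Python bindings. The source port name “/walkman/inertial” is purely hypothetical (the robot's actual port names are not listed here), and the snippet assumes a running YARP name server.

    # Minimal sketch: reading one frame from a (hypothetical) sensor port over YARP.
    # Assumes the YARP Python bindings are installed and a YARP name server is running.
    import yarp

    yarp.Network.init()

    reader = yarp.BufferedPortBottle()
    reader.open("/myapp/inertial:i")                                 # local input port
    yarp.Network.connect("/walkman/inertial", "/myapp/inertial:i")   # hypothetical source port

    bottle = reader.read()                                           # blocking read of one data frame
    if bottle is not None:
        print("Sensor frame:", bottle.toString())

    reader.close()
    yarp.Network.fini()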

The WALK-MAN team information is available on the DARPA DRC website:
http://www.theroboticschallenge.org/finalist/walk-man

Photo slideshow

Selected best photos

Reviewing Revolution

(excerpt from Ludmila Kuncheva’s page)

“Reviewing will become obsolete. It has been needed in the past because there has been no way to tap on a larger readers’ audience for an opinion poll. Peer reviewing has been the only credible way to maintain standards of publication. The growing diversity of topics makes this process impractical, biased or spurious. We have technology now! We can allow for peer reviewing on a massive scale. Imagine a large pool of papers, automatically clustered and positioned within a big mosaic. Where do you look for papers? I doubt very much that you browse the contents of all relevant journals. Thank God for Internet! Now suppose that you have access to all papers. The best ones will be spotted and cited over and over. The citations will replace the reviews.

There will be fewer journals such as Nature, Science and Lancet. Only the best papers will find their place in the journals. These papers will no longer be original research, they will be rather “the best of…”. Selected by citation from the pool, say for the past 1 year, these papers can undergo a round of peer review. This time, however, the reviewing rules will be different:

  • First, all reviews will be handsomely paid.
  • Second, reviewers will bid for a paper. The candidates should submit their records, and the Editor will have the task to select among them.

As an additional benefit, we will kill fewer trees. Plus, a lot of human resources will be freed for better use of their expertise and energy.”

IEEE Technical Committee on Robot Learning

Robot hardware is progressively becoming more complex, which has led to growing interest in applying machine learning and statistical approaches within the robotics community. At the same time, there has been growing interest within the machine learning community in using robots as motivating applications for new algorithms and formalisms. Considerable evidence of this can be seen in the use of robot learning approaches in high-profile competitions such as RoboCup and the DARPA Challenges, and in the growing number of research programmes funded by governments around the world. Additionally, the volume of research is increasing, as shown by the number of robot learning papers accepted to IROS and ICRA, and the corresponding number of learning sessions.


Photos: iCub robot archer; pancake-flipping robot.
The primary goal of the Technical Committee on Robot Learning is to act as a focal point for the wide distribution of technically rigorous results in the shared areas of interest around robot learning. Without being exclusive, such areas of research interest include:

  • learning models of robots, tasks or environments
  • learning deep hierarchies or levels of representations, from sensor and motor representations to task abstractions
  • learning of plans and control policies by imitation and reinforcement learning
  • integrating learning with control architectures
  • methods for probabilistic inference from multi-modal sensory information (e.g., proprioceptive, tactile, vision)
  • structured spatio-temporal representations designed for robot learning such as low-dimensional embedding of movements
  • developmental robotics and evolutionary-based learning approaches


Photos: COMAN robot learning to walk; HOAP-2 kinesthetic teaching with a force sensor.

News

  • [May 21, 2013] New Job opening – Post-doc in Robot Learning on topic: “Machine Learning for Robotics”. Details here: http://kormushev.com/news/postdoc-opening-in-machine-learning-for-robotics-2013/
  • [August 10, 2012] New IROS 2012 Workshop: “Beyond Robot Grasping – Modern Approaches for Dynamic Manipulation”. The workshop will be held on October 12, 2012 in Algarve, Portugal. More information at the website of the workshop: http://www.ias.informatik.tu-darmstadt.de/Research/IROS2012
  • [March 28, 2012] New AIMSA 2012 Workshop organized by the TC on “Advances in Robot Learning and Human-Robot Interaction”. The workshop will be held on September 12, 2012 in Varna, Bulgaria. More information at the website of the workshop: http://kormushev.com/AIMSA-2012/
  • [March 13, 2012] New chairs of the TC. After three very successful years of this TC on Robot Learning, the founding chairs Jan Peters, Jun Morimoto, Russ Tedrake and Nicholas Roy are stepping down as chairs of the committee. They will be replaced by Petar Kormushev, Edwin Olson, Ashutosh Saxena, and Wataru Takano, who have kindly agreed to take the reins of the committee. Please see the changes to the mailing list addresses here.

Recent Activities of the Technical Committee

The technical committee regularly organizes special sessions associated with the “Robot learning” RAS keyword. If you want your paper to be considered for such a session and you have used this keyword in your submission, please forward an email to the TC co-chairs (contact info at: http://www.ieee-ras.org/robot-learning/contact). The technical committee will not be involved in the reviewing process, but will organize the session based on the list of accepted submissions with this keyword.


Photos: ironing robot; WAM robot reaching into a fridge.

TC-organized Workshops

This is a summary of the workshops organized by the IEEE TC on Robot Learning:

 

Technical Committee Website:

http://www.learning-robots.de

 

Chairs of the Technical Committee

Ashutosh Saxena
Cornell University, USA

Edwin Olson
University of Michigan, USA

Petar Kormushev
Italian Institute of Technology, Italy

Wataru Takano
University of Tokyo, Japan

 

Who is Peter Joseph?

A nice interview with Peter Joseph:

Zeitgeist Addendum

A very shocking and inspiring movie. I fully agree with the ideas in it:

http://www.thezeitgeistmovement.com/

Zeitgeist: Moving Forward

Something definitely worth watching:

http://www.thezeitgeistmovement.com/

The ultimate test of human character

The ultimate test of character

Writing good scientific papers

Good papers are like good wine: they need time to mature.

Of course, there are a few jerks out there, as Marc Raibert puts it, who can write perfect manuscripts on the first try, but if you’re reading this, I assume you are not one of these disgusting individuals.

So, for the rest of us mortals, I have tried to collect advice from various sources about how to write good scientific papers. Also, I contribute some of my own humble personal experience.

One of my favourite papers on this topic is, without doubt, Marc Raibert's article about “Spilling the beans”. If you haven't read it yet, please do so!
I totally agree with Raibert, and I always try to “spill the beans” in my own papers as much and as early as possible.

Another classic in the genre is Jim Kajiya’s article “How to Get Your SIGGRAPH Paper Rejected”.

The “Cargo Cult Science”, as named by Richard Feynman, is a must-see for all researchers, in my opinion. If you don’t know what I’m talking about, I recommend watching Feynman’s commencement address at Caltech at my Inspiration page.

Favorite quotes

Be prepared: ‘luck’ is where preparation meets opportunity.
― Randy Pausch

The quickest way to succeed is to accelerate the rate at which you fail.
― Reinforcement Learning folklore

Stay hungry, stay foolish.
― Steve Jobs

I hear, I forget; I see, I may remember; I do, I will never forget.
― Confucius

The only way of discovering the limits of the possible is to venture a little way past – into the impossible.
― unknown

Life is not measured by the number of breaths we take, but by the number of moments that take our breath away.
― unknown

Things should be as simple as possible, but no simpler.
― Albert Einstein

The best way to predict the future is to invent it.
― Alan Kay

Don’t only practice your art, but force your way into its secrets, for it and knowledge can raise men to the divine.
― Ludwig van Beethoven

We must become the change we want to see in the world.
― Mahatma Gandhi

If you don’t fail, you’re not even trying.
― unknown

To get something you never had, you have to do something you never did.
― unknown

Failing to plan is planning to fail.
― Winston Churchill

Talent does what it can, Genius does what it must.
― Edward Bulwer-Lytton

Good judgment comes from experience. Experience comes from bad judgment.
― unknown

Our virtues and our failings are inseparable, like force and matter. When they separate, man is no more.
― Nikola Tesla

Life is a journey, not a destination.
― Ralph Waldo Emerson

We cannot solve our problems with the same thinking we used when we created them.
― Albert Einstein

The two most important days in your life are the day you were born and the day you find out why.
― Mark Twain

If you want to go fast, go alone. If you want to go far, go together.
― old African proverb

What I cannot build, I do not understand.
― Richard Feynman

Any sufficiently advanced technology is indistinguishable from magic.
― Sir Arthur Charles Clarke

PhD comics

One of my favorite comics is PhD Comics (“Piled Higher and Deeper”), which captures many of the problems and funny moments of PhD-student and post-doc life.

 

Inspiration

A collection of highly-motivational and inspirational materials (at least for me):


Steve Jobs’s Commencement Speech in 2005 at Stanford University



Randy Pausch’s Last Lecture: Achieving Your Childhood Dreams



Jeff Hawkins’s TED Talk: Brain science is about to fundamentally change computing



Andrew Ng (Director of Stanford Artificial Intelligence Lab) – The Future of Robotics and Artificial Intelligence



Richard Feynman’s commencement address given in 1974 at Caltech – “Cargo Cult Science”



Bruno Bozzetto – Freedom must always be conquered (Celebration of 60 years of freedom, by Council of Bergamo)


Melinda Gates’ Graduation Speech at Duke University, 2013


Al Pacino’s “Inches” Speech, from the movie “Any Given Sunday”, 1999


Lectures in mechatronics and robotics

Sorry, this entry is only available in Bulgarian.

Robotics in Bulgaria

Sorry, this entry is only available in Bulgarian.

Invited talk

I occasionally give invited talks to present my latest research in machine learning and its applications to robot control.

If you would like me to give a presentation at your institution, please invite me!
You can download a tentative abstract of my invited talk here.

Robot Learning of Motor Skills

Abstract

Endowing robots with human-like abilities to perform motor skills in a smooth and natural way is a dream of many researchers. It has become clear now that this can only be achieved if robots, similarly to humans, are able to learn new skills by themselves. However, acquiring new motor skills is not simple and involves various forms of learning. Some tasks can be successfully transferred to the robot using only imitation strategies. Other tasks can be learned very efficiently by the robot alone using reinforcement learning. The efficiency of the process lies in the interconnections between imitation and self-improvement strategies.
In this talk, a variety of robot skill learning examples are presented, such as: autonomous valve turning using reactive policy learning, energy-efficient bipedal walking exploiting the passive compliance, whole-body motor skill learning for erasing a whiteboard, learning for improved control of autonomous underwater vehicles, etc. Throughout these examples, the important role of the policy representation for speeding up the learning process is highlighted.

Biography

Dr. Petar Kormushev is a researcher and team leader at the Advanced Robotics department of the Italian Institute of Technology (IIT). His research interests include robotics and machine learning, especially reinforcement learning for intelligent robot behavior. He obtained his PhD degree in Computational Intelligence from the Tokyo Institute of Technology in 2009. He holds an MSc degree in Artificial Intelligence and an MSc degree in Bio- and Medical Informatics. He is a technical coordinator of two EU FP7 projects, and the recipient of the 2013 John Atanasoff award, given by the President of Bulgaria to outstanding young scientists.

 

You can find an old version of my invited talk here.

English-Bulgarian Scientific Dictionary

I am trying to create a contemporary English-Bulgarian scientific dictionary containing modern, state-of-the-art scientific terms and their translations from English to Bulgarian and vice versa. Most of the included terms are so new that they do not yet have a well-established Bulgarian translation, which is one of the main reasons for building such a dictionary in the first place: to propose appropriate Bulgarian terms for these novel English ones.

The current version contains mostly terms from robotics and machine learning, because these are my main areas of research interest.

Try it here!

–Petar

iCub robot

The iCub robot is a humanoid robot developed within the project RobotCub. The iCub was designed and built mainly by the Italian Institute of Technology in Genoa, Italy.

The iCub robot

Reinforcement Learning

Reinforcement Learning is a type of Machine Learning approach in which the learning algorithm discovers by itself how to reach a given goal through a trial-and-error process.

Reinforcement Learning is different from supervised learning and unsupervised learning. It is a separate class of learning approaches that relies on the information given by a reward function.

The reward function is the way in which the goal is specified.
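
To make this concrete, below is a toy, self-contained sketch (not taken from any robot experiment) of trial-and-error learning: an epsilon-greedy agent that is given only a reward function and discovers by itself which of three actions is best.

    # Toy illustration of reinforcement learning: the agent is never told which
    # action is "correct"; it only receives a numeric reward and discovers the
    # best action by trial and error.
    import random

    TRUE_MEANS = [0.2, 0.5, 0.8]       # hidden quality of each action (unknown to the agent)

    def reward(action):
        """The reward function: the only way the goal is specified."""
        return TRUE_MEANS[action] + random.gauss(0, 0.1)

    estimates = [0.0] * len(TRUE_MEANS)    # agent's value estimate per action
    counts = [0] * len(TRUE_MEANS)
    epsilon = 0.1                          # exploration rate

    for step in range(1000):
        if random.random() < epsilon:                      # explore
            a = random.randrange(len(TRUE_MEANS))
        else:                                              # exploit current knowledge
            a = max(range(len(TRUE_MEANS)), key=lambda i: estimates[i])
        r = reward(a)
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]     # incremental mean update

    print("Learned action values:", [round(v, 2) for v in estimates])
    print("Best action found:", estimates.index(max(estimates)))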

To be continued…

COMAN – compliant humanoid robot


The COMAN robot is a compliant humanoid robot which is currently under development by the Advanced Robotics department of the Italian Institute of Technology in Genoa, Italy.

COMAN stands for “COmpliant huMANoid”, because the robot is designed with passive compliance (via springs) in its joints. This allows it to be more robust to environmental perturbations (e.g. walking on uneven ground), safer for human-robot interaction (soft to the touch), more energy-efficient, and able to perform more dynamic motions (e.g. jumping, running).

COMAN can also be interpreted as Co-Man, meaning a co-worker, a robot which is a partner to humans, designed for safe physical human-robot interaction. The robot’s design is derived from the compliant joint design of the cCub bipedal robot.

Petar with the robot COMAN developed at the Advanced Robotics department of IIT



This is a close-up of the passively-compliant legs of the robot:


Below is a video of the COMAN walking experiment I did together with Barkan Ugurlu and Nikos Tsagarakis. The goal was to learn to minimize the energy consumed by COMAN for walking. This video accompanies my IROS 2011 paper, presented in San Francisco in September 2011.


We present a learning-based approach for minimizing the electric energy consumption during walking of a passively-compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory which efficiently uses the robot's passive compliance. To do this, we propose a reinforcement learning method which evolves the policy parameterization dynamically during the learning process and thus manages to find better policies faster than with a fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN, where it achieves a significant energy reduction.
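
To give a rough feel for the idea of an evolving policy parameterization, here is a toy sketch: a trajectory policy starts with only a few parameters and is periodically refined, carrying over the best solution found so far. The cost function below is a made-up stand-in for the real energy measurements, and a simple episodic random search stands in for the actual reinforcement learning algorithm used in the paper.

    # Toy sketch of evolving policy parameterization: a trajectory is represented
    # by a few knots, and the parameterization is periodically refined while
    # keeping the best policy found so far.  The cost is a made-up stand-in,
    # NOT the robot's real energy model.
    import numpy as np

    def rollout_cost(traj):
        # Hypothetical cost: deviation from some unknown "efficient" profile.
        target = 0.05 * np.sin(np.linspace(0, np.pi, traj.size))
        return float(np.sum((traj - target) ** 2))

    def expand(params, new_n):
        """Refine the parameterization: re-sample the policy at more knots."""
        old_x = np.linspace(0, 1, params.size)
        new_x = np.linspace(0, 1, new_n)
        return np.interp(new_x, old_x, params)

    T = 50                                  # time steps of the trajectory
    n_knots = 4                             # start with a coarse parameterization
    best = np.zeros(n_knots)
    best_cost = rollout_cost(np.interp(np.linspace(0, 1, T),
                                       np.linspace(0, 1, n_knots), best))

    for episode in range(1, 301):
        if episode % 100 == 0:              # evolve: double the resolution
            n_knots *= 2
            best = expand(best, n_knots)
        candidate = best + np.random.normal(0.0, 0.01, n_knots)   # exploration noise
        traj = np.interp(np.linspace(0, 1, T),
                         np.linspace(0, 1, n_knots), candidate)
        cost = rollout_cost(traj)
        if cost < best_cost:                # keep improvements (episodic search)
            best, best_cost = candidate, cost

    print("knots:", n_knots, "final cost:", round(best_cost, 4))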

Link to publication:
Kormushev, P., Ugurlu, B., Calinon, S., Tsagarakis, N., and Caldwell, D.G., “Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization“, IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS-2011), San Francisco, 2011. [pdf] [bibtex]


This is an older prototype of COMAN, in July 2012:

Learning Robots

In recent years, there have been some amazing demonstrations of successful learning robots mastering difficult motor skills.
Here I have collected some of the most impressive ones, which I consider to be major milestones at the time they were done:


This is work done by my former colleague Stephen Hart: Dexter robot learning to reach


Work by James Kuffner at CMU:


This is work done by my friend and colleague Sylvain Calinon:


Work by Pieter Abbeel:

Robot Learning

One of my main research topics is robot learning. In machine learning, algorithms are usually classified into three classes: supervised learning (which in robotics roughly corresponds to imitation learning), unsupervised learning (which roughly corresponds to autonomous exploration), and reinforcement learning (learning from a reward signal through trial and error).

50 Years of Robotics (50 години роботика)

Sorry, this entry is only available in Bulgarian.

Bulgarian roboticists

I maintain a list of active Bulgarian researchers in robotics and machine learning. If you would like to be added to this list please contact me.

I also maintain a mailing list called Bulgarian Robotics Group, for exchanging useful information related to ongoing robotics projects, job opportunities, and other news, so that we can help each other. You can sign up for the mailing list at Google Groups here:
https://groups.google.com/forum/?fromgroups#!forum/bulgarian-robotics

List of active Bulgarian researchers in robotics and machine learning:

  • Petar Kormushev is a team leader of a research group at Italian Institute of Technology, working on robot learning by imitation and reinforcement learning.
  • Dragomir N. Nenchev is a professor at Tokyo City University, working on motion/force control, space robots, humanoid robots, and service robots.
  • Lubomir Lilov is a professor at Sofia University, heading the master’s program on Mechatronics and Robotics at the Faculty of Mathematics and Informatics.
  • Rosen Diankov is the author of OpenRAVE robotics platform for manipulation planning, etc.
  • Alexander Stoytchev is a professor at Iowa State University. He constructed a dual-arm Barrett WAM robot and is researching in developmental robotics.
  • Jivko Sinapov is a PhD student of prof. Alexander Stoytchev.
  • Alexander A. Petrov is an assistant professor at Ohio State University.
  • Ivan Kalaykov is a professor at Örebro University.
  • Dimitar N. Dimitrov is a post-doc at Örebro University.
  • Todor Stoyanov is a PhD student at Örebro University.
  • Stanimir Dragiev is a PhD student at CoR-Lab / TU Berlin.
  • Nikolay Jetchev is a PhD student at TU Berlin.
  • Kostadin G. Kostadinov is a professor in Mechatronics & Robotics in “Mechatronic Systems” Department of BAS.
  • Krasimir D. Kolarov was teaching robotics at Stanford University, now works at Apple Inc.
  • Anelia Angelova finished her PhD at California Institute of Technology in 2007, now works at Google.
  • Nayden Chivarov and Nedko Shivarov are with the Service Robots group at the Central Laboratory of Mechatronics and Instrumentation at BAS, working on SRS FP7 project.
  • Boicho Kokinov is an associate professor at New Bulgarian University. His work is mostly on cognitive science and not robotics.
  • Petko Kiriazov is an associate professor at Institute of Mechanics, Bulgarian Academy of Sciences.
  • Kiril Kiryazov is a PhD student at University of Skövde, Sweden.
  • Nikolay Stefanov is doing research at TUM in Germany.
  • Emanuel/Emo Todorov is an associate professor at University of Washington.
  • Vladimir Zamanov is teaching robotics at Technical University of Sofia.
  • Ivan Dryanovski is a PhD student at CCNY (City College of New York), doing research on 3D SLAM and Micro Air Vehicle Navigation.
  • Dragomir Anguelov was a PhD student at Stanford University, collaborating with prof. Sebastian Thrun on computer vision for robots. Now he is working at Google in Mountain View.
  • Dimitar Ivanov Chakarov is an associate professor at the Mechatronic Systems dept. of BAS in Bulgaria.
  • Evtim Venets Zahariev is an associate professor and head of the department of Dynamics and optimization of controlled mechanical systems at BAS in Bulgaria.
  • Andrey Popov is at Hamburg University of Technology, doing research on UAV robots (esp. quadrocopters and H-infinity controllers).
  • Marin Kobilarov is a post-doc in control and dynamical systems at Caltech, doing research on motion planning and control.
  • Bojan Jakimovski is the CEO of Bionics4Robotics, which is a robotics- and AI- related company in Munich, Germany.
  • Ilian Bonev is a professor working on precision robotics and parallel manipulators at ETS, Canada.
  • Roko Tschakarow works at SCHUNK as a Business Unit Manager System Solutions Mechatronics. His work is on building lightweight and modular robots.
  • Alexander Gegov is a Reader at University of Portsmouth, UK. His main research interests are in computational intelligence.
  • Dimitar H. Stefanov is with the Department of Electrical Engineering and Computer Sciences, Korea Advanced Institute of Science and Technology (KAIST).
  • Danail Stoyanov is with the Department of Computer Science, University College London (UCL), doing research in medical robotics.
  • Andon Topalov is a professor at the Control Systems Department of Technical University of Sofia, Branch Plovdiv, Bulgaria.
  • Petko Hr. Petkov is a professor at the Department of Systems and Control of Technical University of Sofia, Bulgaria.
  • Stefan Markov was a student at University Bremen, Germany. Research areas: Robot Perception and Learning, AI, Mobile Sensor Networks.
  • Chavdar Papazov is a post-doctoral researcher at Technische Universität München, Germany. Research areas: 3D shape registration, object recognition and pose estimation.
  • Svetlin Penkov is a student at Edinburgh University, UK.
  • Atanas Popov is a professor at the Faculty of Engineering at University of Nottingham, UK.
  • Svetan Ratchev is the director of the Institute for Advanced Manufacturing at University of Nottingham, UK.
  • Nikolay Atanasov is a PhD student at GRASP Lab, University of Pennsylvania, USA.
  • Kalin Gochev is a PhD student at the University of Pennsylvania, USA.
  • Marina Horn is a PhD student at the University of Heidelberg, Germany.
  • Galia Tzvetkova is an Associate Professor at the Institute of Mechanics, Bulgarian Academy of Sciences, Bulgaria.
  • Velin Dimitrov is a PhD student at Worcester Polytechnic Institute (WPI), USA.


List of other Bulgarian robotics enthusiasts and hobbyists:


List of robotics related events and websites in Bulgaria: