(excerpt from Ludmila Kuncheva’s page)
“Reviewing will become obsolete. It has been needed in the past because there has been no way to tap on a larger readers’ audience for an opinion poll. Peer reviewing has been the only credible way to maintain standards of publication. The growing diversity of topics makes this process impractical, biased or spurious. We have technology now! We can allow for peer reviewing on a massive scale. Imagine a large pool of papers, automatically clustered and positioned within a big mosaic. Where do you look for papers? I doubt very much that you browse the contents of all relevant journals. Thank God for Internet! Now suppose that you have access to all papers. The best ones will be spotted and cited over and over. The citations will replace the reviews.
There will be fewer journals such as Nature, Science and Lancet. Only the best papers will find their place in the journals. These papers will no longer be original research, they will be rather “the best of…”. Selected by citation from the pool, say for the past 1 year, these papers can undergo a round of peer review. This time, however, the reviewing rules will be different:
As an additional benefit, we will kill fewer trees. Plus, a lot of human resource will be freed for better use of their expertise and energy. “
Robot hardware is becoming progressively more complex, which has led to growing interest in applying machine learning and statistical approaches within the robotics community. At the same time, the machine learning community has increasingly used robots as motivating applications for new algorithms and formalisms. Considerable evidence of this can be seen in the use of robot learning approaches in high-profile competitions such as RoboCup and the DARPA Challenges, and in the growing number of research programs funded by governments around the world. The volume of research is also increasing, as shown by the number of robot learning papers accepted to IROS and ICRA, and the corresponding number of learning sessions.
Recent Activities of the Technical Committee
The technical committee regularly organizes special sessions associated with the “Robot learning” RAS keyword. If you want your paper to be considered for such a session and have used the above keyword in your submission, please send an email to the TC co-chairs (contact info at: http://www.ieee-ras.org/robot-learning/contact). The technical committee will not be involved in the reviewing process, but will organize the session based on the list of accepted submissions with this keyword.
This is a summary of the workshops organized by the IEEE TC on Robot Learning:
Technical Committee Website:
Chairs of the Technical Committee
A nice interview with Peter Joseph:
A very shocking and inspiring movie. I fully agree with the ideas in it:
The ultimate test of character
Good papers are like good wine: they need time to mature.
Of course, there are a few jerks out there, as Marc Raibert puts it, who can write perfect manuscripts on the first try, but if you’re reading this, I assume you are not one of these disgusting individuals.
So, for the rest of us mortals, I have tried to collect advice from various sources about how to write good scientific papers. I also add some of my own humble personal experience.
One of my favourite papers on this topic is, without a doubt, Marc Raibert’s paper about “Spilling the beans”. If you haven’t read it yet, please do so!
Another classic in the genre is Jim Kajiya’s article “How to Get Your SIGGRAPH Paper Rejected”.
“Cargo Cult Science”, as Richard Feynman named it, is a must-see for all researchers, in my opinion. If you don’t know what I’m talking about, I recommend watching Feynman’s commencement address at Caltech, linked on my Inspiration page.
Be prepared: ‘luck’ is where preparation meets opportunity.
The quickest way to succeed is to accelerate the rate at which you fail.
Stay hungry, stay foolish.
I hear, I forget; I see, I may remember; I do, I will never forget.
The only way of discovering the limits of the possible is to venture a little way past – into the impossible.
Life is not measured by the number of breaths we take, but by the number of moments that take our breath away.
Things should be as simple as possible, but no simpler.
The best way to predict the future is to invent it.
Don’t only practice your art, but force your way into its secrets, for it and knowledge can raise men to the divine.
We must become the change we want to see in the world.
If you don’t fail, you’re not even trying.
To get something you never had, you have to do something you never did.
Failing to plan is planning to fail.
Talent does what it can, Genius does what it must.
Good judgment comes from experience. Experience comes from bad judgment.
Our virtues and our failings are inseparable, like force and matter. When they separate, man is no more.
Life is a journey, not a destination.
One of my favorite comics is PhD Comics (“Piled Higher and Deeper”), which captures many of the problems and funny moments of PhD-student and post-doc life.
A collection of highly-motivational and inspirational materials (at least for me):
Steve Jobs’s Commencement Speech in 2005 at Stanford University
Randy Pausch’s Last Lecture: Achieving Your Childhood Dreams
Jeff Hawkins’s TED Talk: Brain science is about to fundamentally change computing
Andrew Ng (Director of Stanford Artificial Intelligence Lab) – The Future of Robotics and Artificial Intelligence
Richard Feynman’s commencement address given in 1974 at Caltech – “Cargo Cult Science”
Bruno Bozzetto – Freedom must always be conquered (Celebration of 60 years of freedom, by Council of Bergamo)
Melinda Gates’ Graduation Speech at Duke University, 2013
Al Pacino’s “Inches” Speech, from the movie “Any Given Sunday”, 1999
I occasionally give invited talks to present my latest research in machine learning and its applications to robot control.
If you would like me to give a presentation at your institution, please invite me!
Robot Learning of Motor Skills
Endowing robots with human-like abilities to perform motor skills in a smooth and natural way is a dream of many researchers. It has become clear now that this can only be achieved if robots, similarly to humans, are able to learn new skills by themselves. However, acquiring new motor skills is not simple and involves various forms of learning. Some tasks can be successfully transferred to the robot using only imitation strategies. Other tasks can be learned very efficiently by the robot alone using reinforcement learning. The efficiency of the process lies in the interconnections between imitation and self-improvement strategies.
Dr. Petar Kormushev is a researcher and a team leader at the Advanced Robotics department of the Italian Institute of Technology (IIT). His research interests include robotics and machine learning, especially reinforcement learning for intelligent robot behavior. He obtained his PhD degree in Computational Intelligence from Tokyo Institute of Technology in 2009. He holds an MSc degree in Artificial Intelligence and an MSc degree in Bio- and Medical Informatics. He is a technical coordinator of two EU FP7 projects, as well as the recipient of the 2013 John Atanasoff award, given by the President of Bulgaria to an outstanding young scientist.
You can find an old version of my invited talk here.
I am trying to create a contemporary English-Bulgarian scientific dictionary containing modern, state-of-the-art scientific terms and their translations from English to Bulgarian and vice versa. Many of the included terms are so new that they do not yet have a well-established Bulgarian translation, which is one of the main motivations for building such a dictionary in the first place: to propose appropriate Bulgarian terms for the novel English ones.
The current version contains mostly terms from robotics and machine learning, because these are my main areas of research interest.
The iCub robot is a humanoid robot developed within the RobotCub project. The iCub was designed and built mainly by the Italian Institute of Technology in Genoa.
Reinforcement Learning is a type of Machine Learning approach in which the learning algorithm discovers by itself, through a trial-and-error process, how to reach a given goal.
Reinforcement Learning is different from supervised learning and unsupervised learning. It is a separate class of learning approaches that relies on feedback given by a reward function.
The reward function is the way in which the goal is specified.
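The trial-and-error loop described above can be illustrated with a minimal sketch: a three-armed bandit in which the agent never sees the correct action, only the numeric reward. All names and constants here are invented for the example; this is not taken from any specific library.

```python
import random

# Hidden reward probabilities of the three actions (the agent never sees these).
TRUE_MEANS = [0.2, 0.5, 0.8]

def reward(action):
    """The reward function specifies the goal: 1 with probability TRUE_MEANS[action]."""
    return 1.0 if random.random() < TRUE_MEANS[action] else 0.0

def run(steps=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    values = [0.0, 0.0, 0.0]   # estimated value of each action
    counts = [0, 0, 0]
    for _ in range(steps):
        # Epsilon-greedy trial and error: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(3)
        else:
            a = max(range(3), key=lambda i: values[i])
        r = reward(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]   # incremental mean update
    return values

print(run())  # the estimate for action 2 should end up highest (≈ 0.8)
```

Purely from reward feedback, the agent discovers that action 2 is best, which is exactly the sense in which the goal is “specified” only through the reward function.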
To be continued…
COMAN stands for “COmpliant huMANoid”, because this robot is designed with passive compliance (via springs) in its joints. This allows it to be more robust to environmental perturbations (e.g. walking on uneven ground), to be safer for human-robot interaction (soft to the touch), to be more energy-efficient, and to perform more dynamic motions (e.g. jumping, running).
COMAN can also be interpreted as Co-Man, meaning a co-worker, a robot which is a partner to humans, designed for safe physical human-robot interaction. The robot’s design is derived from the compliant joint design of the cCub bipedal robot.
This is a close-up of the passively-compliant legs of the robot:
Below is a video of the COMAN walking experiment I did together with Barkan Ugurlu and Nikos Tsagarakis. The goal was to learn to minimize the energy consumption used for walking by COMAN. This video accompanies my IROS 2011 paper presented in San Francisco, in September 2011.
We present a learning-based approach for minimizing the electric energy consumption during walking of a passively-compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory which uses efficiently the robot’s passive compliance. To do this, we propose a reinforcement learning method which evolves the policy parameterization dynamically during the learning process and thus manages to find better policies faster than by using fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN where it achieves significant energy reduction.
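The core idea of evolving the policy parameterization during learning can be sketched with a toy example. This is not the paper’s exact algorithm: the target function, the hill-climbing search, and all names and constants below are invented stand-ins for the function approximation task mentioned in the abstract.

```python
import math
import random

def target(t):
    """Stand-in for the trajectory the policy should approximate."""
    return math.sin(2 * math.pi * t)

def policy(weights, centers, t, width=0.15):
    """Trajectory encoded as a weighted sum of radial basis functions."""
    return sum(w * math.exp(-((t - c) / width) ** 2)
               for w, c in zip(weights, centers))

def cost(weights, centers, samples=50):
    """Mean squared error between the policy and the target trajectory."""
    return sum((policy(weights, centers, i / samples) - target(i / samples)) ** 2
               for i in range(samples)) / samples

def learn(rounds=4, iters=300, seed=1):
    random.seed(seed)
    centers = [0.0, 0.5, 1.0]          # start with a coarse parameterization
    weights = [0.0] * len(centers)
    best = cost(weights, centers)
    for _ in range(rounds):
        for _ in range(iters):         # simple coordinate hill climbing
            i = random.randrange(len(weights))
            cand = list(weights)
            cand[i] += random.gauss(0.0, 0.2)
            c = cost(cand, centers)
            if c < best:
                weights, best = cand, c
        # Evolve the parameterization: interleave new basis functions with
        # zero weight, refining the policy space without changing the
        # current policy.
        new_centers, new_weights = [centers[0]], [weights[0]]
        for j in range(1, len(centers)):
            new_centers.append((centers[j - 1] + centers[j]) / 2)
            new_weights.append(0.0)
            new_centers.append(centers[j])
            new_weights.append(weights[j])
        centers, weights = new_centers, new_weights
    return best

print(learn())  # the cost only decreases as the parameterization is refined
```

Because new basis functions are inserted with zero weight, each refinement preserves the current policy exactly, so the search can move to a richer parameterization without losing the progress made under the coarser one.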
Link to publication:
This is the latest prototype of COMAN, as of July 2012:
In recent years, there have been some amazing demonstrations of learning robots successfully mastering difficult motor skills.
This is work done by my former colleague Stephen Hart: Dexter robot learning to reach
Work by James Kuffner at CMU:
This is work done by my friend and colleague Sylvain Calinon:
Work by Pieter Abbeel:
One of my main research topics is robot learning. Normally, machine learning approaches are classified into three classes: supervised learning (which in robotics often takes the form of imitation learning), unsupervised learning (e.g. autonomous exploration), and reinforcement learning (self-improvement through trial and error).
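Since imitation learning is essentially supervised learning on demonstrated data, a minimal sketch is just regression on (state, action) pairs. The toy demonstrations and the linear policy form below are invented for illustration:

```python
# Imitation learning as supervised regression: given demonstrated
# (state, action) pairs from a teacher, fit a linear policy
# action = a * state + b by ordinary least squares.

demos = [(0.0, 0.1), (0.5, 0.6), (1.0, 1.1), (1.5, 1.6)]  # (state, action)

n = len(demos)
sx = sum(s for s, _ in demos)
sy = sum(act for _, act in demos)
sxx = sum(s * s for s, _ in demos)
sxy = sum(s * act for s, act in demos)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
b = (sy - a * sx) / n                            # intercept

print(a, b)  # ≈ 1.0 and 0.1: the learned policy reproduces the teacher
```

Unlike the reinforcement-learning setting, here the correct action for each state is given directly by the teacher, so learning reduces to fitting the demonstrations.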
I maintain a list of active Bulgarian researchers in robotics and machine learning. If you would like to be added to this list please contact me.
I also maintain a mailing list called Bulgarian Robotics Group, for exchanging useful information related to ongoing robotics projects, job opportunities, and other news to help each other. You can sign up for the mailing list at Google Groups here:
List of active Bulgarian researchers in robotics and machine learning:
List of other Bulgarian robotics enthusiasts and hobbyists:
List of robotics related events and websites in Bulgaria: