I will also continue supervising PhD students at the Robot Learning and Interaction Lab, which I have led until now at the Italian Institute of Technology (IIT).
The newly established Dyson School of Design Engineering at Imperial College London is currently recruiting for a Senior Lecturer post in Robotics and Physical Computing, and is looking for highly skilled, enthusiastic and well-motivated applicants wishing to build a career at one of the world’s leading teaching and research institutions.
The Dyson School of Design Engineering was launched in July 2014, providing leading-edge design engineering undergraduate and postgraduate education and research. The School offers a new four-year MEng undergraduate programme in Design Engineering, launched in October 2015, which represents a rigorous approach to design engineering, creativity, commerce and enterprise appropriate to 21st-century industry. In addition, the School offers two established double-Master’s programmes, Innovation Design Engineering (IDE) and Global Innovation Design (GID), run jointly with the Royal College of Art.
Applications are invited from individuals with a strong academic record (including a relevant PhD or equivalent) in a relevant engineering field (e.g. mechanical, electrical, software or control engineering), or in a field related to robotics, computing, manufacturing, intelligent systems, industrial design, or innovation design engineering. Where relevant, experience in a multidisciplinary context would be desirable. Applicants must have a track record of high-quality research, demonstrated by recent exceptional publications in internationally leading journals and conferences in robotics, as well as proven teaching excellence. Applicants are required to submit, together with their applications, their four best journal papers published since January 2010.
Successful candidates will be expected to contribute to undergraduate and postgraduate teaching and to play a leading role in developing the School’s research in the relevant area, building on and extending the School’s current activities.
Informal enquiries may be made to Dr Petar Kormushev (p.kormushev@imperial.ac.uk) and Prof. Peter Childs (p.childs@imperial.ac.uk) who is Head of the Dyson School of Design Engineering.
Job details
Location: London, South Kensington
Salary: £57,020 per annum
Hours: Full Time
Contract Type: Permanent
Application deadline: 15th July 2016
How to apply
The preferred method of application is online via the website https://www.imperial.ac.uk/job-applicants/ (please select “Job Search/Academic” then the job title or vacancy reference number, EN20160173AM). Please complete and upload an application form as directed.
There are outstanding opportunities to become a postdoctoral research associate (PDRA) at Imperial College London. Below I have listed five of the highly competitive funding schemes.
I am available to mentor potential Post-Doc applicants on research topics related to robotics and machine learning. Interested candidates should contact me by e-mail before submitting their application.
Students can apply to receive a bursary (funding) for the duration of their UROP project. The bursary will provide the student with a contribution towards their living costs for 6-12 weeks while undertaking a research experience within Imperial College during the summer of 2016.
The deadline to submit an application for funding is 14 March 2016.
Supervision
I am available to supervise undergraduate students for a UROP project on topics related to robotics and machine learning. Interested applicants should contact me by e-mail [p.kormushev (at) imperial.ac.uk].
UROP project description (tentative)
Title: Robotics and Machine Learning UROP
Description: Depending on the skills and interests of the student, this UROP project could include designing a new robot, creating it using 3D printing, and controlling it. The main focus is on novelty: coming up with a novel robot design, a novel robot controller, or a novel way to manufacture a robot, such as a robot arm or a mobile robot. In terms of software, the focus is on applying Machine Learning methods for the flexible control of a robot, and on allowing the robot to learn new skills from experience. The topic is quite flexible and will be defined in collaboration with the student.
Requirements: Basic knowledge of robotics, software programming skills, creativity.
Fully funded (all tuition fees paid) for UK/EU nationals, with an additional stipend of £18,000 per annum.
While this position is also open to Overseas applicants, they will only be funded up to the UK/EU level, and will be expected to provide self-funding for the remaining tuition fees.
PhD Research Topic
The foundations of robotics and robot control were established at a time when very limited computational power was available. Therefore, robot designs and control algorithms were simplified to the extreme. Nowadays, we have huge computational resources at our disposal, but we still continue to build and control robots based on the old concepts. For example, it is still standard to assume that the robot links are rigid bodies and that the pose of the end-effector can be calculated through simple forward kinematics from the measured joint angles. Such assumptions lead to bulky and heavy robots, because the links must be designed not to bend during operation. Even series-elastic actuation relies on the same assumption of rigid links.
The goal of this PhD research project is to investigate a radically new approach to controlling robots based on Machine Learning. Instead of using hand-crafted analytic models of a robot, the robot will learn its own model. Machine learning, including Deep Learning and Reinforcement Learning, can be used to autonomously learn forward and inverse models of a robot’s kinematics and dynamics. Computer vision can be used to provide perception of both the environment and the robot’s own body. The ultimate goal would be the creation of a plug-and-play controller that works without any prior knowledge of the robot.
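As a toy sketch of this idea (my own illustration, not part of the project), the following Python snippet learns a forward kinematics model of a simulated 2-link arm purely from sampled data, using a generic function approximator (random Fourier features with ridge regression) that has no built-in knowledge of the kinematics:

```python
import numpy as np

rng = np.random.default_rng(1)

def fk_true(q):
    """Stand-in for the real robot: a simulated 2-link planar arm that we can
    only sample (command joint angles, observe the end-effector position)."""
    return np.column_stack([np.cos(q[:, 0]) + 0.5 * np.cos(q[:, 0] + q[:, 1]),
                            np.sin(q[:, 0]) + 0.5 * np.sin(q[:, 0] + q[:, 1])])

# Collect experience: random joint configurations and the observed positions.
Q = rng.uniform(-np.pi, np.pi, size=(2000, 2))
X = fk_true(Q)

# Generic function approximator with no built-in kinematic knowledge:
# random Fourier features followed by ridge regression.
W = rng.normal(size=(2, 300))
b = rng.uniform(0, 2 * np.pi, size=300)
phi = lambda q: np.cos(q @ W + b)

A = phi(Q)
coef = np.linalg.solve(A.T @ A + 1e-3 * np.eye(300), A.T @ X)

# The learned forward model, built purely from data.
fk_learned = lambda q: phi(q) @ coef

# Evaluate on held-out joint configurations.
q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.abs(fk_learned(q_test) - fk_true(q_test)).max()
```

On held-out configurations the learned model closely matches the true kinematics, despite never being told the link lengths or structure; a real system would of course face noise, dynamics, and far higher dimensionality.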
Such a solution offers tremendous potential to revolutionize the way we design and control robots, and to significantly expand their capabilities. For example, the robot links would no longer need to be so stiff, and the kinematics would no longer need to be fixed. As an illustration, imagine a lightweight prosthetic arm or a robot exoskeleton that can grow, bend, and adapt to accommodate its patient. Such a device would be impossible to control with existing control methods. Another example is flexible use of tools, where the robot easily adapts its controller to any new tool by online learning of the combined arm-plus-tool kinodynamics. Further applications are envisioned for soft robots (e.g. elephant-trunk-like robots), which are difficult to control with conventional approaches.
This research has the potential to lead to re-thinking of the established robot design paradigm (stiff links, fixed kinematics), since robot design and control are tightly coupled: the way we control robots determines the way we design them, and vice versa. Novel robot designs will be sought that leverage the rise of affordable 3D printing and novel smart materials, and could lead to the development of hybrid soft-hard robots, modular and reconfigurable robots (evolving hardware), self-repairing and self-improving robots, etc.
Funding
The funding for this PhD position is provided by Dyson Ltd. Their focus is on forward-looking research in robot perception and control with the goal of developing the breakthrough technology which will lie at the heart of new categories of robotic products for the home and beyond. Potential applications for the developed research will be sought in close collaboration with Dyson’s Robotics Research group.
Supervision
The PhD student will be supervised by Dr Petar Kormushev at the Dyson School of Design Engineering, with possible co-supervision from the Dyson Robotics Lab at Imperial’s Department of Computing.
Workplace
The Dyson School of Design Engineering is the 10th and newest engineering department at Imperial College London. It was formed in July 2014, building on the long-standing design and engineering expertise at Imperial as well as the world-renowned Innovation Design Engineering (IDE) programme run jointly by Imperial and the Royal College of Art. The School has a fast growing population of both staff and students. It is located at the South Kensington campus of Imperial, right next to Hyde Park.
Requirements
– You must have an MEng or MSc degree (or equivalent experience and/or qualifications) in an area pertinent to the subject area, i.e. Computing, Mathematics or Engineering.
– You must have an undergraduate degree at UK 1st class or 2:1 level (or international equivalent).
– You must be fluent in spoken and written English and meet Imperial’s English standards.
– You must have excellent communication skills and be able to organise your own work and prioritise work to meet deadlines.
– The ideal candidate will have a strong background in both Machine Learning and Robotics.
– A strong academic track record and practical software skills are desired.
– Any published scientific papers would be a plus.
How To Apply
All applications must be sent to Dr Petar Kormushev (p.kormushev [at] imperial.ac.uk) with the keyword “[PhD-2016-Imperial-Dyson]” in the subject field.
Applications must include the following:
– Full CV, with a list of any significant course projects and/or industrial experience;
– A 2-page research statement indicating what you see are interesting research issues relating to the above PhD topic description and why your expertise is relevant;
– Full academic transcripts/grades;
– A copy of all publications of the applicant (if any).
The Imperial College PhD Scholarship Scheme offers an outstanding opportunity for potential PhD students.
If you are a high performing undergraduate or Master’s student, and have a strong desire to undertake a PhD programme at a world class research institution, you could be selected to receive full tuition fees and a generous stipend for a PhD place at Imperial College London.
Opportunities for PhD funding are extremely competitive. In the 2015-16 PhD admissions period, less than half of the eligible PhD applicants who nominated themselves for the IC PhD Scholarship were shortlisted by their chosen Department to be considered for this scheme, Imperial’s most prestigious award. Ultimately, only 19% of those who self-nominated were awarded the scholarship. Applicants should be confident that they are able to demonstrate outstanding academic performance before applying for this scholarship scheme.
The scheme aims to provide up to 50 research students with great potential the opportunity to work within their chosen research field with the support of an excellent supervisor.
The earliest start date for funded places is 1 August 2016; the latest start date is 1 November 2016.
Funding
Successful candidates will receive the following financial support for up to 3.5 years:
Full funding for tuition fees
A stipend of £20,600 per annum to assist with living costs
A consumables fund of £2,000 per annum for the first 3 years of study
Deadlines
Applications put forward for this scholarship scheme will be considered throughout the academic year.
Applicants who apply before 29 January 2016 and are awarded a scholarship will be notified by 23 March 2016.
Applicants who apply before 1 April 2016 and are awarded a scholarship will be notified by 27 May 2016.
Supervision
I am available to supervise PhD students on topics related to robotics and machine learning. Interested applicants should contact me by e-mail before submitting their PhD application.
The video presents experiments during which a humanoid robot is subjected to external pushes and recovers stability by changing the step placement and duration.
It starts by showing the effectiveness of the feedback controller during stepping in place. It then presents how the developed algorithm regenerates the step placement and duration to regain stability after lateral pushes. It concludes by showing how the algorithm works during forward locomotion.
Citation:
Przemyslaw Kryczka, Petar Kormushev, Nikos Tsagarakis, Darwin G. Caldwell, “Online Regeneration of Bipedal Walking Gait Optimizing Footstep Placement and Timing”, In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 2015.
The video demonstrates a novel concept for kinematic-free control of a robot arm. It implements an encoderless robot controller that does not rely on any joint angle information or estimation and does not require any prior knowledge about the robot kinematics or dynamics.
The approach works by generating actuation primitives and perceiving their effect on the robot’s end-effector using an external camera, thereby building a local kinodynamic model of the robot.
The experiments with this proof-of-concept controller show that it can successfully control the position of the robot. More importantly, it can adapt even to drastic changes in the robot kinematics, such as 100% elongation of a link, 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links.
The proposed control approach looks promising and has many potential applications not only for the control of existing robots, but also for new robot designs.
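To make the concept concrete, here is a much-simplified Python sketch of the idea (my own illustration; the actual controller in the paper is more sophisticated and uses a real camera). Small actuation primitives are applied, their effect on the end-effector is observed, and the resulting local model is inverted for position control:

```python
import numpy as np

def observe(q, lengths):
    """Ground-truth 2-DOF planar arm, standing in for the external camera:
    the controller never reads joint encoders or a kinematic model, it only
    sees the resulting end-effector position."""
    return np.array([lengths[0] * np.cos(q[0]) + lengths[1] * np.cos(q[0] + q[1]),
                     lengths[0] * np.sin(q[0]) + lengths[1] * np.sin(q[0] + q[1])])

def local_model(q, lengths, eps=1e-3):
    """Apply small actuation primitives and observe their effect, building a
    local linear model dx = J dq (a finite-difference Jacobian estimate)."""
    J = np.zeros((2, 2))
    x0 = observe(q, lengths)
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (observe(q + dq, lengths) - x0) / eps
    return J

def control_step(q, target, lengths, gain=0.5):
    """One control step: re-estimate the local model, then move toward target."""
    J = local_model(q, lengths)
    return q + gain * np.linalg.pinv(J) @ (target - observe(q, lengths))

# The same controller copes with a drastic kinematic change (100% elongation
# of the second link), because the local model is re-estimated online.
target = np.array([1.0, 1.0])
final_errors = []
for lengths in ([1.0, 1.0], [1.0, 2.0]):
    q = np.array([0.5, 1.5])
    for _ in range(150):
        q = control_step(q, target, lengths)
    final_errors.append(np.linalg.norm(observe(q, lengths) - target))
```

In both cases the end-effector converges to the target without any prior kinematic knowledge; the simulated arm, link lengths, and gains here are illustrative assumptions only.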
Citation:
Petar Kormushev, Yiannis Demiris, Darwin G. Caldwell, “Kinematic-free Position Control of a 2-DOF Planar Robot Arm”, In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 2015.
WALK-MAN is a humanoid robot developed by the Italian Institute of Technology and the University of Pisa in Italy, within the European-funded project WALK-MAN (www.walk-man.eu). The project is a four-year research programme which started in October 2013 and aims to develop a humanoid robot for disaster response operations.
WALK-MAN is an acronym for “Whole-body Adaptive Locomotion and Manipulation”, underlining its main research goal: enhancing the capabilities of existing humanoid robots, permitting them to operate in emergency situations, while assisting or replacing humans in damaged civil sites and buildings, such as factories, offices and houses. In such scenarios, the WALK-MAN robot will demonstrate human-like locomotion, balance and manipulation capabilities. To reach these targets, the WALK-MAN design principles and implementation rely on high-performance actuation systems, a compliant body, and soft underactuated hand designs, taking advantage of recent developments in mechanical design, actuation and materials.
The first prototype of the WALK-MAN robot will participate in the DARPA Robotics Challenge finals in June 2015, but it will be further developed in both hardware and software, in order to validate the project results in realistic scenarios, also consulting civil defence bodies. The technologies developed within the WALK-MAN project also have a wide range of other applications, including industrial manufacturing, co-worker robots, and inspection and maintenance robots in dangerous workspaces, and may be provided to others on request.
Technical details
The prototype WALK-MAN platform is an adult-size humanoid with a height of 1.85 m, an arm span of 2 m and a weight of 118 kg. The robot is fully power-autonomous, electrically powered by a 2 kWh battery unit; its body has 33 degrees of freedom (DOF), actuated by high-power electric motors all equipped with intrinsic elasticity, which gives the robot superior physical interaction capabilities.
The robot perception system includes torque sensing, end-effector F/T sensors, and a head module equipped with a stereo vision system and a rotating 3D laser scanner, the posture of which is controlled by a 2-DOF neck chain. Extra RGB-D and colour cameras mounted at fixed orientations provide additional coverage of the locomotion and manipulation space. IMU sensors at the head and the pelvis provide the necessary inertial/orientation sensing of the body and head frames. Protective soft covers mounted along the body permit the robot to withstand impacts, including those occurring during falls. The software interface of the robot is based on YARP (www.yarp.it) and ROS (www.ros.org).
Call for PhD students for 2016
PhD Program in Bioengineering and Robotics
University of Genoa, jointly with the Italian Institute of Technology (IIT)
PhD positions with scholarships are available at the Italian Institute of Technology (IIT)
Location: Genoa, Italy
Starting date: November 2015
Application deadline: June 10th, 2015, noon (Italian time: GMT+2)
Please note that IIT is an English-language research institute, so it is not required to speak Italian.
IIT has state-of-the-art facilities and has rapidly established itself among top research institutes worldwide. IIT has a strong international character, with more than 40% foreign scientific staff drawn from over 50 countries worldwide.
I have 4 PhD student positions open in my lab (Robot Learning and Interaction Lab) under Themes 26, 27, 28 and 29 in the area of Robot Learning as described below.
THEME 26. Robotic Surgery with Improved Safety using Machine Learning for Intelligent Robot Tele-operation and Partial Autonomy
Tutors: Dr. Petar Kormushev, Prof. Darwin G. Caldwell
Description: Flexible hyper-redundant systems are of increasing interest in medical applications, where the flexibility of the robot can be used to direct the surgery around delicate tissues. However, these systems are highly non-linear with complex dynamics, making them very difficult to control.
This project will develop and implement machine learning algorithms to improve the intelligence of control and perception in flexible devices and enhance safety.
The advantages of using machine learning will be investigated in multiple potential areas: in low-level robot control, using model learning approaches; in feedback control, considering multi-modal input from position, force and pressure sensors; and in tele-operation, using learning of context-dependent skills to assist the human operators (surgeons).
The work will also investigate the possibility of using partial autonomy at a lower control level, using reactive strategies for robot control. With respect to safety, the project will consider how to use the developed learning algorithms to automatically detect abnormalities during robot tele-operation. These abnormalities may include excessive forces/pressures, excessive bending, and unusual signals potentially indicating problems during the medical procedure.
Requirements: background in computer science, mathematics, engineering, physics or related disciplines.
THEME 27. Novel Robot Control Paradigms enabled by Machine Learning for Intelligent Control of the Next Generation Compliant and Soft Robots
Tutors: Dr. Petar Kormushev, Prof. Darwin G. Caldwell
Description: Despite significant mechatronic advances in robot design, the motor skill repertoire of current robots is mediocre compared to that of their biological counterparts; the motor skills of humans and animals are still utterly astonishing when compared to those of robots. This PhD theme will focus on machine learning methods to advance the state-of-the-art in robot learning of motor skills. The types of motor skills that will be investigated include object manipulation, compliant interaction with objects, humans and the environment, force control, and vision as part of the robot learning architecture.
The creation of novel, high-performance, passively-compliant humanoid robots (such as the robot COMAN developed at IIT) offers a significant potential for achieving such advances in motor skills. However, as the bottleneck is not the hardware anymore, the main efforts should be directed towards the software that controls the robot. It is no longer reasonable to use over-simplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, new solutions should be sought to address the challenge of compliant robot control.
Ideas from developmental robotics will be considered, in search of a qualitatively better approach for controlling robots, different from the currently predominant approach based on manually-engineered controllers.
The work within this PhD theme will include developing novel robot learning algorithms and methods that allow humanoid robots to easily learn new skills. At the same time, the methods should allow for natural and safe interaction with people. To this end, the research will include learning by imitation and reinforcement learning, as well as human-robot interaction.
Requirements: background in computer science, mathematics, engineering, physics or related disciplines.
THEME 28. Agile Robot Locomotion using Machine Learning for Intelligent Control of Advanced Humanoid Robots
Tutors: Dr. Petar Kormushev, Dr. Nikos Tsagarakis
Description: The state-of-the-art high-performance, passively-compliant humanoid robots (such as the robot COMAN developed by IIT) offer a significant potential for achieving more agile robot locomotion. At this stage, the bottleneck is not the hardware anymore, but the software that controls the robot. It is no longer reasonable to use over-simplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, a new solution should be sought to address the challenge of compliant humanoid robot control.
In this PhD theme, the use of machine learning and robot learning methods will be explored in order to achieve novel ways of whole-body compliant humanoid robot control. In particular, the focus will be on achieving agile locomotion based on robot self-learned dynamics, rather than on a pre-engineered dynamics model. The PhD candidates will be expected to develop new algorithms for robot learning and to advance the state-of-the-art in humanoid robot locomotion.
The expected outcome of these efforts includes the realization of highly dynamic bipedal locomotion, such as omni-directional walking on uneven surfaces, coping with multiple contacts with the environment, and jumping and running robustly on uneven terrain in the presence of high uncertainty, while demonstrating robustness and tolerance to external disturbances. The ultimate goal will be to achieve locomotion skills comparable to those of a 1.5-to-2-year-old child.
Requirements: background in computer science, mathematics, engineering, physics or related disciplines.
THEME 29. Dexterous Robotic Manipulation using Machine Learning for Intelligent Robot Control and Perception
Tutors: Dr. Petar Kormushev, Prof. Darwin G. Caldwell
Description: This project will investigate collaborative human-robot task learning and execution that uses the available perception (particularly tactile). The work will develop algorithms for learning collaborative skills through direct interaction between a non-expert user and a robot, and will build the necessary control algorithms to allow effortless and safe physical human-robot interaction using the available tactile feedback.
The final objectives will include: acquiring the perceptual information needed for the robot to co-manipulate an object with a human; understanding the human’s state in an interaction task so as to react properly; and building a framework for online compliant human-robot interaction based on real-time feedback of the state of the object and the human.
The project will also consider semi-supervised and unsupervised skill learning approaches. It will develop tactile-guided autonomous learning algorithms based on state-of-the-art methods for reinforcement learning and deep learning. The tactile feedback will help the robot to autonomously increase the performance of skill execution through trial-and-error interactions with the objects in the environment.
In addition, this work will focus on supervised skill learning approaches. It will develop tactile-guided learning algorithms based on state-of-the-art methods for learning by imitation and visuospatial skill learning. The tactile perception information will be used in both the learning phase and the execution phase, to improve the robustness and range of the motor skill repertoire.
Requirements: background in computer science, mathematics, engineering, physics or related disciplines.
University of Genoa, jointly with the Italian Institute of Technology (IIT)
PhD Program in Bioengineering and Robotics
Call for PhD students for 2015
PhD positions with scholarships are available at the Italian Institute of Technology (IIT)
Location: Genoa, Italy
Starting date: November 2014
Application deadline: August 22, 2014 at 12:00 noon (Italian time/CET)
Please note that IIT is an English-language research institute, so it is not required to speak Italian.
I have two PhD positions open in my team, under Themes 21 and 22 respectively. Both are in the area of Robot Learning, as described below. If you are interested, please contact me well before the application deadline!
THEME 21. Robot Learning of Motor Skills
Tutors: Dr. Petar Kormushev, Prof. Darwin G. Caldwell
Description: Despite significant mechatronic advances in robot design, the motor skill repertoire of current robots is mediocre compared to that of their biological counterparts; the motor skills of humans and animals are still utterly astonishing when compared to those of robots. This PhD theme will focus on machine learning methods to advance the state-of-the-art in robot learning of motor skills. The types of motor skills that will be investigated include object manipulation, compliant interaction with objects, humans and the environment, force control, and vision as part of the robot learning architecture.
The creation of novel, high-performance, passively-compliant humanoid robots (such as the robot COMAN developed at IIT) offers a significant potential for achieving such advances in motor skills. However, as the bottleneck is not the hardware anymore, the main efforts should be directed towards the software that controls the robot. It is no longer reasonable to use over-simplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, new solutions should be sought to address the challenge of compliant robot control.
Ideas from developmental robotics will be considered, in search of a qualitatively better approach for controlling robots, different from the currently predominant approach based on manually-engineered controllers.
The work within this PhD theme will include developing novel robot learning algorithms and methods that allow humanoid robots to easily learn new skills. At the same time, the methods should allow for natural and safe interaction with people. To this end, the research will include learning by imitation and reinforcement learning, as well as human-robot interaction.
THEME 22. Robot Learning for Agile Locomotion
Tutors: Dr. Petar Kormushev, Dr. Nikos Tsagarakis
Description: The state-of-the-art high-performance, passively-compliant humanoid robots (such as the robot COMAN developed by IIT) offer a significant potential for achieving more agile robot locomotion. At this stage, the bottleneck is not the hardware anymore, but the software that controls the robot. It is no longer reasonable to use over-simplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, a new solution should be sought to address the challenge of compliant humanoid robot control.
In this PhD theme, the use of machine learning and robot learning methods will be explored in order to achieve novel ways of whole-body compliant humanoid robot control. In particular, the focus will be on achieving agile locomotion based on robot self-learned dynamics, rather than on a pre-engineered dynamics model. The PhD candidates will be expected to develop new algorithms for robot learning and to advance the state-of-the-art in humanoid robot locomotion.
The expected outcome of these efforts includes the realization of highly dynamic bipedal locomotion, such as omni-directional walking on uneven surfaces, coping with multiple contacts with the environment, and jumping and running robustly on uneven terrain in the presence of high uncertainty, while demonstrating robustness and tolerance to external disturbances. The ultimate goal will be to achieve locomotion skills comparable to those of a 1.5-to-2-year-old child.
Department: ADVR (Department of Advanced Robotics, Istituto Italiano di Tecnologia) http://www.iit.it/advr
Reference: P. Kormushev, S. Calinon, D.G. Caldwell. Reinforcement Learning in Robotics: Applications and Real-World Challenges. MDPI Journal of Robotics (ISSN 2218-6581), Special Issue on Intelligent Robots, vol.2, pp.122-148, 2013.
The video shows a hopping robot that uses a bungee cord in the knee for energy-efficient continuous hopping.
We investigate how the passive and active compliance of the leg can help absorb the shock of the landing impact and protect the harmonic drives, which are among the most fragile parts of a robot.
Citation:
Houman Dallali, Petar Kormushev, Nikolaos Tsagarakis, Darwin G. Caldwell, “Can Active Impedance Protect Robots from Landing Impact?”, In Proc. IEEE Intl Conf. on Humanoid Robots (Humanoids 2014), Madrid, Spain, 2014.
This is the video accompanying our IROS 2014 paper, “Haptic Exploration of Unknown Surfaces with Discontinuities”. We used a KUKA LWR (Lightweight arm) robot, equipped with a 6-axis force/torque sensor and a rolling pin at the end-effector. The work will be presented in September at the IROS 2014 conference in Chicago, USA.
Video credit: Rodrigo Jamisola and Petar Kormushev
Machine Learning PhD Summer Course in Genova, Italy 30 June – 4 July 2014
Topic: Regularization Methods for Machine Learning (RegML)
Instructors: Francesca Odone, Lorenzo Rosasco
A 20-hour advanced machine learning course including theory classes and practical laboratory sessions. The course covers foundations as well as recent advances in Machine Learning, with emphasis on high-dimensional data and a core set of techniques, namely regularization methods. In many respects, the course is a compressed version of the 9.520 course at MIT.
The course started in 2008 has seen an increasing national and international attendance over the years with a peak of 85 participants in 2013.
Registration required: send an e-mail to the instructors by May 24th. The course will be activated if a minimum number of participants is reached.
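For a sense of the core technique the course is built around, here is a minimal sketch of Tikhonov (ridge) regularization in the simplest one-dimensional case. The function name and the data are illustrative assumptions of mine, not course material.

```python
# Minimal sketch of Tikhonov (ridge) regularization in 1-D:
# fit y ≈ w*x, shrinking w toward zero as the penalty lam grows.

def ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge solution: w = sum(x*y) / (sum(x*x) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]              # roughly y = 2x with noise
w_ls = ridge_1d(xs, ys, 0.0)      # ordinary least squares (no penalty)
w_reg = ridge_1d(xs, ys, 10.0)    # heavier penalty shrinks the estimate
```

Increasing the penalty trades a little bias for stability, which is exactly the behavior the course studies in the high-dimensional setting.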
“Reviewing will become obsolete. It has been needed in the past because there was no way to tap into a larger audience of readers for an opinion poll. Peer reviewing has been the only credible way to maintain standards of publication. The growing diversity of topics makes this process impractical, biased or spurious. But we have the technology now! We can allow for peer reviewing on a massive scale. Imagine a large pool of papers, automatically clustered and positioned within a big mosaic. Where do you look for papers? I doubt very much that you browse the contents of all relevant journals. Thank God for the Internet! Now suppose that you have access to all papers. The best ones will be spotted and cited over and over. The citations will replace the reviews.
There will be fewer journals such as Nature, Science and The Lancet. Only the best papers will find their place in the journals. These papers will no longer be original research; rather, they will be “the best of…”. Selected by citation from the pool, say for the past year, these papers can undergo a round of peer review. This time, however, the reviewing rules will be different:
First, all reviews will be handsomely paid.
Second, reviewers will bid for a paper. The candidates should submit their records, and the Editor will have the task to select among them.
As an additional benefit, we will kill fewer trees. Plus, a lot of human resources will be freed for better use of their expertise and energy.”
Robot hardware is becoming progressively more complex, which has led to growing interest in applying machine learning and statistics approaches within the robotics community. At the same time, interest has grown within the machine learning community in using robots as motivating applications for new algorithms and formalisms. Considerable evidence of this exists in the use of robot learning approaches in high-profile competitions such as RoboCup and the DARPA Challenges, and in the growing number of research programs funded by governments around the world. Additionally, the volume of research is increasing, as shown by the number of robot learning papers accepted to IROS and ICRA, and the corresponding number of learning sessions.
The primary goal of the Technical Committee on Robot Learning is to act as a focus point for wide distribution of technically rigorous results in the shared areas of interest around robot learning. Without being exclusive, such areas of research interest include:
learning models of robots, tasks or environments
learning deep hierarchies or levels of representations, from sensor and motor representations to task abstractions
learning of plans and control policies by imitation and reinforcement learning
integrating learning with control architectures
methods for probabilistic inference from multi-modal sensory information (e.g., proprioceptive, tactile, vision)
structured spatio-temporal representations designed for robot learning such as low-dimensional embedding of movements
developmental robotics and evolutionary-based learning approaches
[August 10, 2012] New IROS 2012 Workshop: “Beyond Robot Grasping – Modern Approaches for Dynamic Manipulation”. The workshop will be held on October 12, 2012 in Algarve, Portugal. More information at the website of the workshop: http://www.ias.informatik.tu-darmstadt.de/Research/IROS2012
[March 28, 2012] New AIMSA 2012 Workshop organized by the TC on “Advances in Robot Learning and Human-Robot Interaction”. The workshop will be held on September 12, 2012 in Varna, Bulgaria. More information at the website of the workshop: http://kormushev.com/AIMSA-2012/
[March 13, 2012] New chairs of the TC. After three very successful years for this TC on Robot Learning, the founding chairs Jan Peters, Jun Morimoto, Russ Tedrake and Nicholas Roy are stepping down as chairs of the committee. They will be replaced by Petar Kormushev, Edwin Olson, Ashutosh Saxena, and Wataru Takano, who have kindly agreed to take the reins of the committee. Please see the changes in the mailing list addresses here.
Recent Activities of the Technical Committee
The technical committee regularly organizes special sessions associated with the “Robot learning” RAS keyword. If you want your paper to be considered for such a session and have used the above keyword in your submission, please forward an email to the TC co-chairs (contact info at: http://www.ieee-ras.org/robot-learning/contact). The technical committee will not be involved in the reviewing process but will organize the session based on the list of accepted submissions with this keyword.
TC-organized Workshops
This is a summary of the workshops which were organized by the IEEE TC on Robot Learning:
“Advanced Robotics” is the official international journal of the Robotics Society of Japan (RSJ).
More information about the journal here: http://www.rsj.or.jp/advanced_e/
I am co-editing this special issue, so I encourage everyone who considers submitting a paper to contact me well in advance before the deadline.
Special Issue on Humanoid Robotics
Guest Editors:
Prof. Wataru Takano (The University of Tokyo, Japan)
Prof. Tamim Asfour (Karlsruhe Institute of Technology, Germany)
Dr. Petar Kormushev (Italian Institute of Technology, Italy)
SUBMISSION DEADLINE: March 31, 2014, extended to April 14, 2014
Publication in Vol. 29, No. 5 (March 2015)
Humans understand the world through their actions upon the environment and through their perception. So-called anthropomorphism underlies this cognitive mechanism. Anthropomorphic robots, especially humanoid robots, can perform human-like actions and enhance human viewers’ understanding of the intended effects of these actions. Humanoid robotics is a research area that pursues this capability from multiple viewpoints, such as body motion generation, motor skill learning, and semantic perception, and that aims to develop artificial systems able to communicate with humans. This research field has received significant attention in recent decades and will continue to play a central role in robotics and cognitive systems research. This special issue will present theoretical and technical achievements related to humanoid robotics, ranging from mechanical design to artificial intelligence. Papers on all aspects of humanoid robots are welcome, including but not limited to the following topics:
Humanoid design
Representation of humanoid robot motion
Synthesizing human-like motions for humanoid robots
Understanding intention of human actions
Learning motor skills through imitation and reinforcement
Control theory for humanoid behaviors
Innovative sensing and actuation technologies applied to humanoid robots
Modeling physical interaction between humans and humanoid robots
Human-robot interfaces for skill transfer and communication
Submission:
A PDF file of the full-length manuscript should be sent by March 31, 2014 to the office of Advanced Robotics, the Robotics Society of Japan, through the homepage of Advanced Robotics (http://www.rsj.or.jp/advanced_e/submission). A sample manuscript template is available on the homepage.
Please also send another copy to Prof. W. Takano (takano@ynl.t.u-tokyo.ac.jp), Prof. T. Asfour (asfour@kit.edu), and Dr. P. Kormushev (petar.kormushev@iit.it) for submission confirmation.
The so-called “visuospatial skills” allow people to visually perceive objects and the spatial relationships among them. This video demonstrates a novel machine learning approach that allows a robot to learn simple visuospatial skills for performing object reconfiguration tasks. The main advantage of this approach is that the robot can learn from a single demonstration, and can generalize the skill to new initial configurations. The results from this research work were presented at the International Conference on Intelligent Robots and Systems (IROS 2013) in Tokyo, Japan in November 2013.
Abstract:
We present a novel robot learning approach based on visual perception that allows a robot to acquire new skills by observing a demonstration from a tutor. Unlike most existing learning-from-demonstration approaches, where the focus is placed on the trajectories, our approach focuses on achieving a desired goal configuration of objects relative to one another. It is based on visual perception that captures the object’s context for each demonstrated action. This context is the basis of the visuospatial representation and implicitly encodes the relative positioning of the object with respect to multiple other objects simultaneously. The proposed approach is capable of learning and generalizing multi-operation skills from a single demonstration, while requiring minimal a priori knowledge about the environment. The learned skills comprise a sequence of operations that aim to achieve the desired goal configuration using the given objects. We illustrate the capabilities of our approach using three object reconfiguration tasks with a Barrett WAM robot.
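The goal-configuration idea can be caricatured in a few lines: record where each object ends up relative to the others in the single demonstration, then reproduce those relative offsets from a new starting configuration. This is a simplified sketch with hypothetical names and given 2-D coordinates; the paper's actual visuospatial representation works on visual context, not coordinates.

```python
# Sketch of learning a goal configuration from one demonstration.
# Positions are 2-D tuples; all names here are hypothetical.

def learn_goal_offsets(demo_final):
    """Record each object's final position relative to an anchor object."""
    anchor = next(iter(demo_final))            # first object acts as anchor
    ax, ay = demo_final[anchor]
    return anchor, {obj: (x - ax, y - ay) for obj, (x, y) in demo_final.items()}

def reproduce(anchor_pos, offsets):
    """Generalize: rebuild the demonstrated layout around a new anchor position."""
    ax, ay = anchor_pos
    return {obj: (ax + dx, ay + dy) for obj, (dx, dy) in offsets.items()}

demo = {"cup": (0.50, 0.20), "plate": (0.30, 0.20), "fork": (0.25, 0.20)}
anchor, offsets = learn_goal_offsets(demo)
new_config = reproduce((1.0, 0.6), offsets)    # new initial anchor position
```

The point mirrored from the abstract is that the skill is a relation between objects, so a single demonstration generalizes to any new placement of the anchor.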
Citation:
S. Ahmadzadeh, P. Kormushev, D. Caldwell, “Visuospatial Skill Learning for Object Reconfiguration Tasks,” in Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 3-8 Nov 2013.
I was awarded by the President of Bulgaria with the prestigious John Atanasoff award in 2013.
The award is named after Prof. John Vincent Atanasoff, an American physicist of Bulgarian descent who invented the first electronic digital computer, the ABC.
The 33-year-old scientist in the area of information technology, Dr. Petar Kormushev, became the holder of the 2013 John Atanasoff award. Petar Kormushev was nominated for the award for his work in robotics, machine learning, and artificial intelligence. The distinction was given to him by the President of Bulgaria, Mr. Rosen Plevneliev, at a ceremony in Sofia on October 4th, 2013. Other young scientists were also recognized with diplomas.
Photos from the award ceremony
The President of Bulgaria, Mr. Rosen Plevneliev, giving the John Atanasoff award to me
John Atanasoff’s son, the President of Bulgaria, me, and my father
I gave a short speech to thank the President for the award and promised to work even harder in the future.
Group photo with John Atanasoff’s son and daughter and the President of Bulgaria
The President of Bulgaria had an informal chat with me after the end of the official ceremony.
University of Genoa, jointly with the Italian Institute of Technology (IIT)
PhD Program in Bioengineering and Robotics
Call for PhD students for 2014
PhD positions with scholarships are available at the Italian Institute of Technology (IIT)
Location: Genoa, Italy
Starting date: January 2014
Application deadline: September 20, 2013 at 12:00 noon (Italian time)
Please note that IIT is an English-language research institute, so speaking Italian is not required.
Useful links:
I have two PhD positions open in my team, in Themes 8 and 9 respectively. Both are in the area of Robot Learning, as described below. For anyone interested, please contact me well before the application deadline!
THEME 8. Developmental Robotics And Robot Learning Of Motor Skills
Tutors: Dr. Petar Kormushev, Prof. Darwin G. Caldwell
Department: ADVR (Department of Advanced Robotics, Istituto Italiano di Tecnologia) http://www.iit.it/advr
Description: The motor skills of humans and animals are still utterly astonishing compared to those of robots. This PhD theme will focus on developmental robotics and robot learning methods to advance the state of the art in robot motor skills.
Developmental robotics offers a qualitatively different approach to controlling humanoid robots from the currently predominant approach based on manually engineered controllers. As a result, despite the significant mechatronic advances in humanoid robot design, the motor skill repertoire of current humanoid robots is mediocre compared to that of their biological counterparts.
This PhD theme aims to bring forward advances in the quality of robot motor skills towards biological richness. The creation of novel, high-performance, passively-compliant humanoid robots (such as the robot COMAN developed at IIT) offers significant potential for achieving such advances in motor skills. However, as the bottleneck is no longer the hardware, the main efforts should be directed towards the software that controls the robot. It is no longer reasonable to use oversimplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, new solutions should be sought to address the challenge of compliant humanoid robot control, and developmental robotics offers one promising alternative for achieving this.
This PhD theme will explore the development of novel robot learning algorithms and methods that allow humanoid robots to learn new skills easily. At the same time, the robots should be capable of natural and robust interaction with people. The research will focus on intelligent exploration techniques, robot learning, and human-robot interaction.
Reference: P. Kormushev, S. Calinon, D.G. Caldwell. Reinforcement Learning in Robotics: Applications and Real-World Challenges. MDPI Journal of Robotics (ISSN 2218-6581), Special Issue on Intelligent Robots, vol.2, pp.122-148, 2013.
Contact: petar.kormushev@iit.it
THEME 9. Robot Learning For Agile Locomotion Of Compliant Humanoid Robots
Tutors: Dr. Petar Kormushev, Prof. Nikos Tsagarakis
Department: ADVR (Department of Advanced Robotics, Istituto Italiano di Tecnologia) http://www.iit.it/advr
Description: The creation of novel, high-performance, passively-compliant humanoid robots (such as the robot COMAN developed by IIT) offers a significant potential for achieving more agile locomotion. At this stage, the bottleneck is not the hardware anymore, but the software that controls the robot. It is no longer reasonable to use over-simplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, a new solution should be sought to address the challenge of compliant humanoid robot control.
In this PhD theme, the use of machine learning and robot learning methods will be explored in order to achieve novel ways of whole-body compliant humanoid robot control. In particular, the focus will be on achieving agile locomotion based on robot self-learned dynamics, rather than on pre-engineered dynamics models. The PhD candidates will be expected to develop new algorithms for robot learning and to advance the state of the art in humanoid robot locomotion.
The expected outcome of these efforts includes the realization of highly dynamic bipedal locomotion, such as omni-directional walking on uneven surfaces, and jumping and running robustly on uneven terrain in the presence of high uncertainty, while demonstrating robustness and tolerance to external disturbances. The ultimate goal is to achieve locomotion skills comparable to those of a 1.5–2-year-old child.
Reference: P. Kormushev, S. Calinon, D.G. Caldwell. Reinforcement Learning in Robotics: Applications and Real-World Challenges. MDPI Journal of Robotics (ISSN 2218-6581), Special Issue on Intelligent Robots, vol.2, pp.122-148, 2013.
I just participated in the Dagstuhl Seminar No.13321. The topic was Reinforcement Learning, and it was a very well-attended event with some high-profile experts in RL, such as Richard Sutton, Thomas Dietterich, Csaba Szepesvári, and Doina Precup among others. This Dagstuhl Seminar served also as the 11th European Workshop on Reinforcement Learning (EWRL 2013).
The video shows the humanoid robot WABIAN-2R walking with a dynamically generated gait. Two scenarios are demonstrated: (1) sudden stopping and reversing, and (2) a sudden step change to avoid an obstacle. The walking gait is dynamically generated using a hybrid gait pattern generator capable of rapid and dynamically consistent pattern regeneration.
Abstract:
We propose a two-stage gait pattern generation scheme for full-scale humanoid robots that considers the dynamics of the system throughout the process. The first stage is responsible for generating the preliminary motion reference, such as the step positions, timing, and trajectory of the Center of Mass (CoM), while the second stage serves as a dynamics filter and modifies the initial references to make the pattern stable on a full-scale multi-degree-of-freedom humanoid robot. The approach allows the use of simple models for motion generation, while the dynamics filtering ensures that the pattern is safe to execute on the real-world humanoid robot. The paper describes the approaches used in the two stages, as well as experimental results demonstrating the effectiveness of the method. The fast calculation time and the use of the system’s dynamic state as initial conditions for pattern generation make it a good candidate for a real-time gait pattern generator.
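The two-stage structure in the abstract can be sketched very roughly: stage one produces a cheap preliminary CoM reference from a simple model, and stage two filters it so the full-body dynamics can track it safely. Both functions below are hypothetical stand-ins for the paper's models, not its actual algorithms.

```python
# Toy illustration of a two-stage pattern generator (names are hypothetical).

def stage1_preliminary_com(step_positions, samples_per_step=5):
    """Stage 1: piecewise-linear CoM x-reference passing over each footstep."""
    traj = []
    for a, b in zip(step_positions, step_positions[1:]):
        traj += [a + (b - a) * i / samples_per_step for i in range(samples_per_step)]
    traj.append(step_positions[-1])
    return traj

def stage2_dynamics_filter(traj, window=3):
    """Stage 2: stand-in 'dynamics filter' that smooths the reference so a
    full-body model could track it (a real filter uses multibody dynamics)."""
    half = window // 2
    return [sum(traj[max(0, i - half):i + half + 1]) /
            len(traj[max(0, i - half):i + half + 1]) for i in range(len(traj))]

com_ref = stage1_preliminary_com([0.0, 0.2, 0.4])   # planned footstep x-positions
com_safe = stage2_dynamics_filter(com_ref)
```

The design point mirrored here is that the cheap first stage keeps regeneration fast, while the second stage guarantees the executed pattern respects the robot's dynamics.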
Citation:
Przemyslaw Kryczka, Petar Kormushev, Kenji Hashimoto, Hun-ok Lim, Nikolaos Tsagarakis, Darwin G. Caldwell and Atsuo Takanishi. Hybrid gait pattern generator capable of rapid and dynamically consistent pattern regeneration. Proc. URAI 2013.
I have an open postdoctoral position in my team, in the field of Machine Learning for Robotics. The details are listed below. For further information please contact me by e-mail.
The Department of Advanced Robotics at the Italian Institute of Technology (an English-language research institute) is seeking to appoint a well-motivated full-time postdoctoral researcher in the area of machine learning applied to robotics in general, and in particular to Autonomous Underwater Vehicles (AUV).
The successful candidate will join an ongoing research project funded by the European Commission under FP7 in the category Cognitive Systems and Robotics called “PANDORA” (Persistent Autonomy through learNing, aDaptation, Observation and ReplAnning) which started in January 2012. The project is a collaboration of five leading universities and institutes in Europe: Heriot Watt University (UK), Italian Institute of Technology (Italy), University of Girona (Spain), King’s College London (UK), and National Technical University of Athens (Greece). Details about the project can be found at: http://persistentautonomy.com/
The accepted candidate will contribute to the development and experimental validation of novel reinforcement learning and imitation learning algorithms for robot control, as well as their specific application to autonomous underwater vehicles. The research will be conducted at the Department of Advanced Robotics within the “Learning and Interaction Group” with project leader Dr. Petar Kormushev.
The research work will include conducting experiments with two different AUVs (Girona 500 and Nessie V) in water tanks in Spain and UK in collaboration with the other project partners. The developed machine learning algorithms can also be applied to other robots available at IIT, such as the compliant humanoid robot COMAN, the hydraulic quadruped robot HyQ, the humanoid robot iCub, two Barrett WAM manipulator arms, and a KUKA LWR arm robot.
Application Requirements:
PhD degree in Computer Science, Mathematics or Engineering
Good programming skills, preferably in MATLAB and C/C++
Experience in robot control and ROS is a plus
International applications are encouraged. The successful candidate will be offered a fixed-term project collaboration contract for the remaining duration of the project (due to end in December 2014), with a highly competitive salary commensurate with qualifications and experience. The expected starting date is as soon as possible, preferably before September 1st, 2013.
Application Procedure:
To apply, please send a detailed CV, a list of publications, a statement of research interests and plans, degree certificates, grade transcripts, the names of at least two referees, and other supporting materials such as reference letters to Dr. Petar Kormushev (petar.kormushev@iit.it), quoting [PANDORA-PostDoc] in the email subject. For consideration, please apply by June 21st, 2013.
The video shows a KUKA robot that learns how to grasp and turn a valve autonomously. The robot learns not only how to achieve the goal of the task, but also how to react to different disturbances during the task execution. For example, the robot learns a reactive behavior that allows it to pause and resume the task in response to the changes of the uncertainty in the valve position. This helps the robot to avoid collision with the valve, and improves the reliability and robustness of the task execution.
The setup of this experiment comprises a KUKA LWR (Lightweight Robot) arm, an OptiTrack motion-capture system, and a T-bar valve with an adjustable friction level.
The initial task demonstration and reproduction phases are performed with kinesthetic teaching. The reactive behavior is implemented using a Reactive Fuzzy Decision Maker (RFDM).
The valve-turning task is challenging, especially if the valve is moving dynamically. A similar valve-turning task is also included in the DARPA Robotics Challenge (DRC). However, in that challenge the valves are fixed, whereas here the valve is moving, which makes the task even more difficult to accomplish.
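To give a flavor of the reactive fuzzy decision making mentioned above, the toy rule below maps the tracked valve-position uncertainty to a continuous proceed/pause signal through two membership functions. The thresholds and names are assumptions of mine, not taken from the paper.

```python
# Toy reactive decision rule in the spirit of a fuzzy decision maker:
# high tracking uncertainty -> pause, low uncertainty -> proceed.
# Membership breakpoints (0.02, 0.05, 0.08 m) are made-up values.

def membership_low(u):
    """Degree to which the uncertainty u counts as 'low'."""
    return max(0.0, min(1.0, (0.05 - u) / 0.05))

def membership_high(u):
    """Degree to which the uncertainty u counts as 'high'."""
    return max(0.0, min(1.0, (u - 0.02) / 0.08))

def reactive_decision(uncertainty):
    """Return execution speed in [0, 1]: 1 = proceed, 0 = pause.
    Defuzzify as the weighted average of 'proceed' (1.0) and 'pause' (0.0)."""
    low, high = membership_low(uncertainty), membership_high(uncertainty)
    total = low + high
    return low / total if total > 0 else 0.0
```

Because the output is continuous rather than a hard switch, the robot slows down smoothly as uncertainty grows instead of stopping abruptly, which is the behavior described in the video.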
Citation:
Seyed Reza Ahmadzadeh, Petar Kormushev and Darwin G. Caldwell. Autonomous Robotic Valve Turning: A Hierarchical Learning Approach. IEEE Intl. Conf. on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, 2013.
Doctoral Course on “Robotics, Cognition and Interaction Technologies”
Call for PhD students for 2013
PhD positions with scholarships are available at the Italian Institute of Technology (IIT) in Genoa, Italy.
Doctoral course starting in January 2013
Application deadline: September 21, 2012 Online application here
Please note that IIT is an English-language research institute, so speaking Italian is not required.
I have one PhD opening in my team, in the field of Reinforcement Learning with application to Robot Control. The details can be found in Annex A4 – Doctoral course on “Robotics, Cognition and Interaction Technologies”, and are as follows.
[Section 3. Department of ADVANCED ROBOTICS – PROF. DARWIN CALDWELL]
STREAM 1: Machine Learning, Robot Control and Human-Robot Interaction
Theme 3.1: Developmental robotics and robot learning for agile locomotion of compliant humanoid robots. Tutors: Dr. Petar Kormushev, Dr. Nikos Tsagarakis
Developmental robotics offers a completely different approach to controlling humanoid robots from the currently predominant approach based on manually engineered controllers. For example, the majority of bipedal walking robots currently use variants of ZMP-based walking with largely simplified models of the robot dynamics. As a result, despite the significant mechatronic advances in humanoid robot legs, the locomotion repertoire of current bipedal robots merely includes slow walking on flat ground or inclined slopes, and primitive forms of disturbance rejection. This is far behind even a two-year-old child.
The creation of novel, high-performance, passively-compliant humanoid robots (such as the robot COMAN developed at IIT) offers a significant potential for achieving more agile locomotion. However, the bottleneck is not the hardware anymore, but the software that controls the robot. It is no longer reasonable to use over-simplified models of robot dynamics, because the novel compliant robots possess much richer and more complex dynamics than the previous generation of stiff robots. Therefore, a new solution should be sought to address the challenge of compliant humanoid robot control.
In this PhD theme, the use of developmental robotics and robot learning methods will be explored, in order to achieve novel ways for whole-body compliant humanoid robot control. In particular, the focus will be on achieving agile locomotion, based on robot self-learned dynamics, rather than on pre-engineered dynamics model. The PhD candidates will be expected to develop new algorithms for robot learning and to advance the state-of-the-art in developmental robotics.
The expected outcome of these efforts includes the realization of highly dynamic bipedal locomotion, such as omni-directional walking on uneven surfaces, and jumping and running robustly on uneven terrain in the presence of high uncertainty, while demonstrating robustness and tolerance to external disturbances. The ultimate goal is to achieve locomotion skills comparable to those of a 1.5–2-year-old child.
Requirements: This is a multidisciplinary theme, and the successful candidates should have strong competencies in machine learning and artificial intelligence, and good knowledge of robot kinematics and dynamics. The candidates should have a top-class degree and a background in Computer Science, Engineering, or Mathematics. Required technical skills: C/C++ and/or MATLAB. Knowledge of computer vision is a plus.
For further details about this particular PhD position, please contact me by e-mail.
In February of 2012 the first Global Future 2045 Congress was held in Moscow. There, over 50 world leading scientists from multiple disciplines met to develop a strategy for the future development of humankind. One of the main goals of the Congress was to construct a global network of scientists to further research on the development of cybernetic technology, with the ultimate goal of transferring a human’s individual consciousness to an artificial carrier.
The Department of Advanced Robotics at the Italian Institute of Technology (an English-language research institute), has a Post Doc opening in the research areas of Reinforcement learning and Imitation learning applied to robot control of Autonomous Underwater Vehicles (AUV).
The successful candidate will participate in a 3-year research project funded by the European Commission under the Seventh Framework Programme (FP7-ICT, STREP, Cognitive Systems and Robotics) called “PANDORA” (Persistent Autonomy through learNing, aDaptation, Observation and ReplAnning) which started in January 2012 (http://persistentautonomy.com/).
The project is a collaboration of five universities and institutes in Europe: Heriot Watt University (UK), Italian Institute of Technology (Italy), University of Girona (Spain), King’s College London (UK), and National Technical University of Athens (Greece).
The accepted candidate will contribute to the development and experimental validation of novel reinforcement learning and imitation learning algorithms for specific application to robot control of autonomous underwater vehicles.
The research work includes conducting experiments with AUVs in water tanks in collaboration with the other project partners. The developed machine learning algorithms will also be applied to other robots available at IIT, such as the compliant humanoid robot COMAN, the humanoid robot iCub, a Barrett WAM manipulator arm, and a KUKA LWR arm.
The salary will depend on the candidate’s experience. Institute policies provide additional pension and health benefits, and applicants may also qualify for reduced-tax benefits. Contracts will be for the duration of the project. The expected starting date is as soon as possible.
International applications are encouraged and will receive logistic support with visa issues. For further information please contact: Dr. Petar Kormushev (petar.kormushev AT iit.it).
The Cost of Knowledge is a movement started by mathematicians and other academics who are protesting against the business model of big publishers such as Elsevier, Springer, and Wiley.
Currently, academics are boycotting Elsevier’s business practices, as explained in this Statement of Purpose.
The invited speaker will be Prof. Alexander Stoytchev from Iowa State University, USA. All accepted papers will be published in a special issue of a journal (for details, see the website of the workshop). The workshop location is really nice: it is the biggest and best sea resort in Bulgaria, with magnificent sand and pleasant weather.
Good papers are like good wine: they need time to mature.
Of course, as Marc Raibert puts it, there are a few jerks out there who can write perfect manuscripts on the first try, but if you’re reading this, I assume you are not one of these disgusting individuals.
So, for the rest of us mortals, I have tried to collect advice from various sources about how to write good scientific papers. Also, I contribute some of my own humble personal experience.
One of my favourite papers on this topic is, without doubt, Marc Raibert’s paper about “Spilling the beans”. If you haven’t read it yet, please do so!
I totally agree with Raibert, and always try to “spill the beans” in my own papers as much and as early as possible.
The “Cargo Cult Science”, as named by Richard Feynman, is a must-see for all researchers, in my opinion. If you don’t know what I’m talking about, I recommend watching Feynman’s commencement address at Caltech at my Inspiration page.
Postdoctoral positions in Machine Learning for Robot Control of Autonomous Underwater Vehicles (AUV)
The Department of Advanced Robotics (http://www.iit.it/en/advanced-robotics) at the Italian Institute of Technology (IIT, an English-language research institute located in Genoa, Italy) has 2 Post-Doc openings (starting from March 2012; originally January 2012) in the research areas of Reinforcement learning and Imitation learning applied to robot control of Autonomous Underwater Vehicles (AUV).
The successful candidates will participate in a 3-year research project funded by the European Commission under the Seventh Framework Programme (FP7-ICT, STREP, Cognitive Systems and Robotics) called “PANDORA” (Persistent Autonomy through learNing, aDaptation, Observation and ReplAnning) which will start in January 2012.
The project is a collaboration of five universities and institutes in Europe: Heriot Watt University (UK), Italian Institute of Technology (Italy), University of Girona (Spain), University of Strathclyde (UK), and National Technical University of Athens (Greece).
The accepted candidates will contribute to the development and experimental validation of novel reinforcement learning and imitation learning algorithms for specific application to robot control of autonomous underwater vehicles. The research work includes conducting experiments with AUVs in water tanks in collaboration with the other project partners. The developed machine learning algorithms will also be applied to other robots available at IIT, such as the compliant humanoid robot COMAN, the humanoid robot iCub, a Barrett WAM manipulator arm, and a KUKA LWR arm.
The salary will depend on the candidate’s experience and also includes additional pension and health benefits. Applicants may also qualify for reduced-tax benefits. Contracts are for up to 3 years, with possible renewal and future career options upon successful completion. The expected start date is March 2012 (originally February 2012).
International applications are encouraged and will receive logistic support with visa issues. For further information please contact: Dr. Petar Kormushev (by e-mail).
Application Requirements:
– PhD degree in Computer Science, Mathematics or Engineering
– High-quality publication record
– Strong interest in machine learning algorithms
– Strong competencies in some of these areas: machine learning, reinforcement learning, imitation learning, MATLAB and C/C++ programming
– Experience in robot control is a plus
– Fluency in both spoken and written English
Application Procedure:
To apply please send a detailed CV, a statement of motivation, degree certificates, grade of transcripts, contact information of at least two references, and other support materials such as reference letters to: Dr. Petar Kormushev (by e-mail).
For consideration, please apply by: December 4, 2011. DEADLINE EXTENDED TO: January 29, 2012.
—
Petar Kormushev, PhD
Team Leader – Advanced Robotics Dept.
Italian Institute of Technology (IIT)
Via Morego 30, 16163 Genova
One of my favorite comics is PhD Comics (“Piled Higher and Deeper”), which captures many of the problems and funny moments of PhD-student and post-doc life.
Doctoral Course on Robotics, Cognition and Interaction Technologies
Call for PhD students for 2012
30 open PhD positions with scholarship are available at the Italian Institute of Technology (IIT) in Genoa, Italy.
Doctoral course starting in January 2012
Application deadline: September 23, 2011. Online application here.
I have 2 PhD openings in my group, in the field of Reinforcement Learning with application to Robot Control. You can find details in this PDF document.
The 2 PhD positions under my supervision are in:
STREAM 3: Machine Learning, Robot Control and Human-Robot Interaction
Theme 2.7: Machine learning for robot control of autonomous underwater vehicles
Tutors: Dr. Petar Kormushev, Dr. Sylvain Calinon, Prof. Darwin G. Caldwell
Number of available positions: 1
Theme 2.8: Machine learning for a soft robotic arm assisting in minimally invasive surgery
Tutors: Dr. Petar Kormushev, Dr. Sylvain Calinon, Prof. Darwin G. Caldwell
Number of available positions: 1
For further details about these particular PhD positions, please contact me by e-mail.
At the international conference AAAI 2011 in San Francisco, my colleague Sylvain and I presented our pizza-making robot.
The event was the so-called “Robotic Challenge @ AAAI”, and this year the topic was “Food preparation”.
Our robot is a modified Barrett WAM 7-DOF robot arm manipulator, with a wooden rolling pin at the end-effector.
The robot learns from demonstrations how to roll out the pizza dough, in order to make the most perfect circular pizzas! Below you can see a video of the Robotic Challenge event, and here are a few photos of our setup:
Learning from Demonstration Robotics Challenge @ AAAI 2011
Video credit: Brandon Rohrer
Endowing robots with human-like abilities to perform motor skills in a smooth and natural way is a dream of many researchers. It is now clear that this can be achieved only if robots, like humans, are able to learn new skills by themselves. However, acquiring new motor skills is not simple and involves various forms of learning. Some tasks can be successfully transferred to the robot using only imitation strategies; others can be learned very efficiently by the robot alone using reinforcement learning. The efficiency of the process lies in the interconnection between imitation and self-improvement strategies.
In this talk, a variety of robot skill learning examples are presented, such as: autonomous valve turning using reactive policy learning, energy-efficient bipedal walking exploiting the passive compliance, whole-body motor skill learning for erasing a whiteboard, learning for improved control of autonomous underwater vehicles, etc. Throughout these examples, the important role of the policy representation for speeding up the learning process is highlighted.
Biography
Dr. Petar Kormushev is a researcher and a team leader at the Advanced Robotics department of the Italian Institute of Technology (IIT). His research interests include robotics and machine learning, especially reinforcement learning for intelligent robot behavior. He obtained his PhD degree in Computational Intelligence from the Tokyo Institute of Technology in 2009. He holds an MSc degree in Artificial Intelligence and an MSc degree in Bio- and Medical Informatics. He is a technical coordinator in two EU FP7 projects, and the recipient of the 2013 John Atanasoff award, given by the President of Bulgaria to an outstanding young scientist.
I am trying to create a contemporary English-Bulgarian scientific dictionary containing modern, state-of-the-art scientific terms and their translations from English to Bulgarian and vice versa. Most of the included terms are so new that they do not yet have a well-established Bulgarian translation, which is one of the main reasons for building such a dictionary in the first place: to propose appropriate Bulgarian terms for the novel English ones.
The current version contains mostly terms from robotics and machine learning, because these are my main areas of research interest.
The iCub robot is a humanoid robot developed within the RobotCub project. The iCub was designed and built mainly by the Italian Institute of Technology in Genoa.
Reinforcement Learning is a type of Machine Learning approach in which the learning algorithm discovers by itself how to reach a given goal by a trial-and-error process.
Reinforcement Learning is different from supervised learning and unsupervised learning. It is a separate class of learning approaches that rely on information given by a reward function.
The reward function is the way in which the goal is specified.
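As a minimal illustration of trial-and-error learning guided only by a reward function, here is a toy three-armed bandit I wrote for this page (my own example, not code from any of the projects above): the learner never sees the arms' true values, yet discovers the best action purely by trying actions and observing rewards.

```python
import random

random.seed(1)

# Hidden from the learner: the true expected reward of each action.
true_means = [0.2, 0.5, 0.8]

def reward(action):
    # The reward function: noisy feedback, the only information the
    # learner receives about how good its choice was.
    return true_means[action] + random.gauss(0, 0.1)

estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for t in range(1000):
    # Epsilon-greedy trial and error: mostly exploit the current best
    # estimate, but sometimes explore a random action.
    if random.random() < 0.1:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: estimates[i])
    r = reward(a)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]   # running average

best = max(range(3), key=lambda i: estimates[i])
# After enough trials, the learner should identify action 2 as best.
```

The goal ("prefer the high-reward action") is never stated explicitly anywhere; it is encoded entirely in the reward function, which is exactly the point of the definition above.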
The COMAN robot is a compliant humanoid robot which is currently under development by the Advanced Robotics dept. of the Italian Institute of Technology in Genoa, Italy.
COMAN stands for “COmpliant huMANoid”, because the robot is designed with passive compliance (via springs) in its joints. This allows it to be more robust to environmental perturbations (e.g. walking on uneven ground), safer for human-robot interaction (soft to the touch), more energy-efficient, and capable of more dynamic motions (e.g. jumping, running).
COMAN can also be interpreted as Co-Man, meaning a co-worker, a robot which is a partner to humans, designed for safe physical human-robot interaction. The robot’s design is derived from the compliant joint design of the cCub bipedal robot.
This is a close-up of the passively-compliant legs of the robot:
Below is a video of the COMAN walking experiment I did together with Barkan Ugurlu and Nikos Tsagarakis. The goal was to learn to minimize the energy consumption used for walking by COMAN. This video accompanies my IROS 2011 paper presented in San Francisco, in September 2011.
We present a learning-based approach for minimizing the electric energy consumption during walking of a passively compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory that makes efficient use of the robot’s passive compliance. To do this, we propose a reinforcement learning method which evolves the policy parameterization dynamically during the learning process, and thus manages to find better policies faster than with a fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN, where it achieves significant energy reduction.
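The idea of evolving the policy parameterization can be sketched roughly as follows (a toy re-implementation of the concept on a function approximation task, not the paper's actual algorithm): start learning with a coarse set of basis functions and, once that stage of learning is exhausted, add basis functions and warm-start the finer policy from the coarser one.

```python
import math, random

random.seed(0)

def policy_output(x, weights, centers, width=0.15):
    # Policy encoded as a weighted sum of Gaussian basis functions.
    return sum(w * math.exp(-((x - c) / width) ** 2)
               for w, c in zip(weights, centers))

def rollout_return(weights, centers):
    # Toy function approximation task: the (negative) squared error
    # between the policy output and a target curve plays the role of
    # the return of one rollout.
    xs = [i / 49.0 for i in range(50)]
    return -sum((policy_output(x, weights, centers)
                 - math.sin(2 * math.pi * x)) ** 2 for x in xs)

def learn(stages=(3, 6, 12), trials_per_stage=500):
    best_return = float("-inf")
    weights, centers = [], []
    for n in stages:
        new_centers = [i / (n - 1) for i in range(n)]
        # Evolve the parameterization: warm-start the finer policy by
        # sampling the old, coarser policy at the new centers (an
        # approximate carry-over, kept simple on purpose).
        weights = [policy_output(c, weights, centers) for c in new_centers]
        centers = new_centers
        current = rollout_return(weights, centers)
        for _ in range(trials_per_stage):     # simple stochastic search
            candidate = [w + random.gauss(0, 0.1) for w in weights]
            r = rollout_return(candidate, centers)
            if r > current:
                weights, current = candidate, r
        best_return = max(best_return, current)
    return best_return

final_return = learn()   # improves markedly over the zero policy
```

The point of the staged schedule is the same as in the abstract: early stages search a small parameter space quickly, and later stages inherit that progress instead of restarting in the full-dimensional space.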
In recent years, there have been some amazing demonstrations of learning robots mastering difficult motor skills.
Here I have collected some of the most impressive ones, which I consider to be major milestones at the time they were done:
This is work done by my former colleague Stephen Hart: Dexter robot learning to reach
Work by James Kuffner at CMU:
This is work done by my friend and colleague Sylvain Calinon:
One of my main research topics is robot learning. In machine learning, algorithms are traditionally classified into three classes: supervised learning (in robotics, imitation learning), unsupervised learning (in robotics, autonomous exploration), and reinforcement learning, which lies between the two, since it learns from evaluative reward feedback rather than from explicit labels.
A Japanese humanoid robot (Fujitsu HOAP-2) learns to clean a whiteboard by upper-body kinesthetic teaching during full-body balance control. The research is from an Italian-Japanese collaboration between the Italian Institute of Technology and Tokyo City University.
We present an integrated approach allowing a free-standing humanoid robot to acquire new motor skills by kinesthetic teaching. The proposed full-body control method simultaneously controls the upper and lower body of the robot with different control strategies. Imitation learning is used to train the upper body of the humanoid robot via kinesthetic teaching, while the Reaction Null Space method keeps the robot balanced. During demonstration, a force/torque sensor records the exerted forces; during reproduction, a hybrid position/force controller applies the learned trajectories, in terms of positions and forces, at the end effector. The proposed method is tested on a 25-DOF Fujitsu HOAP-2 humanoid robot with a surface-cleaning task.
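The force-regulation side of such a hybrid scheme can be illustrated with a one-dimensional toy sketch (my own simplification with an assumed linear contact model, not the controller from the paper): along the surface normal, the commanded position is adjusted until the measured contact force matches the desired one, while ordinary position control would act unchanged along the remaining axes.

```python
# Toy 1-D force regulation along the surface normal.
# Assumption (illustrative only): the contact behaves like a linear
# spring, so measured force = k_env * penetration depth.
k_env = 1000.0      # assumed environment stiffness (N/m)
f_des = 5.0         # desired contact force against the whiteboard (N)
x = 0.0             # end-effector penetration along the normal (m)
f = 0.0             # measured contact force (N)

for _ in range(200):
    f = max(0.0, k_env * x)          # contact force from the toy model
    x += 0.0005 * (f_des - f)        # admittance-style force-error update

# x settles near f_des / k_env = 5 mm, where the force error vanishes.
```

The same loop structure generalizes to the hybrid case by running this force law only on the axis normal to the board and a position tracker on the tangential axes.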
This research was presented at the International Conference on Robotics and Automation (ICRA) in May 2011 in Shanghai, China.
Humanoid robot iCub learns the skill of archery. After being instructed how to hold the bow and release the arrow, the robot learns by itself to aim and shoot arrows at the target. It learns to hit the center of the target in only 8 trials.
The learning algorithm, called ARCHER (Augmented Reward Chained Regression), was developed and optimized specifically for problems like archery training, which have a smooth solution space and prior knowledge about the goal to be achieved. In the case of archery, we know that hitting the center corresponds to the maximum obtainable reward. Using this prior information about the task, we can treat the position of the arrow’s tip as an augmented reward. ARCHER uses a chained local regression process that iteratively estimates new policy parameters which are more likely to achieve the goal of the task, based on the experience so far. An advantage of ARCHER over other learning algorithms is that it exploits richer feedback information about the result of each rollout.
For the archery training, the ARCHER algorithm modulates and coordinates the motion of the two hands, while an inverse kinematics controller handles the motion of the arms. After every rollout, the image processing component automatically recognizes where the arrow hits the target, and this result is sent as feedback to the ARCHER algorithm. The image recognition uses Gaussian Mixture Models for color-based detection of the target and the arrow’s tip.
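The core idea, regress locally on past rollouts and jump to the parameters predicted to hit the goal, can be sketched in one dimension (a toy secant-style version written for illustration; the actual ARCHER algorithm works with the full 2-D hit position and more policy parameters):

```python
import random

random.seed(3)

# Assumed toy forward model: a single aiming parameter maps to the
# vertical offset of the arrow from the target center. The learner never
# sees this model; it only observes each shot's hit offset, which plays
# the role of the augmented reward (richer than a scalar score).
def shoot(angle):
    return 2.0 * angle - 0.7 + random.gauss(0, 0.01)

a_prev, a_curr = -1.0, 1.0           # two initial exploratory rollouts
h_prev, h_curr = shoot(a_prev), shoot(a_curr)
best_angle, best_offset = a_curr, h_curr

for trial in range(8):
    # Chained local regression: fit a line through the two most recent
    # rollouts and solve it for the parameter predicted to hit the
    # center (offset zero).
    slope = (h_curr - h_prev) / (a_curr - a_prev)
    a_prev, h_prev = a_curr, h_curr
    a_curr = a_curr - h_curr / slope
    h_curr = shoot(a_curr)
    if abs(h_curr) < abs(best_offset):
        best_angle, best_offset = a_curr, h_curr
    if abs(best_offset) < 0.02:      # close enough to the center
        break
```

Because the hit position (not just a success/failure score) feeds the regression, each rollout constrains where the next set of parameters should be, which is why this style of learning can converge in a handful of trials.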
The experiments are performed on a 53-DOF humanoid robot iCub. The distance between the robot and the target is 3.5m, and the height of the robot is 104cm.
This research was presented at the Humanoids 2010 conference in December 2010 in the USA.
I maintain a list of active Bulgarian researchers in robotics and machine learning. If you would like to be added to this list please contact me.
I also maintain a mailing list called Bulgarian Robotics Group, for exchanging useful information about ongoing robotics projects, job opportunities, and other news. You can sign up for the mailing list at Google Groups here: https://groups.google.com/forum/?fromgroups#!forum/bulgarian-robotics
List of active Bulgarian researchers in robotics and machine learning:
Petar Kormushev is a team leader of a research group at the Italian Institute of Technology, working on robot learning by imitation and reinforcement learning.
Dragomir N. Nenchev is a professor at Tokyo City University, working on motion/force control, space robots, humanoid robots, and service robots.
Lubomir Lilov is a professor at Sofia University, heading the master’s program on Mechatronics and Robotics at the Faculty of Mathematics and Informatics.
Rosen Diankov is the author of the OpenRAVE robotics platform for manipulation planning.
Alexander Stoytchev is a professor at Iowa State University. He constructed a dual-arm Barrett WAM robot and does research in developmental robotics.
Jivko Sinapov is a PhD student of Prof. Alexander Stoytchev.
Vladimir Zamanov teaches robotics at the Technical University of Sofia.
Ivan Dryanovski is a PhD student at CCNY (City College of New York), doing research on 3D SLAM and Micro Air Vehicle Navigation.
Dragomir Anguelov was a PhD student at Stanford University, collaborating with prof. Sebastian Thrun on computer vision for robots. Now he is working at Google in Mountain View.
Dimitar Ivanov Chakarov is an associate professor at the Mechatronic Systems dept. of BAS in Bulgaria.
Evtim Venets Zahariev is an associate professor and head of the department of Dynamics and optimization of controlled mechanical systems at BAS in Bulgaria.
Andrey Popov is at Hamburg University of Technology, doing research on UAV robots (esp. quadrocopters and H-infinity controllers).
Marin Kobilarov is a post-doc in control and dynamical systems at Caltech, doing research on motion planning and control.
Bojan Jakimovski is the CEO of Bionics4Robotics, a robotics- and AI-related company in Munich, Germany.
Ilian Bonev is a professor working on precision robotics and parallel manipulators at ETS, Canada.
Roko Tschakarow works at SCHUNK as a Business Unit Manager System Solutions Mechatronics. His work is on building lightweight and modular robots.
Alexander Gegov is a Reader at University of Portsmouth, UK. His main research interests are in computational intelligence.
Dimitar H. Stefanov is with the Department of Electrical Engineering and Computer Sciences, Korea Advanced Institute of Science and Technology (KAIST).
Danail Stoyanov is with the Department of Computer Science, University College London (UCL), doing research in medical robotics.
Andon Topalov is a professor at the Control Systems Department of Technical University of Sofia, Branch Plovdiv, Bulgaria.
Petko Hr. Petkov is a professor at the Department of Systems and Control of Technical University of Sofia, Bulgaria.
Stefan Markov was a student at the University of Bremen, Germany. Research areas: Robot Perception and Learning, AI, Mobile Sensor Networks.
Chavdar Papazov is a post-doctoral researcher at Technische Universität München, Germany. Research areas: 3D shape registration, object recognition and pose estimation.
Svetlin Penkov is a student at Edinburgh University, UK.
Atanas Popov is a professor at the Faculty of Engineering at University of Nottingham, UK.
Svetan Ratchev is the director of the Institute for Advanced Manufacturing at University of Nottingham, UK.
Nikolay Atanasov is a PhD student at GRASP Lab, University of Pennsylvania, USA.
Kalin Gochev is a PhD student at the University of Pennsylvania, USA.
Marina Horn is a PhD student at the University of Heidelberg, Germany.
Galia Tzvetkova is an Associate Professor at the Institute of Mechanics, Bulgarian Academy of Sciences, Bulgaria.
Velin Dimitrov is a PhD student at Worcester Polytechnic Institute (WPI), USA.
List of other Bulgarian robotics enthusiasts and hobbyists:
Orlin Dimitrov (Орлин Димитров) is a student who restored the Bulgarian robot ROBKO 01 and created various controllers for it. He has a website about ROBKO 01.
PRACTRO is a robotics conference with international participation.
Dimitar Cenev and Veselin Pavlov are building a website in Bulgarian called Robotic Design.
Robot learns to flip pancakes! I am teaching a Barrett WAM robot to flip pancakes:
The video shows a Barrett WAM 7-DOF manipulator learning to flip pancakes by reinforcement learning.
The motion is encoded in a mixture of basis force fields through an extension of Dynamic Movement Primitives (DMP) that represents the synergies across the different variables through stiffness matrices. An Inverse Dynamics controller with variable stiffness is used for reproduction.
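For reference, a standard discrete DMP (the textbook formulation; the force-field extension with stiffness matrices described above goes beyond this) is a spring-damper system pulled toward a goal and shaped by a learned forcing term:

```python
import math

# Minimal discrete Dynamic Movement Primitive in one dimension
# (standard formulation, written as a sketch; gains and widths are
# assumed values, not from the post).
def dmp_rollout(g, weights, centers, width=0.05, alpha=25.0, tau=1.0,
                dt=0.001, steps=1000):
    y, yd, x = 0.0, 0.0, 1.0          # position, velocity, phase
    traj = []
    for _ in range(steps):
        # Forcing term: weighted Gaussian basis functions of the phase,
        # gated by x so it vanishes as the movement finishes.
        psi = [math.exp(-((x - c) ** 2) / width) for c in centers]
        f = sum(w * p for w, p in zip(weights, psi)) / (sum(psi) + 1e-10) * x
        # Transformation system: critically damped spring-damper + forcing.
        ydd = alpha * (alpha / 4.0 * (g - y) - yd) + f
        yd += ydd * dt / tau
        y += yd * dt / tau
        x += -2.0 * x * dt / tau       # canonical system: phase decays
        traj.append(y)
    return traj

# With zero weights the forcing term is silent and the DMP simply
# converges smoothly to the goal; learned weights reshape the path
# while preserving that guaranteed convergence.
path = dmp_rollout(g=1.0, weights=[0.0] * 5,
                   centers=[0.9, 0.7, 0.5, 0.3, 0.1])
```

The appeal of this representation for skill learning is that the weights of the forcing term are the only learned parameters, while stability toward the goal comes for free from the spring-damper part.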
The skill is first demonstrated via kinesthetic teaching, and then refined by the Policy learning by Weighting Exploration with the Returns (PoWER) algorithm. After 50 trials, the robot learns that the first part of the task requires a stiff behavior to throw the pancake in the air, while the second part requires the hand to be compliant in order to catch the pancake without having it bounce off the pan.
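The refinement step can be sketched as follows (a simplified reward-weighted update in the spirit of PoWER, on a toy task invented for illustration, not the pancake setup): perturb the demonstrated policy parameters, weight each perturbation by the return of its rollout, and shift the policy toward the well-rewarded perturbations.

```python
import math, random

random.seed(4)

# Assumed toy episodic task: the return is highest when the policy
# parameters match an unknown optimum (standing in for a successful flip).
optimum = [0.6, -0.3, 0.9]

def episode_return(theta):
    return math.exp(-sum((t, g) == (t, g) and (t - g) ** 2
                         for t, g in zip(theta, optimum)))

theta = [0.0, 0.0, 0.0]   # e.g. initialized from a kinesthetic demonstration
sigma = 0.3               # exploration magnitude

for iteration in range(60):
    rollouts = []
    for _ in range(10):
        eps = [random.gauss(0, sigma) for _ in theta]
        ret = episode_return([t + e for t, e in zip(theta, eps)])
        rollouts.append((ret, eps))
    # Reward-weighted update in the spirit of PoWER: average the best
    # explorations, weighted by their returns, so good perturbations
    # pull the mean policy toward them.
    rollouts.sort(key=lambda r: r[0], reverse=True)
    top = rollouts[:5]
    total = sum(ret for ret, _ in top)
    theta = [t + sum(ret * e[i] for ret, e in top) / total
             for i, t in enumerate(theta)]
    sigma *= 0.95          # gradually reduce exploration as the skill refines
```

The structure mirrors the description above: the demonstration supplies a reasonable starting policy, and the reward-weighted averaging does the self-improvement, needing only rollout returns rather than gradients.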