In the past, we studied a single robot playing table tennis against a human or a ball gun. In 2018, we migrated to a new setup with sufficient space for two robots to oppose each other. We aim to achieve collaborative table tennis play with the Barrett WAM robot arm and our muscular robots, as well as to compare the performance of motor and muscular actuation using methods from robot skill learning.
Creating autonomous robots that can learn to assist humans in daily-life situations is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence, we have yet to create robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. We therefore focus on solving basic problems in robotics by developing domain-appropriate machine learning methods.
While many machine learning methods work in theory, in simplified simulations, and on textbook control plants, it is essential to study real robot systems to understand the learning of high-performance motor skills. We focus on learning robot table tennis as our "Drosophila" to gain insights that we hope will generalize. This task has a number of components that are representative of tasks encountered by natural intelligent systems, including perception and action in rapidly changing and uncertain environments.
Learning approaches have to generalize a complex hitting behavior from relatively few demonstrated trajectories, which do not cover all ball trajectories or desired hitting directions. Our recent work on capturing trajectory distributions using probabilistic movement representations [ ] opens new possibilities for robot table tennis. We have presented several methods to adapt probabilistic movement primitives, e.g., to modulate hitting movements learned in joint space so that they reach a desired end-effector position, velocity, and orientation [ ], and to determine the initial time and duration of a movement primitive so that it intercepts a moving object such as the table tennis ball [ ].
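To make the adaptation step concrete, the sketch below shows how a probabilistic movement primitive for a single joint can be conditioned on a desired value at a given phase via standard Gaussian conditioning of the weight distribution. It is a minimal illustration under assumed basis functions, dimensions, and prior values, not the implementation used in the cited work.

```python
# Minimal ProMP-conditioning sketch for one joint (illustrative assumptions only).
import numpy as np

def gaussian_basis(t, n_basis=10, width=0.05):
    """Normalized Gaussian basis functions evaluated at phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (t - centers) ** 2 / width)
    return phi / phi.sum()

def condition_promp(mu_w, Sigma_w, t_star, y_star, sigma_y=1e-4):
    """Condition the weight distribution w ~ N(mu_w, Sigma_w) on a desired
    value y_star at phase t_star (Gaussian conditioning)."""
    phi = gaussian_basis(t_star)                        # (n_basis,)
    k = Sigma_w @ phi / (sigma_y + phi @ Sigma_w @ phi)  # gain vector
    mu_new = mu_w + k * (y_star - phi @ mu_w)
    Sigma_new = Sigma_w - np.outer(k, phi @ Sigma_w)
    return mu_new, Sigma_new

# Example: a prior (here: random placeholder values standing in for a prior
# learned from demonstrations), conditioned so the joint passes through
# 0.7 rad at 60% of the movement.
rng = np.random.default_rng(0)
mu_w = rng.normal(size=10)
A = rng.normal(size=(10, 10))
Sigma_w = A @ A.T * 0.01 + np.eye(10) * 1e-3
mu_c, Sigma_c = condition_promp(mu_w, Sigma_w, t_star=0.6, y_star=0.7)

# Mean trajectory before and after conditioning.
ts = np.linspace(0.0, 1.0, 100)
Phi = np.stack([gaussian_basis(t) for t in ts])         # (100, n_basis)
mean_before, mean_after = Phi @ mu_w, Phi @ mu_c
```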
Another line of our robot table tennis research explores new robot morphologies. To this end, we developed a lightweight robot actuated by pneumatic muscles [ ] that enabled us to return and smash table tennis balls using model-free reinforcement learning from scratch [ ]. This new robot executes highly accelerated motions while remaining inherently safe, opening up new ways of acquiring high-speed motor skills.
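As a rough illustration of the model-free setting, the following sketch implements a generic episodic, reward-weighted policy-search loop that updates stroke parameters purely from episode returns, without any model of robot or ball dynamics. The environment, reward, and all parameter values are placeholders and do not correspond to the actual muscular-robot training pipeline.

```python
# Generic model-free episodic policy search (illustrative stand-in only).
import numpy as np

rng = np.random.default_rng(1)

def rollout(theta):
    """Placeholder episode: reward is higher the closer the (fictitious)
    stroke parameters theta are to an unknown target stroke."""
    target = np.array([0.3, -0.5, 0.8, 0.1])
    return -np.sum((theta - target) ** 2) + 0.01 * rng.normal()

# Reward-weighted exploration over stroke parameters: only episode returns
# are used to improve the policy.
theta = np.zeros(4)
sigma = 0.2
for iteration in range(200):
    noise = rng.normal(scale=sigma, size=(20, 4))        # 20 exploratory episodes
    returns = np.array([rollout(theta + eps) for eps in noise])
    weights = np.exp((returns - returns.max()) / 0.1)    # softmax-style weighting
    theta = theta + (weights[:, None] * noise).sum(0) / weights.sum()
```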
We have worked on various other questions of robot motion control in the context of robot table tennis, studied in real robot experiments on the Barrett WAM. Among these approaches, we have investigated the properties of optimal trajectory generation for table tennis strikes [ ] and the learning of striking controllers [ ]. We have also demonstrated how a table tennis serve can be captured and successfully reproduced [ ].
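For intuition on trajectory generation for strikes, the sketch below fits, per joint, a cubic polynomial that starts at rest and reaches a desired hitting position and velocity at the predicted hitting time. This is a simple textbook construction under assumed boundary conditions, not the optimal-control formulation studied in the cited work.

```python
# Cubic striking-trajectory sketch (assumed boundary conditions, illustrative only).
import numpy as np

def cubic_strike(q0, q_hit, v_hit, T):
    """Coefficients of q(t) = a0 + a1*t + a2*t^2 + a3*t^3 per joint, with
    q(0) = q0, dq(0) = 0, q(T) = q_hit, dq(T) = v_hit."""
    q0, q_hit, v_hit = map(np.asarray, (q0, q_hit, v_hit))
    a0, a1 = q0, np.zeros_like(q0)
    a2 = (3.0 * (q_hit - q0) - v_hit * T) / T**2
    a3 = (v_hit * T - 2.0 * (q_hit - q0)) / T**3
    return a0, a1, a2, a3

def evaluate(coeffs, t):
    """Position and velocity of the polynomial trajectory at time t."""
    a0, a1, a2, a3 = coeffs
    q = a0 + a1 * t + a2 * t**2 + a3 * t**3
    dq = a1 + 2 * a2 * t + 3 * a3 * t**2
    return q, dq

# Example: a 7-DoF arm moving from rest to a hitting posture in 0.5 s
# (joint values below are arbitrary placeholders).
coeffs = cubic_strike(q0=np.zeros(7),
                      q_hit=np.array([0.4, -0.3, 0.2, 1.1, -0.2, 0.5, 0.0]),
                      v_hit=np.array([1.0, 0.5, 0.0, -0.8, 0.0, 0.3, 0.0]),
                      T=0.5)
q_T, dq_T = evaluate(coeffs, 0.5)   # reaches the desired hitting state at t = T
```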