
Imitation and Reinforcement Learning




In this article, we present both novel learning algorithms and experiments using dynamical systems motor primitives (MPs). We describe this MP representation in a way that makes it straightforward to reproduce. We review an appropriate imitation learning method, locally weighted regression, and show how it can be used both to initialize reinforcement learning (RL) tasks and to modify the start-up phase of a rhythmic task. We also present our current best-suited RL algorithm for this framework, PoWER. Using the methods presented in this article, we demonstrate two complex motor tasks, ball-in-a-cup and ball paddling, learned on a real, physical Barrett WAM. The ball-paddling application is of particular interest, as it requires combining both rhythmic and discrete dynamical systems MPs during the start-up phase to achieve the task.
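To make the imitation-learning step concrete, the sketch below fits a discrete dynamical systems MP (in the standard Ijspeert-style formulation assumed here, not necessarily the exact variant used in the article) to a single demonstration via locally weighted regression, with one weight per Gaussian basis function, and then reproduces the movement by integrating the system. All function names, gains, and the basis-width heuristic are illustrative assumptions; in the article, such imitation-learned weights would serve as the initialization that PoWER subsequently refines.

```python
import numpy as np

def learn_dmp(y_demo, dt, n_basis=20, alpha_z=25.0, alpha_x=8.0):
    """Fit forcing-term weights of a discrete DMP from one demonstration
    using locally weighted regression (illustrative sketch)."""
    beta_z = alpha_z / 4.0                      # critically damped choice
    tau = len(y_demo) * dt                      # movement duration
    t = np.arange(len(y_demo)) * dt
    x = np.exp(-alpha_x * t / tau)              # canonical system x(t)
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    g = y_demo[-1]                              # goal = end of demonstration
    # Forcing term that would reproduce the demonstration exactly:
    # f = tau^2*ydd - alpha_z*(beta_z*(g - y) - tau*yd)
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers in x
    h = n_basis**1.5 / c                        # widths (common heuristic)
    psi = np.exp(-h * (x[:, None] - c)**2)      # (time, basis) activations
    # Locally weighted regression: each weight is fit independently,
    # weighted by its basis activation, with the phase x as input signal.
    w = np.array([(x * psi[:, i] @ f_target) / (x * psi[:, i] @ x + 1e-10)
                  for i in range(n_basis)])
    return dict(w=w, c=c, h=h, g=g, y0=y_demo[0], tau=tau,
                alpha_z=alpha_z, beta_z=beta_z, alpha_x=alpha_x)

def rollout(dmp, dt, n_steps):
    """Reproduce the movement by Euler-integrating the learned DMP."""
    y, z, x = dmp['y0'], 0.0, 1.0
    traj = []
    for _ in range(n_steps):
        psi = np.exp(-dmp['h'] * (x - dmp['c'])**2)
        f = (psi @ (dmp['w'] * x)) / (psi.sum() + 1e-10)
        zd = (dmp['alpha_z'] * (dmp['beta_z'] * (dmp['g'] - y) - z) + f) / dmp['tau']
        y += dt * z / dmp['tau']
        z += dt * zd
        x += dt * (-dmp['alpha_x'] * x / dmp['tau'])
        traj.append(y)
    return np.array(traj)
```

Because the forcing term vanishes as the canonical phase x decays, the reproduced trajectory is guaranteed to converge to the goal g, which is one reason this representation is attractive for both imitation and RL.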

Author(s): Kober, J. and Peters, J.
Journal: IEEE Robotics and Automation Magazine
Volume: 17
Number (issue): 2
Pages: 55-62
Year: 2010
Month: June

Department(s): Empirical Inference
Bibtex Type: Article (article)

DOI: 10.1109/MRA.2010.936952
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik



@article{Kober2010,
  title = {Imitation and Reinforcement Learning},
  author = {Kober, J. and Peters, J.},
  journal = {IEEE Robotics and Automation Magazine},
  volume = {17},
  number = {2},
  pages = {55--62},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = jun,
  year = {2010},
  doi = {10.1109/MRA.2010.936952},
  month_numeric = {6}
}