Learning to Play Table Tennis From Scratch using Muscular Robots

We investigate the symbiosis between Reinforcement Learning (RL) and soft robots by learning to return and smash real table tennis balls with model-free RL, trained from scratch.

Table tennis requires executing fast and precise motions. Gaining precision requires exploration in this high-speed regime; at the same time, however, such exploration can be safety-critical. The combination of RL and muscular soft robots closes this gap: robots actuated by pneumatic artificial muscles (PAMs) generate the high forces required, e.g., for smashing, while their antagonistic actuation also allows explosive motions to be executed safely [1, 2].

This property enables us to:

  • remove safety constraints from the algorithm,

  • maximize the speed of returned balls directly in the reward function,

  • use a stochastic policy that acts directly on the low-level controls of the real system,

  • train for thousands of trials on the real system,

  • and learn from scratch without any prior knowledge (no model or demonstrations).
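To make the second point concrete, rewarding return speed directly could be sketched as below. This is an illustrative shaping, not the reward from the paper; the function name, arguments, and penalty term are assumptions:

```python
import numpy as np

def return_speed_reward(min_racket_ball_dist, returned, ball_velocity_at_return):
    """Illustrative reward (hypothetical, not the paper's exact formulation):
    if the ball is missed, penalize the closest racket-ball distance to guide
    exploration; if it is returned, reward the speed of the returned ball."""
    if not returned:
        return -min_racket_ball_dist  # encourage getting the racket close
    # maximize the magnitude of the returned ball's velocity
    return float(np.linalg.norm(ball_velocity_at_return))
```

Because safe exploration is handled by the hardware rather than by constraints in the algorithm, such a reward can be maximized aggressively without clipping action magnitudes.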

To enable practical training without real balls, we introduce Hybrid Sim and Real Training (HYSR), which replays prerecorded real ball trajectories in simulation while executing all actions on the real system. In this manner, RL can learn the challenging motor control of the PAM-driven robot over ~15,000 hitting motions.
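The HYSR idea can be sketched as a rollout loop in which the ball exists only in simulation while the policy's actions run on the real robot. All interfaces below (`set_ball`, `apply_on_robot`, `observe_robot`) are hypothetical stand-ins, not the actual API:

```python
from dataclasses import dataclass, field

@dataclass
class SimMirror:
    """Minimal stand-in for the simulated environment (hypothetical API)."""
    ball_states: list = field(default_factory=list)

    def set_ball(self, state):
        # mirror one step of the prerecorded ball trajectory in simulation
        self.ball_states.append(state)

def hysr_episode(policy, apply_on_robot, observe_robot, recorded_ball, sim):
    """One HYSR rollout: replay a prerecorded real ball trajectory in
    simulation while the policy's actions are executed on the real system."""
    obs = observe_robot()                  # real robot state (e.g. pressures)
    for ball_state in recorded_ball:
        sim.set_ball(ball_state)           # the ball flies only in simulation
        action = policy(obs, ball_state)   # low-level controls for this step
        apply_on_robot(action)             # executed on the real PAM robot
        obs = observe_robot()
    return len(sim.ball_states)            # steps replayed this episode
```

Contacts between racket and ball are then evaluated in simulation, so thousands of hitting motions can be trained without anyone feeding balls to the robot.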

To facilitate research on soft robots, we open-source the dataset of the complete return and smash training [4], as well as videos showing the whole experiments (see the YouTube videos below).

For more information, please see our paper [3].

Summarizing Video

14h Video of Return Experiment

First part (7 hours)

Second part (7 hours)

14h Video of Smash Experiment

First part (7 hours)

Second part (7 hours)