Quadruped locomotion is challenging for learning-based algorithms: it requires tedious manual tuning and is difficult to deploy on real robots due to the sim-to-real gap between the training and testing scenarios. This paper proposes a learning system for agile quadruped locomotion that requires no pre-training and works well on various terrains. We introduce a hierarchical framework in which a high-level reinforcement learning policy adjusts a low-level trajectory generator for better terrain adaptability. We compact the observation and action spaces of the reinforcement learning framework so that it can be deployed on a host computer interfaced with the robot. In addition, we design an omnidirectional trajectory generator guided by the robot's posture, which generates omnidirectional foot trajectories for interacting with the environment. Experimental results and the supplementary video demonstrate that our system, trained only in simulation, can be easily deployed in the real world, converges fast, and adapts well to varied terrain.
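The hierarchical split described above can be sketched in a few lines: a learned high-level policy maps a compact observation to adjustments of a low-level foot-trajectory generator. The following is a minimal illustrative sketch only; the function names, observation contents, and numeric constants are assumptions for exposition, not the paper's actual implementation.

```python
import math

def foot_trajectory(phase, step_height, step_length):
    """Simple open-loop swing trajectory for one foot (x forward, z up).

    `phase` is the gait phase in [0, 1); the swing arc is a half-sine
    clipped to non-negative height. This is a toy generator, not the
    paper's omnidirectional one.
    """
    x = step_length * math.cos(2 * math.pi * phase)
    z = max(0.0, step_height * math.sin(2 * math.pi * phase))
    return x, z

def high_level_policy(observation):
    """Stand-in for the trained RL policy: maps a compact observation
    (here, hypothetically, body roll and pitch) to trajectory-generator
    parameters. A real policy would be a trained neural network."""
    roll, pitch = observation
    return {
        "step_height": 0.08 + 0.02 * abs(pitch),  # lift feet higher on slopes
        "step_length": 0.10 - 0.05 * abs(roll),   # shorten steps when tilted
    }

# One control step: the policy output modulates the trajectory generator,
# and the resulting foot target would be sent to the robot's leg controller.
obs = (0.1, -0.2)                  # (roll, pitch) in radians, illustrative
params = high_level_policy(obs)
x, z = foot_trajectory(phase=0.25, **params)
```

The design choice this illustrates is that the policy never emits joint torques directly; it only reshapes the generator's parameters, which keeps the action space small and the resulting motion smooth.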
This paper presents a hierarchical framework for quadruped robots that combines a high-level reinforcement learning controller with a posture-guided trajectory generator to adaptively produce omnidirectional motions. The method is easy to train: it converges fast and does not require tuning a dozen or more reward terms. After being trained in simulation, the robot can be deployed directly in real environments. With the trained hierarchical framework running on a remote host computer, the robot performs well in a variety of real-world environments unseen in simulation.
A Hierarchical Framework for Quadruped Locomotion Based on Reinforcement Learning. Wenhao Tan, Xing Fang, Wei Zhang, Ran Song, Teng Chen, and Yibin Li. IROS 2021.