
RoboTamer: Full Stack Framework for Unitree Qmini Bipedal Robot Training & Deployment

Yanyun Chen · Tiyu Fang · Kaiwen Li · Kunqi Zhang · Wei Zhang

Visual Sensing and Intelligent System Lab (VSISLab),
School of Control Science and Engineering,
Shandong University, China

Contact: info@vsislab.com · Website: www.vsislab.com

Introduction

We have developed an open-source solution for motion control of bipedal robots, leveraging deep reinforcement learning (DRL) within NVIDIA's Isaac Gym environment. This framework enables robots such as the Unitree Qmini to learn robust locomotion skills, including walking on uneven terrain. To facilitate smooth sim-to-real transfer, we incorporate key techniques such as domain randomization and randomized external perturbations during training; these strategies improve the generalization of the trained policies when they are deployed in the real world.

Our repository provides everything needed both to train bipedal robots in simulation and to deploy them in real-world environments. In addition, we offer C++ deployment code for high-performance control. This deployment framework uses ONNX Runtime for efficient inference of reinforcement learning policies exported from PyTorch to ONNX format, enabling low-latency, real-time control on Linux-based edge devices and robot platforms suitable for field applications. The codebase features optimized inference pipelines and supports both CPU and GPU hardware acceleration.

With this end-to-end pipeline, robots like the Unitree Qmini can benefit from the power of deep reinforcement learning, both in simulation and in the real world.
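As an illustration of the randomization described above, the sketch below shows how per-environment dynamics parameters and random base pushes might be drawn during training. The ranges, parameter names, and the push schedule are hypothetical placeholders, not RoboTamer's actual configuration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomization ranges; the real values live in the training config.
FRICTION_RANGE   = (0.5, 1.25)   # ground friction coefficient
MASS_OFFSET_KG   = (-0.5, 0.5)   # extra payload added to the trunk
MOTOR_GAIN_SCALE = (0.9, 1.1)    # multiplicative noise on PD gains
PUSH_INTERVAL_S  = 5.0           # seconds between random base pushes
MAX_PUSH_VEL     = 0.5           # m/s velocity impulse per push

def sample_dynamics():
    """Draw one set of randomized dynamics parameters at episode reset."""
    return {
        "friction":    rng.uniform(*FRICTION_RANGE),
        "mass_offset": rng.uniform(*MASS_OFFSET_KG),
        "gain_scale":  rng.uniform(*MOTOR_GAIN_SCALE),
    }

def maybe_push(base_vel, t, dt):
    """Add a random horizontal velocity impulse at fixed intervals."""
    if int(t / dt) % int(PUSH_INTERVAL_S / dt) == 0:
        base_vel[:2] += rng.uniform(-MAX_PUSH_VEL, MAX_PUSH_VEL, size=2)
    return base_vel

print(sample_dynamics())
print(maybe_push(np.zeros(3), t=5.0, dt=0.005))
```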



Simulation Training
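Once training converges, the PyTorch policy is exported to ONNX for deployment. A minimal sketch of that step follows; the MLP architecture, observation/action sizes (45 and 10), file names, and tensor names are assumptions for illustration, not RoboTamer's actual network definition.

```python
import torch

# Hypothetical actor network; the real policy class and dimensions may differ.
class PolicyMLP(torch.nn.Module):
    def __init__(self, num_obs=45, num_actions=10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(num_obs, 256), torch.nn.ELU(),
            torch.nn.Linear(256, 128), torch.nn.ELU(),
            torch.nn.Linear(128, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)

policy = PolicyMLP()
# policy.load_state_dict(torch.load("policy.pt"))  # load trained weights here
policy.eval()

# Export with named tensors so the runtime can bind inputs/outputs by name.
torch.onnx.export(
    policy,
    torch.zeros(1, 45),          # dummy observation defining the input shape
    "policy.onnx",
    input_names=["obs"],
    output_names=["actions"],
    opset_version=17,
)
```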

Real Deployment
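The repository's deployment code is C++ against the ONNX Runtime C++ API; the Python sketch below mirrors the same call sequence for illustration. The model path, tensor names, observation size, and provider list are assumptions carried over from the export sketch above.

```python
import numpy as np
import onnxruntime as ort

# Request GPU execution where available, falling back to CPU.
session = ort.InferenceSession(
    "policy.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# One control step: pack the latest observation (e.g. joint states, IMU,
# velocity commands) and run the policy to get joint targets.
obs = np.zeros((1, 45), dtype=np.float32)
(actions,) = session.run(["actions"], {"obs": obs})
print(actions.shape)  # (1, 10) joint position targets under our assumptions
```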