luck2shi/motion_tracking
Whole Body Motion Tracking

This repository is an extension of the GentleHumanoid training repository, supporting universal, robust, and highly dynamic whole-body motion tracking policy training with upper-body compliance.

Main features:

  • A universal whole-body motion tracking policy with a training and evaluation pipeline.

  • Upper-body compliance-aware control for contact-rich interaction.

  • Simulation backend based on mjlab.

A demo of the pretrained policies, showing a single model generalizing across diverse and highly dynamic motions, is available here.

motion_tracking_demo.mp4

Instructions for real-robot deployment and for applying pretrained models to new motion sequences are available in the sim2real folder.

Installation

If you do not have uv installed, you can install it following the instructions in the uv documentation.

# this project uses uv for dependency and environment management
uv sync

Motion Dataset Preparation

Quick Start: Download Preprocessed Dataset (Google Drive)

Download link:

After downloading, extract the archive into the repository's dataset/ directory; you should end up with the following structure:

dataset/
  amass_all/
    meta.json
    ...
  lafan_all/
    ...
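Before training, it can save time to verify that the extraction produced the layout above. The following is a minimal sketch, not part of the repository; the helper name `check_dataset_layout` and the choice of entries to check are our own.

```python
from pathlib import Path

def check_dataset_layout(root: str) -> list[str]:
    """Return the expected dataset entries that are missing under `root`.

    Mirrors the extracted archive layout: each motion subset
    (amass_all, lafan_all) is a directory, and amass_all carries
    a meta.json describing its contents.
    """
    base = Path(root)
    expected = [
        base / "amass_all" / "meta.json",
        base / "lafan_all",
    ]
    return [str(p) for p in expected if not p.exists()]

missing = check_dataset_layout("dataset")
if missing:
    print("Missing entries:", *missing, sep="\n  ")
else:
    print("dataset/ layout looks complete")
```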
Build Dataset from AMASS/LAFAN with GMR

Retargeting with GMR

We use GMR to retarget the AMASS and LAFAN datasets. The output is one npz file per motion clip, with the following fields:

  • fps: Frame rate
  • root_pos: Root position
  • root_rot: Root rotation in quaternion format (xyzw)
  • dof_pos: Degrees of freedom positions
  • local_body_pos: Local body positions
  • local_body_rot: Local body rotations
  • body_names: List of body names
  • joint_names: List of joint names

You can use the modified version of GMR to directly export npz files that meet the requirements.
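As a sketch of the on-disk format, the snippet below writes and reloads a dummy clip with the fields listed above. The field names follow the schema; the frame, joint, and body counts are placeholder values, not the robot's actual dimensions.

```python
import numpy as np

# Illustrative sizes only: T frames, J joints, B bodies.
T, J, B = 120, 29, 30
clip = {
    "fps": np.array(30),
    "root_pos": np.zeros((T, 3), dtype=np.float32),
    # Quaternions stored in xyzw order; identity rotation here.
    "root_rot": np.tile(np.array([0, 0, 0, 1], np.float32), (T, 1)),
    "dof_pos": np.zeros((T, J), dtype=np.float32),
    "local_body_pos": np.zeros((T, B, 3), dtype=np.float32),
    "local_body_rot": np.tile(np.array([0, 0, 0, 1], np.float32), (T, B, 1)),
    "body_names": np.array([f"body_{i}" for i in range(B)]),
    "joint_names": np.array([f"joint_{i}" for i in range(J)]),
}
np.savez("example_clip.npz", **clip)

# Reload and inspect: each field comes back as an array with its shape intact.
data = np.load("example_clip.npz")
for key in data.files:
    print(key, data[key].shape)
```

The same loading pattern works for inspecting any retargeted clip produced by GMR, e.g. to confirm frame rates and joint counts before building the dataset.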

You should organize the processed datasets in the following structure:

<dataset_root>/
    AMASS/ACCAD/Female1General_c3d/A1_-_Stand_stageii.npz
    ...
    LAFAN/walk1_subject1.npz
    ...

Dataset Building

Modify DATASET_ROOT in generate_dataset.sh to point to your dataset root directory, then run the script to generate the dataset:

bash generate_dataset.sh

The dataset will be generated in the dataset/ directory, and the code will automatically load these datasets. You can also use the MEMATH environment variable to specify the dataset root path.

Training

You can use the provided train.sh script to run the full training pipeline. Modify the global configuration section in train.sh to set your WandB account and other parameters, then run:

bash train.sh

Under standard settings, training takes approximately 15 hours on 4× A100 GPUs. If GPU memory is constrained, reduce the NPROC parameter in train.sh and the num_envs parameter in cfg/task/G1/G1.yaml. Note that these adjustments may increase training time and can affect final performance.

Evaluation

uv run scripts/eval.py --run_path ${wandb_run_path} -p # p for play
uv run scripts/eval.py --run_path ${wandb_run_path} -p --export # export the policy to onnx (sim2real)
