How to Replicate the Spring Festival Robot Dance: A Complete Video‑to‑Robot Motion Guide

This tutorial walks you through building a complete video-to-robot motion pipeline: installing the required repositories and environments, configuring GMR and PromptHMR, running the command-line tools, launching the multilingual Web UI, and exporting multi-person trajectories and MuJoCo simulations, with common pitfalls and advanced considerations highlighted along the way.

Sohu Tech Products

Inspired by the robot dance performance at the Spring Festival Gala, this guide shows how anyone can recreate similar motions by converting human actions, described in a text prompt or captured on video, into robot motion commands.

Core Pipeline

The conversion flow is:

Text Prompt / Video → PromptHMR → SMPL‑X → GMR → Robot Motion
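
Two intermediate artifacts flow through this pipeline: a per-frame SMPL-X parameter sequence produced by PromptHMR, and a per-frame robot joint trajectory produced by GMR. The sketch below shows their rough shape; the SMPL-X field names are the standard ones, but the robot-motion keys and the DoF count are illustrative, not a guarantee of the exact layout the repos write:

import numpy as np

T = 120  # number of video frames

# PromptHMR output, in standard SMPL-X parameter terms
smplx_sequence = {
    "betas": np.zeros(10),              # body shape coefficients
    "body_pose": np.zeros((T, 21, 3)),  # per-joint axis-angle rotations
    "transl": np.zeros((T, 3)),         # root translation per frame
}

# GMR output: the retargeted robot trajectory (illustrative keys)
robot_motion = {
    "fps": 30,
    "root_pos": np.zeros((T, 3)),   # base position in the world frame
    "root_rot": np.zeros((T, 4)),   # base orientation quaternion
    "dof_pos": np.zeros((T, 29)),   # joint angles; DoF count depends on the robot
}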

Key command‑line tools for daily use are:

# Generate motion from text
python scripts/generate_video.py --model seedance --action "角色向前走四步"  # prompt: "the character walks forward four steps"
# Extract human pose from video
python scripts/extract_pose.py --project data/video_001
# Convert to robot motion (supports multiple tracks)
python scripts/convert_to_robot.py --project data/video_001 --all-tracks
# Visualize results
python scripts/visualize.py --project data/video_001 --robot-viser --robot-all

Environment Setup (Step 0)

Clone the main repository and its submodules, then apply three patches if you received a patch package.

git clone https://github.com/datawhalechina/every-embodied.git
cd every-embodied/07-机器人操作、运动控制/Locomotion/video2robot
# Clone GMR and PromptHMR
git clone --depth 1 https://github.com/taeyoun811/GMR.git third_party/GMR
git clone --depth 1 https://github.com/taeyoun811/PromptHMR.git third_party/PromptHMR
# Apply patches (if needed)
git apply patches/main.patch
git -C third_party/PromptHMR apply ../../patches/prompthmr.patch
git -C third_party/GMR apply ../../patches/gmr.patch
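
If a patch fails to apply, check whether it matches your checkout: git apply --check does a dry run and reports conflicts without modifying any files.

git apply --check patches/main.patch
git -C third_party/PromptHMR apply --check ../../patches/prompthmr.patch
git -C third_party/GMR apply --check ../../patches/gmr.patch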

GMR Environment

conda create -n gmr python=3.10 -y
conda activate gmr
cd /root/gpufree-data/every-embodied/07-机器人操作、运动控制/Locomotion/video2robot
pip install -e .
pip install loop-rate-limiters smplx imageio mink rich "imageio[ffmpeg]"
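
Before moving on, a one-line import check (using the package names from the pip command above) confirms the environment is usable:

python -c "import smplx, mink, imageio, rich; print('gmr env OK')"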

PromptHMR Environment

conda create -n phmr python=3.10 -y
conda activate phmr
cd /root/gpufree-data/every-embodied/07-机器人操作、运动控制/Locomotion/video2robot/third_party/PromptHMR
# Manual installation is recommended; the bundled install script is buggy.
# See Appendix 2 for the full manual setup.

Launch the Web UI (Stable Version)

conda activate phmr
python -m pip install -U fastapi "uvicorn[standard]" jinja2 python-multipart
# Stop any stale Viser visualization process before relaunching
pkill -f "video2robot/visualization/robot_viser.py"
# Pin the Viser port so the Web UI connects to a known address
export VISER_FIXED_PORT=8789
python -m uvicorn web.app:app --host 0.0.0.0 --port 8000

Open http://localhost:8000 in a browser to access a Chinese‑friendly interface where you can upload videos, switch robot skins, and view results.
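
If the pipeline runs on a remote GPU server rather than your local machine, forward the ports over SSH before opening the browser (replace user@server with your own host; 8789 is the Viser port pinned above):

ssh -N -L 8000:localhost:8000 -L 8789:localhost:8789 user@server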

Multi‑Person Trajectories and MuJoCo Export

Generate multi‑track visualizations:

conda activate gmr
cd /root/gpufree-data/every-embodied/07-机器人操作、运动控制/Locomotion/video2robot
python scripts/convert_to_robot.py --project data/video_001 --all-tracks
python scripts/visualize.py --project data/video_001 --robot-viser --robot-all

Export a single‑person MuJoCo video:

conda activate gmr
cd /root/gpufree-data/every-embodied/07-机器人操作、运动控制/Locomotion/video2robot/third_party/GMR
python scripts/vis_robot_motion.py \
  --robot unitree_g1 \
  --robot_motion_path /root/.../robot_motion.pkl \
  --record_video \
  --video_path /root/.../mujoco_robot.mp4

Export a multi-person MuJoCo video (the script includes fixes for zero-length clips and camera tracking):

python scripts/vis_robot_motion_multi.py \
  --robot unitree_g1 \
  --robot_motion_paths \
    /root/.../robot_motion_track_1.pkl \
    /root/.../robot_motion_track_2.pkl \
  --record_video \
  --max_seconds 10 \
  --camera_azimuth 0 \
  --video_path /root/.../mujoco_multi_robot_10s_front.mp4

Advanced Discussion

Why do replicated motions often “fall over”?

Retargeting accuracy: Mapping SMPL-X poses onto robot joints runs into hard physical limits; joint ranges, link lengths, and torque bounds all differ from the human body (see the sketch after this list).

Missing environment perception: Open-loop playback cannot sense ground friction or obstacles, so the robot cannot correct for them.

Dynamic constraints: High-torque maneuvers such as aerial flips demand more than the motors can deliver, and the robot loses balance.
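
To make the first point concrete, here is a minimal sketch of what happens when retargeted joint angles are simply clamped to the robot's limits; the joint names and ranges are made up for illustration, not the Unitree G1's real values:

import numpy as np

# Hypothetical joint limits in radians; a real robot publishes these in its URDF/MJCF.
JOINT_LIMITS = {"hip_pitch": (-1.8, 1.8), "knee": (0.0, 2.5)}

def clamp_to_limits(joint_angles):
    """Clamp retargeted angles into the robot's feasible range.

    Every clamped joint deviates from the human pose, and those small
    deviations shift the center of mass -- one reason an open-loop
    replay of the retargeted motion can tip the robot over.
    """
    return {
        name: float(np.clip(angle, *JOINT_LIMITS[name]))
        for name, angle in joint_angles.items()
    }

# A deep-squat frame from the human exceeds the knee limit:
human_frame = {"hip_pitch": -2.1, "knee": 2.9}
print(clamp_to_limits(human_frame))  # {'hip_pitch': -1.8, 'knee': 2.5}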

Future work will integrate IsaacSim for reinforcement‑learning‑based training, providing high‑fidelity physics and collision detection to turn motion retargeting into robust policy learning.

Troubleshooting & Quick Fixes

Web task exits immediately (conda not found): Ensure CONDA_EXE is set; the main repo now auto‑detects it.

robot‑viser “localhost refused connection”: Export a fixed port: export VISER_FIXED_PORT=8789.

MuJoCo recording error (missing imageio backend): Run pip install -U "imageio[ffmpeg]".

lietorch/droid_backends compile error: Replace all occurrences of .type() with .scalar_type() in the source.
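
One blunt way to apply that last substitution across the extension sources (adjust the path for droidcalib vs. lietorch, and review the diff before rebuilding, since this assumes every .type() occurrence is the deprecated tensor call):

grep -rl --include='*.cpp' --include='*.cu' --include='*.h' '\.type()' python_libs/lietorch \
  | xargs sed -i 's/\.type()/.scalar_type()/g'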

Appendix 1: Required Model Weights

Download the weights from the Datawhale organization on Hugging Face:

git lfs install
# Note: git lfs clone is deprecated; with git-lfs installed, a plain git clone also fetches the weights
git lfs clone https://huggingface.co/Datawhale/spring-festival-wushu-robot-replication-model

Appendix 2: Manual phmr Environment Setup for Advanced Users

# Clone main repo
git clone https://github.com/datawhalechina/every-embodied.git
cd every-embodied/07-机器人操作、运动控制/Locomotion/video2robot

# Create and activate phmr env
conda create -n phmr python=3.10 -y
conda activate phmr

# Install PromptHMR dependencies
cd third_party/PromptHMR
pip install -r requirements.txt

# Install chumpy (legacy SMPL dependency; --no-build-isolation makes it build against the env's numpy)
mkdir -p python_libs
git clone https://github.com/Arthur151/chumpy python_libs/chumpy
python -m pip install -e python_libs/chumpy --no-build-isolation

# Set PYTHONPATH
export PYTHONPATH=$PYTHONPATH:/root/.../third_party/PromptHMR
echo 'export PYTHONPATH=$PYTHONPATH:/root/.../third_party/PromptHMR' >> ~/.bashrc
source ~/.bashrc

# Install eigen
conda install -c conda-forge eigen -y

# Build droidcalib
source /opt/conda/etc/profile.d/conda.sh
conda activate phmr
export CPATH="$CONDA_PREFIX/include/eigen3:${CPATH:-}"
# (we are still inside third_party/PromptHMR from the earlier steps)
cd pipeline/droidcalib
python setup.py install
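
# Sanity check: the compiled extension should import
# (droid_backends is the extension name referenced in Troubleshooting above)
python -c "import droid_backends; print('droidcalib OK')"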

# Add torch library path
export LD_LIBRARY_PATH="/root/gpufree-data/conda_envs/phmr/lib/python3.10/site-packages/torch/lib:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/lib64:/usr/local/lib"
echo 'export LD_LIBRARY_PATH="/root/gpufree-data/conda_envs/phmr/lib/python3.10/site-packages/torch/lib:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/lib64:/usr/local/lib"' >> ~/.bashrc
source ~/.bashrc

# Build lietorch (first return to the PromptHMR root from pipeline/droidcalib)
cd ../..
mkdir -p python_libs && cd python_libs
git clone https://github.com/princeton-vl/lietorch.git
cd lietorch
git submodule update --init --recursive
python setup.py install
cd ../..
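
# Sanity check: lietorch should import without the .type()/.scalar_type() compile error
python -c "import lietorch; print('lietorch OK')"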

# Install detectron2 and SAM2
git clone https://github.com/facebookresearch/detectron2.git
cd detectron2
pip install -e . --no-build-isolation
cd ..
git clone https://github.com/facebookresearch/segment-anything-2.git
cd segment-anything-2
pip install -e . --no-build-isolation

# Patch code for SAM2 video predictor
sed -i 's/load_video_frames, load_video_frames_from_np/load_video_frames/g' \
  /root/.../third_party/PromptHMR/pipeline/detector/sam2_video_predictor.py

# Final dependencies
python -m pip install -U torch-scatter --no-build-isolation
python -m pip install -U xformers
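
# Final smoke test (module names assumed from the packages installed above)
python -c "import torch, detectron2, sam2, lietorch, torch_scatter; print('phmr env OK')"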

All core code, tips, and scripts are now ready. If you successfully replicate the motion, feel free to leave a comment or open an issue in the repository.

[Figures: demo video frame, Web UI screenshot, multi-person visualization, MuJoCo single-person result, model weight download page, manual setup illustration]