
Jingbo Wang 王靖博

Embodied AI · Humanoid Robotics · Character Animation

About

I am a research scientist working on embodied intelligence, humanoid robotics, and physics-based simulation, with a focus on generalization-oriented humanoid skill learning and sim-to-real transfer. I obtained my PhD from The Chinese University of Hong Kong in 2023, advised by Prof. Dahua Lin. I have also held research internships at NVIDIA (Omniverse), SenseTime Research, Megvii Research, and Momenta. My work has been published in top-tier venues including CVPR, ICCV, ECCV, SIGGRAPH, NeurIPS, and ICRA, and has received more than 10,000 citations.

🔥 Recent Research Topics

🤖 Behavior Foundation Model

Universal whole-body controllers and teleoperation.

Behavior Foundation Model for Humanoid Robots ICRA 2026
Universal Whole-Body Controller with a User-Friendly Interface
AdaMimic: Towards Adaptable Humanoid Control via Adaptive Motion Tracking ICRA 2026
Adaptive Motion Tracking for Humanoid Robots
UniTracker: Learning Universal Whole-Body Motion Tracker for Humanoid Robots Under Review
Universal Whole-Body Controller with Kinematic Motion Priors
CLOT: Closed-Loop Global Motion Tracking for Whole-Body Humanoid Teleoperation Under Review
Universal Whole-Body Controller for PND Adam
🏀 Agile Skill Learning

Sports-grade agility learned from human demonstrations.

HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos Under Review
Basketball Skill Imitation Learning
Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data Under Review
Automatic Tennis Skills
Mastering Scalable Whole-Body Skills for Humanoid Ping-Pong with Egocentric Vision Under Review
Table Tennis Skills with Egocentric Vision
RoboStriker: Hierarchical Decision-Making for Autonomous Humanoid Boxing Under Review
Multi-Agent Boxing Skills
Humanoid Goalkeeper: Learning from Position Conditioned Task-Motion Constraints Under Review
Agile Goalkeeping Skills
📦 Locomotion / Loco-Manipulation

Navigation and interaction policies for constrained terrains.

Gallant: Voxel Grid-based Humanoid Locomotion and Local-navigation across 3D Constrained Terrains CVPR 2026
Perceptive Locomotion Controller for Diverse Terrains
PhysHSI: Towards a Real-World Generalizable and Natural Humanoid-Scene Interaction System Under Review
Human-Like Loco-Manipulation Policy
XHUGWBC: Scalable and General Whole-Body Control for Cross-Humanoid Locomotion CVPR 2026
Unified Cross-Humanoid Locomotion and Loco-Manipulation Controller
🎨 Character Animation

Large-scale motion synthesis and human-scene interaction.

TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization CVPR 2025 · Oral
Foundation Controller for Human-Scene Interaction
Go to Zero: Towards Zero-shot Motion Generation with Million-scale Data ICCV 2025 · Spotlight
Scaled Text-to-Motion Generation Model with Million-scale Data
AMDM: Interactive Character Control with Auto-Regressive Motion Diffusion Models ACM SIGGRAPH 2025 (TOG)
Control Auto-Regressive Motion Diffusion Models with Reinforcement Learning
UniHSI: Unified Human-Scene Interaction via Prompted Chain-of-Contacts ICLR 2024
Character Learning with LLMs