Visually-grounded Humanoid Agents

Date: 2026-04-09
Citations: 0
Influential citations: 0
AI Summary
This work addresses the limitation of existing digital humans, which typically rely on predefined scripts and struggle to perform goal-directed behaviors autonomously in novel environments. The authors propose a novel two-layer (world–agent) framework that enables fully autonomous behavior generation from visual input alone. By integrating occlusion-aware 3D Gaussian scene reconstruction with Gaussian avatar modeling, the system leverages first-person RGB-D perception, spatially aware iterative planning, and full-body motion control to allow digital humans to perceive, reason, and act solely based on visual observations and task goals within any reconstructed 3D environment. Experiments demonstrate that the method significantly improves task success rates and reduces collisions across diverse real-world scenes, outperforming current planning approaches and ablated variants.
πŸ“ Abstract
Digital human generation has been studied for decades and supports a wide range of real-world applications. However, most existing systems are passively animated, relying on privileged state or scripted control, which limits scalability to novel environments. We instead ask: how can digital humans actively behave using only visual observations and specified goals in novel scenes? Achieving this would enable populating any 3D environment with digital humans at scale that exhibit spontaneous, natural, goal-directed behaviors. To this end, we introduce Visually-grounded Humanoid Agents, a coupled two-layer (world-agent) paradigm that replicates humans at multiple levels: they look, perceive, reason, and behave like real people in real-world 3D scenes. The World Layer reconstructs semantically rich 3D Gaussian scenes from real-world videos via an occlusion-aware pipeline and accommodates animatable Gaussian-based human avatars. The Agent Layer transforms these avatars into autonomous humanoid agents, equipping them with first-person RGB-D perception and enabling them to perform accurate, embodied planning with spatial awareness and iterative reasoning, which is then executed at the low level as full-body actions to drive their behaviors in the scene. We further introduce a benchmark to evaluate humanoid-scene interaction in diverse reconstructed environments. Experiments show our agents achieve robust autonomous behavior, yielding higher task success rates and fewer collisions than ablations and state-of-the-art planning methods. This work enables active digital human population and advances human-centric embodied AI. Data, code, and models will be open-sourced.
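The abstract describes agents that loop through first-person RGB-D perception, iterative spatially aware planning, and low-level full-body execution. A minimal sketch of that perceive-plan-act loop is below; all class and method names are hypothetical illustrations, not the paper's actual API.

```python
# Hypothetical sketch of the perceive -> plan -> act loop from the abstract.
# None of these names come from the paper; they only illustrate the control flow.
from dataclasses import dataclass, field


@dataclass
class Observation:
    rgb: list    # placeholder for an egocentric RGB frame
    depth: list  # placeholder for the aligned depth map


@dataclass
class HumanoidAgent:
    goal: str
    history: list = field(default_factory=list)

    def perceive(self, scene) -> Observation:
        # In the paper, an egocentric RGB-D view would be rendered from the
        # reconstructed Gaussian scene; stubbed here with empty buffers.
        return Observation(rgb=[], depth=[])

    def plan(self, obs: Observation) -> str:
        # Iterative planning: choose the next high-level step given the goal
        # and the record of previous steps.
        step = f"step {len(self.history) + 1} toward '{self.goal}'"
        self.history.append(step)
        return step

    def act(self, plan: str) -> str:
        # A low-level controller would turn the plan into full-body motion.
        return f"executed {plan}"


if __name__ == "__main__":
    agent = HumanoidAgent(goal="reach the kitchen")
    for _ in range(3):
        obs = agent.perceive(scene=None)
        print(agent.act(agent.plan(obs)))
```

The loop is deliberately stateless apart from the plan history, mirroring the abstract's claim that behavior is driven solely by visual observations and the task goal.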
Problem

Research questions and friction points this paper is trying to address.

visually-grounded, humanoid agents, embodied AI, autonomous behavior, 3D scene interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visually-grounded Humanoid Agents, 3D Gaussian Splatting, Embodied Planning, First-person RGB-D Perception, Autonomous Digital Humans
Authors

Hang Ye, Peking University
Xiaoxuan Ma, Peking University (Computer Vision, Digital Humans, AI for Science)
Fan Lu, Tongji University
Wayne Wu, UCLA (Computer Vision, Robotics, Computer Graphics, Virtual Humans)
Kwan-Yee Lin, University of Michigan
Yizhou Wang, Peking University