Olaf: Bringing an Animated Character to Life in the Physical World

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the physical embodiment challenges of non-human-proportion animated characters (e.g., Olaf), specifically targeting high-fidelity anthropomorphic locomotion and expressive interaction within spatially constrained environments—confronting issues including mechanical topology mismatch, motion control misalignment, high-impact acoustic noise, and actuator overheating. We propose an animation-reference-guided deep reinforcement learning framework based on Proximal Policy Optimization (PPO), integrating thermal-aware policy optimization and a silent-contact reward mechanism. A concealed asymmetric leg structure coupled with a soft skirt is designed, augmented by custom spherical and planar linkages to achieve stylistically faithful motion. Leveraging MuJoCo-based simulation and temperature-feedback closed-loop control, our hardware implementation achieves autonomous bipedal walking synchronized with facial expressions. Experimental results demonstrate a 28 dB reduction in impact noise and a 41% decrease in peak temperature rise, establishing a new benchmark for cost-effective, expressive humanoid robots.
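The summary above combines three reward ideas: tracking an animation reference, penalizing loud foot impacts, and penalizing actuator overheating. A minimal sketch of such a composite reward is below; the weights, the exponential tracking kernel, and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shaped_reward(qpos, qpos_ref, foot_impact_force, motor_temps,
                  temp_limit=80.0, w_track=1.0, w_noise=0.05, w_temp=0.5):
    """Composite reward sketch (all weights hypothetical):
    animation tracking + silent-contact penalty + thermal penalty."""
    # Animation-reference tracking: exponential of negative pose error,
    # so a perfect match yields 1.0 and errors decay the reward smoothly.
    track = np.exp(-np.sum((np.asarray(qpos) - np.asarray(qpos_ref)) ** 2))
    # Silent-contact term: large foot impact forces correlate with
    # audible contact noise, so penalize them directly.
    noise_pen = w_noise * max(float(foot_impact_force), 0.0)
    # Thermal term: penalize only when the hottest motor exceeds the limit.
    temp_pen = w_temp * max(float(np.max(motor_temps)) - temp_limit, 0.0)
    return w_track * track - noise_pen - temp_pen
```

In a PPO setup these terms would be summed per simulation step; the relative weights trade stylistic fidelity against quietness and thermal margin.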

📝 Abstract
Animated characters often move in non-physical ways and have proportions that are far from a typical walking robot. This provides an ideal platform for innovation in both mechanical design and stylized motion control. In this paper, we bring Olaf to life in the physical world, relying on reinforcement learning guided by animation references for control. To create the illusion of Olaf's feet moving along his body, we hide two asymmetric legs under a soft foam skirt. To fit actuators inside the character, we use spherical and planar linkages in the arms, mouth, and eyes. Because the walk cycle results in harsh contact sounds, we introduce additional rewards that noticeably reduce impact noise. The large head, driven by small actuators in the character's slim neck, creates a risk of overheating, amplified by the costume. To keep actuators from overheating, we feed temperature values as additional inputs to policies, introducing new rewards to keep them within bounds. We validate the efficacy of our modeling in simulation and on hardware, demonstrating an unmatched level of believability for a costumed robotic character.
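The abstract notes that temperature values are fed to the policy as additional inputs. A minimal sketch of that observation augmentation is below; the field names, normalization scheme, and limit value are assumptions for illustration, not the paper's actual interface.

```python
import numpy as np

def build_observation(joint_pos, joint_vel, phase, motor_temps,
                      temp_limit=80.0):
    """Policy observation sketch: proprioception plus normalized motor
    temperatures, so the policy can trade style against heating.
    (Layout and normalization are illustrative, not from the paper.)"""
    temps_norm = np.asarray(motor_temps) / temp_limit  # ~1.0 means at limit
    return np.concatenate([joint_pos, joint_vel, [phase], temps_norm])
```

Normalizing by the limit keeps the thermal channel in roughly the same range as the rest of the observation, which helps policy networks use it.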
Problem

Research questions and friction points this paper is trying to address.

Creating a physical robot from an animated character with non-physical proportions
Reducing impact noise and overheating in a costumed robotic system
Validating stylized motion control using reinforcement learning and animation references
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning guided by animation references
Asymmetric legs hidden under a soft foam skirt
Temperature values as inputs to prevent overheating
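Temperature-feedback control of this kind is often paired in simulation with a simple lumped-parameter motor thermal model, since MuJoCo does not simulate heating natively. A one-state Euler sketch is below; the resistance, thermal parameters, and function name are hypothetical placeholders, and the paper's actual model may differ.

```python
def step_motor_temp(T, current, dt, R=1.2, R_th=4.0, C_th=60.0, T_amb=25.0):
    """One Euler step of a lumped first-order motor thermal model:
    the winding heats with I^2*R and cools toward ambient through a
    thermal resistance. All parameter values are illustrative."""
    heat = current ** 2 * R       # Joule heating [W]
    cool = (T - T_amb) / R_th     # loss to ambient [W]
    return T + dt * (heat - cool) / C_th
```

Running such a model alongside the physics simulation gives the policy a temperature signal to observe and a quantity for thermal rewards to bound.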