RynnEC: Bringing MLLMs into Embodied World

📅 2025-08-19
📈 Citations: 0 · Influential: 0
🤖 AI Summary
To address the scarcity of 3D-annotated data for embodied cognition and the limited fine-grained physical perception and spatial-interaction abilities of existing multimodal large language models, this paper introduces RynnEC, the first video-based multimodal large language model explicitly designed for embodied cognition. Methodologically, RynnEC contributes: (1) a region encoder paired with a mask decoder, enabling frame-level spatial region modeling and interaction; (2) a region-centric video paradigm that powers a fully automated pipeline for synthesizing embodied cognition data; and (3) RynnEC-Bench, a dedicated benchmark for evaluating embodied cognitive capabilities. Built on a general vision-language foundation model, RynnEC achieves state-of-the-art performance in object property understanding, object segmentation, and spatial reasoning, substantially improving embodied agents' fine-grained perception of, and interaction with, the physical world.
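Concretely, region-level video interaction can be pictured as mask-guided pooling: per-frame visual features inside a binary object mask are collapsed into "region tokens" the language model can attend to. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the class name `RegionEncoder`, the mean-pooling scheme, and all dimensions are assumptions for illustration, not RynnEC's actual implementation.

```python
import torch
import torch.nn as nn

class RegionEncoder(nn.Module):
    """Hypothetical sketch: pools visual tokens inside a binary mask into one
    region embedding per frame, projected to the LLM's hidden size."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feat: (T, H, W, C) per-frame visual features; mask: (T, H, W) in {0, 1}
        w = mask.unsqueeze(-1).float()
        # masked mean over spatial positions; clamp avoids division by zero
        pooled = (feat * w).sum(dim=(1, 2)) / w.sum(dim=(1, 2)).clamp(min=1.0)
        return self.proj(pooled)  # (T, llm_dim): one region token per frame

# Toy usage with random tensors standing in for a vision backbone's output.
feat = torch.randn(8, 16, 16, 1024)          # 8 frames of 16x16 patch features
mask = (torch.rand(8, 16, 16) > 0.8).long()  # binary mask of the queried object
region_tokens = RegionEncoder(1024, 4096)(feat, mask)
print(region_tokens.shape)  # torch.Size([8, 4096])
```

The mask decoder would run in the opposite direction, predicting a segmentation mask from the model's hidden states so the agent can ground its answers back into pixels.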

📝 Abstract
We introduce RynnEC, a video multimodal large language model designed for embodied cognition. Built upon a general-purpose vision-language foundation model, RynnEC incorporates a region encoder and a mask decoder, enabling flexible region-level video interaction. Despite its compact architecture, RynnEC achieves state-of-the-art performance in object property understanding, object segmentation, and spatial reasoning. Conceptually, it offers a region-centric video paradigm for the brain of embodied agents, providing fine-grained perception of the physical world and enabling more precise interactions. To mitigate the scarcity of annotated 3D datasets, we propose an egocentric-video-based pipeline for generating embodied cognition data. Furthermore, we introduce RynnEC-Bench, a region-centered benchmark for evaluating embodied cognitive capabilities. We anticipate that RynnEC will advance the development of general-purpose cognitive cores for embodied agents and facilitate generalization across diverse embodied tasks. The code, model checkpoints, and benchmark are available at: https://github.com/alibaba-damo-academy/RynnEC
Problem

Research questions and friction points this paper is trying to address.

3D-annotated data for embodied cognition is scarce
Existing MLLMs offer only coarse, scene-level perception and lack fine-grained region-level video interaction
No video MLLM has been explicitly designed as a cognitive core for embodied agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video multimodal large language model
Region encoder and mask decoder
Egocentric video data generation pipeline (sketched below)
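As a rough illustration of what such a data pipeline could look like, the stub below wires together the stages the paper describes: proposing object masks over egocentric video, then synthesizing region-grounded QA pairs. Every function, field, and value here is a hypothetical placeholder for sketching the data flow, not code from the RynnEC repository.

```python
# Schematic of an egocentric-video data synthesis pipeline, under assumed
# stage boundaries. All helpers are stubs standing in for real components.
from dataclasses import dataclass

@dataclass
class RegionQA:
    video: str
    frame: int
    mask_rle: str   # run-length-encoded object mask (placeholder here)
    question: str
    answer: str

def propose_object_masks(video_path: str) -> list[dict]:
    # Stub: a real pipeline would run a class-agnostic segmenter and tracker
    # over the video to get per-frame object masks.
    return [{"frame": 0, "mask_rle": "<rle>", "category": "mug"}]

def synthesize_qa(region: dict) -> tuple[str, str]:
    # Stub: a real pipeline would prompt a captioner/LLM with the region crop
    # to generate attribute, segmentation, or spatial-reasoning QA pairs.
    return (f"What color is the {region['category']} in <region>?", "white")

def build_dataset(video_path: str) -> list[RegionQA]:
    samples = []
    for region in propose_object_masks(video_path):
        q, a = synthesize_qa(region)
        samples.append(RegionQA(video_path, region["frame"],
                                region["mask_rle"], q, a))
    return samples

print(build_dataset("kitchen_tour.mp4"))
```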
👥 Authors
Ronghao Dang, DAMO Academy, Alibaba Group
Yuqian Yuan, PhD student, Zhejiang University (computer vision, machine learning)
Yunxuan Mao, Zhejiang University (computer vision, robotics)
Kehan Li, Stanford University
Jiangpin Liu, DAMO Academy, Alibaba Group
Zhikai Wang, DAMO Academy, Alibaba Group
Xin Li, DAMO Academy, Alibaba Group
Fan Wang, DAMO Academy, Alibaba Group
Deli Zhao, DAMO Academy, Alibaba Group (generative models, multimodal learning, foundation models)