🤖 AI Summary
Autonomous driving research currently lacks a comprehensive benchmark for human behavior understanding (motion, trajectory, and intent) across diverse real-world scenarios.
Method: We introduce the first large-scale, multimodal benchmark for human behavior understanding in autonomous driving, integrating data from the Waymo Open Dataset, YouTube videos, and self-collected field data, totaling 57K action segments and 1.73M frames. The benchmark provides fine-grained, multi-dimensional annotations, including natural-language action descriptions, intention labels, and safety-critical behavior tags, generated via a rigorous human-in-the-loop annotation pipeline. We further propose a standardized evaluation protocol covering action prediction, generation, and behavior-oriented question answering.
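The summary does not publish a data schema, so as a rough illustration of how such multi-dimensional, per-clip annotations might be organized, here is a minimal hypothetical Python sketch; every field name below is an assumption for illustration, not the benchmark's actual format:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical annotation record: field names are illustrative assumptions,
# not MMHU's actual data format.
@dataclass
class BehaviorAnnotation:
    clip_id: str                   # unique ID of the motion clip
    source: str                    # e.g. "waymo", "youtube", "self_collected"
    num_frames: int                # frame count of the clip
    trajectory: List[List[float]]  # per-frame (x, y) position of the human
    motion_caption: str            # natural-language description of the action
    intention: str                 # inferred intention label
    safety_critical: bool          # safety-critical behavior tag

sample = BehaviorAnnotation(
    clip_id="clip_000001",
    source="waymo",
    num_frames=30,
    trajectory=[[0.0, 0.0], [0.1, 0.4]],  # truncated for illustration
    motion_caption="A pedestrian steps off the curb and crosses the lane.",
    intention="crossing the road",
    safety_critical=True,
)
print(sample.motion_caption)
```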
Contribution/Results: This benchmark fills a critical gap in systematic, cross-scenario, multi-task evaluation of human behavior, providing a scalable, high-quality data resource and a unified evaluation infrastructure to advance behavioral perception and reasoning in autonomous driving systems.
📝 Abstract
Humans are integral to the transportation ecosystem, and understanding their behaviors is crucial for developing safe driving systems. Although recent work has explored various aspects of human behavior, such as motion, trajectories, and intention, a comprehensive benchmark for evaluating human behavior understanding in autonomous driving remains unavailable. In this work, we propose **MMHU**, a large-scale benchmark for human behavior analysis featuring rich annotations: human motion and trajectories, text descriptions of human motions, human intention, and critical behavior labels relevant to driving safety. Our dataset comprises 57k human motion clips and 1.73M frames gathered from diverse sources, including established driving datasets such as Waymo, in-the-wild videos from YouTube, and self-collected data. A human-in-the-loop annotation pipeline is developed to generate rich behavior captions. We provide a thorough dataset analysis and benchmark multiple tasks, ranging from motion prediction to motion generation and human behavior question answering, thereby offering a broad evaluation suite. Project page: https://MMHU-Benchmark.github.io
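The abstract does not specify the metrics behind its evaluation suite; trajectory-forecasting tasks like the motion-prediction track are commonly scored with average and final displacement error (ADE/FDE). Whether MMHU uses exactly these metrics is an assumption; a minimal NumPy sketch, assuming predicted and ground-truth trajectories of shape (T, 2):

```python
import numpy as np

def ade_fde(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Average and final displacement error between two (T, 2) trajectories.

    Standard trajectory-forecasting metrics; their use here is an
    assumption, as the abstract does not name MMHU's exact metrics.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-frame Euclidean error
    return float(dists.mean()), float(dists[-1])

# Toy example: a 3-step prediction drifting right of the ground truth.
pred = np.array([[0.0, 0.0], [0.1, 0.5], [0.2, 1.0]])
gt   = np.array([[0.0, 0.0], [0.0, 0.5], [0.0, 1.1]])
print(ade_fde(pred, gt))  # -> (ADE, FDE)
```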