🤖 AI Summary
Most high-performing vision-language models (VLMs) are closed-source black boxes, which hinders reproducibility and rigorous scientific evaluation, especially for fine-grained video understanding, where high-quality annotated data and standardized benchmarks remain scarce. Method: We build PLM, a fully open, end-to-end reproducible perception-language model, avoiding distillation from proprietary models and instead analyzing standard training pipelines and large-scale synthetic data to identify critical data gaps. We release 2.8 million human-annotated instances of fine-grained video question-answer pairs and spatio-temporally grounded video captions. We further propose PLM-VideoBench, a benchmark suite for evaluating reasoning about the "what," "where," "when," and "how" of a video. Contribution/Results: All components, including data, code, model weights, and training recipes, are publicly released. PLM delivers strong performance on fine-grained video understanding tasks among open-source models, advancing transparent, verifiable multimodal research.
📝 Abstract
Vision-language models are integral to computer vision research, yet many high-performing models remain closed-source, obscuring their data, design and training recipe. The research community has responded by using distillation from black-box models to label training data, achieving strong benchmark results, at the cost of measurable scientific progress. However, without knowing the details of the teacher model and its data sources, scientific progress remains difficult to measure. In this paper, we study building a Perception Language Model (PLM) in a fully open and reproducible framework for transparent research in image and video understanding. We analyze standard training pipelines without distillation from proprietary models and explore large-scale synthetic data to identify critical data gaps, particularly in detailed video understanding. To bridge these gaps, we release 2.8M human-labeled instances of fine-grained video question-answer pairs and spatio-temporally grounded video captions. Additionally, we introduce PLM-VideoBench, a suite for evaluating challenging video understanding tasks focusing on the ability to reason about "what", "where", "when", and "how" of a video. We make our work fully reproducible by providing data, training recipes, code, and models.
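To make the released annotations more concrete, below is a minimal, hypothetical sketch (in Python) of what a fine-grained video question-answer record with a spatio-temporally grounded caption could look like. The class names, field names, and values are illustrative assumptions for exposition only, not the actual released schema; consult the official data release for the real format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class GroundedCaption:
    """A descriptive caption tied to a time span and optional spatial regions (hypothetical schema)."""
    text: str
    time_span: Tuple[float, float]  # (start_sec, end_sec) within the video
    # Each box: (frame_index, x1, y1, x2, y2) in normalized image coordinates.
    boxes: List[Tuple[int, float, float, float, float]] = field(default_factory=list)


@dataclass
class VideoQASample:
    """One fine-grained video question-answer instance (hypothetical schema)."""
    video_id: str
    question: str
    answer: str
    captions: List[GroundedCaption] = field(default_factory=list)


# Example record; all values are made up for illustration.
sample = VideoQASample(
    video_id="example_0001",
    question="What does the person do after picking up the cup?",
    answer="They place it on the shelf.",
    captions=[
        GroundedCaption(
            text="A person picks up a cup from the table.",
            time_span=(2.0, 4.5),
            boxes=[(60, 0.31, 0.42, 0.48, 0.71)],
        )
    ],
)

if __name__ == "__main__":
    print(sample.question, "->", sample.answer)
```

A record of this shape covers the four axes the benchmark targets: the question/answer pair captures "what" and "how", the time span captures "when", and the bounding boxes capture "where".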