Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses comprehensive region-level visual understanding (recognition, interpretation, description, and segmentation) for arbitrary regions in both images and videos. To this end, the authors propose a unified region-aware framework that integrates SAM 2, LLMs, and multimodal feature alignment. Methodologically, it introduces a novel Semantic Perceiver module to bridge SAM 2's visual features with the semantic space of large language models (LLMs), constructs the first million-scale, region-level streaming video captioning dataset, and designs a customized data distillation and augmentation pipeline. Experiments demonstrate state-of-the-art performance across multiple region-understanding benchmarks, with 1.2–2.4× faster inference and significantly lower GPU memory consumption, enabling efficient end-to-end deployment.

📝 Abstract
We present Perceive Anything Model (PAM), a conceptually straightforward and efficient framework for comprehensive region-level visual understanding in images and videos. Our approach extends the powerful segmentation model SAM 2 by integrating Large Language Models (LLMs), enabling simultaneous object segmentation and the generation of diverse, region-specific semantic outputs, including categories, label definitions, functional explanations, and detailed captions. A key component, the Semantic Perceiver, is introduced to efficiently transform SAM 2's rich visual features, which inherently carry general vision, localization, and semantic priors, into multi-modal tokens for LLM comprehension. To support robust multi-granularity understanding, we also develop a dedicated data refinement and augmentation pipeline, yielding a high-quality dataset of 1.5M image and 0.6M video region-semantic annotations, including novel region-level streaming video caption data. PAM is designed to be lightweight and efficient, while also demonstrating strong performance across a diverse range of region understanding tasks. It runs 1.2-2.4x faster and consumes less GPU memory than prior approaches, offering a practical solution for real-world applications. We believe that our effective approach will serve as a strong baseline for future research in region-level visual understanding.
Problem

Research questions and friction points this paper is trying to address.

Comprehensive region-level visual understanding in images and videos
Integrating Large Language Models with segmentation for multi-modal outputs
Efficient and lightweight framework for real-world visual understanding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with SAM 2 for multi-modal outputs
Introduces the Semantic Perceiver to transform SAM 2 visual features into multi-modal tokens for the LLM
Builds a refined dataset (1.5M image and 0.6M video region-semantic annotations) for robust multi-granularity understanding
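To make the bridging idea concrete, here is a minimal sketch of how a Semantic Perceiver-style module could pool region-level visual features into a fixed set of tokens in an LLM's embedding space. All dimensions, names, and the cross-attention-style pooling are illustrative assumptions for exposition, not the paper's actual architecture or trained weights.

```python
import numpy as np

# Assumed dimensions (hypothetical, not from the paper).
SAM_FEAT_DIM = 256      # assumed SAM 2 feature dimension
LLM_EMBED_DIM = 2048    # assumed LLM hidden size
NUM_QUERY_TOKENS = 16   # assumed number of output multi-modal tokens

rng = np.random.default_rng(0)

# Parameters are random here; in a real system they would be learned.
W_proj = rng.standard_normal((SAM_FEAT_DIM, LLM_EMBED_DIM)) * 0.02
queries = rng.standard_normal((NUM_QUERY_TOKENS, SAM_FEAT_DIM)) * 0.02

def perceive(region_feats: np.ndarray) -> np.ndarray:
    """Compress N region features into a fixed set of LLM-space tokens.

    region_feats: (N, SAM_FEAT_DIM) visual features for one region.
    Returns: (NUM_QUERY_TOKENS, LLM_EMBED_DIM) tokens fed to the LLM.
    """
    # Cross-attention-style pooling: each query attends over all features.
    attn = queries @ region_feats.T / np.sqrt(SAM_FEAT_DIM)   # (Q, N)
    attn = np.exp(attn - attn.max(axis=-1, keepdims=True))    # stable softmax
    attn /= attn.sum(axis=-1, keepdims=True)
    pooled = attn @ region_feats                              # (Q, SAM_FEAT_DIM)
    return pooled @ W_proj                                    # (Q, LLM_EMBED_DIM)

feats = rng.standard_normal((100, SAM_FEAT_DIM))  # e.g., 100 patch features
tokens = perceive(feats)
print(tokens.shape)  # (16, 2048)
```

The design point illustrated is that the segmentation backbone's features are not re-extracted by a separate vision encoder; a small learned module maps them directly into token space, which is one plausible source of the reported speed and memory savings.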