AnomalyLMM: Bridging Generative Knowledge and Discriminative Retrieval for Text-Based Person Anomaly Search

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses text-driven person anomaly search, tackling two key challenges: fine-grained cross-modal alignment between textual behavior descriptions and visual person appearances, and discriminative retrieval under the sparse anomaly supervision typical of real-world data. The authors introduce the first zero-shot, training-free framework for text-based person anomaly search by leveraging Large Multi-modal Models (LMMs). The method bridges the gap between generative multimodal understanding and discriminative retrieval via three components: masked cross-modal prompting for coarse localization, behavioral saliency prediction for fine-grained attention, and knowledge-aware re-ranking for interpretable refinement. Experimental results on the PAB benchmark demonstrate a +0.96% improvement in Recall@1 over a strong baseline. Qualitative analysis further validates the model's capability to capture subtle anomalous behaviors, e.g., unusual postures or atypical interactions, without requiring annotated anomaly data.

📝 Abstract
With growing public safety demands, text-based person anomaly search has emerged as a critical task, aiming to retrieve individuals with abnormal behaviors via natural language descriptions. Unlike conventional person search, this task presents two unique challenges: (1) fine-grained cross-modal alignment between textual anomalies and visual behaviors, and (2) anomaly recognition under sparse real-world samples. While Large Multi-modal Models (LMMs) excel in multi-modal understanding, their potential for fine-grained anomaly retrieval remains underexplored, hindered by: (1) a domain gap between generative knowledge and discriminative retrieval, and (2) the absence of efficient adaptation strategies for deployment. In this work, we propose AnomalyLMM, the first framework that harnesses LMMs for text-based person anomaly search. Our key contributions are: (1) A novel coarse-to-fine pipeline integrating LMMs to bridge generative world knowledge with retrieval-centric anomaly detection; (2) A training-free adaptation cookbook featuring masked cross-modal prompting, behavioral saliency prediction, and knowledge-aware re-ranking, enabling zero-shot focus on subtle anomaly cues. As the first study to explore LMMs for this task, we conduct a rigorous evaluation on the PAB dataset, the only publicly available benchmark for text-based person anomaly search, with its curated real-world anomalies covering diverse scenarios (e.g., falling, collision, and being hit). Experiments show the effectiveness of the proposed method, surpassing the competitive baseline by +0.96% Recall@1 accuracy. Notably, our method reveals interpretable alignment between textual anomalies and visual behaviors, validated via qualitative analysis. Our code and models will be released for future research.
Problem

Research questions and friction points this paper is trying to address.

Fine-grained cross-modal alignment between text and visual anomalies
Anomaly recognition under sparse real-world samples
Bridging generative knowledge with discriminative retrieval tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free adaptation cookbook for zero-shot anomaly focus
Coarse-to-fine pipeline integrating LMMs for anomaly detection
Masked cross-modal prompting with behavioral saliency prediction
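The coarse-to-fine pipeline above can be illustrated with a minimal sketch: a coarse retrieval stage ranks gallery images by embedding similarity to the text query, and a re-ranking stage fuses those scores with a per-candidate behavioral-saliency score. Note this is an assumption-laden toy with synthetic embeddings, not the paper's implementation; the function names, the linear score fusion, and the `alpha` weight are all hypothetical placeholders for the actual LMM-based prompting and re-ranking.

```python
import numpy as np

def coarse_retrieve(text_emb, gallery_embs, top_k=5):
    """Coarse stage: rank gallery images by cosine similarity to the query text.

    text_emb:     (d,) query text embedding
    gallery_embs: (n, d) gallery image embeddings
    Returns the indices of the top-k candidates and their similarity scores.
    """
    t = text_emb / np.linalg.norm(text_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ t                              # cosine similarity per gallery item
    order = np.argsort(-sims)[:top_k]         # highest similarity first
    return order, sims[order]

def rerank(candidates, coarse_scores, saliency_scores, alpha=0.5):
    """Fine stage: fuse coarse similarity with a behavioral-saliency score.

    In the paper this refinement comes from an LMM (knowledge-aware re-ranking);
    here a simple convex combination stands in for that step.
    """
    fused = alpha * np.asarray(coarse_scores) + (1 - alpha) * np.asarray(saliency_scores)
    order = np.argsort(-fused)
    return [candidates[i] for i in order]

# Toy example: query embedding plus three gallery embeddings.
text_emb = np.array([1.0, 0.0])
gallery = np.array([[0.0, 1.0],    # irrelevant image
                    [1.0, 0.0],    # strong appearance match
                    [0.7, 0.7]])   # partial match

cand, scores = coarse_retrieve(text_emb, gallery, top_k=2)
# Hypothetical saliency scores: the partial match shows the anomalous behavior.
reranked = rerank(list(cand), scores, saliency_scores=[0.0, 1.0])
```

A behavior-aware score can thus promote a candidate whose appearance match is weaker but whose action matches the described anomaly, which is the intuition behind re-ranking with saliency cues.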