Preliminary Use of Vision Language Model Driven Extraction of Mouse Behavior Towards Understanding Fear Expression

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated recognition of fear-related behaviors in mice faces challenges including poor detection of rare behavioral classes (e.g., freezing, fleeing), high manual annotation costs, and weak integration of multimodal data. Method: We propose a fine-tuning-free vision-language model (VLM) framework built upon Qwen2.5-VL, incorporating frame-level video preprocessing, structured textual prompting, and in-context learning (ICL) with labeled exemplars to achieve high-accuracy temporal classification of low-frequency behaviors—without updating any model parameters. Contribution/Results: The method achieves robust performance across all behavioral classes (F1 > 0.92), generates minimally intrusive, environment-transferable behavioral vector sequences, and establishes a standardized multimodal behavioral dataset. This enables reproducible, scalable analysis of neural mechanisms underlying fear expression in neuroscience, offering a lightweight, highly robust VLM paradigm for ethology.
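The pipeline described above (frame-level preprocessing plus a structured prompt with labeled in-context exemplars) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`sample_frames`, `build_icl_prompt`), the behavior list, and the prompt wording are all assumptions, and the actual call into Qwen2.5-VL is omitted.

```python
# Hypothetical sketch of the two prompt-side stages the summary names:
# frame-level preprocessing and in-context learning (ICL) with exemplars.
# All names and the behavior vocabulary are illustrative assumptions.

BEHAVIORS = ["freezing", "fleeing", "grooming", "rearing", "walking"]

def sample_frames(n_frames: int, stride: int) -> list[int]:
    """Frame-level preprocessing: keep every `stride`-th frame index."""
    return list(range(0, n_frames, stride))

def build_icl_prompt(exemplars: list[tuple[str, str]], query_clip: str) -> str:
    """Structured prompt: task instruction, labeled exemplar clips, then
    the unlabeled query clip for the VLM to complete."""
    lines = [
        "Classify the mouse's behavior in each clip as one of: "
        + ", ".join(BEHAVIORS) + "."
    ]
    for clip_id, label in exemplars:
        lines.append(f"Clip {clip_id}: behavior = {label}")
    lines.append(f"Clip {query_clip}: behavior =")
    return "\n".join(lines)

frames = sample_frames(n_frames=300, stride=10)
prompt = build_icl_prompt([("ex1", "freezing"), ("ex2", "walking")], "query")
```

In the actual system, the sampled frames and this prompt would be passed together to the VLM; because no parameters are updated, swapping exemplars is the only "training" step.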

📝 Abstract
Integration of diverse data will be a pivotal step towards improving scientific exploration in many disciplines. This work establishes a vision-language model (VLM) pipeline that encodes videos together with text input in order to classify the behaviors of a mouse as it exists in and engages with its environment. Importantly, this model produces a behavioral vector over time for each subject and for each session the subject undergoes. The output is a valuable dataset that few programs can produce with comparable accuracy and minimal user input. Specifically, we use the open-source Qwen2.5-VL model and enhance its performance through prompting, in-context learning (ICL) with labeled examples, and frame-level preprocessing. We found that each of these methods contributes to improved classification, and that combining them yields strong F1 scores across all behaviors, including rare classes such as freezing and fleeing, without any model fine-tuning. Overall, this model will support interdisciplinary researchers studying mouse behavior by enabling them to integrate diverse behavioral features, measured across multiple time points and environments, into a comprehensive dataset that can address complex research questions.
Problem

Research questions and friction points this paper is trying to address.

Classifying mouse behaviors from videos using vision-language models
Generating behavioral vectors over time for individual subjects
Improving detection of rare behaviors such as freezing and fleeing without fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language model classifies mouse behaviors from videos
Uses prompts and in-context learning without model fine-tuning
Generates per-subject behavioral vectors across multiple sessions over time
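The two downstream steps above (producing a behavioral vector over time and scoring per-class F1, including rare classes) can be sketched in plain Python. This is an illustrative reconstruction under assumed names; the paper does not specify this implementation, and the toy labels below are invented for demonstration only.

```python
# Illustrative sketch (not the paper's code): a behavioral vector is one
# predicted label per fixed-length time window in a session, and per-class
# F1 compares it against manual annotations.

def f1_per_class(y_true: list[str], y_pred: list[str]) -> dict[str, float]:
    """Per-class F1 from true/false positives and false negatives."""
    scores = {}
    for cls in set(y_true) | set(y_pred):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores[cls] = 2 * tp / denom if denom else 0.0
    return scores

# Toy example: one label per time window (the "behavioral vector").
behavior_vector = ["freezing", "freezing", "walking", "fleeing"]  # predicted
annotations     = ["freezing", "walking",  "walking", "fleeing"]  # manual
scores = f1_per_class(annotations, behavior_vector)
```

Because rare classes like freezing and fleeing get their own F1 score rather than being averaged away, this per-class view matches how the paper reports robustness (F1 > 0.92 across all classes).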
Paimon Goulart
University of California, Riverside
Jordan Steinhauser
University of California, Riverside
Kylene Shuler
University of California, Riverside
Edward Korzus
University of California, Riverside
Jia Chen
University of California, Riverside
Evangelos E. Papalexakis
Professor and Ross Family Chair, University of California, Riverside
Data Mining · Tensor Decomposition · Graph Mining · Social Media Mining · AI4Science