🤖 AI Summary
Automated recognition of fear-related behaviors in mice faces challenges including poor detection of rare behavioral classes (e.g., freezing, fleeing), high manual annotation costs, and weak integration of multimodal data. Method: We propose a fine-tuning-free vision-language model (VLM) framework built upon Qwen2.5-VL, incorporating frame-level video preprocessing, structured textual prompting, and in-context learning (ICL) with labeled exemplars to achieve high-accuracy temporal classification of low-frequency behaviors—without updating any model parameters. Contribution/Results: The method achieves robust performance across all behavioral classes (F1 > 0.92), generates minimally intrusive, environment-transferable behavioral vector sequences, and establishes a standardized multimodal behavioral dataset. This enables reproducible, scalable analysis of neural mechanisms underlying fear expression in neuroscience, offering a lightweight, highly robust VLM paradigm for ethology.
📝 Abstract
Integrating diverse data modalities is a pivotal step toward advancing scientific exploration across many disciplines. This work establishes a vision-language model (VLM) that encodes video alongside text input to classify the behaviors of a mouse as it explores and interacts with its environment. Importantly, the model produces a behavioral vector over time for each subject and each session the subject undergoes, yielding a valuable dataset that few existing tools can produce with comparable accuracy and minimal user input. Specifically, we use the open-source Qwen2.5-VL model and enhance its performance through structured prompting, in-context learning (ICL) with labeled examples, and frame-level preprocessing. We found that each of these methods contributes to improved classification, and that combining them yields strong F1 scores across all behaviors, including rare classes such as freezing and fleeing, without any model fine-tuning. Overall, this model will support interdisciplinary researchers studying mouse behavior by enabling them to integrate diverse behavioral features, measured across multiple time points and environments, into a comprehensive dataset that can address complex research questions.
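The pipeline described above — frame-level preprocessing followed by an ICL prompt that interleaves labeled exemplar clips before the query clip — can be sketched as follows. This is a minimal illustration under assumptions: the helper names (`sample_frames`, `build_icl_messages`), the behavior label set, and the exact target frame rate are hypothetical, not the paper's published configuration; only the chat-style message schema matches the format commonly used with Qwen2.5-VL.

```python
# Hypothetical sketch of the fine-tuning-free ICL setup described in the abstract.
# Label set and sampling rate are illustrative assumptions, not the paper's exact values.

BEHAVIOR_LABELS = ["walking", "grooming", "rearing", "freezing", "fleeing"]  # assumed labels

def sample_frames(n_frames: int, fps: float, target_fps: float = 2.0) -> list[int]:
    """Frame-level preprocessing: downsample a video to a fixed rate by frame index."""
    step = max(1, round(fps / target_fps))
    return list(range(0, n_frames, step))

def build_icl_messages(exemplars, query_frames):
    """Assemble a chat-format prompt: task instruction, labeled exemplars, then the query clip.

    exemplars: list of (frames, ground_truth_label) pairs used as in-context examples.
    query_frames: frames of the clip to classify; the model answers with a label only.
    """
    system = {
        "role": "system",
        "content": ("Classify the mouse's behavior in the clip as one of: "
                    + ", ".join(BEHAVIOR_LABELS) + ". Answer with the label only."),
    }
    messages = [system]
    for frames, label in exemplars:
        # Each labeled exemplar becomes a user turn (images) + assistant turn (label),
        # steering the model without updating any parameters.
        messages.append({"role": "user",
                         "content": [{"type": "image", "image": f} for f in frames]})
        messages.append({"role": "assistant", "content": label})
    # Finally, the unlabeled query clip the model must classify.
    messages.append({"role": "user",
                     "content": [{"type": "image", "image": f} for f in query_frames]})
    return messages
```

Classifying each clip in a session this way, then concatenating the per-clip labels in temporal order, would yield the behavioral vector sequence the abstract describes.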