🤖 AI Summary
To address the labor-intensive, costly, and consequently data-scarce nature of manual ear action unit (AU) annotation in equine affective assessment, this paper proposes an automated ear AU detection framework integrating CNN-based video feature extraction, optical-flow motion modeling, and RNN-based temporal analysis. It is a first systematic investigation into combining deep learning with classical motion analysis for dynamic equine ear recognition, targeting the annotation bottleneck imposed by EquiFACS. Evaluated on a public equine video dataset, the framework achieves 87.5% accuracy in classifying the presence of ear movement, outperforming the baseline methods considered. The implementation is fully open-sourced, providing a reproducible and extensible technical foundation for quantitative animal behavior analysis and welfare assessment.
📝 Abstract
The Equine Facial Action Coding System (EquiFACS) enables the systematic annotation of facial movements through distinct Action Units (AUs). It serves as a crucial tool for assessing affective states in horses by identifying subtle facial expressions associated with discomfort. However, the field of horse affective state assessment is constrained by the scarcity of annotated data, as manually labelling facial AUs is both time-consuming and costly. To address this challenge, automated annotation systems are essential for leveraging existing datasets and improving affective state detection tools. In this work, we study different methods for specific ear AU detection and localization from horse videos. We leverage past works on deep learning-based video feature extraction combined with recurrent neural networks for the video classification task, as well as a classical optical-flow-based approach. We achieve 87.5% accuracy in classifying the presence of ear movement on a public horse video dataset, demonstrating the potential of our approach. We discuss future directions to develop these systems, with the aim of bridging the gap between automated AU detection and practical applications in equine welfare and veterinary diagnostics. Our code will be made publicly available at https://github.com/jmalves5/read-my-ears.
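To make the motion-based branch of the abstract concrete, the sketch below shows how "classifying the presence of ear movement" from frame-to-frame motion might look in its simplest form. This is not the paper's implementation: it uses crude block matching as a stand-in for dense optical flow, and the function names, block/search sizes, and decision threshold are all illustrative assumptions.

```python
import numpy as np

def block_flow_magnitude(prev, curr, block=8, search=4):
    """Mean motion magnitude via crude block matching: for each block
    in `prev`, find the displacement (within +/-search px) minimising
    SSD against `curr`, preferring zero displacement on ties."""
    h, w = prev.shape
    mags = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = prev[y:y + block, x:x + block].astype(float)
            # Start from zero displacement so uniform regions report no motion.
            best = ((curr[y:y + block, x:x + block].astype(float) - ref) ** 2).sum()
            best_dv = (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = curr[yy:yy + block, xx:xx + block].astype(float)
                        ssd = ((cand - ref) ** 2).sum()
                        if ssd < best:
                            best, best_dv = ssd, (dy, dx)
            mags.append(np.hypot(*best_dv))
    return float(np.mean(mags))

def has_ear_movement(frames, threshold=0.1):
    """Flag a clip as containing movement when the average block-flow
    magnitude over consecutive frame pairs exceeds `threshold`
    (the threshold value here is purely illustrative)."""
    scores = [block_flow_magnitude(a, b) for a, b in zip(frames, frames[1:])]
    return bool(np.mean(scores) > threshold)

# Synthetic demo: a bright patch (stand-in for an ear region) shifting
# rightwards across frames, versus a perfectly static pair of frames.
static = np.zeros((64, 64), np.uint8)
static[20:36, 20:36] = 255
moving = []
for i in range(3):
    f = np.zeros((64, 64), np.uint8)
    f[20:36, 20 + 4 * i:36 + 4 * i] = 255
    moving.append(f)

print(has_ear_movement([static, static.copy()]))  # -> False
print(has_ear_movement(moving))                   # -> True
```

In the paper's pipeline this hand-crafted motion cue is replaced by proper optical flow, and the thresholding step by learned CNN features fed to a recurrent network that models the temporal dynamics of the clip.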