Publications

Deep Learning Models for Automated Classification of Dog Emotional States from Facial Expressions

Tali Boneh-Shitrit, Shir Amir, Annika Bremhorst, Daniel S Mills, Stefanie Riemer, Dror Fried, Anna Zamansky

This paper presents a novel application of deep learning models for the automated classification of dog emotional states from facial expressions, focusing on positive anticipation and negative frustration. Addressing an underexplored area in animal emotion recognition, the study used a carefully constructed dataset of Labrador Retrievers recorded in a controlled experimental setting, with single facial frames extracted from the videos. Several pre-trained deep learning architectures, including ResNet and Vision Transformer (ViT), were compared under both supervised and self-supervised training regimes. The key finding was that features from a self-supervised pre-trained ViT, specifically DINO-ViT, performed best on this binary classification task, achieving the highest accuracy on the validation set. The study also explored interpretability using EigenCAM, observing that the DINO-ViT models attended to multiple salient facial regions (such as the eyes, ears, and nose), which may contribute to their strong performance. This work establishes a baseline for future research in automated canine emotion recognition and highlights deep learning's potential to advance both the scientific understanding of animal emotions and practical animal welfare tools.
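
To illustrate the general idea, the sketch below shows one common way to use frozen self-supervised DINO-ViT features for a two-class problem such as anticipation vs. frustration: load the backbone, extract per-frame embeddings, and train a small linear head on top. This is a minimal sketch under stated assumptions, not the authors' exact pipeline; the model variant (ViT-S/16), the image preprocessing, the hyperparameters, and the "frames/train" folder layout are all placeholders.

# Minimal sketch (assumptions noted above): frozen DINO-ViT features + linear head
# for binary classification of single facial frames.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen self-supervised backbone from the official facebookresearch/dino hub entry.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
backbone.eval().to(device)
for p in backbone.parameters():
    p.requires_grad = False

# Standard ImageNet-style preprocessing for the extracted facial frames.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: frames/train/<anticipation|frustration>/*.jpg
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Linear probe on the 384-dim ViT-S/16 CLS-token embedding, two output classes.
head = nn.Linear(384, 2).to(device)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for frames, labels in train_loader:
        frames, labels = frames.to(device), labels.to(device)
        with torch.no_grad():
            feats = backbone(frames)  # [B, 384] CLS-token features
        loss = criterion(head(feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

Keeping the backbone frozen and training only a linear classifier is a standard way to evaluate the quality of self-supervised features; interpretability methods such as EigenCAM can then be applied to the backbone's activations to visualize which facial regions drive the predictions.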

Now Available in Audio!
Listen to our publication as a podcast.

Disclaimer: This content was generated using AI tools and is intended for informational purposes only.

Check out MELD

Our new facial analysis tool