Publications

Explainable Automated Pain Recognition in Cats

Marcelo Feighelstein, Lea Henze, Sebastian Meller, Ilan Shimshoni, Ben Hermoni, Michael Berko, Friederike Twele, Alexandra Schütter, Nora Dorn, Sabine Kästner, Lauren Finka, Stelio PL Luna, Daniel S Mills, Holger A Volk, Anna Zamansky

This study investigated the feasibility of automated pain recognition in cats using AI models in a more realistic and heterogeneous setting than previous controlled studies. The researchers compared two approaches on a dataset of 84 client-owned cats of diverse breeds, ages, sexes, and medical histories, with pain levels scored by veterinary experts: a landmark-based (LDM) approach using 48 manually annotated facial landmarks, and a deep learning (DL) approach based on ResNet50. The landmark-based approach performed better, reaching over 77% accuracy in pain detection, while the deep learning approach reached just above 65%. This suggests that the LDM approach is more robust for noisier, naturalistic populations, possibly because it better accounts for variability in cat facial morphology.

Explainable AI methods consistently showed, across both approaches, that the mouth region was most important for machine pain classification, whereas the ears were least important. The study acknowledged limitations, including the dataset size and the use of static images, and suggested that future research focus on larger datasets, video-based analysis, and automated facial landmark detection. The results ultimately support the feasibility of AI-assisted recognition of negative affective states such as pain from cat faces, although these tools should complement, not replace, clinical judgment.
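For readers who want a concrete picture of the two approaches compared above, the sketch below contrasts a landmark-based classifier with a ResNet50 transfer-learning setup. It is a minimal illustration under stated assumptions, not the authors' code: the classifier choice (a random forest), the placeholder arrays `landmarks`, `labels`, and `images`, and all hyperparameters are assumptions made for the example.

```python
# Illustrative sketch only: the paper's actual pipeline, features, and
# hyperparameters are not reproduced here. `landmarks`, `labels`, and
# `images` are hypothetical placeholders for annotated cat-face data.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# --- Landmark-based (LDM) approach ---------------------------------------
# Each sample: 48 (x, y) facial landmarks flattened into a 96-dim vector.
rng = np.random.default_rng(0)
landmarks = rng.random((84, 48 * 2))   # placeholder for annotated landmarks
labels = rng.integers(0, 2, size=84)   # placeholder pain / no-pain labels

ldm_clf = RandomForestClassifier(n_estimators=200, random_state=0)
ldm_acc = cross_val_score(ldm_clf, landmarks, labels, cv=5).mean()
print(f"LDM cross-validated accuracy: {ldm_acc:.2f}")

# --- Deep learning (DL) approach ------------------------------------------
# ResNet50 pretrained on ImageNet, final layer replaced for binary
# pain classification; fine-tuning loop omitted for brevity.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

images = torch.randn(4, 3, 224, 224)   # placeholder batch of face crops
logits = resnet(images)                # (4, 2) pain vs. no-pain scores
print(logits.shape)
```

In this style of setup, region importance could then be probed with standard explainability tools (for example, feature importances for the landmark model or saliency maps for the CNN), which is the kind of analysis the study used to highlight the mouth region.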

Now Available in Audio!
Listen to our publication as a podcast. 

Disclaimer: This content was generated using AI tools and is intended for informational purposes only.

Check out MELD

Our new facial analysis tool