This study addresses the critical need for non-invasive, animal-based indicators of affective states in livestock, focusing specifically on vocalizations in dairy cattle. Its primary contribution is the largest pre-processed dataset to date of vocalizations from 20 lactating, multiparous adult dairy cows, collected in a controlled setting during negative affective states induced by visual isolation. Using this dataset, the researchers developed two computational frameworks, a deep learning model and an explainable machine learning model, for two key tasks: classifying high-frequency (HF) and low-frequency (LF) cattle calls, and identifying individual cows from their vocalizations. For call-type classification, the models achieved high accuracy, 87.2% for the explainable model and 89.4% for the deep learning model, outperforming previous state-of-the-art approaches while exhibiting less overfitting. For individual cow identification, the explainable model reached 68.9% accuracy and the deep learning model 72.5%, with HF calls carrying more individuality information than LF calls. The study also identified the vocal features most important for these classifications: AMvar, AMrate, AMExtent, formant dispersal, and Wiener entropy mean for call type, and sound duration for individual identification. These results underscore the potential of machine learning analysis of cattle vocalizations as a tool for assessing emotional valence and informing precision livestock farming practices.
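As a rough illustration of the explainable, feature-based side of such a pipeline, the sketch below extracts a handful of simplified acoustic descriptors (amplitude-modulation variation and extent, Wiener entropy mean, call duration) from call recordings and trains a classifier whose feature importances can be inspected. This is a minimal sketch under stated assumptions, not the paper's implementation: librosa and scikit-learn, the random forest as the explainable model, the simplified feature definitions, and the file layout and "HF_*"/"LF_*" naming convention are all illustrative choices.

```python
# Minimal sketch, not the authors' code: the feature computations below are
# simplified stand-ins for the paper's AM and entropy features, and the
# dataset layout / naming convention is hypothetical.
import glob
from pathlib import Path

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(wav_path):
    """Small acoustic feature vector for one pre-processed call recording."""
    y, sr = librosa.load(wav_path, sr=None)
    envelope = librosa.feature.rms(y=y)[0]             # amplitude envelope per frame
    flatness = librosa.feature.spectral_flatness(y=y)  # spectral flatness ~ Wiener entropy
    return np.array([
        np.var(envelope),                  # crude amplitude-modulation variation
        envelope.max() - envelope.min(),   # crude amplitude-modulation extent
        flatness.mean(),                   # Wiener entropy mean
        librosa.get_duration(y=y, sr=sr),  # call duration in seconds
    ])

# Hypothetical layout: one WAV per call, "HF_*" / "LF_*" filename prefixes.
wav_paths = sorted(glob.glob("calls/*.wav"))
labels = np.array([1 if Path(p).name.startswith("HF") else 0 for p in wav_paths])

X = np.stack([extract_features(p) for p in wav_paths])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, stratify=labels, random_state=0
)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_)  # which cues separate HF from LF
```

Inspecting the fitted model's feature importances is what makes this style of model "explainable": it reveals which acoustic cues drive the HF/LF decision, analogous to the feature rankings the study reports.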
Non-Invasive Computer Vision-Based Fruit Fly Larvae Differentiation: Ceratitis capitata and Bactrocera zonata
This paper proposes a novel, non-invasive method using computer vision to differentiate between larvae of the fruit fly species Ceratitis capitata and Bactrocera zonata.