Publications

Do AI Models “Like” Black Dogs? Towards Exploring Perceptions of Dogs with Vision-Language Models

Marcelo Feighelstein, Einat Kovalyo, Jennifer Abrams, Sarah-Elisabeth Byosiere, Anna Zamansky

This paper investigates how large-scale pretrained vision-language models, particularly OpenAI's CLIP, reflect and entrench human biases concerning pets' physical attributes and adoptability. Because CLIP is trained on vast amounts of uncurated web data and therefore absorbs the biases present there, the authors propose using such models as a lens on human perceptions in human-animal relationships. Preliminary experiments pairing dog images with fifty dog-related phrases revealed that images of white dogs were the best match for phrases like "adoptable dog" and "I want to adopt this dog", raising questions related to the "black dog syndrome". The study also found negative correlations between the phrase "mixed-breed dog" and phrases such as "likeable dog" or "special dog". The paper suggests these models can offer insights into human preferences and adoptability factors, while emphasizing the need for caution, larger datasets, and a better understanding of the models' internal decision-making processes.
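For readers curious how this kind of image-phrase matching works in practice, below is a minimal sketch of CLIP's zero-shot similarity scoring using the Hugging Face transformers implementation. The phrases echo examples from the abstract, but the phrase list, model checkpoint, and image file are illustrative placeholders, not the study's actual stimuli or setup.

```python
# Minimal sketch: scoring one dog image against candidate phrases with CLIP.
# Assumes the openai/clip-vit-base-patch32 checkpoint and a local "dog.jpg";
# both are stand-ins, not the paper's actual data or configuration.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

phrases = [
    "adoptable dog",
    "I want to adopt this dog",
    "likeable dog",
    "special dog",
    "mixed-breed dog",
]
image = Image.open("dog.jpg")  # hypothetical input image

# Tokenize the phrases and preprocess the image in a single batch.
inputs = processor(text=phrases, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into a probability distribution over the candidate phrases.
probs = outputs.logits_per_image.softmax(dim=1)
for phrase, p in zip(phrases, probs[0].tolist()):
    print(f"{phrase}: {p:.3f}")
```

Repeating this scoring over many dog images is, roughly, how one could surface which visual attributes a model associates most strongly with phrases like "adoptable dog".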

Now Available in Audio!
Listen to our publication as a podcast. 

Disclaimer: This content was generated using AI tools and is intended for informational purposes only.

Check out MELD

Our new facial analysis tool