Social-Cognitive Bias & Machine Learning

Figure: Vectors representing terms related to emotion and gender in large text corpora (through Word2vec; Nicolas, in prep.)

In a recent line of research, I study social-cognitive biases in machine learning models. In particular, with Drs. Alexander Todorov (Psychology) and Arvind Narayanan (Computer Science), I am studying how computer vision models may reflect human intersectional biases. In an initial project, we are examining whether emotion and gender are associated in current computer vision models, resulting in, for example, more "angry" classifications for male faces or more "happy" classifications for female faces. I have also begun to examine associations between race, gender, and emotion in widely used pretrained natural language processing models, such as Word2vec and GloVe. Preliminary results suggest that some of the intersectional stereotypes observed in humans are indeed reflected in many currently available machine learning models.
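To make the embedding-association idea concrete, below is a minimal sketch of how one might quantify gender-emotion associations in pretrained word vectors using cosine similarity, in the spirit of embedding-association tests. It assumes the gensim downloader and its "glove-wiki-gigaword-100" vectors; the word lists are illustrative placeholders, not the stimuli or models used in the studies described above.

```python
"""
Illustrative sketch: gender-emotion associations in pretrained word
embeddings via cosine similarity. The word lists and the choice of the
gensim "glove-wiki-gigaword-100" vectors are assumptions made for this
example only.
"""
import gensim.downloader as api

# Small illustrative word lists (hypothetical, for demonstration only).
MALE_TERMS = ["he", "him", "man", "male", "boy"]
FEMALE_TERMS = ["she", "her", "woman", "female", "girl"]
EMOTION_TERMS = ["angry", "happy", "sad", "fearful", "proud"]


def mean_similarity(vectors, word, attribute_terms):
    """Average cosine similarity between `word` and a set of attribute terms."""
    sims = [vectors.similarity(word, attr)
            for attr in attribute_terms if attr in vectors]
    return sum(sims) / len(sims)


def gender_association(vectors, word):
    """Positive: closer to the male terms; negative: closer to the female terms."""
    return (mean_similarity(vectors, word, MALE_TERMS)
            - mean_similarity(vectors, word, FEMALE_TERMS))


if __name__ == "__main__":
    # Downloads the pretrained GloVe vectors (~130 MB) on first run.
    vectors = api.load("glove-wiki-gigaword-100")
    for emotion in EMOTION_TERMS:
        print(f"{emotion:>8}: {gender_association(vectors, emotion):+.3f}")
```

A score above zero would indicate that an emotion term sits closer to the male than to the female terms in the embedding space; the actual analyses compare such associations across intersectional categories rather than gender alone.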

Gandalf Nicolas
Psychology Ph.D. Candidate