Video Analytics + Digital Health

Measure bias in emotion detection of underrepresented groups

Improve the performance of industry-standard emotion detection models on people of color
The Challenge

Advances in facial expression recognition (FER) technology have enabled many new applications, particularly in areas such as customer satisfaction and telehealth. FER algorithms often operate downstream of a face recognition algorithm and are designed to predict the expression of emotions such as anger, happiness, surprise, and calm. Much of their power comes from the large volumes of training data fed to a FER model, which learns the relationship between intricate combinations of facial movements and the correct emotion label. However, when systematic biases are woven into a model's training data, its performance suffers on underrepresented groups. For facial expression recognition, this often means high error rates in emotion prediction for people of color.
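To make this notion of bias concrete, the sketch below shows one way to disaggregate an emotion classifier's error rate by demographic group and report the largest gap between groups. It is a minimal illustration: the column names, group labels, and toy data are assumptions for the example, not a description of any particular model or dataset.

```python
import pandas as pd


def error_rate_by_group(df: pd.DataFrame, group_col: str = "skin_tone") -> pd.Series:
    """Per-group misclassification rate for an emotion classifier."""
    errors = df["predicted_emotion"] != df["true_emotion"]  # boolean Series, True = error
    return errors.groupby(df[group_col]).mean()


def max_error_gap(rates: pd.Series) -> float:
    """Largest difference in error rate between any two demographic groups."""
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    # Toy predictions; in practice these would come from a FER model's output.
    predictions = pd.DataFrame({
        "skin_tone": ["light", "light", "dark", "dark"],
        "true_emotion": ["happy", "angry", "happy", "angry"],
        "predicted_emotion": ["happy", "angry", "angry", "angry"],
    })
    rates = error_rate_by_group(predictions)
    print(rates)                             # error rate per skin-tone group
    print("max gap:", max_error_gap(rates))  # a large gap signals biased performance
```

A report like this, broken out over finer-grained skin tone and gender categories, is the kind of evidence used to show that a model performs unevenly across groups.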

Our Solution

Interested in improving the fairness and efficacy of FER systems, we used our video analytics platform, Vivo, to design and build a novel dataset that is significantly more uniform across demographics such as gender and skin tone than industry-standard datasets. We conducted an emotion recognition experiment with labelers around the globe to understand how a labeler's demographic attributes interact with their perception of emotion in images from our dataset. Our ongoing research focuses on curating high-quality labels for a growing face recognition and emotion detection dataset, and on retraining frequently used face recognition and emotion detection models so they perform better and more equitably, particularly for traditionally underrepresented groups. A simple balance check of the kind sketched below motivates that dataset design.
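The sketch below shows one way to quantify how uniform a dataset is across demographic attributes such as gender and skin tone. The attribute names, group labels, and the balance metric (ratio of the rarest to the most common group) are illustrative assumptions, not the platform's actual methodology.

```python
from collections import Counter
from typing import Iterable


def balance_ratio(values: Iterable[str]) -> float:
    """Ratio of the rarest group's count to the most common group's count.

    1.0 means the attribute is perfectly uniform across groups; values near 0
    indicate heavy imbalance.
    """
    counts = Counter(values)
    return min(counts.values()) / max(counts.values())


if __name__ == "__main__":
    # Toy per-image metadata; a real dataset would carry one record per image.
    skin_tone = ["I", "II", "III", "IV", "V", "VI"] * 2
    gender = ["woman", "man"] * 6
    print("skin-tone balance:", balance_ratio(skin_tone))  # 1.0 -> uniform
    print("gender balance:", balance_ratio(gender))        # 1.0 -> uniform
```

Tracking a metric like this while collecting data is one way to keep a growing dataset from drifting back toward the imbalances found in industry-standard benchmarks.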

