Emotion Recognition


Face recognition software is among the most controversial applications of artificial intelligence. Fundamental questions regarding bias, racial profiling, privacy, civil liberties and accuracy form the core complaints. But facial recognition is also one of the most ubiquitous forms of AI. If you’ve ever posted a photo or video to Facebook, WeChat, Instagram or TikTok; applied for a driver’s license or passport; or engaged in any activity where an image of your face can be connected to your name, then that image has likely become a template within a facial recognition database.

While much of the concern around facial recognition centres on the terms and conditions of its construction of individual identity, the technology has many other applications. Emotion recognition is among the most popular, in part because of the longstanding desire to understand facial expression as a clear and verifiable representation of a person’s emotional state. The debate around emotion and facial expression has been active for centuries. Charles Darwin’s treatise The Expression of the Emotions in Man and Animals (1872) set the groundwork for a universal theory of expression that was revived in the 1960s and 70s by the American psychologist Paul Ekman. Ekman proposed that humans worldwide could reliably infer emotional states from facial expressions, which he reduced to seven basic emotions: happiness, sadness, anger, contempt, disgust, fear and surprise. Not surprisingly, a growing body of research quickly emerged to counter Ekman’s proposition, arguing that his definitions of facial expression were too limited and that a broader physiological analysis, along with an understanding of context, was needed.

However, much of the emotion recognition software produced today has its basis in Ekman’s thesis—a reductive set of seven emotions and a belief in the cultural universality of those expressions. This framework is especially amenable to computation and computer vision—and equally appealing to the magical thinking that drives many of the marketing campaigns and advertising schemes of contemporary life.


This interactive encounter with emotion recognition is the result of a collaboration with Vancouver’s Centre for Digital Media, under the direction of Larry Bafia. Graduate students in this program worked as a team to design, program and produce this wall. Many thanks to that talented team:

Valentina Forte-Hernandez: Project Manager
Courtney Clarkson: UX / UI Designer
Julia Read: UX / UI Designer 
Vlad Ryzhov: Software Programmer 

Vlad Ryzhov contributed additional post-production programming support.

Founded in 2007, the Centre for Digital Media is a unique graduate program whose degree is imprinted with the seals of its four partner institutions: University of British Columbia, Emily Carr University of Art + Design, Simon Fraser University and British Columbia Institute of Technology.

The emotion recognition software used in this interactive installation is produced by Visage Technologies, founded in Linköping, Sweden.

Emotion Recognition, 2020–22, interactive installation

Digital Slideshow produced for The Imitation Game exhibition