
For my final project, I wanted to understand facial recognition technologies within computer vision. More specifically, I was interested in how emotion recognition could be used to accurately assess a person's emotions. Emotions are ephemeral and personal to us, and the idea that we can capture and quantify emotion as a data point is still in its infancy within technology, if it is even possible at all. This idea pushes us to dissect our innermost selves: what exactly is emotion, and can it ever be quantified, even by someone other than you?
I thought the best place to start this investigation was by using my own face with these technologies. I tested five different face recognition/emotion detection APIs (Google, Microsoft, Face API, CLM Tracker, and Face++) using a range of facial expressions (neutral, happy, and sad) to assess how the algorithms differed from one another. Overall there was a disparity not only among the different algorithms, but also between their readings and what I personally assessed myself as feeling.
My face, my emotions across five different platforms

Google was the most accurate in reading my different emotions, not surprisingly.
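For readers who want to try this themselves, Google exposes face detection through its Cloud Vision API, which reports coarse likelihoods for joy, sorrow, anger, and surprise rather than numeric scores. The snippet below is a minimal sketch (not my exact setup) that assumes the google-cloud-vision Python client is installed and credentials are configured; selfie.jpg is a placeholder filename.

```python
# Minimal sketch: emotion likelihoods from Google Cloud Vision face detection.
# Assumes the google-cloud-vision client library is installed and
# GOOGLE_APPLICATION_CREDENTIALS is set; "selfie.jpg" is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("selfie.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

for face in response.face_annotations:
    # The API returns likelihood buckets (VERY_UNLIKELY ... VERY_LIKELY),
    # not probabilities.
    print("joy:     ", vision.Likelihood(face.joy_likelihood).name)
    print("sorrow:  ", vision.Likelihood(face.sorrow_likelihood).name)
    print("anger:   ", vision.Likelihood(face.anger_likelihood).name)
    print("surprise:", vision.Likelihood(face.surprise_likelihood).name)
```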
Microsoft
Microsoft says I am "a woman who is smiling looking at the camera" when I display my neutral face.


Microsoft always detected smiling whether my face was neutral, happy, or sad, leading me to think its accuracy wasn't as high as Google's.
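At the time I was testing, Microsoft's Face API could return per-emotion confidence scores alongside its face detection (Microsoft has since announced it is retiring emotion inference from the service). The sketch below shows roughly what such a REST call looks like; the endpoint, key, and filename are placeholders.

```python
# Hedged sketch: calling the Azure Face detect endpoint with the emotion
# attribute. ENDPOINT and KEY are placeholders for a real Azure Face resource.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-subscription-key>"                                   # placeholder

with open("selfie.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "emotion"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/octet-stream",
    },
    data=image_bytes,
)

for face in response.json():
    # Each detected face carries scores for anger, contempt, disgust, fear,
    # happiness, neutral, sadness, and surprise.
    print(face["faceAttributes"]["emotion"])
```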
The Computer Says I'm Sad, But I Don't Feel Sad...
Using CLM Tracker to assess my neutral and happy face. Apparently, sadness never leaves my face, but I don't feel sad. Which one is right?
Using CLM Tracker, I tried testing different emotions in real time. Getting the algorithm to detect anger was the hardest for me. It seems as if it picked up on a certain level of brow furrowing, which was impossible for me to do unless my head was tilted down. This became almost game-like, and the more accurately the algorithm detected an emotion, the less my facial expression resembled how I actually look when I feel it in real life.
CLM Tracker required the most exaggerated facial expressions and was therefore the most inaccurate at reading my real emotions.
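CLM Tracker works by fitting a set of facial landmarks and classifying emotions from their positions, which is likely why brow furrowing mattered so much for anger. As a purely illustrative sketch (the landmark indices, points, and normalization below are hypothetical, not CLM Tracker's actual model), this is the kind of geometric feature such a classifier can lean on:

```python
# Hypothetical sketch: a brow-furrow feature computed from 2D facial landmarks.
# The point indices and example coordinates are made up for illustration;
# CLM Tracker's real classifier uses its full set of tracked points.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def brow_furrow_score(landmarks, inner_brow_l, inner_brow_r, eye_l, eye_r):
    """Smaller values mean the eyebrows are pulled closer together (an anger cue).

    `landmarks` maps point indices to (x, y) coordinates from any face tracker.
    Dividing by the eye-corner span keeps the score independent of face size.
    """
    brow_gap = distance(landmarks[inner_brow_l], landmarks[inner_brow_r])
    eye_span = distance(landmarks[eye_l], landmarks[eye_r])
    return brow_gap / eye_span

# Made-up points: a furrowed brow produces a noticeably lower score.
relaxed  = {0: (40, 50), 1: (60, 50), 2: (20, 60), 3: (80, 60)}
furrowed = {0: (45, 52), 1: (55, 52), 2: (20, 60), 3: (80, 60)}
print(brow_furrow_score(relaxed, 0, 1, 2, 3))   # ~0.33
print(brow_furrow_score(furrowed, 0, 1, 2, 3))  # ~0.17
```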
Face++
How can we detect genuine emotions?
Actual sadness vs. faked sadness using Face++

My facial expression on the left most accurately represents my face when I am sad. My face on the right is an exaggerated version of a "sad" face, almost cartoon- and emoji-like, yet the algorithm picks up on this version the best and classifies the other face as neutral. Micro-expressions are hard to pick up on: the slight tension above my eyebrow, forming an almost-frown, goes undetected in the left image, but when exaggerated it is read on the right.
This brings up the question of reading and displaying genuine emotions as opposed to fake or exaggerated ones. What is the biggest indicator that a person is displaying sadness? The slight tension above the eyebrow, or the curvature of the lips turning downward?
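For reference, Face++ exposes its emotion estimates through a Detect endpoint that returns a confidence value for each emotion label, which is how the two faces above can be compared side by side. The sketch below follows Face++'s public v3 API as I understand it; the key, secret, and filename are placeholders.

```python
# Hedged sketch: requesting emotion attributes from the Face++ Detect API.
# API_KEY, API_SECRET, and the image filename are placeholders.
import requests

API_KEY = "<your-api-key>"        # placeholder
API_SECRET = "<your-api-secret>"  # placeholder

with open("sad_face.jpg", "rb") as f:
    response = requests.post(
        "https://api-us.faceplusplus.com/facepp/v3/detect",
        data={
            "api_key": API_KEY,
            "api_secret": API_SECRET,
            "return_attributes": "emotion",
        },
        files={"image_file": f},
    )

for face in response.json().get("faces", []):
    emotions = face["attributes"]["emotion"]
    # Confidence (0-100) per label: anger, disgust, fear, happiness,
    # neutral, sadness, surprise. Report the top-scoring label.
    top = max(emotions, key=emotions.get)
    print(top, emotions[top])
```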
“We have recently tested the most well-known emotion recognition algorithms, and found that happiness, sadness, and surprise were the easiest emotional facial expressions to detect, while fear, disgust, anger, and a neutral state were the most difficult for A.I. Complex cognitive states, hidden, mixed and fake emotions, among other things, would require analysis and understanding of the context, and this hasn’t yet been achieved.”
- Neurodata Lab’s George Pliev
While certain emotions may be easier to detect, that doesn't mean they are detected with 100% accuracy, and what are the accuracy rates across different races and genders? Questions about training samples and the need for diversity in training data are often left ignored by large companies.
Wider Implications
Research has shown that automated facial analysis tools can have high error disparities across different groups of people, yet people also trust these algorithms to be perfect. When these technologies are implemented into systems without the proper protocols to screen for biases and errors, they can have negative effects (as with Amazon's facial recognition software, used by police, falling short on accuracy tests, or Apple's facial recognition technology falsely linking an 18-year-old to theft).
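One concrete way to surface such disparities is to disaggregate the evaluation: instead of reporting a single accuracy number, compute error rates separately for each demographic group in a labeled test set. A tiny illustrative sketch (the groups, labels, and rows below are invented):

```python
# Hypothetical sketch: disaggregating accuracy by demographic group.
# The data below is invented for illustration; a real audit would use a
# labeled benchmark with many samples per group.
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "label":     ["happy", "sad", "neutral", "happy", "sad", "neutral"],
    "predicted": ["happy", "sad", "neutral", "happy", "happy", "happy"],
})

results["correct"] = results["label"] == results["predicted"]
per_group_accuracy = results.groupby("group")["correct"].mean()

print(per_group_accuracy)                                   # accuracy per group
print(per_group_accuracy.max() - per_group_accuracy.min())  # the gap a single average hides
```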
Taking these technologies further with the arrival of affective computing allows possible biases and errors to extend beyond our physical exteriors and deeper into our innermost thoughts. Can machines ever be better than humans at recognizing emotion? Can they know your emotions better than you can? These questions not only raise philosophical issues, but also speak to rising concerns about accuracy and bias in these algorithms.
With this new level of surveillance into our inner selves, we need to be careful about how these sensitive technologies get carried out in the future. Could software screen applicants out of a job simply by contrasting their facial patterns with the facial patterns of past successful applicants? Pymetrics is a company that uses AI to assess behavioral traits and data from existing employees in order to create an algorithm for assessing future applicants. What happens if companies like these integrate emotion recognition technologies into their tests? Moving forward, it is likely that many workplaces will turn the lens onto their own employees, monitoring their moods throughout the workday.