
Why ‘facial expression recognition’ AI is a total scam

A team of researchers at Jilin Engineering Normal University in China recently published a paper claiming they’d built an AI model capable of recognizing human facial expressions.

I’m going to save you some time here: they most certainly did not. Such a thing isn’t currently possible.

The ability to accurately recognize human emotions is what we here at Neural would refer to as a “deity-level” feat. The only people who truly know how you’re feeling at any given moment are you and any potential omniscient beings out there.

Up front:

The research is fundamentally flawed because it conflates facial expression with human emotion. You can falsify this premise with a simple experiment: assess your current emotional state, then force yourself to make a facial expression that’s diametrically opposed to it.

If you’re feeling happy and you’re able to “act” sad, you’ve personally debunked the whole premise of the research. But, just for fun, let’s keep going.

Background:

Don’t let the hype fool you. The researchers don’t train the AI to recognize expressions. They train the AI to beat a benchmark. There’s absolutely no conceptual difference between this system and one that tries to determine if an object is a hotdog or not.

What this means is the researchers built a machine that tries to guess labels. They’re basically showing their AI model 50,000 pictures, one at a time, and forcing it to choose from a set of labels.
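
To make that concrete, here’s a minimal sketch of what that kind of training loop amounts to. This is not the researchers’ code; the label set, tiny architecture, and random “images” below are illustrative placeholders for any standard supervised classifier.

```python
# A minimal sketch of supervised label-guessing -- NOT the paper's code.
# The label set, architecture, and data here are placeholders.
import torch
import torch.nn as nn

EMOTION_LABELS = ["happy", "sad", "angry", "surprised", "neutral"]

model = nn.Sequential(                    # stand-in for any CNN backbone
    nn.Flatten(),
    nn.Linear(48 * 48, 128),
    nn.ReLU(),
    nn.Linear(128, len(EMOTION_LABELS)),  # forced choice: one logit per label
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

def train_step(images: torch.Tensor, label_ids: torch.Tensor) -> float:
    # The model never perceives an emotion. It only adjusts its weights so
    # its guesses match whatever label an annotator attached to the pixels.
    logits = model(images)
    loss = loss_fn(logits, label_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One illustrative batch: 32 fake 48x48 grayscale "faces", arbitrary labels.
images = torch.rand(32, 1, 48, 48)
label_ids = torch.randint(0, len(EMOTION_LABELS), (32,))
print(train_step(images, label_ids))
```

Swap the label strings for “hotdog” and “not hotdog” and nothing about the machinery changes; that’s the point.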

The problem:

All of this seems useful because, when it comes to outcomes that don’t affect humans, prediction models are awesome.

When AI models try to predict something objective, such as whether a particular animal is a cat or a dog, they’re aiding human cognition.

You and I don’t have the time to go through every single image on the internet when we’re trying to find pictures of a cat. But Google’s search algorithms do.
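
For contrast, here’s a hedged sketch of that benign use case. The `classify` callable and the filenames are hypothetical stand-ins, not a real search pipeline.

```python
# A sketch of the benign case: applying an objective, verifiable label
# ("cat" vs. anything else) at a scale no human could match.
# `classify` is a placeholder for any trained image classifier.
from typing import Callable, Iterable, List

def find_cats(image_paths: Iterable[str],
              classify: Callable[[str], str]) -> List[str]:
    # A person can't review every image on the internet; a prediction
    # model can cheaply label each one and keep only the matches.
    return [path for path in image_paths if classify(path) == "cat"]

# Usage with a stub classifier, just to show the shape of the pipeline:
stub = lambda path: "cat" if "cat" in path else "dog"
print(find_cats(["cat_01.jpg", "dog_01.jpg", "cat_02.jpg"], stub))
```

The difference is that “cat” is a claim anyone can verify by looking at the picture. “Sad” is not.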

Neural take:

This is a total scam. The researchers present their work as useful for “fields like human–computer interactions, safe driving … and medicine,” but there’s absolutely no evidence to support their assertion.

The truth is that “computer interactions” have nothing to do with human emotion, safe driving algorithms are more efficacious when they focus on attention instead of emotionality, and there’s no place in medicine for weak, prediction-based assessments concerning individual conditions.

