
Don’t get too emotional about emotion-reading AI

Artificial intelligence tools now try to figure out whether people are happy, sad or disgusted. The reality may surprise and anger you.


Call it “artificial emotional intelligence” — the kind of artificial intelligence (AI) that can now detect the emotional state of a human user.

Or can it? More importantly, should it?

Most emotion AI is based on the “basic emotions” theory, which holds that people universally feel six internal emotional states (happiness, surprise, fear, disgust, anger and sadness) and may convey these states through facial expression, body language and vocal intonation.
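In practice, most of these systems boil down to a classifier that assigns a score to each of those six labels and reports the strongest one. The sketch below is a hypothetical illustration of that step only; the scores are placeholders, not output from any vendor’s model.

```python
# Hypothetical sketch: pick the strongest of the six "basic emotion" labels
# from per-label probabilities. The scores are made-up placeholders standing
# in for the output of a real facial/vocal classifier; no vendor API is implied.

BASIC_EMOTIONS = ["happiness", "surprise", "fear", "disgust", "anger", "sadness"]

def label_emotion(scores: dict[str, float]) -> tuple[str, float]:
    """Return the highest-scoring basic-emotion label and its probability."""
    label = max(BASIC_EMOTIONS, key=lambda e: scores.get(e, 0.0))
    return label, scores.get(label, 0.0)

# Example: invented probabilities for a single video frame.
frame_scores = {"happiness": 0.62, "surprise": 0.18, "fear": 0.03,
                "disgust": 0.02, "anger": 0.05, "sadness": 0.10}
print(label_emotion(frame_scores))  # ('happiness', 0.62)
```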

In the post-pandemic, remote-work world, salespeople are struggling to “read” the people they’re selling to over video calls. Wouldn’t it be nice if the software could convey the emotional reaction on the other end of the call?

Companies like Uniphore and Sybill are working on it. Uniphore’s “Q for Sales” application, for example, processes non-verbal cues and body language through video, and voice intonation and other data through audio, resulting in an “emotion scorecard.”
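Vendors don’t publish the internals, but conceptually a scorecard like this is a fusion step: per-emotion scores from the video and audio models are combined, for example as a weighted average. The weights and numbers below are assumptions made up for illustration, not Uniphore’s actual method.

```python
# Illustrative fusion of per-modality emotion scores into one "scorecard".
# Modalities, weights and values are assumptions for this sketch only.

MODALITY_WEIGHTS = {"video": 0.6, "audio": 0.4}  # assumed weights, not a vendor spec

def build_scorecard(per_modality: dict[str, dict[str, float]]) -> dict[str, float]:
    """Weighted average of each emotion's score across the available modalities."""
    emotions = {e for scores in per_modality.values() for e in scores}
    return {
        emotion: sum(
            MODALITY_WEIGHTS[modality] * scores.get(emotion, 0.0)
            for modality, scores in per_modality.items()
        )
        for emotion in emotions
    }

scorecard = build_scorecard({
    "video": {"happiness": 0.70, "anger": 0.05},
    "audio": {"happiness": 0.40, "anger": 0.20},
})
print(scorecard)  # e.g. {'happiness': 0.58, 'anger': 0.11}
```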

Making human connections through computers

Zoom itself is flirting with the idea. In April, Zoom introduced a trial of Zoom IQ for Sales, which generates transcripts of Zoom calls for meeting hosts, along with “sentiment analysis” performed not in real time but after the meeting. The criticism was harsh.
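Zoom hasn’t published how its post-meeting “sentiment analysis” works; the toy example below only illustrates the general idea of scoring a transcript after the call rather than in real time, using a tiny hand-made word list.

```python
# Toy post-meeting sentiment pass over a call transcript. The word lists and
# scoring are invented for illustration; this is not Zoom's actual pipeline.

POSITIVE = {"great", "excited", "love", "happy", "yes"}
NEGATIVE = {"concerned", "expensive", "no", "problem", "worried"}

def transcript_sentiment(utterances: list[str]) -> float:
    """Return a score in [-1, 1]: +1 if only positive words hit, -1 if only negative."""
    pos = neg = 0
    for line in utterances:
        for word in line.lower().split():
            word = word.strip(".,!?")
            pos += word in POSITIVE
            neg += word in NEGATIVE
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(transcript_sentiment([
    "I love the demo, really excited about the roadmap.",
    "A bit concerned about pricing, though.",
]))  # ~0.33 (2 positive hits, 1 negative)
```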

While some people love the idea of getting AI help with reading emotions, others hate the idea of having their emotional states judged and conveyed by machines.

The question of whether emotion-detecting AI tools should be used is an important one that many industries and the public at large need to grapple with.

Hiring could benefit from emotion AI, enabling interviewers to gauge truthfulness, sincerity and motivation. HR teams and hiring managers would love to rank candidates on their willingness to learn and their excitement about joining a company.

In government and law enforcement, calls for emotion-detection AI are also rising. Border patrol agents and Homeland Security officials want the technology to catch smugglers and imposters. Law enforcement sees emotion AI as a tool in police interrogations.

Emotion AI has applications in customer service, advertising assessment and even safe driving.

It’s only a matter of time before emotion AI shows up in everyday business applications, conveying to employees the feelings of others on calls and in business meetings, and offering ongoing mental health counselling at work.

Why emotion AI makes people upset

Unfortunately, the “science” of emotion detection is still something of a pseudoscience. The practical trouble with emotion-detection AI, sometimes called affective computing, is simple: people aren’t so easy to read. Is that smile the result of happiness or embarrassment? Does that frown come from a deep inner feeling, or is it made ironically or in jest?

Relying on AI to detect the emotional state of others can easily result in a false understanding. When applied to consequential tasks, like hiring or law enforcement, the AI can do more harm than good.

It’s also true that people routinely mask their emotional state, especially in business and sales meetings. AI can detect facial expressions, but not the thoughts and feelings behind them. Business people smile, nod and empathetically frown because it’s appropriate in social interactions, not because they are revealing their true feelings.

Conversely, people might dig deep, find their inner Meryl Streep and feign emotion to get the job or lie to Homeland Security. In other words, the knowledge that emotion AI is being applied creates a perverse incentive to game the technology.

That leads to the biggest quandary about emotion AI: is it ethical to use in business? Do people want their emotions to be read and judged by AI?

In general, people in, say, a sales meeting, want to control the emotions they convey. If I’m smiling and appear excited and tell you I’m happy and excited about a product, service or initiative, I want you to believe that — not bypass my intended communication and find out my real feelings without my permission. 

Salespeople should be able to read the emotions prospects are trying to convey, not the emotions they want kept private. As we get closer to a fuller understanding of how emotion AI works, it looks increasingly like a privacy matter.

People have the right to private emotions. And that’s why I think Microsoft is emerging as a leader in the ethical application of emotion AI.

How Microsoft gets it right

Microsoft developed some pretty advanced emotion-detection technologies but later retired them as part of a revamp of its AI ethics policies. Its main tool, called Azure Face, could also estimate gender, age and other attributes.

“Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalise across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” Natasha Crampton, Microsoft’s Chief Responsible AI Officer, wrote in a blog post.

Microsoft will continue to use emotion recognition technology in its accessibility app, Seeing AI, for visually impaired users. And I think this is the right choice, too. Using AI to assist visually impaired people, or, say, people with autism who may struggle to read the emotions and reactions of others, is a great use for this technology. And I think it has an important role to play in the coming era of augmented reality glasses.

Microsoft isn’t the only organization driving the ethics of emotion AI.

The AI Now Institute and the Brookings Institution advocate bans on many uses of emotion-detection AI. And more than 25 organisations demanded that Zoom end its plans to use emotion detection in the company's videoconferencing software.

Still, some software companies are moving forward with these tools — and they’re finding customers.

For the most part, and for now, the use of emotion AI tools may be misguided, but mostly harmless, as long as everyone involved truly consents. But as the technology gets better, and face-interpreting, body-language reading technology approaches mind-reading and lie detection, it could have serious implications for business, government, and society.

And, of course, there’s another elephant in the living room: the field of affective computing also seeks to develop conversational AI that can simulate human emotion. And while some emotion simulation is necessary for realism, too much can delude users into believing the AI is conscious or sentient. In fact, that belief is already taking hold at scale.

In general, all this is part of a new phase in the evolution of AI and our relationship to the technology. While we’re learning that it can solve myriad problems, we’re also finding out it can create new ones.

