When Will Artificial Intelligence Begin To Understand Human Emotions?

Would you trust a robot if it were your primary care physician? Emotionally intelligent machines may not be as far away as they seem.

Over the past few decades, artificial intelligence has become significantly better at reading people's emotional reactions.

But reading emotions doesn't mean understanding them. If AI itself cannot experience them, will it ever be able to fully understand us? And if not, do we risk attributing properties to robots that they do not have?

The latest generation of artificial intelligence owes its progress to the growth in the amount of data computers can learn from, as well as to increases in processing power. These machines are steadily getting better at tasks we once reserved exclusively for humans.

Today, artificial intelligence can, among other things, recognize faces, turn facial sketches into photographs, recognize speech, and play Go.

Identifying criminals

Not so long ago, scientists developed an artificial intelligence that can tell whether a person is a criminal just by looking at their facial features. The system was evaluated on a database of Chinese photographs, and the results were striking: the AI misclassified innocent people as criminals in only 6% of cases and successfully identified 83% of criminals, for an overall accuracy of nearly 90%.
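Those three numbers fit together as simple arithmetic: overall accuracy is a weighted average of the per-class accuracies. Here is a minimal sketch, assuming, purely for illustration, a balanced 50/50 mix of criminal and innocent photos, since the study's actual class split is not quoted here:

```python
def overall_accuracy(tpr: float, fpr: float, positive_share: float) -> float:
    """Blend the true-positive and true-negative rates by class share."""
    tnr = 1.0 - fpr  # share of innocent people correctly classified
    return positive_share * tpr + (1.0 - positive_share) * tnr

# Rates quoted in the article: 83% of criminals identified (TPR),
# 6% of innocent people misclassified as criminals (FPR).
acc = overall_accuracy(tpr=0.83, fpr=0.06, positive_share=0.5)
print(f"overall accuracy: {acc:.1%}")  # -> 88.5%, i.e. "nearly 90%"
```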

This system is based on an approach called "deep learning", which has already proven successful in, for example, facial recognition. Deep learning combined with a "face rotation model" allows artificial intelligence to determine whether two photographs show the same person's face, even when the lighting or angle changes.
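The verification step can be sketched generically: a network maps each face to an embedding vector, and two photos count as the same person when their embeddings are close. The sketch below uses random stand-in vectors rather than a trained network, and the 128-dimensional embeddings and 0.7 threshold are assumptions, not details from the study:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings, in the range [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.7) -> bool:
    # If the network has learned pose- and lighting-invariant features,
    # two photos of one face map to nearby vectors even when the angle
    # or illumination changes, so their similarity clears the threshold.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Stand-in embeddings; in practice these would come from a trained network.
rng = np.random.default_rng(0)
face_a = rng.normal(size=128)
face_a_new_angle = face_a + rng.normal(scale=0.1, size=128)  # slight change
face_b = rng.normal(size=128)

print(same_person(face_a, face_a_new_angle))  # True: same face, new angle
print(same_person(face_a, face_b))            # False: different people
```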

Deep learning builds a "neural network" loosely modeled on the human brain. It consists of hundreds of thousands of neurons organized into layers. Each layer transforms its input data, such as a face image, into a higher level of abstraction, such as a set of edges at specific orientations and locations, and it automatically extracts the traits that are most relevant to a particular task.
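As a concrete illustration of that layered structure, here is a minimal convolutional network in PyTorch. The architecture and all of the sizes are illustrative assumptions, not the network used in the study:

```python
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    """Illustrative stack of layers, each raising the level of abstraction."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers respond to low-level patterns such as edges
            # at particular orientations and locations.
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Deeper layers combine those edges into more abstract traits.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The final layer weighs whichever traits proved most relevant
        # to the task during training.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# A batch of one 64x64 grayscale face image (random stand-in data).
logits = TinyFaceNet()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```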

Given the success of deep learning, it is no surprise that artificial neural networks can tell criminals from innocent people, provided there really are facial features that differ between the two. The study identified three such features. One is the angle between the tip of the nose and the corners of the mouth, which is on average 19.6% smaller for criminals. The curvature of the upper lip is on average 23.4% greater for criminals, and the distance between the inner corners of the eyes is on average 5.6% narrower.

On the face of it, this analysis suggests that the outdated view that criminals can be identified by physical attributes is not entirely wrong. However, this is not the whole story. Notably, the two most relevant features involve the lips, our most expressive facial features. The photographs of criminals used in the study were required to show a neutral facial expression, but the AI still managed to find hidden emotions in them, perhaps ones too faint for people to detect.

It is hard to resist the temptation to look at the sample photos yourself; they appear in the paper, which is still under review. Close examination does show a slight smile in the photographs of the innocent. But the samples contain only a few photos, so it is impossible to draw conclusions about the entire database.

The power of affective computing

This is not the first time a computer has been able to recognize human emotions. The so-called field of "affective computing", or "emotional computing", has been around for a long time. The idea is that if we want to live and interact comfortably with robots, these machines must be able to understand human emotions and respond to them appropriately. The possibilities in this area are quite extensive.

For example, researchers have used facial analysis to identify students who were struggling with computer-based lessons. The AI was taught to recognize different levels of engagement and frustration, so the system could tell when students found a task too easy or too difficult. This technology could be useful for improving the learning experience on online platforms.
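The adaptive part can be sketched as a simple policy on top of the affect classifier. The three state labels and the step-by-one rule below are assumptions for illustration, not the researchers' actual system:

```python
def adjust_difficulty(state: str, difficulty: int) -> int:
    """Map a predicted affective state to the next lesson's difficulty."""
    if state == "frustrated":   # task too hard: step down a level
        return max(1, difficulty - 1)
    if state == "bored":        # task too easy: step up a level
        return difficulty + 1
    return difficulty           # "engaged": keep the current level

print(adjust_difficulty("frustrated", 3))  # -> 2
print(adjust_difficulty("bored", 3))       # -> 4
```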

Sony is trying to develop a robot that can form emotional bonds with people. It is not yet entirely clear how the company intends to achieve this or what exactly the robot will do, but Sony says it is trying to "integrate hardware and services to provide an emotionally compelling experience."

Emotional artificial intelligence has a number of potential applications, whether as a conversation partner or as a specialist: it could help identify a criminal or talk a patient through a treatment.

There are also ethical concerns and risks. Is it right to let a patient with dementia rely on an AI companion and tell them that it has an emotional life when it does not? Can you put a person behind bars because an AI says they are guilty? Of course not. Artificial intelligence would, in the first place, be not a judge but an investigator, flagging people as "suspicious" but certainly not as guilty.

Subjective things like emotions and feelings are difficult to explain to artificial intelligence, in part because AI does not have access to data good enough to analyze them objectively. Will AI ever understand sarcasm? The same sentence can be sarcastic in one context and entirely sincere in another.

Either way, the amount of data and processing power continues to grow. With a few exceptions, AI may well learn to recognize different types of emotions within the next few decades. But will it ever be able to experience them itself? That is a matter of debate.
