2023 Author: Adelina Croftoon | [email protected]. Last modified: 2023-08-25 08:32
Would you trust a robot if it were your primary care physician? Emotionally intelligent machines may not be as far off as they seem.
Over the past few decades, artificial intelligence has become significantly better at reading people's emotional reactions.
But reading emotions doesn't mean understanding them. If AI itself cannot experience them, will it ever be able to fully understand us? And if not, do we risk attributing properties to robots that they do not have?
The latest generation of artificial intelligence owes its success to the growth in the amount of data computers can learn from, as well as to the increase in processing power. These machines are gradually improving at tasks we used to reserve exclusively for people.
Today, artificial intelligence can, among other things, recognize faces, turn facial sketches into photographs, recognize speech, and play Go.
Not long ago, scientists developed an artificial intelligence that claims to tell whether a person is a criminal just by looking at their facial features. The system was evaluated on a database of Chinese photographs, and the results were striking: the AI misclassified innocent people as criminals in only 6% of cases and correctly identified 83% of criminals, for an overall accuracy of nearly 90%.
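As a rough illustration of how those three numbers relate, here is a toy calculation using a hypothetical balanced sample of 100 criminal and 100 innocent photos; the counts are invented for illustration, since the study's actual confusion matrix is not reproduced here:

```python
# Toy sketch: how true-positive rate, false-positive rate, and
# accuracy are computed from a confusion matrix (hypothetical counts).
def rates(tp, fn, fp, tn):
    tpr = tp / (tp + fn)                   # share of criminals correctly flagged
    fpr = fp / (fp + tn)                   # share of innocents wrongly flagged
    acc = (tp + tn) / (tp + fn + fp + tn)  # overall accuracy
    return tpr, fpr, acc

# Hypothetical balanced sample: 100 criminal and 100 innocent photos.
tpr, fpr, acc = rates(tp=83, fn=17, fp=6, tn=94)
print(f"TPR={tpr:.0%}  FPR={fpr:.0%}  accuracy={acc:.1%}")
# prints: TPR=83%  FPR=6%  accuracy=88.5%
```

Note that with these made-up counts the overall accuracy lands at 88.5%, consistent with the "nearly 90%" figure.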
This system is based on an approach called "deep learning" that has already proven successful in, for example, facial recognition. Deep learning combined with a "face rotation model" allowed the artificial intelligence to determine whether two photographs show the same person's face, even when the lighting or angle changes.
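The verification step can be sketched generically: a trained network maps each face photo to an embedding vector, and two photos are judged to show the same person when their embeddings are similar enough. The code below is a minimal illustration with made-up vectors and a hypothetical threshold, not the actual model from the paper:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_person(emb1, emb2, threshold=0.8):
    # Hypothetical threshold; real systems tune it on validation data.
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings (real ones would come from a trained network):
print(same_person([0.9, 0.1, 0.4], [0.85, 0.15, 0.38]))  # prints: True
```

The point of the learned embedding is that photos of the same face under different lighting or angles should still land close together in this vector space.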
Deep learning builds a "neural network" loosely modeled on the human brain. It consists of hundreds of thousands of simulated neurons organized into layers. Each layer transforms its input, such as a face image, into a higher level of abstraction, such as a set of edges at particular orientations and locations, and it automatically highlights the features most relevant to a given task.
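The layered structure described above can be sketched in a few lines of plain Python. This is a toy forward pass with random, untrained weights, purely to show how each layer maps its input to a new representation; a real network would have far more neurons and learned weights:

```python
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with ReLU activation."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        out.append(max(0.0, z))  # ReLU: keep positive signals, zero out the rest
    return out

# Tiny network: 4 inputs -> 3 hidden neurons -> 2 outputs.
# Weights are random here; training would adjust them so each layer
# picks out increasingly abstract features of the input.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b2 = [0.0] * 2

x = [0.2, 0.7, 0.1, 0.9]         # stand-in for pixel features
hidden = layer(x, w1, b1)        # first level of abstraction
output = layer(hidden, w2, b2)   # task-specific features
print(output)
```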
Given the success of deep learning, it is no surprise that artificial neural networks can distinguish criminals from innocent people - if there really are facial features that differ between the two groups. The study identified three such features. One is the angle between the tip of the nose and the corners of the mouth, which is on average 19.6% smaller for criminals. The curvature of the upper lip is on average 23.4% larger for criminals, and the distance between the inner corners of the eyes is on average 5.6% narrower.
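To make the geometry concrete, here is a toy sketch of how two of those features might be measured from facial landmark coordinates. The landmark positions and helper names are invented for illustration; real pipelines use automatic landmark detectors:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex`, formed by the rays to p1 and p2."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    deg = abs(math.degrees(a1 - a2))
    return min(deg, 360.0 - deg)

def distance(p, q):
    """Euclidean distance between two landmarks."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical landmark coordinates (x, y) in pixels:
nose_tip = (100, 120)
mouth_left, mouth_right = (80, 150), (120, 150)
eye_inner_left, eye_inner_right = (88, 100), (112, 100)

print(angle_at(nose_tip, mouth_left, mouth_right))   # nose-to-mouth-corner angle
print(distance(eye_inner_left, eye_inner_right))     # prints: 24.0
```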
On the face of it, this analysis suggests that the outdated view that criminals can be identified by physical attributes is not entirely wrong. But that is not the whole story. Remarkably, the two most relevant features involve the lips, our most expressive facial features. The photographs of criminals used in the study were required to show a neutral facial expression, yet the AI still managed to find hidden emotions in them - perhaps too subtle for people to detect.
It is hard to resist the temptation to look at the sample photos yourself; they appear in the paper, which is still undergoing review. Close examination does reveal a slight smile in the photographs of the innocent. But there are few photos in the sample, so it is impossible to draw conclusions about the entire database.
The power of affective computing
This is not the first time a computer has been able to recognize human emotions. The field known as "affective computing" or "emotional computing" has been around for a long time. The thinking goes that if we want to live and interact comfortably with robots, these machines must be able to understand and respond appropriately to human emotions. The possibilities in this area are quite extensive.
For example, researchers have used facial analysis to identify students who struggle with computer-based lessons. The AI was taught to recognize different levels of engagement and frustration, so the system can tell when students find a task too easy or too difficult. This technology could improve the learning experience on online platforms.
Sony is trying to develop a robot that can form emotional bonds with people. It is not yet entirely clear how the company plans to achieve this or what exactly the robot will do. However, Sony says it is trying to "integrate hardware and services to provide an emotionally compelling experience."
Emotional artificial intelligence would have a number of potential advantages, whether as a companion or an assistant: it could help identify a suspect or discuss a course of treatment.
There are also ethical concerns and risks. Would it be right to let a patient with dementia rely on an AI companion, believing it has an emotional life when it does not? And can you put a person behind bars because an AI says they are guilty? Of course not. Artificial intelligence would not be a judge in the first place, but an investigator - flagging "suspicious" people, not guilty ones.
Subjective things like emotions and feelings are difficult to teach to artificial intelligence, partly because AI does not have access to good enough data to analyze them objectively. Will AI ever understand sarcasm? One sentence can be sarcastic in one context and mean something completely different in another.
Either way, the amount of data and processing power continues to grow. With a few exceptions, AI may well learn to recognize different types of emotions in the next few decades. But will it ever be able to experience them itself? That is a moot point.