When will artificial intelligence begin to understand human emotions?

Would you trust a robot if it were your attending physician? Emotionally intelligent machines may not be as far away as they seem. Over the past few decades, artificial intelligence has become significantly better at reading people's emotional reactions.

But reading emotions does not mean understanding them. If AI cannot experience emotions itself, will it ever be able to fully understand us? And if not, do we risk attributing to robots properties they do not have?

The latest generation of artificial intelligence owes its existence to the growth in the amount of data that computers can learn from, as well as to increased computing power. These machines are gradually getting better at tasks we used to reserve exclusively for humans.

Today, artificial intelligence can, among other things, recognize faces, turn face sketches into photographs, recognize speech, and play Go.

Identification of criminals

Not so long ago, scientists developed an artificial intelligence that claims to tell whether a person is a criminal simply by looking at his facial features. The system was evaluated on a database of Chinese photographs, and the results were startling. The AI mistakenly classified innocent people as criminals in only 6% of cases and successfully identified 83% of criminals. Overall accuracy was almost 90%.

The system is based on an approach called "deep learning," which has been successful in face recognition, for example. Deep learning combined with a "face rotation model" allowed the AI to determine whether two photographs show the same person's face, even when the lighting or angle changes.

Deep learning builds a "neural network," loosely modeled on the human brain. It consists of hundreds of thousands of neurons organized into layers. Each layer transforms the input data, for example a face image, into a higher level of abstraction, such as a set of edges at particular orientations and locations. In doing so, it automatically highlights the features most relevant to the task at hand.
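The layered idea described above can be sketched in a few lines of plain Python. This toy forward pass uses made-up random weights and is only meant to show how each layer turns its input into a new, smaller set of features; it is nothing like a production face-recognition network:

```python
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through a ReLU."""
    outputs = []
    for w_row, b in zip(weights, biases):
        s = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, s))  # ReLU non-linearity
    return outputs

# Toy "image": four pixel intensities.
pixels = [0.2, 0.8, 0.5, 0.1]

# Randomly initialized weights for two layers: 4 -> 3 -> 2.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b2 = [0.0] * 2

hidden = layer(pixels, w1, b1)    # first level of abstraction (e.g. "edges")
features = layer(hidden, w2, b2)  # higher-level combinations of those edges

print(len(hidden), len(features))  # 3 2
```

A real deep network simply stacks many such layers and, crucially, learns the weights from data instead of drawing them at random.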

Given the success of deep learning, it is not surprising that artificial neural networks can distinguish criminals from non-criminals, if there really are facial features that differ between them. The study highlighted three such features. One is the angle between the tip of the nose and the corners of the mouth, which is on average 19.6% smaller for criminals. The curvature of the upper lip is on average 23.4% greater for criminals, and the distance between the inner corners of the eyes is on average 5.6% narrower.

At first glance, this analysis seems to suggest that the outdated view that criminals can be identified by physical attributes is not so wrong after all. However, that is not the whole story. Remarkably, the two most relevant features involve the lips, our most expressive facial features. The photographs of criminals used in the study were required to have a neutral facial expression, yet the AI still managed to find hidden emotions in them, perhaps so subtle that people cannot detect them.

It is hard to resist the temptation to examine the sample photographs yourself. The paper is still undergoing review. Careful examination does indeed reveal a slight smile in the photographs of the innocent. But there are only a few photographs in the samples, so no conclusions can be drawn about the entire database.

The power of affective computing

This is not the first time a computer has been able to recognize human emotions. The field of so-called "affective computing," or "emotional computing," has existed for a long time. The idea is that if we are to live and interact comfortably with robots, these machines must be able to understand and respond appropriately to human emotions. The possibilities of this field are quite extensive.

For example, researchers have used face analysis to identify students struggling with computer-based lessons. The AI was trained to recognize different levels of engagement and frustration, so the system could tell when students found the work too easy or too hard. This technology could help improve learning on online platforms.
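The feedback loop this describes, a classifier's engagement and frustration scores driving lesson difficulty, can be sketched as follows. The function name and thresholds here are illustrative assumptions, not taken from the study:

```python
def adjust_difficulty(engagement: float, frustration: float) -> str:
    """Map face-analysis scores (0.0-1.0) to a lesson adjustment.

    NOTE: the thresholds below are invented for illustration only.
    """
    if frustration > 0.7:
        return "easier"   # student is struggling: simplify the material
    if engagement < 0.3 and frustration < 0.3:
        return "harder"   # bored and unchallenged: raise the difficulty
    return "keep"         # current level seems appropriate

print(adjust_difficulty(0.2, 0.1))  # -> harder
```

In a real system, the two scores would come from a trained model watching the student's face; the point is that even a crude mapping like this lets the lesson react to emotional state.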

Sony, for its part, is trying to develop a robot able to form emotional bonds with people. It is not yet clear how the company intends to achieve this or what exactly the robot will do. However, Sony says it is trying to "integrate hardware and services to provide an emotionally comparable experience."

Emotional artificial intelligence would have a number of potential advantages: whether in the role of companion or assistant, it could help identify criminals or discuss treatment with patients.

There are also ethical issues and risks. Would it be right to let a patient with dementia rely on an AI companion and to tell them it is emotionally alive when it really is not? Could you put a person in jail because an AI says he is guilty? Of course not. Artificial intelligence would, above all, be not a judge but an investigator, one that identifies "suspicious" people, but certainly not guilty ones.

Subjective things like emotions and feelings are difficult to explain to artificial intelligence, partly because AI does not have access to good enough data to analyze them objectively. Will AI ever understand sarcasm? A sentence can be sarcastic in one context and mean something completely different in another.

In any case, the amount of data and computing power continue to grow. With a few exceptions, AI may well learn to recognize different types of emotions within the next few decades. But will it ever be able to experience them itself? That is a moot point.