The development of artificial intelligence in the 21st century is advancing by leaps and bounds, and one of its main achievements is the ability to recognize human emotions. In its annual report on AI development, the AI Now Institute, an interdisciplinary research center studying the social effects of AI, called for a ban on the use of this technology in certain cases. According to its experts, AI-based emotion recognition should not be used in decisions that affect the lives of individuals and society as a whole. Why, then, could the ability of machines to distinguish emotions significantly change everyday human life?
Can a robot feel empathy?
Computer vision algorithms capable of detecting certain emotions have existed for at least a couple of decades. The technology is based on machine learning: algorithms that process data in order to make better decisions. Despite all the successes of modern robotics, reproducing this truly human skill remains a serious challenge. Microsoft experts note that recognizing people's emotions with computers has the potential to enable a whole new generation of applications; however, because emotions are hard to pin down, AI long produced erroneous results. Nevertheless, recent studies show that the technology is already helping recruitment agencies assess the potential productivity of a future employee as early as the interview stage. Video recordings of interviews are already being analyzed with the latest tools, allowing managers to form a better picture of the emotional state of their subordinates.
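The pipeline described above can be sketched in miniature. This is purely illustrative, not any vendor's actual system: in reality, computer vision extracts facial landmarks from images, and a trained model classifies them. Here, hypothetical three-number "feature vectors" (say, mouth curvature, eyebrow angle, eye openness) stand in for that pipeline, and a simple nearest-centroid rule stands in for a trained classifier.

```python
# Illustrative sketch of ML-based emotion classification.
# All feature values and labels below are made up for demonstration.

import math

# Synthetic labeled training examples: emotion -> feature vectors.
TRAINING_DATA = {
    "happy":   [(0.9, 0.1, 0.8), (0.8, 0.2, 0.7)],
    "sad":     [(0.1, 0.7, 0.3), (0.2, 0.8, 0.2)],
    "neutral": [(0.5, 0.5, 0.5), (0.4, 0.5, 0.6)],
}

def centroid(vectors):
    """Average the feature vectors for one emotion class."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(3))

# "Training" here is just computing one centroid per emotion.
CENTROIDS = {label: centroid(vs) for label, vs in TRAINING_DATA.items()}

def classify(features):
    """Assign the emotion whose centroid is nearest in feature space."""
    return min(
        CENTROIDS,
        key=lambda label: math.dist(features, CENTROIDS[label]),
    )

print(classify((0.85, 0.15, 0.75)))  # a smiling-face-like vector -> "happy"
```

Real systems replace the hand-picked features with learned ones and the centroid rule with deep neural networks, but the principle is the same: map facial measurements into a feature space and pick the closest learned pattern.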
Effortless, continuous monitoring by AI raises many issues that go beyond ethics. There are currently a large number of concerns about the confidentiality of personal information that could provoke a negative public reaction. Given these considerations, the use of artificial intelligence in everyday life, although it can assist with hiring or criminal-sentencing procedures, can also be unreliable. If such systems learn human biases, the benefits of AI in interviews or in sentencing convicts may be nullified.
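How a system "learns bias" can be shown with a toy example. This is a deliberately simplified sketch with hypothetical data: a model trained to imitate historical decisions will faithfully reproduce any skew those decisions contained.

```python
# Illustrative sketch: a toy "model" that learns the majority decision per
# group from historical records. If past decisions were skewed against one
# group, the learned policy inherits that skew. All data is hypothetical.

from collections import Counter

# Hypothetical historical hiring decisions: (applicant group, decision).
historical_decisions = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "hire"),
]

def train(records):
    """Learn the most common historical decision for each group."""
    by_group = {}
    for group, decision in records:
        by_group.setdefault(group, Counter())[decision] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(historical_decisions)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'} -- the skew becomes policy
```

Real models are vastly more complex, but the failure mode is the same: optimizing for agreement with biased historical data bakes the bias into future automated decisions.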
Ultimately, technology developers and society as a whole should carefully monitor how the output of artificial-intelligence systems enters decision-making processes. Like any other form of intelligence, artificial systems can produce incorrect results that negatively affect an individual or society at large. Moreover, modern AI still has significant technical difficulty reading certain states, such as a person's level of self-confidence. If people inclined to trust artificial intelligence make decisions based on its erroneous judgments, society could face very serious problems, which can only be prevented by designing fairness, transparency, and ethics into these systems. In other words, before smart systems learn to recognize emotions correctly, scientists will first have to work hard to create moral standards for artificial intelligence.