It’s no secret that computers are slowly but surely learning to read emotions. Imagine that your everyday activities are monitored by artificial intelligence. Would such a future violate personal boundaries and strip away privacy? At the same time, it is entirely possible that the development of AI will usher in a golden age of compassionate, intelligent and useful machines. Either way, progress cannot be stopped, which is why many scientists are convinced that each of us needs to understand how artificial intelligence is created, trained and developed.
Poppy Crum, Principal Research Fellow at Dolby Labs and a professor at Stanford University, uses state-of-the-art sensors and AI to capture emotional signals. In her work, Crum draws on a variety of technologies: temperature sensors that track blood flow, monitors that measure breathing rate, and cameras that register facial micro-expressions. If that sounds a little intimidating, it probably should. Nevertheless, a huge number of people around the world already unlock their devices with face recognition technology, and it scares almost no one. Perhaps the reason is that we increasingly trust algorithms that work in ways we cannot predict. Even the scientists who create these algorithms sometimes shy away from responsibility, treating artificial intelligence as something beyond their control.
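To make the idea of combining such sensor channels concrete, here is a minimal toy sketch of multimodal fusion. Everything in it is hypothetical: the weights, thresholds and field names are illustrative assumptions, not values from Crum's actual research.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    skin_temp_delta: float   # deviation from baseline skin temperature, in degrees C
    breaths_per_min: float   # breathing rate from a chest-band or camera monitor
    expression_score: float  # 0..1 intensity of a detected facial micro-expression

def arousal_estimate(frame: SensorFrame) -> float:
    """Fuse the three channels into a single 0..1 arousal score
    using fixed illustrative weights (not calibrated values)."""
    temp = min(max(frame.skin_temp_delta / 2.0, 0.0), 1.0)          # a 2 C rise saturates
    breath = min(max((frame.breaths_per_min - 12) / 18, 0.0), 1.0)  # 12-30 bpm mapped to 0..1
    return 0.3 * temp + 0.3 * breath + 0.4 * frame.expression_score

calm = SensorFrame(skin_temp_delta=0.0, breaths_per_min=12, expression_score=0.1)
stressed = SensorFrame(skin_temp_delta=1.0, breaths_per_min=24, expression_score=0.8)
print(arousal_estimate(calm))      # low score
print(arousal_estimate(stressed))  # noticeably higher score
```

A real system would replace the hand-picked weights with a trained model, but the structure is the same: several physiological signals are normalized and collapsed into one emotional estimate.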
What changes do you think AI’s ability to recognize emotions will bring? You can discuss the technological future of humanity with the participants of our Telegram chat.
How should we respond to innovations in the field of AI?
AI can already detect diseases, answer queries, make business forecasts, and even offer advice on how to deal with climate change. But what is called “emotional artificial intelligence” may arrive much sooner than we think, because the development of emotion-recognizing AI is currently aimed at helping a wide variety of companies deliver services, including personal ones. For example, if a client is unhappy with the service staff, the system can detect this and report the dissatisfaction to employees. But what about AI recognizing your emotions through your laptop camera? Data about your emotional state can be passed to advertisers, and as a result you will receive targeted advertising. For companies this is certainly a big plus, but what about users? You may be surprised, but the results of a survey published by the research firm Gartner suggest that most users do not object to AI recognizing their emotions. Meanwhile, some argue that Theodore Kaczynski, the American mathematician and terrorist better known as the Unabomber, turned out to be right when he warned people about the dangers of technological progress.
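The customer-service scenario above can be sketched as a tiny rule-based filter. This is a hypothetical illustration only: real products use trained classifiers rather than hard-coded thresholds, and the emotion labels and cutoff here are assumptions.

```python
# Hypothetical set of emotion labels a recognizer might emit per video frame.
NEGATIVE = {"anger", "disgust", "sadness"}

def flag_dissatisfaction(emotion_log: list[str], threshold: float = 0.5) -> bool:
    """Report dissatisfaction to staff when negative emotions
    dominate a customer session (share >= threshold)."""
    if not emotion_log:
        return False
    negative_share = sum(label in NEGATIVE for label in emotion_log) / len(emotion_log)
    return negative_share >= threshold

# A session where 3 of 5 readings are negative crosses the 0.5 threshold.
session = ["neutral", "anger", "anger", "sadness", "neutral"]
print(flag_dissatisfaction(session))  # True
```

The privacy question raised in the text lives exactly here: the same `emotion_log` that alerts staff could just as easily be forwarded to an advertiser.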
Jaron Lanier, a Microsoft researcher and the author of books such as “You Are Not a Gadget” and “Who Owns the Future?”, believes that people today are so delighted with the advantages of technology that they overlook its possible shortcomings. According to Wired, Lanier points out how readily many people have welcomed voice assistants into their homes and families without considering their impact on children or their access to personal data. In Lanier’s view, the main mistake many of us make today is thinking of machines as living organisms. Of course, there is a flip side: the same technologies can be put to genuinely useful ends. AI algorithms and voice assistants that can recognize emotions, for example, could bring significant benefits to healthcare.
However, do not forget that AI is programmed by people, and people, as we know, tend to make mistakes. This opens the door to incorrect data, which can distort AI decisions. Moreover, surveillance of users and leaks of personal data are already a real problem today. Many experts are deeply concerned that in the future it will be impossible to opt out of technologies that have gained such access to users’ lives.