
What dangers of neural networks do we underestimate?

Have you ever met someone on the street who looks exactly like you? Clothing, face, gait, manner of speaking, habits completely identical to yours. As if you had been scanned and printed out on a printer. Sounds a little creepy, right? Now imagine you saw a video in which such a person talks about himself. At best, you would try to remember when you could have said such things to a camera, and fail. So far this all sounds like idle speculation, but technology has come very close to creating such people. They already exist, and soon there will be many more.

However we imagine the face of neural networks, there will definitely be dangers in them.


  • 1 Where does the fake come from?
  • 2 What is deep learning?
  • 3 What is Deepfake? When did Deepfake appear?
  • 4 The danger of Deepfake. How to replace a face in a video?
  • 5 How to identify Deepfake?
  • 6 Scary Future Scenario

Where does the fake come from?

There is already far too much around us that we have come to call fake. Fakes are everywhere: in photographs, in the news, in manufactured goods and in information services. It is easier to say where the word does not apply. For now you can still fight them: study a photo's origin, check the features that distinguish a branded product from a counterfeit, verify the news. Although news is a separate topic.

Nowadays the content consumer does not want to wait and demands instant production from its creator; sometimes he does not even care about quality, as long as it is fast. This is where situations arise in which someone says something and everyone else, without checking, drags it through their websites and newspapers. In some cases it takes a long time to roll that ball back and prove it was all untrue.

There is no point explaining why all this is done. On the one hand there are those who simply want to laugh at a situation, and on the other those who genuinely did not know they were wrong. A separate place, roughly in the middle, is occupied by those who simply profit from it. These may be interests of influence at various levels, including political ones. Sometimes the goal is profit, for example sowing panic on the stock market and making lucrative trades in securities. But often it is driven by hostility toward a person (company, product, and so on) in order to belittle them. A simple example is dragging down the rating of an objectionable film or establishment. Of course, this requires an army of people (sometimes even bots) to go and leave dislikes, but that is a different story.

What is deep learning?

Recently this term comes up more and more often. Sometimes it is not even relevant and is confused with something else, just to make a software product look more impressive.

Deep learning is a set of machine learning methods (supervised, semi-supervised, unsupervised, and reinforcement learning) based on learning representations rather than on specialized algorithms for specific tasks.

Do not think that the concepts and basic principles of machine learning appeared only a few years ago. In fact, they are so old that many of us had not even been born yet. The basic principles of deep learning systems and the mathematical models behind them were known back in the 1980s.

At the time they made little sense because one important component was missing: high computing power. Only in the mid-2000s did systems appear that could support work in this direction and compute everything required. Since then machines have advanced even further, and some systems for machine vision, speech recognition and other tasks work so well that they sometimes surpass human abilities. Still, they have not yet been "planted" in critical areas; they remain an addition to human capabilities, with humans keeping control over them.

We teach them, but how will they use their knowledge and capabilities?

What is deepfake? When did Deepfake appear?

It is easy to guess that Deepfake is a small pun combining Deep Learning with the very fakes I talked about above. That is, Deepfake takes the fake to a new level and relieves a person of this difficult work, allowing fake content to be created with almost no effort.

First of all, such fakes concern video. Any person can sit in front of a camera, say something, and his face will be replaced with that of another person. It looks frightening because, in essence, the system only needs to capture a person's basic movements, and the fake becomes practically impossible to distinguish. Let's see how it all started.

The first generative adversarial network was created in 2014 by Ian Goodfellow, then a graduate student. In essence, he pitted two neural networks against each other: one generated people's faces, and the other analyzed them and judged whether they looked real or not. They taught each other this way, until one fine day the second network began to get confused and mistake generated images for real ones. It is this constantly escalating contest that gave rise to Deepfake.
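The adversarial game described above can be sketched in a few dozen lines. Below is a deliberately toy, pure-Python version, assuming a one-dimensional setup of my own invention rather than Goodfellow's original deep networks on images: the "generator" is a linear map G(z) = a·z + b, the "discriminator" is logistic regression, and the "real faces" are just numbers drawn from a Gaussian. The point is only to show the two alternating gradient steps.

```python
# Toy GAN: generator learns to mimic a Gaussian around 4.0.
# All hyperparameters here are illustrative assumptions.
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real" distribution
a, b = 1.0, 0.0                  # generator parameters: G(z) = a*z + b
w, c = 0.0, 0.0                  # discriminator parameters: D(x) = sigmoid(w*x + c)
LR, STEPS, BATCH = 0.05, 2000, 32

def sigmoid(x):
    # Clamp to avoid overflow in exp().
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

def fake_batch():
    return [a * random.random() + b for _ in range(BATCH)]

initial_mean = sum(fake_batch()) / BATCH

for _ in range(STEPS):
    reals = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(BATCH)]
    zs = [random.random() for _ in range(BATCH)]
    fakes = [a * z + b for z in zs]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for x in reals:
        d = sigmoid(w * x + c)
        gw += (1 - d) * x
        gc += (1 - d)
    for x in fakes:
        d = sigmoid(w * x + c)
        gw -= d * x
        gc -= d
    w += LR * gw / BATCH
    c += LR * gc / BATCH

    # Generator step: push D(G(z)) toward 1 (non-saturating loss).
    ga = gb = 0.0
    for z in zs:
        x = a * z + b
        d = sigmoid(w * x + c)
        ga += (1 - d) * w * z
        gb += (1 - d) * w
    a += LR * ga / BATCH
    b += LR * gb / BATCH

final_mean = sum(fake_batch()) / BATCH
print(f"fake mean before: {initial_mean:.2f}, after: {final_mean:.2f}")
```

After training, the generator's outputs have drifted from around 0.5 toward the "real" mean of 4.0: the discriminator's feedback is the only signal the generator ever sees, which is exactly the mutual-teaching loop described in the paragraph above.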

Today one of the main promoters of the Deepfake idea is Hao Li. He works not only on this but on many other things as well, for which he has repeatedly received various awards, including unofficial ones. By the way, he is one of the people to thank for the Animoji on the iPhone X. If you are interested, you can learn more about him on his website. But today that is not the main topic of discussion.

We remembered him only because at the World Economic Forum in Davos he showed an application that replaces the face of a person sitting in front of the camera with any other face. In particular, he demonstrated how the system works using the faces of Leonardo DiCaprio, Will Smith and other famous people.

With such technologies the main thing is to scan a face, and the rest is a matter of technique. In the truest sense of the word.

You can also imagine the opposite situation: a real person says something and then assures everyone that he was framed. What to do in that situation is equally unclear. It would create such a mess in the news feeds that double-checking against another source simply would not work. As a result it would become completely unclear what in this world is true and what is false. A picture emerges straight out of films about a dark future, such as Surrogates or Terminator, where the T-1000 impersonated other people and, among other things, called John Connor on behalf of his foster mother.

And I am not even talking about another kind of abuse: fabricating false evidence. Against this background, all the fun of the toy looks far too dubious.

How to identify Deepfake?

The problem is not that such systems need to be banned, but that banning them is no longer possible. They already exist, and the development of technology, including face reading, has led both to their appearance and to the spread of open-source code. Even if you imagine that the system in its current form ceased to exist, you must understand that it would simply be created again. Someone would once more set neural networks to teach each other, and that would be that.


For now things are not so scary, and you can spot a fake literally with the naked eye. The picture is similar but rather rough, and it sometimes has alignment problems, especially along the borders of the face. But nothing stands still, and refining it further is not at all difficult. Hao Li himself is sure this will take no more than a few months, while creating "masks" that even a computer cannot distinguish will take several more years. After that there will be no turning back.

On the other hand, the algorithms that YouTube and Facebook are already building can offer protection against this. By the way, the latter has even opened a competition for developing recognition technology, the Deepfake Detection Challenge, with a prize fund of $10 million. The competition is already underway and ends in March 2020. You can still make it in time to participate.

Replacing a face in a video is no longer a problem.

Perhaps this generosity stems from a fake video of Mark Zuckerberg himself. If the two things are connected, the appearance of such a competition is not surprising.

If the replaced face fully matches the original, the counter-force in the form of a special neural network will be powerless. In that case it will have to catch minimal differences in facial expressions, movements and manner of speaking. For famous people this problem can be solved at the video-service level, since YouTube, say, knows how the conditional Donald Trump moves. With a lesser-known person it will be harder. Though even that could be proved by sitting him in front of a camera for a casual conversation while a neural network analyzes his movements. It would be something like studying a fingerprint, but, as we can see, it again leads to unnecessary difficulties.

If you embed a video-authentication system into the cameras themselves, it too can be bypassed. You could make the camera mark the footage it shoots, making it clear the video was not captured through a third-party application or processed in a special program. But what then of videos that were legitimately processed, for example an edited interview? At the output we get a video that no longer carries the original source key.
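The "source key" idea from the paragraph above can be made concrete with a short sketch. Here an HMAC over the raw bytes stands in for whatever signature scheme a real camera vendor might burn into hardware; the key, the function names and the byte strings are all illustrative assumptions. The point it demonstrates is exactly the problem described: any processing of the bytes, even a legitimate edit, destroys the mark.

```python
# Sketch: the camera signs its raw footage; any later edit breaks the mark.
import hashlib
import hmac

CAMERA_KEY = b"secret-key-burned-into-the-camera"  # hypothetical

def sign_footage(video_bytes: bytes) -> str:
    """Mark the footage at the moment it is recorded."""
    return hmac.new(CAMERA_KEY, video_bytes, hashlib.sha256).hexdigest()

def is_authentic(video_bytes: bytes, tag: str) -> bool:
    """Check that the bytes are exactly what the camera produced."""
    return hmac.compare_digest(sign_footage(video_bytes), tag)

original = b"\x00\x01RAW-FRAMES..."        # stand-in for real video frames
tag = sign_footage(original)

edited = original.replace(b"RAW", b"CUT")  # e.g. an edited interview
print(is_authentic(original, tag))   # True: untouched footage
print(is_authentic(edited, tag))     # False: the source key is gone
```

Note that the scheme cannot tell a malicious face swap from an honest cut: both are just "bytes that differ from what the camera signed", which is why the article calls this approach easy to bypass in practice.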

A few memes at the end.

Scary Future Scenario

Can we say that we have just sketched one of the scenarios of a bleak future? On the whole, yes. If technologies created for good purposes get out of control, they can bring plenty of grief. There are actually many such dangerous technologies, but most of them are protected, nuclear fusion for example. Here, though, we are dealing with code that anyone can get.

Write in the comments what protection against fakes you see, given that the masking system can make masks completely identical to the original faces, and that, because they appear in video, depth and volume recognition cannot even be applied to them. Also assume that any code or key embedded in the image can be cracked, given sufficient motivation. Now we can discuss all these givens.
