
How can criminals use artificial intelligence? The most dangerous option

For the past 10 years, we have heard news almost daily about how this or that artificial intelligence has learned new skills. Today, computer algorithms can copy the drawing styles of famous artists, imitate other people's voices, create fake videos of public figures, and much more. All of this is fascinating to follow, but many computer security experts worry that the emerging technologies could be used by attackers to commit crimes. After all, the use of neural networks for any given task is currently not regulated in any way. It turns out that anyone with a good command of programming can use artificial intelligence to con money out of people, spy on them, and commit other illegal acts. Recently, researchers at University College London decided to find out which artificial intelligence skill could cause the greatest harm to human society.

Artificial intelligence can do a lot. For example, it can replace Arnold Schwarzenegger's face with Sylvester Stallone's

Artificial intelligence is the ability of computers to perform creative functions that are usually considered characteristic of real people. Our website has a special section dedicated to this topic.

Artificial intelligence capabilities

The results of the work were reported by The Next Web. As part of the study, the researchers identified 20 artificial intelligence skills that could become available to criminals within the next 15 years. Unfortunately, the complete list could not be found even on the college's website, but the researchers did highlight the most important points.

So, in their opinion, artificial intelligence is able to:

  • drive electric cars, a capability better known as autopilot;
  • fraudulently obtain logins, passwords and other personal data of users, in other words, engage in phishing;
  • collect compromising photos, videos and other data about people that can be used for blackmail;
  • generate fake news that can shape the thinking of a large number of people;
  • control robots that can be used to follow people and even burgle homes;
  • create fake videos of famous people in order to ruin their reputation.

These and many other uses of neural networks for criminal purposes were studied by a group of 31 artificial intelligence experts. They were tasked with ranking all of these skills by the level of danger they pose to society. When compiling the rating, the experts took into account how difficult it is to use artificial intelligence for a given task, how hard the resulting fraud is to prevent, and how much money criminals could make from it.

See also: Why do we believe fake news?

The danger of neural networks

The experts named the creation of so-called deepfakes as the most dangerous ability of artificial intelligence. This skill of computer algorithms first became widely known around 2017, when a Reddit user showed the world how, with a powerful enough computer, one can create a fake video in which one person's face is replaced with another's. He demonstrated this with the most obscene example possible, inserting the faces of famous people into adult videos. The news caused a huge stir, and deepfakes of that kind were even banned. At the moment, however, nothing prevents programmers from creating harmless funny videos, like the one in which Elon Musk sings a song about the "roar of the cosmodrome."
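For readers curious about the mechanics, early face-swap deepfakes were typically built around a single shared encoder paired with one decoder per identity. The sketch below is a heavily simplified, untrained illustration of that idea; all layer sizes, class names and the 64x64 input are placeholders of my own, not the actual tools behind the videos described above.

```python
# Illustrative sketch only: the shared-encoder / two-decoder idea behind
# early face swaps. Architecture and sizes are assumptions, not a real tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # compact "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

face_of_a = torch.rand(1, 3, 64, 64)         # stand-in for an aligned face crop of person A
swapped = decoder_b(encoder(face_of_a))      # after training: A's expression, B's face
print(swapped.shape)                         # torch.Size([1, 3, 64, 64])
```

In practice such a model first has to be trained on thousands of face images of both people before a swap looks convincing; the point here is only to show why one encoder and two decoders allow an expression captured from person A to be rendered with person B's face.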

Sometimes a fake video is really hard to tell from the original

Deepfakes can also be used to ruin the reputations of famous people, and this has been proven many times. For several years in a row, videos of a supposedly drunk Nancy Pelosi, the Speaker of the US House of Representatives, have been circulating on the Internet. These videos are fake, of course, but they rack up many views, and some people may well mistake them for the real thing. After all, many people do not even suspect that the technology for creating fake videos exists, especially among the older generation, where technologically savvy people are few.

Earlier, my colleague Artem Sutyagin wrote about the dangers of neural networks. It turned out to be a very detailed piece, so I strongly recommend reading it!

The second danger is that this method of fraud is difficult to stop. The fact is that fake videos are so realistic that they are almost impossible to recognize. This was demonstrated in June 2020 by Facebook's Deepfake Detection Challenge, in which more than 2,000 developers took part, creating algorithms for recognizing fake videos. The highest recognition accuracy achieved was 65%, which, according to cybersecurity experts, is a very poor result for such a serious matter.
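To give a sense of what the challenge participants were building, below is a minimal sketch of frame-level deepfake detection: a tiny, untrained classifier that assigns each video frame a "fake" probability and averages the scores over a clip. Every name, layer size and input shape here is my own illustrative assumption, not any competitor's actual model.

```python
# Illustrative sketch only: score each frame as real/fake, then average.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # one 32-dim vector per frame
        )
        self.head = nn.Linear(32, 1)          # single logit: "how fake is this frame?"

    def forward(self, x):                     # x: (batch, 3, H, W) RGB frames
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = FrameClassifier()                     # would need training on labelled videos
frames = torch.rand(8, 3, 224, 224)           # stand-in for 8 decoded video frames
video_score = model(frames).mean().item()     # simple aggregation over the clip
print(f"estimated probability the clip is a deepfake: {video_score:.2f}")
```

For context, on a balanced two-class real-versus-fake task, random guessing already gives about 50% accuracy, which helps explain why cybersecurity experts consider the challenge's best result of 65% so weak.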

The section of Reddit where deepfakes were posted has already been banned

Fight against crime

Other AI capabilities, such as driving cars and controlling robots, are not considered as dangerous as deepfakes. Yes, a hacked autopilot could make a car swerve and crash into a pole at high speed, but in that case we are talking about a single victim, whereas fake videos can affect the minds of thousands of people. Or take, for example, the use of robots to spy on people and burgle houses: experts do not consider them particularly dangerous at all, because they are very easy to detect. A hammer in hand, and that's it, the "spy" is gone.


What should be done to minimize the danger posed by artificial intelligence is not yet clear. One option is to keep working on algorithms for detecting fake videos. Another is to somehow regulate the use of neural networks, but that could slow down the development of the technology. After all, artificial intelligence is used not only by criminals, but also by scientists who want to make the world a better place. Recently, the startup Landing AI taught it to monitor social distancing, which in these difficult times could save many lives.