
Can artificial intelligence destroy humanity by 2035?

British theoretical physicist Stephen Hawking believed that the creation of artificial intelligence (AI) would be "either the worst or the best event in human history." Back in 2016, the scientist advocated the creation of a scientific organization whose main task would be to study the prospects of artificial intelligence as "a critically important issue for the future of our civilization and our species." Recently, crime researchers identified the top 18 AI-enabled threats we should worry about over the next 15 years. While science fiction and popular culture depict the death of humanity at the hands of intelligent robots, the research shows that the main threat actually has more in common with us than meets the eye.

Deepfakes are the main threat posed by AI

Is AI a threat?

Today it may seem that an artificial intelligence posing a threat to humanity is the stuff of science fiction writers and films like The Matrix or I, Robot. Admittedly, it is hard to imagine an almighty, terrifying AI when Siri cannot even give a correct weather forecast. But Shane Johnson, director of the Dawes Centre for Future Crime at University College London (UCL), explains that the number of potential threats will grow, and that they will become increasingly complex and intertwined with our daily lives.

According to Johnson, quoted by Inverse, we live in an ever-changing world that creates new opportunities, both good and bad. That is why it is so important to anticipate future threats, including a rise in crime: so that policymakers and other relevant stakeholders can identify crimes before they happen. Yes, just like in Minority Report, with Tom Cruise in the lead role.

Although the authors of the work published in Crime Science admit that the study's findings are inherently speculative and depend on the current political and technological environment, they argue that in the future technology and policy will go hand in hand.


A still from the film Minority Report

Artificial intelligence and crime

To reach these conclusions about the future, the researchers assembled a team of 14 academics in related fields, seven experts from the private sector, and 10 experts from the public sector. These experts were divided into groups of four to six people and given a list of potential AI-enabled crimes, ranging from physical threats (such as autonomous drone attacks) to digital forms of threat. To make their judgments, the teams rated the attacks on four main criteria:

  • Harm
  • Criminal profit
  • Achievability
  • Defeatability

Harm, in this case, may refer to physical, mental, or social damage. The study's authors further note that these threats can cause harm either by defeating AI (for example, by evading facial recognition) or by using AI to commit crimes (for example, by blackmailing people with deepfake videos).

Although these factors cannot truly be separated from one another, the experts were asked to consider the impact of each criterion separately. The teams' ratings were then aggregated to determine the most dangerous AI threats overall for the next 15 years.
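The aggregation step described above can be pictured as scoring each threat on the four criteria and sorting by the combined score. The sketch below is purely illustrative: the threat names, the numeric scores, and the use of a simple average are assumptions for demonstration, not the study's actual data or method.

```python
# Toy sketch of ranking AI threats by four criteria.
# Scores (1 = low, 4 = high) are invented for illustration;
# criterion order: harm, criminal profit, achievability, defeatability.
threat_scores = {
    "deepfake audio/video":         (4, 4, 4, 4),
    "driverless vehicle as weapon": (4, 1, 3, 3),
    "AI-authored fake news":        (3, 1, 4, 3),
    "burglar bots":                 (1, 2, 2, 1),
}

def rank_threats(scores):
    """Average each threat's criterion scores and sort, most dangerous first."""
    overall = {name: sum(vals) / len(vals) for name, vals in scores.items()}
    return sorted(overall.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_threats(threat_scores)
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

A simple mean treats all four criteria as equally important; the real expert panels weighed the criteria qualitatively rather than with a fixed formula.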

Diagram of AI threats

Unlike a robotic threat capable of causing physical harm or damaging property, deepfakes can deprive us of trust in people and in society itself. Assessing threats against the criteria above, the research team determined that deepfakes, a technology that already exists and is spreading, represent the highest level of threat.

It is important to understand that the threats posed by AI will undoubtedly be a force to be reckoned with in the coming years.

Comparing 18 different types of AI threats, the group determined that deepfake video and audio manipulation was the biggest threat.

"People have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence (and often legal force), despite a long history of photographic trickery," the authors explain. "But recent developments in deep learning (including deepfakes) have greatly expanded the possibilities for generating fake content."


The study's authors believe the potential impact of these manipulations ranges from individuals impersonating family members to defraud older people, to videos designed to sow distrust in public and government figures. They add that these attacks are difficult for individuals (and in some cases even experts) to detect, making them hard to stop. Thus, changes in the behavior of citizens may be the only effective defense.

Deepfakes are the main threat posed by AI

Other top threats included autonomous cars used as remote weapons, similar to the vehicle terrorist attacks of recent years, and AI-generated fake news. Interestingly, the group ranked burglar bots (small robots that can climb through small openings to steal keys or let burglars in) among the least serious threats.


Are we doomed?

No, but we have some work to do. Popular imagery of the AI threat suggests there will be a single red button we can press to stop every nefarious robot and computer. In reality, the threat lies not so much in the robots themselves as in the way we use them to manipulate and harm one another. Understanding this potential harm, and getting ahead of it through information literacy and community building, can be a powerful tool against this more realistic robot apocalypse.