
Another DeepMind Victory: After Chess and Go, Artificial Intelligence Conquers StarCraft

In November 2017, a little more than a year ago, we wrote that artificial intelligence was not yet able to defeat professional players at StarCraft. Less than a year later, that barrier fell. Last month in London, a team from DeepMind, the British artificial intelligence research company, quietly laid a new cornerstone in the contest between people and computers. On Thursday, it revealed the achievement in a three-hour YouTube stream in which humans and bots fought to the death.

DeepMind Defeats Humans at StarCraft

The DeepMind broadcast showed its artificial intelligence bot, AlphaStar, beating a professional player at the complex real-time strategy (RTS) game StarCraft II. Humanity's champion, 25-year-old Grzegorz Komincz of Poland, lost 5:0. The machine learning software appears to have discovered strategies unknown even to the professionals who compete for the millions in prize money awarded annually in one of the most lucrative games in eSports.


“It was not like any StarCraft I have ever played,” said Komincz, a well-known professional who plays under the nickname MaNa.

The DeepMind feat is the latest in a long chain of contests in which computers have taken on, and beaten, the world's best human players. Checkers fell in 1994, chess in 1997, and in 2016 AlphaGo conquered the game of go. A StarCraft bot is the most powerful game-playing artificial intelligence yet, and its arrival had long been anticipated.

AlphaStar arrives about six years into the current era of machine learning. While AlphaGo's 2016 victory was stunning (go experts had believed that moment was at least a decade away), AlphaStar's victory seems to have come more or less on schedule. By now it is clear that, given enough data and computing power, machine learning can handle complex but well-defined problems.

Mark Riedl, an associate professor at the Georgia Institute of Technology, found Thursday's news exciting but not astonishing. “We had already reached the point where it was only a matter of time. In a sense, beating humans at games has become boring.”

Video games like StarCraft are mathematically harder than chess or go. The number of valid positions on a go board is a 1 followed by 170 zeros; the StarCraft equivalent has been estimated at a 1 followed by 270 zeros, no less. Building and commanding military units in StarCraft requires players to choose and execute many more actions, and to make decisions without being able to see every move of the opponent.

DeepMind overcame these steep barriers with the powerful TPU chips that Google invented to accelerate machine learning. The company adapted algorithms originally designed for text processing to the task of determining which battlefield actions lead to victory. AlphaStar first learned StarCraft from replays of half a million human games, then played against continually improving clones of itself in a virtual league, a kind of digital evolution. The best bots to emerge from this league accumulated experience equivalent to 200 years of gameplay.
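The training recipe described above, seeding a population from an imitation-learned starting point and then letting mutated copies of winners displace weaker league members, can be illustrated with a toy sketch. This is not DeepMind's actual method: the `skill` number stands in for a neural network, and the mutation step stands in for gradient-based reinforcement learning; all names and parameters here are invented for illustration.

```python
import random

def play_match(a, b):
    # Toy match outcome: the higher-"skill" agent wins more often.
    return a if random.random() < a["skill"] / (a["skill"] + b["skill"]) else b

def train_league(league_size=8, generations=50, seed=0):
    random.seed(seed)
    # Seed the league with near-identical clones of one starting agent,
    # standing in for supervised pretraining on human replays.
    league = [{"skill": 1.0 + random.uniform(-0.1, 0.1)} for _ in range(league_size)]
    for _ in range(generations):
        a, b = random.sample(league, 2)
        winner = play_match(a, b)
        # A slightly mutated clone of the winner replaces a random league
        # member: a crude stand-in for continued training against the league.
        child = {"skill": winner["skill"] * (1.0 + random.uniform(0.0, 0.05))}
        league[random.randrange(league_size)] = child
    # Report the strongest agent the league produced.
    return max(agent["skill"] for agent in league)

print(train_league())
```

Because losers are gradually overwritten by improved copies of winners, the league's best agent tends to strengthen over generations, which is the core idea behind self-play population training.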

The AlphaStar that defeated MaNa is far from omnipotent. For now, the bot can play only one of StarCraft's three races. Beyond its inhumanly long game experience, it also perceives the game differently: it sees everything happening on the map at once, while MaNa had to move his camera around to see what was going on. AlphaStar also controls and targets units more precisely than a human wielding a mouse can, although its reaction time is slower than a professional gamer's.

Despite these caveats, Riedl and other experts welcomed DeepMind's work. “It was very impressive,” says Jie Tang, a researcher at the independent AI lab OpenAI who works on bots that play Dota 2, the most lucrative game in eSports. Such video game stunts can have useful side effects: the algorithms and code OpenAI used to master Dota last year have since been adapted, with mixed success, to make robot hands more dexterous.

However, AlphaStar also illustrates the limitations of today's highly specialized machine learning systems, says Julian Togelius, a professor at New York University and author of a recent book on games and artificial intelligence. Unlike its human opponent, the new DeepMind champion cannot play at full strength on different maps, or as a different alien race, without extensive additional training. Nor can it play checkers, chess, or earlier versions of StarCraft.

This inability to cope with even small surprises is a problem for many anticipated AI applications, such as autonomous cars, and for the adaptable systems researchers call artificial general intelligence (AGI). A more meaningful battle between man and machine might be a kind of decathlon: board games, video games, and a finale in Dungeons and Dragons.

The limitations of highly specialized artificial intelligence seemed to show themselves when MaNa played an exhibition game against a version of AlphaStar restricted to viewing the map the way a human does, one area at a time. DeepMind's data showed that this version was almost as strong as the one that beat MaNa in five games.

The new bot quickly assembled an army powerful enough to crush its human rival, but MaNa used clever maneuvers and the experience of his earlier defeats to hold off the AI's forces. The delay gave him time to build up his own troops and win.
