Almost everything you hear about artificial intelligence today is due to deep learning. This category of algorithms uses statistics to find patterns in data, and it has proven extremely powerful at imitating human skills such as our ability to see and hear. To a very narrow degree, it can even imitate our ability to reason. These algorithms power Google's search, Facebook's news feed, and Netflix's recommendation engine, and they are shaping industries such as healthcare and education.
How deep learning developed
Although deep learning is almost single-handedly responsible for the public's current awareness of artificial intelligence, it represents only a small blip in humanity's long quest to reproduce its own intelligence. It has been at the forefront of that quest for less than a decade. Zoom out on the entire history of the field, however, and it is easy to see that deep learning, too, may soon be on its way out.
“If in 2011 someone had written that deep learning would be on the front pages of newspapers and magazines within a few years, we would have said: wow, you must be smoking dope,” says Pedro Domingos, a professor of computer science at the University of Washington and the author of The Master Algorithm.
According to him, the sudden rise and fall of different methods has long characterized research in AI. Every decade there is hot competition between different ideas; then, from time to time, a switch flips and the whole community starts pursuing one thing.
Our colleagues at MIT Technology Review wanted to visualize these fits and starts. To do so, they turned to one of the largest databases of open scientific papers, known as arXiv. They downloaded the abstracts of all 16,625 papers available in the “artificial intelligence” section as of November 18, 2018, and tracked the words mentioned over the years to see how the field has evolved.
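As a rough illustration of this kind of analysis (a minimal sketch, not MIT Technology Review's actual code; the file name, column names, and keyword list are assumptions for the example), counting how many abstracts mention each keyword per year could look like this:

```python
# Minimal sketch of a keyword-frequency analysis over arXiv abstracts.
# Assumes a CSV file "abstracts.csv" with columns "year" and "abstract";
# the file name, columns, and keyword list are hypothetical.
import csv
from collections import Counter, defaultdict

KEYWORDS = ["logic", "rule", "constraint", "data", "network", "performance"]

counts = defaultdict(Counter)  # year -> keyword -> number of abstracts mentioning it

with open("abstracts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = int(row["year"])
        text = row["abstract"].lower()
        for kw in KEYWORDS:
            if kw in text:
                counts[year][kw] += 1

for year in sorted(counts):
    print(year, dict(counts[year]))
```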
Their analysis revealed three main trends: a shift toward machine learning in the late 1990s and early 2000s, a surge in the popularity of neural networks beginning in the early 2010s, and the growth of reinforcement learning over the past few years.
But first, a few caveats. First, arXiv's AI section dates back only to 1993, while the term “artificial intelligence” goes back to the 1950s, so the database covers only the latest chapters in the field's history. Second, the papers added to the database each year represent only a fraction of the work being done in the field at any given moment. Nevertheless, arXiv offers an excellent resource for identifying some of the major research trends and for seeing the tug of war between different ideological camps.
Machine learning paradigm
The biggest shift the researchers found was a departure from knowledge-based systems by the early 2000s. Such computer systems are based on the idea that all human knowledge can be encoded as a system of rules. In their place, scientists turned to machine learning, the parent category of algorithms that includes deep learning.
Among the 100 most-mentioned terms, those related to knowledge-based systems, such as “logic,” “constraint,” and “rule,” declined the most, while those associated with machine learning, such as “data,” “network,” and “performance,” grew more than any others.
The reason for this sea change is rather simple. In the 1980s, knowledge-based systems gained popularity thanks to the excitement around ambitious projects that tried to recreate common sense in machines. But as those projects unfolded, researchers hit a major problem: too many rules had to be encoded for a system to do anything useful. This drove up costs and significantly slowed ongoing work.
Machine learning was the answer to this problem. Instead of requiring people to manually encode hundreds of thousands of rules, this approach programs machines to extract those rules automatically from piles of data. Accordingly, the field abandoned knowledge-based systems and turned to improving machine learning.
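To make the contrast concrete, here is a minimal sketch (assuming scikit-learn is available; the toy features and labels are invented for illustration) of a model learning decision rules from labeled examples instead of having a human write those rules by hand:

```python
# Minimal sketch: a decision tree learns classification rules from labeled data,
# rather than a human encoding the rules manually. Toy data is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example: [number of exclamation marks, contains the word "free" (0/1)]
X = [[0, 0], [1, 0], [5, 1], [7, 1], [0, 1], [6, 0]]
y = [0, 0, 1, 1, 0, 1]  # 0 = normal message, 1 = spam

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rules can be inspected; no manual rule-writing was required.
print(export_text(model, feature_names=["exclamations", "has_free"]))
print(model.predict([[4, 1]]))  # classify a new, unseen message
```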
Neural network boom
Within the new machine learning paradigm, the transition to deep learning did not happen overnight. Instead, as the analysis of key terms shows, scientists tested many methods besides neural networks, the core machinery of deep learning. Other popular techniques included Bayesian networks, support vector machines, and evolutionary algorithms, all of which take different approaches to finding patterns in data.
Through the 1990s and 2000s, there was steady competition among these methods. Then, in 2012, a pivotal breakthrough led to another sea change. During the annual ImageNet contest, designed to accelerate progress in computer vision, a researcher named Geoffrey Hinton, along with his colleagues at the University of Toronto, achieved the best image-recognition accuracy by a margin of more than 10 percentage points.
The deep learning technique he used sparked a new wave of research, first within the computer vision community and then beyond. As more and more scientists began using it to achieve impressive results, its popularity, along with that of neural networks, increased dramatically.
Growth of reinforcement learning
The analysis showed that a few years after the heyday of deep learning, there was a third and final shift in AI research.
Besides the various machine learning techniques, there are three different types of machine learning: supervised, unsupervised, and reinforcement learning. Supervised learning, which involves feeding a machine labeled data, is used most often and also has the most practical applications today. In the past few years, however, reinforcement learning, which mimics the “carrot and stick” process of training animals through punishments and rewards, has seen a rapid rise in mentions in papers.
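As an illustration of that reward-and-punishment idea (a minimal sketch, not any specific paper's method; the toy environment and hyperparameters are invented), a tabular Q-learning loop updates an action-value table purely from rewards:

```python
# Minimal sketch of tabular Q-learning on a toy 5-cell corridor:
# the agent starts in cell 0 and is rewarded (+1) for reaching cell 4.
# The environment and hyperparameters are invented for illustration.
import random

N_STATES, ACTIONS = 5, [-1, +1]        # actions: move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0  # the "carrot"
        # Q-learning update: nudge the value toward reward + discounted best future value
        q[(s, a)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# Print the learned policy: the best action in each non-terminal cell
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```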
The idea itself is not new, but for many decades it did not really work. “Supervised learning people used to laugh at reinforcement learning people,” says Domingos. But, as with deep learning, one turning point suddenly brought the method to the fore.
That moment came in October 2015, when DeepMind's AlphaGo, trained with reinforcement learning, defeated the world champion at the ancient game of Go. The effect on the research community was instantaneous.
The next ten years
MIT Technology Review's analysis provides only the most recent snapshot of the competition among ideas that characterizes AI research. But it illustrates the fickleness of the quest to duplicate intelligence. “It is important to understand that no one knows how to solve this problem,” says Domingos.
Many of the methods used over the past 25 years originated at about the same time, in the 1950s, and have fallen in and out of favor with the challenges and successes of each decade. Neural networks, for example, peaked in the 1960s and briefly in the 1980s, but nearly died out before regaining their popularity thanks to deep learning.
Every decade, in other words, has seen the dominance of a different technique: neural networks in the late 1950s and 1960s, various symbolic approaches in the 1970s, knowledge-based systems in the 1980s, Bayesian networks in the 1990s, support vector machines in the 2000s, and neural networks again in the 2010s.
The 2020s will be no different, says Domingos, which means the era of deep learning may soon come to an end. But what comes next, an old method in new glory or an entirely new paradigm, is the subject of fierce debate in the community.
“If you can answer that question,” says Domingos, “I want to patent the answer.”
To keep up with artificial intelligence news, follow us on Zen.