Every day the most advanced artificial intelligence systems become smarter, gaining new knowledge and skills. AI is already capable of outperforming humans in many areas. But behind all this “superiority” there are only lines of code and well-defined algorithms that do not allow a program to “think freely”. In other words, a machine cannot do anything that has not been built into it. AI can come to logical conclusions, but it cannot reason about a given topic. And it seems this may change soon.
How do people learn about the world
We, like all intelligent organisms, learn how the world works gradually. Imagine a one-year-old watching a toy truck drive off a platform and hang in the air. To the child, there will be nothing unusual about this. But repeat the same experiment just two or three months later, and the little one will immediately realize that something is wrong. By then, the child already knows how gravity works.
"No one tells the child that objects shouldto fall, ”says Jan Lekun, head of Facebook in the direction of developing artificial intelligence and a professor at New York University. - “Much of what children learn about the world, they learn through observation.”
And, however simple it may sound, it is precisely this approach that could help developers create more advanced versions of artificial intelligence.
Why is it so difficult to teach AI to reason
Deep learning (roughly speaking, the acquisition of skills through trial and error) already allows AI to achieve tremendous success. But the most important thing artificial intelligence still cannot do is reason and draw conclusions based on an analysis of the objective reality in which it exists. In other words, machines do not truly understand the world around them, which leaves them unable to interact with it.
One way to improve AI could be a kind of “shared memory” that would help machines obtain information about the world around them and study it gradually. But this does not solve all the problems.
“Obviously, we are missing something,” says Professor LeCun. “A child can develop an understanding of what adult elephants and their babies look like after seeing only a couple of photos, while deep learning algorithms have to look at thousands, if not millions, of images. A teenager can learn to drive safely after a couple of dozen hours of practice and figure out how to avoid accidents, while robots have to log tens of millions of hours.”
How to teach AI to reason
The answer, according to Professor LeCun, lies in an underappreciated subcategory of deep learning known as unsupervised learning. Whereas algorithms based on supervised and reinforcement learning teach AI to reach a goal through data supplied from outside, unsupervised algorithms develop patterns of behavior on their own. Simply put, there are two ways to teach a robot to walk: the first is to enter all the parameters into the system based on the robot's structure; the second is to “explain” the principles of what walking is and let the robot learn independently. Today the overwhelming majority of existing algorithms follow the first path. Yann LeCun believes the emphasis should shift toward the second.
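To make the distinction concrete, here is a minimal sketch contrasting the two paradigms, assuming a toy dataset of random points and standard scikit-learn models. None of this is LeCun's own code; it only illustrates “learning from supplied answers” versus “finding structure on your own”:

```python
# Toy contrast between supervised and unsupervised learning.
# The data is random and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # 200 observations, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels supplied "from outside"

# Supervised: the algorithm is told the right answer for every example.
clf = LogisticRegression().fit(X, y)

# Unsupervised: the algorithm sees only the observations and must
# discover structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)

print(clf.predict(X[:5]))  # predictions guided by the given labels
print(km.labels_[:5])      # groupings discovered without any labels
```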
"Researchers should start by learningprediction algorithms. For example, to teach neural networks to predict the second half of the video, seeing only the first. Yes, in this case mistakes are inevitable, but in this way we teach the AI reasoning, expanding the possibilities of its application. Returning to the example of a child and a toy truck: we have 2 possible outcomes - the truck will fall or freeze. "Throw" another hundred such examples to neural networks and they learn how to build logical interconnections and ultimately learn how to reason. "