When Norbert Wiener, the father of cybernetics, wrote his book “The Human Use of Human Beings” in 1950, vacuum tubes were still the primary electronic building blocks, and there were, in fact, only a few computers in operation. Yet he imagined the future we are now witnessing with unprecedented accuracy, erring only in minor details.
Before any other philosopher of artificial intelligence, he realized that AI would not merely imitate, and replace, human beings in many kinds of intellectual activity, but would change people in the process. “We are but whirlpools in a river of ever-flowing water,” he wrote. “We are not stuff that abides, but patterns that perpetuate themselves.”
For example, when tempting opportunities abound, we are willing to pay a little, and accept small costs of doing business, for access to new powers. And very soon we become so dependent on our new tools that we lose the ability to thrive without them. Options become obligations.
This is a very old story in evolution, and many of its chapters are well known to us. Most mammals can synthesize their own vitamin C, but primates, having switched to a diet consisting mostly of fruit, lost that built-in ability. The self-perpetuating patterns we call humans now depend on clothes, processed food, vitamins, syringes, credit cards, smartphones, and the Internet. And tomorrow, if not already today, on artificial intelligence.
Wiener foresaw several problems with this state of affairs that Alan Turing and other early AI optimists largely missed. The real threat, he wrote:
…is that such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the human race, or that political leaders may attempt to control their populations by means not of the machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.
Obviously, these dangers are very relevant now.
In recorded media, for example, innovations in digital audio and video let us pay a small price (in the eyes of audiophiles and film lovers) for abandoning analog formats, and in return we get an extremely simple, perhaps too simple, way of reproducing recordings with almost no restrictions.
But there is a huge hidden price. Orwell's Ministry of Truth has become a real possibility. AI techniques for creating virtually indistinguishable fake “records” are rendering obsolete the investigative tools we have relied on for the past 150 years.
We can either simply abandon the brief epoch of photographic evidence and return to that old world where human memory and trust were the gold standard, or we can develop new methods of defense and attack in the battle for truth. One of the most striking recent examples is the fact that destroying a reputation is now much cheaper than earning and protecting one. Wiener saw this phenomenon in very broad terms: “In the long run, there is no distinction between arming ourselves and arming our enemies.” The information age has also become an age of disinformation.
What can we do? The key is Wiener's observation that these machines are “helpless by themselves.” We are creating tools, not colleagues, and the real threat is failing to see the difference.
Artificial intelligence in its current manifestations is parasitic on human intelligence. It quite unceremoniously appropriates everything produced by human creators and extracts the patterns in it, including our most private habits. These machines do not yet have goals or strategies and are not capable of self-criticism or innovation; they merely mine our databases, with no thoughts or goals of their own.
They are, as Wiener said, helpless, not in the sense of being chained or immobilized agents; they are not agents at all, for they lack the capacity to “act from reasons,” as Kant would have put it.
In the long run, “strong AI,” or artificial general intelligence, is possible in principle but undesirable. The far more limited AI that is possible in practice today is not necessarily evil. But it does pose a threat, in part because it can be mistaken for strong AI.
How strong is artificial intelligence today?
The gap between today's systems and the sci-fi systems flooding the popular imagination is still huge, although many people, amateurs and professionals alike, tend to underestimate it. Consider IBM's Watson, which may well be worthy of respect in our time.
This supercomputer is the result of an extremely large-scale R&D (research and development) process involving many people and many person-centuries of intelligent design, and it uses thousands of times more energy than a human brain. Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic restrictions of the Jeopardy! rules, but even these rules had to be revised for it to take part. A bit of versatility had to be given up, and some humanity added, to make the show.
Watson is not good company, despite misleading advertising from IBM promising general conversational ability, and turning Watson into a plausible, multifaceted agent would be akin to turning a calculator into Watson. Watson could be a useful computational core for such an agent, but more of a cerebellum or an amygdala than a mind: at best, a special-purpose subsystem playing a supporting role, and nothing like a system for framing goals and plans and building on its conversational experience.
And why would we want to make a thinking, creative agent out of Watson anyway? Perhaps Turing's brilliant idea, the famous Turing test, lured us into a trap: we became obsessed with creating at least the illusion of a real person sitting behind the screen, bridging the “uncanny valley.”
The danger is that ever since Turing posed his challenge, which was, above all, a challenge to deceive the judges, AI creators have tried to meet it with cute humanoid dolls, “cartoon” versions designed to charm and disarm the uninitiated. Joseph Weizenbaum's ELIZA, the very first chatbot, was a vivid example of such illusion-making: an extremely simple algorithm that could convince people they were having intimate and sincere conversations with another person.
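To see just how shallow such a trick can be, here is a minimal sketch of an ELIZA-style responder in Python. This is an illustration only, not Weizenbaum's original script: the rules, phrasings, and function names are invented for the example.

```python
import re

# A few ELIZA-style rules: find a pattern in the user's words and
# mirror part of the utterance back as an open-ended question.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."  # said whenever no rule matches

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads as a reply."""
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return FALLBACK

print(respond("I am worried about my future"))
# -> How long have you been worried about your future?
```

A handful of such pattern-and-echo rules, with no model of the world at all, is enough to sustain the illusion of an attentive listener; that shallowness was precisely Weizenbaum's point.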
Weizenbaum himself was alarmed by the ease with which people were willing to believe in it. And if we have learned anything from the annual Loebner Prize competition, a restricted Turing test, it is that even very smart people who are not well versed in computer programming are easily taken in by these simple tricks.
Attitudes toward such methods in the AI field range from condemnation to celebration, with a consensus that the tricks are not particularly deep but can be potent. A very welcome shift in attitude would be a candid admission that humanoid embellishments are false advertising, something to condemn rather than encourage.
How can this be achieved? Once we recognize that people are beginning to make life-and-death decisions by following the “advice” of AI systems whose internal operations are practically unfathomable, we will see good reason to hold those who encourage people to trust such systems morally and legally accountable.
AI systems are very powerful tools. So powerful that even experts have good reason not to trust their own judgment over the “judgments” delivered by these tools. But if the users of these tools are going to benefit, financially or otherwise, from popularizing them, they need to make sure they know how to do so responsibly, with maximum control and justification.
Licensing and certifying the operators of such systems, just as we license pharmacists, crane operators, and other professionals whose mistakes and lapses of judgment can have dire consequences, could, with the support of insurance companies and other underwriters, oblige the creators of AI systems to go to great lengths in searching out the weaknesses of their products, and to train those who are going to work with them.
One can imagine a kind of inverted Turing test, in which the judge is the one being evaluated; until he can find the weaknesses, the overstepped boundaries, and the gaps in a system, he will receive no license. To be certified, such a judge will need serious training. The urge to ascribe humanlike powers of thought to an object, as we usually do when meeting what seems to be an intelligent agent, is very, very strong.
Indeed, the ability to resist the urge to see something human in a machine is a strange talent. Many people would find cultivating it morally questionable, and even the most pragmatic users of a system occasionally succumb to treating their tools as “friends.”
No matter how carefully AI designers scrub the false “human” touches from their products, we should expect a flourishing of labels, workarounds, and tolerated distortions of the actual “understanding” of both the systems and their operators. In the same way that TV ads for drugs with long lists of side effects, or for alcohol, come with an abundance of fine print carrying all the legally required warnings, AI developers will abide by the letter of the law while growing ever more sophisticated in their warnings.
Why do we need artificial intelligence?
We do not need artificial conscious agents. There is an abundance of natural conscious agents, enough to handle any task that should be reserved for such special and privileged entities. We need smart tools. Tools do not have rights, and should not have feelings that could be hurt or “abused.”
One reason not to make artificial conscious agents is that, although they could become autonomous (in principle, as autonomous, self-improving, and self-creating as any human being), they would not, without special provision, share our vulnerability or our mortality.
Daniel Dennett, a professor of philosophy at Tufts University, once set a task for students in a seminar on artificial agents and autonomy: give me the specifications for a robot that could sign a contract with you, not as a surrogate for some human owner, but on its own. It is not a question of getting it to understand the clauses or manipulate a pen on paper, but of having, and deserving, legal status and moral responsibility. Small children cannot sign such contracts, nor can the disabled, whose legal status requires them to be under guardianship and places responsibility on their guardians.
The problem with robots that might want to attain such exalted status is that, like Superman, they are too invulnerable to make credible commitments. If they were to renege, what would happen? What would be the punishment for breaking a promise? Being locked in a cell or disassembled? Prison would be no inconvenience for an artificial intelligence unless we first installed a thirst for freedom that the AI itself could not ignore or switch off, and disassembling an AI would not kill the information stored on its disks and in its software.
Easy digital recording and transfer of data, the breakthrough that gave software and data effective immortality, is what makes robots invulnerable. If this does not seem obvious, think about how human morality would change if we could back people up every week. Jumping off a bridge without a bungee cord on Sunday, after Friday's backup, would be a rash decision; you could watch the footage of your untimely death later.
That is why what we are creating, and what we should want to create, are not conscious humanoid agents, but an entirely new kind of entity: oracles of a sort, unconscious, with no fear of death, no distracting loves and hates, no personality; mirrors of truth that will almost certainly be contaminated with human lies.
The human use of human beings will soon change, once again and forever, but if we take responsibility for the trajectory of our evolution, we can steer clear of unnecessary dangers.