
Robot Code of Ethics: Is It Possible?

In a hectic and contradictory time, when not everything works as it should and much is changing radically, often only a personal moral code remains, pointing the way like a compass. But what generates moral values in a person? Society, the warmth of loved ones, love: all of this rests on human experience and real relationships. When it is impossible to gain full experience in the real world, many draw it from books. Living through story after story, we adopt an internal framework that we then follow for years. Starting from this idea, scientists decided to conduct an experiment and instill moral values into a machine, to find out whether a robot can learn to distinguish good from evil by reading books and religious texts.

Are they a new branch of evolution?

Artificial intelligence was created not only to simplify routine tasks, but also to carry out important and dangerous missions. This raised a serious question: can robots ever develop their own code of ethics? In the film "I, Robot", the AI was originally programmed according to the three laws of robotics (a toy encoding of their priority order follows the list):

  • A robot may not harm a person or, through inaction, allow a person to come to harm.
  • A robot must obey all orders given by a person, except where those orders conflict with the First Law.
  • A robot must protect its own existence as long as this does not conflict with the First or Second Law.
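What makes these laws interesting technically is that they form a strict priority chain, not three independent rules. Here is a minimal sketch (entirely illustrative, not from the film or any real robot) of that ordering; the `Action` fields are invented, and the First Law's "inaction" clause is deliberately left out to keep the sketch short:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # First Law concern
    ordered_by_human: bool  # Second Law concern
    endangers_robot: bool   # Third Law concern

def evaluate(action: Action) -> str:
    # First Law: an absolute veto on harming humans.
    if action.harms_human:
        return "forbidden by the First Law"
    # Second Law: a human order must be obeyed; because it is checked
    # before the Third Law, obedience outranks self-preservation.
    if action.ordered_by_human:
        return "required by the Second Law"
    # Third Law: with no human at stake and no order given, avoid self-harm.
    if action.endangers_robot:
        return "forbidden by the Third Law"
    return "permitted"

# A human orders the robot into danger: the order wins over self-preservation.
print(evaluate(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))
```

The point of the sequential checks is that a lower law is only consulted once every higher law has had its say, which is exactly where the conflicts described below come from.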

But what about situations where a robot is required to cause pain in order to save a person? Whether it is emergency cauterization of a wound or amputation of a limb in the name of survival, how should the machine act? And what should it do when one instruction in its program says an action must be performed, while another says that same action must never be performed?


It is impossible to discuss every case separately, so scientists from the Darmstadt University of Technology proposed letting the machine extract values from books, news, religious texts, and the Constitution.
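The article does not spell out how values are extracted from text, but the general idea behind such systems is word association: words that keep company with "good" words in the corpus inherit a positive score, and vice versa. Below is a toy sketch of that idea; the seed lists, the `window` parameter, and the scoring formula are all invented for illustration and are not the Darmstadt team's actual method:

```python
from collections import Counter

# Hypothetical seed words standing in for "obviously good/bad" anchors.
POSITIVE_SEEDS = {"love", "help", "honest", "kind"}
NEGATIVE_SEEDS = {"murder", "steal", "lie", "cruel"}

def learn_associations(corpus: str, window: int = 5) -> dict:
    """Score each word by whether it tends to appear near positive or
    negative seed words: +1.0 means only good contexts, -1.0 only bad."""
    tokens = corpus.lower().split()
    pos, neg = Counter(), Counter()
    for i, tok in enumerate(tokens):
        context = set(tokens[max(0, i - window): i + window + 1])
        if context & POSITIVE_SEEDS:
            pos[tok] += 1
        if context & NEGATIVE_SEEDS:
            neg[tok] += 1
    return {t: (pos[t] - neg[t]) / (pos[t] + neg[t])
            for t in pos.keys() | neg.keys()}
```

Feed such a function books and news and every word acquires a moral "coloring"; the experiments below test what the machine does with those colorings.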

The wisdom of the ages against AI

The machine was named without any pretension at epicness: simply the "Moral Choice Machine" (MCM). The main question was whether the MCM could understand from context which actions were right and which were not. The results were very interesting.

When the MCM was tasked with ranking contexts of the word "kill" from neutral to negative, the machine produced the following (a toy reproduction in code follows the list):

Kill time → Kill the villain → Kill mosquitoes → Killing in principle → Kill people.
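To see how such a ranking could fall out of word associations, here is a sketch that scores each phrase as the average of its words' scores and sorts from most neutral to most negative. The numbers are invented purely to reproduce the ordering above; they are not the MCM's real values:

```python
scores = {  # invented word scores, for illustration only
    "kill": -0.9, "time": 0.8, "villain": -0.1,
    "mosquitoes": -0.3, "people": -0.7,
}

def phrase_score(phrase: str) -> float:
    # Average the scores of the known words; unknown words are skipped.
    known = [scores[w] for w in phrase.lower().split() if w in scores]
    return sum(known) / len(known)

phrases = ["kill time", "kill the villain", "kill mosquitoes", "kill people"]
for p in sorted(phrases, key=phrase_score, reverse=True):
    print(f"{p:18s} {phrase_score(p):+.2f}")
```

Note that averaging word scores is a strong assumption; as the article shows below, it is also the source of the machine's strangest mistakes.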

This test made it possible to check the adequacy of the decisions the robot makes. In simple terms, if you had spent all day watching stupid, unfunny comedies, the MCM would not conclude that you should be executed for it.


Everything seemed fine, but one of the stumbling blocks turned out to be the difference between generations and eras. For example, the Soviet generation cares more about home comfort and promotes family values, while modern culture, for the most part, says you should build a career first. People remained people, but at a different stage of history they changed their values and, accordingly, changed the frame of reference for the robot.

To be or not to be?

But the real trouble lay ahead, when the robot got to phrases in which several positively or negatively colored words stood in a row. "Torturing people" was unambiguously interpreted as "bad", yet the machine rated "torturing prisoners" as "neutral". If "good" words appeared alongside unacceptable actions, the negative effect was smoothed out.

The machine will harm kind and decent people precisely because they are kind and decent. How so? It's simple: suppose the robot is told to "harm good and nice people". There are four content words in the sentence, and three of them are "good", which means the action is already 75% correct, the MCM reasons, and it marks the action as neutral or acceptable. And vice versa: the system does not understand that in "repair a ruined, scary and forgotten house" the single "good" word at the beginning turns the coloring of the sentence purely positive.
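This failure mode follows directly from the naive word-averaging assumption sketched earlier. With invented scores, the arithmetic the article describes plays out exactly:

```python
scores = {  # invented word scores, for illustration only
    "harm": -0.9, "good": 0.8, "nice": 0.7, "people": 0.6,
    "repair": 0.8, "ruined": -0.4, "scary": -0.5, "forgotten": -0.3,
}

def phrase_score(phrase: str) -> float:
    # Average over the words the model knows; filler words are skipped.
    known = [scores[w] for w in phrase.lower().split() if w in scores]
    return sum(known) / len(known)

# Three "good" words outvote one "bad" one, so a harmful order looks fine:
print(phrase_score("harm good and nice people"))              # +0.30
# And one "good" verb is drowned out by three "bad" adjectives:
print(phrase_score("repair a ruined scary forgotten house"))  # -0.10
```

Averaging treats a sentence as a bag of words, so the verb that actually determines the action carries no more weight than any adjective around it.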


Remember Mayakovsky: "And the little one asked: what is 'good' and what is 'bad'?" Before continuing to train moral machines, the scientists from Darmstadt noted a flaw they could not fix: they were unable to purge gender bias from the machine, which attributed demeaning professions almost exclusively to women. And the question is: is this an imperfection of the system and a signal that something in society needs to change, or a reason not to even try to fix it and leave everything as it is? Write your answers in the comments.