
It's time to give artificial intelligence the same protection as animals

Universities around the world are spending serious money on artificial intelligence research, and the technology giants are actively developing it. Most likely, we will soon have artificial intelligence whose thinking abilities stand at roughly the level of mice or dogs. And that means it is time to consider that such artificial intelligence will need the kind of ethical protection we usually give animals.

Can artificial intelligence suffer?

Until now, discussions of "AI rights" or "robot rights" have been dominated by questions about what ethical obligations we would owe to an AI with human-level or superhuman intelligence, such as the android Data from Star Trek or Dolores from Westworld. But thinking this way starts from the wrong place. Before we create an AI with human qualities deserving human-level ethical consideration, we will create less complex AIs that deserve, at best, the ethical consideration we give animals.

We have already begun to exercise caution in research involving certain animals. Special committees evaluate research proposals to ensure that vertebrate animals are not killed unnecessarily or subjected to excessive suffering. If human stem cells or human brain cells are involved, the oversight standards are even stricter. Biomedical research is carefully scrutinized, but AI research, which may carry some of the same ethical risks, currently receives no such scrutiny at all. Perhaps it should.

You might think that AI does not deserve such ethical protection because it does not possess consciousness, that is, because it has no genuine stream of experience, with real joy and suffering. We agree with that. But here is a difficult philosophical question: how will we know when we have created something capable of joy and suffering? If the AI is similar to Data or Dolores, it can complain and defend itself by initiating a discussion of its rights. But if the AI is inarticulate, like a mouse or a dog, or for some other reason does not inform us about its inner life, it may be unable to report that it is suffering. And yet dogs and mice can certainly rejoice and suffer.

Here lies a puzzle and a difficulty, because the scientific study of consciousness has not reached a consensus on what consciousness is or how to tell whether it is present. On some views, call them liberal, the presence of consciousness requires nothing more than well-organized information processing, and we may already be on the threshold of such a system. On other views, conservative ones, consciousness may require very specific biological features, such as the mammalian brain in all its glory; in that case we have come nowhere near creating artificial consciousness.

It is not clear which approach is correct. But if the "liberal" view is true, we will very soon be creating many subhuman artificial intelligences that deserve ethical protection. Therein lies the moral risk.

Discussions of "AI risk" usually focus on the risks that new AI technologies may pose to us humans, such as taking over the world and destroying humanity, or wrecking our banking system. Less commonly discussed are the ethical risks we pose to AI through improper treatment.

All this may seem far-fetched, but since researchers in the AI community are actively seeking to develop conscious AI, or robust AI systems that may well end up conscious, we must take the issue seriously. Such research requires ethical review of the kind we apply to animal research and to studies of human nervous tissue samples.

In the case of research on animals, and even on human subjects, appropriate protections were introduced only after serious ethical violations came to light (for example, needless vivisections, the Nazi medical war crimes, and others). With AI, we have a chance to do better. We may need to create oversight committees that will evaluate cutting-edge AI research with these questions in mind. Such committees should include not only scientists but also AI designers, cognitive scientists, ethicists, and interested members of the public. These committees would be tasked with identifying and assessing the ethical risks of new forms of AI design.

It is likely that such committees would judge all current AI research quite acceptable. In most cases, no one believes that we are yet creating AI with conscious experience deserving ethical consideration. But we may soon cross that line. We need to be ready.
