It is worth recognizing how far robotics has come in recent years. Weapons created by defense companies are becoming smarter, artificial intelligence systems are being built into them, robots are becoming fully autonomous, and so on. This means that a killer robot could become a reality sooner than we think. At least, that is what representatives of PAX, a non-profit organization based in the Netherlands that advocates for world peace, say. Their report was covered by the news outlet Quartz.
Why create killer robots?
Killer robots are designed to decide whether to take or spare a life on their own, without human control. PAX experts called this alarming development "the third revolution in warfare," after the invention of gunpowder and the atomic bomb. Both activists and some states are calling for a set of international rules governing the creation of such weapons, or even an outright ban on their use. But several countries, including the United States, China, and the Russian Federation, have not yet taken action on the issue.
PAX experts identified at least 30 arms manufacturers worldwide that have no policy against developing such weapons systems. These include the US defense firms Lockheed Martin, Boeing, and Raytheon, the Chinese state conglomerates AVIC and CASC, the Israeli firms IAI, Elbit, and Rafael, Russia's Rostec, and Turkey's STM.
"Until states agree to collectively create some kind of regulatory regime, or ideally a preventive ban, it is likely that companies will develop, manufacture, and ultimately use weapons that lack sufficient human control," said Frank Slijper, the author of the report.
At the same time, the activists do not see the military application of artificial intelligence as such as the problem. The problem is precisely that such systems may slip out of human control. What is your opinion on this issue? Share it in the comments and in our Telegram chat.
For example, the US military is already developing a gun with artificial intelligence that will select and hit targets on its own, as well as AI-equipped tanks that can "identify and engage targets three times faster than any person." And STM, the Turkish state defense company, is already producing an AI-powered robot called KARGU at full speed. Equipped with face recognition, KARGU can autonomously select and attack targets using coordinates previously set by the operator. Turkey is reported to intend to use KARGU in Syria.
PAX is most concerned about the potential deployment of AI in offensive systems that would select and attack targets on their own, without human supervision. The group asks how such a weapon would distinguish between soldiers and civilians. Moreover, lawyers still do not know who would be held responsible if an autonomous weapon violated international law.
Moreover, unlike Google or Amazon, which faced both public and internal backlash over their work on military systems, companies like Lockheed Martin and Raytheon deal exclusively with the military, so they face minimal pushback from partners or from ordinary people, since most developments remain classified up to a certain point.
While the development of autonomous weapons continues, PAX believes there is still a chance to prevent a possible disaster. The group said that manufacturing companies could play a decisive role here and should oppose the production of fully autonomous lethal weapons. As for AI-assisted weapons systems, PAX representatives say that defense firms must follow a set of rules that has yet to be developed. But no one is calling for abandoning AI altogether.