Wednesday, October 27, 2021

Be like a ‘car in a bus’: UN urged to end reliance on autonomous robots

The Special Rapporteur has asked the UN Security Council to end the dangerous use of AI "to justify unlawful wars, and military assaults against civilians in autonomous weapon systems and other technologies, including cyber security".

Permanent representatives of United Nations member states have been urged to take urgent measures to ban the use of artificial intelligence in armed conflict and to prevent other countries from creating and deploying such weapons. The new UN report, authored by independent investigator Philip Alston, was made available online on Monday.

The report, which examined potential dangers arising from the development and deployment of autonomous weapons, said that these weapons “have been described as ‘killer robots’ by media sources.”

“In one analogy, they are like being able to pilot a military aircraft or a missile fully by hand. Another analogy is transferring the ability to make the crucial driving decisions about an automobile, as well as awareness of weather and other local conditions, from a person or a human pilot to a machine. According to this analogy, our thinking process would have become the vehicle and we would be its driver,” Alston noted.

He said a ban would also set an example for other countries, including the United States and China, both of which are developing fully autonomous weapons with little, if any, human intervention.

A wide range of technologies are increasingly being used in warfare, but they should not be deployed in conflict without human supervision, he pointed out.

He noted that civilians and human rights are put at risk by the use of increasingly “modern forms of armed technology”.

Alston cited the high number of civilian casualties caused by robot-guided bombs dropped by the U.S. Air Force; the possibility of autonomous weapons being deployed by the U.S. and U.K.; current plans by France and Israel to develop autonomous weapons; and other countries’ known interest in, and plans for, the use of technologies such as artificial intelligence.

AI weapons systems are not yet ready to be used in military conflicts, he argued, and there is no adequate human supervision over such systems.

A giant step

“Even a giant step towards human control and oversight would provide many years of useful and cost-effective research and development. A further giant step might be to adopt an internationally binding ban on the development, production and use of weapons that could be deemed autonomous,” Alston pointed out.

He raised concerns that researchers are busy developing AI technology that could take control away from humans within as little as a decade.

In fact, he warned that the best current option is to maintain human supervision over the development of AI; otherwise, other countries could undermine the right to autonomous human judgment and independent human oversight.

Alston added that current debates on AI weapons technology in the international community have gained little momentum. He feared that the technology is moving far too fast.

But “many do not see it as a human rights issue at all,” he said, and frame the debate as something other than one about lethal autonomous weapons.

He expressed worry that “certain approaches are being adopted to justify” such weapons by seeking to invoke concepts such as national security, law and humanity.

The answers to questions about potential misuse of the new technology in conflict zones are “often not adequate” and could introduce new problems, he said.

He called for the swift adoption of a legally binding prohibition on the use of lethal autonomous weapons systems, as well as on the development, production and stockpiling of technologies for these purposes.
