Killer Robots: Humans again fail to decide future of killer robots

What is the News?

A report from a UN panel has said that the first autonomous drone attack may have already taken place in Libya. Yet, a UN conference in Geneva failed to regulate the use of killer robots on the battlefield.

What are Killer Robots?

Killer Robots are fully autonomous weapons that would be able to select and engage targets without meaningful human control.

The idea of killer robots has been explored widely in science fiction movies, including Terminator, Blade Runner, and Robocop.

Collectively, these weapons fall under Lethal Autonomous Weapons Systems (LAWS), which can include bombs, dog-like robots, and other machines that use AI and digital technologies to make decisions on the battlefield. LAWS do not include drones, which are remotely operated by human pilots.

Read more: Killer robots aren’t science fiction. A push to ban them is growing
Why is a law needed to control the use of Killer Robots?

Firstly, allowing robots to make life-or-death decisions is inhumane and should not be permitted.

Secondly, killer robots raise the concern of algorithmic bias. The data sets used to train such systems are often flawed and tend to disfavour traditionally disadvantaged groups.

Thirdly, killer robots also present challenges for compliance with international humanitarian law’s proportionality principle, which prohibits attacks in which expected civilian harm is excessive in comparison to anticipated military advantage.

Fourthly, by removing human soldiers from the battlefield, killer robots could lower the threshold for going to war.

What are India’s views on a new law on Killer Robots?

India, Russia, and the United States have said that existing international humanitarian law is sufficient and have opposed negotiating a new legally binding instrument on killer robots.

Source: This post is based on the article "Humans again fail to decide future of killer robots", published in Livemint on 21st Dec 2021.
