Humanoid killing machines are a mainstay of futuristic fiction. While I’m not worried about a robot knocking on my door and asking for Sarah Connor today, science fiction is gradually becoming science fact as modern armies move from flesh-and-blood combatants to ones made of metal. There are benefits to using autonomous weapons instead of human soldiers on the battlefield: military lives could be saved, and robots wouldn’t suffer the psychological trauma that humans do.
However, autonomous weapons lack something that human soldiers have: humanity. If you get a chill at the thought of a human life being ended by something with oil pumping through its veins instead of blood, you’re not alone. Human Rights Watch, a member of the Campaign to Stop Killer Robots, has been fighting the growing presence of autonomous weapons in the military. It has put together a document that details the problems of autonomous weapons from a humanitarian perspective (robots killing humans without human input is a bad thing) as well as from an international law perspective (can a robot determine who is a combatant and who isn’t?). It’s a quick read, and it’s worth looking at: the future is coming sooner than you think.
. . .a 2011 US roadmap specifically for ground systems stated, “There is an ongoing push to increase UGV [unmanned ground vehicle] autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy.” A US Air Force planning document from 2009 said, “[A]dvances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.”