Ethics Homework

“a ban on offensive autonomous weapons beyond meaningful human control”

As artificial intelligence has opened a new frontier in scientific development, the idea of using scientific knowledge to invent autonomous weapons has been brought under the spotlight. Whether it is ethical to create robots that can eliminate humans is a critical question. Many AI researchers are concerned about the potentially catastrophic consequences of allowing the development of lethal autonomous weapons. On the other hand, autonomous systems can provide great benefits to humanity, saving human lives and providing critical information in dangerous situations. Therefore, the development of autonomous weapons does not need to be completely halted, but there is a need for global policy intervention to set a limit on the capability of offensive autonomous weapons.

There are indeed many advantages to developing automated weapons. First, they reduce military costs, which in turn can lower taxes. In a 2013 article published in The Fiscal Times, David Francis cites Department of Defense figures showing that “each soldier in Afghanistan costs the Pentagon roughly $850,000 per year.” Some estimate the cost per year to be even higher. Conversely, according to Francis, “the TALON robot—a small rover that can be outfitted with weapons, costs $230,000”. The reduction in cost is significant. If the military can use these automated weapons to fill certain roles, potentially helping soldiers make better decisions on the battlefield, then fewer soldiers would need to be deployed and the death rate could be reduced.

Automated systems that help humans make judgments on the battlefield are acceptable. The issue is that these autonomous systems should not be able to kill humans. First of all, it is hard to assign responsibility for deaths if the killer is a machine. Similar issues arise with autonomous cars when accidents happen. Is the person who designed the flawed software responsible, or the manufacturing company that authorized the release of the product? When life-and-death decisions depend on the correctness and accuracy of scientific advancement, it is hard to charge people with crimes because these flaws are usually unintentional. Furthermore, the delegation of life-and-death decisions to a nonhuman machine is concerning. Making the decision to take away someone’s life is difficult even for humans, let alone implementing algorithms for machines to kill people. There are too many factors. If these factors are not carefully calibrated, autonomous weapons might choose the wrong target and take innocent lives.

The banning of offensive lethal weapons must be a collaborative effort among world leaders, as some may worry about hostile countries breaking the deal and developing more advanced weapons. However, as Musk, Hawking, and other signatories noted in their open letter on autonomous weapons, biologists and chemists could have developed biological and chemical weapons capable of wiping out the human race, but instead they came together and “supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons”. From this optimistic perspective, a ban on offensive autonomous weapons can work.

Reference

“Open Letter on Autonomous Weapons.” Future of Life Institute, futureoflife.org/open-letter-autonomous-weapons/.

“Pros and Cons of Autonomous Weapons Systems.” Army University Press, www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.