Tuesday, January 15, 2013

Comment Paper 4 - Acceptable Uses for Autonomous Weapon Systems


Autonomous weapon systems have the potential to change warfare entirely.  Current drone usage has already removed the operator from the battlefield, but the capabilities that autonomous weapon systems offer may limit human involvement in warfare even further.  While this may seem like a step toward less violent warfare, the prospect of robots targeting humans should be addressed critically by our military leaders before our armed forces are fully reshaped around it.
Although warfare has evolved from hand-to-hand combat to aerial bombardment, the decision to target and attack an opponent has always included human decision-making informed by real-time information.  Autonomous weapons change this common trait of warfare.  While one can argue that the human programmer remains involved in the targeting and eventual attack, there is no “human judgment” present to analyze the situation at hand and determine that an attack should be carried out.
Many potential issues can arise from excluding human judgment from an attack.  The Aegis disaster in 1988 demonstrated the failure of a semi-autonomous weapon system to correctly identify its target.  Navy personnel trusted the technology over their own judgment and killed hundreds of Iranian civilians.  If a semi-autonomous weapon system still had a flaw that resulted in hundreds of civilian deaths, how can we trust fully autonomous weapons to make correct targeting decisions?  While countless advancements in autonomous weaponry have been made since this tragedy, it is unsettling that the military may be willing to trust technology to take autonomous lethal action even when that technology’s decision-making capabilities remain worse than a human’s.  Additionally, autonomous technology could cause individuals and states to feel less accountable when taking potentially lethal action.  Although this claim is only theoretical, it could be an unforeseen consequence of taking the human out of the decision to kill other humans.
Although I am hesitant to accept lethal autonomous weapons as an efficient way to fight a war, I believe that non-lethal autonomous weapons, such as those used for surveillance and intelligence gathering, are acceptable and should be pursued.  Singer discusses the Biomimetic Autonomous Undersea Vehicle (BAUV) and how it can be used to track activity in shallow waters.  The BAUV can track hostile submarine movements without putting American submarines at risk.  These technologies may still encounter glitches and flaws in their use, but the consequences of their failure are far less drastic and deadly.
Autonomous weapons should have an increased presence in warfare, but within a limited scope.  While I believe non-lethal autonomous weapons should be used more widely because they keep people out of harm’s way, I have difficulty accepting lethal AWSs.  It is impossible to know whether lethal AWSs will ever have targeting capabilities that exceed human judgment.  Additionally, I have moral objections to removing the human element from the immediate decision to kill people.

7 comments:

  1. What do you think of the argument in the Carpenter piece about machines only targeting machines?

    1. Carpenter addresses a key point, arguing that even if fully autonomous weapons target only other machines, humans are still at risk. If an autonomous weapon drops a bomb on a non-human target, the ensuing blast could still harm individuals in the immediate area. Claiming that AWSs could apply lethal force to an unmanned target without the risk of human loss would be overly idealistic. However, I do not think the Department of Defense is ignorant of that fact or even making such a claim. Rather, I believe the policies established in the directive are an attempt by the military to reduce the risk of civilian casualties and friendly-fire incidents while the use of autonomous weaponry is still in its early stages.

  2. This comment has been removed by the author.

  3. I share your view on lethal AWs. It does seem a scary idea that lethal AWs could completely replace human judgment. However, would your opinion change if the technology continued to improve and studies suggested that malfunctions by the machines were far less likely than errors made by human troops?

    1. Yes, but only if our increased use of these weapons did not lead to an overall increase in military conflicts on our part. If we do not have to worry about putting troops in harm's way, or holding them accountable, I worry that policymakers may be inclined to use the military more freely. If this were to happen, excessive military aggression could backfire on us in countless ways over the long term.

  4. I am curious as to whether you believe human judgment is necessary in wars. It seems that human emotion is critical to how we conduct ourselves in combat, and we could actually change the reputation of the United States for better or worse if we took the emotion out of combat. We would have planes just killing and then moving on, with no ability to become fatigued or to concede potential defeat or victory. How would we lose or win a war if we had no human judgment?

    1. While wars could potentially not be fought by humans, human judgment will always be involved in war in some way. Policymakers, even while relying on autonomous weapons to fight a war, will always be the ones who decide whether to go to war. As in a conventional war, if policymakers notice significant losses, even if all those losses are robots, they would be wise to cease aggression because they could eventually be putting themselves and the rest of their country at risk.
