19/10/2018
Abstract:
The purpose of this essay is both to summarise some of the most pertinent areas of philosophy engaging with the use of robotics in warfare and to add to that discussion. The current robot-soldier debate focuses on man’s place within it; this essay attempts to shift that focus onto the robot and to view from that perspective the reciprocal implications of robotic development and use in warfare for man and machine alike. The thought process behind this project was to some extent inspired by the works of Professor Andrew Pickering, particularly his work surrounding the ‘mangle of practice’, ‘dances of agency’ and Asian eels. Rather than a masculinised discussion of the latest technologies and warfare, this essay focuses upon the philosophical notion of personhood, the implications for it of advanced robotics and A.I., and the repercussions these would have on society, particularly on one of its most ubiquitous activities, violence and warfare. This is grounded in the work of Donna Haraway and her Cyborg Manifesto. By progressing through the issues people currently take with the use of robots in warfare, this essay aims to delineate the inevitable outcome, that robots can and ought to be assigned personhood, and depicts the implications of this for their continued (and ever expanding) use in modern warfare.
With the increasing proliferation of robotics and drone systems in warfare, the ethical debate surrounding the use of these technologies is becoming ever more pertinent. A discussion of the ubiquity of robots in warfare is taken up by Peter Singer, who quotes leaders in the U.S. military as believing that battles featuring “tens of thousands” of drones will soon be reality. [1] Some see these technologies as providing a much needed benefit by reducing the number of both civilian and combatant casualties, making warfare more efficient, effective and safe. The most extreme of their opponents argue that the unfettered use of robotic soldiers in place of human combatants will result in humanity losing control in a manner previously the sole domain of science fiction. Others simply hold that the claims of the proponents of these technologies are overblown. They fear the implications of using robotic soldiers: their inability to act autonomously like humans and their lack of emotions as a check upon their actions. As M. Riza acknowledges, “depending on perspective, robotic warfare holds great promise or is a sign of the end of days.” [2] In this essay I shall outline some of the key issues taken with the use of drones and robotic soldiers. I shall then discuss one way in which a solution to these issues is possible and yet how its pursuit could lead to unintended consequences. Namely, developing robots sufficiently skilled and unflawed would result in them having intelligence and capabilities on a par with, or even exceeding, humanity’s own. Applying Donna Haraway’s notion of the ‘cyborg’ to both man and machine, I will then demonstrate how, at such a juncture, the claim that the loss of a robot combatant in warfare is (all things being equal) more desirable than the loss of a human soldier becomes untenable.
In this essay I will discuss the notion of personhood and how, with sufficient technological advancement, it can be applied to robotic soldiers, thereby challenging our preconceived notions of morality in relation to the use of robot soldiers. This renders untenable the claim that using robotic soldiers in place of human combatants is morally justifiable on the grounds that it reduces the death toll of warfare.
Science fiction has previously had a monopoly on common understandings of robots in warfare, with the term ‘robot soldiers’ evoking the image of the Terminator or perhaps the more anthropomorphic figure of Roy Batty from Blade Runner. Increasingly, though, robots are becoming part of the modern military arsenal, with technologies such as the drone and its ‘Reaper’ derivative, to name just one, now commonly used and recognised by the public. As Chris Meyers notes, “…we have seen the first robot soldiers, armed with lethal weapons, entering the battlefield…”, and with a plethora of forms and uses comprising the robot soldiers currently known by the general public to be in operation, what soon replaces them could be radically more advanced still. [3] This emerging reality has brought with it a number of legitimate ethical concerns. Drones have featured heavily in military operations against insurgents in the Middle East, with stories of ‘collateral damage’ accompanying their actions as civilians and friendly combatants occasionally come under fire in place of genuine targets. Are robots, and will they ever be, capable of performing with the same discretion and situational awareness as a human soldier? If not, then how can the major duties of combat ever be delegated to autonomous robots? In the drive for more automated robots taking over from human combatants, this is a concern which will have to be answered. Moral concerns with the use of robot soldiers go deeper still, with Meyers arguing that they could potentially lead to a more callous attitude on the part of governments; robot soldiers could encourage engaging in warfare rather than deterring it. [4] It could also lead to a loss of gravity in the general public’s view of warfare, Singer argues. [5] “Unmanned systems represent the ultimate break between the public and its military”. [6]
These ethical concerns with the use of robots in warfare are in need of answer because, as Singer notes, “technologies such as unmanned systems can be seductive”. [7] For Meyers, they potentially give nations the freedom to pursue more humanitarian foreign policies and interventions, and they have the potential to be more precise than human combatants – thus reducing collateral damage – which also proves alluring to the military. [8] Running through this debate, though, is an endorsement of the claim that robot soldiers will reduce the human cost of war in at least one sense, regardless of their performance capabilities. As they will substitute for human combatants, the loss of a robot during combat will save the life of the human who would have been in its place, so the argument goes. There is also the tactical advantage to the military of “war in a can”. [9] Robot soldiers will not need the intense and extended periods of training that humans do before they can be mobilised. Instead, they can be manufactured and sent straight into combat with their skills uploaded from a database. These plausible benefits seem to necessitate the development and implementation of robotic soldiers. The converse attitude – that robot soldiers ought to be abandoned on the grounds that they can potentially lead to a denigration of our objections to warfare – falls prey to a reductio ad absurdum, whereby nations ought instead to make warfare as brutal and damaging as possible, and so can easily be dispensed with. [10] In trying to reconcile the robot soldier with the problems it currently faces, though, a new one emerges. The accepted benefit – that robotic soldiers in place of humans reduce the cost of warfare, as a human life is saved when a robot dies in their place – would be endangered as robots become more advanced. With intelligence, capabilities and other attributes on a par with humans, would it still be morally desirable for a robot to die in the place of a human?
Would a ‘life’ truly have been saved? I shall demonstrate that, in resolving the issues currently faced by the use of robotic soldiers in place of humans, the notion that the human cost of war is reduced, by virtue of a robot replacing a potential human casualty, becomes fallacious.
To Riza’s mind there are two conditions which fully autonomous robot soldiers need to meet before they can assuage the ethical concerns surrounding their implementation in warfare: first, they must be able to distinguish non-combatants from legitimate targets of their own accord; second, there must be accountability when things go wrong, residing explicitly either in an owner, commander or creator, or in the robot itself. [11] The complex nature of potential combat environments and situations makes it incredibly difficult, if not outright impossible, for a human programmer to create an exhaustive list of commands to which an autonomous robot can refer. On top of this, the situations a robot faces can have no morally ‘correct’ answer, and in such a situation a human soldier would be required to come to an ethical judgement; a robot incapable of doing as much, and instead referring to an explicit command, therefore seems deficient.
The answer to these issues, if it is to be accepted that robot soldiers have potential benefits we ought to seek out (such as their capability to reduce the human cost of warfare by dying in place of human combatants), is generally accepted to lie in the development of functional A.I. Artificial, as opposed to simulated, intelligence would give robots cognitive capabilities matching those of a human. If achieved, it would make the robot soldier capable of performative parity with a human soldier in the complex environment of warfare, and so answer the key ethical concerns surrounding ‘collateral damage’ (and minimising it). As Riza acknowledges, “[f]or now, humans have a monopoly on intelligence of the sort required for satisfying distinction…” of combatant from non-combatant, but this state of affairs is by no means certain to persist. [12] A.I. also has the potential to provide robots with the means of determining an “ethical standard” which could be used in relation to the ‘Rules of Engagement’ which all soldiers today are asked to interpret and implement in combat. [13] On top of this, inculcating sufficient levels of intelligence and other cognitive capacities in robot soldiers allows for an easy answer to the accountability dilemma; namely, the robot itself would be held accountable for the actions it chose to take, as it would have a sufficient understanding of consequences and causality. [14] Google’s DeepMind recently created the AlphaGo program, which was capable of teaching itself how to play the board game Go to a standard sufficient to beat leading human players. [15] These developments give credence to the belief that A.I. can be achieved with performative capabilities matching those of humans – at least with regard to skill.
The potential for ethical thinking on the part of a robot possessing such A.I. also seems reasonable to expect, especially with AlphaGo’s ability to learn like humans and think ‘organically’ going from strength to strength. [16]
Artificial intelligence therefore seems to be a much-needed panacea for the major ethical concerns facing the use of robot soldiers as they currently stand. It can resolve the accountability and discernment problems underlying the risks of collateral damage when robot soldiers are used in place of humans. The only issue that remains is the concern that robot soldiers will encourage warfare. The development and implementation of A.I. in robotic soldiers does, though, give rise to a new concern which needs to be factored into our understanding. Namely, would the attributes of a robot soldier possessing such artificial intelligence – accountable, capable of ethical decisions and cognitively on a par with (or even exceeding) humans – not make it worthy of being granted ‘personhood’? Would it be reasonable to claim that the destruction of such a being would still be morally more desirable than the death of the human for whom it has been substituted? In such a state of affairs, can it still be maintained that the use of robot soldiers reduces the human cost of warfare?
Samir Chopra and Laurence White take up this debate on the implications of A.I. for personhood of both the legal and the moral kind. [17] They argue that the legal basis for assigning personhood rests solely on a pragmatic need of society, and that being human is no condition for personhood; corporations, groups and other non-human entities are regularly considered to have personhood by the law. [18] “What would matter in such a personhood decision regarding artificial agents would be whether there was a felt need for this kind of legal personality to be accorded.” [19] They nonetheless assert that highly developed A.I. would severely challenge our currently conceived notions of personhood, demanding that we reconstitute our framework as it currently stands. Whether or not artificially intelligent robots hold metaphysical personhood – as opposed to being assigned it for the purposes of the conduct of law – seems more relevant to our specific enquiry, however. “…[P]hilosophical views of personhood often cleave the concepts of ‘human’ from ‘person’…”, with various moral thinkers such as Immanuel Kant and John Locke putting stock in the rational attributes of a being as determinative of personhood, instead of its human character. [20] Unless we hold to a seemingly outdated notion of human exceptionalism, it seems more than reasonable to suppose that a highly developed A.I. would warrant being considered as both a legal and a non-legal person in the same manner as a human. To do as much would endanger the key perceived benefit, that the death toll of warfare would be reduced by virtue of robotic casualties being substituted for human combatants.
Even with such a novel framework, though, the claim that the human cost of war would be reduced when a robotic person dies in the place of a human person would still be perfectly sound. Donna Haraway, though, would go further than Chopra and White, arguing that robots with sufficiently developed artificial intelligence would warrant not just being assigned personhood but also being welcomed into the same ontological category (being) as humans. [21] For Haraway, humans fall into the ontological category of the cyborg, as “…by the late twentieth century…we are all chimeras, theorized and fabricated hybrids of machine and organism…”, with our increasing reliance and mutual dependence upon technology having broken down the pure human and re-created us as beings cyborgian in nature. [22] With the parameters of humanity long since dispensed with, for Haraway, the artificially intelligent robot which can perform, think and possibly even feel as cyborg humans do seems to demand equity with humans in our moral understandings of warfare. If the artificially intelligent robot and the modern human hold such equality, then the claim that a human life has been saved when a robot soldier dies in their place becomes false. In the search to resolve the major issues facing the use of robot soldiers, the major benefit of the robot soldier appears to have been lost.
Someone could respond that Haraway goes too far in arguing that humans are now cyborgs, and that I go too far in using this to argue that advanced robots deserve not just personhood but to be considered of the same ontology as humans. It could be argued that there is a biological nature to humans which clearly delineates a human combatant from the prospective robotic substitute, and that we humans have a metaphysical nature – possibly of a mind or soul – which warrants our being considered of more import than a robot. If so, the claim that robot soldiers substituting for human combatants would reduce the human cost of warfare still stands firm. It must be acknowledged here that both sides of the debate are taking certain premises for granted – that highly developed A.I. could have ‘feelings’ and other human cognitive abilities, that humans have a metaphysical character potentially precluded from robots, and so on – and on these bases both sides form their differing arguments. Meyers evokes the Precautionary Principle when discussing the implications of A.I. in warfare, though not in reference to the specific matter of personhood and humanity that we are here dealing with. [23] It states that “we should choose whichever course of action has the least bad worst-case scenario…”. [24] Applied to the question of whether or not robots ought to be considered persons, humans, and given equal footing within our moral framework, the precautionary principle would seem to imply that if we do not assign these things to robotic soldiers, and view them in the same ontological category as humans, the consequences (if the personhood claim is true) would be far worse than the consequences of the contrary scenario.
For example, if we do not grant artificially intelligent robots personhood and humanity, and do not treat them equally with humans (in respect of their use as soldiers), then we potentially consign thousands of persons (these robots) to cruel and unacknowledged deaths. This is a more deplorable scenario than the contrary one, which arises from viewing robot soldiers as of the same category as humans. It seems, therefore, that it would not be morally justifiable to use robot soldiers in place of humans on the grounds that they reduce the human cost of warfare when they die in place of human combatants.
In addition, the learning process currently being inculcated in A.I. is akin to biological growth and the development of children and organisms – very different from the programming of machines. This hints that the biological distinction between machine and human is also being broken down, with artificially intelligent robots, on a cognitive level, undergoing essentially biological (if not organic) processes of learning. As such, the distinction between human and artificially intelligent robot is yet further eroded.
The assertion that robot soldiers with highly developed A.I. ought to be considered as essentially equivalent to human combatants – and that therefore robots dying in place of human soldiers does not reduce the human cost of warfare – is not equivalent to saying that robot soldiers should not be used in warfare. It is merely to say that the key benefit of using robots in combat has been lost. A decision about whether or not to use robot soldiers can still be made on the basis of their performance capabilities, their efficiency savings for the military and the wider hurt caused by their deaths; as Meyers states, robotic soldiers (even if viewed as human) would still mean that “…there will be fewer grieving widows (or widowers), fewer fatherless or motherless children.” [25] Viewing robots as humans could also provide an answer to the concerns raised by Singer and Meyers about how governments and the public could come to be more callous about warfare conducted by robots rather than humans. If society took the view that robots are equivalent to humans, then a robot casualty would bear the same gravity as a human one, and warfare would be no more encouraged than previously.
To conclude, the use of robot soldiers in warfare cannot be morally justified on the grounds that a robot casualty saves the life of the human for whom it has been substituted. This is because, in attempting to answer a number of the current major ethical and performative concerns surrounding robotic soldiers, the development of artificially intelligent robots would, applying the cyborg world-view, render robots of the same ontological category as humans. Though this outcome is not certain, the precautionary principle incites us to take this viewpoint, as to do otherwise could have far worse moral consequences. Viewing artificially intelligent robots as human would still permit robots to be used in combat (justifications would now be based more, though not solely, on their performance capabilities) and would have the added benefit of answering some of the persistent fears about using robots in combat – namely that it would create a callous regard for warfare. Nonetheless, using robotic soldiers in the place of human combatants would not be morally justified on the grounds that it reduced the human cost of warfare.
Notes
[1] P. W. Singer, ‘Robots at war: the new battlefield’, The Wilson Quarterly Online (2009), https://wilsonquarterly.com/quarterly/winter-2009-robots-at-war/robots-at-war-the-new-battlefield/ (last accessed 15th December 2017).
[2] M. Shane Riza, ‘A.I: The search for relevance and robotic jus in bello’ (chapter in) Killing without heart: The limits on robotic warfare in an age of persistent conflict (Nebraska, 2013), p. 125.
[3] Chris Meyers, ‘G.I., robot: The ethics of using robots in combat’, Public Affairs Quarterly 25:1 (2011), 21.
[4] Meyers, ‘G.I., robot’, 26-27.
[5] Singer, ‘Robots at war’, https://wilsonquarterly.com/quarterly/winter-2009-robots-at-war/robots-at-war-the-new-battlefield/.
[6] Ibid.
[7] Ibid.
[8] Meyers, ‘G.I., robot’, 29.
[9] Riza, ‘A.I: The search for relevance and robotic jus in bello’, p. 132.
[10] Meyers, ‘G.I., robot’, 28.
[11] Riza, ‘A.I: The search for relevance and robotic jus in bello’, p. 127.
[12] Ibid, p. 136.
[13] Ibid, p. 132.
[14] Ibid, p. 137.
[15] Rory Cellan-Jones, ‘Google DeepMind: AI becomes more alien’, BBC News (2017), http://www.bbc.co.uk/news/technology-41668701 (last accessed 15th December 2017).
[16] Ibid.
[17] Samir Chopra and Laurence White, ‘Personhood for artificial agents’ (chapter in) A legal theory for autonomous artificial agents (Michigan, 2011), pp. 154 & 171.
[18] Chopra and White, ‘Personhood for artificial agents’, p. 158.
[19] Ibid, p. 160.
[20] Ibid, p. 171.
[21] Donna Haraway, ‘A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century’ (chapter in) Simians, Cyborgs and Women: The Reinvention of Nature (New York, 1991), p. 153.
[22] Haraway, ‘A cyborg manifesto’, p. 151.
[23] Meyers, ‘G.I., robot’, 29.
[24] Ibid.
[25] Ibid, 22.
Bibliography
Cellan-Jones, Rory, ‘Google DeepMind: AI becomes more alien’, BBC News (2017), http://www.bbc.co.uk/news/technology-41668701.
Chopra, Samir and White, Laurence, ‘Personhood for artificial agents’ (chapter in) A legal theory for autonomous artificial agents (Michigan, 2011), pp. 153-192.
Haraway, Donna, ‘A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century’ (chapter in) Simians, Cyborgs and Women: The Reinvention of Nature (New York, 1991), pp. 149-181.
Meyers, Chris, ‘G.I., robot: The ethics of using robots in combat’, Public Affairs Quarterly 25:1 (2011), 21-36.
Riza, M. Shane, ‘A.I: The search for relevance and robotic jus in bello’ (chapter in) Killing without heart: The limits on robotic warfare in an age of persistent conflict (Nebraska, 2013), pp. 125-148.
Singer, P. W., ‘Robots at war: the new battlefield’, The Wilson Quarterly Online (2009), https://wilsonquarterly.com/quarterly/winter-2009-robots-at-war/robots-at-war-the-new-battlefield/.