The Cyborg Question

Jakob Zeitler – 16/06/2016

In the first term of my second year I attended the module “Cyborg Studies”, which had been led for many years by Andrew Pickering, a very interesting figure in the field of Science and Technology Studies at the University of Exeter. I loved how the module challenged me, and my assumptions about the subject matter, more than any other module before. The following essay is the second assessment, for which I was free to choose the topic.

Stepping into the realms of cyborg ethics

Since the inception of the term ‘cyborg’ (Clynes & Kline, 1960), short for ‘cybernetic organism’, the concept and the ideas around it have been challenging the deep assumptions that form the basis of our thought about who we are as humans. Lately, advances in intelligent machines have been the key driver of this renewed challenge to what we consider moral responsibility, to what our responsibility in this world is, and to what is right and what is wrong: our ethics (Gunkel, 2012: p. 1). The subject of cyborg ethics as such has not been taken on by many thinkers. In fact, there is only one paper, Kevin Warwick’s “Cyborg morals, cyborg values, cyborg ethics” (Warwick, 2003), which properly attempts to take on this topic. It is no coincidence that ‘cyborg ethics’ comes last in the title: the term itself is only discussed in the final paragraph, and even there merely as a set of questions rather than answers. Beyond that, the stream of literature on cyborg ethics, apart from amateur explorations or passing mentions in the literature of psychology, medicine and philosophy, quickly seeps away.

This raises the question of why no one has yet properly pursued this area of moral thought, so that the sea of moral explorations of cyborg ethics remains so shallow. I will first lay the groundwork for an inquiry into cyborg ethics and project future developments. Then, I will explore the lesser-known past of cyborg ethics and, finally, conclude how to further pursue the subject.

Laying a groundwork for cyborg ethics

We can approach cyborg ethics from two perspectives: either from a human turning into a cyborg or from a machine turning into a cyborg. On the one side, we have a human who can be turned more and more into a machine through technological enhancements. Simple additions such as a wheelchair or an artificial leg are intuitively not considered unfamiliar changes to a human body. This changes, though, once we consider replacing the one thing that supposedly controls a human: the brain. Replacing or adjusting the brain with technology is not a far-fetched idea in medicine. Grant Gillett explores a variety of such brain experiments in his article “Cyborgs and moral identity” (Gillett, 2006). Specifically, let us consider the case where the brain of a human is replaced, part by part, with equivalent mechanical parts. The human then turns into a cyborg.

On the other side, we have a machine which can receive simple modifications such as a human-like head or the ability to converse, more or less, in natural language. But then there are edge cases which question our moral intuition. Kevin Warwick not only comes up with, but also builds, one of these edge cases, as described in his paper “Robots with Biological Brains” (Warwick, 2012: p. 317). A rat brain is cultured on an electronic circuit which wirelessly controls a simple wheel-driven robot; the robot feeds sensory data, such as the distance to the next obstacle, back to the brain. Although this is a simple experiment raising few ethical issues, Warwick points to the issues that arise as we increase the size of the brain tissue. By extending it into three-dimensional space, it is possible to reach the physiological level of a human brain, at about 100 billion neurons, and even beyond. If we take physiological structure as a proxy for moral intuition, this example shows that such an intuitive moral proxy is not as helpful as one might assume.

On the other hand, if a human’s brain is completely replaced by mechanical parts and the machine brain reaches the physiological level of a human brain, we need to reassess our intuitions of moral responsibility and ask whether he or she, or it, is still accountable for its actions. By modifying our brain beyond the capacity of our understanding of ethics, our concepts of moral agency, moral responsibility and moral accountability stumble through the dark corridor of a psychiatric ward with rooms full of cyborg patients and agents. We should think clearly about when to set them free in the world, as at this stage, with our humanist moral mindset, we do not know how to judge their moral actions.

We can take shortcuts in the discussion of cyborg ethics by looking at related areas of thought and their literature. This way we can anticipate the limits of cyborg ethics, borrow methodology and avoid reinventing the wheel of ethical debate. The debate of morality has traditionally been driven from an anthropocentric or humanist standpoint, that is, the human is seen as the centre of the inquiry. Over the last decades, through the challenging of anthropocentrism and the move towards post-humanism, the discussion has been extended into animal ethics and into machine ethics as well as robot ethics (Gunkel, 2012: p. 4). It was acknowledged that our intuitive understanding of what a human is, is logically flawed.

In his book “The Machine Question”, David Gunkel explores the question of the morality of machines with the goal of laying a groundwork for the debate of machine ethics. Gunkel’s key motivation is not to offer an answer, but to follow the task of philosophy “to submit to critical analysis the questions themselves, to make us see how the very way we perceive a problem is an obstacle to its solution”, because “there are not only true or false solutions, there are also false questions” (Zizek, 2006: p. 137).

He approaches the topic by introducing practical methods such as exclusion and inclusion, as well as a detailed discussion of the concepts of agency, personhood and patiency. When we evaluate morality via exclusion, we decide who bears moral responsibility by checking an applicant to the club of moral agency against a set of properties. If the applicant seems to exhibit a property on the blacklist, for example a mechanical brain, we deny him entry. Inclusion works the other way around: if you exhibit a property from a given list, for example a brain with more than 100 billion neurons, then you get in. This method manifests itself in thinkers who experiment with tests meant to detect these properties, as sketched below.
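
To make these two methods concrete, the following is a minimal sketch, in Python, of exclusion and inclusion as membership tests over sets of properties. The property names and the 100-billion-neuron threshold are invented placeholders for illustration, not definitions drawn from Gunkel’s text.

```python
# Gunkel's exclusion and inclusion methods rendered as membership tests.
# All property names here are hypothetical stand-ins for illustration.

EXCLUSION_BLACKLIST = {"mechanical_brain"}            # properties that deny entry
INCLUSION_WHITELIST = {"neurons_beyond_100_billion"}  # properties that grant entry


def admitted_by_exclusion(properties: set) -> bool:
    """Deny moral agency to any applicant exhibiting a blacklisted property."""
    return not (properties & EXCLUSION_BLACKLIST)


def admitted_by_inclusion(properties: set) -> bool:
    """Grant moral agency only to applicants exhibiting a whitelisted property."""
    return bool(properties & INCLUSION_WHITELIST)


# A cyborg with a fully mechanical brain at human physiological scale
# passes one test and fails the other.
cyborg = {"mechanical_brain", "neurons_beyond_100_billion"}
print(admitted_by_exclusion(cyborg))  # False: a blacklisted property is present
print(admitted_by_inclusion(cyborg))  # True: a whitelisted property is present
```

The sketch also makes the philosophical weakness visible: the verdict depends entirely on who writes the lists and where the thresholds sit.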

The most popular of these tests, in science as well as science-fiction literature, is the “Turing test” by Alan Turing (1999). Originally titled “the imitation game”, it was simply meant to test a machine for intelligence. The Turing test has since inspired more advanced tests such as the “Turing triage test” by Robert Sparrow (2004), which compares the continued existence of an artificial intelligence to that of a human being, or the “Moral Turing Test” (Allen, 2000), which tests a machine for the ability to give justified answers to ethical dilemmas.

Exclusion and inclusion, and consequently these tests, operate on thresholds of object properties which we have been failing to define precisely for centuries. The discussion of what makes up a human, and of intelligence, has always run up against the thing at the centre of action: the brain. Yet it is not the brain itself which catches the attention, but what is inside: consciousness. As to its role in ethics, especially in machine ethics, consciousness can be effectively summed up as the “last bastion of occult properties, epiphenomena, immeasurable subjective states” (Dennett, 1998: pp. 149-150). As we have “no real operational definition of consciousness […] we are completely prescientific at this point about what consciousness is”, Rodney Brooks (2002: p. 194) concludes. Our understanding of consciousness essentially equals ancient explanations of ‘the soul’ as the key ingredient for moral agency (Gunkel, 2012: p. 45). Consciousness in terms of agency is “so hardwired” into humans that “we have a tendency to use it to explain everything: so the Hawaiians explained the volcanic eruptions by the agency of a displeased goddess Pele” (Veruggio & Abney, 2012: p. 355).

On the surface, thoughts about morality and consciousness are largely driven by intuition. It is only when we analyse them and try to take them apart bit by bit that we observe the inconsistency in our thoughts. A noteworthy pattern of moral intuition is the way we approach risk. We call something ‘certain’ if we can predict its behaviour. An autonomously driving car would be risk-free if we were able to foresee all the states of the environment it will be in; that way, we could program its exact behaviour for each situation. But this monitoring takes tremendous effort, tracking every single part of the world and how its parts relate and will relate to each other; it is most likely impossible, a task of divine scale. This observation leads to the other side of the cyborgs: the machine cyborg. The more we inquire into the human cyborg, the more we see that common concepts such as free will and consciousness can be disputed even as parts of humans themselves. If free will effectively entails taking actions which, from time to time, are not predictable from the outside, then there is no objection to declaring a machine with unpredictable behaviour as having free will as well.
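
To make the scale of that monitoring task concrete, here is a toy calculation. The feature counts are invented and absurdly conservative, since a real environment is continuous and unbounded, yet even this miniature behaviour table dwarfs the roughly 10^80 atoms in the observable universe:

```python
# Toy estimate for the "program every situation" approach to a risk-free car.
# Both numbers below are hypothetical and far smaller than reality.

n_objects = 50           # objects tracked around the car
states_per_object = 100  # discretised positions/velocities per object

# One behaviour entry is needed per joint state of the environment.
table_entries = states_per_object ** n_objects
print(f"{table_entries:.2e} behaviour entries required")  # 1.00e+100
```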

The debate of consciousness finally comes down to the Chinese room argument by Searle (1980), which is part of the Harnad (1991) question of ‘other minds’ (Gips, 1991: 9). A machine that can speak and hold conversations in Chinese does not necessarily understand Chinese. But there is also no way to test for understanding, so we are left with a ‘black box’ and no answer concerning its consciousness. Consequently, a machine which perfectly replicates our moral values and can offer explanations and justifications for our ethics without a conversational problem will be hard, in fact impossible, to assess regarding its own ethical responsibility. On the other side, this also means that a human’s ability to recite all kinds of moral rules and values should not by itself credit him with moral responsibility. We cannot make a statement about either human or machine morals. This is clearly an epistemological objection, as we set out the limit of our moral inquiry for machines, for humans and therefore for cyborgs.

When we now take a step back and look at the interplay of technology and humanity in the past and present, we can see that our fixed view of what defines a human and a machine has been undermined, through the movement of post-humanism, in a way which requires us to revise our moral intuitions. We can see that there, indeed, might be a place for something called “cyborg ethics”. In fact, an ethics of cyborgs might already have been in practice, but in a disguised way.

Cyborg ethics in the past and in the future

The abandonment of anthropocentrism opens our eyes to human-machine constellations which were not obviously cyborgs before. Autonomously driving cars with humans inside are cyborgs. Partly or fully autonomous weapon systems, such as the drones deployed by the USA in the Middle East, are cyborgs. These mechanical systems exist in an interplay with humans; that is, they are cybernetic organisms. From hunters and gatherers with the technology of spears to the pilot flying a drone, cyborgs have been present, and so has cyborg ethics. But it is not these cyborgs with mainly physical impact which most deserve detailed ethical scrutiny. It is the cyborgs that infer, modify and manipulate information and data which should be thought through more clearly.

In his paper “The New Investor”, Tom C. W. Lin takes on automated trading systems: “Modern finance is becoming an industry in which the main players are no longer entirely human. Instead, the key players are now cyborgs: part machine, part human.” (2013: p. 1) Consider that the economic prosperity, and therefore the quality of life, of many small countries depends on a small set of investors who increasingly choose autonomous traders; the estimated moral impact of a potentially wrong decision by an autonomous trader is then on a different scale from that of a self-driving car (Lippert, 2016). Even more than that, the daily information intake of billions of people on Facebook is controlled by one algorithm (Oremus, 2016). Information is an essential ingredient of human life, arguably listed right after food and shelter. If engineers work on algorithms that decide which information people acquire each day and live by, they need to be aware of the ethical implications of the changes they make. These examples and their impact will be assessed differently by each strand of ethics, such as utilitarianism or Kantianism. The point is to show that ethical problems do not just arise in situations of immediate physical feedback, such as an autonomous car hurting a pedestrian. They also occur in situations where it is hard to determine the exact impact because the system we are looking at is highly complex, possibly beyond our abilities to examine. Both these kinds of scenarios, where human and machine interact in a dilemma, require an ethical inquiry we might call “cyborg ethics”.

All of these observations might drive the change towards a new science, as can be observed in robot ethics. As Thomas Kuhn or Larry Laudan point out, “new sciences are born from both the rational quest to solve problems and test solutions, and the non-rational thrust of societal forces and gestalt shifts in one’s worldview” (Veruggio & Abney, 2012: p. 350). Cyborg ethics as a science might establish itself next to, or below, the ethical explorations in machine and human ethics.

Lastly, in contrast to all the theory we can come up with and debate endlessly, we might also consider a practical approach to cyborg ethics, as suggested by Anderson and Anderson in 2007:

“Ethics, by its very nature, is the most practical branch of philosophy. […] too often work in ethical theory is done with little thought to real world application. […] Ethics must be made computable in order to make it clear exactly how agents ought to behave in ethical dilemmas.” (2007: p. 16)

Exploring ethics via practical methods, such as moral software agents which deductively and inductively learn about our moral values, is seen by Anthony Beavers as a potential game changer in the quest against the threat of ethical nihilism (Beavers, 2012: p. 343). Such agents force us to assess our ethics in the real world and to make definite statements. In a sense, practical ethics, through the advance of artificial intelligence, “makes philosophy honest” (Dennett, 2006).
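
As a minimal sketch of what such a “computable ethics” could look like, the following toy agent induces weights for a handful of ethical duties from human verdicts on example dilemmas. It is written in the spirit of Anderson and Anderson’s proposal rather than as their actual system; the duties, cases and verdicts are all invented for illustration.

```python
# A toy inductive moral learner: it infers how much weight each ethical
# duty carries from human verdicts on example actions. The duties, the
# satisfaction scores and the verdicts are hypothetical placeholders.

from dataclasses import dataclass

DUTIES = ["prevent_harm", "respect_autonomy", "keep_promises"]


@dataclass
class Case:
    satisfaction: dict  # duty -> degree (-2..2) to which the action satisfies it
    acceptable: bool    # the human verdict on the action


def learn_weights(cases: list, epochs: int = 100, lr: float = 0.1) -> dict:
    """Perceptron-style induction: nudge duty weights until the weighted
    sum of duty satisfactions agrees with the human verdicts."""
    weights = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        for case in cases:
            score = sum(weights[d] * case.satisfaction[d] for d in DUTIES)
            if (score > 0) != case.acceptable:
                sign = 1 if case.acceptable else -1
                for d in DUTIES:
                    weights[d] += lr * sign * case.satisfaction[d]
    return weights


# Three toy verdicts: overriding autonomy to prevent grave harm is fine,
# breaking a promise for mere convenience is not, keeping promises counts.
cases = [
    Case({"prevent_harm": 2, "respect_autonomy": -1, "keep_promises": 0}, True),
    Case({"prevent_harm": 0, "respect_autonomy": 0, "keep_promises": -2}, False),
    Case({"prevent_harm": 0, "respect_autonomy": 1, "keep_promises": 2}, True),
]
print(learn_weights(cases))
```

Crude as it is, such an agent forces the definite statements Beavers asks for: once the weights are induced, it must rank any new dilemma, and we must say whether we agree.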

We have looked at different ways to approach cyborg ethics and at how it has manifested itself in the past as well as the future. The key driver of cyborg ethics is the move from humanism to post-humanism. One might argue that cyborg ethics is a synthesis of human ethics and machine ethics, so that cyborg ethics as such is merely a summarising subject dependent on the results in human and machine ethics. But when humanism is abandoned, this statement cannot hold anymore, as human and machine, and agency as a whole, are essentially the same. “Machines can become the repository of human consciousness”, so that “human identity is essentially an informational pattern rather than an embodied enaction”, Moravec (1988) writes. It is not important whether we will ever achieve this science fiction, because we will not be able to test for it. It is epistemologically ignorant to assume that the human is different from all the intertwined assemblages operating in our world. With this limit in mind, we can conclude whether and how to proceed with cyborg ethics.

Just an episode or something here to stay?

The question persists of why cyborg ethics as such has not been substantially taken up yet. On the one hand, funding in this area is potentially still stuck at the stages of animal, robot and machine ethics. Until we have sufficiently discussed the ethical issues in these areas, it might not be efficient to start exploring cyborg ethics, which depends on, or at least can easily build on, these preceding findings. Post-humanism, in a sense, is still busy defying humanism in the conventional debate of ethics.

On the other hand, the concept of the cyborg as such maybe just does not exist and is therefore not worthy of ethical consideration. One can easily argue, based on humanism, that we can address ethical issues more efficiently by investigating each moral category, such as humans, animals or machines, separately. But it might even more be the case that cyborg ethics is simply too big for our capacity of comprehension. In human ethics we can always fall back on ancient values and intuition, but the case is harder with cyborg ethics. When we infer moral rules for small systems, such as autonomous traders or algorithms shaping the information intake of billions of people, we would need to test them on complicated cyborg systems as well. But we might not be able to do so, as we are simply not able to dissect the huge, intertwined assemblages that make up the world. Our small-scale findings do not scale and are therefore too limited to be worth consideration.

Therefore, the only way to find out whether these limits exist is to try to reach them. The methods and historical debates stand ready to be applied to cyborgs, and the results will possibly feed back into the conventional discussion of ethics.

Bibliography

Anderson, Michael & Anderson, Susan Leigh. 2007. “Machine Ethics: Creating an Ethical Intelligent Agent”. AI Magazine 28 (4): 47-56.

Anderson, Michael & Anderson, Susan Leigh. 2011. “Machine Ethics”. Cambridge: Cambridge University Press.

Beavers, Anthony. 2012. “Moral Machines and the Threat of Ethical Nihilism” In Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press, Cambridge, MA

Brooks, Rodney A. 2002. “Flesh and Machines: How Robots Will Change Us”. New York: Pantheon Books.

Clynes, Manfred E. & Kline, Nathan S. 1960. “Cyborgs and Space”. Astronautics (September 1960).

Allen, Colin, Varner, Gary & Zinser, Jason. 2000. “Prolegomena to Any Future Artificial Moral Agent”. Journal of Experimental and Theoretical Artificial Intelligence 12 (3): 251-261.

Dennett, Daniel C. 1998. “Brainstorms: Philosophical Essays on Mind and Psychology”. Cambridge, MA: MIT Press.

Dennett, Daniel C. 2006. “Computers as Prostheses for the Imagination”. Invited talk presented at the International Computers and Philosophy Conference, May 3, Laval, France.

Gillett, Grant. 2006. “Cyborgs and Moral Identity”. Journal of Medical Ethics 32: 79-83.

Gunkel, David J. 2012. “The Machine Question: Critical Perspectives on AI, Robots, and Ethics”. Cambridge, MA: MIT Press.

Harnad, Stevan. 1991. “Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem”. Minds and Machines 1 (1): 43-54.

Lin, Tom C. W. 2013. “The New Investor”. UCLA Law Review 60: 678.

Lin, Patrick, Abney, Keith & Bekey, George (eds.). 2012. “Robot Ethics: The Ethical and Social Implications of Robotics”. Cambridge, MA: MIT Press.

Lippert, John. 2016. “The Ghosts of Baha Mar: How a $3.5 Billion Paradise Went Bust”. Available from: <http://www.bloomberg.com/news/articles/2016-01-04/the-ghosts-of-baha-mar-how-a-3-5-billion-paradise-went-bust> [6 January 2016].

Moravec, Hans. 1988. “Mind Children: The Future of Robot and Human Intelligence”. Cambridge, MA: Harvard University Press.

Naess, Arne. 1973. “The Shallow and the Deep, Long-Range Ecology Movements: A Summary”. Inquiry (Oslo) 16.

Oremus, Will. 2016. “Who Controls Your Facebook Feed”. Available from: <http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.single.html> [6 January 2016].

Searle, John R. 1980. “Minds, Brains, and Programs”. Behavioral and Brain Sciences 3 (3): 417-424.

Sessions, George. 1987. “The Deep Ecology Movement: A Review”. Environmental Review 11 (2): 105-125.

Sparrow, Robert. 2004. “The Turing Triage Test”. Ethics and Information Technology 6 (4): 203-213.

Turing, Alan M. 1999. “Computing Machinery and Intelligence”. In Computer Media and Communication, ed. Paul A. Mayer, 37-58. Oxford: Oxford University Press.

Veruggio, Gianmarco & Abney, Keith. 2012. “Roboethics: The Applied Ethics for a New Science”. In Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
Warwick, Kevin. 2003. “Cyborg Morals, Cyborg Values, Cyborg Ethics”. Ethics and Information Technology 5: 131-137.

Warwick, Kevin. 2012. “Robots with Biological Brains”. In Robot Ethics: The Ethical and Social Implications of Robotics, ed. Patrick Lin. Cambridge, MA: MIT Press.

Zizek, Slavoj. 2006. “Philosophy, the Unknown Knowns, and the Public Use of Reason”. Topoi 25 (1-2): 137-142.
