Journal of Cosmology, 2011, Vol. 14.
JournalofCosmology.com, 2011

Can Machines be Murdered?

Alex Miller Tate1, Rory Scott1, Andrea Eugenio Cavanna2,3
1School of Philosophy, University of Birmingham, UK
2Department of Neuropsychiatry, BSMHFT and University of Birmingham, UK
3Department of Neuropsychiatry, Institute of Neurology and University College London, UK

Abstract

This paper examines the hypothesis that deactivating a suitably powerful computer system could, theoretically, constitute murder. The paper is divided into two sections. Section I outlines a case for the possibility of a genuine synthetic consciousness. Initially it is established that, given the Closure of Physics and the Unity of Nature, the human mind should be seen as an ultimately physical entity, in a sense identical to the brain. Since all mental events (including conscious experiences) must therefore be realised by physical events in the brain, it follows that if those physical events are replicated in a synthetic brain, a synthetic mind and consciousness will result. Some possible objections to the possibility of conscious machines are considered. Section II progresses from this, positing that the existence of consciousness is the only criterion by which a being can be judged to have a right to life. This is in agreement with the argument that continuity of consciousness is the measure which ensures a person retains the same rights over time. If this is the case, it must be accepted that this consciousness stands alone and that its genesis is irrelevant to its moral status. It follows that a being possessing a synthetic consciousness would have the same rights as a being possessing a natural consciousness. This leads to the conclusion that deactivating such a machine would qualify as murder, a scenario which might become relevant in the not so distant future.

KEY WORDS: Consciousness; Artificial intelligence; Identity theory; Physicalism; Life.



Detective Del Spooner: Human beings have dreams; even dogs have dreams, but not you. You are just a machine, an imitation of life; can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?
Robot: Can you?
--(I, Robot, 2004)

1. Defining Consciousness, the Right to Life and Artificial Intelligence

With recent advances in robotic mobility (Seeni et al., 2010), voice recognition (Junqua & Haton, 1995), speech production (van Santen et al., 1997), and visual perception, including the ability to recognize faces (Brunelli and Poggio, 1993) and facial emotion (Ren, 2009), it is only a matter of time before a computer is created which can combine, associate, and assimilate these modalities and achieve consciousness. Consciousness may be followed by self-awareness, and thus self-consciousness. Might a self-conscious machine also have emotions, and a desire for self-preservation? Coupled with its ability to reason, analyze, and compute, might a self-conscious machine seek to protect itself from danger and threats to its own existence? Might a machine contemplate murder? By the same token, might humans also feel endangered? And what would happen if a human decided to disable a self-conscious machine? Can machines be murdered?

Throughout this paper, we align our definition of consciousness with Block’s ‘access’ consciousness (awareness of the self and the environment which informs, and is informed by, other cognitive functions, as opposed to phenomenal consciousness: Block, 1995) and Levy & Savulescu’s ‘extended and self-referential mental states’ with representational content (Levy & Savulescu, 2009, p. 368). ‘Representational content’ in this context refers to an interpretation of information acquired through phenomenal consciousness. A typical example is a reflection on why something is painful, as contrasted with the actual experience of pain. We reject the notion that a simple response to stimuli, or even the display of immediate preferences, is sufficient to constitute full, morally significant consciousness. We insist, by contrast, that "very sophisticated cognitive abilities such as an ability to conceive of oneself as a being persisting through time" (Levy & Savulescu, 2009) are what constitute ‘consciousness’ for the purposes of this paper.

Another concept that needs a clear definition, to avoid possible ambiguity, is ‘the right to life’. What we mean by this term is the moral status of a being such that a considerable and important mitigating condition would be required to justify the cessation of its life. A lack of this right to life can be seen in most, if not all, non-human animals – the justification for ending their lives (painlessly) can be trivial by comparison with the justification required for ending the lives of beings that possess the intended ‘right to life’. A convenient way of summarising this is to consider whether the ending of that being’s life would constitute murder.

Finally, a brief introduction to Artificial Intelligence (AI) seems appropriate. The development and application of AI has been the subject of much scientific investigation: speech recognition, game playing, logistics and robotics are all areas of contemporary investigation and advancement (Russell & Norvig, 1995, pp. 28-29). We believe that the speed and degree of this advancement warrants serious and immediate ethical discussion. Although many authors have conjectured about the possible moral responsibility of instances of AI (Asimov, 1950; Wallach & Allen, 2009; Sparrow, 2007), far fewer have considered the responsibilities humans may have towards them.

2. Section I: The Consciousness of Synthetic Minds

The aim of this section is to make a convincing case for the idea that a sufficiently advanced machine/computer/robot would experience the phenomenon of consciousness in the same way a human does. In order to do this, a case for a broadly physicalist philosophy of mind will be made. More specifically, we will argue for a version of the mind-brain identity thesis. From here, we shall posit that, accordingly, a machine of sufficient processing power would not only emulate our communication of consciousness, but also itself experience consciousness indistinguishable from yours or ours. Furthermore, we shall assert that such an occurrence is not merely plausible, but also likely.

2.1 Physicalism and Artificial Minds

In order to assert that a sufficiently powerful computer or instance of Artificial Intelligence would be conscious in the same way as humans can be said to be conscious, we must argue that a computer’s "mind" (or whatever constitutes its cognitive faculty) could be said to house processes that are directly comparable to those of the human brain.

Importantly, it can be asserted that the processes involved in the functioning of an artificial mind would be physical in nature, as it seems highly implausible to suggest that a scientist or engineer could construct an immaterial mind or "soul" of the kind a dualist claims is possessed by humans. Hence a dualist philosophy of mind is not compatible with the idea that a machine could experience consciousness as we do, as the mind of a human, by this account, is such that it could never be replicated physically - it is something "other".

Therefore, the assertion of the possibility of a human-like consciousness in a computer is best supported by a physicalist account of the human mind. The most accurate physicalist account of mind is known as the mind-brain identity thesis, which is presented in the following section.

2.2. The Mind-Brain Identity Thesis

The Mind-Brain Identity Theory states that "…descriptions of our mental states […] and some descriptions of our brain states […] are in fact descriptions of the very same things" (Carruthers, 2004, p. 148). What this means is that any mental state (for instance, a thought) is identical to some physical state of the brain. This account does not imply that mental or conscious states are dependent on the constituent atoms themselves, but rather on their mutual relationships within the arrangement of the brain.

This thesis also holds that our terms for mental events and some of our terms for neural events which take place in the brain are just different ways to conceive of the same physical events (Carruthers, 2004, pp. 148-9). For instance, pain does not literally mean a "stimulation of neural fibres", but both terms can refer to the same event (which is most commonly referred to as the sensation of pain).

No absolute argument can be made to prove that mental events must be ultimately realised in physical events in the brain, but a very convincing case can be made. Carruthers puts forward the following two premises to support such a conclusion:

"(1) It is a successful methodological assumption of science that non-physical events cannot cause physical ones…

(2) It is a successful methodological assumption of science that higher level events and processes in nature must be realized in lower-level (ultimately physical) ones…" (Carruthers, 2004, p.151)

The truth of (1) and (2) above is, while not assured, very strongly suggested by the fact that they are successful methodological assumptions of natural science. Physicalistic science has, for centuries, been making excellent progress using these assumptions. It seems, therefore, reasonable to accept their accuracy.

Statement (1) simply suggests that it is highly problematic to see how any causal link could be established between non-physical and physical events or processes. Given our knowledge of the physical world, there seems to be no room "for a distinct and independent psychological level of nature" (Carruthers, 2004, p.150). In short, it is reasonable to believe that the structure of physics is closed (i.e., no non-physical processes exist which can affect the acknowledged physical laws of nature).

Statement (2), in turn, asserts that nature is "layered into a unitary system of laws and patterns of causal organization" (Carruthers, 2004, p.151). On this picture, the fundamental "layer" of causal organization is fundamental physics. Higher-level processes (the laws of chemistry, biology, mental events, etc.) can be reductively explained by physical processes. It follows that "in accordance with this layered picture of nature, we should expect […] the ‘laws’ of operation of the human mind to be realized in those of neurology […] physical events in the brain." (Carruthers, 2004, p.151).

In brief, it seems logical to conclude that our experience of "mind" (i.e. mental events, including consciousness) is identical, in the sense explained above, to physical events in the brain. The two phenomena are one and the same. From this, it can reasonably be asserted that the material from which any brain (and therefore mind) is constructed will play no role in that brain’s ability to produce consciousness. In other words, if the physical events of the human brain are replicated, and the experience of consciousness is, in a sense, identical to these physical events, then it follows that the experience of consciousness produced by a synthetic brain will be identical to the experience of consciousness produced by the human brain.

2.3 Objections and Rebuttals

A possible objection at this juncture comes from a neuroscientific/computer-science perspective. It could be argued that the physical events which realise a human mind are not analogous to the type of physical events (or processes) which are used to programme and control computers. For instance, while computers work in an ultimately discrete programming language (e.g. binary), the human mind seems to work in a far more continuous (or nuanced) way, despite some notable exceptions (e.g. the sleep-wake cycle). It has been shown that it is not only neuronal electrical discharges in the brain (working in a binary fashion) which control its functions, but also chemical (even gaseous) "messengers". The maintenance of consciousness relies on the neurophysiological and neurochemical integrity of a distributed network which is in a functionally active "default mode" state (Cavanna et al., 2011). It seems, therefore, implausible to suggest that a discretely programmed digital brain would be identical to a human brain and, therefore, produce a conscious mind in the same way.

The argument against this objection is twofold. Firstly, there is no reason to suspect that every single event in the brain forms a necessary part of our experience of consciousness. In fact, given the quantum phenomena that have been observed in the brain, it seems highly likely that a number of neural events are unnecessary by-products of an (arguably inefficient) biological brain. In other words, although the human brain may be an analogue system whilst computers are digital, there is no reason to believe that the variables affecting the functions of the human brain could not be sufficiently "discretised" for the mechanical result to work in a way that was, to all intents and purposes, indistinguishable. It seems likely that such a mind would still be conscious.
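To illustrate the discretisation point, consider the following minimal Python sketch (our illustration, not part of the original argument; the sigmoid curve is an arbitrary stand-in for a continuous neural response). A continuous, "analogue" signal is quantised onto progressively finer digital grids, and the maximum deviation from the original shrinks with every added bit, suggesting that a digital approximation can in principle be made arbitrarily close to the continuous process it replicates.

```python
import numpy as np

def continuous_response(x):
    """A stand-in for a continuous ('analogue') neural response curve."""
    return 1.0 / (1.0 + np.exp(-x))  # sigmoid

def quantise(values, bits):
    """Round values in [0, 1] onto a uniform grid with 2**bits levels."""
    levels = 2 ** bits
    return np.round(values * (levels - 1)) / (levels - 1)

x = np.linspace(-6, 6, 10_000)
analogue = continuous_response(x)

for bits in (2, 4, 8, 16):
    digital = quantise(analogue, bits)
    # Maximum deviation between the analogue curve and its discretised copy.
    print(f"{bits:2d}-bit quantisation: max error = {np.abs(analogue - digital).max():.6f}")
```

The error halves with each additional bit, so "sufficiently discretised" is, on this toy picture, purely a matter of resolution.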

Secondly, and perhaps more convincingly, the general conception of how computers are programmed is only partially correct. It is sometimes imagined that computers can only be programmed in a discrete manner. In fact, this has never been the sole method of computation. There exist analogue computers, both mechanical and electrical, which have existed for millennia and have been used to replicate the continuous aspects of physical phenomena. The earliest known example is believed to be the Antikythera Mechanism, a device thought to have been used to calculate astronomical positions (Freeth et al., 2006). The idea generally fell out of fashion, not because it was unsuccessful, but because the capacity required for such processes was necessarily much higher than for discrete programming.

We are not claiming, necessarily, that computers as we know them could become conscious, but merely that synthetic minds are logically possible and that, given the effort currently being invested in producing them, a breakthrough in the not so distant future is practically inevitable. Such a breakthrough could well come via a renewed interest in analogue computing which, given sufficient programming space, could certainly replicate continuous processes such as those found in a human brain. Given the rate at which the possibilities of computer programming improve, the replication of the human mind seems to rest simply on a matter of complexity - an issue which, given current rates of progress, is likely to resolve itself over time.

Another possible objection to the Mind-Brain Identity Thesis comes in the form of the Chinese Room Argument (Searle, 1980). Searle presents us with the following scenario. Suppose a man who does not understand the Chinese language is alone in a room and is given passages of Chinese text through an "input" slot in one wall. The man is equipped with a rulebook which tells him, purely on the basis of the shapes of the symbols, which Chinese characters to write in response to any string of characters he receives. He applies these rules to the passages given to him and posts his replies through an "output" slot in another wall: to an outside observer the room appears to understand Chinese, yet the man inside understands nothing. Searle posits that this is what computers are doing when they process information. Systems working in this way, according to Searle, are not conscious, just like the man who does not understand Chinese. They are merely applying a rule to an input in order to arrive at an output, and this is not equivalent to conscious thought. In order to come to the conclusion that computers could never be conscious, one would need to employ what Hauser refers to as a "bridging" argument:

"1. Brains cause minds (mentality)

2. Anything else that caused minds would have to have causal powers at least equivalent to a brain

3. Computation alone does not cause minds

Therefore,

4. Something else about the brain causes mentality

5. Digital electronic computers lack this something else

Therefore,

6. Digital electronic computers don’t have mentality" (Hauser, 2002, p.125).

Many theories have been suggested which contradict Searle’s claims against the possibility of conscious machines (e.g. Moravec, 2000). Our responses are again twofold. Firstly, we agree with Hauser when he states that (5) is "insupportable unless we’re told what this additional something is and wherein computers are supposed to be lacking it" (Hauser, 2002, p.125). This alone seems to remove any immediate threat this argument poses to our position.
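Before turning to our second response, it is worth seeing how little machinery Searle's scenario actually requires. The toy Python sketch below (our illustration; the rulebook entries are invented placeholder greeting/reply pairs, not a real dialogue system) reproduces the room's behaviour as pure symbol manipulation: a lookup from input shapes to output shapes, with nothing resembling understanding anywhere in the program.

```python
# A toy rendering of Searle's Chinese Room: blind symbol manipulation.
# The rulebook entries are invented placeholders (a greeting and a
# weather question, each mapped to a canned reply).
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",       # these input symbols -> these output symbols
    "今天天气如何？": "今天晴朗。",
}

def chinese_room(input_symbols: str) -> str:
    """Apply the rulebook to the input; no meaning is consulted anywhere."""
    return RULEBOOK.get(input_symbols, "对不起，我不明白。")

# To an outside observer the reply looks fluent, yet the program
# (like Searle's man) never consults what the symbols mean.
print(chinese_room("你好吗？"))
```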

Moreover, given what we have already stated about physical events, we would suggest that it is correct to broaden the term computation to encompass the physical events and processes in the human brain. All we are suggesting is that the functionality inherent in the human brain that results in consciousness is ultimately reducible to four elements (rendered as a code sketch after the next paragraph):

The arrangement of constituent parts (Schemata)

Input (Stimuli)

Processing (Thoughts)

Output (Physical Events)

Although this is a substantial simplification, we believe it is more than sufficient as a reductive explanation for the occurrence of consciousness. In doing this we also challenge the third premise of the argument and, by association, 4-6.
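As a minimal sketch of this four-element reduction (our illustration; the weights and functions are arbitrary stand-ins, not a model endorsed in the literature), the schema below is a fixed arrangement of parts realised as a processing function, stimuli flow in, "thoughts" are the intermediate processing, and the result is an output event:

```python
from typing import Callable, Sequence

# Toy rendering of the four-element reduction above. The 'schema' --
# the arrangement of constituent parts -- is realised here as a fixed
# processing function; all numbers are arbitrary stand-ins.

def make_mind(schema: Callable[[Sequence[float]], Sequence[float]]):
    def step(stimuli: Sequence[float]) -> list:
        thoughts = schema(stimuli)              # Processing (Thoughts)
        return [round(t, 3) for t in thoughts]  # Output (Physical Events)
    return step

# Arrangement of constituent parts (Schemata): a simple weighted sum.
weights = [0.2, 0.5, 0.3]
mind = make_mind(lambda s: [sum(w * x for w, x in zip(weights, s))])

print(mind([1.0, 0.0, 2.0]))  # Input (Stimuli) -> [0.8]
```

Nothing here depends on the substrate carrying the schema, which is precisely the point of the reduction.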

2.4 Synthetic Minds

We therefore conclude that a synthetic brain, performing the same physical functions as the human brain, is a logically possible, and indeed likely, occurrence over time. Furthermore, such a synthetic brain, given the fact that the human mind seems to be realised in purely physical events, would give rise to a synthetic mind. Therefore, if it were true that a human mind produces consciousness, it would appear correct to assert that a synthetic mind would also produce consciousness. The implications of this shall be discussed in the next section.

3. Section II: The Relationship Between Consciousness and the Right to Life

In this section we will explore the requisite elements of a moral patient with the right to life. In light of our initial definition of consciousness, it should be noted that we are establishing the toughest possible criteria for said ‘right to life’. This is done with the intention of testing the hypothesis in question to the highest possible standard. We will consider our specific model of consciousness as the key element and highlight its supersession of any other factors in moral deliberation (Levy and Savulescu, 2009). Finally, using an example that postulates a form of artificial intelligence, we will synthesise these assertions into a context relevant to the core question of this paper. We will conclude that, if the premises of this paper are true, then the type of machine outlined would indeed have the right to life.

3.1. The dependence of the right to life upon consciousness

The first step in evaluating an entity’s potential right to life is to establish the criteria for possession of said right. According to Levy and Savulescu (2009), the only measure by which one can establish such a right is the presence of consciousness. It could be argued that the potential for consciousness is also a legitimate criterion for the right to life, a view endorsed, for instance, by anti-abortion movements. Neurological patients in a persistent vegetative state or epileptic black-out would represent a form of interrupted consciousness (Cavanna and Monaco, 2009; Cavanna et al., 2010), as opposed to potential consciousness, and infants would be an example of rudimentary, but already existent, consciousness.

In the most basic sense, bearing in mind what has already been established in our defence of physicalism, a human’s consciousness is the only permanent feature of their existence. This is most easily justified by reflecting on how frequently the atoms in our bodies are replaced by other atoms. It is in fact known that one’s body undergoes a complete renewal of all of its constituent atoms many times over in the course of an average human life (Carruthers, 2004, p. 192). When this idea of total material renewal is applied to the question of moral entitlement, it becomes clear that physical continuity cannot be the criterion for moral judgements. A hypothetical example could be as follows. Suppose that a man raped a woman and was not brought to trial until 20 years after the crime was committed (we will assume that any mitigating factors are absent in this case). The defence might argue that neither the woman nor the man is the same person that was involved in the incident 20 years prior, on the grounds that their physical composition is now entirely different. In a sense this would be true. We assert, however, that this would not absolve the man of moral responsibility, or deprive the woman of moral entitlement, i.e. punishment or delivery of justice, respectively. Moral status is not dependent on the constituent material, but on the continuity of consciousness, as it is the only thing that persists in the interim.

A possible objection to this would be to assert that consciousness is dependent on physical continuity and therefore it would be incorrect to assume that said physical continuity is not relevant. However, having established that consciousness is dependent on physical events, we can reasonably posit that a single, perennial incarnation of a legitimate form of consciousness is the only prerequisite for the right to life, and that physical continuity is necessary only insofar as it facilitates said consciousness. Crucially, the actual matter that constitutes the being in question is not significant.

3.2. The continuity of consciousness

In order to fully assert that a machine could possess the same right to life as a human being, we will explore a possible route to reaching a suitable instance of synthesised consciousness. This will result in the conclusion that the origin of consciousness is not relevant to its ethical status.

We propose the following scenarios, which are dependent on the continuity of consciousness and identity. Say that (a) Alex, co-author of this paper, has lost a mental faculty due to a faulty part of the brain, and has had it replaced with a ‘synthetic’ (robotic or biological) surrogate. (b) All of Alex’s mental faculties were incrementally replaced with similar surrogates of the respective parts of the brain, so he would then have a fully functioning artificial brain. (c) Finally, he has these surrogate parts replaced by some that are biologically identical to those that he originally possessed, albeit not made with the same actual atoms.

In the case of (a), it would seem natural to say that this being is still Alex, as it retains the memories, long and short-term preferences, and everything else that constitutes consciousness (insofar as we have defined it). Although in principle we should be able to extend this conclusion to (b), there may still be an objection claiming that at the moment the last original part of Alex’s brain is removed, all that made his consciousness is lost. There are various responses to this objection, which is worth examining in light of (c). Arguably, it would be fair to assert that the consciousness found in the Alex resulting from (c) would not only be identical to that of pre-(a) Alex, but would in fact be the same consciousness. It might be objected that this is not the case, as the material which constituted him is now different at the most fundamental level. This is where the renewal of the body’s (and brain’s) atoms becomes crucially relevant. If we accept that the woman mentioned above is the same woman who was raped 20 years prior, on the grounds that her consciousness and identity are unchanged, then we must accept that post-(c) Alex is the same as pre-(a) Alex on the same grounds. This is because the physical events which constituted his consciousness have never changed (although there are forensically relevant cases where these physical events do change: diminished mental capacity due to the effects of substances, trauma, ageing, etc.). Of course, the physical components have changed equally in both examples. What should be recognised from this is that for (a) and (c) to result in the same consciousness, there must have been continuity throughout, and that, therefore, (b) must have entailed this same consciousness, complete with the same right to life.
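The structure of this thought experiment can be sketched in a few lines of Python (our illustration; the "parts" and the equivalence probe are hypothetical stand-ins). Each faculty is modelled as a function, and the brain's input-output behaviour is checked after every single replacement, mirroring the claim that continuity is preserved at each step of scenario (b):

```python
# Sketch of the incremental-replacement scenarios (a)-(c) above.
# Each 'part' is a hypothetical stand-in for a mental faculty; two
# brains count as functionally identical here if they agree on a probe.

def original_part(x):       # a biological faculty
    return x + 1

def surrogate_part(x):      # a synthetic surrogate with the same function
    return x + 1

brain = [original_part] * 5
probe = 3
baseline = [part(probe) for part in brain]   # pre-(a) behaviour

for i in range(len(brain)):                  # scenario (b), one part at a time
    brain[i] = surrogate_part
    # Continuity check: behaviour is unchanged after every replacement.
    assert [part(probe) for part in brain] == baseline

print("All parts replaced; input-output behaviour unchanged throughout.")
```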

3.3 So, can machines be murdered?

It is now necessary to synthesise what we have established about the human mind, the possibilities of artificial consciousness, and their relationship with the right to life. Given what has been established about the likelihood of some realisation of an artificial consciousness, we submit that a discussion of the rights of conscious Artificial Intelligence could be of relevance to future students of ethics. That a synthetic brain will be able to replicate all physical events that take place in the human brain, coupled with the fact that the mind and brain are (in the sense stated above) identical, strongly implies that consciousness possessed by a machine will be identical to that possessed by a human. The only foreseeable objection would be that there are discrepancies between the geneses of these instances of consciousness and that this would somehow invalidate the rights of artificial consciousness. Although we would be tempted to refute the idea that one could identify a morally superseding genesis (which would be fair), our objection is considerably more fundamental. We strongly assert that moral subjects cannot be evaluated (and thus granted or refused the right to life) on the basis of their genesis, and that doing so would represent an injustice.

In summary, we submit two assertions. The first is that, upon accepting a physicalist philosophy of mind and the principle of causal closure of physics ("nothing immaterial can have any causal efficacy upon the material world": Kile, 2008), coupled with the rate of technological advancement, one would have to accept that artificial consciousness is not far from inevitability. The second is that any form of sufficiently sophisticated and self-aware consciousness that is identical to ours warrants the same right to life as ours does. If and when an instance of artificial consciousness, with all of the qualities stipulated above, is actually realised, there will be no moral option other than to recognise that a non-consensual, permanent deactivation of it would constitute murder.

Acknowledgements

We would like to acknowledge our Knowledge and Reality lecturer Nikk Effingham, who unwittingly, through his pub-situated talk on time-travel, paved our meandering path of fascination, which (via Nick Bostrom) would eventually lead us to AI. Professor Ann Logan, Head of the Section of Neuroscience at the University of Birmingham, was highly instrumental in facilitating much of the support we received. John Barnden, Professor of AI at the University of Birmingham, was not only remarkably helpful and generous with his time, but enduringly patient with our rudimentary understanding of his subject areas, and invaluable to the fruition of this paper. Several other members of the neuroscience community who took a keen interest and certainly left their mark on this paper include Richard Blanch, Deborah Gordon and Emil Toescu.




References

Asimov, I. (1950), I, Robot. New York, Doubleday.

Block, N. (1995), On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–287.

Brunelli, R., and Poggio, T. (1993), Face recognition: Features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10), 1042-1052.

Cavanna, A.E., Cavanna, S., Servo, S., and Monaco, F. (2010), The neural correlates of impaired consciousness in coma and unresponsive states. Discovery Medicine, 9, 431-438.

Cavanna, A.E., and Monaco, F. (2009), Brain mechanisms of altered conscious states during epileptic seizures. Nature Reviews Neurology, 5, 267-276.

Cavanna, A.E., Shah, S., Eddy, C.M., Williams, A., and Rickards, H. (2011), Consciousness: A neurological perspective. Behavioural Neurology, in press.

Carruthers, P. (2004), The Nature of the Mind. London, Routledge.

Freeth, T., Bitsakis, Y., Moussas, X., et al. (2006), Decoding the ancient Greek astronomical calculator known as the Antikythera Mechanism. Nature, 444, 587-591.

Hauser, L. (2002), Nixin’ goes to China. In: John Preston and Mark Bishop (Eds), Views into the Chinese room: new essays on Searle and artificial intelligence, Oxford, Oxford University Press, pp. 123-143.

Junqua, J.-C., and Haton, J.-P. (1995), Robustness in Automatic Speech Recognition: Fundamentals and Applications. Kluwer Academic Publishers.

Kile, J. (2008), The causal closure of physics: An explanation and critique. World Futures: The Journal of General Evolution, 64, 179-186.

Levy, N., and Savulescu, J. (2009), Moral significance of phenomenal consciousness. In: Steven Laureys, Nicholas D. Schiff and Adrian M. Owen (Eds), Progress in Brain Research, Vol 177, Coma Science: Clinical and Ethical Implications, pp. 361-370.

Moravec, H. (2000), Robot: Mere Machine to Transcendent Mind. Oxford, Oxford University Press.

Ren, F. (2009), Affective information processing and recognizing human emotions. Electronic Notes in Theoretical Computer Science, 225, 39-50.

Russell, S.J., and Norvig, P. (1995), Artificial Intelligence: A Modern Approach. New Jersey, Pearson.

Searle, J. (1980), Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.

Seeni, A., Schafer, B., and Hirzinger, G. (2010), Robot mobility systems for planetary surface exploration – State-of-the-art and future outlook: A literature survey. In: Arif TT (Ed) Aerospace technologies advancements. Intech Publ., pp. 189-208.

Sparrow, R. (2007), Killer Robots. Journal of Applied Philosophy, 24, 62-77.

van Santen, J.P.H., Sproat, R.W., Olive, J.P., and Hirschberg, J. (1997), Progress in Speech Synthesis. Springer.

Wallach, W., and Allen, C. (2009), Moral Machines: Teaching Robots Right from Wrong. Oxford, Oxford University Press.


