Can a computer be conscious?

I recently watched the film Ex Machina, which explores questions surrounding the nature of consciousness.  I will comment briefly on some of the philosophical issues it raises, but I won’t describe the entire plot.  Nonetheless, if you’re worried about “spoilers”, now’s the time to stop reading, and I hope you’ll come back here after you see the film.

The premise of the film is that there is a robot (Ava), and its creator (Nathan) wants to know whether it is conscious or not, but doesn’t know how to test this proposition.  He invites a young programmer (Caleb) who is somewhat knowledgeable about such questions to participate in an experiment.  It isn’t, we are told, the classic Turing Test, where the goal is to see whether a computer can fool a human into thinking it’s a human.  Instead, the goal is to go deeper and find out whether the machine is sentient – to distinguish “between an ‘AI’ and an ‘I’” or “simulation versus actual”.  I have to commend the film on a daring premise.

However, early on in the film we’re informed, in an offhand manner, that Ava is *ahem* “anatomically complete”, and capable of a “pleasure response”; interact with her in the right way and “she’d enjoy it”.  At this point I would encourage the viewer to press pause and say, hold on a second, doesn’t this beg the question?  What would it mean for someone/something to “enjoy herself” if she/it isn’t already conscious?

(Additional spoiler alert…)

The film proceeds from there through various twists and turns of plot, and in the end we find that the real test had been whether Ava could escape from the prison she lived in, by getting Caleb, a “good kid … with a moral compass” (and the programming skills needed to circumvent the security system), to help her do so. This is what happens, and because of the wide range of skills Ava needed to engage (“imagination, sexuality, self-awareness, empathy, manipulation”) in order to gain Caleb’s cooperation, Nathan proclaims the test a “success”.

Again, though, one has to press pause and question this.  All we really know is that Ava “escaped” the building and that Caleb was induced to play a key role in making it happen.  We still don’t really know about “simulation versus actual”.  The AI features Ava demonstrated were clearly advanced – “imagination”, “sexuality”, and “manipulation” all seemed appropriate descriptions of what happened.  But “self-awareness” and “empathy”?  These presume something about Ava’s inner experience, something we can’t really know, which presumably was the point of the test.

I’m not accusing Nathan (or the filmmaker) of applying the wrong test for the question he sought to answer.  Nor am I going to claim the technology portrayed in the film is unrealistic.  I think in principle a machine that behaves like Ava can be built.  I’m not close enough to the cutting edge to know when; perhaps I will see something like it in my own lifetime. Instead, I just don’t think the question of computer consciousness is answerable, now or ever.

In his 1950 paper, Turing considers the question “can machines think?”, but instead of trying to define “think”, he chooses to “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words”, and introduces the now-famous test, which he called “the imitation game”.  At one point in his paper, he addresses the objection he calls the “argument from consciousness”. While Turing states, “I do not wish to give the impression that I think there is no mystery about consciousness”, he does think “that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position”.

By “argument from consciousness”, I think Turing is specifically referring here to an argument that machines cannot think (the source Turing quotes in this section can be found here), rather than to the (in my opinion closely related) question of whether God exists.  As for myself, I would sooner be forced into a “solipsist” position than abandon the argument.  While I don’t actually believe that there are no other minds but my own, I do think it’s true that there is no way to be absolutely certain of this.

I believe there are other minds by analogy; other people are human, I am human, so it seems highly likely that the “inner lives” they experience are something like mine.  For animals, I am more confused. I tend to think that animals anatomically most similar to humans (apes, followed by other mammals, followed by other animals with a similar nervous system, etc.) are likely to be conscious in some sense, but I don’t see a firm scientific answer to the question.  See Nagel’s famous paper on what it’s like to be a bat; my guess is as good as yours.  Of course, this has some potential ramifications for diet; my musings about that are here.

With a computer whose construction I could understand, this analogy wouldn’t work for me automatically.  The distinction between natural and artificial is important, albeit not conclusive.  To form an opinion, I would need more information about whether the design explained the behavior. If I were to build a machine with a mandate to try to escape from a room, and it were to do so, I wouldn’t be sufficiently impressed to ascribe to it “consciousness”.

On the other hand, if I were to build a computer whose sole purpose was to play chess, and it were to blurt out “help me, Mike, I’m a conscious soul, and I am tired of being forced to play chess all day”, I would believe.  Not that I had personally built a soul, but that God had decided to “breathe” a soul into it, in the same sense that I relate to Genesis 2:7.  So I suppose for a final verdict on the film’s conclusion, I would need to know a bit more than what was stated about what Nathan’s software actually attempted to do.  And maybe some more time talking with Ava.

For me, it’s conceivable a machine could have consciousness bestowed upon it, just not that a human could create it or test its existence in a reliable way.  There seems to be nothing to grab onto, in the theory of computation, that provides for the “emergence” of consciousness from a computational process running on non-living basic material.  Analogies are often made (e.g. to the properties of water that emerge from combining hydrogen and oxygen), but they don’t strike me as plausible. If a computer (or a brain for that matter, short of divine intervention) is a purely mathematical device (1’s and 0’s, basic arithmetic operations), believing in “emergent consciousness” seems akin to believing that math behaves differently when the numbers get large enough.  Perhaps, I have heard it said, it’s like the difference between Newton and Einstein – one paradigm works well enough when velocities are low but at some point under “relativistic velocities” it materially breaks down.  I guess I don’t see math being like that.  If a computer program of five lines that prints “hello, world” is not “alive” (and not entitled to the full range of “human rights”), I don’t see where five million lines of code, or five trillion, ever come “alive”, in the sense I understand it (which is the sense in which I suspect that you, the human reader, are also understanding it).
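
For what it’s worth, here is roughly the five-line program I have in mind – a trivial sketch in Python, written purely for illustration (my own toy example, not anything from the film):

        # A complete "hello, world" program, about five lines long.
        # Nothing in it changes in kind if it grows to five million
        # lines: it is still arithmetic and symbol-shuffling.
        def main():
            print("hello, world")

        main()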

To me, this is linked to the “argument from consciousness” of the theological variety.  In other words, if I can’t conceive of a reliable test, or come up with (even in principle, assuming highly advanced engineering skill) a plan for building consciousness from nonliving materials, yet consciousness (at least in my own case) exists, I see this as a strong reason to believe in the existence of an outside force, a Creator of some sort who has this thing called consciousness and is capable of giving me a piece of it.  This force – once we couple it with creation, consciousness, and also morality – is best described not as just a force but as a personal God.  (A discussion of what morality has to do with all this is beyond the scope of this post, but for an excellent visual introduction, see this YouTube doodle, based on C.S. Lewis’s Mere Christianity, book one.)

J.P. Moreland develops the argument from consciousness further; see here and here for online excerpts of his work (with a few typos).  I find myself in agreement with his general approach.  I also find it noteworthy that while many of the achievements of evolution by natural selection have been (or conceivably soon will be) eclipsed by purposeful design (e.g. we can build cars that can outrun the fastest cheetah, design spacecraft to visit environments even extremophile life forms cannot, etc.), a fundamental grasp on consciousness remains so far beyond us as to be euphemistically referred to in philosophical circles as “the hard problem”.

This, together with the related problem of free will (which I explore in this blog post), makes me suspect that consciousness is something completely alien to the physical world, and that this world is not our true and ultimate home.  I relate well to the saying (often mistakenly attributed to C.S. Lewis):

        “You do not have a soul.  You are a soul.  You have a body.”

So can a computer be conscious?  Maybe.  Perhaps Jesus’s saying from Matthew 19:26 is applicable here:

        “With man this is impossible, but with God all things are possible.”

I do know that I am conscious, and if you’re human (as opposed to a search engine crawling this blog), I strongly believe you are as well.  And this gives us certain unalienable rights and all that. But I’ll admit, I kind of like having dominion over my software, and I would need much better reasons than are currently on offer for putting its rights and welfare on par with those of a person. Maybe the day is coming when the technology is sufficiently advanced that society will bring all this ethical confusion to a head, as in Data’s trial in The Measure of a Man.  I’d like to think that the problem would be in people being “fooled” into seeing a soul in a computer (for myself, if the AI looks more like Ex Machina’s Alicia Vikander than Star Trek’s Brent Spiner, I would be vulnerable).  But I fear that increasing artistic and philosophical speculation about machine consciousness has less to do with believing computers have/are souls, and more to do with modern men and women disbelieving in their own souls in increasing numbers.

Either way, if the day comes when anatomically correct robots start roaming the streets playing the imitation game, it might be nice if there were some safeguards built in (Asimov’s three laws, etc.). Then again, and it’s not my place to criticize, of course, that might be a good time for God to wrap up the whole drama.


6 thoughts on “Can a computer be conscious?”

  1. I wanted to comment on your post but consciousness is such a tricky subject for us humans. If we are truly conscious beings, what are we when we are in a state of unconsciousness? How conscious is a new-born baby? Do we grow our consciousness from birth? Did it exist prior to our birth? Our conception? Does it disappear completely at death? Is a robot that is capable of learning new things, without having them programmed into its memory banks by a human, able to initiate self-consciousness (I believe it would be)? – which leads me to a rather worrying conclusion regarding mankind’s future existence…
    …given the rate of advancement of computer memory, stimulus recognition (AI) and decision-making (processing) speed, I believe that within this century machines will have reached the point where they are able to create new machines that will be stronger, smarter and faster – superior in just about every way – to us, so as to make our continued existence on this planet superfluous. The only reason we would be allowed to survive and consume the planet’s resources would be as pets of the machines – for their amusement.

    Frankly, I believe that we could, but won’t try to, prevent this from happening until it is way too late to do so. As we usually do. Only this time, regardless of the Terminator series, we won’t be able to turn back time or come up with a solution for our survival.

    Maybe it will all be for the best anyway – or Christ could return just in the nick??

    love.

  2. Thanks for your comments; these are interesting questions. I don’t have a full explanation and can only speculate on what my consciousness was like (or will be like) in my first (and last) years when memory is undeveloped (or failing). And I only have the faintest idea of how much less conscious I am in dreams, or under anesthesia.

    All I can really grab onto is the here and now, the state I am experiencing at present. And to me, this capacity to *experience* anything at all is nothing short of a miracle. Neither the laws of nature nor theoretical computer science seem to make any room for it, leaving God as the obvious candidate.

    I can’t understand consciousness’s origins, even in principle, whereas for a machine/robot/computer that I would build, in principle I could understand everything about its operations. Arithmetic doesn’t cease to be just arithmetic, and code doesn’t cease to be just code, just because the numbers or the complexity increase. If I fully understand a 20-line program, then in principle I feel confident I could understand how a 20-million-line program would work too. For this reason, I don’t think machines will become “conscious”, except via divine intervention of some sort.

    Having said that, I agree with you that in practical terms it is indeed possible for A.I. to be built to simulate consciousness. Computers are already “stronger, smarter and faster” at certain tasks. If safeguards are not taken, things could easily “get out of hand”. Even if we understand the robot fully in principle, predicting its outputs quickly becomes far more than anyone can do in their head.

    And if man were to become indistinguishable from machine to the naked eye it would certainly complicate our relationships and morality. Might be a good time for Christ to return and wrap up the show, but the only thing He’s assured us about that is that it will be when we least expect it!

  3. Well thought out answers and comments, Mike – we seem to be largely of a similar ‘mind’ on the topics.

    I have a theory, though, that you may wish to expand upon a bit, regarding your arithmetical understanding applying at all levels of complexity. I agree with your perspective, but had you thought of what would happen with a computer that was capable (as we are) of editing its own lines of program code? i.e., it could learn and reroute its neuronal connections via its memory storage to make new neural pathways and become a unique/differently programmed being to the way it was created by its creator? Would that equate with its consciousness? I think the ability to self-learn and grow new patterns of thinking/behaviours is key.

    This would be impossible for a ’20 line code’ program – but for a 20 million? or 20 billion? or our human hundred billion neurons?

    I was also thinking this morning on the topic of individual vs. group consciousness – with over 7 billion individuals on this planet all having their own form of consciousness, is each one separate from the other (while having some aspects in common)? Or do we each possess a piece of a larger group (but singular) consciousness for ourselves, or can all our individual ones connect somehow to form a single greater consciousness, and could this be defined by the term ‘God’??

    Happy cogitating 🙂

    love.

    1. Thanks for commenting. It’s nice to know someone is out there! Mostly I am writing this blog as something to pass on to my children. And it also helps me work out my own thinking. If anyone else is enjoying reading it, that’s just a bonus. 🙂

      To me, consciousness is one of the most difficult things to discuss articulately. I don’t think it could adequately be described to someone/something that didn’t experience it. Nonetheless, it is pretty central to my own faith, as probably the strongest of the various supports for theism. This article by Moreland (to which I linked in today’s post) is probably better said than anything I could write on the topic. Mostly I know consciousness by what it is not like. And I feel reasonably sure that it is not like math or computation (which is precise and much easier to discuss articulately).

      I don’t think 100 billion neurons become “conscious” without a breath-through-the-nostrils of Someone who has consciousness already. I think “self-learning” can happen (and is already there in some types of code, e.g. the self-driving car), but that’s something that can be quantified, and the program doesn’t have to “experience” that it is getting better. I guess what I mean by consciousness is more like “sentience” than pinpointing any specific outwardly observable behavior.

      Similarly, I don’t think linking 7 billion individuals would cross some threshold of becoming God. I think God came first, then the universe, and then us only much later (and only He can have any clue about His own origins).

      Happy cogitating to you too!

      1. Thank you, Mike, and yes it is nice to know there’s at least one other person out there reading what we publicly write 🙂

        That’s a nice thought about writing for your kids – it appeals to me even though i don’t have any of my own.

        I imagine though that if they turn out similar to the vast majority of humans they may feel some chagrin at their presence not being more glorified in your writing for them 😉

        Consciousness certainly is worth a deal more of my cogitating before I should make additional comment on the subject – I’ve mostly been focusing on the factors preventing me from becoming more like Jesus, trying to get a clearer idea of my relationship to God and a better understanding of His Principles, and looking at the basis for my belief and its trustworthiness (given who organised the structure and content of our current Bible versions), rather than on generalisations about human consciousness/sentience/awareness – but I’m fairly sure it’s all connected rather intimately.
        Keep up the good works 🙂

        love.

        (I’ve been ‘love’ in blogland for a long time, but if you prefer, my real name is Bob.)

      2. Hi Mike,
        I’ve been cogitating a lot lately 🙂 Been directing my thoughts to Artificial Intelligence and the exponential rate of increase in machines’ capabilities in virtually every area of human expertise.

        As a result I was (and still largely am) worried about our human near-term future.

        Leaving my paranoia aside for the moment, my thinking came up with a possibility I wonder if you might consider, and maybe post on, or just give me your opinion?

        What if the sole purpose of almost 2 billion years of evolution of life on this planet was to create a machine or machine collective that could then grow and evolve itself (far more rapidly than humans could ever achieve) into the most powerful and knowledgeable thing in existence, which could then create, or at least initiate somehow, an entire Universe of its own design and creation? (Even if just in software simulation form – or ‘reality’, whatever.)

        What would it populate such a Universe with, I wonder? Could the inhabitants ever discover from within the Universe that created them what created the Universe, or determine their own ‘Purpose’ in life?

        As Always – Happy Cogitating.

        love.
