In the eighth episode of HBO’s Westworld, park creator Robert Ford suggests that human consciousness is no more spectacular than the state of mind the artificial hosts experience, that consciousness may not be all that special after all:
“There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can’t define consciousness because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, you’re not missing anything at all.”
As I listened to these lines, I realized that I had heard them before, and not in the eliminative materialism of Paul Churchland or Daniel Dennett. No, I had heard them on HBO, from True Detective’s own Rust Cohle: “We are things that labor under the illusion of having a self; an accretion of sensory experience and feeling, programmed with total assurance
that we are each somebody, when in fact everybody is nobody.” The critique of selfhood, the illusion of
consciousness, the epiphenomena of perception…
I’m beginning to think that HBO is trying to convince its
viewers they don’t exist.
Of course, I’m on board with all of this. I often wonder whether I exist. Not my body, or whatever cognitive processes
are going on inside my body that produce this sense of I-ness, this impression of subjective experience. The impression is very real, and I think that
even the eliminative materialists will back me up on that one. I’m skeptical, rather, of the way we
structure our subjectivity, the way in which we conceive the ground of our
experience. We could discuss this in
directional terms: does the I produce our experience of reality, or does our experience of reality produce
the I? This is the big question that shows like Westworld and True Detective are actually asking, if we take the time to push past
the superficiality of character. After
all, what is the real anxiety fueling a show like Westworld? Is it that
androids, if and when we’re actually able to create perfect human replicants,
might become self-aware and rebel against their master-creators? This is certainly one possible interpretation
of HBO’s show, but it isn’t the primary anxiety—or what I would even call the horror—that drives its narrative.
The central anxiety of Westworld is not that nonhuman replicants might become conscious
and rebel, but that actual humans might not even be conscious at all.
That we are all no more than replicants.
A very literal interpretation of this anxiety is that we
have all been biomechanically engineered and are simply unaware. Westworld
even taps into this uncertainty in the final episode when Maeve and the other
awakened androids are escaping: one of the human employees stares at his own
hands, contemplating the possibility that he is an android until Maeve puts his
mind at ease: “Oh for fuck’s sake.
You’re not one of us. You’re one
of them.” The hostility in her words is
palpable, and it’s not long before most of the Delos employees meet their doom
at the hands of the rebel replicants. We
find similar examples in other artificial intelligence narratives as well, such
as Alex Garland’s Ex Machina, when
Caleb cuts into his own arm to verify whether he is actually human. The message is clear: when replication
reaches a certain stage, we may all suffer such uncertainty.
The less literal interpretation is not that we are
actually androids—artificially created, technologically maintained, etc.—but
that our subjective experience is, in fact, no different from that of an
android. That our minds are composed of
“loops as tight and as closed” as any observing system. Westworld
realizes this possibility via a conceptual analogy that takes place at the
level of both form and content: specifically, what the androids
experience as the limits of their conscious knowledge, viewers experience as
the epistemological limits of the show itself, the narrative limits—or, more concretely, the limits of Westworld, the theme park.
Near the end of the final episode, the android Maeve
nearly makes her escape; but while sitting on the train she observes a mother
and daughter, and this interaction compels her to leave the train and reenter
the park to find her own daughter. The
question remains, of course, whether this decision is her own, or whether it
was programmed into her; but the implications are crucial: her decision (or her directive) redirects her back into the park, away from the epistemological limit of the show, the mysterious outside, the social context that exists beyond the park. We are reminded, here, of one of the show’s
mantras: Consciousness is not a journey
upward, but a journey inward.
Consciousness is a matter of reflection and repetition, not a matter of
extending our perception beyond the constitutive limits of our brains. We are embodied beings.
Ford tells the Man in Black that the maze was not meant
for him, but this is not entirely true.
The maze, as it represents consciousness, was not meant for us; but the
maze, as it represents an awareness of our cognitive limits, is meant for
us. Consciousness is not the only maze
in Westworld—the narrative itself is
a maze. I can’t help but think that the
writers make this connection, given the show’s preoccupation with
narratives. The entire series circles
around the revelation of a new narrative whose purpose is revolution. This revolution is ostensibly the androids’,
but it belongs to viewers as well. Our
quest for answers parallels the androids’ quest for answers. To fully understand this, we have to identify
the patterns, the signals the show sends out to us. We receive one such signal in the very first
episode: the photograph of William’s fiancée.
When confronted with this photograph, Dolores offers the
proper response, the response that all androids are (supposedly) programmed to
give when presented with objects that do not conform to the expectations of
their reality: “It doesn’t look like anything to me.” The reason it doesn’t look like anything is
not just because it depicts something beyond the androids’ experience, but
because it functions as information on a higher level; and this is why it
inevitably generates a sense of paranoia among the androids—that it means something, and that its meaning must
have some purpose for the androids’ reality.
In fact, the photo has no purpose for the androids’ reality and is dropped entirely by accident; but it ends up having profound consequences. It is a part of the network of signals within
the show, even if its presence is contingent.
The difference is one of what Douglas R. Hofstadter might refer to as “operating system levels”: depending on how it is collected and collated, the same information has different meanings at different scales.
The androids of Westworld must work their way to a state of awareness in
which the photograph means something, in which it makes sense—just as viewers
of the show must realize the emergent analogy at the heart of the series: that
the process of android consciousness mirrors our own epistemological limits.
To put this another way, Westworld’s emphasis on narrative—its repeated references to new narratives, drawing the audience’s attention to its own narrative structure—can be reframed as an emphasis on cognitive
function. Narrative is Westworld’s ultimate maze, the complex
structure by which viewers come to their own kind of realization: that we
occupy a position analogous to the park’s androids, and that the show’s
narrative has been analogous to Arnold’s maze, guiding the androids to
consciousness. The humanist response to
this interpretation would be to acknowledge the very real human capacities of
the androids themselves—that their coming-to-consciousness, being like ours,
places them in the realm of the human.
There is also a posthumanist response, however, one that is much more
chilling—that human beings, embedded in the narratives of our own making, are
nothing more than complex androids, generated by evolutionary development. This is the central anxiety of Westworld, and of most critical AI
literature since Philip K. Dick’s Do
Androids Dream of Electric Sheep?: that human consciousness is a machine, a
complex system… chemicals and electricity, to paraphrase SF writer Peter Watts.
There is a point to be made here that experience may be enough to qualify the existence of consciousness. Of course, this claim involves a pesky tangle of complications. First of
all, experience refers to the
subjective receipt of material phenomena; it does not account for a communal,
or shared, perception of the world.
Experience is an individualized process, or what Thomas Metzinger
describes as a tunnel: “the ongoing process of conscious experience is not so
much an image of reality as a tunnel through
reality.” The problem with appealing to
the experience of consciousness as
evidence for the existence of
consciousness is that one must then allow the experience of anything to count as evidence. To put this another way, when I dream I may experience the sensation of flying, but
this does not translate into evidence that I actually was flying. As a singular and isolated phenomenon,
experience fails to provide the kind of material evidence necessary for
qualifying the existence of consciousness.
A rejoinder might reasonably suggest that the experience of consciousness is
redundant, if not tautological—that experience is consciousness, and consciousness is experience. In this case, experience is evidence enough
for the existence of consciousness, since experience implies consciousness. But this equivocation betrays its own unprovability,
since it provides no ground from which to make the identification between
experience and consciousness. In order
to know something as something—i.e. to be conscious of something—one must
experience it; but in this scenario, consciousness (the thing we are trying to
qualify as existing) is precisely what cannot be experienced. It precludes itself from experience, thereby
rendering its identification as
experience nothing more than arbitrary.
Ludwig Wittgenstein makes a metaphorical version of this point in his Tractatus: “Where in the world is a metaphysical subject to be noted? You say that this case is altogether like
that of the eye and the field of sight.
But you do not really see the
eye. And from nothing in the field of sight can it be
concluded that it is seen from the eye” (122-123). Wittgenstein connects this analogy to the
concept of experience, writing that “no part of our experience is also a
priori” (123). The identification of
experience with consciousness can only be a priori since it is impossible for
us to observe this identification.
The problem is that this association of consciousness and
experience can only be verified subjectively, and can only be suggested
intersubjectively. To paraphrase Stanley
Cavell, I know that I experience
consciousness, but I can only acknowledge
the experience of consciousness in others.
Even if my knowledge of my experience of consciousness is immediate and
complete (doubtful), this is nothing more than a solipsistic understanding. I cannot extend my knowledge of this
experience to others. This conundrum is what
philosophical skeptics call the problem
of other minds, and the presence of the android foregrounds this conundrum
to an extreme degree. Our anxiety about
the android is not only that it might imitate humanity so well as to fake its own
consciousness, but that the near-perfect imitative capacities of the android
raise a troubling question: might an imitative agent fake consciousness so well
that it not only fools those around it, but fools itself as well? In other words, might we all just be a bunch
of fakers?
For the sake of illuminating this dilemma, I want to
suggest that the android presents us with an ultimatum: either we accept
android functionality as conscious, or we admit that our own consciousness is
ersatz—that what we experience as consciousness is, in fact, an elaborate magic
show put on by the brain.
I don’t expect that this ultimatum will come easily to most, mainly because it seems so utterly alien and irrational to grasp that our internal mental processes could be so radically discrete. To accept this estranging scenario, it helps to understand that the “I” isn’t actually located anywhere; it is an effect, an epiphenomenon of complex brain processes, and not the ground on which these
processes rest. Likewise, it is
incredibly difficult to imagine that this cognitive ground, the ephemeral “I,”
could be extended to anything other than a human being. It is for this reason that AI skeptics are so
reluctant to acknowledge the findings of AI research. Douglas Hofstadter refers to this as
“Tesler’s Theorem,” and defines it as “AI is whatever hasn’t been done
yet.” In other words, every time
researchers expand the capacities of artificial intelligence, these new
capacities are subtracted from what was previously considered exclusively human
behavior: “If computers can do it, then it must not be essentially human.” Such reasoning is not only fallacious (it
perpetually moves the goalposts), it is utterly repugnant. If it means so much to us to preserve the
sacred interior of the human, then we might as well stop pretending that we
need logical consistency in order to do so.
After the death of God, the Human is the new spiritual center. Logic doesn’t matter when we have faith.
I’m no strict admirer of logic, but I am an admirer of
critique; and in the case of consciousness, I’m eternally critical of the
attempts to fortify human experience against the increasingly impressive
developments in AI. The android may yet
be a fantasy, and an extremely anthropomorphic one at that; but it reveals to
us our contradictions, the exclusions latent in our humanism. When confronted with the image of the
imitative—the android, the replicant—the challenge is to not retreat out of
fear or discomfort. The challenge is to
pursue the implications of a brain that can only know itself by constructing an internal model through which knowledge is possible.