“The
brain is already a sleight of hand, a massive, operationalist shell game. It designs and runs Turing Tests on its own
constructs every time it ratifies a sensation or reifies an idea. Experience is a Turing Test – phenomena
passing themselves off as perception’s functional equivalents.”
~Richard
Powers, Galatea 2.2
Is intelligence distinct from the unconscious and
non-intentional processes that underlie cognitive function? When we administer IQ tests, do we actually
measure intelligence? Is intelligence “liftable,” to borrow a term
from Douglas Hofstadter – can it be “lifted” from one substrate and installed
into another – or is it system-specific, contingent upon the parameters
dictated by a formal system? Is
intelligence a particular quality of
an organism or system, isolatable to the conventional embodied structure of a
given formal system, or is it an effect of the organism/system’s relationship
to its environment – of its own recursive structure? Is intelligence the ability to operate logically
within a given set of axioms, or is it the capacity to produce novel axioms via
the pursuit of isomorphic patterns with the external world – through tools,
instruments, media, the interfacial film that reflects its existence back to
it…?
These are the ridiculous and probably ill-conceived
questions that I grapple with, but the more I explore these ideas in the
appropriate literature (fiction and nonfiction), the more I’m convinced that I’m
not entirely crazy.
In short, I’m interested in the following questions: when
we talk about intelligence in humans, do we conflate intelligence with
consciousness? Are consciousness and
intelligence related – and if so, how?
Does intelligence mean consciousness; or, alternatively, does
consciousness mean intelligence? These
questions require so many qualifications that it’s impossible to give
straightforward yes or no answers, but the inquiry into these indeterminacies
is as rewarding (if not more so) than any definitive answer we could hope to
give. So, here is a very preliminary,
and very amateur, attempt to
foreground some of these questions. Fair
warning: I’m a literature PhD; I have no formal training in neuroscience, cognitive
science, computer science, artificial intelligence, or the like. But I like to think I’ve encountered a fair
amount of critical examination of these questions in my studies – enough at
least to warrant a random blog post on the internet. So, here goes.
First, a disclaimer: if there’s one compelling notion
that cognitive studies has illuminated, it is that the postmodern anxiety over logical grounds may not be so
ill-founded after all. Logical ground
entails a premise, formula, or axiom that requires no prior proof. In the wake of Gödel’s incompleteness
theorems, the promise of logical grounds seems asymptotically distant – barely
visible on the horizon, and never quite in reach. Yet we have to begin somewhere, inaugurating
a perpetual series of rejoinders from our critics, who cry in dismay, “But
you’re assuming ‘x’!” And they’re right. I am making assumptions. I’m making assumptions all over the place;
but then, I’m fine with being an ass.
My primary assumption is that consciousness is, quite
obviously, not a centrally organized or isolatable phenomenon. Consciousness is an effect, rather, of brain
processes far too complex to rationalize via consciousness itself. In other words, consciousness is an escape
mechanism: a way of avoiding the supreme complexity of what exactly is going on
inside our heads. To give you an idea of
the kind of complexity I’m talking about, I’ll refer to one of the wizards of
modern sociobiology, Edward O. Wilson.
In his now famous book, Consilience:
The Unity of Knowledge (1998), Wilson offers the following description of
the processes that underlie conscious thought:
Consciousness
consists of the parallel processing of vast numbers of […] coding
networks. Many are linked by the
synchronized firing of the nerve cells at forty cycles per second, allowing the
simultaneous internal mapping of multiple sensory impressions. Some of the impressions are real, fed by
ongoing stimulation from outside the nervous system, while others are recalled
from the memory banks of the cortex. All
together they create scenarios that flow realistically back and forth through
time. The scenarios are a virtual
reality. They can either closely match
pieces of the external world or depart indefinitely far from it. They re-create the past and cast up
alternative futures that serve as choices for future thought and bodily action.
(119-120)
In other words,
consciousness provides the human subject with a virtual experience of the
material world; it is an interface
between interior qualia and exterior phenomena.
It is the internal form that our relation to the world takes: “Conscious
experience,” as Thomas Metzinger says, “is an internal affair” (21). It is the way we imaginarily fashion the
nonhuman world.
Proceeding from this initial premise, we can arrive at a
quite obvious conclusion: that consciousness
does not necessarily imply intelligence,
at least as intelligence is defined by IQ tests. If consciousness implied intelligence, then
there would be no need to test for intelligence among obviously conscious human
subjects. Rather, IQ tests would reflect
disparities in intelligence between humans and other species – apes, birds,
insects, etc., but not disparities in intelligence among humans, unless we are
willing to admit that some human beings are not conscious. These human beings would be, according to
philosophical tradition, zombies: “a
human who exhibits perfectly natural, alert, loquacious, vivacious behavior but
is in fact not conscious at all, but rather some sort of automaton” (Dennett, Consciousness Explained 73). According to this definition, however, a
philosophical zombie would yield intelligent results despite being a non-conscious entity. It would exhibit intelligent behavior. How are we to square this? If an IQ test assumes consciousness on the
part of its examinees, how can a non-conscious entity exhibit intelligent
behavior? Are we forced to admit that
such behavior is not actually intelligent, but merely superficially intelligent
– like a group of monkeys who happen to write The Tragedy of Hamlet, Prince of Denmark? Such a claim constructs intelligence as a
substance, something that can be genuinely isolated and identified, and can be
opposed to false intelligence –
seemingly intelligent behavior that actually lacks the rudiments of
intelligence itself, intelligence tout
court. But we cannot do this – all
we have to judge intelligence by is
behavior, not by any privileged access to an interiority that exposes its
immediate authenticity.
IQ tests operate on the assumption that intelligence is a
genuine and measurable quality, that it conforms to specific aspects of
material existence. In fact, many of these aspects of existence cannot be verified by IQ tests; they are only attributed in retrospect. That is, IQ tests assume that their examinees are conscious agents, and attribute consciousness to their
examinees, but IQ tests cannot test for
consciousness, as made clear by the philosophical zombie scenario – they can
only test, purportedly, for intelligence.
An advanced computer can pass an IQ test. All of this would seem to suggest that
intelligence is not a quality per se,
not something that inheres substantially within a conscious subject, but rather
an effect of behavior. So when we examine
human subjects for intelligence, and quantify their performance with a number,
what are we actually saying? If we’re
not isolating and substantiating some quality of conscious experience, then
what are we doing?
Let’s take a moment and assess what examinees do when
they take an intelligence test: they are presented with problems, composed of
specific elements of information, and
are asked to parse these problems. This
is not an official definition, nor am I citing any source. This is my own definition: IQ tests present examinees with particular elements of information, ask them to analyze those elements, and ask them to produce new equivalences among them. Clearly,
I’m stacking the deck; for if intelligence is simply constructing equivalences
between informational elements, then intelligence actually has little to do
with any interior capabilities or substances.
It has to do only with the relational structure that emerges between
bits of information – that is, it emerges in the pattern, or isomorphism, that mediates
information. Intelligence is not
quality, nor substance, but effect. Human subjects are not “intelligent” in any
possessive sense, but only in a behavioral sense; and in this regard, even the
most clueless human beings can hypothetically fake their way through
intelligence tests. In fact, we can take
this one step further, beyond the limits of the human: even non-conscious computer programs can pass IQ tests with flying colors.
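To make that claim concrete, here is a toy sketch in Python (entirely my own illustration, not any actual test’s methodology) of how a program can “solve” a classic IQ-style item, number-sequence completion, by doing nothing more than constructing equivalences between informational elements:

```python
# A toy solver for number-sequence IQ items. It "understands" nothing;
# it only checks whether the gaps or ratios between elements form a
# constant pattern, and then extends that pattern.

def next_in_sequence(seq):
    """Guess the next element by testing for a constant difference
    (arithmetic pattern) or a constant ratio (geometric pattern)."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                  # e.g. 2, 5, 8, 11 -> +3
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if ratios and len(set(ratios)) == 1:      # e.g. 3, 6, 12, 24 -> x2
        return seq[-1] * ratios[0]
    return None                               # no pattern found

print(next_in_sequence([2, 5, 8, 11]))   # 14
print(next_in_sequence([3, 6, 12, 24]))  # 48.0
```

The point is not that this little script is intelligent; it is that the behavior it produces is exactly the behavior the test item rewards, and that behavior lives entirely in the relations between the numbers.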
So when Alan Turing proposed his perpetually fascinating
inquiry into machine intelligence – “Can machines think?” – what did he mean? What would it mean for a machine to
think? According to our discussion
above, it would not mean for a machine to possess consciousness (necessarily);
rather, for a machine to “think” it would merely have to exhibit evidence of
thought. The implications that build up
around this proposal are unsettling – for if a machine merely pretends to think, is it not actually
thinking…? And if this distinction no
longer holds, then are we – us human subjects, strung along this mortal coil –
also merely pretending to think? What is the self that only pretends to think? Is it even a “self” at all?
One of the most serious objections to the proposal of
machine intelligence that Turing entertained was the “argument from
consciousness,” voiced originally by the neurologist Geoffrey Jefferson and reproduced in Hofstadter’s masterful work, Gödel, Escher, Bach: an Eternal Golden Braid
(1979): “Not until a machine can write a sonnet or compose a concerto because
of thoughts and emotions felt, and not by the chance fall of symbols, could we
agree that machine equals brain” (597).
Fair enough, objectors – perhaps intelligence cannot be dissociated from
the capacity to create art. Yet as we
have seen, and as recent developments in computer science (more recent than
1979) have shown, machines can compose poetry. They can reorganize
words in novel ways, in ways that may even engender emotions among their human
readers. Furthermore, we know that
computers may be able to perform such tasks without even understanding much of
the semantic content of the words themselves – John Searle suggests this very
point in his Chinese Room scenario, in which he argues that an artificial
system can be programmed to engage in conversation without actually
understanding the content of the conversation.
Certainly, and as Searle declares, this means that the program does not
possess understanding, or consciousness… but these are resolutely human conceptions of being, and have
little bearing on models of intelligence.
For if, as we’ve already shown, intelligence inheres in behavior, not in
substance, then a Chinese Room may not be conscious, but it can damn well be
intelligent.
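Searle’s scenario is easy to caricature in code. What follows is a deliberately crude sketch of my own (Searle specifies a rulebook, not a program): every reply is pure pattern-matching on symbols, with no semantic comprehension anywhere in the loop.

```python
# A toy "Chinese Room": symbol manipulation by rulebook. The program
# matches surface patterns in the input and emits a pre-scripted
# reply. Nothing here "understands" anything.

RULEBOOK = {
    "hello": "Hello! How are you today?",
    "sonnet": "I find the sonnet a wonderfully disciplined form.",
    "weather": "Lovely weather, though I confess I never go outside.",
}

def reply(message):
    """Scan the input for known symbols and follow the rulebook."""
    for symbol, response in RULEBOOK.items():
        if symbol in message.lower():
            return response
    return "How interesting. Tell me more."  # default deflection

print(reply("Do you enjoy writing a sonnet now and then?"))
# -> "I find the sonnet a wonderfully disciplined form."
```

Scale the rulebook up by a few billion entries and you have something much harder to dismiss, which is precisely Searle’s worry and, on my reading, precisely the point.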
“But wait!” the objectors proclaim, “there’s more!” Fine, let’s hear what they have to say. Mr. Hofstadter, if you please…? Of course, as he says, not until a machine
writes a sonnet can we agree that it has achieved the level of “brain,” but
there’s another element to the objection:
“that
is, not only write it but know that it
has written it” (597; emphasis mine).
Ah, yes… clever. As Hofstadter relates, Turing’s objectors
insist that a machine cannot be intelligent not only unless it composes a
sonnet, but unless it knows that it
composes a sonnet. For the time being,
let’s grant that this is a fair objection.
So now, I ask: how do we know that
it knows? Why, let’s ask it:
Objectors:
“Machine, do you know that you’ve just composed a sonnet?”
Let’s
imagine that the machine replies: “Yes, I do.”
Assuming the machine
answered thus, our concerned objectors would presumably acquiesce. The machine admits that it knows it composed
a sonnet – it has become a brain. But
how do we know the machine is telling the truth? After all, we’ve already witnessed that the
machine has the capacity to compose plausible statements, even to
craft poetic texts. Why do we assume
that it can’t lie to us? Why are we
convinced that its intelligence ends at pretense – that it cannot compose
deception? Oscar Wilde associated art with the supreme aesthetic practice of lying; if a machine can compose a
sonnet, then why can’t it lie to us?
Now, let’s not be unfair to our objectors. Let’s also imagine that the machine answers the question of whether it knows it has produced a sonnet with: “No, I do not.”
But what does this mean? By
saying it does not know that it has produced a sonnet, is it saying that it
doesn’t know what a sonnet is? Or is it
saying that it knows what a sonnet is and that it has not produced one? Assuming that the poem composed by the machine
conforms to the rules of a sonnet, we have a couple of options available to us:
either it is unaware of what a sonnet actually is, or it has made a profound
aesthetic judgment regarding the qualities of a sonnet. Either way, its response is ambiguous, and
provides no definitive evidence as to its ignorance of the sonnet form. In an even more radical sense, perhaps the
machine is making an uncanny comment not on the aesthetic qualities of a
sonnet, but on the limitations of knowledge itself.
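Notice, too, that “conforms to the rules of a sonnet” is itself a formally checkable property, verifiable with no knowledge of what a sonnet means. A minimal sketch, with my own crude simplifications (it checks only line count and a rough syllable estimate, ignoring rhyme scheme and true meter):

```python
# A purely formal "sonnet detector": it verifies surface structure
# (14 lines of roughly 10 syllables) without knowing what a sonnet
# is. Syllables are crudely estimated from runs of vowels.

import re

def syllable_estimate(line):
    """Very rough syllable count: contiguous vowel groups."""
    return len(re.findall(r"[aeiouy]+", line.lower()))

def looks_like_sonnet(poem):
    """True if the poem has 14 non-blank lines of plausible length."""
    lines = [l for l in poem.splitlines() if l.strip()]
    return (len(lines) == 14 and
            all(8 <= syllable_estimate(l) <= 12 for l in lines))
```

A machine can pass this check, and so can we, and neither fact settles anything about knowing.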
So, once again – what
the hell is intelligence? Where do
we locate it? Can we locate it? And, pressing our inquiry a bit further, what
exactly does intelligence mean? In Gödel,
Escher, Bach, Hofstadter provides the following speculative assessment of intelligence:
Thus
we are left with two basic problems in the unraveling of thought processes, as
they take place in the brain. One is to
explain how the low-level traffic of neuron firings gives rise to the
high-level traffic of symbol activations.
The other is to explain the high-level traffic of symbol activation in
its own terms – to make a theory which does not talk about the low-level
neuronal events. If this latter is
possible – and it is a key assumption at the basis of all present research into
Artificial Intelligence – then intelligence can be realized in other types of
hardware than brains. Then intelligence will have been shown to be a property that can be “lifted” right out of the hardware in which it resides – or in other words, intelligence will
be a software property. (358)
I know – awesome,
right? If intelligence can be isolated
from its intra-cranial hardware, can be distinguished and abstracted, and
transposed into other systems… then suddenly we have a vision of intelligence
that need not yield to consciousness, to that sacred and protected form of
human experience. We have an
intelligence, in other words, that exceeds
consciousness.
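Programmers have a name for what Hofstadter is describing: substrate independence. Here is a hedged toy illustration (the class names and “substrates” are mine, not his) in which one high-level procedure runs unchanged over two different hardwares, because it is written against the relational structure rather than the stuff:

```python
# Toy illustration of intelligence as a "software property": one
# high-level procedure runs unchanged on two different "substrates,"
# because it depends only on their shared relational interface.

class MemorySubstrate:
    """Symbols held in RAM."""
    def __init__(self, symbols):
        self.symbols = list(symbols)

    def read(self):
        return self.symbols

    def write(self, symbols):
        self.symbols = list(symbols)


class DiskSubstrate:
    """The same symbols, held in a text file on disk."""
    def __init__(self, path):
        self.path = path

    def read(self):
        with open(self.path) as f:
            return f.read().split()

    def write(self, symbols):
        with open(self.path, "w") as f:
            f.write(" ".join(symbols))


def lifted_behavior(substrate):
    """The 'lifted' high-level operation: alphabetize the symbols.
    Nothing in it mentions RAM or disk."""
    substrate.write(sorted(substrate.read()))


m = MemorySubstrate(["ozymandias", "brain", "sonnet"])
lifted_behavior(m)
print(m.read())  # -> ['brain', 'ozymandias', 'sonnet']
```

Whether anything brain-like actually factors this cleanly into hardware and software is, of course, exactly the open question.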
Yet there is something deeply troubling about this
proposal; for even if we limit intelligence in humans to behavioral patterns –
that is, to how humans act – this behavior still must be associated with the
dense, gray matter of the three-pound organ balancing above our shoulders. Unless we want to reify intelligence into an
abstraction, a formal system that subsists beyond its material components… we
have to take into account its substructure.
Suggesting that we can “lift” intelligence out of our brains and implant
it (or, perhaps, perceive it) in other hardware systems assumes a certain limitability
of intelligence – that it comprises an entire system in itself, a formal
arrangement of operations. In contrast
to this hypostatizing tendency, I want to argue that intelligence is a
trans-formal mediation, an isomorphic pattern that dictates the interaction
between systems or organisms. It is not
the quality of a system, capable of being lifted and transplanted, but rather
an interfacial effect, and therefore always in flux. Intelligence cannot be quantified or
delimited, but only perceived. It is for
this reason that we can perceive patterns
of intelligence in human behavior, but we cannot isolate them as
characteristics of an individual human subject.
Isolating intelligence involves a conflation of intelligence with
consciousness; indeed, going beyond that, it involves intelligence’s
capitulation to consciousness. As is the
case of the philosophical zombie, however, an individual subject may in fact be
entirely devoid of consciousness and yet still exhibit intelligent
behavior. This is because intelligence
presents itself not in the illusory self-presence of a rational, conscious
subject, but in the conformity between a subject’s behavior and its external
environment.
This is not a matter of “lifting” intelligence out of its
hardware, because its hardware is simply not isolated to the brain, nor to the
body, nor to the hard drive. The
hardware of intelligence materializes between entities, as the interface that
mediates their relations. It is not a
quality of entities, but an effect of systems.
It is for this reason that we can, and should, talk about what has been
popularly and professionally referred to as artificial
intelligence – but as I hope is becoming clear, artificial intelligence is
actually not “artificial” at all, at least not in the sense of false, or
imitative, or untrue. It is, quite
simply, intelligence tout court. Perhaps intelligence is not a genetic trait,
but a technological phenomenon; it is not limited to the domain of biological
life, but manifests between organisms as a technological prosthesis.
I am not the first
to suggest that intelligence is technological, not genetic. This notion of intelligence can be detected
throughout the tradition of communications theory since World War Two. In a short essay on technology, John Johnston
traces the evolution of computer intelligence back to the necessity of new
communications technologies and code-breaking during the Second World War:
For
the first modern computers, built in the late 1940s and early 1950s,
“information” meant numbers (or numerical data) and processing was basically
calculation – what we call today “number crunching.” These early computers were designed to
replace the human computers (as they were called), who during World War II were
mostly women calculating by hand the trajectories of artillery and bombs,
laboring to break codes, and performing other computations necessary for highly
technical warfare. (199)
Johnston’s comment
highlights a detail of computer history that we all too often forget. When we hear the word “computer,” we
typically make a semantic leap whereby we associate the word with a technical
instrument; but originally, a computer simply meant someone who computes.
Computers were, in their first iteration, human subjects. The shift to technical instruments in the
place of human labor did not fundamentally alter the shape of intelligence as
it manifested among these ceaselessly calculating women – it simply programmed
the symbolic logic by which these original computers operated into the new technical instruments.
Human bodies, like computers, are information
processors. “In short,” Johnston writes,
“both living creatures and the new machines operate primarily by means of
self-control and regulation, which is achieved by means of the communication
and feedback of electrochemical or electronic signals now referred to as
information” (200). We must recall,
however, that Johnston is making a historical claim; that is, he’s attuned to
the specificity of the postwar moment in providing the conditions necessary for
aligning humans and machines. The war
was pivotal in directing science and technology toward a cybernetic paradigm,
which in turn allowed for a new perspective on intelligence to emerge. At first, this evolution in intelligence
simply took the form of technical Turing machines – simple computers that could
produce theorems based on an algorithm, essentially glorified calculators – but
eventually it transformed, blossoming into the field that we know today as artificial intelligence.
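For the curious, a Turing machine really is that spare: a tape, a read/write head, and a table of rules. Below is a minimal simulator of my own (the rule table is a standard textbook toy, a unary incrementer; none of this comes from Johnston):

```python
# A minimal Turing machine: tape, head, transition table. This one
# increments a unary number (appends one more '1'), which is about
# as "glorified calculator" as computation gets.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):
            tape.append(blank)        # extend the tape on demand
        symbol = tape[head]
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rules: scan right past the 1s; write a 1 on the first blank; halt.
RULES = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "R"),
}

print(run_turing_machine("111", RULES))  # -> "1111" (unary 3 -> 4)
```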
The entire premise of AI, as popularly defined and
pursued, is a controversial and even contradictory one: controversial because
there is intense disagreement over whether or not artificial intelligence has
ever been attained – and if not, then whether or not it is even
attainable. AI research is also
contradictory, however, and its contradictions touch upon the primary subject
of this post. It is contradictory
because the achievement of intelligence in technological constructs undermines
that very achievement, according to the way its proponents define intelligence:
“There is a related ‘Theorem’ about progress in AI,” Hofstadter writes; “once
some mental function is programmed, people soon cease to consider it as an
essential ingredient of ‘real thinking.’
The ineluctable core of intelligence is always in that next thing which
hasn’t yet been programmed. This
‘Theorem’ was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: ‘AI is whatever hasn’t
been done yet’” (601). Hofstadter’s
comment is revealing, and it draws the humanization of intelligence back into
the discussion. That is, according to
Tesler’s Theorem, experiments in AI pose functions performed by the brain as
goals for computers; but as soon as computers are able to carry out these
functions, those functions cease to count as marks of real intelligence. They
can’t actually be aspects of intelligent behavior, Tesler’s Theorem
retorts, if a computer can do them!
The contradiction immediately presents itself – for if
we’ve already presumed that real
intelligence is behavior that a computer cannot perform, then why do we even bother
experimenting with AI at all!? The
entire enterprise becomes self-defeating.
Tesler’s Theorem tells us that we need to reorient ourselves with respect to intelligence; we need to construct a new perspective on what
intelligence actually is – for if we remain convinced that intelligence can only be human, then it’s pointless to
even try programming intelligence in nonhuman substrates.
This is my point in emphasizing that intelligence isn’t a
genetic trait, or quality of a specific structure of being. If we reimagine intelligence as an emergent
phenomenon deriving from complex pattern-matching, then we open the door not
only to legitimate experiments in AI, but to an entirely new conceptualization of
intelligence itself. Of course, such a
proposal will inevitably invite resistance.
Humans don’t like to separate their intelligence from their experience
of it. It doesn’t only make us feel as
though we’re different from other animals (which we are), but it makes us feel
that our experiences are our own, that our intelligence is our own. In other words, it makes us feel that we’re
not simply going through our lives pretending
to be smart, or imitating intelligence (although, again, we all are – I pretend
to be smart for a living). We feel the
need to make our intelligence a part of our consciousness, as Dennett suggests:
“Consciousness, you say, is what matters, but then you cling to doctrines about
consciousness that systematically prevent us from getting any purchase on why it matters. Postulating special inner qualities that are
not only private and intrinsically valuable, but also unconfirmable and
uninvestigable is just obscurantism” (450).
As Dennett convincingly intimates, many of us prefer that consciousness
matters because we experience it –
but then we succumb to a tautology befitting our desire for presence and
immediacy. We claim that consciousness
is important because it is mine, I am
living it. It’s important because we
have it.
But being able to say “I have consciousness” is consciousness. It’s a nasty little Ouroboros we’ve gotten
ourselves into here. This, of course, is
the hard problem of consciousness: explaining it begins to look uncanny when we
really try to orient ourselves to it in a removed fashion. Who is this shambling, talking, possibly thinking thing saying it’s
conscious? Why, it’s just me, of course…
as far as you know. Which is what
Dennett means when he says that consciousness is unconfirmable and
uninvestigable. Of course, from my
private and privileged perspective, it is absolutely confirmable – but you
don’t have access to my consciousness, you can’t prove to yourself that I am conscious. Even if you hired a trained surgeon to remove
my brain so you could inspect it, you won’t find my consciousness. It isn’t there. It’s only there for me.
So now, imagine that I’m not actually a conscious person,
that I’m what we call a “philosophical zombie.”
I can say all the right things, answer your questions in the right way…
but I’m not conscious. Does this mean
I’m unintelligent? Even if I have no
human connection to the words you say, even if I have no intimate knowledge of
the semantics of language… I possess the capacity to match patterns to such an
extent that I can fake it. I can appear
conscious. It’s the same with a
sufficiently advanced computer system, a system that interprets theorems and
algorithms and performs specific functions.
At a certain point of complexity, the functions it performs might
include carrying on a conversation regarding the aesthetic qualities of “Dover
Beach,” or the ethical issues surrounding immigration – subjects that we consider
fundamental to our humanness. If a
machine matches our communicative capabilities, that may not make it conscious…
but does this mean it isn’t intelligent?
My position on the matter is probably clear by this
point, so I won’t hammer it into the ground anymore. I’ll only leave you with this final
thought. You and I aren’t conversing
face-to-face. You can’t see me, my human
body; you can’t hear my words in the form of speech; you’re just reading them
on a computer screen. Yet we assume each
other’s presence because of the highly complex degree of our communication, of
the pattern-matching and analysis required to interact via an amateur blog post
on the topic of intelligence. And we
likely feel quite comfortable in our positions, confident in our identification
of the person across the interwebz.
After all, it would be difficult for something to fake this level of
linguistic communication, right? Yet
there are programs out there that perform similar functions all across the
web. They’re called (fittingly) bots,
and some are even used to make posts on internet forums. Granted, it’s highly unlikely that a bot
would make a blog post as extended and immaculately crafted as this one (why,
thank you – ah, damn, this is quite a self-reflexive bot!); but even if one
could, would this post be any less… intelligent? (oh please, you’re too kind)
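For what it’s worth, the classic trick behind many such bots is embarrassingly simple: a Markov chain that learns which word tends to follow which, and then babbles. A sketch, trained here on a stand-in corpus of my own (real forum bots are trained on scraped posts):

```python
# A word-level Markov chain "bot": it records which word follows
# which, then generates text from those transition statistics.
# No semantics anywhere, just pattern and probability.

import random
from collections import defaultdict

def train(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=12):
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break                      # dead end: no known follower
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("intelligence is a pattern and a pattern is a relation "
          "and a relation is not a substance")
print(babble(train(corpus), "intelligence"))
```

Crude, yes; but stack enough layers of this kind of statistics on top of one another and the line between faking and doing gets blurry fast.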
Anyway, no need to fret.
I’m not a bot. As far as you
know… ;-)
(at least… I think I’m not)
Works Cited

Dennett, Daniel. Consciousness Explained. New York: Back Bay Books, 1991. Print.

Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. 1979. New York: Basic Books, 1999. Print.

Johnston, John. “Technology.” Critical Terms for Media Studies. Eds. W.J.T. Mitchell and Mark B.N. Hansen. Chicago: U of Chicago P, 2010. 199-214. Print.

Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth of the Self. 2009. New York: Basic Books, 2010. Print.

Wilson, Edward O. Consilience: The Unity of Knowledge. New York: Vintage Books, 1999. Print.