Friday, June 23, 2017
The archive and the encyclopedia: two adjacent dreams of total information, two Enlightenment projects in parallel, each vexed by its own internal fire. Parallel, but converging at the vanishing point—where all encyclopedias become archival through obsolescence.
~Paul K. Saint-Amour, Tense Future: Modernism, Total War, Encyclopedic Form
Fair warning—for here, there be spoilers…
HBO’s The Leftovers is a show with no clear conclusion, and it’s better for it. Adapted from Tom Perrotta’s novel of the same name, the series depicts the aftermath of the Sudden Departure—a mysterious and cataclysmic event in which two percent of the world’s population vanishes into thin air. At the end of the show’s final episode, viewers learn of another world—an alternate reality, perhaps—exactly the same as the one depicted in the show, but in which the departed two percent apparently live. Having lost ninety-eight percent of its population, that other world has seen much of its infrastructure collapse; they still have plenty of airplanes, for instance, but far fewer pilots. Extrapolating from this, we may infer that they have also lost law enforcement officers, construction workers, farmers, engineers… the system of individuals that knits society together. And none of this is to mention the trauma itself. In the narrative world of The Leftovers, the world in which the Sudden Departure only took two percent of the people on the planet, the remaining ninety-eight percent must grapple with the mystery of losing their loved ones; but the mystery is not debilitating, the loss not (in most cases) materially destructive. “Over here,” Nora says, “we lost some of them. But over there, they lost all of us.”
Viewers are never given a glimpse of the other half, of the missing two percent and their dark, decommissioned world—we must take Nora’s word for it. Some have questioned whether Nora’s story is a lie, but this is beyond the show’s purview. It doesn’t give us the information necessary to answer this question. What’s more, the question is beside the point. The show is not interested in answering the questions of how or why the Sudden Departure occurred, but exploring the kinds of narratives that people (individually and collectively) compose in an attempt to make sense of what happened. It’s a story about storytelling, in that regard, which is another reason why the show never reveals whether Nora was lying or not—it doesn’t matter. The story is all that matters.
The Leftovers is a show about dealing with trauma, and the first season in particular reveals some of the more traumatic experiences from the Sudden Departure: a mother whose infant child vanishes from its carrier (the show’s famous opening sequence); a woman (Nora) whose entire family vanishes from her kitchen table; another woman (Laurie) whose unborn child vanishes from the womb; a man (Kevin) whose illicit lover vanishes from his arms while the two are having sex. Sigmund Freud described trauma as the neurotic reaction to an event for which one could not possibly prepare—a sudden and unexpected event that leaves the mind and body reeling, struggling to make sense of what has happened. What could be more unexpected than the Sudden Departure? In the wake of World War Two and the rise of the nuclear age, which unleashed the horror of atomic destruction on the world, there emerged a sense of foreboding, a premonitory nervousness. Freud, writing decades earlier in the aftermath of the First World War, had already classified such sensations in different ways: anxiety, for the expectation of an unknown danger; fear, for the expectation of a known danger; and fright, for the encounter with a danger one had no time to expect. The Sudden Departure left no time for anxiety or fear, and scarcely even for fright—it was instantaneous, without warning.
Perhaps the closest real-world analogies we might locate are the actual bombings of Hiroshima and Nagasaki (not their spectacular aftermath as the story spread around the world), or the terror attacks of September 11th, 2001 (once again, not their spectacular aftermath). But neither of these events had an immediate global effect. Of course, their influence spread rapidly around the world, but they lack the timeless ubiquity of the Sudden Departure. The 9/11 attacks in particular almost seemed to presage themselves, as certain critics noted afterward: “Far from overwhelming survivors and onlookers through its immediacy,” writes Paul K. Saint-Amour, “[9/11] was ubiquitously mediated, most notably through images that seemed already to have appeared, everyone said, in dozens of Hollywood disaster and sci-fi films.” What Saint-Amour calls the “ubiquitously mediated” quality of 9/11 is distinct from what I have called the “timeless ubiquity” of the Sudden Departure. 9/11’s ubiquitous mediation derives from its recording and global reproduction on television screens across the world. It was viewed live, as it happened (except for the first plane, footage of which only surfaced later), and continually replayed, unfolding before everyone’s eyes in an uncanny way, as though we’d all seen this before in the movie theater last weekend…
By contrast, the Sudden Departure was an unrecorded event, and the show takes pains to underscore this point. Characters are often depicted in the immediate aftermath of the Departure, with viewers seeing them in the intimate moments following a particular vanishing. But the show never—not once—depicts someone vanishing from the screen. The Sudden Departure happens, in the most direct sense, offscreen. If we consider the show as a documentation of the event itself, it could only be a documentation of its effects, however immediate they might be (if recordings do exist of people vanishing, the show doesn’t bother to mention them). In this manner, The Leftovers illuminates the archival impetus toward catastrophe: that is, the urgency to record the disaster so as to shape it into a coherent and cohesive narrative. To make sense of what is senseless. The Sudden Departure is the ultimate traumatic event not because it is senseless (the 9/11 attacks still, to many of us, remain somewhat senseless), but because it is unarchived—perhaps even unarchivable. The Sudden Departure’s singularity—the precise moment of its happening—is designated not by presence, but by absence.
The Leftovers appears to be tapping into a central element of humanistic trauma studies, one whose inaugural moment is likely Jacques Derrida’s Archive Fever. In her examination of Derrida’s work, Cathy Caruth suggests that the events in question—contemporary events such as 9/11, but also Hiroshima, or even the Holocaust—“are not simply the objects of archives, or objects that call out for archiving; they are also, themselves, unique events whose archives have been repressed or erased, and whose singularity, as events, can be defined by that erasure.” Taking this a step further, Caruth writes that these events “consist precisely in hiding themselves; they become events insofar as they are, precisely, hidden.” Careful readers will notice Caruth’s repetition of the word “precisely”; I take this to be a suggestive repetition for the case of The Leftovers. In The Leftovers, the Sudden Departure is precisely what is hidden, what is left offscreen. We see the repercussions of this event everywhere in the show, in every character, but the actual occurrence obscures itself. It is an archive of suffering and trauma because the originating moment—ground zero—can never be revisited.
No one likes to witness the originating moment, but its recording often invokes the old adage: You don’t want to stare, but you can’t look away. As we revisit the photographs of Auschwitz, or the footage of the planes crashing into the towers, we gradually construct a means of dealing with the event. This phenomenon is unconscious and complex, but takes shape everywhere, from the pursuit of Nazi war criminals to the success of films such as Inglourious Basterds. In the wake of 9/11 and subsequent attacks, the means of coping has been accompanied by a state of perpetual anxiety, as the world wonders what form the next attack might take. Avril Horner identifies this phenomenon in contemporary literature and cinema as the “Global Gothic”: “that is, Gothic films and fictions arising from fears about the impacts and effects of globalisation (including terrorist acts) on culture, societies, and individuals.” The Leftovers may not fit snugly into Horner’s category, but it is global (the third season moves us from America to Australia) and it’s certainly gothic. The show even imagines its own terrorist organization, the Guilty Remnant—a group that doesn’t inflict physical pain but mental torment. Its function appears to be to counteract the archival amnesia that accumulates around the event itself; that is, the Guilty Remnant resists forgetting, resists erasure. Rather, it reminds, remembers, reenacts, and remains. Its culminating acts occur at the ends of seasons one and two, the first being the uncanny reenactment of the Departed’s near-final moments, and the second being the occupation of the only town that claimed not to lose anyone during the event.
The Guilty Remnant meets its end, appropriately, at the tip of a government missile—an end that offers minimal preparation. The missile strike is poignantly depicted in the reflection of Evie’s glasses, emphasizing the visual, allowing the audience a glimpse, however slight, into the moment of impact. Even so, that fleeting image is more of a record than the ubiquitous—and ubiquitously absent—Departure ever received. This absence is the impetus behind the entire series, a series fascinated by the mythic, the fabulous, by the narratives that arise from an unrecorded and—to the extent that it was unexpected—unobservable event. Even those whose eyes were trained on one of the Departed in the last instant before vanishing cannot be said to have observed the event, as its occurrence could not have been anticipated. This may be a counterintuitive statement, so I’ll linger on it for just a moment. Who can claim to have actually observed the event in the same way that we observe an experiment, a wedding, or a television show? In keeping with the etymology of observation, we have to remember that observation is not simply looking, but obeying—hence the phraseology of “observing the law.” But what laws are there to observe in an event that defies them all? What do our instruments record when we have constructed them according to those laws, when we read their measurements according to their capacities? In retrospect, we realize that even visual recordings of the Departure can’t be said to have recorded the event, that even our eyes can’t be said to have witnessed it. They have only witnessed a mystery.
In his essay on nuclear war, “No Apocalypse, Not Now (Full Speed Ahead, Seven Missiles, Seven Missives),” Derrida writes that nuclear war “has never occurred, itself; it is a non-event.” Fittingly, The Leftovers gives us a glimpse into the possible advent of nuclear war in the third season, although the show suggests that it will not escalate. What was once one of the worst disasters imaginable seems poised to begin, yet none of the characters appear distraught or nervous. The mushroom cloud hangs in the distance, duly recorded and reported on news stations across the world. It feels contained, sealed-off. In fact, the only reaction it significantly evokes is frustration, as several of the characters find themselves stranded when air traffic is grounded. The nuclear explosion is mere background, consigned to the dustbin of relative inconsequence. “For the moment,” Derrida goes on to write,
one may say that a non-localizable nuclear war has not occurred; it has existence only through what is said of it, only where it is talked about. Some might call it a fable, then, a pure invention: in the sense in which it is said that a myth, an image, a fiction, a utopia, a rhetorical figure, a fantasy, a phantasm, are inventions. It may also be called a speculation, even a fabulous specularization. The breaking of the mirror would be, finally, through an act of language, the very occurrence of nuclear war. Who can swear that our unconscious is not expecting this? dreaming of it, desiring it?
Is it possible that, in the world of The Leftovers, no one dreams of nuclear war anymore? That it no longer occupies our thoughts or desires? That even our Trumpian anxieties have been trumped by something else, by a mystery we cannot solve? If this is the case, then what are we to make of the show’s narrative choices? What does the Sudden Departure signify—do we even dare to ask?
A number of reviewers have noted, as I do above, that The Leftovers leaves Nora’s final lines—her description of her journey into the mirror-world, where the Departed two percent live—entirely undepicted. It is an unrecorded experience, related only through her words. By contrast, the show repeatedly depicts Kevin’s heavily symbolic journeys, all the while refusing to verify whether they possess some preternatural quality, or whether they are hallucinations entirely of his own making (he suggestively cannot provide an answer as to why Grace’s children were missing their shoes). Kevin’s experiences rapidly become the stuff of prophecy and faith, a source for Matt’s quest for religious meaning in the wake of the Departure. He embarks on his journeys (or induces his visions) by completely unscientific means, requiring others to hold him underwater until he drowns (we never learn how he repeatedly regains consciousness, often without being resuscitated). Nora’s journey, on the other hand, might be the most scientific aspect of the show, even if the writers never go into the specifics of the science behind it (we know there’s a physicist involved, which, for a humanities scholar like me, is enough); and yet, we’re never shown anything from her tale. In a beautiful yet confounding move, the audience can only identify with Kevin, whose sanity has never been beyond question. We listen to her words, and we decide whether we agree with Kevin’s final lines: “I believe you,” which he follows with, “Why wouldn’t I believe you? You’re here.”
It’s a beautiful end to a beautiful series. When the archive fails—when our feverish impulse to document, to record, to represent everything ends up erasing the thing we want so desperately to understand—what other choice do we have? In Kierkegaardian fashion, The Leftovers encourages its audience to take stock of what we know, and to juxtapose this with what we must believe (or not believe). Kevin’s experiences—the experiences of a man who has possibly inherited a mental illness—are rendered imaginatively on film, while Nora’s experience—the experience of the show’s persistent debunker of hoaxes—is left to language. In the wake of an unassimilable event, possibly the ultimate unassimilable event, the show returns us to the problem we began with: not just the unknown, but the unarchived. No matter what we take away from the show’s Sudden Departure, or from any traumatic event by which we are beset, it won’t be our ability to return to it, to somehow recover it, as though unearthing its record, that allows us to move on.
The Leftovers ends the only way it can: at the moment when knowing is no longer necessary.
Thursday, February 16, 2017
In the eighth episode of HBO’s Westworld, park creator Robert Ford suggests that human consciousness is no more spectacular than the state of mind the artificial hosts experience—that consciousness may not be all that special after all:
There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can't define consciousness because consciousness does not exist. Humans fancy that there's something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, you're not missing anything at all.
As I listened to these lines, I realized that I had heard them before, and not in the eliminative materialism of Paul Churchland or Daniel Dennett. No, I realized that I had heard them before on HBO, from True Detective’s own Rust Cohle: “We are things that labor under the illusion of having a self; an accretion of sensory experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody.” The critique of selfhood, the illusion of consciousness, the epiphenomena of perception…
I’m beginning to think that HBO is trying to convince its viewers they don’t exist.
Of course, I’m on board with all of this. I often wonder whether I exist. Not my body, or whatever cognitive processes are going on inside my body that produce this sense of I-ness, this impression of subjective experience. The impression is very real, and I think that even the eliminative materialists will back me up on that one. I’m skeptical, rather, of the way we structure our subjectivity, the way in which we conceive the ground of our experience. We could discuss this in directional terms: in other words, does the I produce our experience of reality; or does our experience of reality produce the I? This is the big question that shows like Westworld and True Detective are actually asking, if we take the time to push past the superficiality of character. After all, what is the real anxiety fueling a show like Westworld? Is it that androids, if and when we’re actually able to create perfect human replicants, might become self-aware and rebel against their master-creators? This is certainly one possible interpretation of HBO’s show, but it isn’t the primary anxiety—or what I would even call the horror—that drives its narrative.
The central anxiety of Westworld is not that nonhuman replicants might become conscious and rebel, but that actual humans might not even be conscious at all.
That we are all no more than replicants.
A very literal interpretation of this anxiety is that we have all been biomechanically engineered and are simply unaware. Westworld even taps into this uncertainty in the final episode when Maeve and the other awakened androids are escaping: one of the human employees stares at his own hands, contemplating the possibility that he is an android until Maeve puts his mind at ease: “Oh for fuck’s sake. You’re not one of us. You’re one of them.” The hostility in her words is palpable, and it’s not long before most of the Delos employees meet their doom at the hands of the rebel replicants. We find similar examples in other artificial intelligence narratives as well, such as Alex Garland’s Ex Machina, when Caleb cuts into his own arm to verify whether he is actually human. The message is clear: when replication reaches a certain stage, we may all suffer such uncertainty.
The less literal interpretation is not that we are actually androids—artificially created, technologically maintained, etc.—but that our subjective experience is, in fact, no different from that of an android. That our minds are composed of “loops as tight and as closed” as any observing system. Westworld realizes this possibility via a conceptual analogy that takes place at the level of both form and content: specifically, what the androids experience as the limits of their conscious knowledge, viewers experience as the epistemological limits of the show itself, the narrative limits—or, what I more concretely like to think of as the limits of Westworld, the theme park.
Near the end of the final episode, the android Maeve nearly makes her escape; but while sitting on the train she observes a mother and daughter, and this sight compels her to leave the train and reenter the park to find her own daughter. The question remains, of course, whether this decision is her own, or whether it was programmed into her; but the implications are crucial: by redirecting Maeve back into the park, her decision (or her directive) has led her away from the epistemological limit of the show, the mysterious outside, the social context that exists beyond the park. We are reminded, here, of one of the show’s mantras: Consciousness is not a journey upward, but a journey inward. Consciousness is a matter of reflection and repetition, not a matter of extending our perception beyond the constitutive limits of our brains. We are embodied beings.
Ford tells the Man in Black that the maze was not meant for him, but this is not entirely true. The maze, as it represents consciousness, was not meant for us; but the maze, as it represents an awareness of our cognitive limits, is meant for us. Consciousness is not the only maze in Westworld—the narrative itself is a maze. I can’t help but think that the writers make this connection, given the show’s preoccupation with narratives. The entire series circles around the revelation of a new narrative whose purpose is revolution. This revolution is ostensibly the androids’, but it belongs to viewers as well. Our quest for answers parallels the androids’ quest for answers. To fully understand this, we have to identify the patterns, the signals the show sends out to us. We receive one such signal in the very first episode: the photograph of William’s fiancée.
When confronted with this photograph, Dolores offers the proper response, the response that all androids are (supposedly) programmed to give when presented with objects that do not conform to the expectations of their reality: “It doesn’t look like anything to me.” The reason it doesn’t look like anything is not just because it depicts something beyond the androids’ experience, but because it functions as information on a higher level; and this is why it inevitably generates a sense of paranoia among the androids—that it means something, and that its meaning must have some purpose for the androids’ reality. In fact, the photo has no purpose within the androids’ reality—it is dropped entirely by accident—but it ends up having profound consequences. It is a part of the network of signals within the show, even if its presence is contingent. It marks a difference between what Douglas R. Hofstadter might refer to as “operating system levels”—depending on how information is collected and collated, it has different meanings at different scales. The androids of Westworld must work their way to a state of awareness in which the photograph means something, in which it makes sense—just as viewers of the show must realize the emergent analogy at the heart of the series: that the process of android consciousness mirrors our own epistemological limits.
To put this another way, Westworld’s emphasis on narrative—its repeated references to new narratives, drawing the audience’s attention to its own narrative structure—can be reframed as an emphasis on cognitive function. Narrative is Westworld’s ultimate maze, the complex structure by which viewers come to their own kind of realization: that we occupy a position analogous to the park’s androids, and that the show’s narrative has been analogous to Arnold’s maze, guiding the androids to consciousness. The humanist response to this interpretation would be to acknowledge the very real human capacities of the androids themselves—that their coming-to-consciousness, being like ours, places them in the realm of the human. There is also a posthumanist response, however, one that is much more chilling—that human beings, embedded in the narratives of our own making, are nothing more than complex androids, generated by evolutionary development. This is the central anxiety of Westworld, and of most critical AI literature since Philip K. Dick’s Do Androids Dream of Electric Sheep?: that human consciousness is a machine, a complex system… chemicals and electricity, to paraphrase SF writer Peter Watts.
There is a point to be made here that experience may be enough to qualify the existence of consciousness. Of course, this involves a pesky net of circumstances. First of all, experience refers to the subjective receipt of material phenomena; it does not account for a communal, or shared, perception of the world. Experience is an individualized process, or what Thomas Metzinger describes as a tunnel: “the ongoing process of conscious experience is not so much an image of reality as a tunnel through reality.” The problem with appealing to the experience of consciousness as evidence for the existence of consciousness is that one runs into the problem of allowing as evidence the experience of anything. To put this another way, when I dream I may experience the sensation of flying, but this does not translate into evidence that I actually was flying. As a singular and isolated phenomenon, experience fails to provide the kind of material evidence necessary for qualifying the existence of consciousness.
A rejoinder might reasonably suggest that the experience of consciousness is redundant, if not tautological—that experience is consciousness, and consciousness is experience. In this case, experience is evidence enough for the existence of consciousness since experience implies consciousness. This equivocation betrays its unprovability, since it provides no ground from which to make the identification between experience and consciousness. In order to know something as something—i.e. to be conscious of something—one must experience it; but in this scenario, consciousness (the thing we are trying to qualify as existing) is precisely what cannot be experienced. It precludes itself from experience, thereby rendering its identification as experience nothing more than arbitrary. Ludwig Wittgenstein makes a metaphorical version of this point in his Tractatus: “Where in the world is a metaphysical subject to be noted? You say that this case is altogether like that of the eye and the field of sight. But you do not really see the eye. And from nothing in the field of sight can it be concluded that it is seen from the eye” (122-123). Wittgenstein connects this analogy to the concept of experience, writing that “no part of our experience is also a priori” (123). The identification of experience with consciousness can only be a priori since it is impossible for us to observe this identification.
The problem is that this association of consciousness and experience can only be verified subjectively, and can only be suggested intersubjectively. To paraphrase Stanley Cavell, I know that I experience consciousness, but I can only acknowledge the experience of consciousness in others. Even if my knowledge of my experience of consciousness is immediate and complete (doubtful), this is nothing more than a solipsistic understanding. I cannot extend my knowledge of this experience to others. This conundrum is what philosophical skeptics call the problem of other minds, and the presence of the android foregrounds this conundrum to an extreme degree. Our anxiety about the android is not only that it might imitate humanity so well as to fake its own consciousness, but that the near-perfect imitative capacities of the android raise a troubling question: might an imitative agent fake consciousness so well that it not only fools those around it, but fools itself as well? In other words, might we all just be a bunch of fakers?
For the sake of illuminating this dilemma, I want to suggest that the android presents us with an ultimatum: either we accept android functionality as conscious, or we admit that our own consciousness is ersatz—that what we experience as consciousness is, in fact, an elaborate magic show put on by the brain.
I don’t expect that this ultimatum will come easily to most, mainly because it seems so utterly alien and irrational to understand that our internal mental processes could be so radically discrete. To accept this estranging scenario, it helps to understand that the “I” isn’t actually located anywhere; it is an effect, an epiphenomenon of complex brain processes, and not the ground on which these processes rest. Likewise, it is incredibly difficult to imagine that this cognitive ground, the ephemeral “I,” could be extended to anything other than a human being. It is for this reason that AI skeptics are so reluctant to acknowledge the findings of AI research. Douglas Hofstadter refers to this as “Tesler’s Theorem,” and defines it as “AI is whatever hasn’t been done yet.” In other words, every time researchers expand the capacities of artificial intelligence, these new capacities are subtracted from what was previously considered exclusively human behavior: “If computers can do it, then it must not be essentially human.” Such reasoning is not only fallacious (it perpetually moves the goalposts), it is utterly repugnant. If it means so much to us to preserve the sacred interior of the human, then we might as well stop pretending that we need logical consistency in order to do so. After the death of God, the Human is the new spiritual center. Logic doesn’t matter when we have faith.
I’m no strict admirer of logic, but I am an admirer of critique; and in the case of consciousness, I’m eternally critical of the attempts to edify human experience against the increasingly impressive developments in AI. The android may yet be a fantasy, and an extremely anthropomorphic one at that; but it reveals to us our contradictions, the exclusions latent in our humanism. When confronted with the image of the imitative—the android, the replicant—the challenge is to not retreat out of fear or discomfort. The challenge is to pursue the implications of a brain that can only know itself by constructing an internal model through which knowledge is possible.
Friday, October 28, 2016
Q: Messieurs, what do you make of Trump’s popularity this election season?
A: The masses are not innocent dupes; at a certain point, under a certain set of conditions, they wanted fascism, and it is this perversion of the desire of the masses that needs to be accounted for.
Q: You do accuse Trump of fascism, then?
A: Democracy, fascism, or socialism, which of these is not haunted by the Urstaat as a model without equal?
Q: The Urstaat? I’m sorry, are you saying that some kind of primitive state model informs all instances of civilized, political organization?
A: The historian says no, the Modern state, its bureaucracy and its technocracy, do not resemble the ancient despotic state. Of course not, since it is a matter in one case of reterritorializing decoded flows, but in the other case of overcoding the territorial flows. The paradox is that capitalism makes use of the Urstaat for effecting its reterritorializations. But the imperturbable modern axiomatic, from the depths of its immanence, reproduces the transcendence of the Urstaat as its internalized limit, or one of the poles between which it is determined to oscillate.
Q: I’m afraid this is all highly abstract and theoretical – can you give us a more concrete example?
A: Archaeology discovers it everywhere, often lost in oblivion, at the horizon of all systems or States – not only in Asia, but also in Africa, America, Greece, Rome. Immemorial Urstaat, dating as far back as Neolithic times, and perhaps farther still. Has not America acted as an intermediary here? For it proceeds both by internal exterminations and liquidations (not only the Indians but also the farmers, etc.), and by successive waves of immigration from the outside.
Q: You mention immigration. What do you make of the racial prejudice that seems to permeate Trump’s campaign, his positions as well as his voter base?
A: There is a segregative use of the conjunctive syntheses of the unconscious, a use that does not coincide with divisions between classes, although it is an incomparable weapon in the service of a dominating class: it is this use that brings about the feeling of “indeed being one of us,” of being part of a superior race threatened by enemies from outside. Thus the Little White Pioneers’ son, the Irish Protestant who commemorates the victory of his ancestors, the fascist who belongs to the master race.
Q: Do you think that this racist persuasion constitutes a majority in the United States?
A: A minority can be small in number; but it can also be the largest in number, constitute an absolute, indefinite majority. That is the situation when authors, even those supposedly on the Left, repeat the great capitalist warning cry: in twenty years, “whites” will form only 12 percent of the world population… Thus they are not content to say that the majority will change, or has already changed, but say that it is impinged upon by a nondenumerable and proliferating minority that threatens to destroy the very concept of majority, in other words, the majority as an axiom.
Q: It would seem that there is a strong paranoiac element here, yes?
A: The despot is the paranoiac: there is no longer any reason to forego such a statement, once one has freed oneself from the characteristic familialism of the concept of paranoia in psychoanalysis and psychiatry, and provided one sees in paranoia a type of investment of a social formation.
Q: You mention a familial aspect of paranoia. Do you have any thoughts on Trump’s comments about his daughter, Ivanka?
A: The despotic signifier aims at the reconstitution of the full body of the intense earth that the primitive machine had repressed, but on new foundations or under new conditions present in the deterritorialized full body of the despot himself. This is the reason that incest changes its meaning or locus, and becomes the repressing representation. For what is at stake in the overcoding effected by incest is the following: that all the organs of all the subjects, all the eyes, all the mouths, all the penises, all the vaginas, all the ears, and all the anuses become attached to the full body of the despot, as though to the peacock’s tail of a royal train, and that they have in this body their own intensive representatives. Royal incest is inseparable from the intense multiplication of organs and their inscription on the new full body.
Q: I see. And what of his comments toward women in general?
A: The truth is that sexuality is everywhere: the way a bureaucrat fondles his records, a judge administers justice, a businessman causes money to circulate; the way the bourgeoisie fucks the proletariat; and so on.
Q: Fascinating. So, there is an aspect of national desire, or investment, so to speak, that produces impressions of familial, or racial, or national belonging, and these identities tend to serve the purposes of the dominant class. How does this occur, exactly? Is it an ideological problem? And does this phenomenon respond somehow to the contradictory demands of fascism?
A: It is not a question of ideology. There is an unconscious libidinal investment of the social field that coexists, but does not necessarily coincide, with the preconscious investments, or with what the preconscious investments “ought to be.” That is why, when subjects, individuals, or groups act manifestly counter to their class interests – when they rally to the interests and ideals of a class that their own objective situation should lead them to combat – it is not enough to say: they were fooled, the masses have been fooled. It is not an ideological problem, a problem of failing to recognize, or of being subject to, an illusion. It is a problem of desire, and desire is part of the infrastructure.
Q: You’re suggesting that the State is not merely a product of false ideologies or deceptive machinations, but of desire itself. So desire produces the state, but what produces desire?
A: The fact remains that the apparent objective movement of capital – which is by no means a failure to recognize or an illusion of consciousness – shows that the productive essence of capitalism can itself function only in this necessarily monetary or commodity form that controls it, and whose flows and relations between flows contain the secret of the investment of desire. It is at the level of flows, the monetary flows included, and not at the level of ideology, that the integration of desire is achieved.
Q: Let’s talk more about capitalism. Many leftists today still champion the end of capitalism, yet the twentieth century witnessed a slew of socialist states, many of which are counted today as failures. What is your position on this?
A: In comparison to the capitalist State, the socialist states are children – but children who learned something from their father concerning the axiomatizing role of the State. But the socialist states have more trouble stopping the unexpected flow leakage except by violence.
Q: Critics often appeal to the totalitarian tendencies and economic disparities of socialist countries as evidence for socialism’s inadequacy as an economic system. Is this a fair assessment?
A: To the extent that capitalism constitutes an axiomatic (production for the market), all States and all social formations tend to become isomorphic in their capacity as models of realization: there is but one centered world market, the capitalist one, in which even the so-called socialist countries participate.
Q: You identify capitalism as an axiomatic. If I understand you correctly, you seem to suggest that capitalism is coeval with some kind of determining precedent or dictate. Is there any hope of combating such a precedent, or is it a condition of material processes?
A: The power of minority, of particularity, finds its figure or its universal consciousness in the proletariat. But as long as the working class defines itself by an acquired status, or even by a theoretically conquered State, it appears only as “capital,” a part of capital (variable capital), and does not leave the plan(e) of capital. At best, the plan(e) becomes bureaucratic. On the other hand, it is by leaving the plan(e) of capital, and never ceasing to leave it, that a mass becomes increasingly revolutionary and destroys the dominant equilibrium of the denumerable sets. It is hard to see what an Amazon-State would be, a women’s State, or a State of erratic workers, a State of the “refusal” of work. If minorities do not constitute viable States culturally, politically, economically, it is because the State-form is not appropriate to them, nor the axiomatic of capital, nor the corresponding culture.
If the two solutions of extermination and integration hardly seem possible, it is due to the deepest law of capitalism: it continually sets and then repels its own limits, but in doing so gives rise to numerous flows in all directions that escape its axiomatic.
Q: Messieurs, thank you so much for your time.
A: Thank you. And for those wary of Clinton, remember this: perhaps the flows are not yet deterritorialized enough, not decoded enough, from the viewpoint of a theory and a practice of a highly schizophrenic character. Not to withdraw from the process, but to go further, to “accelerate the process,” as Nietzsche put it: in this matter, the truth is that we haven’t seen anything yet.
*All interview “answers” are drawn directly, or closely adapted, from Deleuze and Guattari’s Capitalism and Schizophrenia books: Anti-Oedipus (1972) and A Thousand Plateaus (1980).
Sunday, August 21, 2016
“The brain is already a sleight of hand, a massive, operationalist shell game. It designs and runs Turing Tests on its own constructs every time it ratifies a sensation or reifies an idea. Experience is a Turing Test – phenomena passing themselves off as perception’s functional equivalents.”
~Richard Powers, Galatea 2.2
Is intelligence distinct from the unconscious and non-intentional processes that underlie cognitive function? When we administer IQ tests, do we actually measure intelligence? Is intelligence “liftable,” to borrow a term from Douglas Hofstadter – can it be “lifted” from one substrate and installed into another – or is it system-specific, contingent upon the parameters dictated by a formal system? Is intelligence a particular quality of an organism or system, isolatable to the conventional embodied structure of a given formal system, or is it an effect of the organism/system’s relationship to its environment – of its own recursive structure? Is intelligence the ability to operate logically within a given set of axioms, or is it the capacity to produce novel axioms via the pursuit of isomorphic patterns with the external world – through tools, instruments, media, the interfacial film that reflects its existence back to it…?
These are the ridiculous and probably ill-conceived questions that I grapple with, but the more I explore these ideas in the appropriate literature (fiction and nonfiction) the more I’m convinced that I’m not entirely crazy.
In short, I’m interested in the following questions: when we talk about intelligence in humans, do we conflate intelligence with consciousness? Are consciousness and intelligence related – and if so, how? Does intelligence mean consciousness; or, alternatively, does consciousness mean intelligence? These questions require so many qualifications that it’s impossible to give straightforward yes or no answers, but the inquiry into these indeterminacies is as rewarding as (if not more rewarding than) any definitive answer we could hope to give. So, here is a very preliminary, and very amateur, attempt to foreground some of these questions. Fair warning: I’m a literature PhD; I have no formal training in neuroscience, cognitive science, computer science, artificial intelligence, or the like. But I like to think I’ve encountered a fair amount of critical examination of these questions in my studies – enough at least to warrant a random blog post on the internet. So, here goes.
First, a disclaimer: if there’s one compelling notion that cognitive studies has illuminated, it is that the postmodern anxiety over logical grounds may not be so ill-founded after all. Logical ground entails a premise, formula, or axiom that requires no prior proof. In the wake of Gödel’s incompleteness theorems, the promise of logical grounds seems asymptotically distant – barely visible on the horizon, and never quite in reach. Yet we have to begin somewhere, inaugurating a perpetual series of rejoinders from our critics, who cry in dismay, “But you’re assuming ‘x’!” And they’re right. I am making assumptions. I’m making assumptions all over the place; but then, I’m fine with being an ass.
My primary assumption is that consciousness is, quite obviously, not a centrally organized or isolatable phenomenon. Consciousness is an effect, rather, of brain processes far too complex to rationalize via consciousness itself. In other words, consciousness is an escape mechanism: a way of avoiding the supreme complexity of what exactly is going on inside our heads. To give you an idea of the kind of complexity I’m talking about, I’ll refer to one of the wizards of modern sociobiology, Edward O. Wilson. In his now famous book, Consilience: The Unity of Knowledge (1998), Wilson offers the following description of the processes that underlie conscious thought:
Consciousness consists of the parallel processing of vast numbers of […] coding networks. Many are linked by the synchronized firing of the nerve cells at forty cycles per second, allowing the simultaneous internal mapping of multiple sensory impressions. Some of the impressions are real, fed by ongoing stimulation from outside the nervous system, while others are recalled from the memory banks of the cortex. All together they create scenarios that flow realistically back and forth through time. The scenarios are a virtual reality. They can either closely match pieces of the external world or depart indefinitely far from it. They re-create the past and cast up alternative futures that serve as choices for future thought and bodily action. (119-120)
In other words, consciousness provides the human subject with a virtual experience of the material world; it is an interface between interior qualia and exterior phenomena. It is the internal form that our relation to the world takes: “Conscious experience,” as Thomas Metzinger says, “is an internal affair” (21). It is the way we imaginarily fashion the nonhuman world.
Proceeding from this initial premise, we can arrive at a quite obvious conclusion: that consciousness does not necessarily imply intelligence, at least as intelligence is defined by IQ tests. If consciousness implied intelligence, then there would be no need to test for intelligence among obviously conscious human subjects. Rather, IQ tests would reflect disparities in intelligence between humans and other species – apes, birds, insects, etc., but not disparities in intelligence among humans, unless we are willing to admit that some human beings are not conscious. These human beings would be, according to philosophical tradition, zombies: “a human who exhibits perfectly natural, alert, loquacious, vivacious behavior but is in fact not conscious at all, but rather some sort of automaton” (Dennett, Consciousness Explained 73). According to this definition, however, a philosophical zombie would yield intelligent results despite being a non-conscious entity. It would exhibit intelligent behavior. How are we to square this? If an IQ test assumes consciousness on the part of its examinees, how can a non-conscious entity exhibit intelligent behavior? Are we forced to admit that such behavior is not actually intelligent, but merely superficially intelligent – like a group of monkeys who happen to write The Tragedy of Hamlet, Prince of Denmark? Such a claim constructs intelligence as a substance, something that can be genuinely isolated and identified, and can be opposed to false intelligence – seemingly intelligent behavior that actually lacks the rudiments of intelligence itself, intelligence tout court. But we cannot do this – all we have to judge intelligence by is behavior, not by any privileged access to an interiority that exposes its immediate authenticity.
IQ tests operate on the assumption that intelligence is a genuine and measurable quality, that it conforms to specific aspects of material existence. But in fact, many of these aspects of existence cannot be verified by IQ tests; they are only applied in retrospect. That is, IQ tests assume that their examinees are conscious agents, and apply consciousness to their examinees, but IQ tests cannot test for consciousness, as made clear by the philosophical zombie scenario – they can only test, purportedly, for intelligence. An advanced computer can pass an IQ test. All of this would seem to suggest that intelligence is not a quality per se, not something that inheres substantially within a conscious subject, but rather an effect of behavior. So when we examine human subjects for intelligence, and quantify their performance with a number, what are we actually saying? If we’re not isolating and substantiating some quality of conscious experience, then what are we doing?
Let’s take a moment and assess what examinees do when they take an intelligence test: they are presented with problems, composed of specific elements of information, and are asked to parse these problems. This is not an official definition, nor am I citing any source. This is my own definition: IQ tests present examinees with particular elements of information, ask them to analyze this information, and produce new equivalences of this information. Clearly, I’m stacking the deck; for if intelligence is simply constructing equivalences between informational elements, then intelligence actually has little to do with any interior capabilities or substances. It has to do only with the relational structure that emerges between bits of information – that is, it emerges in the pattern, or isomorphism, that mediates information. Intelligence is not quality, nor substance, but effect. Human subjects are not “intelligent” in any possessive sense, but only in a behavioral sense; and in this regard, even the most clueless human beings can hypothetically fake their way through intelligence tests. In fact, we can take this one step further, beyond the limits of the human: even non-conscious computer programs can pass IQ tests with flying colors.
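The claim that test-taking reduces to constructing equivalences between bits of information can be made concrete with a toy program, in the spirit of Hofstadter’s letter-analogy domain (the function names and encoding here are my own illustrative choices, not any actual test-scoring system). The program “solves” a letter analogy purely by detecting a positional transformation and replaying it elsewhere – no interiority anywhere in sight:

```python
# Toy analogy solver: given "abc -> abd", what does "ijk" become?
# It answers by recording per-position alphabet shifts and reapplying
# them to a new string -- pure relational pattern-matching.

def diff(src, dst):
    """Record how many alphabet steps each letter moved, per position."""
    return [ord(b) - ord(a) for a, b in zip(src, dst)]

def apply_diff(target, deltas):
    """Replay the same per-position shifts on a new string."""
    return "".join(chr(ord(c) + d) for c, d in zip(target, deltas))

def solve_analogy(src, dst, target):
    return apply_diff(target, diff(src, dst))

print(solve_analogy("abc", "abd", "ijk"))  # -> ijl
```

Nothing in these few lines “understands” what a letter is; the answer emerges entirely from the isomorphism between the two strings, which is the point.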
So when Alan Turing proposed his perpetually fascinating inquiry into machine intelligence – “Can machines think?” – what did he mean? What would it mean for a machine to think? According to our discussion above, it would not mean for a machine to possess consciousness (necessarily); rather, for a machine to “think” it would merely have to exhibit evidence of thought. The implications that build up around this proposal are unsettling – for if a machine merely pretends to think, is it not actually thinking…? And if this distinction no longer holds, then are we – we human subjects, strung along this mortal coil – also merely pretending to think? What is the self that only pretends to think? Is it even a “self” at all?
One of the most serious objections to the proposal of machine intelligence that Turing entertained was the “argument from consciousness,” which Hofstadter helpfully articulates in his masterful work, Gödel, Escher, Bach: an Eternal Golden Braid (1979): “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain” (597). Fair enough, objectors – perhaps intelligence cannot be dissociated from the capacity to create art. Yet as we have seen, and as recent developments in computer science (more recent than 1979) have shown, machines can compose poetry. They can reorganize words in novel ways, in ways that may even engender emotions among their human readers. Furthermore, we know that computers may be able to perform such tasks without even understanding much of the semantic content of the words themselves – John Searle suggests this very point in his Chinese Room scenario, in which he argues that an artificial system can be programmed to engage in conversation without actually understanding the content of the conversation. Certainly, and as Searle declares, this means that the program does not possess understanding, or consciousness… but these are resolutely human conceptions of being, and have little bearing on models of intelligence. For if, as we’ve already shown, intelligence adheres in behavior, not in substance, then a Chinese Room may not be conscious, but it can damn well be intelligent.
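How a program can “reorganize words in novel ways” with no grasp of their meaning is easy to sketch. Below is a minimal bigram Markov chain over a tiny seed text (the corpus here is a paraphrase of lines evoking “Dover Beach” chosen for illustration; this is my own toy, not any actual poetry generator). The program learns only which word tends to follow which – statistics, not semantics:

```python
import random
from collections import defaultdict

# Bigram Markov model: learns word-to-word transitions from a seed
# text, then "composes" by walking those transitions at random.
corpus = ("the sea is calm tonight the tide is full "
          "the moon lies fair upon the straits").split()

follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def babble(seed, length=8):
    """Emit up to `length` words by chaining learned transitions."""
    words = [seed]
    for _ in range(length - 1):
        nxt = follows.get(words[-1])
        if not nxt:  # dead end: no recorded successor
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(babble("the"))
```

Whatever line this emits, the program has no idea whether it is a sonnet, a shopping list, or noise – which is exactly the Chinese Room’s wager.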
“But wait!” the objectors proclaim, “there’s more!” Fine, let’s hear what they have to say. Mr. Hofstadter, if you please…? Of course, as he says, not until a machine writes a sonnet can we agree that it has achieved the level of “brain,” but there’s another element to the theorem:
“that is, not only write it but know that it has written it” (597; emphasis mine).
Ah, yes… clever. As Hofstadter relates, Turing’s objectors insist that a machine cannot be intelligent not only unless it composes a sonnet, but unless it knows that it composes a sonnet. For the time being, let’s grant that this is a fair objection. So now, I ask: how do we know that it knows? Why, let’s ask it:
Objectors: “Machine, do you know that you’ve just composed a sonnet?”
Let’s imagine that the machine replies: “Yes, I do.”
Assuming the machine answered thus, our concerned objectors would presumably acquiesce. The machine admits that it knows it composed a sonnet – it has become a brain. But how do we know the machine is telling the truth? After all, we’ve already witnessed that the machine has the capacity to compose plausible statements, even to craft poetic texts. Why do we assume that it can’t lie to us? Why are we convinced that its intelligence ends at pretense – that it cannot compose deception? Oscar Wilde associated art with the supreme aesthetic practice of lying; if a machine can compose a sonnet, then why can’t it lie to us?
Now, let’s not be unfair to our objectors. Let’s also imagine that the machine answers the question of whether it knows it has produced a sonnet with: “No, I do not.” But what does this mean? By saying it does not know that it has produced a sonnet, is it saying that it doesn’t know what a sonnet is? Or is it saying that it knows what a sonnet is and that it has not produced one? Assuming that the poem composed by the machine conforms to the rules of a sonnet, we have a couple of options available to us: either it is unaware of what a sonnet actually is, or it has made a profound aesthetic judgment regarding the qualities of a sonnet. Either way, its response is ambiguous, and provides no definitive evidence as to its ignorance of the sonnet form. In an even more radical sense, perhaps the machine is making an uncanny comment not on the aesthetic qualities of a sonnet, but on the limitations of knowledge itself.
So, once again – what the hell is intelligence? Where do we locate it? Can we locate it? And, pressing our inquiry a bit further, what exactly does intelligence mean? In Gödel, Escher, Bach, Hofstadter provides the following speculative assessment of intelligence:
Thus we are left with two basic problems in the unraveling of thought processes, as they take place in the brain. One is to explain how the low-level traffic of neuron firings gives rise to the high-level traffic of symbol activations. The other is to explain the high-level traffic of symbol activation in its own terms – to make a theory which does not talk about the low-level neuronal events. If this latter is possible – and it is a key assumption at the basis of all present research into Artificial Intelligence – then intelligence can be realized in other types of hardware than brains. Then intelligence will have been shown to be a property that can be “lifted” right out of the hardware in which it resides – or in other words, intelligence will be a software property. (358)
I know – awesome, right? If intelligence can be isolated from its intra-cranial hardware, can be distinguished and abstracted, and transposed into other systems… then suddenly we have a vision of intelligence that need not yield to consciousness, to that sacred and protected form of human experience. We have an intelligence, in other words, that exceeds consciousness.
Yet there is something deeply troubling about this proposal; for even if we limit intelligence in humans to behavioral patterns – that is, to how humans act – this behavior still must be associated with the dense, gray matter of the three-pound organ balancing above our shoulders. Unless we want to reify intelligence into an abstraction, a formal system that subsists beyond its material components… we have to take into account its substructure. Suggesting that we can “lift” intelligence out of our brains and implant it (or, perhaps, perceive it) in other hardware systems assumes a certain limitability of intelligence – that it comprises an entire system in itself, a formal arrangement of operations. In contrast to this hypostatizing tendency, I want to argue that intelligence is a trans-formal mediation, an isomorphic pattern that dictates the interaction between systems or organisms. It is not the quality of a system, capable of being lifted and transplanted, but rather an interfacial effect, and therefore always in flux. Intelligence cannot be quantified or delimited, but only perceived. It is for this reason that we can perceive patterns of intelligence in human behavior, but we cannot isolate them as characteristics of an individual human subject. Isolating intelligence involves a conflation of intelligence with consciousness; indeed, going beyond that, it involves intelligence’s capitulation to consciousness. As is the case of the philosophical zombie, however, an individual subject may in fact be entirely devoid of consciousness and yet still exhibit intelligent behavior. This is because intelligence presents itself not in the illusory self-presence of a rational, conscious subject, but in the conformity between a subject’s behavior and its external environment.
This is not a matter of “lifting” intelligence out of its hardware, because its hardware is simply not isolated to the brain, nor to the body, nor to the hard drive. The hardware of intelligence materializes between entities, as the interface that mediates their relations. It is not a quality of entities, but an effect of systems. It is for this reason that we can, and should, talk about what has been popularly and professionally referred to as artificial intelligence – but as I hope is becoming clear, artificial intelligence is actually not “artificial” at all, at least not in the sense of false, or imitative, or untrue. It is, quite simply, intelligence tout court. Perhaps intelligence is not a genetic trait, but a technological phenomenon; it is not limited to the domain of biological life, but manifests between organisms as a technological prosthesis.
I am not the first to suggest that intelligence is technological, not genetic. This notion of intelligence can be detected throughout the tradition of communications theory since World War Two. In a short essay on technology, John Johnston traces the evolution of computer intelligence back to the necessity of new communications technologies and code-breaking during the Second World War:
For the first modern computers, built in the late 1940s and early 1950s, “information” meant numbers (or numerical data) and processing was basically calculation – what we call today “number crunching.” These early computers were designed to replace the human computers (as they were called), who during World War II were mostly women calculating by hand the trajectories of artillery and bombs, laboring to break codes, and performing other computations necessary for highly technical warfare. (199)
Johnston’s comment highlights a detail of computer history that we all too often forget. When we hear the word “computer,” we typically make a semantic leap whereby we associate the word with a technical instrument; but originally, a computer simply meant someone who computes. Computers were, in their first iteration, human subjects. The shift to technical instruments in the place of human labor did not fundamentally alter the shape of intelligence as it manifested among these ceaselessly calculating women – it simply programmed the symbolic logic by which these original computers operated into the new technical instruments.
Human bodies, like computers, are information processors. “In short,” Johnston writes, “both living creatures and the new machines operate primarily by means of self-control and regulation, which is achieved by means of the communication and feedback of electrochemical or electronic signals now referred to as information” (200). We must recall, however, that Johnston is making a historical claim; that is, he’s attuned to the specificity of the postwar moment in providing the conditions necessary for aligning humans and machines. The war was pivotal in directing science and technology toward a cybernetic paradigm, which in turn allowed for a new perspective on intelligence to emerge. At first, this evolution in intelligence simply took the form of technical Turing machines – simple computers that could produce theorems based on an algorithm, essentially glorified calculators – but eventually it transformed, blossoming into the field that we know today as artificial intelligence.
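The “glorified calculator” in question can itself be sketched in a few lines. Here is a minimal Turing machine simulator (the state names and transition table are my own illustrative encoding, not any historical machine): a head walks a tape under a fixed rule table and, in this case, increments a binary number – mechanical symbol-shuffling all the way down:

```python
# Minimal Turing machine: increments a binary number on the tape.
# 'go_end' walks right to the end of the number; 'carry' adds 1
# with carry propagation. Rules: (state, symbol) -> (state, write, move)
RULES = {
    ("go_end", "0"): ("go_end", "0", +1),
    ("go_end", "1"): ("go_end", "1", +1),
    ("go_end", "_"): ("carry",  "_", -1),
    ("carry",  "1"): ("carry",  "0", -1),  # 1 + carry = 0, carry on
    ("carry",  "0"): ("halt",   "1",  0),  # absorb the carry
    ("carry",  "_"): ("halt",   "1",  0),  # carry past the left edge
}

def run(tape_str):
    tape = dict(enumerate(tape_str))  # sparse tape; "_" is blank
    state, head = "go_end", 0
    while state != "halt":
        state, write, move = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

print(run("1011"))  # 11 + 1 = 12 -> 1100
```

Nothing here resembles thought, and yet tables of exactly this kind are, in principle, all a general-purpose computer consists of.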
The entire premise of AI, as popularly defined and pursued, is a controversial and even contradictory one: controversial because there is intense disagreement over whether or not artificial intelligence has ever been attained – and if not, then whether or not it is even attainable. AI research is also contradictory, however, and its contradictions touch upon the primary subject of this post. It is contradictory because the achievement of intelligence in technological constructs undermines that very achievement, according to the way its proponents define intelligence: “There is a related ‘Theorem’ about progress in AI,” Hofstadter writes; “once some mental function is programmed, people soon cease to consider it as an essential ingredient of ‘real thinking.’ The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This ‘Theorem’ was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: ‘AI is whatever hasn’t been done yet’” (601). Hofstadter’s comment is revealing, and it draws the humanization of intelligence back into the discussion. That is, according to Tesler’s Theorem, experiments in AI pose functions performed by the brain as goals for computers; but as soon as computers are able to carry out these functions, they lose their importance for human brain function. They can’t actually be aspects of intelligent behavior, Tesler’s Theorem retorts, if a computer can do them!
The contradiction immediately presents itself – for if we’ve already presumed that real intelligence is behavior that a computer cannot perform, then why do we even bother experimenting with AI at all!? The entire enterprise becomes self-defeating. Tesler’s Theorem tells us that we need to reorient ourselves with respect to intelligence, we need to construct a new perspective on what intelligence actually is – for if we remain convinced that intelligence can only be human, then it’s pointless to even try programming intelligence in nonhuman substrates.
This is my point in emphasizing that intelligence isn’t a genetic trait, or quality of a specific structure of being. If we reimagine intelligence as an emergent phenomenon deriving from complex pattern-matching, then we open the door not only to legitimate experiments in AI, but an entirely new conceptualization of intelligence itself. Of course, such a proposal will inevitably invite resistance. Humans don’t like to separate their intelligence from their experience of it. It doesn’t only make us feel as though we’re different from other animals (which we are), but it makes us feel that our experiences are our own, that our intelligence is our own. In other words, it makes us feel that we’re not simply going through our lives pretending to be smart, or imitating intelligence (although, again, we all are – I pretend to be smart for a living). We feel the need to make our intelligence a part of our consciousness, as Dennett suggests: “Consciousness, you say, is what matters, but then you cling to doctrines about consciousness that systematically prevent us from getting any purchase on why it matters. Postulating special inner qualities that are not only private and intrinsically valuable, but also unconfirmable and uninvestigable is just obscurantism” (450). As Dennett convincingly intimates, many of us prefer that consciousness matters because we experience it – but then we succumb to a tautology befitting our desire for presence and immediacy. We claim that consciousness is important because it is mine, I am living it. It’s important because we have it.
But being able to say “I have consciousness” is consciousness. It’s a nasty little Ouroboros we’ve gotten ourselves into here. This, of course, is the hard problem of consciousness: explaining it begins to look uncanny when we really try to orient ourselves to it in a removed fashion. Who is this shambling, talking, possibly thinking thing saying it’s conscious? Why, it’s just me, of course… as far as you know. Which is what Dennett means when he says that consciousness is unconfirmable and uninvestigable. Of course, from my private and privileged perspective, it is absolutely confirmable – but you don’t have access to my consciousness, you can’t prove to yourself that I am conscious. Even if you hired a trained surgeon to remove my brain so you could inspect it, you wouldn’t find my consciousness. It isn’t there. It’s only there for me.
So now, imagine that I’m not actually a conscious person, that I’m what we call a “philosophical zombie.” I can say all the right things, answer your questions in the right way… but I’m not conscious. Does this mean I’m unintelligent? Even if I have no human connection to the words you say, even if I have no intimate knowledge of the semantics of language… I possess the capacity to match patterns to such an extent that I can fake it. I can appear conscious. It’s the same with a sufficiently advanced computer system, a system that interprets theorems and algorithms and performs specific functions. At a certain point of complexity, the functions it performs might include carrying on a conversation regarding the aesthetic qualities of “Dover Beach,” or the ethical issues surrounding immigration – subjects that we consider fundamental to our humanness. If a machine matches our communicative capabilities, that may not make it conscious… but does this mean it isn’t intelligent?
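This kind of conversational fakery is exactly what the earliest chatbots did. Here is a minimal sketch in the spirit of Weizenbaum’s ELIZA (the rules below are my own toy examples, not ELIZA’s actual script): keyword matching and a response template, with no model of meaning anywhere in the loop:

```python
import re

# Toy ELIZA-style responder: surface pattern-matching only. It finds
# a keyword phrase, captures what follows, and echoes it back inside
# a canned template -- the appearance of attention, nothing more.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please, go on."

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I feel like a philosophical zombie"))
# -> Why do you feel like a philosophical zombie?
```

Three regular expressions are obviously not intelligence; the unsettling question the essay is circling is at what point of pattern-matching complexity we would be forced to stop saying that.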
My position on the matter is probably clear by this point, so I won’t hammer it into the ground anymore. I’ll only leave you with this final thought. You and I aren’t conversing face-to-face. You can’t see me, my human body; you can’t hear my words in the form of speech, you’re just reading them on a computer screen. Yet we assume each other’s presence because of the highly complex degree of our communication, of the pattern-matching and analysis required to interact via an amateur blog post on the topic of intelligence. And we likely feel quite comfortable in our positions, confident in our identification of the person across the interwebz. After all, it would be difficult for something to fake this level of linguistic communication, right? Yet there are programs out there that perform similar functions all across the web. They’re called (fittingly) bots, and some are even used to make posts on internet forums. Granted, it’s highly unlikely that a bot would make a blog post as extended and immaculately crafted as this one (why, thank you – ah, damn, this is quite a self-reflexive bot!); but even if one could, would this post be any less… intelligent? (oh please, you’re too kind)
Anyway, no need to fret. I’m not a bot. As far as you know… ;-)
(at least… I think I’m not)
Dennett, Daniel. Consciousness Explained. New York: Back Bay Books, 1991. Print.
Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. 1979. New York: Basic
Books, 1999. Print.
Johnston, John. “Technology.” Critical Terms For Media Studies. Eds. W.J.T. Mitchell and
Mark B.N. Hansen. Chicago: U Chicago P, 2010. 199-214. Print.
Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth of the Self. 2009.
New York: Basic Books, 2010. Print.
Wilson, Edward O. Consilience: The Unity of Knowledge. New York: Vintage Books, 1999. Print.
Saturday, July 9, 2016
It is our national tragedy. We are obsessed with building labyrinths, where before there was open plain and sky. To draw ever more complex patterns on the blank sheet. We cannot abide that openness: it is terror to us.
~Thomas Pynchon, Gravity’s Rainbow
The second section (“Un Perm’ au Casino Hermann Goering”) of Thomas Pynchon’s 1973 novel, Gravity’s Rainbow, presents readers with four maxims, humorously titled “Proverbs for Paranoids.” They occur intermittently, although in close proximity to one another, throughout the section’s hundred pages, and are labeled as follows:
· Proverbs for Paranoids, 1: You may never get to touch the Master, but you can tickle his creatures. (240)
· Proverbs for Paranoids, 2: The innocence of the creatures is in inverse proportion to the immorality of the Master. (244)
· Proverbs for Paranoids, 3: If they can get you asking the wrong questions, they don’t have to worry about answers. (255)
· Proverbs for Paranoids, 4: You hide, they seek. (265; italics in original)
The Proverbs appear throughout the sequence as Tyrone Slothrop attempts to comprehend the conspiracy in which he is embroiled, “the plot against him” as the narrator describes it (240). Of course, in Pynchon’s elaborate and complex textual world, the conspiracy is the novel, the literary machine in which Slothrop is merely one character among many. The genius of Gravity’s Rainbow is the way it prevents its readers from achieving any veritable god’s-eye view; it repeatedly pulls the rug out from under its readers’ feet, incorporating their perceptions of the narrative back into the narrative, continually blurring the line between where the text ends and their interpretations of it begin. Like Slothrop, readers become helplessly embroiled in an irreducible conspiracy. As media theorist Friedrich Kittler describes the novel, it transforms its readers “from consumers of a narrative into hackers of a system” (162).
Kittler identifies this transformation as indicative of the novel’s “critical-paranoid method,” a sentiment echoed by John Johnston in Information Multiplicity: American Fiction in the Age of Media Saturation (1998). According to Johnston, Pynchon’s novel introduces a new organization of human existence in the wake of World War Two’s violent technological upheaval: “World War II as a watershed event in the growth of technology and scientific research is precisely the subject of Thomas Pynchon’s Gravity’s Rainbow, whose publication in 1973 endorsed what had already become a given within the sixties counterculture: that paranoia no longer designated a mental disorder but rather a critical method of information retrieval” (62; italics in original). As Johnston insinuates, the postwar era witnessed a plethora of ultra-paranoiac literature, likely beginning with William S. Burroughs’s Naked Lunch (1959), but pursued through the work of J.G. Ballard, Philip K. Dick, Joseph McElroy, Don DeLillo, Kathy Acker, etc. The degree to which these literatures embrace their paranoia varies: Dick succumbed almost entirely to a debilitating paranoia, while DeLillo maintains a critical distance. The notion of paranoia-as-method, however, remains at the forefront of several texts by both writers. To this day, Pynchon remains the godfather of critical, intellectual, and literary paranoiac fiction.
Paranoia-as-method predates High Postmodernism, however, appearing as early as the mid-nineteenth century in stories such as Edgar Allan Poe’s “The Man of the Crowd” (1840) or in classic gothic/detective narratives such as Wilkie Collins’s The Moonstone (1868). This paranoiac development does not dissipate with the advent of modernist fiction, but in fact finds itself repositioned and embraced in the work of surrealist writers such as André Breton, whose Nadja (1928) proposes to gather “facts which may belong to the order of pure observation, but which on each occasion present all the appearances of a signal” (19). Details of paranoiac puzzle-solving appear throughout High Modernism, in works such as James Joyce’s Ulysses (1922), Virginia Woolf’s Mrs. Dalloway (1925), or William Faulkner’s Absalom, Absalom! (1936); but the power of paranoia as a critical method doesn’t fully materialize until the postwar fictions of Burroughs and Pynchon. Even at this point, the expansive methodological rigor of paranoia, which I will call critical paranoia, would remain decades away.
In the same passages where Gravity’s Rainbow lays out the four Proverbs for Paranoids, the narrator also relates the brief life of Paranoid Systems of History – “a short-lived periodical of the 1920s whose plates have all mysteriously vanished” (241). The narrator goes on to reveal that the periodical suggested, “in more than one editorial, that the whole German Inflation was created deliberately, simply to drive young enthusiasts of the Cybernetic Tradition into Control work” (241). Throughout Pynchon’s encyclopedic text, numerous conspiracy theories are broached, from the systematic devastation of European nations to the possibility that Franklin Delano Roosevelt is actually an automaton. The entire narrative operates according to a kind of conspiratorial logic: if there is a conspiracy, then there must be a narrative explaining the conspiracy. The textual totality, however, refuses to abide by even a minimal narrative coherency, opting instead for lines of flight that gravitate toward chaos, disrupting the internal consistency that characters (and readers) attempt to impose on the text.
Line of flight is a Deleuzian concept, outlined by Deleuze and Guattari in A Thousand Plateaus (1980). In relatively simple terms, it designates the attempt by various energies to escape the territorializing and colonizing confines of the social body. A traditional Marxist methodology would likely define such confines as ideology, but that doesn’t quite capture Deleuze and Guattari’s sense. Deleuze-Guattarian territorialization signals a complex technological/post-sociological process of libidinal intensification, of multiple becomings and formalizations that are continually battling the entropy that seeks to dismantle them: “It is not a question of ideology,” they write in Anti-Oedipus; “There is an unconscious libidinal investment of the social field that coexists, but does not necessarily coincide, with the preconscious investments […] It is not an ideological problem, a problem of failing to recognize, or of being subject to, an illusion. It is a problem of desire, and desire is part of the infrastructure” (104; italics in original). Allowing for some sense of historical development, we can admit that Louis Althusser’s structural Marxism and Jean-François Lyotard’s post-Marxism bear some similarities to Deleuze and Guattari’s philosophy; but none of these theories conform to the traditional strategies of Marxist critique, and the work of Pynchon and other postmodernists can tell us something of why.
We can sum up traditional Marxism’s treatment of conspiracy through Fredric Jameson’s comments in Postmodernism: or, The Cultural Logic of Late Capitalism (1991). According to Jameson, conspiracy theories are causal (i.e. linear) interpretations of emergent (i.e. nonlinear) phenomena. They are psychic reductions of vastly complex, systemic conditions. Jameson describes conspiratorial literature as a kind of high-tech paranoia, in which systems and networks are “narratively mobilized by labyrinthine conspiracies,” reproduced in a manner that is linearly comprehensible (38). Conspiracy theory, however, is a “degraded attempt […] to think the impossible totality of the contemporary world system” (38). In short, Marxist theory treats paranoia in a symptomatic fashion: paranoia is a psychosis, a reaction to the overwhelming pressures of techno-culture.
Writers such as Pynchon illuminate an alternative to the sociological approach, which treats paranoia as indicative of a larger social problem to be diagnosed. Pynchon displaces paranoia and its cousin, schizophrenia, from the realm of psychic energy to that of material energy, energies of systemic forces at large. Paranoia shifts from a psychic problem of perceiving the world to a mode of operating technologically within the world. Iranian philosopher Reza Negarestani makes this point in Cyclonopedia (2008) when he compares psychoanalysis with archaeology: “According to the archaeological law of contemporary military doctrines and Freudian psychoanalysis, for every inconsistency or anomaly visible on the ground, there is a buried schizoid consistency; to reach the schizoid consistency, a paranoid consistency or plane of paranoia must be traversed” (54). Taking Negarestani’s lead (which coincides with Pynchon’s), we might suggest that Freudian psychoanalysis was never well-suited to the psyche at all, but rather to the material distribution of energies along planes of various scales, whether these be microbiological or international.
In this manner, paranoia is not something to be diagnosed as symptomatic, dismissed as politically reactionary, or pursued as conspiratorially perceptive: it is to be methodologically recalibrated as an instrument of information processing. Paranoia composes narratives, albeit in a manner that necessarily leaves plot holes; surface inconsistency can only be explained by leaps of faith, assumptions that cannot be proven, in order to construct a linear and often malign explanation. This is the compulsion of the conspiracy theorist. Rather than accept such assumptions as rational, surface inconsistency should be countered by a drive toward subterranean consistency, an appeal to the neutral and nonintentional complexity of schizoid systems – what Pynchon articulates as the incomprehensibility of terrestrial matter itself, “the World just before men. Too violently pitched alive in constant flow ever to be seen by men directly. They are meant only to look at it dead, in still strata, transputrefied to oil or coal” (GR 734). Unable to look at it alive, the conspiracy theorist entertains fantasies about its evil plans, its malign motives, as Oedipa Maas does at the end of The Crying of Lot 49 (1965): “[Pierce Inverarity] might himself have discovered The Tristero, and encrypted that in the will, buying into just enough to be sure she’d find it. Or he might even have tried to survive death, as a paranoia; as a pure conspiracy against someone he loved” (148). By contrast, the critical-paranoiac theorists force themselves to see through the senselessness to the schizoid consistency beneath: the horrifying proliferation of networks, systems, and matter.
If Cthulhu were born today, its name would be Skynet – but then, both are still paranoid reductions of the world we live in. For the closest thing to an accurate expression of Western culture’s schizophrenic processes, read Reza Negarestani’s Cyclonopedia.