Tuesday, November 1, 2011

Review: Devil May Care, by Sebastian Faulks, writing as Ian Fleming

New York: Doubleday, 2008, pp. 278.

In 2008 Sebastian Faulks, "writing as Ian Fleming," published Devil May Care, a James Bond novel notable for picking up exactly where Ian Fleming left off. Set a year and a half after the events of The Man With the Golden Gun (1965), the book informs us that since his battle with Francisco Scaramanga, Bond has been manning a desk at "Universal Exports," part of a "phased return" to the service. Naturally, M judges him ready to go back to his old assignments early in the novel, charging him with investigating a Dr. Julius Gorner, a villainous industrialist who is believed to be involved in a Soviet scheme to flood Britain with drugs – and perhaps something more than that.

The results of Faulks' effort are mixed. To his credit, Faulks gets the tone and "feel" of Fleming's writing right for the most part. He also derives a good deal of interest from the retro context. There is some amusement in seeing the Bond of the novels plunged into Swinging London, which is making itself felt in the unlikeliest of places – a conversation with his housekeeper May in which she tells him about the drug charges against the Rolling Stones, the news from Moneypenny that M has taken up yoga. (Remember, Fleming's Bond was a creation of the '50s, not the '60s.)

Faulks' choice of contemporaneous Iran for the principal scene of the action adds to its interest. Not only does it take Bond to a part of the world we haven't seen him in before (with few exceptions, Fleming stuck with the United States, the Caribbean and West European countries as his settings), but it helps reinforce the feel of another era, given how much the depiction of the country differs from what is routine in today's thrillers. It is hardly nuanced or deep, and true to the book's "retro" approach more than a few of the remarks the characters make will strike attentive readers as ignorant and bigoted – but the prejudices are different, and the whole may come as something of a shock to those who imagine Iranian history to have begun in 1979, and the Middle East to have never been anything but a cauldron of fundamentalist insanity and homicidal prudery. Call it the more complex "Orientalism" of an earlier period, when these parts of the world were imagined as colorful, extravagant and sensual, as well as decadent and backward – an object of fantasy as well as nightmare, and a much more suitable scene for James Bondian adventure than it might seem today (the episode in the Paradise Club is striking in this respect).

The selection of this setting has yet another advantage, namely the Persian Gulf's having been one of the last scenes where Britain played an independent military role (these were the last years before the end of Britain's commitments "east of Suez"), enabling Faulks to center the story on British action without stretching plausibility too far. To his credit, he proves reasonably adroit in developing his plot to this end.

Still, the weaknesses are not minor ones. While Faulks captures the feel of Fleming's writing, he does not capture the spark it had at its best, and as a thriller the book is a letdown. The inclusion of an assassination attempt on Bond before he even leaves London struck me as a clumsy attempt to correct for the slow start typical of Fleming's novels. Faulks also fails to get full use out of the promising bits of atompunk he introduces, which include an ekranoplan. (Indeed, his prose tends to falter when dealing with the action and techno-thriller bits, a telling instance being his description of a Soviet Mi-8 helicopter as "classic." That helicopter can be regarded as classic now, but no one would have described it that way in the 1960s, when it was new gear.) The same goes for his dispatch of Bond behind the Iron Curtain for the first time in the history of the series (the unseen events between You Only Live Twice and The Man with the Golden Gun apart), which should have been a highlight, but ends up feeling perfunctory, merely something to get Bond from point A to point B.

At the same time, if the idea of sending Bond back to the 1960s was to free him from the constraints of political correctness in the manner of Mad Men, then the novel doesn't quite work on that score, Bond's hedonistic flair clearly lacking. (Early in the tale, Bond actually turns down an attractive woman's offer to go up to her room.) The appearance of a female double-o in the story, something I have a hard time picturing Fleming's M (or Bond for that matter) accepting, looks like a concession to twenty-first century attitudes, much more in line with the later screen Bond girls than anything Fleming wrote.

Where Faulks hews closer to Fleming's precedent, he tends to seem plainly derivative, certainly in his creation of Gorner. The villain clearly owes much – too much – to Moonraker's Hugo Drax: another physically deformed continental who came away from his schooling in England feeling humiliated and hateful, who fought for the Nazis in World War II, and who, having become a self-made tycoon, now works with the Soviets to pursue a personal vendetta against Britain, one set to culminate in a high-tech blow against the country using means we have certainly seen before in Fleming's books. The repetition yields diminishing returns, and it does not help that this villain's particular scheme is overly complex, diffusing the action and tension – a poor contrast with the admirable compactness of the Fleming novels.

The result is a book I found consistently readable, with quite a few bits that were better than that, but which in the end left a lot to be desired as a continuation of Fleming's series.

For the full listing of the James Bond continuation novels (and the reviews of them available on this blog), click here.

Monday, October 31, 2011

A Primer on the Technological Singularity

In March 1993 mathematician Vernor Vinge famously presented a conference paper, "The Coming Technological Singularity: How to Survive in the Post-Human Era," in which he wrote dramatically of a "change comparable to the rise of human life on Earth," caused by "the imminent creation by technology of entities with greater than human intelligence," whether through the expansion of human intelligence via biotechnology or mind-machine interfaces, or the emergence of "strong" artificial intelligence.

This greater-than-human intelligence would not only exceed the human capacity for mental activity, but, particularly to the extent that it is an AI, be capable of designing still-smarter AI, which in its turn can create AI smarter than that, and so on, in a chain of events far outstripping the original, human creators of the technology – while in the process exploding the quantity of intelligence on the planet.

This "intelligence explosion," theoreticians of the Singularity predict, will result in the acceleration of technological change past a point of singularity, analogous to the term's mathematical meaning – a point beyond which the curve of our accelerating technological progress explodes – resulting in
a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century.
The idea of an intelligence explosion was not new in 1993, having quite old roots in science fiction before then, as Vinge, a science fiction writer himself, acknowledged. (To give but one example, something of the kind happens in the course of the stories collected together in Isaac Asimov's I, Robot, with "The Evitable Conflict" describing such a sequence of events quite presciently.) Additionally, as Vinge also acknowledged, he was preceded by Bletchley Park veteran Irving John Good, who is often credited with first giving the idea a rigorous nonfictional treatment in his 1965 paper, "Speculations Concerning the First Ultraintelligent Machine."

However, Vinge's particular presentation has been highly influential, in part, perhaps, because of its timing, converging as it did with similarly spectacular predictions regarding progress in computing, robotics, genetic engineering and nanotechnology (Eric Drexler's Engines of Creation came out in 1986, Hans Moravec's Mind Children in 1988, Ray Kurzweil's Age of Intelligent Machines in 1990) in what some term the "molecular" or GNR (Genetics, Nanotechnology and Robotics) revolution (which many now take to be synonymous with the Singularity concept). That intellectual ferment would shortly be turbo-charged by the expectations the "tech boom" of the late '90s aroused, and perhaps the pre-millennial buzz in evidence during the approach to the year 2000 as well.

In the years since then, the idea has not only become the predominant theme in "hard" (technologically and futuristically-oriented) science fiction, but made increasing inroads into mainstream discussion of a wide range of topics – for instance, by way of Peter Singer's highly publicized book on robotics and warfare, 2009's Wired for War (reviewed here).

There is, of course, a range of outlooks regarding the implications of a Singularity. Some are negative, like Bill Joy's well-known essay "Why The Future Doesn't Need Us," or for that matter, the darker possibilities Vinge has touched on, like technological accidents or a turn to hostility toward human beings on the part of those intelligences. However, there are also "Singularitarians" who not only believe the Singularity possible, but commit themselves to working to bring about a benign intelligence explosion, which they expect (especially in combination with advances in biotechnology, nanotechnology and robotics) to bestow on humanity a whole array of epoch-changing, species-redefining goods, including unprecedented prosperity as vast artificial brainpower explodes our economic productivity while vaulting over the inevitable resource scarcities and cleaning up our ecological messes, and even a transition to the posthuman, complete with the conquest of death (a theme which also occupies much of Kurzweil's writing).

The Case For The Singularity's Plausibility
As stated before, the argument for the possibility or likelihood of the Singularity is largely founded on expectations of a continuing geometrical growth in computing power. Singularitarians like Kurzweil commonly extrapolate from the well-known "Moore's Law," by which the number of components on a chip (and with it, computing performance) doubles every two years or less. At the same time, they reject a mystical (and certainly a dualistic, mind-body) view of cognition, regarding the brain as, at bottom, "hardware," the performance of which computers could plausibly overtake.

This raises the question of what the hardware can do. Kurzweil estimates the human brain's performance as equivalent to 10 petaflops – 10 quadrillion calculations per second. As it happens, IBM's Roadrunner supercomputer passed the 1 petaflop milestone back in 2008, while the fastest computer in the world today, the Fujitsu-built "K" computer, is capable of 8.2 petaflops at present, and expected to attain the 10 petaflop mark when it becomes fully operational in November 2012. Nor does this mark the outer limit of present plans, with an exaflop-capable supercomputer (a hundred times faster still) popularly projected to appear by 2019.
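The arithmetic behind such projections is simple enough to sketch. What follows is a minimal back-of-the-envelope calculation in Python, using the figures cited above; the clean two-year doubling period is the Singularitarian assumption rather than a measured datum, and the result shows why the 2019 exaflop projection actually implies growth faster than a strict Moore's Law doubling:

import math

# Assumed doubling period for computing performance (the Moore's Law-style
# premise of the extrapolation; an assumption, not a measured figure).
DOUBLING_PERIOD_YEARS = 2.0

def years_to_reach(current_flops: float, target_flops: float) -> float:
    """Years until target_flops is reached, given steady doubling."""
    doublings = math.log2(target_flops / current_flops)
    return doublings * DOUBLING_PERIOD_YEARS

K_COMPUTER_2011 = 8.2e15  # the "K" computer's 8.2 petaflops, as cited above
BRAIN_ESTIMATE = 1.0e16   # Kurzweil's ~10 petaflop estimate for the brain
EXAFLOP = 1.0e18          # 1 exaflop = 1,000 petaflops

print(f"Brain-equivalent: ~{years_to_reach(K_COMPUTER_2011, BRAIN_ESTIMATE):.1f} years")
print(f"Exaflop:          ~{years_to_reach(K_COMPUTER_2011, EXAFLOP):.1f} years")
# Prints ~0.6 and ~13.9 years respectively: brain-equivalent hardware almost
# immediately, but an exaflop machine only around 2025 on a strict two-year
# doubling, against the popularly projected 2019.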

Of course, it can be argued that merely increasing the capacity of computers will not necessarily deliver the strong AI on which the Singularity is premised. As Dr. Todd Hylton, director of the Defense Advanced Research Projects Agency's SYNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program puts it,
Today's programmable machines are limited not only by their computational capacity, but also by an architecture requiring human-derived algorithms to both describe and process information from their environment.
Accordingly, some propose the "reverse-engineering" of the human brain, the object of several high-profile research programs around the world, of which SYNAPSE is only one, some of which are expected by their proponents to achieve results in the fairly near term. Henry Markram, director of the "Blue Brain" project (which runs on IBM's Blue Gene hardware), claims that an artificial brain may be a mere ten years away.

Those arguing for the possibility of human-level or greater artificial intelligence can also point to the improving performance of computers and robots at everything from chess-playing to maneuvering cars along obstacle courses, as in the Defense Advanced Research Projects Agency's 2007 Urban Challenge, as well as the proliferation of technologies like the irritating and clunky voice-recognition software that customer service lines now routinely inflict on the purchasers of their companies' products.

The Case Against The Singularity's Plausibility
Writers like Kurzweil and Moravec can make the argument in favor of the Singularity's likelihood in the first half of the twenty-first century seem overwhelming, but that claim has plenty of knowledgeable critics, including Gordon Moore of Moore's Law fame himself (ironically, given how often his prediction regarding the density of components on semiconductor chips is cited on the idea's behalf).

Some might respond by citing Arthur C. Clarke's "First Law of Prediction," which holds that "When a distinguished elderly scientist states that something is possible, he is almost certainly right," whereas "When he states that something is impossible, he is very probably wrong." Moore's objection does appear more intuitive than anything else. But besides the epistemological, ontological and other philosophical problems involved in defining consciousness and verifying its existence, raised by John Searle's "Chinese Room" argument, among many others (we don't know what consciousness is; and even if we did create it, how could we be sure we had?), skeptics have offered several cogent criticisms of Singularitarian assumptions, which generally fall into one of two categories.

The first category of argument posits that it may simply be impossible to develop really human-like artificial intelligence. Physicist Roger Penrose's 1989 The Emperor's New Mind, in which he argues the inadequacy of physical laws to account for human consciousness – and therefore, the prospects for human-equivalent machine-based intelligence – is perhaps the best-known argument of this kind.

The second is the position that, while artificial intelligence of this kind may be theoretically feasible, we are unlikely to realize the possibility in the foreseeable future. This could be due to either the growth in computing power slowing down sharply before the point of the Singularity (given our likely reliance on undeveloped new computer architectures to continue Moore's Law past – or even through – this decade), or the elusiveness of the "software" of intelligence, which seems a more subtle thing than building a faster computer. (Certainly those whose expertise is in biological systems rather than computer hardware and software tend to be more skeptical about the possibility, pointing to the complexity of human neurology, as well as how little we actually understand the workings of the brain.)

Less directly relevant, but telling nonetheless, is the fact that, far from accelerating, our technological development may actually have slowed as the twentieth century progressed, as Theodore Modis and Jonathan Huebner have each argued on theoretical and empirical grounds.

Indeed, there has been a tendency on the part of "Singularitarian" prognosticators to push the dates of their predictions farther and farther into the future as the years go by without the Big Moment happening. I.J. Good declared it "more probable than not that, within the twentieth century, an ultraintelligent machine will be built" (a guess which influenced Stanley Kubrick's 2001). Of course, the century ended and the titular date came and went without anything like the expected result. Later expectations reflect this, Vinge guessing in the early 1990s that the Singularity would occur in the generation-length period between 2005 and 2030, while Kurzweil suggested 2045 as the big date in The Singularity is Near.

Notably, Kurzweil made more modest but also more specific predictions about what would happen prior to the event, including a list of developments he offered in his 1999 book The Age of Spiritual Machines, for 2009. A good many observers revisited his claims when the time came (myself included). Kurzweil himself discussed the results in a 2008 interview with the newspaper Ha'aretz, and while sanguine about his results, others disagreed, and the same newspaper mentioned him in its New Year's Eve list of recent prediction bloopers. (Indeed, since that time Kurzweil has conceded on some points, bumping what he predicted for 2009 regarding virtual reality and automobiles over to 2020 if not later in an article he published in the New York Daily News in December of that year.)

It may be noteworthy, too, that the production of original ideas about the Singularity seems to have fallen off. Surveys of the field, like the aforementioned book by Peter Singer, increasingly seem to go over and over the same list of authors and works (Vinge, Moravec, Kurzweil, Joy, et al.), most of which are a decade old at this point, rather than the rush of new ideas one might expect were we racing toward a post-Singularity reality. (Indeed, I have found the vigor of the critics more striking as of late.) Even science fiction has taken less interest, the genre actually seeing something of a backlash against the concept.

In short, there is abundant reason to think not just that the claims made for the Singularity are open to question, but that the kinds of technological change on which its predictions are founded are actually coming along rather more slowly than many of its advocates argue. Yet, even if the last decade has tended to validate the skeptics, that is a far different thing from saying it has discredited the idea – especially given how much can still change in the time frame Kurzweil has suggested for the Big Event (to which he has stuck, even after moderating his smaller predictions). Moreover, we can at the very least continue to expect faster computers, more sophisticated software and more versatile robots – and, for as long as human beings continue to dream of transcendence, some will continue to envision technology as the means by which we attain it.

Sunday, October 30, 2011

Review: Russian Spring, by Norman Spinrad

New York: Bantam, 1991, pp. 567.

Twenty years ago science fiction legend Norman Spinrad published Russian Spring, a saga of space exploration during the twenty-first century. While envisioned as a story of the future, it now reads like an alternate history, premised as it is on a timeline unfolding from a turn of events quite different from our own--the success of Gorbachev's reforms at achieving their goal of a reformed Communism, rather than finishing off the Soviet economy and precipitating the country's collapse. Focused on economic prosperity inside a Common Europe extended all the way to Vladivostok, the Soviet Union does not wholly disarm, but it no longer competes militarily with the West.

However, the U.S. remains mired in Cold War-style militarism, a policy epitomized by its vast investment in the "Battlestar America" space shield which gives it a genuine strategic superiority over every other country on Earth, and continued military interventions in Latin America intended to preserve its hemispheric hegemony--while its economy rots (except for the bloated military-industrial complex), and the Federal government grows more repressive. Its behavior is jingoistic, deluded, predatory to the point that it expropriates the assets of the European nations to which it owes an increasingly staggering debt, then turns around and annexes Mexican territory to recoup its own loans to that country, actions applauded by the "jingo gringo" portion of the American electorate--bigoted, self-righteous and clinging to a warped version of history.

On the other side of the world, the post-Gorbachev Soviet Union is functional, relatively prosperous and comparatively free, but no utopia. Personal connections and badges of political conformity matter less than they used to, but the want of them continues to stifle many a life. Additionally, the relaxation of central control has unleashed the specter of nationalism inside the country, not least on the part of the Russian "Bears" who are the Soviet counterparts to American jingo gringos.

The collision between the jingo gringos and bears culminates in an old-fashioned, new-fashioned superpower nuclear crisis, instigated by a Ukrainian secessionist movement directed by U.S. neoconservatives. This makes America's, and the world's, last, best hope a left-wing history professor from Berkeley named, of all things, Nat Wolfowitz (a choice of surname that has since become staggering in its irony), who launches an exceedingly implausible political career.

The tale is structured as an old-fashioned family epic. It begins with a four-year-old Jerry Reed watching Neil Armstrong walk on the moon, and catches up with him after the turn of the century, as an American aerospace engineer who has come to the attention of the European Space Agency. Their headhunting of him is made problematic by the political context Spinrad imagines. The U.S. at this point increasingly views Europe as a rival, and new laws reflecting both neo-mercantilist economics and an increasingly intrusive security state make it difficult for Americans with Reed's skills and knowledge to go abroad. For him to accept the ESA offer is not a simple change of employment, but something more like a Cold War-style defection.

Nonetheless, the ESA can offer Jerry something the United States cannot: a chance to work on a new space vehicle that will be the next step in the development and exploration of space (the American space program, by this point, being overwhelmingly military in its focus). While the Europeans wine and dine him in Paris, he also meets and falls in love with a young Russian woman working for a Soviet trading company in the West, Sonya Gagarin. Ultimately he decides to take the ESA offer, marries Gagarin, and begins a family with her, which the book tracks through the rest of the narrative, with their son Robert (who identifies with his father's American nationality) going to the United States and becoming a journalist, and their daughter Franja (who identifies with her mother's nationality, but her father's dream of space) going to the Soviet Union and joining its space program as a pilot of the hypersonic craft revolutionizing long-distance air travel, and laying the foundations for more intensive space travel as well.

Just as the Reed family plays important roles in the story's unfolding events, the politics going on above their heads interfere with their careers and personal lives. Jerry's warm welcome in Europe does not last forever, the defecting American engineer soon enough finding himself an outsider in his own organization amid a rising wave of anti-Americanism on the continent, his personal campaign for a next-generation space vehicle capable of supporting longer-ranged space missions stalled. He becomes especially bitter about the role the Soviets play in the bureaucratic politics standing in his way, alienating him from his daughter Franja (who finds her own ambitions complicated by her family background)--which is all the more ironic given that, while the U.S.'s concern with space is military, and Europe's is primarily commercial (the Europeans focusing on upmarket retirement homes in orbit), it is the Russians who are seriously pursuing Jerry's dream of space exploration and development, building a permanent station on the moon and sending manned missions to Mars. Meanwhile Robert's decision to study in an increasingly closed United States cuts him off from his family back in Europe. The end of Jerry's marriage to Sonya completes their apparent dissolution.

Certainly the Reed family's situation (American father, Russian mother, two French-born children who are at once all and none of these things) comes off as very obviously engineered to convey the "big picture" of these events. Additionally, that big picture indisputably differs enormously from how the 1990s and 2000s have unfolded. Nonetheless, the development of the Reeds' story is logical, as is that of the global context, Spinrad's conception of the larger history--the wild card of Nat Wolfowitz's political trajectory aside--solidly grounded in the possibilities of the world's situation as they appeared circa 1990, one result of which is its prescience at a number of points, from the revival of centrifugal nationalisms in the Soviet Union, to the worsening balance of payments problems of the United States. And Spinrad for the most part achieves the difficult feat of reconciling the demands of these intertwined tales. Certainly his characters and their dilemmas are compelling, the book not just readable but consistently engaging throughout its considerable length (I found myself breezing through my hardback copy's densely packed 567 pages) all the way up to its culmination in tragedy and transcendence.

As one who had always thought of Spinrad (best known for '60s and '70s-era classics like Bug Jack Barron and The Iron Dream) as a "New Wave" science fiction star, I was struck by the fact that here he was presenting those most "Golden Age" of science fiction themes--space travel (treated here with all the attention to the "how" of it one might expect from a much more technology-oriented writer) and the future of humanity--in a celebratory, soaring, optimistic, romantic way, the "future with a Capital F" most definitely arriving (if a bit behind schedule). Spinrad once called William Gibson's Neuromancer a "New Wave hard science fiction novel." Russian Spring, a very different book, can be described in exactly the same way, and in the most favorable sense of those words.

Thursday, October 27, 2011

Reflections on the Jason Bourne Series

Robert Ludlum's writing has drawn a great many critical barbs over the years – many of them deserved. His books are bloated and often poorly focused, stretching a two hundred-page Eric Ambler-style plot to a length of six hundred pages and more. And as his obituary in The Economist noted, his prose reads like James Thurber's parody of thriller writing in "The Secret Life of Walter Mitty" – except, one might add, that even when engaged in parody Thurber demonstrated greater restraint in his use of italics and exclamation points and synonyms for the word "said," so that Ludlum's prose style frequently grates on the nerves of readers possessing the smallest measure of sensitivity to the use of language.

The Bourne Identity (1980), alas, is not exempt from these defects, nor from Ludlum's tendency to sloppy research and silly plotting, which are particularly acute in this book. His depiction of Carlos the Jackal in the novel shows how little he troubled himself to look up facts quite easily available to the public at the time of writing – for instance, in Colin Smith's 1977 book Carlos: Portrait of a Terrorist. (Indeed, I can think of no other writer who has been so cavalier with the facts surrounding this particular figure. Even the creators of Carlos clones, like the screenwriters of the 1981 film Nighthawks – which made its villain a fairly obvious Carlos stand-in, the switch in his nationality from Venezuelan to German notwithstanding – did their homework rather better.) Those facts make the novel's implausible scheme for luring its villain out to be captured or killed (grounded in Carlos's being a gun-for-hire whose face has never been seen) sheer nonsense.

I have to admit that I didn't care about any of that when I first encountered the book. I was an eighth grader, and moreover, one suspicious of highbrow views of what constituted "good" literature. I read novels for page-turning entertainment, and Ludlum certainly offered that. I polished off The Bourne Identity in about a week and was soon reading the sequels, The Bourne Supremacy (1986) and The Bourne Ultimatum (1990).

Inevitably, I found myself comparing the films with the books when I saw them, and while by the time they appeared I had long been conscious of the weaknesses of Ludlum's writing, the films came off unfavorably in the comparison.1

Of course, making The Bourne Identity a movie in the twenty-first century posed some real challenges, given how the material had dated. The Cold War with which Carlos-style terrorism had been bound up was over, Carlos the Jackal had ceased to be topical long before his capture in 1994, and the jet-set vibe of the novels was also very much of its time.

Still, the response of the filmmakers was underwhelming. Rather than updating or reinventing the material Ludlum left them, they simply abandoned it. They discarded the original Bourne's duel with a real-life terrorist, along with his Vietnam-era history, and for that matter, any politico-military-intelligence context for the spy game whatsoever. (At the end of the film, it's hardly clear why it is that intelligence services assassinate people. If it's an anti-Establishment spy movie of the sort popular in the '70s, it's a pretty toothless one.) They discarded the plushness of Ludlum's spy world just as completely. (This Jason Bourne gives the impression of a college kid backpacking through Europe, rather than some man-of-the-world accustomed to first-class traveler's lounges, four-star dining and deluxe hotel suites, and Marie St. Jacques is similarly changed.)

They even did away with the hero's ruthless edge. (Instead of kidnapping Marie at gunpoint as he does in the novel, Bourne pays her ten thousand dollars to chauffeur him.)

All we are left with is a secret agent who has lost his memory, which wasn't nearly so original an idea as some of the audience seemed to think, even back in 1980 when the novel first came out. (That most famous of fictional secret agents, James Bond, lost his memory in 1964's You Only Live Twice – sixteen years before Ludlum presented Bourne to the world.)

Of course, a slenderness of concept has been routine in spy movies these last couple of decades (like an episode of Seinfeld, they have tended to be about nothing), but this wasn't the sort of film that hides the thinness of its script behind a rollercoaster ride of over-the-top action and special effects, like True Lies, or the Mission: Impossible series. It's much more restrained in that respect – and to be frank, I found the handling of the action the movie did offer competent but unexceptional.

All the same, the 2002 film was a decent if unspectacular performer at the box office (U.S. gross $121 million, and another $92 million overseas), then really took off on video, resulting in sequels in 2004 and 2007. The sequels (which took the titles of the sequels Ludlum wrote himself, but little else from those books) held to this pattern, though admittedly they did a somewhat better job of fleshing out the plot, and benefited from brisker pacing and more accomplished action (which, I suspect, was especially impressive to those young enough or forgetful enough to find fight scenes and chases not based on CGI and wire work a novelty).

Even bigger moneymakers than the original (The Bourne Ultimatum, notably, outgrossing any recent Bond film in the U.S. market by a significant margin), they made the series a pop cultural phenomenon. The Bourne movies were not only treated as the new template and standard for the spy film, but (as might be expected given the extent to which the Ludlum name has been franchised in co-authorship schemes) have revived the Bourne series in print. Since 2004 Eric Van Lustbader, another '70s-vintage, past-his-prime author of bestselling thrillers given to over-the-top writing and bad prose, has penned a half dozen Bourne sequels (with a seventh book, The Bourne Upset, scheduled for release next year). Additionally, the success of the Bourne series has brought renewed attention to Ludlum's broader work from readers and filmmakers alike (high-profile movie versions of The Matarese Circle and The Chancellor Manuscript are both in the works now). Naturally, Ultimatum has turned out not to be the last film in the series, a fourth movie, The Bourne Legacy, currently scheduled for release in the summer of 2012.

1. Incidentally, the 2002 film was not the first made of Ludlum's story. There was a two-part, four-hour miniseries on ABC back in 1988, with Richard Chamberlain as Bourne, and Jaclyn Smith as Marie, which was rather truer to the source material.

The Trouble With Reboots: Thoughts on Sherlock

In this age of endless reboots, remakes and retreads of every conceivable brand name and IP, in every conceivable medium, reminders of the difficulties unavoidably entailed in updating stories written and set in earlier periods are ever-present.

Consider, for example, the well-received BBC drama Sherlock. While I was surprised by how much of Arthur Conan Doyle's original the writers managed to retain in an update that is on the whole clever and entertaining, the source material shows its age nonetheless.

There are constant reminders that the "Great Detective" above ordinary human concerns, passions and connections is a less convincing (and less acceptable?) figure than he was a century ago. Holmes is certainly looked at as rather more of a crank than a wonder by the cops he works with, a fact reflected in his weakened self-assertion. Instead of confidently defending to Watson, and the reader, his ignorance of so elementary a fact as the Earth's revolving around the sun (as he does in the original tales), Holmes has this bit of information presented only as something Watson has let slip on his blog, and offers no answer when one of his more disdainful colleagues asks if it is really true. Holmes' sexuality also draws from the other characters speculations and commentary that never appeared in the Victorian source material. (And, one might add, Holmes has been turned from a cocaine user into an ex-smoker who simply slaps on extra nicotine patches when faced with an especially puzzling problem, a tobacco habit, it would seem, being less acceptable today than a coke habit was over a century ago.)

Holmes' nemesis Moriarty, far from the Napoleon of Crime he is made out to be in stories like "The Final Problem," seems rather small fry. Charles Stross explained the reason for that in the afterword to his novel The Jennifer Morgue when he observed:
The perfect criminal, should he or she exist, would be the one who is never apprehended . . . the one whose crimes may be so huge they go unnoticed, or indeed miscategorized not as crimes at all because they are so powerful they sway the law in their favor, or so clever they discover an immoral opportunity for criminal enterprise before the legislators notice it . . . When the real Napoleons of Crime walk among us today, they do so in the outwardly respectable guise of executives in business suits and thousand-dollar haircuts . . . I'm naming no names . . . [but] They have intelligence services! Cruise missiles!
Certainly such views were not unknown in Doyle's time, but it may be that pop culture has since more thoroughly caught up with political consciousness (in spite of all the setbacks the latter has suffered these past three decades).

These significant differences in how we look at the protagonist and his nemesis aside, there is also the matter of how we look at their setting. Holmes' London was the world-city of its time, the political, commercial and cultural capital of a global empire that covered the map in red and ruled a quarter of humanity directly, the center of world finance, and much else besides. Not only does twenty-first century London enjoy no such position, but it might be said that no city on the planet does. (Those who might imagine an American city as a logical successor, for instance, would have to consider the division of functions between New York, Washington D.C. and Los Angeles, greatly complicating any attempt to find an equivalent.) The stage on which Holmes plays his part is diminished not just by the loss of that mystique, but also by the disappearance of the opportunities that an Imperial London afforded the Great Detective (whose services were sought by government ministers and crowned heads, as in "A Scandal in Bohemia"; whose adventures sometimes had serious implications for international politics, as in "The Naval Treaty"; whose career, in short, contained much that might only happen in the dominant international center of its day). And of course, there is the romance of the gas-lit, cobblestoned predecessor to today's metropolis, which Doyle's work is so famous for evoking. That, too, has no equivalent today.

That the show works as well as it does in spite of such diminutions is a testament to the strength of Doyle's original concept, and the talents of those involved in its most recent on-screen realization.

Tuesday, October 18, 2011

Review: Blood & Thunder: The Life & Art of Robert E. Howard, by Mark Finn

Austin, TX: Monkeybrain Books, 2006, pp. 264.

I am generally not inclined to the biographical approach to literary analysis, or toward reading biography generally, but in Robert Howard's case I found myself making an exception. I suppose that is because of the kind of writer Howard was, concisely summed up by Joe Lansdale in his introduction to Mark Finn's study of the creator of Conan the Barbarian. He credits Howard with not just a willingness to "wade into . . . [the] messy end of the literary pool wearing hip boots and a smile," and a knack for "masculine prose, action, and manly adventure . . . equivalent, if not the superior" of Hemingway's, but also a "special ability . . . to tap into the subconscious, into the true and often not-so-polite desires of what Freud called the Id . . . [especially] the male Id," which endows his best work with a "power . . . a rawness, a wet-bone visceral relentlessness" that is rarely equaled.

I wondered where all that came from, and that led me to Finn's book. And indeed, Finn's book devotes a great deal of attention to the formative influence of Howard's family (their constant, restless movement; the close relationship between Howard and his mother) and of the small-town, early twentieth century Texas in which he grew up (the violence and corruption that went with oil boom-and-bust in a place not far removed from the days of the frontier; the stifling anti-intellectualism and conformism of a Southwestern version of Main Street; the culture of the roughneck and the tradition of the "tall tale").

Reasonable as this approach is in its essentials, Finn presses it to the point that his book reads like an overcorrection of the neglect with which he charges other biographers (particularly L. Sprague de Camp), especially given how little Finn says about Howard's literary and intellectual influences, an aspect of Howard's creative life I would have liked to see treated more thoroughly. It does not help that many of the details of Howard's life are scanty, and that there seemed to be little here that did not crop up before in the course of my earlier reading about Howard, casual as it has been.

Finn does the best he can with the available bits. His interpretation of them is well-grounded, compared with many of his predecessors (whom de Camp similarly took to task for their "jejune" attempts to psychoanalyze Howard in the chapter on him in Literary Swordsmen and Sorcerers, de Camp's own attachment to an "Oedipal" view of Howard notwithstanding). Still, while this is methodologically admirable, I found myself wishing for more. I suppose part of this is my wanting to see the speculations so many have made about the author (inevitably made, I suppose, given that special ability to tap the unconscious, and the "wet-bone visceral relentlessness" Lansdale writes of) confirmed, or refuted, or at least acknowledged, and perhaps even to see Howard (or at least, his imagination) somehow emerge larger than life, like the creations for which he is remembered three-quarters of a century after he ended his life.

A Fragment on Indie Film

At Ruthless Culture, Jonathan McCalmont has offered a review of Red State, the latest film from Kevin Smith. The title, "Nothing to Say and No Idea of How to Say It," just about sums up McCalmont's take on the movie. It also offers a cogent summation of Smith's career, which has consisted primarily of Kevin Smith making movies about nothing other than . . . Kevin Smith.

Moviemakers making movies about themselves are certainly nothing new, but the practice nonetheless seems to me to have comprised far too much of the cinematic output of the last two decades, and especially the output of independent films since the 1990s. The result is a staggering number of movies about frustrated creative types (particularly frustrated Hollywood types), and hapless slackers who do not even aspire to creative careers. In relating the small number of stories told about them time and time again, in repeating the same scenes over and over again (like the hero getting thrown out of the apartment he shares with his girlfriend after the inevitable fight over his aimlessness), the filmmakers wallow in their exaggerated idea of their coolness, edginess, wit and command of their craft, as well as their not-very-interesting "big thinks" on capital "R" Relationships. There is, too, the celebration of dialogue for its own sake, and of references to other films as an end in themselves (the careers of Jason Friedberg and Aaron Seltzer are merely the logical end result of this turn), and in general, the exaltation of the superficial as genius.

Consequently, not only is Kevin Smith an overrated filmmaker (or at least, a formerly overrated one), but, along with the other overrated '90s-vintage kings of the independent film scene (like Richard Linklater, and Quentin Tarantino, a strong candidate for the title of "most overrated filmmaker of all time"), he did enormous damage, which inevitably spilled over into mainstream moviemaking, and television, and just about everything else.

Sunday, October 16, 2011

The Life of a Literary Genre: Considering The Mystery

Reading Julian Symons' classic study of the detective novel, Mortal Consequences (which has also appeared under the title Bloody Murder), it struck me that the development of the mystery genre offers a striking parallel to the development of the science fiction genre as I have written about it these past several years. (This appeared principally in my article "'The End of Science Fiction': A View of the Debate." With the disappearance of the fiction review The Fix, where I published the article, it is no longer available, but an edited version may be found in my new book, After the New Wave: Science Fiction Since 1980.)

While the mystery can appear as old as literature itself if one applies the looser definitions, the first flickers of the mystery as we know it may be said to have really appeared in the Romantic era straddling the end of the eighteenth and the start of the nineteenth centuries – in the work of authors like William Godwin and Edgar Allan Poe (analogous to the influence of Godwin's daughter Mary Shelley, and Poe himself, in the misty early days of science fiction). Unmistakable practitioners, like Wilkie Collins and Arthur Conan Doyle (comparable in their influence to Jules Verne and H.G. Wells), appeared with increasing regularity after this, down to the emergence of a recognizable genre – a story type sufficiently well-defined and sufficiently attractive to readers to constitute a recognizable, marketable product (and sufficiently self-aware to have its own presses and publications, its body of classic works, its fandom and its critical literature).

By the early decades of the twentieth century, a "golden age" of mystery writing was underway (the label associated with it identical to that applied to science fiction's own Golden Age). These golden age stories, like the tales of Agatha Christie, focused on concept (in this case, the puzzle the stories present the reader, which might be compared to the scientific gimmickry of early science fiction, with the Great Detective of Golden Age mystery comparable to the hero-scientists of early science fiction) at the expense of other aspects of fiction traditionally associated with literary quality (like characterization and prose style, a charge routinely made against early science fiction), and tended to be politically conservative in outlook (also stereotypically the case with the science fiction published by Hugo Gernsback and John Campbell).

A generation on, however, a new crop of writers rebelled against the old standards, bringing a new emphasis on character and prose style, as well as a sense of grit reflecting, in part, the inclusion of more radical political outlooks – and a diminished emphasis on the genre's old raison d'ĂȘtre – as seen in the writing of Dashiell Hammett (just as happened in science fiction with the New Wave, less interested in technological gimmickry and techno-scientific problems, and frequently identifiable with the counterculture of the period). The new authors were often pointed critics of earlier works, with Raymond Chandler's essay "The Simple Art of Murder" regarded by many as the most striking attack on the Golden Age murder mystery (just as New Wave giant Michael Moorcock, for example, wrote some of the most memorable criticism of Golden Age science fiction in pieces like "Starship Stormtroopers").

The rebellion proved divisive, an audience certainly existing for the newer sensibility, even as the older style had its defenders, who felt that the "new stuff" dispensed with what most appealed to them in the genre. One might go so far as to say that the detective story gave way to the story with a detective interest, or simply a crime interest, as Julian Symons has suggested (as arguably the old-style science fiction story increasingly gave way to the story with a speculative science interest). Moreover, it might be said that by the end of the twentieth century, much of the most celebrated new writing the genre had to offer was set in earlier periods, and frequently written as homage, as with James Ellroy's "L.A. Quartet," or Dennis Lehane's "Shutter Island" (a tendency comparable to the fashion for retro-science fiction, seen in the popularity of subgenres like steampunk).

Of course, along with these striking parallels in the development of these genres, there are also quite a number of striking differences. For one thing, there is little question that the mystery has much more thoroughly pervaded the pop cultural mainstream. (Legal and police procedurals, for instance, have from the start had a far more prominent place in publishing and on television than science fiction – while science fiction trumps these genres in film principally because of the opportunities it affords for spectacle.) It is interesting, too, that where in science fiction the Golden Age is so often thought of as American, and the New Wave rebellion against it as British, the reverse seems to be the case with the mystery. Whether the crime story is a higher development of the detective genre, or a successor genre, or perhaps both, would also seem to be open to argument, in a way that post-New Wave science fiction does not quite seem to parallel. Paranormal romance, steampunk and the like are what people point to when discussing what's doing well at the moment, and there is arguably a bigger gap between these and science fiction as it is usually defined than between the mysteries of this year and yesteryear. That difference is paralleled in the two genres' present conditions. While there is a widespread sense that science fiction may be in trouble – enough so that a recent article in the Wall Street Journal actually addresses the issue – no one seems to be asking such questions about the mystery.

Monday, October 3, 2011

From Screen to Page: Reading Ian Fleming

As is the case with the vast majority of those who picked up their first Ian Fleming after the 1960s, my expectations were formed by the films. As it happened, the first Fleming novel I ever picked up was Thunderball (1961).

The opening sequence of the film version (1965) takes place at the funeral of a SPECTRE operative, and involves, among other things, a memorable attempt on Bond's life, a jet pack flight and the special features on a certain Aston Martin.

The novel's opening has Bond treating a cut on his face, feeling "ashamed of himself" because he has been driven to drinking too much (and losing money at cards!) by the boredom and stress of "more than a month of paper-work," during which his life was apparently not dissimilar from that of any other harried bureaucrat.1

Not only was this much less entertaining than the way the movie opens, but Bond's life seemed dull, our hero weary and worn-out and frankly looking a bit like a loser – things Bond should never appear to be. And things quickly got worse. When M calls Bond into his office he upbraids him for his "mode of life," and inflicts on him a speech rather like Edward Fox's in Never Say Never Again (1983) (a piece I'd earlier imagined was purely a creation of the health-obsessed '80s) before sending him off to Shrublands to get clean.2

The next chapter begins with Bond meeting the taxi taking him there, and forming an impression of his driver that leads him to some sweeping generalizations about Britain circa 1961. Feeling that the cabbie has not shown sufficient devotion to his duties, Bond reflects that the young man's manner in receiving his fare is "typical of the cheap assertiveness of young labour since the war."3 "This youth, thought Bond, makes about twenty pounds a week, despises his parents, and would like to be Tommy Steele," a regrettable result of his having been "born into the buyers' market of the Welfare State, and into the age of atomic bombs and space flight."4

Reading that, I found Bond a stodgy crank on the wrong side of the generation gap, unreconciled even to the Britain of Harold Macmillan, let alone the Swinging Britain of the 1960s with which I associated the figure – fiftysomething rather than the thirtysomething the character was supposed to be.

I stuck with the book all the same, thinking that I could at least look forward to better when the adventure got started. Not only was the opening sequence absent, however, but there was no Fiona Volpe, no rocket-firing motorcycle, no chase through the Junkanoo. There was an underwater battle at the end, but it fell far short of the big-screen version, where the Disco Volante shed its cocoon as a Coast Guard cutter blasted it, and Aquaparas rained from the sky, and the clash culminated in a frantic struggle on the bridge of Emilio Largo's yacht before it ran aground and exploded . . . and while Bond does end up with Domino by the story's close, rather than being whisked into the air with her by skyhook as the credits roll, he is visiting her in the hospital where she is recovering from Largo's abuse, a rather less James Bondian conclusion to my way of thinking.

In short, my disappointment was not significantly alleviated by the time I'd finished the book.

I turned to Moonraker (book 1955, film 1979) after that, and found it even more of a letdown. The first third of the novel was swallowed up by the matter of Drax's cheating at cards (including a lengthy description of the game in which Bond gives Drax his comeuppance that I found almost unreadable). There was little action after that, the story only getting going in the last third or so, during which there is not even a proper shootout or bit of hand-to-hand combat (though there is a decent car chase). Bond doesn't even get the girl this time. Gala Brand, as it turns out, is happily engaged. And even as travelogue the book falls flat, Bond never getting away from his home turf in southeastern England, with most of it set along a small stretch of coast around Dover.

My skim of a few other Fleming novels left me with a similar impression, and I soon set aside the series in disappointment. However, I later revisited Fleming, and eventually found myself reading through his whole series.

The Bond of the books is quite different from the Bond of the films – a formidable secret agent who enjoys the good life, but not a superman living out an unending fantasy of violent action, luxury and sex. While he may be the "best shot in the service" (as we are told in Moonraker), being on an airliner as it flies through a storm can make him nervous, and his sleep is not untroubled by nightmares (as we learn in 1954's Live and Let Die) – the moments of vulnerability more frequent and conspicuous. The locations he visits are not always glamorous, or the accommodations enticing, with Fleming's portrait of Jacksonville and Saint Petersburg, Florida in Live and Let Die, or the motel that is the setting for much of The Spy Who Loved Me (1962), standing out as examples of quite the opposite in my recollections, while in Diamonds Are Forever (1956), the impression Las Vegas makes is of an oppressively crass, tacky, clattering tourist trap. Bond is exceptionally successful with women, managing to win over some who would ordinarily be quite resistant to male advances (like Tiffany Case in Diamonds and Pussy Galore in 1959's Goldfinger), but his record is not unblemished by rejections (by his secretary Loelia Ponsonby, for instance, and even granting her engagement, Gala Brand takes no interest whatsoever).

The novels tend to start slow, and even when they do get going, don't have Bond fighting as many bad guys, or using as many cool toys, the action and gimmickry generally less extravagant. (Indeed, it is the villains who typically employ the few bits of gadgetry seen – like the walking stick-gun and the nail-studded chains Le Chiffre deploys from the trunk of his car in 1953's Casino Royale, or the copy of War and Peace with a gun in its spine in 1957's From Russia With Love.)

Bond also tends to take rather more of a beating in the books than on-screen – often as a result of torture tough on his pride as well as his body (like Le Chiffre's use of a carpet-beater in Casino, or Wint and Kidd's stomping of Bond with football cleats in Diamonds Are Forever, or the KGB's brainwashing of Bond in 1965's The Man With the Golden Gun, after which Bond is subjected to brutal electric shock therapy to undo the damage). Not surprisingly, the adventure frequently leaves him requiring an extended period of convalescence (as in Casino; in Moonraker after his subjection to a steam hose; after the events of From Russia With Love, where, unlike in the movie (1963), Rosa Klebb actually got him with the poisoned blade in her shoe; after his escape from Dr. Shatterhand's fortress in 1964's You Only Live Twice; and yet again after his confrontation with the titular character in The Man With the Golden Gun, again in contrast with the film (1974), where he gets away unscathed).

Bond gets bored, depressed, ambivalent about the meaning and significance of his work, in Casino confessing to his French colleague Rene Mathis that "this country-right-or-wrong business is getting out of date" in a world where "History is moving pretty quickly . . . and the heroes and villains keep on changing parts."5 The moment passes, but such thoughts do recur, with Bond feeling sure in "Quantum of Solace" (story 1960) that his government is on the wrong side of the issue when he is charged with firebombing a cabin cruiser ferrying arms to Fidel Castro's rebels. He gets anxious that between the "two or three times a year that an assignment came along requiring his particular abilities" he is going soft – outside that small portion of his career, he is little different from any other senior civil servant of his time and place.6 His daily life is comfortable, and sometimes more than that (the cars he gets to drive, like his Bentley, are luxurious), but on the whole comparatively pedestrian, and there are times when it gets to be a grind like everyone else goes through (as seen in Thunderball) – the sort of thing one might expect from a novel by Graham Greene or John le CarrĂ© rather than Fleming. He even has qualms about his relations with women, brooding over the "conventional parabola" of his affairs, and the way they concluded with "tears and the final bitterness . . . shameful and hypocritical."7

And he does appear to get worn down over the course of the stories. Where he responds to Vesper Lynd's murder in Casino by making his fight against SMERSH personal, he goes to pieces when Tracy is murdered by Ernst Stavro Blofeld, leaving M wondering what to do with him in the book You Only Live Twice – quite unlike what we see in the opening of the film version of Diamonds Are Forever (1971), in which he has purposefully set off on a mission of revenge. In short, even granting their distance from how the intelligence business really works, Bond is much more human, the novels about him rather more realistic, than Fleming's reputation suggests.

Yet, there is no question that the books laid the groundwork for the films. Casino Royale introduced the protagonist and his mode of life, at once luxurious and dangerous. Live and Let Die offered the series' first taste of James Bondian action, complete with larger-than-life villains with armies of henchmen, surreal, spectacular hideouts, and over-elaborate death-traps from which Bond makes narrow escapes. Moonraker presented the first of Fleming's mad villains possessed of high technology, plans for mass murder, and ambitions geopolitical in scale. Goldfinger was the first to begin the tale with Bond engaging in a bit of action as he winds up a previous mission. Thunderball gave us SPECTRE – and on the whole the later books tend to come closer to the big-screen versions, even the more flamboyant ones, like From Russia With Love, Dr. No (book 1958, film 1962) and On Her Majesty's Secret Service (book 1963, film 1969).

However, where Fleming developed the essential elements slowly (and employed them inconsistently) over the course of the series, it was the genius of the films to bring them all together in a single formula at the outset; to build on what was best suited to the side of the novels it opted to develop – the extravagant, larger-than-life action-adventure, globe-trotting and often science fiction-tinged, in which the travel was first-class all the way and there was somehow always time for another kind of action as well – while dispensing with what was least suited to it; and, in the process, to invent the "adrenaline" movie that has been king of the box office ever since, as well as pop culture's principal template for cinematic depictions of sophisticated Casanova-ism for the rest of the twentieth century.

It was also their genius to widen Bond's appeal far beyond what might be expected given the source material. Part of this was the removal of the films from political reality, making them easier to take as escapist fare. In contrast with the early Cold War earnestness of the novels, the conflict's presence was muted in the films. SMERSH was out and SPECTRE in from the very start, the latter replacing the Soviets as the villains in Dr. No and From Russia With Love (1963), while Mr. Big was made an independent actor instead of a Soviet agent in Live and Let Die (1973). The writers were perhaps more aggressive in their treatment of "Red China," which replaced the Soviets as sponsor of the villains in Goldfinger (1964) and The Man With The Golden Gun. Even this had limits, however: China's relationship with Francisco Scaramanga is reduced to that of a landlord taking his rent in the form of the occasional hit, the 1967 film version of You Only Live Twice limits China's connection with SPECTRE to a brief implication, and in the big-screen version of Diamonds Are Forever, China is as much a victim of Blofeld's scheming as the U.S. and the Soviet Union.

Indeed, a measure of professional respect, and even occasional cooperation, becomes part of the relationship between the British and Soviets from The Spy Who Loved Me (1977) on, and continues all the way up to the last Cold War-themed Bond film, The Living Daylights (story 1966, film 1987). Such Soviet bad guys as do appear, like From Russia With Love's Rosa Klebb, General Orlov in Octopussy (story 1966, film 1983) and The Living Daylights' General Koskov, are renegades pushing their own agendas, with Orlov and Koskov both shot dead by the Soviets themselves for their treachery. The really bad guys were the opportunists and extremists out to exploit the situation, as in From Russia With Love, Diamonds Are Forever and The Spy Who Loved Me.

Moreover, on the occasions when Bond does play Cold Warrior, he does so as a professional, rather than an ideologue or a man pursuing a vendetta, and such clichés of right-wing paranoia as trade unionists and ethnic minorities being willing tools of Soviet and Communist subversion (prominent in the novels) are absent from the films. If Bond himself has any opinions about what he is doing, they are rarely (not never, but rarely) expressed, and the racialism, the disdain for uppity proletarians and young people, and the dislike of the welfare state and decolonization which are overtly and explicitly present in the books can be found in the films only by "reading them against the grain" (and sometimes, hardly at all).

Not unrelated to this is the treatment of Bond's own social position. As Jeremy Black has observed in his study of the series, Bond's "mode of life" is "apparently cost-free," and his sense of "'class' apparently unconnected with money or birth . . . a matter of style, not economics."8 The characteristics that mark Bond as a creature of privilege sniffing at the behavior of the lower-born are downplayed or even eliminated entirely (there being no mention of the private income he enjoys, for instance). Certainly a fair amount of snobbery remains on display, but Bond's "wisecracks and the absence of pedigree and social stuffiness" made it "possible for 1960s audiences to identify with him and to imagine that he was their type of hero."9

As Black put it, Bond is "more a cosmopolitan man than a man of class," and up to a point such cosmopolitanism carried over to downplaying the "Britishness" of the Fleming novels (even as Bond became an internationally recognized icon of Britishness).10 Just as the villains were much less likely to be instruments of Soviet policy or international Communism than exploiters of the Cold War situation, they were much more likely to be menaces to humanity and the world as a whole rather than enemies of Britain as such. In the film versions of Moonraker, Diamonds Are Forever and On Her Majesty's Secret Service, for instance, the threats to British interests seen in the novels are turned into global threats, and the producers never set any of the films wholly, or even primarily, inside Great Britain (the film Moonraker taking Bond to California, Venice and Brazil before sending him into orbit). This, too, would seem to have contributed to the international appeal of the films.

Nonetheless, significant as the achievement of the filmmakers has been, the series inevitably showed its age, and the 2000s saw a highly publicized reboot, during which the producers loudly trumpeted an intent to return to the vision of the original novels.

Such a course struck me not only as counterintuitive, but frankly absurd.

When Bond first appeared, the Cold War was in its most highly charged and volatile early period, and World War II was only a recent memory, making for a very different context for espionage, one with a dramatic tension the early twenty-first century does not even begin to approach. Britain may have been in decline in the 1950s, but it had not yet been reduced to the status of a "normal" country, and its claims to being a global player were rather more credible. This period was also the start of the nuclear age, the jet age, the space age, with all that implied for the feel of Bond's world, and we can hardly substitute for this today. Much the same goes for the dynamics of Bond's relationship with M, with Moneypenny, with the Bond girls (feminism being non-negotiable). Bond's griping like the Edwardian Etonian who created him about "Those kids today," or sitting in his office at Universal Exports worrying he is losing his edge, seemed even less plausible. And if anything, it seemed likely the films would play down rather than play up the original Bond's snobbishness and self-indulgence (his smoking, his womanizing). In the end all that left was a (somewhat) greater realism in the treatment of the plots and the action, and this is hardly enough to set the series apart – following that path, the series was bound to give up what remained of its distinctiveness, its special appeal, its personality.

So far, the reboot has only borne out such expectations.

NOTES
1. Ian Fleming, Thunderball (New York: Penguin, 2003), pp. 1-2.
2. Fleming, Thunderball, p. 3.
3. Fleming, Thunderball, p. 9.
4. Ibid.
5. Fleming, Casino Royale (New York: Penguin, 2002), p. 135.
6. Fleming, Moonraker (London: Jonathan Cape, 1975), p. 15.
7. Fleming, Casino Royale, p. 149.
8. Jeremy Black, The Politics of James Bond: From Fleming's Novels to the Big Screen (Westport, CT: Praeger, 2001), p. 211.
9. Black, p. 212.
10. Black, p. 211.

Sunday, September 25, 2011

New Review: The Politics of James Bond: From Fleming's Novels to the Big Screen, by Jeremy Black

Westport, CT: Praeger, 2001, pp. 227.

I initially became familiar with Jeremy Black through his historical writings, which include a number of excellent works (like his critique of the state of military historiography, Rethinking Military History, which I reviewed at my other blog). It later came as a pleasant surprise that he'd written a study of the James Bond series. Black examines both the novels (the original works by Ian Fleming, as well as the series' continuation by Kingsley Amis, John Gardner and Raymond Benson) and the films (focusing on the EON series, but also offering briefer mentions of 1967's Casino Royale and 1983's Never Say Never Again) as these stood at the time of writing (early 1999).

As the title indicates, the politics of the series' evolution is the main theme of that examination. Black certainly does the race/class/gender stuff fashionable in academic literary criticism, but his writing in this area is better-grounded than most such studies, and virtually free of obfuscating jargon – in part, I suspect, because being a historian rather than a professional critic, he is less prone to the failings of literary theorists. He also has a strong sense of the appeal of the works, never forgetting that they are more than objects in which to root around for ever subtler instances of yesteryear's prejudices. His background as a historian also makes him particularly well-positioned to deal with the evolving geopolitics of Britain's position during the period under study.

Still, there are ways in which the book could have been stronger. There are a number of points where I felt fuller treatment of a subject was called for, from Fleming's own personal background (certainly a significant influence here) to the scripting of The Spy Who Loved Me (during which the question of how to approach the period's politics was a major issue). Additionally, any book about a still-continuing series like the Bond novels and films dates quickly. The fact that the book's analysis ends prior to the release of the last two Pierce Brosnan-starring Bond films and the "reboot" of the series when Daniel Craig replaced him in the role (and, for that matter, before the contemporaneous real-world political changes such a work might have commented on, like the War on Terror) means that it leaves a significant amount of territory uncovered.

The result is a book that falls short of being definitive, but which is still well worth the while of anyone interested in the evolution of the franchise across its nearly six decades of existence.

Wednesday, September 14, 2011

The 2011 Summer Movie Season

With Labor Day behind us, the summer movie season is over.

Once more, the published statistics present a contrast between rising revenues and stagnating attendance, with the last two seasons regarded as the weakest since the late 1990s (1997 and 1998 having been similarly lackluster).

The backlash against 3-D (and 3-D surcharges) seems to be continuing, even as individual 3-D movies (like the last Harry Potter and third Transformers films) succeed with audiences. There would also seem to be signs of dissatisfaction with the endless parade of sequels and reboots (though the biggest – the aforementioned Harry Potter and Transformers films, and, at least when considered internationally, the fourth Pirates of the Caribbean movie – did well, each of them making more than a billion dollars), and perhaps with superheroes as well (three of the four superhero movies this summer enjoyed respectable grosses, but nothing comparable with the Spider-Man and Iron Man series, or The Dark Knight – though Thor was an exceptionally strong overseas performer, earning almost 60 percent of its $450 million take internationally).

The slow summer may also be indicative of the limited salability of the "retro" science fiction that was so abundant this past summer (like the science fiction Western Cowboys & Aliens, the World War II-set Captain America, the 1960s-set X-Men: First Class, and the 1970s-set Super 8). (Indeed, the Box Office Guru has speculated that the plenitude of this material might have been a factor in the poor turnout among the traditional target audience for the summer blockbuster, young males, who are hardly a solid audience for period pieces.)

However, it is also the case that hesitant moviegoers know they will be able to rent the DVD or stream the films now in theaters in a matter of months – in the comfort of their own homes, and rather more cheaply than if they were to go to the local multiplex, which are not small things. The unpleasantness of much of what goes along with the movie-going experience, like the massive, draining assault of commercials and previews viewers are subjected to before the film they actually came to see begins, and the company of the mental defectives who won't turn off their cell phones, are plenty of reason to keep away. So are the ever-rising prices of tickets, transport and concessions (the last universally recognized as having long been obscene). Especially in tough economic times (and remember, it's not that we're headed for another recession; it's that we never really left the last one), consumers are more conscious of the gap between the cost of that trip and the pleasure it actually affords, especially in a season in which the fare is so much less exciting to them than to the "high concept"-obsessed Suits apparently incapable of reading the scripts they greenlight.

Still, for all of the angst over the numbers, this would hardly seem to be a turning point, especially when one extends consideration from the summer season to the year as a whole. Despite all of the changes in the media market during the last three decades (the explosion in the number of channels, the proliferation of videotapes and discs, the advent of the Internet), annual ticket sales have consistently averaged 4-5 per capita in North America. The ups and downs of the film business these last few years are safely within that range, while the increasing profitability of the global market reduces the risk attendant on churning out the next $300 million sequel no one asked for. Naturally, there is ample reason to think the crassness and stupidity seen in 2011 will continue for a good long while to come, even if the current downturn proves to be a deep and meaningful shift in the market.

Monday, August 8, 2011

A Revolution of Falling Expectations: Whither the Singularity?

By Nader Elhefnawy
Originally published in the New York Review of Science Fiction 23.9 (May 2011), pp. 19-20.

Science fiction readers and writers began the first decade of the twenty-first century preoccupied by the prospect of a technological Singularity, a race toward tomorrow at such a speed that even the very near future was held to be unknowable – a vision epitomized for me by the stories of Charles Stross's Accelerando cycle. It seems we end it preoccupied by "retro" science fiction in various forms, looking not to what lies ahead of us, but to what earlier generations may have thought was ahead of them, and different ways history may have unfolded since then, viewed with more or less irony or nostalgia – steampunk, dieselpunk, and all the rest – even as the Singularity enters the literary mainstream in novels like Gary Shteyngart's Super Sad True Love Story.

Maybe that is how we will look back on the Singularity one day, the same way that we now look back at flying cars and personal jetpacks. I sometimes think that day is not far off. After all, the last ten years have seen a profound lowering of expectations since the mid- and late 1990s, when technological optimism hit a recent peak. Those were the years when futurists like Vernor Vinge, Hans Moravec and Ray Kurzweil seemed to be handing down cyber-prophecies from on high, and finding a wide, receptive audience for the claims most dramatically related in Kurzweil's 1999 book The Age of Spiritual Machines. Their ideas evoked anxiety as well as hope, and they drew challenges as well as enthusiastic acclaim, but this line of thought was certainly fecund and vibrant.

That this should have been so seems unsurprising in retrospect. The late '90s saw the extension of the "digital age" to whole new classes of consumers as use of the personal computer, the cell phone and the Internet exploded, while the American economy achieved growth rates not seen in a generation. There was, too, the associated shrinkage of the budget deficit, the reports of plunging crime rates, the apparent irrelevance of so many of the problems prominent in the "declinist" rhetoric fashionable before 1995. The worries about deindustrialization that filled national debate for several years seemed especially passé, Rust Belts and trade deficits irrelevant. As the boom was held to demonstrate, economics was about information, not the making of clunky, old-fashioned things.

More than anything that actually happened, however, there was the euphoric promise of much, much more to come. In The Age of Spiritual Machines Kurzweil predicted that by 2009 we would have self-driving cars traveling down intelligent highways, or when we preferred not to go out, immersive virtual reality in every home providing an almost complete substitute. He predicted "intelligent assistants" with "natural-language understanding, problem solving, and animated personalities" routinely assisting us "with finding information, answering questions and conducting transactions." He predicted that we would put away our keyboards as text became primarily speech-created; that our telephones would have handy translation software that would let us speak in one language and be immediately heard in another; that paraplegics would walk with orthotic devices, the blind put away their canes in favor of speech-commanded electronic navigational devices, and the deaf simplify their lives with equally capable speech-to-print technology. To top it all off, he predicted that working molecular assemblers would be demonstrated in the lab, laying the groundwork for an economic revolution unlike all those that preceded it.

In those heady days the stock market was going up like a rocket (anyone remember James Glassman and Kevin Hassett's Dow 36,000?), restless employees fantasized about early retirement as their savings accounts swelled (or even about quitting their jobs to become day traders), and would-be entrepreneurs eyed cyberspace as a bold new frontier (where it seemed like a company consisting merely of a web site really could be worth a billion dollars). The idea that prosperity would go hand in hand with ecological sanity enjoyed great currency, since the information economy was supposed to make natural resources much less important, something that was easier to believe with oil going for $10 a barrel. Besides, if worse came to worst and the world got a little too warm for our liking, we could just have our nanites eat up all the pesky carbon dioxide molecules trapping excess heat in our atmosphere, and spit them back out as the buckytubes with which we would build the skyhooks launching us on a new age of space exploration.

In short, all the rules of the game that made economics a dismal science seemed to have been repealed, and we were all supposed to be bound for globalized cyber-utopia. Yet the tech boom did not go on forever – far from it. Consumers did get new toys, courtesy of Apple. But there's no denying that the iPods and iPhones and iPads to which so many consumers thrill are a far cry from self-driving cars and immersive VR (which Kurzweil has since quietly and cautiously deferred from 2009 to the 2020s), let alone Eric Drexler-style "engines of creation."

Indeed, it would seem that the developments of the last decade represent little more than a refinement of the technologies that arrived in the '90s, rather than a prelude to those wonders that have yet to arrive, or to endless good times. Not only was the bull market on which so many hopes were pinned really and truly gone after the tech bubble burst, but that disappointment set the stage for the even bigger crisis in which we remain mired, resulting not from an overexcited response to some new technology, but speculation over that most old-fashioned of goods, real estate. After it was all over, the first decade of the 2000s saw only anemic GDP growth in the United States – a mere 1.5 percent a year when adjusted for inflation. (That's about a third lower than we managed in the much-maligned 1970s, and comparable to the 1930s.)

The picture was even worse than the growth numbers suggested, our deeper problems never having gone away. Debt and deindustrialization clearly had not been rendered irrelevant, as our swelling trade deficit and the plunging value of the dollar showed, and neither had natural resources. Despite the mediocre rate of national economic growth, commodities prices shot up, way up, oil hitting $150 a barrel. Americans grumbled at the expense of paying more at the pump – but in less fortunate parts of the globe, the result was nothing short of a crisis, as tens of millions were pushed over the line into hunger and deadly food riots made the headlines. And while the 1990s had wars, terrorism and fears about weapons of mass destruction, the September 11 attacks, and the length, intrusiveness, divisiveness and cost in blood, treasure and much else of the wars waged in their aftermath, have been a rude and powerful shock. (The changing experience of commercial flying, a relatively small matter when all things are considered, by itself seems enough to refute the image of the world as a place of borderless convenience.)

There's no shortage of ideas for fixes, but in an echo of the history of many a past troubled society that failed to halt its decline, those ideas remain largely talk, the rate of actual progress maddeningly glacial, or even negative. This is not to say that we don't expect change, however. We do expect change, plenty of it. But we expect that ever less of it will be benevolent, or even to include much that can be thought of as exciting. Pundit Michael Lind has gone so far as to dub the first half of the twenty-first century "The Boring Age." Given what we were promised, The Boring Age is necessarily an Age of Disappointment. Given the challenges facing us, the failure to grow and innovate seems all too likely to make the Boring Age also an Age of Failure and an Age of Catastrophe – and not the kind of ultra-high-tech catastrophe that those credulous toward but anxious about a Singularity (like Bill Joy) have in mind, but the far more banal ends we have dreaded so much more for so much longer. (The word "Malthusian" comes to mind.)

The claims by writers like Lind still go against the conventional wisdom, of course. A great many people are very enthusiastic about the implications of the latest consumer products, and dismissive of the failure of the grander promises to arrive. Still, I can't help being struck by how the production of original claims and arguments for the technological Singularity supposed to signify such change has fallen off since the tech boom years. Surveys of technological futurology like defense analyst Peter Singer's Wired for War increasingly seem to go over and over the same aging list of authors and works (Kurzweil, Vernor Vinge, Hans Moravec, and the rest), rather than presenting the new ideas and new advocates one might expect if this line of thought were more deeply influential. Indeed, I have found the vigor of the critics of Singularitarianism far more striking as of late, with talk of a backlash against the Singularity recently prominent even within science fiction, the word going from selling point to liability in book titles. In place of techno-optimism, Paolo Bacigalupi's portraits of eco-catastrophe seem more representative of our moment.

Of course, social, economic and political trends may not be the whole story. Thesis invites antithesis, and there is no question that the associated tropes have been heavily used for quite a while now, so that even as good novels continue to be written about the theme of runaway technological change sweeping us toward the realization of new dreams (and new nightmares), a case can be made for letting the theme lie fallow a while for purely artistic reasons. Still, looking back, science fiction seems to have been more vibrant in boom times, less so in periods of bust – perhaps because the genre flourishes when utopian hopes are in tension with the inevitable dreads, rather than surrendering the field to them. Just as history and economics may lean toward cyclical theories in bleak times, such doldrums seem conducive to gloomy scenarios of thwarted progress, or escape into outright fantasy, in our speculative fiction.

Even without going so far as that, with the Singularity rebuffed we are shoved away from our most extreme visions of a world transformed. The alternatives being less radical, and most of them quite amply treated by earlier writers, they cannot but appear old-fashioned, and as a practical matter their handling tends to result in a recycling of yesteryear's ideas. (Even Vinge wasn't able to get away from old standards like nuclear war and Gunther Stent-like "golden ages" in his 2007 seminar "What if the Singularity Does NOT Happen?") The temptation and tendency to look backward rather than forward is all the stronger since a self-consciously retro approach is all about the relationship of the present with the past. Whether the purpose is to take refuge from a frightening, alienating or unpromising present (or future) in a romanticized past, imagine other directions we might have taken (or try and understand the one we did take), search where we came from for clues to where we may be going, or simply savor the aesthetic and other pleasures of creative anachronism, the pull of stories about a modified yesteryear is powerful indeed under such circumstances.

Still, even if the Singularitarians prove to be wrong about particular theories (like Kurzweil's "law of accelerating returns") or the precise timing of particular innovations (like the prediction that mind uploading will be practical circa 2045), they may yet be right about the broader direction of our technological development. It is worth remembering that artificial intelligence research has been an area of repeated booms and busts, the raising of high hopes followed by crushing disappointments that give way to a renewal of optimism some time later, for at least half a century now, AI spring following AI winter. It is worth remembering, too, that Singularitarian thinking did not begin with Vinge, Kurzweil and company, going back as it does not only through the whole history of science fiction, but serious thinkers like I.J. Good and J.D. Bernal, and even before them.

For as long as there has been a human condition, human life has been defined by the struggle against its limits, and the aspiration to transcend them. A straight line can be drawn from the builders of the splendid tombs of antiquity (if not their unrecorded, nameless predecessors), through the alchemists' pursuit of the philosopher's stone and the Renaissance humanism celebrated in Pico della Mirandola's "Oration on the Dignity of Man," down to the debate between William Godwin, the Marquis de Condorcet and Thomas Malthus, to the Russian Cosmists who were the first to articulate a truly posthuman vision, and, only after many more evolutions, to the Singularitarians who made such a splash in the 1990s. It seems inconceivable they will be the last to espouse the idea, or to claim that the Big Moment is close at hand.

In short, the more basic idea is more likely on the back burner than on its way out. A comeback is all but certain, both inside and outside the genre, and perhaps sooner than we can guess.

Saturday, July 9, 2011

Of Mary Sue and Gary Stu

The term "Mary Sue" (and its male counterpart, "Gary Stu") typically refers to idealized characters that seem to function principally as wish-fulfillment fantasies, implying one-dimensionality, and a narrowed range of dramatic possibility.1 It is also generally regarded as pejorative, and the proclamation of a character as a Sue or a Stu widely taken for an attack.

Still, Sues and Stus are pervasive in fiction, across the media spectrum. After all, a writer putting a version of themselves, but better, to paper is likely to be well inside Sue/Stu territory. The same goes for a writer creating an idealized figure with whom they hope their reader will identify in just that way. A writer setting out to create a character who is a role model, a "positive counterstereotype," or anything else of the kind has to be very careful indeed if they don't want to cross that line.

One may go so far as to say that the Sue/Stu is a default mode for fiction, so standard in some stories that we barely notice the Sue-ness. After all, what does the average television show offer? Attractive people saying polished, scripted things, usually while wearing nice clothes and sitting in handsomely furnished rooms. The world of the TV professional--the doctors and lawyers who give the impression virtually no other kind of work exists--tends to be a fantasy of competence and glamour, with most of the ethical rough edges rubbed off. The TV physician or surgeon is always brilliant, and always deeply concerned with the care of the patient; the worst that can be said of them is that they're cranky or eccentric, like John Becker (Becker) or Perry Cox (Scrubs) or Gregory House (House), and even that crankiness is often presented as part of their excellence--for instance, in their demanding the best from their subordinates, or their willingness to buck the system to save a life. From the deadpan Law & Order to the strutting, speechifying (and surprisingly musical) inanities of David E. Kelley-produced dramedies like Boston Legal, the TV attorney is little different, and usually better dressed, their well-cut suits rather more becoming than a surgeon's scrubs.

It's not a real doctor or lawyer viewers are seeing, but the fantasy version of what they do and how they live--and it may well be that some write their Mary Sues and Gary Stus defensively, to forestall the "minority pressure" (to use Ray Bradbury's term) that would descend on them if an influential group (such as doctors and lawyers happen to be) were to be offended by a more realistic depiction.

Leave aside the upper middle-class vision of life predominant in TV drama for a moment, and consider the action hero instead. Most of these have been the author's Gary Stu. Robert Howard had Conan, Ian Fleming had James Bond, Clive Cussler had Dirk Pitt, and Tom Clancy had Jack Ryan and Mr. Clark. Ex-Navy SEAL Richard Marcinko was his own Gary Stu, picking up where his autobiography left off with a series of action-adventure novels--and even a first-person shooter--in which Richard Marcinko himself is the star. Such characters are smarter, stronger, tougher, braver and generally better than ordinary people because they have to be; they wouldn't survive what their authors put them through otherwise. (It takes a cartoon character to make it in a cartoon world.) Very often they have ridiculously large portfolios of skills, because it's the only way the writer can keep them at the center of the plot, moving it along, rather than having to cede the spotlight to other characters.

In fact, outright geniuses in all fields are a dime a dozen, and their intellectual feats--the lightning-fast hacks of computer systems they've never seen before; the immediate improvisation of effective on-the-spot solutions to world-ending crises; the overnight, single-handed creation of futuristic inventions that in real life defy the scientific-industrial complexes of the most powerful countries on Earth--bear about as much relationship to the capabilities and accomplishments of real-life geniuses as the physical feats of comic book superheroes do to those of the greatest athletes. (I suspect few appreciate the latter, though. Unbelievable as it may seem, the idiocies of Eureka's scripts reflect a widespread view of how scientific R & D actually goes, and there are actual working engineers who lament their employers will never give them the chance to be Reed Richards.)

In short, Mary Sues and Gary Stus do have their weaknesses, especially when judged by a standard of realism and character drama, but they certainly have their uses and their places, which is why writers write them, and why so many readers gravitate to them (or at least, stories featuring them). Unfortunately, writers don't always recognize those uses and places, and make a botch of things by placing them in an inappropriate context--or simply by writing the character badly.

The truth is that a good Sue/Stu is just as tough to write as a more realistic character, and perhaps even tougher. It's not easy to make such characters likable, let alone relatable. It's not easy to sustain suspense when the hero's unbeatable and indestructible. And all too often, the execution ends up lacking subtlety, or nuance. The writer pushes things too far. We end up with a character who is not just a talented winner, but someone who is talented at everything, always right, always gets the last word, always one-ups everyone else, always gets away with everything, is always being fawned over for their abilities and general wonderfulness--and it's quite a feat to keep this from getting wearisome in a hurry, one that exceeds the skill of many an author. In other cases they make the character a surrogate for their nastier impulses. We end up with a braggart, a jerk, a bully or worse, and so we sympathize, and empathize, with the people who have to put up with them instead of our ostensible protagonist.

Nonetheless, even in the best of cases, the plain and simple truth where fantasy is concerned is that we don't all have the same fantasies. Quite the contrary, others' fantasies very often touch our insecurities, our bitter memories of rejection or failure, our deepest, pettiest resentments and antipathies; we can experience someone else's wish-fulfillment as an assault on our own wishes, and even our own being (and it must be admitted, a Sue or Stu is sometimes conceived as exactly that, the wishes to be fulfilled by them not pleasant for others, or the Other). And at the same time, we can be irrationally offended when someone criticizes our favorite Sues and Stus.

In principle, there's no reason why one can't enjoy one's own Sues and Stus, recognize the right of others to the same, and still be annoyed by particular fantasy characters, still call characters what they are, and respond appropriately to genuinely pernicious Sues and Stus. Alas, the elevation of gut feelings to moral principles by identity politics, and all the irrationality and hypocrisy and viciousness that go with this, have made the unavoidable wrangling about Mary Sues and Gary Stus far more bitter than it has to be, so that in many a situation one hesitates even to verbalize the identification. (Call it the "politics of personal fantasy.") It's yet another way in which real life is unkind to fantasy--and unalloyed fantasy, a thing one shares at one's own risk.

1. The 1974 short story "A Trekkie's Tale," actually a parody of the tendency in fan fiction about Star Trek, is the source of the term.
