Friday, November 24, 2017

David Walsh on the Oscar Contenders

Earlier this month I mentioned David Walsh's review of Django Unchained, and his remarks on Zero Dark Thirty. Interestingly, just two days before the Academy Awards ceremony, he has produced a follow-up to these reviews in which he extends his analysis of the weaknesses of those films, and of why they have attracted such enthusiasm among the "pseudo-left" (in comparison with, for instance, Steven Spielberg's Lincoln), noting that this
well-heeled social layer, conditioned by decades of academic anti-Marxism, identity politics and self-absorption, rejects the notion of progress, the appeal of reason, the ability to learn anything from history, the impact of ideas on the population, mass mobilizations and centralized force. It responds strongly to irrationality, mythologizing, the "carnivalesque," petty bourgeois individualism, racialism, gender politics, vulgarity and social backwardness.
This is as succinct and forceful a description of the politics of postmodernism, and the way in which it manifests itself in the taste for a certain kind of "dark and gritty" storytelling, as I have seen in some time.

You can read Walsh's full remarks here.

Also of interest to those following this line of cultural criticism: David Walsh's review of the recent film Not Fade Away, in which he debunks the romanticizing of the 1960s as a "golden age in which everything seemed possible and revolution was in the air."

Monday, September 18, 2017

Reassessing Kurzweil's 2009 Predictions

Raymond Kurzweil's 1999 book The Age of Spiritual Machines remains a touchstone for futurologists and their critics, not only because of Kurzweil's continuing presence within the information technology and futurology worlds, but also because the book offered multiple, lengthy, even comprehensive lists of forecasts. Many of his forecasts were about subjects far outside his realm of expertise (macroeconomics, geopolitics, military science, etc.), and unsurprisingly were commonplaces, but a good many of them were precise technological forecasts lending themselves to testing against the day's technological state of the art--and thereby to checking the progress of the technologies with which the book was concerned, and perhaps also our progress toward the technological Singularity he claimed we were approaching.

Naturally a good many observers (myself included) reviewed Kurzweil's 1999 predictions for 2009 when that year arrived. Different writers, depending on which forecasts they chose to focus on, and how they judged them, came to different conclusions about his accuracy. I emphasized those precise, easily tested technological forecasts, and saw that many had not come to pass. I noted, too, that while no one factor accounted for each and every error, there was a recognizable pattern in a good many of them, namely that Kurzweil had assumed advances in neural networks would enable the kind of "bottom-up" machine learning that makes for better and improving pattern recognition--as with the speech recognition that was supposed to give us the Language User Interfaces we never really got.

Looking back on those predictions from almost a decade on, however, it seems worth remarking that the improvement in neural nets was indeed disappointing in the first decade of the twenty-first century--but got a good deal faster in the second decade. And right along with it there has been improvement in most of the areas about which he seemed to have been overoptimistic, like translation technology. Indeed, as one of those who looked at his 1999 predictions for 2009 and was underwhelmed, I find myself increasingly suspecting that Kurzweil was accurate enough in guessing what would happen, and how, but, due to overoptimism about the rate of improvement in the underlying technology, was off the mark as to when by roughly a decade.

Moreover, this does not seem to have been the only area where this was the case. The same may go for the use of carbon nanotubes, another area where, after earlier disappointments, technologists this decade achieved successes that may plausibly open the way to the new designs of 3-D chips he described, permitting vast improvements in computer performance. (As it happens, Kurzweil guessed that by 2019 3-D chips based on a nanomaterial substrate would be the norm. If IBM is right, he will have been off by rather less than a decade in this case.) The same may also go for virtual reality--which likewise seems to be running a rough decade behind his guesses.

The advance in all these areas has, along with a burst of progress in other areas in which Kurzweil took much less interest (like self-driving cars of a different type than he conceived, and electricity production from renewable sources), contributed to a greater bullishness about these technologies--a techno-optimism such as I do not think we have had since the heady years of the late 1990s when access to personal computing, the Internet and mobile telephony began the proliferation and convergence that has people watching movies off Netflix on their "smart" phones while sitting on the bus.1

Of course, it should still be remembered that much of what has been talked about here is not yet an accomplished fact. The advances in AI have yet to make for proven improvements in consumer goods, while those 3-D nanotube-based chips have yet to enter production. At best, the first true self-driving car will not hit the market until 2020, while renewable energy (hydroelectric excepted) is just beginning to play a major role in the world's energy mix. Moreover, that the recent past has seen things pick up does not mean that the new pace of progress will remain the new norm. We might, in line with Kurzweil's expectation of accelerating returns, see it get faster still--or slow down once more.

1. Kurzweil's self-driving cars were based on "intelligent roads," whereas the cars getting the attention today are autonomous.

Review: The Wages of Destruction: The Making and Breaking of the Nazi Economy, by Adam Tooze

New York: Penguin, 2006, pp. 799.

I remember that, reading Paul Kennedy's The Rise and Fall of the Great Powers back in college, I pored over the figures for Germany's World War II-era income and production and felt that something didn't quite fit. German arms output seemed a good deal less impressive than I expected given the country's image as an ultra-efficient manufacturing colossus, and the fact of its control over nearly the whole of the European continent.

The common explanation for this was political--that an irrational Nazi regime, for various reasons (its slowness to move to a total war footing out of fear of a repeat of the Revolution of 1918, a self-defeating brutality in its exploitation of occupied territories, Hitler's self-interestedly playing various interests off against each other at the expense of the war effort, etc.) failed to properly utilize its potentially overwhelming industrial capacity (at least, prior to the arrival of Albert Speer, who came along too late to accomplish very much).

By and large, I accepted this explanation.

More recently Adam Tooze's Wolfson Prize-winning study The Wages of Destruction: The Making and Breaking of the Nazi Economy offered a different view--that the question was not a matter of political failure of this type, but of the material limitations of the German economic base. Germany was simply not big enough or rich enough to hold its own against an alliance of the United States, the British Empire and the Soviet Union--the coalescence of which Tooze contends was virtually inevitable, so that Hitler's attempt to create a continental empire in the east in the face of their opposition was simply not explicable in rational terms. (Indeed, he holds it to be comprehensible only in terms of Hitler's bizarre, conspiratorial, apocalyptic racial theories.)

To the end of making that case Tooze devotes his book to a study of the German economy through the crucial period (1933-1945), less on the basis of new data than of old and well-known data (more or less like what I saw reading Kennedy) that has simply been marginalized in a great deal of analysis of the war. As he notes, Germany was the world's second-largest industrial power in the early twentieth century, well ahead of its European neighbors in manufacturing productivity and output, and "world-class" in several key heavy industrial and high-tech lines like steel, chemicals, machine tools and electrical equipment.

However, German manufacturing appears less impressive when one looks at the sector overall. While Germany still excelled its European neighbors in productivity when taken this way, its lead was slighter than might be imagined--so much so that the difference between Germany and, for example, Britain was negligible in comparison with the difference between Germany and the much more efficient (because much more "Fordist") United States, America getting twice as much per-worker output in many lines. There were even key heavy/high-tech lines where Germany remained a relatively small producer--like motor vehicles, the volume of output of Germany's automotive industry lying far behind that of the U.S.

Moreover, manufacturing was just one sector of the economy--with the other sectors even worse off. The country's comparatively small, densely populated and less than resource-rich territory was problematic from the standpoint of its extractive sectors, which were also less productive than they might have been. The country's agricultural sector, notably, lagged not just the U.S. but even Britain in productivity, suffering from having too many of its workers on too little land divided into too many farms. All of this contributed not only to the country's per capita and national income being less than its manufacturing successes implied, but to the harsh reality that those manufacturing successes depended heavily on imports of food, energy and raw materials, to the point that the country's trade was scarcely balanced by its manufacturing exports.

The result was that not only was Germany poorer than is usually appreciated, but its resource and financial situation, differing in this manner from that of the U.S. (resource-rich) or Britain (resource-poor but financially powerful, with the trading privileges empire brings), left it with less room than they had to export less or import more, as a major military build-up demanded. Moreover, the attempts to alleviate this constraint by developing domestic sources of imported materials like iron, and inventing synthetic substitutes for the oil and rubber Germany could not produce locally, on top of entailing additional inefficiencies, fell far short of eliminating the problem--leaving the country very vulnerable to the cut-off of essential imports, and meaning that something would have to give, fairly quickly.

This reading of the history is borne out by what actually did happen, the German government making ruthlessly efficient use of its domestic resources almost immediately on taking power--the facts refuting such clichés of the historiography as the unwillingness of the German government to sacrifice its people's living standards or put women to work for the sake of its military effort. (Indeed, already in the '30s the German government's system of central control of foreign exchange and key materials was gutting the production of consumer items like clothing in favor of armaments, while Germany had more of its women in the work force than Britain did.) However, for all that, Germany's pre-war preparations alone--its drive for maximum short-term output in the key military-industrial lines--translated into severe, unsustainable economic strain before the fighting even began, evident in the neglect of key infrastructure like the railroads, the price distortions caused by intricate central control, and the emergence of crises of cash flow and foreign exchange in spite of all the managerial feats and painful sacrifices. The inescapable recognition that the German economy by 1938 was a house of cards forced the government to curtail its rearmament programs at a point at which it was still nowhere near being a match for Britain at sea, and disadvantaged in the face of combined Anglo-French air and ground forces in key respects (like numbers of up-to-date tanks), with the balance already swinging away from it toward its opponents.1 And even the cutbacks still left it in an economically precarious position.

Of course, the fighting did begin in 1939, and Germany did make successful conquests during the first year that enlarged its resource base. However, in hindsight the conquests added less than might be imagined because of the limited size, resource bases and development of the countries it overran--not just the nations of Eastern Europe, but those of Western Europe (Norway and Denmark, the Low Countries, and even France) as well.2 The result was that even the conquests fell far short of putting Germany on an even footing with the combined Big Three, the disparity still so wide as to moot many of the claims about how this or that more careful use of Axis resources by the Nazi leadership might have turned the tide of the war.3 Tooze reinforces this contention by showing that, for the most part, the Germans did what could be done with what was at their disposal (puncturing the "myth" of Albert Speer).4

A few questionable analogies apart (as when he compares Germany's economic development in the 1930s to that of mid-income countries like South Africa today), Tooze offers a comprehensive and convincing image of the German economy, and especially its critical weaknesses. Moreover, while he pays less detailed attention to the other side of the balance sheet (it seemed to me that he was insufficiently attentive not just to France's demographic and industrial vulnerabilities but to Britain's financial and industrial weaknesses and problems of military overstretch in the late '30s), he is quite correct to contend that Germany, and even a Germany in control of much of continental Europe, was at a grave disadvantage in a prolonged war with a U.S.-U.K.-Soviet alliance.

Nonetheless, this emphasis on the inability of Germany to hold its own against the combined Allied Big Three obscures a weakness much more important than any deficiency of the economic analysis--namely his depiction of the anti-Nazi alliance's coming together the way it did as a virtual certainty, a case he makes in a manner superficial and conventional compared with his economic analysis.5 He slights the leeriness of the U.K. and France about allying themselves with the Soviet Union, which actually did prevent their coming together in 1939, with disastrous consequences. (Even after Poland, French conservatives were much more interested in fighting the Soviets than fighting Hitler. Indeed, in William Shirer's account of the Third Republic's collapse, the response of many of them to French defeat appears to have been a George Costanza-like "restrained jubilation" at the prospect of dispensing with democracy and setting up a fascist state.)

Continuing along these lines, Tooze then goes on to slight the reality that, after the shock of the Battle of France (which brought Italy into the war on Germany's side), the hard-pressed, cash-strapped U.K. might have sought a negotiated peace, or that isolationism might have won the day in the United States, depriving Britain of American support. (In its absence the country would have had to settle with Germany--and the U.S. would have had few plausible options against Germany afterward, in spite of Germany's failures in the Battle of Britain and its naval limitations.) These developments would not have changed Germany's asset base--but they would have taken Britain out of the war, and likely kept Germany from having to fight the United States as well, leaving it free to focus on just the Soviet Union, which would have changed the balance between them dramatically. Equally, Tooze does not acknowledge, let alone refute, the possibility that, in spite of the balance of power being as heavily against it as it was, different German military decisions (most obviously "the Mediterranean strategy") could have produced a different outcome.

Additionally, these oft-studied "What ifs" aside, it is worth remembering the way the war did go. After Britain decided to stick it out against Germany, American support was exceedingly slow in coming, Britain going bankrupt in March 1941 before the aid spigot was turned on fully. Moreover, even with Britain still against it, and even after a delay of one crucial month due to the German invasion of the Balkans, mid-course changes to the plan of attack that cost additional time, and famously inimical weather, German troops managed to come within sight of Moscow before the invasion's first really significant repulse. And even afterward a depleted Soviet Union remained vulnerable (while any Anglo-American "second front" remained a long way off), permitting Germany to continue taking the offensive through the following year, up to and even after the turning point of Stalingrad--following which there were another two-and-a-half years of fighting before V-E Day.

One should also remember the cost of this protracted effort. The Soviet Union lost some twenty-five million lives. Britain lost what remained of the foundations of its status as a world empire, a great power, and even an independent force in world affairs (in such a weak position that it had to take a $4 billion U.S. loan immediately after the war). The U.S. did not suffer anything comparable, but it is worth remembering that its extraordinary industrial effort went far beyond what even the optimists thought possible at the time--and was a thing the Allies could not have taken for granted in the earlier part of the war, even after the American entry.

Consequently, while in hindsight it appears clear that Germany's chances of attaining anything resembling its maximum war aims were never more than low (indeed, would probably be deemed delusional but for the astonishing Allied timidity and incompetence of the key September 1939-May 1940 period), became still less plausible after the failures of German forces against the Soviet Union in late 1941, and were virtually inconceivable by the time 1942 came to a close, a victory so long in coming, so horrifically costly and so deeply exhausting cannot be regarded with the complacency Tooze shows. Indeed, in his failure to critically examine this side of the issue, it appears that while Tooze takes on, and debunks, one myth of World War II (that of German economic might), he promotes others that have themselves already been debunked--most importantly, that of the political establishments of the future members of the wartime United Nations having been steadfast opponents of Nazism from the very beginning.6

1. The initial plans had called for an air force of 21,000 planes when Germany went to war. The Luftwaffe never had more than 5,000, however.
2. Paul Bairoch's oft-cited 1982 data set on world manufacturing output through history gives Germany 13 percent of world industrial capacity in 1938. All the rest of non-Soviet, continental Europe together was another 13 percent, widely dispersed among its various nations, not all of which Germany controlled (Sweden and Switzerland remained independent), while the productivity of Germany and occupied Europe was hampered by the cut-off of imported inputs resulting from the Allies' control of the oceans. By contrast, Bairoch's data gives the U.S. alone 31 percent of world manufacturing output in 1938 (more than all non-Soviet Europe combined).
3. To return to Bairoch's figures, the U.S. in coalition with the U.K. (another 11 percent) and the Soviet Union (another 9 percent) possessed 51 percent of world manufacturing capacity--roughly a 4-to-1 advantage over Germany's 13 percent, and a 2-to-1 advantage over the 26 percent of all non-Soviet, continental Europe combined (independent states that merely traded with Germany included). Moreover, even these figures understate the margin of Allied advantage, given how the U.S. economy boomed during the war years (growing by about three-quarters), and how Britain brought its broader Empire, with its additional resources, into the alliance. Considering access to the food, energy and raw materials needed to keep all that capacity humming would seem likely to widen the disparity rather than diminish it.
In fairness, this was somewhat offset by two factors: the German conquest of much of the western Soviet Union, and the absorption of a portion of Allied resources by the simultaneous war in the Asia-Pacific region. However, much of the Soviet industrial base in the conquered parts was hurriedly relocated eastward ahead of the German advance, and contributed to the Soviet productive effort (with more still denied to the Germans). At the same time the diversion of Allied resources by the Pacific War was limited by Japan's more modest military-industrial capability (it had just 5 percent of world manufacturing in 1938), China's weight on the Allied side (much of the Japanese army and air force being tied up fighting the Chinese), and the Allies' prioritization of the war in Europe (the "Europe First" or "Germany First" policy).
4. Tooze contends that the rise of German output in the last part of the war was largely the result of decisions taken earlier in it, with Speer simply around when those decisions paid off in the form of production gains. Tooze also offers a critical view of those initiatives Speer did undertake, like the effort to mass-produce U-boats late in the war.
5. While the critique of German strategy is solid, Tooze treats Allied strategy much less critically--a particular weakness when it comes to his assessment of the Combined Bomber Offensive.
6. The body of work regarding Britain in this respect can be found distilled in Clive Ponting's impressive 1940: Myth and Reality.

Reading Revisionism

It seems to me that I have been running across a great deal of revisionist World War II-era economic history of late, focused primarily on how the balance stood between two of the war's major participants--Britain and Germany.

The old view was that the British Empire was stodgy and enfeebled, and looked it when it went up against sleek, ultra-modern, ultra-efficient Germany.

Much recent historiography has taken a different view--that Germany was not really so tough, nor Britain's performance so shabby. That it was really Germany that was the economic underdog against the trading and financial colossus--and, for all its dings, the techno-industrial colossus--Britain happened to be.

Of course, there is nothing wrong with revisionism as such. New information, and the availability of new ways of looking at the old, should prompt rethinkings of old perceptions and assumptions. However, revisionism also has a way of pandering to the fashion of the times--and this seems to be one of those cases.

This line of argument seems to say that Britain didn't do so badly out of free trade. That the country's weaknesses in manufacturing just didn't matter that much--that the emphasis on finance instead was just fine. That, by implication, the conservative establishment did just fine, overseeing the economy, and managing the country's foreign affairs, and then fighting the war--not needing any help from a bunch of lefties and Laborites, thank you very much, and certainly not disgracing itself the way they claim it had clearly done post-Dunkirk.

It seems more than coincidence that this is all very congenial to a conservative outlook--in particular the one that today says free trade, financialization, deindustrialization, the balance of payments (and all the rest of the consequences the neoliberal turn has had for the economies of the developed world) simply do not matter, that our future can be trusted to economic orthodoxy, that critical and dissenting views have nothing to offer. And also the one that says Britain can do just fine on its own, without any allies, continental or otherwise.

It is no more convincing in regard to the past than it is in regard to the present.

Still, even if one doubts the more extreme claims (there is simply no refuting Britain's failures as a manufacturing and trading nation), there is a considerable body of hard fact making it clear that Germany had some grave weaknesses--and Britain, some strengths--that have to be acknowledged in any proper appraisal of the way the war was fought. So does it go in two books I have just reviewed here--Adam Tooze's study of the German economy in the Nazi period, The Wages of Destruction; and David Edgerton's more self-explanatorily titled Britain's War Machine.

Friday, August 18, 2017

Animated Films at the U.S. Box Office, 1980-2016: The Stats

Considering the stats on the action film (such films now typically account for six of the ten top-grossing films in a given year), I naturally found myself wondering--what about the other four? Naturally I thought of animated films--and again noticed a trend, one that exploded a little later. In the '80s there was not a single animated feature in the top ten of any year, apart from the partial exception of Who Framed Roger Rabbit? (1988). (Even the celebrated The Little Mermaid only made the #13 spot.) However, starting with 1991's Beauty and the Beast it became common in the '90s for there to be at least one such film in the top ten (six years had at least one, for an average of 0.8 a year), more standard in the 2000s (1.7 films a year, with 2008 having four such movies), and more standard still in the 2010s. Every single year from 2010 on has had at least one, every year but 2011 at least two, and 2010 an amazing five of the top ten--clearly at the expense of the action movies, which had an especially poor showing that year.

In the 2010s that means the action movies and the animated films, together accounting for eight to nine of the top ten movies in any given year, pretty much have the uppermost reaches of the box office monopolized in a way that, so far as I can tell, no two genres have ever managed before. The one or two spots remaining? One is often taken up by something similar--like a live-action version of an animated hit of yesteryear (2014's Maleficent, or 2015's Cinderella)--leaving at most one other for any and everything else, from adult-oriented comedies (like 2012's Ted) to the occasional mainstream drama (like 2014's American Sniper).

Feel free to check my data (and my math) for yourself. (A short script for rechecking the arithmetic follows the data set below.)

The Data Set
1980-1989: 0.
1990-1999: 8. (0.8 a year.)
1991-1: Beauty and the Beast.
1992-1: Aladdin.
1994-1: The Lion King.
1995-2: Toy Story, Pocahontas.
1998-1: A Bug's Life.
1999-2: Toy Story 2, Tarzan.
2000-2009: 17 films (1.7 a year).
2000-0
2001-2: Shrek, Monsters Inc.
2002-1: Ice Age.
2003-1: Finding Nemo.
2004-3: Shrek 2, The Incredibles, Polar Express.
2005-1: Madagascar.
2006-3: Cars, Happy Feet, Ice Age 2.
2007-1: Shrek 3.
2008-4: Wall-E, Kung Fu Panda, Madagascar 2, Dr. Seuss' Horton Hears a Who.
2009-1: Up.
2010-2016: 19 films (2.7 a year).
2010-5: Toy Story 3, Despicable Me, Shrek 4, How to Train Your Dragon, Tangled.
2011-1: Cars 2.
2012-2: Brave, Madagascar 3.
2013-3: Frozen, Despicable Me 2, Monsters University.
2014-2: The LEGO Movie, Big Hero 6.
2015-2: Inside Out, Minions.
2016-4: Finding Dory, The Secret Life of Pets, Zootopia, Sing.
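
For anyone who would rather not recheck the arithmetic by hand, here is a minimal sketch in Python (a language chosen purely for convenience); the year-by-year counts are simply transcribed from the data set above, with years absent from the list counting as zero:

    # Animated features in the yearly box office top ten, per the data set above.
    counts = {1991: 1, 1992: 1, 1994: 1, 1995: 2, 1998: 1, 1999: 2,
              2001: 2, 2002: 1, 2003: 1, 2004: 3, 2005: 1, 2006: 3, 2007: 1, 2008: 4, 2009: 1,
              2010: 5, 2011: 1, 2012: 2, 2013: 3, 2014: 2, 2015: 2, 2016: 4}

    for start, end in [(1980, 1989), (1990, 1999), (2000, 2009), (2010, 2016)]:
        # Total films in the span divided by the number of years in it.
        total = sum(counts.get(year, 0) for year in range(start, end + 1))
        print(f"{start}-{end}: {total} films ({total / (end - start + 1):.1f} a year)")
    # Prints 0 (0.0), 8 (0.8), 17 (1.7) and 19 (2.7 a year) for the four spans.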

Wednesday, July 5, 2017

The Action Film Becomes King of the Box Office: Actual Numbers

It seems generally understood that the action movie took off in the '80s, and, while going through some changes (becoming less often R-rated and more often PG-13-rated, setting aside the cops and commandos to focus on fantasy and science fiction themes), captured a bigger and bigger share of the box office until it reached its predominant position today.

Still, this is something people seem to take as a given without generally checking it for themselves, and so I decided to do just that, looking at the ten highest-grossing films of every year from 1980 to the present, as listed by that ever-handy database, Box Office Mojo.

Of course, this was a trickier endeavor than it might sound. I had to decide, after all, just what to count as an action film. In my particular count I strove to leave aside action-comedies where the accent is on comedy (Stakeout, Kindergarten Cop), sports dramas (the Rocky series), suspense thrillers (Witness, Silence of the Lambs, Basic Instinct), war films (Saving Private Ryan, Pearl Harbor), historical epics (Titanic) and fantasy and science fiction movies (Harry Potter) that may contain action but strike me as not really being put together as action films. I also left out animated films of all types, even when they might have qualified (The Incredibles), to focus solely on live-action features. Some "subjectivity" was unavoidably involved in determining whether or not films made the cut even so. (I included Mr. and Mrs. Smith in the count for 2005. Some, in my place, probably would not.) Still, I generally let myself be guided by Box Office Mojo when in doubt, and I suspect that, especially toward the latter part of the period, I undercounted rather than overcounted, and in the process understated just how much the market had shifted in the direction of action films (and that, to be frank, seemed the safer side on which to err).

Ultimately I concluded that 2.2 such films a year were average in the '80s (1980-1989), 3.6 in the '90s (1990-1999), 4.8 in the '00s (2000-2009), and 5.4 for the current decade so far (2010-2016). Moreover, discarding the unusual year of 2010 (just 2 action movies), a half dozen action films in the top ten have been the norm for the last period (2011-2016). This amounts not just to a clear majority of the most-seen movies, but close to triple the proportion seen in the '80s.

Moreover, the upward trend was steady through the whole period. It does not seem insignificant that, while 2 a year were average for the '80s, 1989 was the first year in which the three top spots were all claimed by action movies (Batman, Indiana Jones and Lethal Weapon 2 at #1, #2 and #3 respectively). Nor that no year after 1988 had fewer than two such movies in the top ten, and only 3 of the 28 years that followed (1991, 1992 and 2010) had under three. Indeed, during the 2000-2009 period, not one year had fewer than four movies of the type in its top ten, and after 2010, not one year had under six.

Especially considering that I probably undercounted rather than overcounted, there is no denying the magnitude of that shift by this measure. And while at the moment I have not taken an equally comprehensive approach to the top 20 (which would provide an even fuller picture), my impression is that it would reinforce this assessment, rather than undermine it. If in 1980 we look at the top 20, we still see just one action movie--Empire Strikes Back. But if we do the same in 2016 we add to the six films in the top ten Doctor Strange, Jason Bourne, Star Trek Beyond and X-Men: Apocalypse, four more films (even while we leave out such films as Fantastic Beasts and Where to Find Them, and Kung Fu Panda 3).

Anyone who cares to can check my counting below and see if they come up with anything much different. (A short script for rechecking the arithmetic again follows the data.)

The Data
1980-1989: 22 films (2.2 a year).
1980-1: Empire Strikes Back.
1981-2: Raiders of the Lost Ark and Superman II.
1982-2: Star Trek 2 and 48 HRS.
1983-3: Return of the Jedi, Octopussy, Sudden Impact.
1984-3: Beverly Hills Cop, Indiana Jones, Star Trek 3.
1985-2: Rambo 2, Jewel of the Nile.
1986-2: Top Gun, Aliens.
1987-3: Beverly Hills Cop 2, Lethal Weapon, The Untouchables.
1988-1: Die Hard.
1989-3: Batman, Indiana Jones, Lethal Weapon 2 (all in the top 3 slots, the first time this ever happened).

1990-1999: 36 films (3.6 a year).
1990-4: Teenage Mutant Ninja Turtles, The Hunt for Red October, Total Recall, Die Hard 2.
1991-2: Terminator 2, Robin Hood: Prince of Thieves.
1992-2: Batman 2, Lethal Weapon 3.
1993-4: Jurassic Park, The Fugitive, In the Line of Fire, Cliffhanger.
1994-3: True Lies, Clear and Present Danger, Speed.
1995-5: Batman 3, Goldeneye, Die Hard 3, Apollo 13, Jumanji.
1996-4: ID, Twister, Mission: Impossible, The Rock.
1997-5: Men in Black, The Lost World, Air Force One, Star Wars (Episode IV reissue), Tomorrow Never Dies.
1998-4: Armageddon, Rush Hour, Deep Impact, Godzilla.
1999-3: Star Wars: Episode I, The Matrix, The Mummy.

2000-2009: 48 films (4.8 a year).
2000-4: Mission: Impossible 2, Gladiator, A Perfect Storm, X-Men.
2001-5: The Lord of the Rings (LOTR) 1, Rush Hour 2, Mummy, Jurassic 3, The Planet of the Apes.
2002-4: Spiderman, Star Wars: Episode II, LOTR 2, Men In Black 2.
2003-6: LOTR 3, The Pirates of the Caribbean, The Matrix 2, X-Men 2, Terminator 3, The Matrix 3.
2004-4: Spiderman 2, The Day After Tomorrow, The Bourne Supremacy, National Treasure.
2005-5: Star Wars: Episode III, The War of the Worlds, King Kong, Batman Begins, Mr. and Mrs. Smith.
2006-4: Pirates of the Caribbean 2, X-Men: The Last Stand, Superman Returns, Casino Royale.
2007-7: Spiderman 3, The Transformers, Pirates of the Caribbean 3, National Treasure 2, The Bourne Ultimatum, I Am Legend, 300.
2008-5: The Dark Knight, Indiana Jones and the Kingdom of the Crystal Skull, Iron Man, Hancock, Quantum of Solace.
2009-4: Avatar, Transformers 2, Star Trek, Sherlock Holmes.

2010-2016: 38 films (5.4 a year).
2010-2: Iron Man 2, Inception.
2011-6: Transformers 3, Pirates 4, Fast Five, MI 4, Sherlock 2, Thor.
2012-6: The Avengers, The Dark Knight Rises, Hunger Games, Skyfall, Hobbit 1, Spiderman.
2013-6: Hunger Games 2, Iron Man 3, Man of Steel, Hobbit 2, Fast 6, Oz the Great and Powerful.
2014-6: Hunger Games 3, Guardians of the Galaxy, Captain America 2, Hobbit 3, Transformers 4, X-Men: Days of Future Past.
2015-6: Star Wars, Jurassic, Avengers 2, Furious 7, Hunger Games 4, Spectre.
2016-6: Rogue One, Captain America 3, Jungle Book, Deadpool, Batman v Superman, Suicide Squad.
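
As with the animated films, here is a minimal Python sketch for rechecking the per-decade arithmetic, along with the claim about the post-1988 years with fewer than three action films in the top ten; the counts are transcribed from the data above:

    # Action films in the yearly box office top ten, per the data above.
    counts = {1980: 1, 1981: 2, 1982: 2, 1983: 3, 1984: 3, 1985: 2, 1986: 2, 1987: 3, 1988: 1, 1989: 3,
              1990: 4, 1991: 2, 1992: 2, 1993: 4, 1994: 3, 1995: 5, 1996: 4, 1997: 5, 1998: 4, 1999: 3,
              2000: 4, 2001: 5, 2002: 4, 2003: 6, 2004: 4, 2005: 5, 2006: 4, 2007: 7, 2008: 5, 2009: 4,
              2010: 2, 2011: 6, 2012: 6, 2013: 6, 2014: 6, 2015: 6, 2016: 6}

    for start, end in [(1980, 1989), (1990, 1999), (2000, 2009), (2010, 2016)]:
        total = sum(counts[year] for year in range(start, end + 1))
        print(f"{start}-{end}: {total} films ({total / (end - start + 1):.1f} a year)")
    # Prints 22 (2.2), 36 (3.6), 48 (4.8) and 38 (5.4 a year) for the four spans.

    # Years after 1988 with fewer than three action films in the top ten:
    print([year for year in counts if year > 1988 and counts[year] < 3])  # [1991, 1992, 2010]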

Friday, June 16, 2017

The Superhero Film Gets a Makeover

As regular readers of this blog (all two of you, unless I'm miscounting by two) know, I have been watching the superhero movie bubble for years expecting it to pop. Of course it hasn't, so far--this, seventeen summers after Bryan Singer's X-Men (2000) (which in turn came a decade after 1989's Batman became the biggest hit of the year, and its franchise the most successful of the 1989-1995 period). And the conventional wisdom seems to be that this longest-running of action movie fashions can go on indefinitely.

I'm less sure than before of what to think. The studios have slates packed with superhero films through 2020, and by now probably beyond it as well, and at this moment I wouldn't care to bet on their shelving those plans because of an untoward change in the market--still less so given three approaches which are proving commercially viable.

R-Rated Superhero Movies
Of course, there have been plenty of R-rated superhero movies before--like the Blade movies, and Wanted and Watchmen. The difference was that they tended not to be made about first-string characters, or given first-rate budgets--limits justified by their small prospect of first-rate grosses.

So did it also go with Deadpool, who was not a first-string character (Deadpool reminds us of this himself, rather crudely, in the early part of the film), and who didn't get the really big budget (Deadpool gabbing endlessly about this too). Rather than original or fresh or subversive (it had nothing on Watchmen here, or even the 2015 reboot of Fantastic Four), it struck me as a mediocre second-stringer, notable only in how heavily it relied on tired independent film-type shtick. Still, the film exploded at the box office ($363 million in North America, nearly $800 million worldwide--a feat the more impressive for its not having played in the ever-more important Chinese market), and has already had a sort of follow-up in a genuine first-stringer--Wolverine--getting an R-rated film, Logan, earlier this year. (The budget, at under $100 million, was again limited next to the full-blown X-Men movies, but the character and the budget were a bigger investment than Deadpool represented, and with over $600 million banked it seems likely to encourage even bolder moves of this kind later.)

What's going on here?

One possibility is a change in the makeup of the market. It seems that R-rated films (and not just raunchy comedies), while far from where they were in the '80s and even '90s in terms of market share, have done better recently than at any time this century, and superhero films have reflected the turn. (In 2014, American Sniper topped the box office and Gone Girl also did well, while 2015 saw The Revenant and Fifty Shades of Grey become top 20 hits at the American box office, and Mad Max: Fury Road claim the #21 spot.) I might add that, reading the Box Office Guru's biweekly reports, I'm struck by how much the audience for recent films has been dominated by the over-25 crowd--younger people perhaps going to the movies less often. (Living life online, while having less access to cars and less money in general, they don't, I suspect, go to the theater as much as they used to.) This might make an R rating less prohibitive than it used to be for an action film producer, perhaps especially in this case. By this point anyone who is twenty-five probably has little memory of a time when the action genre was not dominated by superheroes. They grew up on superheroes, are used to superheroes, and so R-rated films about people with unlikely powers dressed in colorful spandex seem less silly to them--not such a tough sell as they would have been once upon a time.

Woman-Centered Superhero Movies
Just as we have had plenty of R-rated superhero movies in the past, we have had plenty of superhero films with female protagonists. The difference was that, as I recently wrote here, while there were movies with female superhero protagonists (Elektra, for example), and first-string superhero movies prominently including female superheroes in their casts (Black Widow in the Avengers), the flops of the early 2000s left Hollywood discouraged about presenting really first-string superhero movies centered on female protagonists. The grosses of YA dystopia films like The Hunger Games, and of Lucy and Mad Max: Fury Road, have made the studios readier to go down this road, however--and the success of Wonder Woman is reinforcing this. And I suspect that at the very least the trend will endure for a while, if not prove here to stay.

Mega-Franchises
Where the decision to make big-budget films centered on female superheroes is a case of the superhero movie following trends set by others, here the superheroes have led the way. Marvel Studios gambled big and won big with a larger Marvel Cinematic Universe, which regularized Avengers-style grosses to such a degree that even an Iron Man or Captain America sequel could deliver them. Inspired by this course, Warner Brothers gambled (but has not yet won big) with a comparable Justice League franchise. Since then Disney (Marvel's current owner) has given Star Wars the same treatment (and so far, won big again), while Universal, not to be left behind, has decided to do the same with a film franchise based on its classic movie monsters. (The first effort, the recent reboot of The Mummy, is a commercial disappointment, but as the WB demonstrated in plowing ahead with its Justice League plans despite the reservations about Man of Steel and Batman v Superman, the project is too big to be shut down by a single setback.)

A credulous postmodernist might gush at the possibilities for intertextuality. But the reality is that this is of principally commercial rather than artistic significance--as is the case with the other two trends discussed here. Indeed, by upping the commercial pressure (studios now want not a series delivering a solid hit every two or three years on the strength of the built-in audience, but a hit machine delivering record-breaking blockbusters once or twice a year), the greater synchronization and higher financial stakes would seem likely to tie artists' hands to a degree that those who still lament the decline of New Hollywood can scarcely fathom as yet. Still, superficial as most of this is (we are not talking about a reinvention of superhero films here), the success of Deadpool says something about how little it might take to keep the boom going longer than the decades it has already managed.

Thoughts on the Wonder Woman Movie Actually Happening

As is well known by now to anyone who pays much attention to films of the type, DC got the Wonder Woman film made, and got it out this summer, and it has already pulled in enough money to be safely confirmed as a commercial success. (Its $460 million is just over half the $873 million the "disappointing" Batman v Superman made back in early 2016, and the final tally will probably fall well short of that figure--the Box Office Guru figuring something on the order of $750 million. But it was not quite so big an investment, making for a very healthy return.)

In the process DC has realized what I described a few years ago as a long shot.

Of course, quite a lot has happened since--some of it involving what seemed to me prerequisites for a Wonder Woman film. Warner Brothers firmly committed itself to a Justice League mega-franchise comparable to the Marvel Cinematic Universe, and used the "backdoor" Justice League movie Batman v Superman: Dawn of Justice to introduce new characters, Wonder Woman included.

There has, too, been something of a resurgence in big-budget action movies with female protagonists. Again, the female action hero didn't go away in the preceding years. There were still plenty of really high-profile, big-budget movies featuring action heroines--if as part of an ensemble, like the Black Widow (featured in five movies to date). There were still plenty of second-string action movies with female leads getting by on lower budgets and lower grosses--like the films of the Underworld and Resident Evil franchises (which have continued up to the present).

What there wasn't a lot of were movies combining a first-string production with a female lead between the early 2000s (the underperformance of the Charlie's Angels and Tomb Raider sequels in the summer of 2003, Catwoman winding up a flop in 2004, and Aeon Flux becoming another disappointment in 2005, putting studios off) and the middle of this decade, when they began to crop up again. The Hunger Games exploded (2012), and was followed up by its mega-budgeted sequels (2013, 2014, 2015), with the initially cautious but later bigger-budgeted Divergent films (2014, 2015, 2016) and the $150 million Mad Max: Fury Road (2015) (along with the low-budget but high-grossing Lucy in 2014) reinforcing the trend. Still, even if this makes clear what an exaggeration the claims for Wonder Woman as something unprecedented are (both by the studios, and the sycophantic entertainment press), it has some claim to looking like a watershed moment.

That said, the question would seem to be whether such films will now be a commonplace of the cinematic landscape, or prone to the kind of boom and bust seen in the early 2000s. What do you think?

Saturday, June 3, 2017

On the Historiography of Science Fiction: Info-Dumping and Incluing

When it comes to "info-dumping" and "incluing," a considerable current of thought about science fiction hews to the standards of mainstream literature, down to those misconceptions and prejudices summed up in the truism "Show, don't tell." Simply put, info-dumps (telling) are regarded as bad, incluing (showing) as good; and the fact that we are more likely to do the latter than before is taken as a case of "Now We Know Better," making us better than our more enthusiastically info-dumping forebears. Additionally, John Campbell and his colleagues (like Robert Heinlein), while getting less respect from the more "literary" than they otherwise might, are still credited, at times lavishly, with "showing us the light" on this point.

However, Campbell was merely a dedicated and influential promoter of incluing, rather than its inventor. Others had done it before he and his writers came onto the scene. For example, E.M. Forster did so in his short story "The Machine Stops" (1909), and, perhaps inspired by this example, so did one of the genre's founding fathers in one of its foundational works--Hugo Gernsback in Ralph 124C 41+ (1911), at least in his account of Ralph's use of the Telephot, which seems to echo the opening of Forster's own story. Indeed, the first chapter of Gernsback's oft-mentioned but rarely read classic uses the technique so heavily--and, to my surprise, artfully--that it can be treated as a model for doing so, presenting us with a barrage of obliquely treated technological novelties in the first meeting between the titular hero and his love interest, Alice, in a scene that manages to be packed with action and novelty, but also lucidly conceived and smoothly written.

Yet Gernsback shifted to info-dumping for the rest of the text. One reason, obviously, might be that info-dumping--telling--is simply the easier mode from a technical standpoint, setting aside the iron cage of what we call "style," and thus saving an author from having to spend five days on the same page while searching for "le mot juste." However, this does not seem to have been the sole concern. There were particular reasons for the author to go in for incluing in that first chapter--above all, to keep cumbersome explanations from getting in the way of the action. These concerns were less operative later in the text, especially as a big part of the book's interest was that mainstay of science fiction from at least Francis Bacon's The New Atlantis on, the Tour of the World of Tomorrow--on which it is appropriate to have a Tour Guide along.

Indeed, Gernsback was a strong believer in the value of a good info-dump, as an editor at Amazing Stories strongly encouraging his writers to produce them. Simply put, there are things that cannot be shown intelligibly or concisely, only told, but which are worth conveying anyway--things worth info-dumping when one cannot inclue them--and while one may take issue with the use made of them in some of the fiction he edited, the principle is a valid one, in both science fiction and fiction generally (as H.G. Wells, appropriately, noticed and argued eloquently). Naturally, just as the author who always shows and never tells is a lot less common than the truism would have it (even Flaubert had to tell us some things straight out), so is it very rare to find a really substantial work of science fiction totally bereft of info-dumps.

Tell, Don't Show--Again

In his book How Fiction Works James Wood early on sings the praises of Gustave Flaubert as the founder of "modern realist narration," what is often glibly summed up as "show, don't tell" done right. This, as Wood remarks,
favors the telling and brilliant detail . . . privileges a high degree of visual noticing . . . maintains an unsentimental composure and knows how to withdraw, like a good valet, from superfluous commentary . . . and that the author's fingerprints on all this are, paradoxically, traceable but not visible.
Indeed, Gustave Flaubert was a genuine and highly influential master of the technique, frequently managing to convey quite difficult content with ease and precision in many a scene in classics like Madame Bovary. However, it is worth remarking that even he used a good deal of telling--as you find if you actually pick up the book. To fill in his picture, he did not just rely on the "visual noticing" of such things as the characters' facial expressions, casual remarks and the like, but time and again delved into his characters' heads and pasts, generalized and grew abstract, in relating such things as Emma Bovary's school days, and the romantic side of her that developed but was never to find fulfillment in the workaday world into which she was born and in which she lived her life.
"This nature, positive in the midst of its enthusiasms, that had loved the church for the sake of the flowers, and music for the words of the songs, and literature for its passional stimulus, rebelled against the mysteries of faith as it grew irritated by discipline, a thing antipathetic to her constitution . . ."1
It is elegant telling, but telling all the same.

1. I cite here the Eleanor Marx-Aveling translation.

James Wood on Flaubert

Having been both impressed and disappointed by James Wood's How Fiction Works--impressed by his lucid exposition of some literary fundamentals, disappointed by his uncritical acceptance of them--it was a pleasant surprise to encounter his article in The New Republic, "How Flaubert Changed Literature Forever," from the opening line forward. In his 2008 book he begins his discussion of Flaubert's establishment of "modern realist narration" with the following sentence:
Novelists should thank Flaubert the way poets thank spring: it all begins again with him.
In his more recent article, however, he begins thusly:
It is hard not to resent Flaubert for making fictional prose stylish--for making style a problem for the first time in fiction.
From there he goes on to a lengthy consideration of how much of a cage that style of narration is, with its stress on the concrete, visual detail, and how in the resultant "obsession with the way of seeing," the "flattering of the seen over the unseen, the external over the interior," all of "the important things disappear," and those who abided by the rule ran the risk of very elegantly telling a story about--nothing at all.

Of course, much of this has been observed before--some of it by Flaubert himself (who did, at times, step out of the cage Wood describes so well). Virtually all of it was said by H.G. Wells in his "Digression on Novels," when he thought about the problems posed by the kind of style discussed here. It might be added, too, that Wood's conclusion is rather less radical than Wells'. Where Wells ultimately chose the important things over the obsession with the way of seeing--chose to try and convey what went on in people's heads over the flattering of exterior detail--Wood closes by reiterating his admiration of the "mysterious" way in which Flaubert ultimately managed to transcend the limits of his technique to tell Bovary's story. Still, Wood's discussion of the matter is a worthy consideration of a problem far too often slighted in our age of television shows about nothing, movies about nothing and, yes, books about nothing.

Thursday, June 1, 2017

On the Historiography of Science Fiction

Writing my book Cyberpunk, Steampunk and Wizardry I produced a history of science fiction that was most concerned with the genre's most recent decades. This was in large part because it seemed to me that, in contrast with earlier periods, these decades had not been dealt with in a comprehensive fashion.

Still, to provide a proper foundation for that discussion it seemed to me necessary to look at what came before--and going over the relevant history I quickly found that it was not quite as well-studied as I had thought. Certainly a vast number of works gave overviews of it. However, it always seemed to me that Jonathan McCalmont was quite justified in declaring at the time that
science fiction lacks the critical apparatus required to support the sweeping claims made by people who use [the historical] approach. Far from being a rigorous analysis of historical fact, the historical approach to genre writing is all too often little more than a hotbed of empty phrases, unexamined assumptions and received wisdom.
So did it go in the works I found. By and large the history was a "folk history," rather than rigorous scholarship--empty phrases, unexamined assumptions, received wisdom that, even when essentially correct (as, in hindsight, it often seems to have been), explained its claims vaguely and supported them poorly, and in the process not only left us understanding it all less fully and well than would otherwise have been the case, but inhibited further work rather than encouraging it.

Of course, there were numerous exceptions to this, but these tended to be relatively obscure, specialized works dealing with relatively small pieces of the field. Colin Greenland's The Entropy Exhibition is excellent in treating key aspects of the New Wave, while Brian Stableford's The Sociology of Science Fiction is particularly insightful in its discussion of John Campbell's work as an editor--and more recently, Mike Ashley's outstanding The Gernsback Days proved truly formidable in its study of the formative, "pre-Campbell" period, deeply rooted as it is in close examination of the relevant material.

By contrast the larger, more general works, even at their best, tended toward the folk history approach, as with Brian Aldiss and David Wingrove's Trillion Year Spree (an old review of which you can find here). The book, again, has much to commend it. Its coverage of the field is vast, the highlights all mentioned, and certainly it is packed with interesting insights that, after many years of additional reading and consideration, still strike me as valuable. But on such topics as the pulps of the '20s and '30s, for example, the book largely settles for the received wisdom, summing it all up as "Gosh-wowery . . . Bug-Eyed Monsters . . . [and] the trashy plots that went with them" (216-217). To be fair, there is truth to this--and Aldiss and Wingrove do manage to say some interesting things about the material for all that. Yet, this is less specific and well-grounded than it might be (just as it is more dismissive than it ought to be).

It seems to me that the situation is getting better, the body of better-researched, more useful coverage increasing, but all the same the synthesis of it all has really lagged. That being the case, one can hardly avoid the question--why has this situation persisted for so long? Certainly one factor would seem to be the history of the genre having been a fan enterprise to such a degree, and for so long a time, while the scholars took little interest. (While there were earlier precursors, it was only in the 1970s that academics began to pay very much attention to science fiction--which sounds like a long time ago, but is not really so long in academic terms; the more so because science fiction is still a fairly marginal area of study next to more canonical work.)

Another would seem to be the conventional wisdom of literary scholarship itself, much more interested in some things than in others. To be blunt, scholars who unquestioningly embrace Modernism and postmodernism as defining what is "important" literature, who take technical experimentalism, epistemological apathy and obsession with identity as the sine qua non of what is worthy of study, and whose non-literary study has been of kindred schools of philosophy and psychology (Foucault, Lacan and the rest), are either disinclined or unequipped to deal very well with key concerns of science fiction, and accordingly much of the work that, from the point of view of the genre, is most important to its history. Instead they gravitate to those works that happen to fit in with the intellectual preoccupations they bring with them, without much interest in how they fit into the history of the field. (Consider as an example the level of attention, and the kind of attention, that Ursula K. Le Guin gets from more academic students of the genre.) And of course science fiction practitioners themselves have been influenced by all this, at least since J.G. Ballard's efforts to remake the genre in the image of the Modernists. (Tellingly Aldiss and Wingrove, while interested in and often insightful about science fiction as a genre of ideas, still tilt in favor of the more purely literary in their analysis--not least in their tracing science fiction's history not to Scientific Revolution/Enlightenment/Industrial Revolution interest in natural and applied science, and its implications for the increasingly studied shape of society, but to Romantic-Gothic sensationalism.)

The result was that in developing my image of the genre's pre-1980 history (to which the first four chapters of the book are devoted, because of how foundational that history is to what follows), I found myself having to spend much more time than I had initially planned on just figuring out for myself what the facts were before I could settle down to working out the larger picture. The folk history had enough in it to serve as a guide along the way (there were at least presumptions I could investigate and test out), but alas, it was just that--such that I had as much work to do in this supposedly well-covered territory as I had in the less well-charted decades that were my original concern.

Monday, May 15, 2017

Stormbreaker, by Anthony Horowitz: a YA Moonraker?

Picking up the first volume in Anthony Horowitz's Alex Rider series, I was unsurprised to see that he took many of his cues from the classic Bond films with regard to structure, pacing and action. Still, I was surprised by how much of the specifically print Fleming there was here. Indeed, Horowitz appears to have used Fleming's novel Moonraker as the framework for his plot.

As in Moonraker, a self-made industrialist is being hailed in the British press as a national hero for making an expensive gift to the British public. However, at the facility where he is producing that gift (located on an isolated bit of the English coast) there have been suspicious goings-on, among them the death of a national security state functionary. Accordingly, the British Secret Service sends an agent to the scene to investigate, where he is initially a guest of said industrialist. In the course of these events our hero happens to best the man in a game and win a rather large sum of money from him--revealing in the process that his host is not just a rather unpleasant person to be around, but "no gentleman." The industrialist also happens to be a man of foreign birth. And physically not quite the norm. And as though this were not enough to set off the alarm bells, there is also the behavior of an odd foreigner with a Teutonic accent he keeps round the place, which also gets visits from a submarine delivering secret materiel.

As it happens, our ungentlemanly, foreign, "odd-looking" industrialist with the suspicious German associates and secret submarine deliveries suffered in the English public school system as a boy, and as a result bears a burning hatred of the country, which has led him to align himself with a foreign power plotting against it. His gift to the nation is in fact poisoned--really a cover for the revenge he intends to wreak on it with a weapon of mass destruction, to be delivered in spectacular fashion at the ceremonial, highly publicized unveiling of that gift. That weapon will shatter Britain as a nation while he escapes safely overseas--as the villain explains to the agent after capturing him, since he means to kill the agent in colorfully hideous fashion and so sees no prospect of his stopping the plot. However, the hero gets free, and unable to deliver a proper warning to the authorities, races to head off the attack himself in the very nick of time . . .

As models go, Horowitz could have done worse. Moonraker's domestic setting, and its plot's unfolding within a relatively limited space, while eschewing one of the famous attractions of the Bond series (international travel), make the activity of this fourteen-year-old secret agent somewhat more plausible. That the Bond girl is engaged to someone else when she meets 007 and is never tempted to stray also makes a convenient fit with Horowitz's decision to dispense with romance entirely. Still, all this underlines the difficulties of squeezing the stuff of the James Bond adventures into a YA book.

Review: The Messiah Stone, by Martin Caidin

New York: Baen, 1986, pp. 407.

It seems that once again I am reviewing a novelist who was once a Big Name but has since slipped into obscurity--Martin Caidin. His novel Marooned, about a stranded astronaut who must be rescued (sound familiar?), became a feature film in 1969, while his 1972 book Cyborg was the basis for the television series The Six Million Dollar Man and its spin-off The Bionic Woman. Today, however, his novels seem to be out of print, his name just about never mentioned except in specific reference to the media spin-offs from his books.

As it happens, the tale's protagonist is a familiar enough type. Doug Stavers not only displays a more-than-human physical strength, courage and ruthlessness, but possesses every conceivable combat, investigative and mechanical skill that a commando-mercenary-spy might possibly need in the field, honed to perfection: Stavers is a survival expert who can live off the land indefinitely in any and every environment, a fluent speaker of just about every language, a pilot who has mastered every flying trick, and all the rest. As if all that were not enough, his alertness is equivalent to clairvoyance, his foresight to precognition. And all this vast prowess has been demonstrated in secret wars beyond counting waged in every corner of the globe, in which he killed lots and lots and lots of people and (while admittedly picking up a good many scars) lived to tell the tale.

Naturally he has picked up a good many not-quite-as-good-but-still-preposterously-capable friends along the way, on whose help Stavers can call on those occasions when even he cannot do it all himself. All of these friends also have the virtue of comprising a considerable cheering section, endlessly testifying to just how extraordinary he is--as do his equally admiring clients and enemies, and the women in all these categories (and also those women in none of them) who find all this completely irresistible.

Such Gary Stu figures (there, I said it) are the stock-in-trade of the action thriller writer--and as my roster of books, articles and blog posts ought to make clear, I have enjoyed my fair share of works in that genre. Still, Caidin took it so far in Stavers' case (think that other Caidin creation Steve Austin, times twenty), and was so verbose in doing so, that he made me repeatedly laugh out loud while I read the book.

Indeed, taking it all as parody would be defensible--the more so as the same sensibility informs the wider narrative. The book's title, after all, refers to the tale's more than usually hokey MacGuffin, a piece of meteorite which endows its possessor with a more-than-human charisma and power over any other person who happens to be in their presence. Naturally it is much sought after by innumerable parties (the CIA, the KGB, the Catholic Church and all the other usual suspects), including one private group that enlists Stavers to track it down and deliver it to them. All this offers plenty of occasion for the superman Stavers to display not only his ridiculous ultracompetence, but a contempt for human life to which no string of epithets can do justice, and which makes for an adventure so astonishingly dark and demented by even today's standards (I dare not spoil it by saying more) that one would have thought this by itself sufficient to give the book the cult following it does not seem to enjoy.

If this intrigues you, you might be interested in knowing that Caidin published a sequel, 1990's Dark Messiah. I haven't read it, but the two reviews of the book at Goodreads, in their very different ways, seem rather plausible to me after my experience of the first Doug Stavers novel.

Why Young Adult Fiction?

For at least a decade now the bestseller lists have seemed ever more dominated by works of young adult fiction. According to the data presented by Publishers Weekly, between John Green, Veronica Roth and Jeff Kinney, eight of the nine top-selling books of fiction of 2014 were of the young adult variety--with the one exception, Gillian Flynn's Gone Girl, boosted by the release of a hugely successful film adaptation that year.

There seems no shortage of possible explanations for this.

One is the greater readability of YA fiction. Publishers of YA routinely serve up more simply written, shorter books that may be more attractive to grown-ups with declining literacy, greater hurry and shorter attention spans--a problem exacerbated by how much of our reading we now do off screens.1

Another is that the books are, in some measure, sanitized--and so at least a partial refuge from a culture that so many find toxic, for so many reasons, be it the cultural "traditionalist" distaste for four-letter words and graphic sexuality, or a deeply feeling progressive's revulsion at the raging conformism and smugly fascistic tendencies of the purveyors of the "edgy" and "dark and gritty."

Still another might be the nature of contemporary, bourgeois adulthood and its portrayal--how narrow and dull the "grown-up" life encumbered by work in the age of Weber's "iron cage," and the conventional responsibilities; how little of the spectrum of such life gets attention from today's writers (whose eyes are ever directed toward the upper-upper middle class); and how false is its treatment (how oversimplified and glamourized and sensationalized). How much can one say about well-heeled doctors and lawyers and their adulteries? And how long can anyone go on being interested in the nonsense written about that?

For all its difficulties, youth is different and more varied, at least--and an object of curiosity to more of us in a rapidly aging world.

Jonathan McCalmont, responding to a recent piece by Adam Roberts, noted another aspect of this that seems worth mentioning, namely the fuzziness of the whole concept of adulthood. We are constantly subject to sanctimonious talk about "growing up"--but what does this really mean? One might reasonably think of it as referring to a person's amassing a certain body of knowledge, skills and personal qualities that permit them to function in the world, but as is usually the case with conventional social judgments, the criteria are rather more stringent than that, and much more brutally materialistic. For adult males, at least, McCalmont refers to the conventional "model of adulthood" as entailing an income sufficient to singlehandedly support a family (which, of course, an adult was supposed to have).

This has always been a fairly classist definition, implicitly denying full adulthood to the poor, for example, and therefore to most people in society. However, in contrast with the mid-twentieth century period of broad middle class affluence (how brief it was in historical terms, and at the same time how deeply it has shaped our thinking), it has increasingly been out of reach "for all but the most supremely wealthy people." Indeed, even in comparison with "the 1990s, today's adults not only struggle to find full-time employment but even those that do still wind up struggling to make enough money to live independently of their families"--to an extent respectable opinion (and even pop culture) generally refuses to acknowledge.

Adulthood in that sense (whether one thinks it a good definition or a bad one, reasonable or unreasonable) has simply not been attainable--and no alternative version has yet appeared. And so chronological adults find themselves in just about every other way (the level of their earnings and what these permit) endlessly "becoming adults" rather than "being adults"--living "the coming-of-age process but not the experience of adulthood itself," which is exactly what YA fiction is so often about.

1. I draw together the research on what reading off a Kindle or other such device might mean in my essay "The Writing Life Today and What it Means for Science Fiction" in my book The End of Science Fiction?.

Of Working-Class Spies

H.B. Lyle recently penned an interesting article on the scarcity of working-class protagonists in spy fiction--about which he is, of course, entirely right, even more right than the article itself fully conveys.

From Carruthers in The Riddle of the Sands (1903) on, the heroes of the genre--the Richard Hannays, the Bulldog Drummonds--tended to be residents of "clubland" (and are collectively referred to as "clubland heroes"). This wore thin by mid-century, but Ian Fleming gave the type an update that, through continuation, imitation and reaction, has kept it predominant up to the present, even as it wears thin in its turn (2012's Skyfall made the initially rougher-around-the-edges Daniel Craig take on the character appear more thoroughly aristocratic than ever, as heir to a vast Scottish estate).

Lyle is also quite right about the diversity and dramatic potential overlooked in the process.

Still, considering this, two things occurred to me. One is that this simply reflects the reality of espionage. The sort of upbringing, education and social networks that bring one to the attention of the intelligence services are decidedly not of the working-class type. The biographies of our most famous spy/novelists tell the story. John le Carre's story preceding his rise to literary fame was public school (Sherborne), Oxford (where domestic counterintelligence had him spying on leftist students), teaching at Eton, and the Foreign Service (where he wound up in intelligence again). The Etonian Ian Fleming didn't make it to Oxbridge, unlike his older brother Peter, but after Peter went into intelligence before World War II Fleming followed him quickly enough--his mother arranging the matter over dinner with Montagu Norman (Governor of the Bank of England, the Alan Greenspan of his times).

Yes, just like that--and one can find parallel tales in the trajectories of other spy/novelists, from W. Somerset Maugham to William F. Buckley--while the lives of those spies who never write novels are essentially more of the same. (It is worth recalling that for le Carre writing spy fiction was a way of approaching not so much espionage as the social class of which he is such a student; his books are striking for how cloistered and out of touch with the rest of the world that class seems--and yet they still seem mild in their portrayal of it next to nonfiction like David Leigh's The Wilson Plot.)

The other thing that occurs to me is that this simply reflects fiction in general, which has never had much time for working-class types--in part because it has so rarely been written by working-class types. As is typical of our postmodernist, class-averse era, much is said of sexism and racism in publishing--but no one dares speak of classism. However, the truth is that the education and leisure required to produce a publishable manuscript, and the question of who is likely to have the connections to get something into print (rather than make the cold queries to the slush pile that turn their authors into collectors of form rejection letters), definitely have implications for who writes, and who gets their writing published.

And of course, what we are accustomed to reinforces the matter in itself, in fiction generally and spy fiction especially. Since at least E. Phillips Oppenheim the idea of the spy not only as an elite figure, but as leading a glamorous life, has been part of the appeal of such fiction--an idea that has endured even as our notions of just what makes for glamour have changed.

Stephen Akey on Literary Agents

Ordinarily, discussion of the publishing industry does not even acknowledge the existence of the frustrated writer thwarted in their first publishing attempt as a relevant part of the scene. On the rare occasions when it does, it tends to do so with great hostility. It is assumed that such writers were simply not any good, their frustration thoroughly deserved, and any objection they make to "things as they are" is treated with all the contempt that authority and officialdom are capable of showing the dissenter. Indeed, much of such writing as does refer to them is glorified trolling--as with the "confessions" of former slush pile readers I have encountered in such publications as Salon and The Guardian (the great progressive newspaper in this instance firmly on the side of the elite and firmly against the "Great Unwashed"), while a glance at the comment threads indicates that these consist mostly of just plain trolling.

The writers who would tell their own, very different, side of the story are apt to do so on their own, obscure blogs--so obscure that they are not easily found in an Internet search. Indeed, we are likely to encounter the dissenting view only secondhand--in cursory and dismissive mentions in apologia for how the publishing industry is run (as with those cited above).

Naturally it was something of a surprise to find Stephen Akey's November 2014 piece on the "problem with literary agents" in such a forum as The New Republic. To be sure, Mr. Akey is keen to distinguish himself from "furiously anti-establishment bloggers" as one respecting the role agents have to play in the industry--but all the same he discusses at some length issues far too little noted by anyone but such bloggers. In particular he remarks the extent to which agents virtually demand a prospective writer be a "brand" and "have a platform" before they have begun their career, and the more general hostility to innovation.

The situation is entirely to the advantage of illiterate celebrities extracting extra dollars and minutes of fame by way of ghostwritten works published under their names, and utterly ruinous for genuine would-be authors with nothing but their actual writing to commend them to anyone--the talented and diligent included, who suffer the same sanctimonious scorn as the most deluded hack ever to pick up a pen.

The Hikikomori Phenomenon: A Sociological View

I don't think I had watched the Fusion TV channel before this past week--but while scanning the TV schedule I recently noticed their running a documentary on hikikomori, which was something of a surprise. Anime and manga fans who look beyond the hardcore science fiction/fantasy-action stuff that dominates the American market to the more slice-of-life, comedic series hear quite a bit about the phenomenon, but this kind of coverage seemed new.

Watching the documentary, I can't say that it offered anything I hadn't seen or read about before, but it did set me thinking about the issue again, and especially those aspects not addressed in it in an overt way--like the matter of social background (rarely commented upon, especially beyond the banality that the child of an impoverished household, lacking their own room, without a family able to bear the economic burden, cannot become hikikomori).

One exception to this tendency was this paper by Tatsuya Kameda and Keigo Inukai, which raised the possibility of a correlation between a lower middle class background and the hikikomori pattern of behavior.1 Kameda and Inukai remark evidence of greater "emotional blunting" (their reactions and expressions of their reactions muted) and lower social involvement (more time at home, less time out with friends) among lower middle class young persons as compared with those in more affluent groups--and that this correlates strongly with observations of hikikomori, suggesting a great many "undiagnosed" cases at this level.

Going further than Kameda and Inukai in approaching the hikikomori phenomenon as a sociological issue rather than a psychological one was another paper by Felipe Gonzales Noguchi, "Hikikomori: An Individual, Psychological Problem or a Social Concern?" Noguchi's paper refers to the "strain theory" of Robert K. Merton, which Merton elaborates brilliantly in his classic paper, "Social Structure and Anomie."2

Merton's theory holds that society sets certain goals for its members, and certain means for realizing those goals. In the America of his own time (as in ours, and in contemporary Japan as well) the goal is upward economic striving on an individual basis. The means, by and large, is doing well in school and then getting "the good job" (or, alternatively, personal entrepreneurship, though this is for myriad reasons the road less traveled today).

Someone accepting goal and means--grinding hard at school in the hopes of entering the most prestigious college they can, on entry into said college single-mindedly chasing the most "practical" degree (i.e. best odds for most money) consistent with their talents (e.g. business or engineering rather than music or anthropology), seeking out the highest-paying job they can get as graduation approaches and then grinding hard at that job in the expectation of raises and promotion (and keeping their ear to the ground just in case something still better comes along)--"conforms" perfectly.

However, there are alternatives to this acceptance of goal and means together. Some accept the goal but not the means. They want to get ahead, but try another way--like being a criminal in the most narrow, conventional sense of the term--which Merton called "innovation."

Some do not have much hope for the goal, but nonetheless abide by the approved means, going through the motions, at least, in what Merton called "ritualism." (The unenthusiastic bureaucrat doing their job adequately but no more, and counting the days until they get their pension, is a ritualist.)

Of course, some reject both goal and means, and this takes two forms. One is to try and escape the whole thing--"retreat," just drop out of the system. The other is to try and change the system--"rebellion."

Merton observed that "it is in the lower middle class that parents typically exert continuous pressure upon children to abide by the moral mandates of the society," a "severe training [that] leads many to carry a heavy burden of anxiety." This is made worse by the reality that the "social climb upward" stressed more severely here than anywhere else "is less likely to meet with success than among the upper middle class." (All other things being equal, the lower middle class kid has less access to the "good school," the "good job," than their more affluent, better connected peers do, precisely because those peers are more affluent and better connected.)

In short, lower middle class youth are under heavier pressure to conform and, at the same time, have less hope of a payoff at the end of the privations that conformity demands (while those privations may be more severe--college being a different thing for the rich kid with the legacy than for the scholarship kid, even if both take their studies seriously). The pressure and privation would seem more severe, and the hope dimmer, in a period when the economy is stagnant, the prospect of upward social mobility is declining and the middle class is stressed--all features of Japanese life in the past generation.

Meanwhile, if Kameda and Inukai's observations are valid, those kids are also less able to express and thus cope with the feelings with which all this leaves them.

Of course, Merton suggests that the most likely response for the stressed but not very hopeful lower middle class sufferer is "ritualism," and it may well be that ritualism is the most typical response to such a situation. However, that by no means rules out "retreat," which (as Noguchi observed) seems to be exactly the hikikomori response. Indeed, the relative disconnect from life outside the home that Kameda and Inukai note as a feature of lower middle class life may further encourage this--especially if the mounting pressure, and the discrediting of conformity as a life path, make the bleakness of ritualism intolerable (while rebellion appears unworkable as an option meaningful to the individual in the short term).

Even in taking the sociological view of the matter (e.g. approaching the hikikomori phenomenon as a retreat in the Mertonian sense to which lower middle class young people may be disproportionately driven), however, it is worth remembering that Western observers tend to make much of what are presumably unique features of Japanese life--its notoriously demanding education system, for example, or its low tolerance of nonconformity (while we Westerners congratulate ourselves on our freer and more tolerant ways). Still, as Lars Nesser asked,
can it be proved that the pressure Japanese youth experiences is any different from what American youth, or youths from other industrialized societies feel? Is the pressure to follow social norms so exceptionally strong in Japan compared to other countries? I do not think so.
The view of life as a demanding economic contest first and anything else a distant second at best; the combination of rising pressure and declining prospects for the lower middle class; the economic stagnation of recent decades--all these have been more pronounced in Japan than in other places (the sharp shift from boom to bust circa 1990, notably), but none of them is unique to Japan by any means. And the truth is that there is nowhere in the world where nonconformity in any sense of the term, let alone this one (a life not dedicated to individual economic advancement), is easy--or, for that matter, where social isolation or withdrawal is unknown as a response to the pains that go along with this. (Merton could hardly have rounded out his paper the way he did otherwise.)

Indeed, I suspect a significant difference may be that where people are ready to accept that the hikikomori reflect a social problem in Japan, we look at their counterparts closer to home and simply mock them--making them the butt of smug jokes and sanctimonious social criticism about a generation "refusing to grow up" and just "needing a push" to do so, rather than seriously considering whether there is something bigger going on here.

1. This is, again, a situation where the word "lifestyle" is hugely inappropriate, and so I will pointedly not be using it.
2. For the purposes of this post I am using the longer version of the paper in his collection Social Theory and Social Structure. Both that version of the paper and the whole collection are highly recommended. Those looking for a quick overview can get the essentials from the Wikipedia article discussing strain theory, which seems to me to give a good round-up of Merton's variant of it.

Review: The Power House, by William Haggard

London: Cassell, 1966, pp. 186.

As I have remarked here before, William Haggard's novels rarely get mentioned today--but when reference is made to Haggard, and certainly to a specific Haggard book, The Power House (1966) is likely to be it. Its story of a plot to overthrow a Labour PM named Harry is, as it happens, seen by many as echoing actual intrigues of the period--the reported plots against Harold Wilson by the political right during the late 1960s and 1970s, described by, among others, journalist David Leigh in The Wilson Plot (1988).

Still, the story's interest goes well beyond any resemblance it may or may not bear to these events (which strikes me as slight at best). At the start of Haggard's novel far left Labour MP Victor Demuth plans to defect to the Soviet Union. Labour Prime Minister Harry Fletcher is understandably anxious to squelch the attempt with as little fuss as possible, and seeks help from his friend, casino owner Jimmy Mott. As it happens, one of Mott's croupiers, Bob Snake, is a relation of Colonel Russell of the Special Executive, and engaged to Demuth's daughter Gina.

Defections are, of course, standard Cold War fare. Still, as might be expected by those who have previously read Haggard, the East-West antagonism is a slight, remote thing next to the domestic ambitions and hostilities and treacheries--in which there is more than a little ordinary crime involved, and some wildly irrational personal enmity too. The PM's anxiety about Demuth has less to do with his slipping away with defense secrets and such than with the embarrassment to Fletcher and the party, which has an election coming up very soon. As it happens, the Soviets, uninterested in the man, toss him back (on the advice of Colonel Russell's Soviet counterpart and later friend, the Colonel-General), after which Russell finds it a simple enough matter to have Demuth locked up in a nursing home, away from nosy reporters.1 Nonetheless, that particular can of worms is almost immediately opened up again--by "the Squire," a rich, crippled Tory who bears a seething hatred of the PM, and whose own casino interest makes Mott a rival. He sees in the defection a tool with which he can bring Fletcher down (while also eliminating the competition for gamblers' business presented by the PM's friend Mott).2 Naturally Russell, and young Snake, wind up in the middle, with Russell's near-legendary status as head of the Executive and his mastery of bureaucratic politics, much more than any knack for gunplay or anything else of the sort, his principal instruments for dealing with the crisis.

As might be expected by those familiar with other entries in the series, what follows is a good deal of intrigue, entailing a certain amount of violence, in which Russell's involvement is rather slight. What is less expected is the extent to which all this gets personal for the usually urbane, aloof, ironic Russell, his likes and dislikes, his loyalty to some and lack of loyalty to others figuring in the story. Indeed, by way of Snake The Power House offers a good deal more about Russell's personal past than the other books do--as one might guess of a novel in which he comes to the rescue of a family member, especially one so unlikely-seeming as Bob Snake (an aunt of Russell's ran away with "a bog-Irishman . . . worse than marrying a black man," with Snake the issue of the relationship).

The plot mechanics this time around seemed rather more engaging than they were in Slow Burner, while the relative novelty of the ways in which Russell gets caught up in the events made it all rather a brisk read, irrespective of its vague relation to historical events.

1. It seems worth remarking that this genial relationship came along years before M and James Bond became friendly with General Gogol in the Bond films.
2. To the Squire Fletcher is a "traitor, destroying a natural order in the name of some half-baked progress . . . subtopia, the television and the bingo hall." Interestingly the sentiment is not all that different from what we see in Fleming's novels--or for that matter, Kingsley Amis' pastiche Colonel Sun, which opens with Bond driving through television aerialled subtopia, and expressing similar distaste for it and its residents.

Saturday, May 13, 2017

On the Cusp of a Post-Scarcity Age?

It is very clear that we have not entered a post-scarcity age with regard to the more material essentials of life--energy, food, housing. We are already consuming the resources of 1.6 Earths, and the pace is accelerating--even as a vast majority of the planet remains mired in poverty, much of it poverty beyond the painfully limited imaginations of those who form respectable opinion. The potentials of known technology and organizational methods are a source of hope--but the point is that the job has not yet been done (and the politics of the last four decades are, to most of those both deeply concerned and deeply informed, cause for despair about the status quo).

Yet, it may be that we are seeing something of a post-scarcity economic reality in other areas, particularly those relating to information of certain kinds. Of course, quality is one thing, quantity another--and vast asymmetries still exist and seem likely to endure. Still, simply searching for the written word, for sound, for images still and moving, we can now access a super-abundance of them at virtually no cost.

To take one example--the one I am really concerned with here--consider fiction. Let us, for the time being, not concern ourselves with the corporate bogeyman of piracy, or even the massive availability of commercial work released under watered-down versions of copyright ("copyleft") or work with copyright intact but given away for free for promotional purposes (as by working authors hoping to build up a readership), and just focus on the work that is normally available for free, like public domain work (basically, everything ever written in human history until the early twentieth century, and a surprising chunk of the work even after that), and work that was non-commercial to begin with (like fan fiction). Single subsets of the latter (like Harry Potter fan fiction) are so prolific that an avid reader could take more than a lifetime going through just a small portion of what is already there.

The abundance may not necessarily be convenient to sift, or suit every conceivable taste. Nonetheless, someone just looking for something to read need never pay for content. This fact alone, along with the relentless competition from other media continually offering ever more alternatives to reading as a use of spare time, widens the gap between supply and demand in favor of supply--enough that I suspect this, much more than piracy, to be the greatest challenge facing the would-be professional writer today. And how we deal with it may be a test of how we will deal with other, larger, more material questions of scarcity and abundance in the years to come.

Friday, May 12, 2017

Review: Village of Stars, by Paul Stanton

New York: M.S. Mill & Company and William Morrow & Co., 1960, pp. 241.

Paul Stanton's Village of Stars is interesting as an example of the military techno-thrillers (or if you prefer, "proto-techno-thrillers") that appeared between World War I and the genre's revival in the 1970s in the hands of writers like Martin Caidin, Craig Thomas, John Hackett and (in a different way) Frederick Forsyth.

Stanton's Village is set in the dozen years between Suez (1956) and the announcement of the end of British military commitments east of Suez (1968), when Britain had clearly been relegated to a tier below the U.S. and Soviet Union in the global power rankings, but was still considered "a world power and a world influence" in its own right, without any other peer on that level.

As it was, Britain had the world's only nuclear arsenal apart from the superpowers'--and, even after France and then China got the bomb, still the "number three" arsenal for quite some time--its considerable stockpile carried in its fleet of over a hundred strategic bombers (the V-bombers). It was still imagined that exceptional statesmanship, or perhaps technological "innovation," might further narrow the gap between Britain and the U.S. and Soviets, perhaps through a breakthrough in the nuclear field (a theme seen in such prior novels as William Haggard's Slow Burner). In the meantime, the "policing" of the Indian Ocean region, because of the many colonies and closely associated ex-colonies about the region, from Australia and Malaya to Kuwait and Aden to Kenya and Tanzania, was regarded by both Britain and the U.S. as the country's special role within the Western military alliance.1

All of this, of course, is at the heart of the plot, in which Britain has developed a 100-megaton hydrogen bomb, a weapon neither of the superpowers has, and which therefore lets Britain stand a bit taller than it otherwise might.2 It also happens that Britain is the key ally of Kanjistan--a (fake but real-sounding) country in the southwestern Caucasus, with the Soviet Union to its north, the Black Sea to its west and Iran to its south, in which a Soviet-backed revolt against the monarchy is underway, in support of which Soviet tanks are rolling south of their mutual border. The United States is unsupportive of British military action there (shades of Suez), but Britain nonetheless redirects ships to the area and sends paratroops to the country, while the Air Force loads one of the 100-megaton bombs into a V-bomber dispatched to patrol off the Crimean peninsula, to make clear to the Soviets that Britain is quite serious about protecting its client.

As the crisis threatens to escalate the bomber crew gets orders to fuze the device, and does so--but when the matter is resolved diplomatically and they are told to unfuze it, they cannot, which is problematic because the device is set to go off if the plane descends below 5,500 feet. Naturally the question becomes how to prevent the giant bomb from going off disastrously, given that the plane must eventually come down. (I remember 1996's Broken Arrow being described to me, before the movie came out, as "Speed on an airplane." It wasn't quite that, of course, but the premise of Village suggests how it could have been.)

In all this the focus is overwhelmingly on the men in the bomber at the story's center, rather than the larger picture. Tellingly, the first chapter offers five pages of oblique glimpses of the emerging situation--and then seventeen about Air Vice-Marshal Chatterton's personal assistant Helen Durrant attending a base dance the night of their arrival at the facility, where she happens to meet the bomber's pilot, John Falkner. Subsequent chapters continue in much the same fashion, depicting the personal lives of Falkner and especially his crewmates through lengthy scenes of their home lives with their wives and children (copilot Dick Beauchamp's failing marriage, Canadian-born Electronics Officer McQuade's happier one, Pinkney's struggle to bring up his children on his own), with only an occasional glance at the crisis in Kanjistan and the events at 10 Downing Street (let alone in Washington, New York, Moscow), much more often sketched than painted--even as the narrative increasingly emphasizes the thriller plot.

I must admit that this is the opposite of what I expect of techno-thrillers. Certainly in my days as a fan snapping up the genre's latest releases I personally favored the much more big picture-oriented style of the early Larry Bond (seen in his works up through Cauldron)--and Stanton's taking a very different approach did make for rather a slow start to the story. Nonetheless, some of the characterization was interesting--even if it was the relatively minor figure of the expert on the British super-bomb who held my attention (the physicist Dr. Marcus Zweig, his careerism and compromises, his interactions with the military personnel he must work with and the gulf of misunderstanding between them, all rather intelligently written), rather than the interactions of the principals. Additionally, if its treatment often felt overlong, the relationship between Helen and John proves to be a bit more than just a love story tossed into the mix to pad out the book and broaden the market.

Meanwhile, even if the time devoted to the characters was not matched by its contribution to the interest of the story, the unfolding of the premise did provide the requisite suspense, which, if less well-detailed in many respects than I would have liked, struck me as competently thought out. In fact, the aspects of the story to which Stanton was more attentive seemed drawn with greater verisimilitude than in any comparable thriller dealing with the '60s-era British armed forces that I know of.3 The interest of all this as novelty was enhanced by the novel's time capsule-like quality--a portrayal of a tanker navigator guiding his plane by the sun coming particularly to mind. Similarly conventional to the genre, but unique in being presented in a Britain-centered work, is the stress on the world power standing of the protagonist nation--a country which has forces active all around the globe, which takes the lead in dealing with crises far from home, and which develops super-bombs other nations cannot match.4 All that seems to me plenty to give the book a larger place in the history of its genre than it seems to enjoy.

1. This string of possessions originally grew out of the old imperial interest in India, now lapsed, but they had since acquired an importance in their own right. (Malayan rubber became surprisingly important to Britain's balance of payments, while the value of the Persian Gulf oilfields exploded.) The U.S., moreover, was too absorbed in East Asia (dealing with Korea, China, Vietnam) to pay this region much mind.
2. At the time 100-megaton bombs were a genuine preoccupation of the participants in the nuclear arms race, though the largest device ever actually detonated was the 50-megaton Soviet Tsar Bomba.
3. By contrast, in Ian Fleming's story of the hijacking of a V-bomber, Thunderball, some aspects of the technology's handling appear very credible, as with hijacker Giuseppe Petacchi's navigation of his plane, while others--like the seating arrangement in the aircraft--come off as awfully hazy.
4. As the bomber bearing the deadly burden makes its way from Britain to the Crimean peninsula, Stanton rather strikingly describes the lands it flies over, the people who look up in awe at this display of British power and reach.
