Monday, May 21, 2018

Review: Principles of Economics: An Introductory Volume, 8th edition, by Alfred Marshall

I recently noticed on Amazon that the negative reviews of Alfred Marshall's classic Principles of Economics tended to focus on the writing, charging that Marshall was an inefficient, overly wordy writer, and the book a tough read in comparison with other classic treatises like Adam Smith's The Wealth of Nations, or even John Stuart Mill's The Principles of Political Economy (themselves not exactly light reading, with Mill's book, like Marshall's, a textbook, with all that implies for someone trying to read it straight through). I must admit that I found the book something of a slog myself--enough so that I gave some thought as to why this was the case.

As it happens, the essential concepts Marshall covers--cost and utility, marginal and otherwise; elasticity and substitution; equilibrium; diminishing and increasing returns--and their application to the production, consumption and distribution of wealth and income by way of supply and demand--are in themselves simple, intuitive and apt to be familiar to anyone who would pick up Marshall today. And Marshall can write with great lucidity and conciseness, not only when recounting history (as he does with great polish in the appendices), but when dealing with more intricately theoretical questions (as in his admirable summing up of scientific method in the early part of Book I, Chapter III). Additionally the gloss and summary chapters offer ample assistance for anyone who loses the thread of the discussion along the way.

However, the material being covered--an economic theory based on utility--is still rather more abstract and less "common sense" or empirical than Smith or Mill's earlier work. Its presentation is also more difficult than Stanley Jevons' earlier, founding exposition of marginal utility (he called it "final utility") in The Theory of Political Economy, which was likewise heavily reliant on long verbal descriptions of microeconomic models accompanied by a great abundance of algebraic equations and graphical demonstrations of the principles and examples discussed. The reason is that where Jevons merely presented the principal line of thought, and with relative brevity, Marshall devotes many hundreds of pages to exhaustively working out the implications, in the process creating (in Marshall's own words) not a "few long chains of reasoning," but a mass of "many short chains and single connecting links" that sacrifice clarity for completeness as they bury the reader in hypothetical yet gory details. (In fact, in Book V, Marshall himself tells the majority of his readers to skip over the section's second half--altogether some 77 pages, equal to an eighth of the main text!)

Perhaps especially given the theory's "elegance," this exhaustive working out of the implications may be less essential, and less rewarding, than it would be with less compact, less tidy theories. And of course, the superabundance of mathematical content accompanying the exhaustive verbal explanation merely conveys the same information as the verbal descriptions in a different manner. Bluntly put--if one gets the verbal descriptions, one finds that working through the equations, curves and schedules doesn't really add anything to what one already knew. This is of course the norm for economics, but still, the meticulous do it instinctively, and here are bound to do it frequently and at length.

Accordingly, after plodding through these hundreds of very densely written pages, a reader already well-acquainted with these ideas (so much more widely diffused now than in Marshall's time) is apt to feel that they have not got much return on their effort. Knowing this does not make what may feel like an obligatory task for a student of economic theory or history any easier, but it is better to understand why a book is "difficult" than simply to complain of its being hard, and if one can make allowance for all that, there is a good deal to admire here. Dated and problematic as the "Marshallian" view of economics is in many ways for readers from across the ideological spectrum (he was very much a Victorian, strongly inclined to orthodoxy), Marshall, like most thinkers whose names get thrown around this way, was a rather more rigorous, nuanced thinker than his adherents, conscious of at least some of the limits of his analysis. His legacy doubtless includes some harmful misperceptions, the term he chose to use in the very title of the book--Principles of Economics (in place of the earlier term "Political Economy")--reflecting the tendency to treat economics as if it were physics, making the economic process seem "depoliticized" and above social constructs, rather than an arrangement grounded "in the contingent historical and political requirements of the prevailing social order."

Still, Marshall recognized the unavoidable imprecision of a "science of human nature" relative to classical mechanics (he compared it with the tides, not gravitation); the reality that economics derives its interest from the practical questions it addresses, and must be based on the hard facts of real life (no economic "Scholasticism" or anti-empiricism for him); the fact that people live in society, which is more than the sum of individuals, and that collective action is a thing that has to be analyzed as more than individual action writ large; that the role of time must be considered, and not simply the maximization of values but the cultivation of productive power must be taken into account in fostering national well-being; that in actual life labor is not simply another commodity, however much it may suit some to treat it that way, and that even those most narrowly committed to growth along orthodox lines cannot avoid facing the Social Question--he acknowledges all these early on, and endeavors to adhere to them subsequently. Of course, the conclusions to which these realizations lead will not satisfy those whose economic views tend in other directions, but it does show up any claim that a doctrinaire model-monger adequately lives up to Marshall's precedent, or the best in his work.

Friday, November 24, 2017

David Walsh on the Oscar Contenders

Earlier this month I mentioned David Walsh's review of Django Unchained, and his remarks on Zero Dark Thirty. Interestingly, just two days before the Academy Awards ceremony, he produced a follow-up to these reviews in which he extends his analysis of the weaknesses of those films, and also of why they have attracted such enthusiasm among the "pseudo-left" (in comparison with, for instance, Steven Spielberg's Lincoln), noting that this
well-heeled social layer, conditioned by decades of academic anti-Marxism, identity politics and self-absorption, rejects the notion of progress, the appeal of reason, the ability to learn anything from history, the impact of ideas on the population, mass mobilizations and centralized force. It responds strongly to irrationality, mythologizing, the "carnivalesque," petty bourgeois individualism, racialism, gender politics, vulgarity and social backwardness.
This is as succinct and forceful a description of the politics of postmodernism, and the way in which it manifests itself in the taste for a certain kind of "dark and gritty" storytelling, as I have seen in some time.

You can read Walsh's full remarks here.

Also of interest to those who follow this line of cultural criticism: David Walsh's review of the recent film Not Fade Away, in which he debunks the romanticizing of the 1960s as a "golden age in which everything seemed possible and revolution was in the air."

Monday, September 18, 2017

Reassessing Kurzweil's 2009 Predictions

Raymond Kurzweil's 1999 book The Age of Spiritual Machines remains a touchstone for futurologists and their critics not only because of Kurzweil's continuing presence within the information technology and futurology worlds, but because the book offered multiple, lengthy, even comprehensive lists of forecasts. Many of his forecasts were about subjects far outside his realm of expertise (macroeconomics, geopolitics, military science, etc.), and unsurprisingly were commonplaces, but a good many of them were precise technological forecasts lending themselves to testing against the day's technological state of the art--and thereby to checking the progress of the technologies with which the book was concerned, and perhaps, also our progress in the direction he claimed we were going, toward a technological Singularity.

Naturally a good many observers (myself included) reviewed Kurzweil's 1999 predictions for 2009 when that year arrived. Different writers, depending on which forecasts they chose to focus on, and how they judged them, came to different conclusions about his accuracy. I emphasized those precise, easily tested technological forecasts, and saw that many had not come to pass. I noted, too, that while no single factor accounted for each and every error, there was a recognizable pattern in a good many of them, namely that Kurzweil assumed advances in neural networks enabling the kind of "bottom-up" machine learning that made for better and improving pattern recognition, as with the speech recognition supposed to make for the Language User Interfaces we never really got.

Looking back on those predictions from almost a decade on, however, it seems worth remarking that the improvement in neural nets was indeed disappointing in the first decade of the twenty-first century--but got a good deal faster in the second decade. And right along with it there has been improvement in most of the areas he seemed to have been overoptimistic about, like translation technology. Indeed, as one of those who looked at his 1999 predictions for 2009 and was underwhelmed, I find myself increasingly suspecting that Kurzweil was accurate enough in guessing what would happen, and how, but, due to overoptimism about the rate of improvement in the underlying technology, was off the mark as to when--by roughly a decade.

Moreover, this does not seem to have been the only area where this was the case. The same may go for the use of carbon nanotubes, another area where, after earlier disappointments, technologists this decade achieved successes that may plausibly open the way to the new designs of 3-D chips he described, permitting vast improvements in computer performance. (As it happens, Kurzweil guessed that by 2019 3-D chips based on a nanomaterial substrate would be the norm. If IBM is right, he will have been off by rather less than a decade in this case.) It may also go for virtual reality--which likewise seems to be running a rough decade behind his guesses.

The advance in all these areas has, along with a burst of progress in other areas in which Kurzweil took much less interest (like self-driving cars of a different type than he conceived, and electricity production from renewable sources), contributed to a greater bullishness about these technologies--a techno-optimism such as I do not think we have had since the heady years of the late 1990s when access to personal computing, the Internet and mobile telephony began the proliferation and convergence that has people watching movies off Netflix on their "smart" phones while sitting on the bus.1

Of course, it should still be remembered that much of what has been talked about here is not yet an accomplished fact. The advances in AI have yet to make for proven improvements in consumer goods, while those 3-D nanotube-based chips have yet to enter production. At best, the first true self-driving car will not hit the market until 2020, while renewable energy (hydroelectric excepted) is just beginning to play a major role in the world's energy mix. Moreover, that the recent past has seen things pick up does not mean that the new pace of progress will remain the new norm. We might, in line with Kurzweil's expectation of accelerating returns, see it get faster still--or slow down once more.

1. Kurzweil's self-driving cars were based on "intelligent roads," whereas the cars getting the attention today are autonomous.

Review: The Wages of Destruction: The Making and Breaking of the Nazi Economy, by Adam Tooze

New York: Penguin, 2006, pp. 799.

I remember that, reading Paul Kennedy's The Rise and Fall of the Great Powers back in college, I pored over the figures for Germany's World War II-era income and production and felt that something didn't quite fit. German arms output seemed a good deal less impressive than I expected given the country's image as an ultra-efficient manufacturing colossus, and the fact of its control over nearly the whole of the European continent.

The common explanation for this was political--that an irrational Nazi regime, for various reasons (its slowness to move to a total war footing out of fear of a repeat of the Revolution of 1918, a self-defeating brutality in its exploitation of occupied territories, Hitler's self-interestedly playing various interests off against each other at the expense of the war effort, etc.) failed to properly utilize its potentially overwhelming industrial capacity (at least, prior to the arrival of Albert Speer, who came along too late to accomplish very much).

By and large, I accepted this explanation.

More recently Adam Tooze's Wolfson Prize-winning study The Wages of Destruction: The Making and Breaking of the Nazi Economy offered a different view--that the question was not a matter of political failure of this type, but of the material limitations of the German economic base. Germany was simply not big enough or rich enough to hold its own against an alliance of the United States, the British Empire and the Soviet Union--the coalescence of which Tooze contends was virtually inevitable, so that Hitler's attempt to create a continental empire in the east in the face of their opposition was simply not explicable in any rational terms. (Indeed, he holds it to be comprehensible only in terms of Hitler's bizarre, conspiratorial, apocalyptic racial theories.)

To the end of making that case, Tooze devotes his book to a study of the German economy through the crucial period (1933-1945), less on the basis of new data than of old and well-known data (more or less like what I saw reading Kennedy) that has simply been marginalized in a great deal of analysis of the war. As he notes, Germany was the world's second-largest industrial power in the early twentieth century, well ahead of its European neighbors in manufacturing productivity and output, and "world-class" in several key heavy industrial and high-tech lines like steel, chemicals, machine tools and electrical equipment.

However, German manufacturing appears less impressive when one looks at the sector overall. While Germany still excelled its European neighbors in productivity when taken this way, its lead was slighter than might be imagined, so much so that the difference between Germany and, for example, Britain was negligible in comparison with the difference between Germany and the much more efficient (because much more "Fordist") United States (America getting twice as much per-worker output in many lines). There were even key heavy/high-tech lines where Germany remained a relatively small producer--like motor vehicles, with the volume of output of Germany's automotive industry far behind that of the U.S.

Moreover, manufacturing was just one sector of the economy--with the other sectors even worse off. The country's comparatively small, densely populated and less than resource-rich territory was problematic from the standpoint of its extractive sectors, which were also less productive than they might have been. The country's agricultural sector, notably, lagged not just the U.S. but even Britain in productivity, suffering from having too many of its workers on too little land divided into too many farms. All of this contributed not only to the country's per capita and national income being less than its manufacturing successes implied, but to the harsh reality that those manufacturing successes depended heavily on imports of food, energy and raw materials, to the point that the country's trade was scarcely balanced by its manufacturing exports.

The result was that not only was Germany poorer than is usually appreciated, but its resource-financial situation--differing from that of the U.S. (resource-rich) or Britain (resource-poor but financially powerful, with the trading privileges empire brings)--left it with less room than they had to export less or import more, as a major military build-up demanded. Moreover, the attempts to alleviate this constraint by developing domestic sources of imported materials like iron, and by inventing synthetic substitutes for the oil and rubber Germany could not produce locally, on top of entailing additional inefficiencies, fell far short of eliminating the problem, leaving it very vulnerable to the cut-off of essential imports--meaning something would have to give, fairly quickly.

This reading of the history is borne out by what actually did happen, the German government actually making ruthlessly efficient use of its domestic resources almost immediately on taking power--the facts about this refuting such clichés of the historiography as the unwillingness of the German government to sacrifice its people's living standards or put women to work for the sake of its military effort. (Indeed, already in the '30s the German government's system of central control of foreign exchange and key materials was gutting the production of consumer items like clothing in favor of armaments, while Germany had more of its women in the work force than Britain did.) However, for all that, Germany's pre-war preparations alone--its drive for maximum short-term output in the key military-industrial lines--translated to severe, unsustainable economic strain before the fighting even began, in such ways as the neglect of key infrastructure like the railroads, the price distortions caused by intricate central control, and the emergence of crises of cash flow and foreign exchange in spite of all the managerial feats and painful sacrifices. The inescapable recognition that the German economy by 1938 was a house of cards forced the government to curtail its rearmament programs at a point at which it was still nowhere near to being a match for Britain at sea, and disadvantaged in the face of combined Anglo-French air and ground forces in key respects (like numbers of up-to-date tanks), with the balance already swinging away from it to its opponents.1 And even then the cutbacks still left it in an economically precarious position.

Of course, the fighting did begin in 1939, and Germany did make successful conquests during the first year that did enlarge its resource base. However, in hindsight the conquests added less than might be imagined because of the limited size, resource bases and development of the countries it overran, not just the nations of Eastern Europe, but in Western Europe (Norway and Denmark, the Low Countries, and even France) as well.2 The result was that even the conquests fell far short of putting Germany on an even footing with the combined Big Three, the disparity still so wide as to moot many of the claims about how this or that more careful use of Axis resources by the Nazi leadership might have turned the tide of the war.3 Tooze reinforces this contention by showing that, for the most part, the Germans generally did what could be done with what was at their disposal (puncturing the "myth" of Albert Speer).4

A few questionable analogies apart (as when he compares Germany's economic development in the 1930s to that of mid-income countries like South Africa today), Tooze offers a comprehensive and convincing image of the German economy, and especially its critical weaknesses. Moreover, while he pays less detailed attention to the other side of the balance sheet (it seemed to me that he was insufficiently attentive not just to France's demographic and industrial vulnerabilities but to Britain's financial and industrial weaknesses and problems of military overstretch in the late '30s), he is quite correct to contend that Germany, and even a Germany in control of much of continental Europe, was at a grave disadvantage in a prolonged war with a U.S.-U.K.-Soviet alliance.

Nonetheless, this emphasis on the inability of Germany to hold its own against the combined Allied Big Three obscures a weakness much more important than the deficiencies of the economic analysis--namely his depiction of the anti-Nazi alliance's coming together the way it did as a virtual certainty, his case for which is superficial and conventional compared with his economic analysis.5 He slights the leeriness of the U.K. and France about allying themselves with the Soviet Union that actually did prevent their coming together in 1939, with disastrous consequences. (Even after Poland, French conservatives were much more interested in fighting the Soviets than fighting Hitler. Indeed, the response of many of them to French defeat appears to have been a George Costanza-like "restrained jubilation" at the prospect of dispensing with democracy and setting up a fascist state in William Shirer's account of the Third Republic's collapse.)

Continuing along these lines, Tooze then goes on to slight the reality that, after the shock of the Battle of France (which brought Italy into the war on Germany's side), the hard-pressed, cash-strapped U.K. might have sought a negotiated peace, or that isolationism might have won the day in the United States, depriving Britain of American support. (In its absence the country would have had to settle with Germany--and the U.S. would have had few plausible options against Germany afterward, in spite of Germany's failures in the Battle of Britain and its naval limitations.) These developments would not have changed Germany's asset base--but they would have taken Britain out of the war, and likely kept Germany from having to fight the United States as well, leaving it to focus on just the Soviet Union, which would have changed the balance between them dramatically. Equally Tooze does not acknowledge, let alone refute, the possibility that in spite of the balance of power being so heavily against it as it was different German military decisions (most obviously "the Mediterranean strategy") could have produced a different outcome.

Additionally, these oft-studied "What ifs" aside, it is worth remembering the way the war did go. After Britain decided to stick it out against Germany, American support was exceedingly slow in coming, with Britain effectively bankrupt by March 1941, before the aid spigot was turned on fully. Moreover, even with Britain still against it, even after a delay of one crucial month due to the German invasion of the Balkans, mid-course changes to the plan of attack that cost additional time, and famously inimical weather, German troops managed to come within sight of Moscow before the invasion's first really significant repulse. And even afterward a depleted Soviet Union remained vulnerable (while any Anglo-American "second front" remained a long way off), permitting Germany to continue taking the offensive through the year, up to and even after the turning point of Stalingrad--following which it was another two-and-a-half years of fighting before V-E Day.

One should also remember the cost of this protracted effort. The Soviet Union lost some twenty-five million lives. Britain lost what remained of the foundations of its status as a world empire, a great power, and even an independent force in world affairs (in such a weak position that it had to take a $4 billion U.S. loan immediately after the war). The U.S. did not suffer anything comparable, but it is worth remembering that its extraordinary industrial effort went far beyond what even the optimists thought possible at the time--and was a thing the Allies could not have taken for granted in the earlier part of the war, even after the American entry.

Consequently, while in hindsight it appears clear that Germany's chances of attaining anything resembling its maximum war aims were never more than low (indeed, would probably be deemed delusional but for the astonishing Allied timidity and incompetence in the key September 1939-May 1940 period), the attainment of those aims became increasingly implausible again after the failures of German forces against the Soviet Union in late 1941, and virtually inconceivable after 1942 came to a close--but a victory so long in coming, so horrifically costly and so deeply exhausting cannot be regarded with the complacency Tooze shows. Indeed, in his failure to critically examine this side of the issue, it appears that while Tooze takes on, and debunks, one myth of World War II (of German economic might), he promotes others that have themselves already been debunked--most importantly, that of the political establishments of the future members of the wartime United Nations having been steadfast opponents of Nazism from the very beginning.6

1. The initial plans had called for an air force of 21,000 planes when Germany went to war. The Luftwaffe never had more than 5,000, however.
2. Paul Bairoch's oft-cited 1982 data set on world manufacturing output through history gives Germany 13 percent of world industrial capacity. All the rest of non-Soviet, continental Europe together was another 13 percent, widely dispersed among its various nations, not all of which Germany controlled (Sweden and Switzerland remained independent), while the productivity of Germany and occupied Europe was hampered by the cut-off from imported inputs by the Allies' control of the oceans. By contrast, Bairoch's data gives the U.S. alone 31 percent of world manufacturing output in 1938 (more than all non-Soviet Europe combined).
3. To return to Bairoch's figures, the U.S. in coalition with the U.K. (another 11 percent) and the Soviet Union (another 9 percent) possessed 51 percent of world manufacturing capacity--a 4-to-1 advantage over Germany, and a 2-to-1 advantage over all of non-Soviet, continental Europe combined (independent states that merely traded with Germany included). Moreover, even these figures understate the margin of Allied advantage, given how the U.S. economy boomed during the war years (growing by about three-quarters), and how Britain brought its broader Empire, with its additional resources, into the alliance. Considering access to the food, energy and raw materials needed to keep all that capacity humming would seem likely to widen the disparity rather than diminish it.
In fairness, this was somewhat offset by two factors: the German conquest of much of the western Soviet Union, and the absorption of a portion of Allied resources by the simultaneous war in the Asia-Pacific region. However, much of the Soviet industrial base in the conquered parts was hurriedly relocated eastward ahead of the German advance, and contributed to the Soviet productive effort (with more still denied the Germans). At the same time the diversion of Allied resources by the Pacific War was limited by Japan's more modest military-industrial capability (it had just 5 percent of world manufacturing in 1938), China's weight on the Allied side (much of the Japanese army and air force tied up fighting against the Chinese), and the Allies' prioritization of the war in Europe (the "Europe First" or "Germany First" policy).
4. Tooze contends that the rise of German output in the last part of the war was largely the result of decisions taken earlier in it, and Speer simply around when those decisions paid off in the form of production gains. Tooze also offers a critical view of those initiatives Speer did undertake, like the effort to mass-produce U-boats late in the war.
5. While the critique of German strategy is solid, Tooze treats Allied strategy much less critically--a particular weakness when it comes to his assessment of the Combined Bomber Offensive.
6. The body of work regarding Britain in this respect can be found distilled in Clive Ponting's impressive 1940: Myth and Reality.

Reading Revisionism

It seems to me that I have been running across a great deal of revisionist World War II-era economic history as of late, focused primarily on how the balance stood between two of the war's major participants--Britain and Germany.

The old view was that the British Empire was stodgy and enfeebled, and looked it when it went up against sleek, ultra-modern, ultra-efficient Germany.

Much recent historiography has taken a different view--that Germany was not really so tough, nor Britain's performance so shabby. That it was really Germany that was the economic underdog against the trading and financial colossus--and, for all its dings, the techno-industrial colossus--Britain happened to be.

Of course, there is nothing wrong with revisionism as such. New information, and the availability of new ways of looking at it, should prompt rethinking of old perceptions and assumptions. However, revisionism also has a way of pandering to the fashion of the times--and this seems to be one of those cases.

This line of argument seems to say that Britain didn't do so badly out of free trade. That the country's weaknesses in manufacturing just didn't matter that much--that the emphasis on finance instead was just fine. That, by implication, the conservative establishment did just fine, overseeing the economy, and managing the country's foreign affairs, and then fighting the war--not needing any help from a bunch of lefties and Laborites, thank you very much, and certainly not disgracing itself the way they claim it had clearly done post-Dunkirk.

It seems more than coincidence that this is all very congenial to a conservative outlook--in particular the one that today says free trade, financialization, deindustrialization, the balance of payments (and all the rest of the consequences the neoliberal turn has had for the economies of the developed world) simply do not matter, that our future can be trusted to economic orthodoxy, that critical and dissenting views have nothing to offer. And also the one that says Britain can do just fine on its own, without any allies, continental or otherwise.

It is no more convincing in regard to the past than it is in regard to the present.

Still, even if one doubts the more extreme claims (there is simply no refuting Britain's failures as a manufacturing and trading nation), there is a considerable body of hard fact making it clear that Germany had some grave weaknesses--and Britain, some strengths that have to be acknowledged in any proper appraisal of the way the war was fought. So it goes in two books I have just reviewed here--Adam Tooze's study of the German economy in the Nazi period, The Wages of Destruction; and David Edgerton's more self-explanatorily titled Britain's War Machine.

Friday, August 18, 2017

Animated Films at the U.S. Box Office, 1980-2016: The Stats

Considering the stats on the action film (such films now typically six of the ten top-grossing films in a given year), I naturally found myself wondering--what about the other four? Naturally I thought of animated films--and again noticed a trend that exploded a little later. In the '80s there was not a single animated feature in the top ten of any year, apart from the partial exception of Who Framed Roger Rabbit? (1988). (Even the celebrated The Little Mermaid only made the #13 spot.) However, starting with 1991's Beauty and the Beast it became common in the '90s for there to be at least one such film in the top ten (six years had at least one, for an average of 0.8 a year), more standard still in the 2000s (1.7 films a year, with 2008 having four such movies), and yet more so in the 2010s. Every single year from 2010 on has had at least one, every year but 2011 at least two, and 2010 an amazing five of the top ten--clearly at the expense of the action movies, which had an especially poor showing that year.

In the 2010s that means that, between them, the action movies and the animated films account for eight to nine of the top ten movies in any given year--the uppermost reaches of the box office monopolized by two genres in a way that, so far as I can tell, no two genres have ever managed before. The one or two spots remaining? One is often taken up by something similar--like a live-action version of an animated hit of yesteryear (2014's Maleficent or 2015's Cinderella)--leaving at most one other for any and everything else, from adult-oriented comedies (like 2012's Ted) to the occasional mainstream drama (like 2014's American Sniper).

Feel free to check my data (and my math) for yourself.

The Data Set
1980-1989: 0.
1990-1999: 8. (0.8 a year.)
1991-1: Beauty and the Beast.
1992-1: Aladdin.
1994-1: The Lion King.
1995-2: Toy Story, Pocahontas.
1998-1: A Bug's Life.
1999-2: Toy Story 2, Tarzan.
2000-2009: 17 films (1.7 a year).
2000-0.
2001-2: Shrek, Monsters Inc.
2002-1: Ice Age.
2003-1: Finding Nemo.
2004-3: Shrek 2, The Incredibles, The Polar Express.
2005-1: Madagascar.
2006-3: Cars, Happy Feet, Ice Age 2.
2007-1: Shrek 3.
2008-4: WALL-E, Kung Fu Panda, Madagascar: Escape 2 Africa, Dr. Seuss' Horton Hears a Who.
2009-1: Up.
2010-2016: 19 films (2.7 a year).
2010-5: Toy Story 3, Despicable Me, Shrek 4, How to Train Your Dragon, Tangled.
2011-1: Cars 2.
2012-2: Brave, Madagascar 3.
2013-3: Frozen, Despicable Me 2, Monsters University.
2014-2: The LEGO Movie, Big Hero 6.
2015-2: Inside Out, Minions.
2016-4: Finding Dory, The Secret Life of Pets, Zootopia, Sing.
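Since the per-decade figures are just averages of the per-year counts above, they are easy to re-derive. Here is a minimal Python sketch for doing so; the `animated` dictionary and `decade_average` helper are my own constructions, simply transcribing the counts from the data set (years with no entry count as zero).

```python
# A quick sanity check on the per-decade averages quoted above. The
# `animated` dictionary is my own transcription of the per-year counts
# from the data set (titles omitted); years with no entry count as zero.
animated = {
    1991: 1, 1992: 1, 1994: 1, 1995: 2, 1998: 1, 1999: 2,            # 1990s
    2001: 2, 2002: 1, 2003: 1, 2004: 3, 2005: 1,
    2006: 3, 2007: 1, 2008: 4, 2009: 1,                              # 2000s
    2010: 5, 2011: 1, 2012: 2, 2013: 3, 2014: 2, 2015: 2, 2016: 4,   # 2010s
}

def decade_average(counts, start, end):
    """Average films per year over [start, end], inclusive."""
    total = sum(counts.get(year, 0) for year in range(start, end + 1))
    return round(total / (end - start + 1), 1)

print(decade_average(animated, 1990, 1999))  # 0.8
print(decade_average(animated, 2000, 2009))  # 1.7
print(decade_average(animated, 2010, 2016))  # 2.7
```

Running it reproduces the 0.8, 1.7 and 2.7 figures quoted in the post (the last decade averaged over its seven years, 2010-2016).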

Wednesday, July 5, 2017

The Action Film Becomes King of the Box Office: Actual Numbers

It seems generally understood that the action movie took off in the '80s, and, while going through some changes (becoming less often R-rated and more often PG-13-rated, setting aside the cops and commandos to focus on fantasy and science fiction themes), captured a bigger and bigger share of the box office until it reached its predominant position today.

Still, this is something people seem to take as a given without generally checking it for themselves, and so I decided to do just that, looking at the ten highest-grossing films of every year from 1980 to the present, as listed by that ever-handy database, Box Office Mojo.

Of course, this was a trickier endeavor than it might sound. I had to decide, after all, just what to count as an action film. In my particular count I strove to leave aside action-comedies where the accent is on comedy (Stakeout, Kindergarten Cop), sports dramas (the Rocky series), suspense thrillers (Witness, Silence of the Lambs, Basic Instinct), war films (Saving Private Ryan, Pearl Harbor), historical epics (Titanic) and fantasy and science fiction movies (Harry Potter) that may contain action but strike me as not really being put together as action films. I also left out animated films of all types, even when they might have qualified (The Incredibles), to focus solely on live-action features. Some "subjectivity" was unavoidably involved in determining whether or not films made the cut even so. (I included Mr. and Mrs. Smith in the count for 2005. Some, in my place, probably would not.) Still, I generally let myself be guided by Box Office Mojo when in doubt, and I suspect that, especially toward the latter part of the period, I undercounted rather than overcounted, and in the process understated just how much the market had shifted in the direction of action films (that being, to be frank, the side I thought safer to err on).

Ultimately I concluded that 2.2 such films a year were average in the '80s (1980-1989), 3.6 in the '90s (1990-1999), 4.8 in the '00s (2000-2009), and 5.4 for the current decade so far (2010-2016). Moreover, discarding the unusual year of 2010 (just 2 action movies), a half dozen action films in the top ten has been the norm for the last period (2011-2016). This amounts not just to a clear majority of the most-seen movies, but close to triple the proportion seen in the '80s.

Moreover, the upward trend was steady through the whole period. It does not seem insignificant that, while two a year were the average for the '80s, 1989 was the first year in which the three top spots were all claimed by action movies (Batman, Indiana Jones and the Last Crusade and Lethal Weapon 2 at #1, #2 and #3 respectively). Nor that no year after 1988 had fewer than two such movies in the top ten, and only 3 of the 28 years that followed (1991, 1992 and 2010) had under three. Indeed, during the 2000-2009 period not one year had fewer than four movies of the type in its top ten, and after 2010, not one year had under six.

Especially considering that I probably undercounted rather than overcounted, there is no denying the magnitude of the shift by this measure. And while at the moment I have not taken an equally comprehensive approach to the top 20 (which would provide an even fuller picture), my impression is that it would reinforce this assessment rather than undermine it. If we look at the top 20 for 1980, we still see just one action movie--The Empire Strikes Back. But if we do the same for 2016 we add to the six films in the top ten four more--Doctor Strange, Jason Bourne, Star Trek Beyond and X-Men: Apocalypse (even while leaving out such films as Fantastic Beasts and Where to Find Them and Kung Fu Panda 3).

Anyone who cares to can check my counting below and see if they come up with anything much different.

The Data
1980-1989: 22 films (2.2 a year).
1980-1: The Empire Strikes Back.
1981-2: Raiders of the Lost Ark and Superman II.
1982-2: Star Trek 2 and 48 HRS.
1983-3: Return of the Jedi, Octopussy, Sudden Impact.
1984-3: Beverly Hills Cop, Indiana Jones, Star Trek 3.
1985-2: Rambo 2, Jewel of the Nile.
1986-2: Top Gun, Aliens.
1987-3: Beverly Hills Cop 2, Lethal Weapon, The Untouchables.
1988-1: Die Hard.
1989-3: Batman, Indiana Jones, Lethal Weapon 2 (all in the top 3 slots, the first time this ever happened).

1990-1999: 36 films (3.6 a year).
1990-4: Teenage Mutant Ninja Turtles, The Hunt for Red October, Total Recall, Die Hard 2.
1991-2: Terminator 2, Robin Hood: Prince of Thieves.
1992-2: Batman 2, Lethal Weapon 3.
1993-4: Jurassic Park, The Fugitive, In the Line of Fire, Cliffhanger.
1994-3: True Lies, Clear and Present Danger, Speed.
1995-5: Batman 3, Goldeneye, Die Hard 3, Apollo 13, Jumanji.
1996-4: Independence Day, Twister, Mission: Impossible, The Rock.
1997-5: Men in Black, The Lost World, Air Force One, Star Wars (Episode IV reissue), Tomorrow Never Dies.
1998-4: Armageddon, Rush Hour, Deep Impact, Godzilla.
1999-3: Star Wars: Episode I, The Matrix, The Mummy.

2000-2009: 48 films (4.8 a year).
2000-4: Mission: Impossible 2, Gladiator, A Perfect Storm, X-Men.
2001-5: The Lord of the Rings (LOTR) 1, Rush Hour 2, Mummy, Jurassic 3, The Planet of the Apes.
2002-4: Spiderman, Star Wars: Episode II, LOTR 2, Men In Black 2.
2003-6: LOTR 3, The Pirates of the Caribbean, The Matrix 2, X-Men 2, Terminator 3, The Matrix 3.
2004-4: Spiderman 2, The Day After Tomorrow, The Bourne Supremacy, National Treasure.
2005-5: Star Wars: Episode III, The War of the Worlds, King Kong, Batman Begins, Mr and Mrs. Smith.
2006-4: Pirates of the Caribbean 2, X-Men: The Last Stand, Superman Returns, Casino Royale.
2007-7: Spiderman 3, Transformers, Pirates of the Caribbean 3, National Treasure 2, The Bourne Ultimatum, I Am Legend, 300.
2008-5: The Dark Knight, Indiana Jones and the Kingdom of the Crystal Skull, Iron Man, Hancock, Quantum of Solace.
2009-4: Avatar, Transformers 2, Star Trek, Sherlock Holmes.

2010-2016: 38 films (5.4 a year).
2010-2: Iron Man 2, Inception.
2011-6: Transformers 3, Pirates 4, Fast Five, MI 4, Sherlock 2, Thor.
2012-6: The Avengers, The Dark Knight Rises, Hunger Games, Skyfall, Hobbit 1, Spiderman.
2013-6: Hunger Games 2, Iron Man 3, Man of Steel, Hobbit 2, Fast 6, Oz the Great and Powerful.
2014-6: Hunger Games 3, Guardians of the Galaxy, Captain America 2, Hobbit 3, Transformers 4, X-Men: Days of Future Past.
2015-6: Star Wars, Jurassic, Avengers 2, Furious 7, Hunger Games 4, Spectre.
2016-6: Rogue One, Captain America III, Jungle Book, Deadpool, Batman vs. Superman, Suicide Squad.
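For anyone who does care to check the math, a minimal Python sketch follows; the `action` list and `span_stats` helper are my own constructions, simply transcribing the per-year counts from the data above (this checks my arithmetic, not my counting).

```python
# A quick re-check of the totals and averages quoted above, plus the claim
# that no year after 1988 had fewer than two action films in the top ten.
# The `action` list is my own transcription of the per-year counts.
action = [
    (1980, 1), (1981, 2), (1982, 2), (1983, 3), (1984, 3),
    (1985, 2), (1986, 2), (1987, 3), (1988, 1), (1989, 3),
    (1990, 4), (1991, 2), (1992, 2), (1993, 4), (1994, 3),
    (1995, 5), (1996, 4), (1997, 5), (1998, 4), (1999, 3),
    (2000, 4), (2001, 5), (2002, 4), (2003, 6), (2004, 4),
    (2005, 5), (2006, 4), (2007, 7), (2008, 5), (2009, 4),
    (2010, 2), (2011, 6), (2012, 6), (2013, 6), (2014, 6),
    (2015, 6), (2016, 6),
]

def span_stats(data, start, end):
    """Total films and average per year over [start, end], inclusive."""
    counts = [n for year, n in data if start <= year <= end]
    return sum(counts), round(sum(counts) / len(counts), 1)

print(span_stats(action, 1980, 1989))  # (22, 2.2)
print(span_stats(action, 1990, 1999))  # (36, 3.6)
print(span_stats(action, 2000, 2009))  # (48, 4.8)
print(span_stats(action, 2010, 2016))  # (38, 5.4)
print(all(n >= 2 for year, n in action if year > 1988))  # True
```

The totals and averages match the figures in the section headers, and the final line confirms that every year after 1988 had at least two such films in the top ten.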

Friday, June 16, 2017

The Superhero Film Gets a Makeover

As regular readers of this blog (all two of you, unless I'm miscounting by two) know, I have been watching the superhero movie bubble for years expecting it to pop. Of course it hasn't, so far--this, seventeen summers after Bryan Singer's X-Men (2000) (which in turn came a decade after 1989's Batman became the biggest hit of the year, and its franchise the most successful of the 1989-1995 period). And the conventional wisdom seems to be that this longest-running of action movie fashions can go on indefinitely.

I'm less sure of what to think than before. The studios have slates packed with superhero films through 2020, and by now probably beyond it as well, and at this moment I wouldn't care to bet on their shelving those plans because of an untoward change in the market--the less so given three approaches which are proving commercially viable.

R-Rated Superhero Movies
Of course, there have been plenty of R-rated superhero movies before--like the Blade movies, and Wanted and Watchmen. The difference was that they tended not to be about first-string characters, or to get first-rate budgets--the modest investment justified by their small prospect of first-rate grosses.

So did it also go with Deadpool, who was not a first-string character (Deadpool reminds us of this himself, rather crudely, in the early part of the film), and who didn't get the really big budget (Deadpool gabs endlessly about this too). Rather than original or fresh or subversive (it had nothing on Watchmen here, or even the 2015 reboot of Fantastic Four), it struck me as a mediocre second-stringer, notable only for how heavily it relied on tired independent film-type shtick. Still, the film exploded at the box office ($363 million in North America, nearly $800 million worldwide--a feat the more impressive for its not having played in the ever-more important Chinese market), and has already had a sort of follow-up in a genuine first-stringer--Wolverine--getting an R-rated film, Logan, earlier this year. (The budget was, again, limited next to the full-blown X-Men movies at under $100 million, but the character and the budget were a bigger investment than Deadpool represented, and with over $600 million banked it seems likely to encourage even bolder moves of this kind later.)

What's going on here?

One possibility is a change in the makeup of the market. It seems that R-rated films (and not just raunchy comedies), while far from where they were in the '80s and even '90s in terms of market share, have done better recently than at any time this century, and superhero films have reflected the turn. (In 2014, American Sniper topped the box office and Gone Girl also did well, while 2015 saw The Revenant and Fifty Shades of Grey become top 20 hits at the American box office, and Mad Max: Fury Road claim the #21 spot.) I might add that reading the Box Office Guru's biweekly reports, I'm struck by how much the audience for recent films has been dominated by the over-25 crowd--younger people perhaps going to the movies less often. (Living their lives online, while having less access to cars and less money in general, they don't, I suspect, go to the theater as much as they used to.) This might make an R-rating less prohibitive than it used to be for an action film producer, perhaps especially in this case. By this point anyone who is twenty-five probably has little memory of a time when the action genre was not dominated by superheroes. They grew up on superheroes, are used to superheroes--and so R-rated films about people with unlikely powers dressed in colorful spandex seem less silly to them, not such a tough sell as they would have been once upon a time.

Woman-Centered Superhero Movies
Just as we have had plenty of R-rated superhero movies in the past, we have had plenty of superhero films with female protagonists. The difference was that, as I recently wrote here, while there were movies with female superhero protagonists (Elektra, for example), and first-string superhero movies prominently including female superheroes in their casts (Black Widow in the Avengers), the flops of the early 2000s left Hollywood discouraged about presenting really first-string superhero movies centered on female protagonists. The grosses of YA dystopia films like The Hunger Games, and of Lucy and Mad Max: Fury Road, have made the studios readier to go down this road, however--and the success of Wonder Woman is reinforcing this. I suspect that at the very least the trend will endure for a while, if not prove here to stay.

Mega-Franchises
Where the decision to make big-budget films centered on female superheroes is a case of the superhero movie following trends set by others, in this case the superheroes have led the way. Marvel Studios gambled big and won big with the larger Marvel Cinematic Universe, which regularized Avengers-style grosses to such a degree that even an Iron Man or Captain America sequel could deliver them. Inspired by this course Warner Brothers gambled (but has not yet won big) on a comparable Justice League franchise. Since then Disney (Marvel's current owner) has given Star Wars the same treatment (and so far, won big again), while Universal, not to be left behind, has decided to do the same with a film franchise based on its classic movie monsters. (The first effort, the recent reboot of The Mummy, is a commercial disappointment, but as Warner Brothers demonstrated in plowing ahead with its Justice League despite the reservations about Man of Steel and Batman v Superman, the project is too big to be shut down by a single setback.)

A credulous postmodernist might gush at the possibilities for intertextuality. But the reality is that this is of principally commercial rather than artistic significance--as is the case with the other two trends discussed here. Indeed, by upping the commercial pressure (studios now want not a series delivering a solid hit every two or three years on the strength of the built-in audience, but a hit machine delivering record-breaking blockbusters once or twice a year), the greater synchronization and higher financial stakes would seem likely to tie artists' hands to a degree that those who still lament the decline of New Hollywood can scarcely fathom as yet. Still, superficial as most of this is (we are not talking about a reinvention of superhero films here), the success of Deadpool says something about how little it might take to keep the boom going longer than the decades it has already managed.

Thoughts on the Wonder Woman Movie Actually Happening

As is well known by now to anyone who pays much attention to films of the type, DC got the Wonder Woman film made, and got it out this summer, and it has already pulled in enough money to be safely confirmed as a commercial success. (Its $460 million is just over half the $873 million the "disappointing" Batman v Superman made back in early 2016, and the final tally will probably fall well short of that figure--the Box Office Guru figuring something on the order of $750 million. But it was not quite so big an investment, making for a very healthy return.)

In the process DC has realized what I described a few years ago as a long shot.

Of course, quite a lot has happened since--some of it, what seemed to me to be prerequisites for a Wonder Woman film. Warner Brothers firmly committed itself to a Justice League mega-franchise comparable to the Marvel Cinematic Universe, and used the "backdoor" Justice League movie Batman v Superman: Dawn of Justice to introduce new characters, Wonder Woman included.

There has, too, been something of a resurgence in big-budget action movies with female protagonists. Again, the female action hero never went away in the intervening years. There were still plenty of really high-profile, big-budget movies featuring action heroines--if only as part of an ensemble, like Black Widow (featured in five movies to date). There were still plenty of second-string action movies with female leads getting by on lower budgets and lower grosses--like the films of the Underworld and Resident Evil franchises (which have continued up to the present).

What there wasn't a lot of was movies combining a first-string production with a female lead between the early 2000s (when the underperformance of the Charlie's Angels and Tomb Raider sequels in the summer of 2003, Catwoman winding up a flop in 2004, and Aeon Flux becoming another disappointment in 2005 put studios off) and the middle of this decade, when they began to crop up again. The Hunger Games exploded (2012) and was followed up by its mega-budgeted sequels (2013, 2014, 2015), with the initially cautious but later bigger-budgeted Divergent films (2014, 2015, 2016) and the $150 million Mad Max: Fury Road (2015) (along with the low-budget but high-grossing Lucy in 2014) reinforcing the trend. Still, even if all this makes clear what an exaggeration the claims for Wonder Woman as something unprecedented are (both by the studios and the sycophantic entertainment press), it has some claim to looking like a watershed moment.

That said, the question would seem to be whether such films will now be a commonplace of the cinematic landscape, or prone to the kind of boom and bust seen in the early 2000s. What do you think?

Saturday, June 3, 2017

On the Historiography of Science Fiction: Info-Dumping and Incluing

When it comes to "info-dumping" and "incluing," a considerable current of thought about science fiction hews to the standards of mainstream literature, down to the misconceptions and prejudices summed up in the truism "Show, don't tell." Simply put, info-dumps (telling) are regarded as bad, incluing (showing) as good; and the fact that we are more likely to do the latter than before is taken as a case of "Now We Know Better"--proof that we are better than our more enthusiastically info-dumping forebears. Additionally, John Campbell and his colleagues (like Robert Heinlein), while getting less respect from the more "literary" than they otherwise might, are still credited, at times lavishly, with "showing us the light" on this point.

However, Campbell was merely a dedicated and influential promoter of incluing, rather than its inventor. Others had done it before he and his writers came onto the scene. For example, E.M. Forster did so in his short story "The Machine Stops" (1909), and perhaps inspired by this example, so did one of the genre's founding fathers in one of its foundational works, Hugo Gernsback in Ralph 124 C 41+ (1911) (at least in his account of Ralph's use of the Telephot, which seems to echo the opening of Forster's own story). Indeed, the first chapter of Gernsback's oft-mentioned but rarely read classic uses the technique so heavily--and to my surprise, artfully--that it can be treated as a model for doing so, presenting us with a barrage of obliquely treated technological novelties in the first meeting between the titular hero and his love interest, Alice, that manages to be packed with action and novelty, but also lucidly conceived and smoothly written.

Yet Gernsback shifted to info-dumping for the rest of the text. One reason, obviously, might be that info-dumping--telling--is simply the easier mode from a technical standpoint, setting aside the iron cage of what we call "style," and thus saving an author from having to spend five days on the same page while searching for "le mot juste." However, this does not seem to have been the sole concern. There were particular reasons for the author to go in for incluing in that first chapter--above all, to keep cumbersome explanations from getting in the way of the action. These concerns were less operative later in the text, especially as a big part of its interest was that mainstay of science fiction from at least Francis Bacon's The New Atlantis on, the Tour of the World of Tomorrow--on which it is appropriate to have a Tour Guide along.

Indeed, Gernsback was a strong believer in the value of a good info-dump; as an editor at Amazing Stories he strongly encouraged his writers to produce them. Simply put, there are things that cannot be shown intelligibly or concisely, only told, but which are worth conveying anyway--worth info-dumping when one cannot inclue them--and while one may take issue with the use made of them in some of the fiction he edited, the principle is a valid one, in both science fiction and fiction generally (as H.G. Wells, appropriately, noticed and argued eloquently). Naturally, just as the author who always shows and never tells is a lot less common than the truism would have it (even Flaubert had to tell us some things straight out), so is it very rare to find a really substantial work of science fiction totally bereft of info-dumps.

Tell, Don't Show--Again

In his book How Fiction Works James Wood early on sings the praises of Gustave Flaubert as the founder of "modern realist narration," what is often glibly summed up as "show, don't tell" done right. This, as Wood remarks,
favors the telling and brilliant detail . . . privileges a high degree of visual noticing . . . maintains an unsentimental composure and knows how to withdraw, like a good valet, from superfluous commentary . . . and that the author's fingerprints on all this are, paradoxically, traceable but not visible.
Indeed, Gustave Flaubert was a genuine and highly influential master of the technique, frequently managing to convey quite difficult content with ease and precision in many a scene in classics like Madame Bovary. However, it is worth remarking that even he used a good deal of telling--as you find if you actually pick up the book. To fill in his picture, he did not just rely on the "visual noticing" of such things as the characters' facial expressions, casual remarks and the like, but time and again delved into his characters' heads and pasts, generalized and grew abstract, in relating such things as Emma Bovary's school days, and the romantic side of her that developed but was never to find fulfillment in the workaday world into which she was born and in which she had to live her life.
"This nature, positive in the midst of its enthusiasms, that had loved the church for the sake of the flowers, and music for the words of the songs, and literature for its passional stimulus, rebelled against the mysteries of faith as it grew irritated by discipline, a thing antipathetic to her constitution . . ."1
It is elegant telling, but telling all the same.

1. I cite here the Eleanor Marx-Aveling translation.

James Wood on Flaubert

Having been both impressed and disappointed by James Wood's How Fiction Works--impressed by his lucid exposition of some literary fundamentals, disappointed by his uncritical acceptance of them--it was a pleasant surprise to encounter his article in The New Republic, "How Flaubert Changed Literature Forever," from the opening line forward. In his 2008 book he begins his discussion of Flaubert's establishment of "modern realist narration" with the following sentence:
Novelists should thank Flaubert the way poets thank spring: it all begins again with him.
In his more recent article, however, he begins thusly:
It is hard not to resent Flaubert for making fictional prose stylish--for making style a problem for the first time in fiction.
From there he goes on to a lengthy consideration of how much of a cage that style of narration is, with its stress on the concrete, visual detail, and how in the resultant "obsession with the way of seeing," the "flattering of the seen over the unseen, the external over the interior," all of "the important things disappear," and those who abided by the rule ran the risk of very elegantly telling a story about--nothing at all.

Of course, much of this has been observed before--some of it by Flaubert himself (who did, at times, step out of the cage Wood describes so well). Virtually all of it was said by H.G. Wells when he thought about the problems posed by the kind of style discussed here in his "Digression on Novels." It might be added, too, that Wood's conclusion is rather less radical than Wells'. Where Wells ultimately chose the important things over the obsession with the way of seeing, chose to try and convey what went on in people's heads over the flattering of exterior detail, Wood closes by reiterating his admiration of the "mysterious" way in which Flaubert ultimately managed to transcend the limits of his technique to tell Bovary's story. Still, Wood's discussion of the matter is a worthy consideration of a problem far too often slighted in our age of television shows about nothing, movies about nothing and, yes, books about nothing.

Thursday, June 1, 2017

On the Historiography of Science Fiction

Writing my book Cyberpunk, Steampunk and Wizardry, I produced a history of science fiction most concerned with the genre's most recent decades. This was in large part because it seemed to me that, in contrast with earlier periods, these decades had not been dealt with in a comprehensive fashion.

Still, to provide a proper foundation for that discussion it seemed to me necessary to look at what came before--and going over the relevant history I quickly found that it was not quite as well-studied as I had thought. Certainly a vast number of works gave overviews of it. However, it always seemed to me that Jonathan McCalmont was quite justified in declaring at the time that
science fiction lacks the critical apparatus required to support the sweeping claims made by people who use [the historical] approach. Far from being a rigorous analysis of historical fact, the historical approach to genre writing is all too often little more than a hotbed of empty phrases, unexamined assumptions and received wisdom.
So did it go in the works I found. By and large the history was a "folk history" rather than rigorous scholarship--empty phrases, unexamined assumptions, received wisdom that, even when essentially correct (as, in hindsight, it often seems to have been), explained its claims vaguely and supported them poorly. In the process it not only left us understanding it all less fully and well than would otherwise have been the case, but inhibited further work rather than encouraging it.

Of course, there were numerous exceptions to this, but these tended to be in relatively obscure, specialized works dealing with relatively small pieces of the field. Colin Greenland's The Entropy Exhibition is excellent at treating key aspects of the New Wave, while Brian Stableford's The Sociology of Science Fiction was particularly insightful in its discussion of John Campbell's work as an editor--and more recently, Mike Ashley's outstanding The Gernsback Days was truly formidable in its study of the formative, "pre-Campbell" period, deeply rooted in close examination of the relevant material.

By contrast the larger, more general works, even at their best, tended toward the folk history approach, as with Brian Aldiss and David Wingrove's Trillion Year Spree (an old review of which you can find here). The book, again, has much to commend it. Its coverage of the field is vast, the highlights all mentioned, and certainly it is packed with interesting insights that, after many years of additional reading and consideration, still strike me as valuable. But on such topics as the pulps of the '20s and '30s, for example, the book largely settles for the received wisdom, summing it all up as "Gosh-wowery . . . Bug-Eyed Monsters . . . [and] the trashy plots that went with them" (216-217). To be fair, there is truth to this--and Aldiss and Wingrove do manage to say some interesting things about the material for all that. Yet, this is less specific and well-grounded than it might be (just as it is more dismissive than it ought to be).

It seems to me that the situation is getting better, the body of better-researched, more useful coverage increasing, but all the same, the synthesis of it all has really lagged. And that being the case one can hardly avoid the question--why has this situation persisted for so long? Certainly one factor would seem to be the history of the genre having been a fan enterprise to such a degree, for so long a time--while the scholars took little interest. (While there were precursors, it was only in the 1970s that academics began to pay very much attention to science fiction, which sounds like a long time ago but is not really so long in academic terms--the more so because science fiction is still a fairly marginal area of study next to more canonical work.)

Another would seem to be the conventional wisdom of literary scholarship itself, much more interested in some things than in others. To be blunt, scholars who unquestioningly embrace Modernism and postmodernism as defining what is "important" literature, who take technical experimentalism, epistemological apathy and obsession with identity as the sine qua non of what is worthy of study, and whose non-literary study has been of kindred schools of philosophy and psychology (Foucault, Lacan and the rest), are either disinclined or unequipped to deal very well with key concerns of science fiction, and accordingly much of the work that, from the point of view of the genre, is most important to its history. Instead they gravitate to those works that happen to fit in with the intellectual preoccupations they bring with them, without much interest in how they fit into the history of the field. (Consider as an example the level of attention, and the kind of attention, that Ursula K. Le Guin gets from more academic students of the genre.) And of course science fiction practitioners themselves have been influenced by all this, at least since J.G. Ballard's efforts to remake the genre in the image of the Modernists. (Tellingly Aldiss and Wingrove, while interested in and often insightful about science fiction as a genre of ideas, still tilt in favor of the more purely literary in their analysis--not least in their tracing science fiction's history not to Scientific Revolution/Enlightenment/Industrial Revolution interest in natural and applied science, and its implications for the increasingly studied shape of society, but to Romantic-Gothic sensationalism.)

The result was that in developing my image of the genre's pre-1980 history (to which the first four chapters of the book are devoted, because of how foundational it is to what follows), I found myself having to spend much more time than I had initially planned just figuring out for myself what the facts were before I could settle down to working out the larger picture. The folk history had enough in it to serve as a guide along the way (there were at least presumptions I could investigate and test out), but alas, it was only that--such that I had as much work to do in this supposedly well-covered territory as in the less well-charted decades that were my original concern.

Monday, May 15, 2017

Stormbreaker, by Anthony Horowitz: a YA Moonraker?

Picking up the first volume in Anthony Horowitz's Alex Rider series, I was unsurprised to see that he took many of his cues from classic Bond films with regard to structure, pacing and action. Still, I was surprised by how much of the specifically print Fleming there was here. Indeed, Horowitz appears to have used Fleming's novel Moonraker as the framework for his plot.

As in Moonraker a self-made industrialist is being hailed in the British press as a national hero for making an expensive gift to the British public. However, at the facility where he is producing that gift (located on an isolated bit of the English coast) there have been suspicious goings-on, among them the death of a national security state functionary. Accordingly, the British Secret Service sends an agent to the scene to investigate, where he is initially a guest of said industrialist. In the course of these events our hero happens to best the man in a game and win a rather large sum of money from him--revealing in the process that his host is not just a rather unpleasant person to be around, but "no gentleman." The industrialist also happens to be a man of foreign birth. And physically not quite the norm. And as though this were not enough to set off the alarm bells, there is also the behavior of an odd foreigner with a Teutonic accent he keeps round the place, which also gets visits from a submarine delivering secret materiel.

As it happens, our ungentlemanly, foreign, "odd-looking" industrialist with the suspicious German associates and secret submarine deliveries suffered in the English public school system as a boy, and as a result bears a burning hatred of the country, which has led him to align himself with a foreign power plotting against it. His gift to the nation is in fact poisoned--really a cover for the revenge he intends to wreak on it with a weapon of mass destruction, to be delivered in spectacular fashion at the ceremonial, highly publicized unveiling of that gift. That weapon will shatter Britain as a nation, while he escapes safely overseas--all of which the villain explains to the agent after capturing him, since he means to kill him in colorfully hideous fashion and so sees no prospect of the agent's stopping the plot. However, the hero gets free, and, unable to deliver a proper warning to the authorities, races to head off the attack himself in the very nick of time . . .

As models go, Horowitz could have done worse. Moonraker's domestic setting, and its plot's unfolding within a relatively limited space, while eschewing one of the famous attractions of the Bond series (international travel), make the activity of this fourteen-year-old secret agent somewhat more plausible. That the Bond girl is engaged to someone else when she meets 007, and is never tempted to stray, also makes a convenient fit with Horowitz's decision to dispense with romance entirely. Still, all this underlines the difficulties of squeezing the stuff of the James Bond adventures into a YA book.

Review: The Messiah Stone, by Martin Caidin

New York: Baen, 1986. 407 pp.

It seems that once again I am reviewing a novelist who was once a Big Name but has since slipped into obscurity--Martin Caidin. His novel Marooned, about a stranded astronaut who must be rescued (sound familiar?), became a feature film in 1969, while his 1972 book Cyborg was the basis for the television series The Six Million Dollar Man and its spin-off The Bionic Woman. Today, however, his novels seem to be out of print, his name just about never mentioned except in reference to the media spin-offs from his books.

As it happens, the tale's protagonist is a familiar enough type. Doug Stavers not only displays a more-than-human physical strength, courage and ruthlessness, but possesses every conceivable combat, investigative and mechanical skill that a commando-mercenary-spy might possibly need in the field, honed to perfection: he is a survival expert who can live off the land indefinitely in any and every environment, a fluent speaker of just about every language, a pilot who has mastered every flying trick, and all the rest. As if all that were not enough, his alertness is equivalent to clairvoyance, his foresight to precognition. And all this vast prowess has been demonstrated in secret wars beyond counting, waged in every corner of the globe, in which he killed lots and lots and lots of people and (while admittedly picking up a good many scars) lived to tell the tale.

Naturally he has picked up a good many not-quite-as-good-but-still-preposterously-capable friends along the way, on whose help Stavers can call on those occasions when even he cannot do it all himself. All of these friends also have the virtue of comprising a considerable cheering section, endlessly testifying to just how extraordinary he is--as do his equally admiring clients and enemies, and the women in all these categories (and also those women in none of them) who find all this completely irresistible.

Such Gary Stu figures (there, I said it) are the stock-in-trade of the action thriller writer--and as my roster of books, articles and blog posts ought to make clear, I have enjoyed my fair share of works in that genre. Still, Caidin took it so far in Stavers' case (think that other Caidin creation Steve Austin, times twenty), and was so verbose in doing so, that he made me repeatedly laugh out loud while I read the book.

Indeed, taking it all as parody would be defensible--the more so in that the same sensibility informs the wider narrative. The book's title, after all, refers to the tale's more than usually hokey MacGuffin, a piece of meteorite which endows its possessor with a more-than-human charisma and power over any other person who happens to be in their presence. Naturally it is much sought after by innumerable parties (the CIA, the KGB, the Catholic Church and all the other usual suspects), including one private group that enlists Stavers to track it down and deliver it to them. All this offers plenty of occasion for the superman Stavers to display not only his ridiculous ultracompetence, but a contempt for human life to which no string of epithets can do justice, and which makes for an adventure so astonishingly dark and demented even by today's standards (I dare not spoil it by saying more) that one would have thought this by itself sufficient to give the book the cult following it does not seem to enjoy.

If this intrigues you, you might be interested in knowing that Caidin published a sequel, 1990's Dark Messiah. I haven't read it, but the two reviews of the book at Goodreads, in their very different ways, seem rather plausible to me after my experience of the first Doug Stavers novel.
