Friday, May 25, 2018

Remembering-or Not Remembering-George Bernard Shaw

People still like to quote George Bernard Shaw, but as is usually the case with those who throw around words attributed to Famous People in the hopes of borrowing their prestige and authority, it's a matter of quoting things they haven't actually read, things they know at best secondhand.1 In fact, I suspect that Shaw would not be remembered as anything but an epigram-generating machine were it not for one of his plays having been turned into a popular musical (Pygmalion, turned into My Fair Lady).

It seems to me that much of this is due to the same reason that H.G. Wells is remembered principally as a science fiction writer. Shaw, like Wells, was a socially critical philosopher-artist, inclined to rationalism, leftishness, socialism. And both writers were more interested in writing about something, in "ventilating the point at issue," even at the price of explicit dialogues in which the characters explained themselves and the world, than in abiding by the canons of "good form" as propounded by the conventionally minded, who insist on "Show, don't tell" and a firewall between art and "politics" (especially those politics that discomfit the genteel; the sorts that don't discomfit them go unnoticed by them as politics at all).

This means, basically, those who worship at the altar of Henry James and the Modernists, who have dominated "highbrow" opinion for the last century and have not been very well-disposed to the Wellses and Shaws. The standing of these two writers has suffered for it.

Very, very badly. (If unfamiliar with this, I suggest you check out Mark Schorer's classic of the New Criticism, "Technique as Discovery," from whose damage Wells' reputation is still reeling.)

Still, if Wells and Shaw were much alike, there were noteworthy differences. Reading Shaw I at times get the sense of a guest at a polite salon trying to impress his hosts--and, when trying too hard, serving up irony for irony's sake. Not unlike Oscar Wilde that way. Still, if I find this irksome, both those writers did have more substantive things to say, and said them well enough that they deserve to still be read today--in Shaw's case, rather more than he is now.

1. In case you're curious about where the little epigrams come from, my impression is that most of the quotation seems to be derived from the play Man and Superman.

Thursday, May 24, 2018

Farewell, IROSF

A decade ago I was a fairly regular contributor to the Internet Review of Science Fiction. I enjoyed the site both as a science fiction fan, because it put out so much interesting stuff, and as a writer, because it was a particularly pleasant site to write for--with the two not unrelated. IROSF gave its writers broad latitude with regard to subject matter and length, offered quick (two weeks--and they really meant it) and personalized responses even when rejecting unsolicited material, and, by the standards of such markets, high pay rates. And the work it put out was read. (My article "Of Alternate Nineteenth Centuries: The Enduring Appeal of Steampunk" got me onto the blog of the Science Fiction Writers of America--I was not even a member--and into the pages of Japan's most prominent science fiction magazine, Hayakawa S-F, in translation.)

I eventually published nine pieces there (seven are gathered together in my essay collection After the New Wave, a substantial part of that book, while one more is appended to my history of the field, Cyberpunk, Steampunk and Wizardry), and had it remained a going concern I would have been pleased to go on writing for them. Alas, IROSF did not last, the site posting its final edition in February 2010. The site did remain up, but went offline this month. However, as with everything else on the Internet, you can still check out its full contents via the Internet Archive's Wayback Machine.

Looking Back: "The Golden Age of SF Television"

Back when I wrote "The Golden Age of Science Fiction Television: Looking Back at SFTV During the Long 1990s" for the Internet Review of Science Fiction, I didn't quite follow every science fiction and fantasy show on TV all the way through from beginning to end, but I did see a very big chunk of it, especially the more widely-seen and noted material. That was my basis for writing that article, and more generally, my attempts at broad assessments of the genre in that medium.

I certainly haven't attempted to follow science fiction television in such a comprehensive way since. This has partly been a matter of inclination. After several years of heavy reviewing and writing, I was a bit burned out, and when I got back to writing about science fiction, I simply didn't approach it the same way. I was more content to be interested in some things and not pay attention to others, with the second list constantly growing. (I've lost all patience with the Frankenstein complex cliche Hollywood relentlessly serves up, which means no Westworld--a remake of a project by Michael Crichton, groan--no Humans, no lots of things for me.)

But it has also been a matter of sheer mass. Today there are more channels in the cable package than ever, for which more original content is being produced than ever before, and it isn't just a matter of TV channels now. There are all the online services--Netflix, Amazon, Hulu and I'm sure three billion new ones between the time I started writing this post, and the time you're finally reading this.

Even just covering Marvel's superhero stuff, for instance, means signing up with multiple services over and above a premium cable package--because in addition to all their TV stuff, on ABC (Agents of SHIELD), FOX (The Gifted), FX (Legion) and Freeform (Cloak & Dagger), and the web shows like WHIH Newsfront, there is also Runaways on Hulu, and six different series on Netflix alone (Daredevil, Jessica Jones, Luke Cage, Iron Fist, The Defenders, The Punisher). And all this is without considering children's television (five different animated shows on Disney XD).

It doesn't help that TV is so much more arc-oriented than it used to be, making casual viewing or late entry less worthwhile than before. The tone doesn't help either. While a large amount of '90s television was hokey, it was easy viewing--easy to just leave on because you were busy with something else, or not really in the mood to get into anything, or just unable to find anything better--in contrast with today's TV writers and directors and producers, ever desperate to show off how intense and "edgy" they can be, which is, more often than not, a matter of showing off how pseudomature they can be in that obnoxious indie movie maker way. (Remember Deadpool? Groan again.) It also doesn't help that the sheer crowding, the abundance of short seasons, and the too-risque-for-syndication-even-with-heavy-editing content make many of them unlikely candidates for the kind of rerun arrangement that lets a casual viewer see a whole show from beginning to end without going out of their way to try (the way I did with The Pretender, Angel, Charmed and Smallville on TNT).

In fact, I gave up on Game of Thrones in the fourth season, and haven't gone back. I don't think I will, either.

But I will probably eventually get around to picking up The Winds of Winter.

If it ever comes out.

Which won't be this year.

A fact which has contributed to his fans being so starved for a continuation that the mere mention that Martin is "working on it," the slightest hint that perhaps the work has already been completed, is trumpeted as "good news" by the Express.

Retiring Bond?

The place of the Bond novels in the history of spy fiction lies in their being the vehicle for the transmission of the old-style clubland adventure tradition to the post-war world.

This was partly a matter of Fleming's adroitness in adapting that tradition to new realities. Britain was no longer the dominant power it had been in the day of Duckworth Drew, or even Bulldog Drummond, but Fleming could still see Britain as a world power by way of its partnership with a U.S. bound to it not just by a common heritage or values, but by the Cold War. In service to that Britain Bond was no longer a gentleman of leisure with too much time on his hands, but inserted into the bureaucratic reality of a modern security state, while, reflecting a more egalitarian era, the element of luxury was a "semi-aristocratness" smuggled in by the back door (in Kingsley Amis' phrasing). And conservative though he is, Bond cannot rest content in that earlier international standing, or be entirely at ease with the post-war order he is defending; nor is he wholly untouched by changed attitudes toward nationalism, empire and the rest; his conservatism is reactionary on the one hand, and carries with it a certain freight of ambivalence and irony on the other (such that while one can see Bulldog Drummond here, one can see Maugham, Ambler and Greene too).

The Bond films updated the concept yet again, while broadening its appeal, playing down the politics, embracing the Playboy era, loading up on gadgets and gimmicks. In the process they did much to define, or redefine, the spy as an updated clubland hero for a new, and global, generation of filmgoers.

Even more consequentially, they invented the action film (its particular structure and pacing, its use of set pieces, their essential range of types and scales, the techniques for editing and photographing them), and the blockbuster as we know it (not only making franchises out of such successes, but using massive publicity and wide initial releases to front-load the grosses, while raking in additional dollars through shameless merchandising). Hollywood did not really master, let alone improve on, the practice until the mid-to-late '70s, with Barry Diller-Don Simpson-Michael Eisner high concept, with the TV ad blitz that preceded the release of Jaws, with Spielberg and Lucas' stream of adrenaline-oriented hits (Jaws, Star Wars, Indiana Jones).

But at this point, not only has the novels' update of clubland become a historical curiosity, but so has the update of the update by the films. Additionally, if the Bond films, even after ceasing to be really innovative (this ended with the '60s), remained relatively unique (until Star Wars), this was decreasingly the case. Instead the Bond movies, with more or less competence, traded on brand name and past good will, while sporadically reinventing themselves after the fashions set by others. There has been enough money in the game to keep it going up until now. Yet, that is a far cry from there being a point to it all for anyone but those who collect the profits.

What do you think? Do you look forward to new Fleming-era Bond novels and the continuation of the reboot? Or, defying the poptimist fashion, do you agree with Stuart Heritage that the best next James Bond would be no James Bond at all?

You are cordially invited to get in your two cents here.

Los Angeles in the Action Film

Looking back at the action films of the '80s, and even the '90s, one is struck by how much they used Los Angeles as a setting.

Classics like Beverly Hills Cop (and its two sequels), and The Terminator (and its first two sequels), and Commando, and Lethal Weapon (and all three of its sequels), and the original Die Hard--they were all set in L.A. So were Blue Thunder, and Speed. And along with them a slew of less celebrated films, like To Live and Die in L.A., and Cobra, and Tango & Cash, and Hard to Kill, and Predator 2, and Point Break, and The Last Boy Scout, and Last Action Hero (appropriately, given its being a compendium of action movie cliche), and Demolition Man, and Volcano, and Face/Off, and Rush Hour, and Gone in 60 Seconds, and, and, and . . .

It seems fair to say that if an action movie was not pointedly set abroad (like the second and third Rambo films), or in a fantasy world or outer space (like Conan the Barbarian or Aliens); and especially when our heroes, as was so often the case, were big-city cops fighting criminals; L.A. was not just the most popular setting, but the default setting, such that the car chase through the Los Angeles River became a cliche of the genre.1

One doesn't see L.A. in such movies nearly so much now.

Of course, an obvious explanation is that L.A. was where the movie industry was, and so a logical, convenient place for writers to write about, and for crews to shoot in. That industry has since sent its production away--to Vancouver, to Atlanta (where the local Pinewood Studios franchise became the locus of the Marvel movie machine from Ant-Man forward), and any other locality where the government is willing to foot part of the bill for the megabudget blockbusters.

But I don't think that's the whole story, especially given that the film industry often enough used other locations prior to that (not least, San Francisco and New York for those founding cop-action movies, Bullitt, The French Connection, Dirty Harry and 48 Hrs.), and has never stopped playing the game of shooting in one place while pretending that it's in another. (That a lot of movies are shot in Vancouver and Atlanta doesn't mean that we have a lot of movies actually set in Vancouver and Atlanta. Indeed, some of the $400 million reportedly spent on the last Avengers film was devoted to making Atlanta look like New York.)

And as it happens, there was plenty of reason not just to shoot in L.A., but to say they were doing so. After all, by the 1980s Los Angeles was the country's second-largest city (just overtaking Chicago then), and, fast-growing within its fast-growing region--the Sun Belt, the West Coast, a Pacific coast on the verge of what everyone said would be the Pacific century--it looked more dynamic, more like the future, than the older metropolises of the Northeast and Midwest (New York, Chicago, Philadelphia, Detroit, Boston). Unsurprisingly L.A. seemed to incarnate, more than other American cities, the dreams and fantasies of the age--and at the same time, the fear that the dream was souring. Beverly Hills Cop played up the luxury and glamour and sunshine (a striking contrast with Axel Foley's hometown of Detroit)--while in Predator 2 urban decay and gang crime turned it into a blighted war zone as appealing to the Yautja as a place for sport hunting as the Central American jungle of the first film.

Something of this remains with us. Just as it was natural for Baywatch to be set in L.A., it wasn't just a dubious faithfulness to or nostalgia for the original that made the inevitable feature film version also an L.A. production.

But at the same time, the action movies changed. The paramilitary action genre that made the LAPD officer chasing a drug lord through the L.A. River seem so obvious a choice of theme got tired, its associated neuroses grew less compelling, while American film became more thoroughly unmoored from and less interested in addressing any kind of American reality in general (even in the silly, backward way action movies tended to do). At the same time action movies scaled up beyond the natural limits of what cop-and-criminal car chases and gunplay could accommodate (already the helicopters and international villains of the Lethal Weapon movies, and the James Bond gadgetry and massive vehicles of Tango & Cash, reflected a certain imaginative strain), while the disaster movies to which the L.A.-as-future-disaster-area theme played also became less salient, since the escalating mayhem meant that just about every action movie was a disaster movie.

Spy-themed action endured a bit better, but that was usually set abroad, and to the extent that cops and street hoods were still on the big screen, they increasingly acted like superspies. (Thus did Rush Hour go foreign for its sequels, to Hong Kong, to Paris, while the Fast and Furious films escalated from car theft to superspy action complete with missile-firing submarines, and even the Bad Boys, who had never been in L.A. at all, had the big finish of their second film in Cuba.) Superhero-themed movies exploded, but superheroes are New York, not L.A. (explicitly so in the Marvel films, implicitly so in the D.C. Comics movies), while the proportion of films not set on anything even pretending to be contemporary Earth shot way up.

And so while Hollywood remains in L.A., one would be less likely to suspect it looking at today's movies, and certainly its action movies. In fact (and this is a decidedly unscientific impression), looking at the films of recent years L.A. never seems so conspicuous as it does in crummy independent movies about people trying to make movies--almost as if actually shooting your film in Los Angeles just shows that you haven't made the big time yet.

1. The only exception among the really top-rank, classic '80s action movies of this type would seem to be Robocop, which was set in Detroit, an even more pointed choice for a story of urban decay.

Sociology and British Sitcoms: Remembering Are You Being Served?

I first watched that PBS staple, the BBC sitcom Jeremy Lloyd and David Croft's Are You Being Served? (1972-1985), what feels like a lifetime or two ago.

If you've never seen it, it is about the salespersons in the clothing department of an old-fashioned British department store.

Reading the Wikipedia article about it recently I noted that much of the commentary on the show's content (all too predictably) concentrated on the sexual humor. However, as the article also acknowledges,
The main humorous base of the series was a merciless parody of the British class system. This permeated almost every interaction and was especially evident in the conversations between the maintenance men and the ostensibly higher-class store personnel.
And indeed, what stands out most about the show in my recollection is the sociological insight it showed in this parody--in such things as the workers' conception of their jobs, their perception of their standing within their firm and within society more generally, and the ways in which they related to people of other classes above and below theirs within the British social hierarchy (like "the maintenance men").

Years later, reading works like Daniel Bell's The Coming of Post-Industrial Society or C. Wright Mills' White Collar, I found myself thinking that I'd already learned much of what they had to teach--in reruns of that TV show. The pretensions and prejudices, the illusions and delusions, the fantasies and realities that Bell, Mills and others wrote about (the boundaries of that vexed term "middle class," the differing attitudes toward organized labor, etc.) were all amply dramatized there. Simply watching Mr. Mash bicker with Captain (Corporal?) Peacock was the equivalent of a master's class in these matters.

How many situation comedies can you say that about today?

Wednesday, May 23, 2018

Steven Poole Reviews the New Bond Novel

The Guardian's Steven Poole has just reviewed Anthony Horowitz's Forever and a Day--the next Bond novel, which is notable for two features. One is that it is the first time since Raymond Benson that an author has had the chance to pen a second of the Bond continuation books. The other is that it represents yet another wrinkle in that series by offering a prequel to Casino Royale.

Poole remarks that Horowitz's novel serves up "disappointing" bits of exposition (like the scene where Bond acquires his preference for dry martinis shaken, not stirred, and even "the name is Bond, James Bond"), and that the "prose throughout is more verbose and cliched than the brutal efficiencies of Fleming." However, he also praises the choice of villain, remarks that Horowitz is "good at the action scenes," and declares the book on the whole "still an enjoyably compact thriller, with an absolutely killer last line . . . [with] some pleasingly echt Bond moments."

It seems a rather plausible assessment as far as it goes, given my impressions of Horowitz's prior effort, Trigger Mortis (which you can read about here)--though Poole, perhaps predictably, skirts the issue of whether there is a point to his having made it a prequel (could the adventure have been just as satisfying as another '50s-era entry in Bond's adventures?), and whether there is any point to writing more novels in a series where so much is modified (Poole acknowledges, among other things, the gender relations, again predictably, and Horowitz's apologies for the extent to which he carried forward Fleming's attitudes, even while, as Poole's observation suggests, exaggerating that extent). The book hits the market next week in the UK, but arrives in the States in November (according to Amazon, at least).

I expect to get in my two cents then.

Monday, May 21, 2018

John le Carré and the Bestseller List

Recently going through the New York Times and Publishers Weekly bestseller lists with an eye to the performance of spy fiction over recent decades for a paper, I must admit that what the hard data mostly did was confirm my more casual impressions—that the genre had done very well commercially in the '60s, the '70s and the '80s, and then became much less conspicuous in the '90s. (Predictable given not just the inevitability of changes in commercial fashion, but the damper the end of the Cold War put on the genre—but notable all the same.)

Still, it was something of a surprise just how well one of the bigger names sold—namely, John le Carré. As the tables appended to my paper show, he was commercially on a level with such titans of the "airport novel" as Ian Fleming, Frederick Forsyth, Robert Ludlum and Tom Clancy. Going by the PW lists (the information from the year-end editions of which is conveniently gathered together by Wikipedia), from The Spy Who Came in From the Cold (the top-selling novel of 1965 in the United States) to The Russia House, just about all his spy novels were among their year's top ten sellers.

This adds up to a quarter of a century at the very top, an extraordinary run, especially back in those days when the uppermost ranks of the publishing world saw rather more flux than they do now.

That his sales were quite so strong is all the more remarkable given that, reflecting his comparatively greater acclaim by critics, he was anything but a producer of the kind of crowd-pleasers that made those other authors such big names. Not only were his stories slow-paced and lacking in action, decidedly unglamorous and preoccupied with moral ambiguity, but they were so obliquely told that I suspect anyone who picks up anything by him from The Looking Glass War on and does not feel bewildered by the goings-on can count themselves a highly accomplished reader.

Indeed, I find myself wondering—is this a case of audiences having become less tolerant of such writing, or is it the case that people were buying his novels and just pretending to understand them, or even just pretending to read them, because it seemed fashionable to do so? All those copies of the volumes detailing the adventures of Smiley and company, merely purchased to make the buyer look sophisticated by sitting on their coffee table or their bookshelf?

Any thoughts?

Review: Seeing is Believing: How Hollywood Taught Us to Stop Worrying and Love the '50s, by Peter Biskind

Today it seems that Peter Biskind is better known as a journalist than an academic, on the strength of his history of the "New Hollywood" of the '70s, Easy Riders, Raging Bulls, and his more recent history of the post-'70s independent film movement, Down and Dirty Pictures. However, Biskind's study of Hollywood film in the '50s, Seeing is Believing, is not a chronicle of Tinseltown in that period, but a more conventional piece of film criticism, close-reading a selection of the period's cinematic classics (from The Blackboard Jungle to 12 Angry Men, from My Darling Clementine to High Noon, from The Day the Earth Stood Still to On the Waterfront), and a sprinkling of less celebrated films (like Hell and High Water, or Underworld U.S.A.).

Biskind's examination of film in this decade (the '50s here are defined as extending from the end of World War II to the flowering of the 1960s as a recognizably different era) is deeply informed by his reading of the ideological balance of that period. Key to understanding his book, it is also one of the most incisive discussions of that topic I have ever run across, which makes it seem doubly worth discussing here at some length.

Central to Biskind's analysis is what he reads as the mainstream, centrist position, which he terms "corporate-liberalism." This was formed by the recognition of the increasingly large scale and complex character of modern life (the liberal recognizing the importance of the national and international above the local, the age of Big Organizations and of atom age, jet age, space age technology); by the dark view of human nature suggested by psychologists like Sigmund Freud, and the theory of totalitarianism propounded by figures like Hannah Arendt; and by the prestige that New Deal reformism and psychoanalysis/psychotherapy enjoyed then as never before or since. Its response to this mix of influences, experiences and challenges was a view of the country as consisting of an array of diverse interests, all equally valid, and which it thought just had to get along with one another on terms of (to quote Daniel Bell) "pragmatic give-and-take rather than . . . wars to the death." It also held that the problem of doing so had been substantially solved already by the advent of the Affluent Society.

However, making the arrangement work required not amateurs (identifiable with the ignorant, panicky, demagogue-following mob) but technocratic professionals able to understand the complexities of the situation, and devise pragmatic solutions, implementing which required consensus among the group. Consequently, from this "pluralist" view many of the changes so often discussed at the time—the replacement of the rugged individualist by the "organization man," the "inner-directed" by the "outer-directed" personality, the self-interested, combative "Protestant Ethic" William Whyte described by the "Social Ethic," the classic tough guy by the Sensitive Man—were positive changes in keeping with the new reality.

In line with its dark view of human beings and suspicion of the prospects of radical change, the valorization of the status quo as having already largely solved the "social question," and the stress on everyone getting along (which made it at best uneasy with even discussing social class, class conflict, social power, social alternatives; with even fights over principle inclining the participants to "war to the death" rather than "pragmatic give and take"), the corporate-liberal viewed "ideology" as such with suspicion.1 Of course, one might ask how they could take this position when corporate-liberalism, clearly, was very obviously an ideology. The surprisingly simple answer is that they viewed their own position as not an ideology at all but as simply "facing the facts" in a value-free, pragmatic, "realistic" fashion, in contrast with the value-minded, principled "extremist." And corporate-liberalism's approach to building and maintaining that highly valued consensus reflected this "anti-ideological," technocratic outlook.

In line with its affinity for psychology (and its rejection of more radical views) it tended toward the treatment of dysfunction, and even simply disaffection with the status quo, as something akin to mental illness, requiring "therapy" (or, in Biskind's blunter language, manipulation). This entailed bringing others around to its understanding of the world by subtly inculcating its outlook in them, such that it would seem to them their own outlook (an "internalization of social control"), a practice further facilitated through a mobilization of peer pressure. They expected that the delinquent and even the "extremist" could be brought around in this manner, much more often than not. Where the real incorrigible had to be confronted, however, the corporate-liberal recognized that there might be no alternative to that use of force with which they were uncomfortable, tended to view as damaging to the user of force as well as its victim, and regarded as necessarily tragic, but went along with anyway on the presumption of TINA (There Is No Alternative)—reflected in its propensity for militant anti-Communism at home and abroad.

The other lines of thinking can be defined in relation to this (rather conservative) liberalism, with which they tended to compromise to one degree or another.2 Those to the right of center were militantly anti-Communist not because Communism was ideological or "extremist," but because it was contrary to their own ideals. They favored the rugged (and "Average") individual over the expert, over the Organization and its imperative of consensus, and trusted to laissez-faire-style private interest as not only the essence of freedom, but the surest way of maximizing the benefit of all. This did not mean their altogether dispensing with notions of community, but they preferred the small scale to the large, the small town to the big city, the local to the national (let alone the international). The dysfunctional and the disaffected were not seen as sick and in need of therapy, but as simply bad, and conservatives had far fewer qualms about using force against them. Indeed, the liberal's hesitation to use force was in itself suspect to them, their readiness to try to persuade or negotiate rather than fight faintly traitorous.

Nonetheless, there were profound differences not just with the liberal center, but within the right as well—the more moderate and "sophisticated" conservatives ready to align themselves with the corporate-liberals in a pinch, while those to the right of them took the harder line. (Think of the college-educated Fortune 500 executive graciously heeding the advice of an academic consultant, cordially negotiating with a union boss and accepting the New Deal as a necessary evil, as compared with a George Babbitt-style small-town small businessman for whom such a thing would be anathema.) Biskind terms the former "conservative," the latter "right-wing."3

Biskind has less occasion to discuss the left, simply because one could not make an overtly left-wing film in the Hollywood of not just the Hays Code, but McCarthy, the House Un-American Activities Committee, and the blacklist. Still, his clearing up the confusion about just what, exactly, "liberal" meant does much to clarify this as well. The embrace of ideology rather than its denial, the preference for Marx over Freud, the position that there are such things as class and class conflict and power elites, the search for social alternatives at home and reserve or opposition toward anti-Communism abroad, the suspicion of Organization Man and Organization and the therapeutic society on these grounds rather than those of the right, and all with which these things are associated (the ideas of, for example, a Mills), clearly situate a filmmaker or a work on the left—and peering through the cover lent by science fiction trappings and historical periods, films like The Day the Earth Stood Still and High Noon give him occasion to analyze a leftist work.

Impressively, Biskind manages to convey this sophisticated analysis in a highly accessible manner, succinct and happily jargon-free. Of course, Biskind's work is, ultimately, a piece of film criticism rather than social history or social science in the narrower sense of these terms, and so his doing even an excellent job with this subject matter would mean only so much within a book that is not only about film, but overwhelmingly devoted to close reading of its collection of films—not only in its conception, but its use of the available space. At least where his selection of films is concerned—by no means small or unrepresentative—his discussion of '50s politics proves an excellent lens for understanding these movies. As he shows, while the hard-driving individualist, the tough guy, remained the darling of the radical right—Gary Cooper in The Fountainhead or The Court-Martial of Billy Mitchell—elsewhere they were giving way to the more social, sensitive man better-adapted to the world of organizations than the old tough guys (neurotics or monsters or just out of date), like Montgomery Clift in Red River, heir to a John Wayne who built up his empire under old rules that no longer applied. Likewise Wayne's hard-driving commander in the conservative Flying Leathernecks had a corporate-liberal counterpart in Gregory Peck in Twelve O'Clock High, while that actor later went on to become an icon of the change as the executive who chooses family over career in The Man in the Gray Flannel Suit.

On occasion Biskind may seem to overreach. For example, I am not entirely sure that 12 Angry Men is more interested in consensus than it is in justice. Still, even here I have found his argument hard to dismiss, so that a long time after first reading it I still find myself thinking about it—not only as an exploration of these films, but of the era that produced them. As Biskind makes very clear, that era did not last, could not have lasted, given the contradictions of its characteristic outlook, which by the 1960s could no longer be ignored or papered over the way they had been before. Both the right and the left were to make themselves more strongly felt in the years that followed—and the center was no longer able to hold under that pressure, with film quickly reflecting this, not least in Anthony Perkins' career. One of the archetypal Sensitive Man leads of the '50s, Perkins was shortly presented as really a Psycho behind the Nice Guy facade, in a role that quickly blotted out his earlier screen image—while Hollywood followed up its celebration of Billy Mitchell with the satire of Dr. Strangelove. All the same, there has been continuity as well as rupture, enough so that Biskind's analysis seems helpful not just in understanding that formative, earlier period, but the present as well—more so than the oceans of drivel we are apt to get about pop culture, politics and the link between the two today.

1. Where C. Wright Mills wrote of "the power elite," the pluralists denied the existence of any such thing, democracy and the market diffusing power so much that the concept had no salience.
2. In its dark view of human nature, qualified confidence in reason, and aversion to social change, it seems philosophically more conservative than liberal, and rather conservative in many of its objectives as well. Indeed, it can be taken for a conservatism that, in line with the conservative intellectual tradition, was not about freezing change for all time, but making concessions to unavoidable change as life went on its way (one reason why it managed to get along with the modern big business conservatism described here, in contrast with the traditionalist kind).
3. Others have used different terminologies. In Mills' New Men of Power, the first category was dubbed "sophisticated conservative," the second "practical conservative."

Review: The Sociological Imagination: 40th Anniversary Edition, by C. Wright Mills

New York: Oxford University Press, 2000, pp. 256.

Given how long ago I became acquainted with Mills' classics, The Power Elite and White Collar, I was slow to come to his book The Sociological Imagination. Less often mentioned, it simply seemed inessential. After having actually read it, though, I think I was wrong about that. Of course, his other books are quite self-contained, perfectly comprehensible without reference to his thought on how sociologists can, do or should work. Still, the book is sufficiently rich to be worth a read for its own sake (while, I think, my appreciation of this book was strengthened by my having seen so much of his actual sociological work).

"The sociological imagination" to which the book's title refers entails the ability to relate the personal life to the collective life; the individual biography to human history. And it is underlain by the recognition that not only is there such a thing as society, but that without considering it we cannot understand the individual lives of the people in it; that we cannot usefully think about, let alone resolve, our problems without the benefit of such insight, recognizing in personal "troubles" the manifestation of bigger societal "issues." (That one does not have a job is a "trouble"; but it has to be understood in terms of the unemployment that is an "issue.") Indeed, this bestowed on social studies (he preferred the term to social science because of the latter's physics envy-evoking baggage) a great responsibility in relation to society—a key role as judges of the power-holding elite, and educators of any would-be democratic public, so that even if social scientists could not save the world, their trying to do so was an honorable and necessary endeavor, the more so given the confusion and dangers of the times as he saw them.

However, while Mills does explain this idea at length, most of the book is actually devoted to a critique of what sociologists were actually doing in his time—or as was often the case, not doing—and why, with several chapters devoted to what he saw as the problems in the way of their fulfilling their role. One of them, certainly, was the obsession with specialization—those jealously guarded disciplinary boundaries, obstacles that any researcher honestly trying to answer a meaningful question had to overstep, and which were, at least as problematically, the seed of a stifling bureaucratization and politicization of research work, as ridiculous as it was unfortunate.

Mills was also a sharp critic of the prominence of methodological controversy. In particular he criticized the extreme abstraction of what he called "Grand Theory," epitomized by the unreadable "concept"-mongering and term-coining of Talcott Parsons; and the theoretically blinkered "abstracted empiricism," consisting of micro-studies without much sense of the relation or relevance of their questions and findings to the larger social scene, which had come to be so characteristic of corporate-government administrative research. Their other failings aside, the one was preoccupied with the question of methodology for its own sake, disconnected from any concrete work; while the other let a received methodology determine the problems to be studied (typically, not the really worthwhile ones), when in his view it ought to have been the other way around, methodological progress deriving from our grappling with the problems which leave us uncertain about how to deal with them. (Indeed, as Mills noted, Parsons abandoned his elaborate concepts when he actually endeavored to study a genuine sociological issue.)

And as might be guessed from a reading of Mills' other work, he was a critic not just of such academy-specific tendencies as overspecialization and the obsession with methodology, but of the larger condition of society, and especially the prevailing, mainstream view—what he termed "practical liberalism" or "pluralism"—which would seem to have had connections with those academic trends he decried. Taking the facts of society for granted (especially as seen from a small-town, rural, Midwestern perspective), afraid to draw bold conclusions about the causes of social phenomena and hiding this behind an attribution of everything to innumerable minute causes, easily overwhelmed by the big questions, it inclined to a "psychologism" fixed on "the make-up of individuals"—and "rests upon an explicit metaphysical denial of the reality of social structure." The social world was so much a given as not to be considered at all; the troubled individual simply "maladjusted" and in need of being brought into "harmonious balance" with it.

If there was a positive recommendation to be derived from this line of thought, it was (as conveniently summed up in the appended essay "On Intellectual Craftsmanship") a call for "unpretentious intellectual craftsmen," with "every man . . . his own methodologist . . . his own theorist," working on significant problems without inhibition by specialization, conscious of human history and people as "historical and social actors" (as human beings in a particular time and social environment) and communicating it all clearly and simply, over "rigidity in procedure and fetishism of method and technique," the bureaucratization of research and unintelligible writing. Moreover, Mills had some optimism that the course was onward and upward. It seemed to him that the sociological imagination was becoming common currency, and that the social sciences were drawing together—with economics, for instance, becoming "political economy" again.

I cannot claim sufficient background in the state of the intellectual art in '50s sociology to confidently judge how prevalent "grand theory" and "abstracted empiricism" really were in his day. I must admit that all my knowledge of Talcott Parsons is secondhand (though that is also because those contacts with his ideas never left me with the impression that it was necessary to read his work the way it seemed necessary to read Mills, for example). Still, Mills' appraisal of the weaknesses of those approaches is compelling, and his argument for what sociology—and social science—ought to look like strikes me as axiomatic.

Still, it is also clear that he was overoptimistic about the trend in his field. As Todd Gitlin acknowledges in his afterword to this edition of the book, sociology "has slipped deeper into the troughs Mills described." To the extent that there is very much in the way of sociological imagination evident today, it is of the cheapened pop sociology kind—"sociological imagination lite, a fast-food version of nutriment, a sprinkling of holy water on the commercial trend of the moment, and a trivialization of insight" as Gitlin puts it, like the stultifyingly silly zeitgeist film criticism passing for profundity on so many review pages. Unintelligible concept-mongering, micro-studies—and of course, physics-envy—seem ever more widespread in an age of ultra-specialization. Economics has been a particularly troubling scene, with reputable and prominent economic thinkers time and time again over the years (you can see Lester Thurow do it here in 1983, Robert Heilbroner do it here in 1995) calling out that profession to no effect whatsoever, with even the crisis of 2008 having no lasting consequences—to the shock of anyone who thinks of mainstream economics as an actual would-be science rather than theology and apologetic.

I would go still further than Gitlin where sociology is concerned. While we have an abundance of cheap pop sociology, it can seem that the sociological imagination is as scarce as ever it was, and this unsurprising given the turn of economic life, politics, culture—our era of "no such thing as society" neoliberalism, "personal responsibility," "self-help," anti-intellectualism that may make what came before look like nothing, and consequent disdain for what social science has to teach. None of that has been for the better in a time that seems as much in need of the "sociological imagination" as it has ever been.

Review: Principles of Economics: An Introductory Volume, 8th edition, by Alfred Marshall

I recently noticed on Amazon that the negative reviews of Alfred Marshall's classic Principles of Economics tended to focus on the writing, charging Marshall with being an inefficient, overly wordy writer, and the book a tough read in comparison with other classic treatises like Adam Smith's The Wealth of Nations, or even John Stuart Mill's The Principles of Political Economy (themselves not exactly light reads, with Mill's book like Marshall's a textbook, with all that implies for someone trying to read it straight through). I must admit that I found the book something of a slog myself--enough so that I gave some thought as to why this was the case.

As it happens, the essential concepts Marshall covers--cost and utility, marginal and otherwise; elasticity and substitution; equilibrium; diminishing and increasing returns--and their application to the production, consumption and distribution of wealth and income by way of supply and demand--are in themselves simple, intuitive and apt to be familiar to anyone who would pick up Marshall today. And Marshall can write with great lucidity and conciseness, not only when recounting history (as he does with great polish in the appendices), but when dealing with more intricately theoretical questions (as in his admirable summing up of scientific method in the early part of Book I, Chapter III). Additionally the gloss and summary chapters offer ample assistance for anyone who loses the thread of the discussion along the way.
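For readers who know these ideas only by name, two of them can be stated compactly in the standard modern textbook notation (which differs from Marshall's own presentation, though the elasticity concept is his): the price elasticity of demand, and the consumer-equilibrium condition that marginal utility per unit of money be equalized across goods.

```latex
% Price elasticity of demand: the proportional change in quantity
% demanded per proportional change in price.
\varepsilon = -\frac{dq/q}{dp/p} = -\frac{p}{q}\,\frac{dq}{dp}
% Demand is "elastic" when \varepsilon > 1, "inelastic" when \varepsilon < 1.

% Consumer equilibrium: expenditure is allocated so that the marginal
% utility of the last unit of money spent is the same for every good.
\frac{MU_1}{p_1} = \frac{MU_2}{p_2} = \cdots = \frac{MU_n}{p_n}
```

These are exactly the sort of "simple, intuitive" relations meant above—and much of Marshall's bulk consists of working out their implications case by case.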

However, the material being covered--an economic theory based on utility--is still rather more abstract and less "common sense" or empirical than Smith or Mill's earlier work. Its presentation is also more difficult than Stanley Jevons' earlier, founding exposition of marginal utility (he called it "final utility") in The Theory of Political Economy, a book likewise heavily reliant on long verbal descriptions of microeconomic models accompanied by a great abundance of algebraic equations and graphical demonstrations of the principles and examples discussed. The reason is that where Jevons merely presented the principal line of thought, and with relative brevity, Marshall devotes many hundreds of pages to exhaustively working out the implications, in the process creating (in Marshall's own words) not a "few long chains of reasoning," but a mass of "many short chains and single connecting links" that sacrifice clarity for completeness as they bury the reader in hypothetical yet gory details. (In fact, in Book V, Marshall himself tells the majority of his readers to skip over the section's second half--altogether some 77 pages, equal to an eighth of the main text!)

Perhaps especially given the theory's "elegance," this exhaustive working out of the implications may be less essential, less rewarding, than with less compact, less tidy theories. And of course, the superabundance of mathematical content accompanying the exhaustive verbal explanation merely conveys the same information as the verbal descriptions in a different manner. Bluntly put--a reader who gets the verbal descriptions will find that working through the equations, curves and schedules doesn't really add anything to what they already knew. This is of course the norm for economics, but still, the meticulous instinctively work through them anyway, and here are bound to do so frequently and at length.

Accordingly, after plodding through these hundreds of very densely written pages, a reader already well-acquainted with these ideas (so much more widely diffused now than in Marshall's time) is apt to feel that they have not got much return on their effort. Knowing this does not make what may feel like an obligatory task for a student of economic theory or history any easier, but it is better to acknowledge why a book is "difficult" than merely to complain of its "being hard," and if one can make allowance for all that, there is a good deal to admire here. Dated and problematic as the "Marshallian" view of economics is in many ways for readers from across the ideological spectrum (he was very much a Victorian strongly inclined to orthodoxy), Marshall, like most thinkers whose names get thrown around this way, was a rather more rigorous, nuanced thinker than his adherents, conscious of at least some of the limits of his analysis. His legacy doubtless includes some harmful misperceptions, the term he chose to use in the very title of the book--Principles of Economics (in place of the earlier term "Political Economy")--reflecting the tendency to treat economics as if it were physics, making the economic process seem "depoliticized" and above social constructs, rather than an arrangement grounded "in the contingent historical and political requirements of the prevailing social order."

Still, Marshall recognized the unavoidable imprecision of a "science of human nature" relative to classical mechanics (he compared it with the tides, not gravitation); the reality that economics derives its interest from the practical questions it addresses, and must be based on the hard facts of real life (no economic "Scholasticism" or anti-empiricism for him); the fact that people live in society, which is more than the sum of individuals, and that collective action is a thing that has to be analyzed as more than individual action writ large; that the role of time must be considered, and not simply the maximization of values but the cultivation of productive power must be taken into account in fostering national well-being; that in actual life labor is not simply another commodity, however much it may suit some to treat it that way, and that even those most narrowly committed to growth along orthodox lines cannot avoid facing the Social Question--he acknowledges all these early on, and endeavors to adhere to them subsequently. Of course, the conclusions to which these realizations lead will not satisfy those whose economic views tend in other directions, but it does show up any claim that a doctrinaire model-monger adequately lives up to Marshall's precedent, or the best in his work.

Friday, November 24, 2017

David Walsh on the Oscar Contenders

Earlier this month I mentioned David Walsh's review of Django Unchained, and his remarks on Zero Dark Thirty. Interestingly, just two days before the Academy Awards ceremony, he has produced a follow-up to these reviews in which he extends his analysis of the weaknesses of those films, and also of why they have attracted such enthusiasm among the "pseudo-left" (in comparison with, for instance, Steven Spielberg's Lincoln), noting that this
well-heeled social layer, conditioned by decades of academic anti-Marxism, identity politics and self-absorption, rejects the notion of progress, the appeal of reason, the ability to learn anything from history, the impact of ideas on the population, mass mobilizations and centralized force. It responds strongly to irrationality, mythologizing, the "carnivalesque," petty bourgeois individualism, racialism, gender politics, vulgarity and social backwardness.
This is as succinct and forceful a description of the politics of postmodernism, and the way in which it manifests itself in the taste for a certain kind of "dark and gritty" storytelling, as I have seen in some time.

You can read Walsh's full remarks here.

Also of interest to those interested in this line of cultural criticism: David Walsh's review of the recent film Not Fade Away, in which he debunks the romanticizing of the 1960s as a "golden age in which everything seemed possible and revolution was in the air."

Monday, September 18, 2017

Reassessing Kurzweil's 2009 Predictions

Raymond Kurzweil's 1999 book The Age of Spiritual Machines remains a touchstone for futurologists and their critics not only because of Kurzweil's continuing presence within the information technology and futurology worlds, but because the book also offered multiple, lengthy, even comprehensive lists of forecasts. Many of his forecasts were about subjects far outside his realm of expertise (macroeconomics, geopolitics, military science, etc.), and unsurprisingly were commonplaces, but a good many of them were precise technological forecasts lending themselves to testing against the day's technological state of the art--and thereby checking the progress of the technologies with which the book was concerned, and perhaps, also our progress in the direction he claimed we were going in, toward a technological Singularity.

Naturally a good many observers (myself included) reviewed Kurzweil's 1999 predictions for 2009 when that year arrived. Different writers, depending on which forecasts they chose to focus on, and how they judged them, came to different conclusions about his accuracy. I emphasized those precise, easily tested technological forecasts, and saw that many had not come to pass. I noted, too, that while not accounting for each and every error, there was a recognizable pattern in a good many of these, namely that Kurzweil assumed advances in neural networks enabling the kind of "bottom-up" machine learning that made for better and improving pattern recognition, as with the speech recognition supposed to make for the Language User Interfaces we never really got.

Looking back on those predictions from almost a decade on, however, it seems worth remarking that the improvement in neural nets was indeed disappointing in the first decade of the twenty-first century--but got a good deal faster in the second decade. And right along with it there has been improvement in most of the areas about which he seemed to have been overoptimistic, like translation technology. Indeed, as one of those who looked at his 1999 predictions for 2009 and was underwhelmed, I find myself increasingly suspecting that Kurzweil was accurate enough in guessing what would happen, and how, but, due to overoptimism about the rate of improvement in the underlying technology, was off the mark as to when--by roughly a decade.

Moreover, this does not seem to have been the only area where this was the case. The same may go for the use of carbon nanotubes, another area where, after earlier disappointments, technologists this decade achieved successes that may plausibly open the way to the new designs of 3-D chips he described, permitting vast improvements in computer performance. (As it happens, Kurzweil guessed that by 2019 3-D chips based on a nanomaterial substrate would be the norm. If IBM is right, he will have been off by rather less than a decade in this case.) It may also go for virtual reality--which likewise seems to be running a rough decade behind his guesses.

The advance in all these areas has, along with a burst of progress in other areas in which Kurzweil took much less interest (like self-driving cars of a different type than he conceived, and electricity production from renewable sources), contributed to a greater bullishness about these technologies--a techno-optimism such as I do not think we have had since the heady years of the late 1990s when access to personal computing, the Internet and mobile telephony began the proliferation and convergence that has people watching movies off Netflix on their "smart" phones while sitting on the bus.1

Of course, it should still be remembered that much of what has been talked about here is not yet an accomplished fact. The advances in AI have yet to make for proven improvements in consumer goods, while those 3-D nanotube-based chips have yet to enter production. At best, the first true self-driving car will not hit the market until 2020, while renewable energy (hydroelectric excepted) is just beginning to play a major role in the world's energy mix. Moreover, that the recent past has seen things pick up does not mean that the new pace of progress will remain the new norm. We might, in line with Kurzweil's expectation of accelerating returns, see it get faster still--or slow down once more.

1. Kurzweil's self-driving cars were based on "intelligent roads," whereas the cars getting the attention today are autonomous.

Review: The Wages of Destruction: The Making and Breaking of the Nazi Economy, by Adam Tooze

New York, Penguin, 2006, pp. 799.

I remember that reading Paul Kennedy's The Rise and Fall of Great Powers back in college I pored over the figures for Germany's World War II-era income and production and felt that something didn't quite fit. German arms output seemed a good deal less impressive than I expected given the country's image as an ultra-efficient manufacturing colossus, and the fact of its control over nearly the whole of the European continent.

The common explanation for this was political--that an irrational Nazi regime, for various reasons (its slowness to move to a total war footing out of fear of a repeat of the Revolution of 1918, a self-defeating brutality in its exploitation of occupied territories, Hitler's self-interestedly playing various interests off against each other at the expense of the war effort, etc.) failed to properly utilize its potentially overwhelming industrial capacity (at least, prior to the arrival of Albert Speer, who came along too late to accomplish very much).

By and large, I accepted this explanation.

More recently Adam Tooze's Wolfson prize winning study The Wages of Destruction: The Making and Breaking of the Nazi Economy offered a different view--that the question was not a matter of political failure of this type, but the material limitations of the German economic base. Germany was simply not big enough or rich enough to hold its own against an alliance of the United States, the British Empire and the Soviet Union--the coalescence of which Tooze contends was virtually inevitable, so that Hitler's attempt to create a continental empire in the east in the face of their opposition was simply not explicable in any rational terms. (Indeed, he holds it to be comprehensible only in terms of Hitler's bizarre, conspiratorial, apocalyptic racial theories.)

To the end of making that case Tooze devotes his book to a study of the German economy through the crucial period (1933-1945), less on the basis of new data than of old and well-known data (more or less like what I saw reading Kennedy) that has simply been marginalized in a great deal of analysis of the war. As he notes, Germany was the world's second-largest industrial power in the early twentieth century, well ahead of its European neighbors in manufacturing productivity and output, and "world-class" in several key heavy industrial and high-tech lines like steel, chemicals, machine tools and electrical equipment.

However, German manufacturing appears less impressive when one looks at the sector overall. While Germany still excelled its European neighbors in productivity when taken this way, its lead was slighter than might be imagined, so much so that the difference between Germany and, for example, Britain was negligible in comparison with the difference between Germany and the much more efficient (because much more "Fordist") United States (America getting twice as much per-worker output in many lines). There were even key heavy/high-tech lines in which Germany remained a relatively small producer--like motor vehicles, the volume of output of Germany's automotive industry lying far behind that of the U.S.

Moreover, manufacturing was just one sector of the economy--with the other sectors even worse off. The country's comparatively small, densely populated and less than resource-rich territory was problematic from the standpoint of its extractive sectors, which were also less productive than they might have been. The country's agricultural sector, notably, lagged not just the U.S. but even Britain in productivity, suffering from having too many of its workers on too little land divided into too many farms. All of this contributed not only to the country's per capita and national income being less than its manufacturing successes implied, but to the harsh reality that those manufacturing successes depended heavily on imports of food, energy and raw materials, to the point that the country's trade was scarcely balanced by its manufacturing exports.

The result was that not only was Germany poorer than is usually appreciated, but its resource-financial situation differing from that of the U.S. (resource-rich) or Britain (resource-poor but financially powerful, with the trading privileges empire brings) in this manner left it with less room than they to export less or import more, as a major military build-up demanded. Moreover, the attempts to alleviate this constraint by developing domestic sources of imported materials like iron, and inventing synthetic substitutes for the oil and rubber Germany could not produce locally, on top of entailing additional inefficiencies, fell far short of eliminating the problem, leaving it very vulnerable to the cut-off of essential imports, and meaning that something would have to give, fairly quickly.

This reading of the history is borne out by what actually did happen, the German government actually making ruthlessly efficient use of its domestic resources almost immediately on taking power--the facts about this refuting such clichés of the historiography as the unwillingness of the German government to sacrifice its people's living standards or put women to work for the sake of its military effort. (Indeed, already in the '30s the German government's system of central control of foreign exchange and key materials was gutting the production of consumer items like clothing in favor of armaments, while Germany had more of its women in the work force than Britain did.) However, for all that, Germany's merely pre-war preparations--its drive for maximum short-term output in the key military-industrial lines--translated to severe, unsustainable economic strain before the fighting even began, in such ways as the neglect of key infrastructure like the railroads, the price distortions caused by intricate central control, and the emergence of crises of cash flow and foreign exchange in spite of all the managerial feats and painful sacrifices. The inescapable recognition that the German economy by 1938 was a house of cards forced the government to curtail its rearmament programs at a point at which it was still nowhere near to being a match for Britain at sea, and disadvantaged in the face of combined Anglo-French air and ground forces in key respects (like numbers of up-to-date tanks), with the balance already swinging away from it to its opponents.1 And then even the cutbacks still left it in an economically precarious position.

Of course, the fighting did begin in 1939, and Germany did make successful conquests during the first year that did enlarge its resource base. However, in hindsight the conquests added less than might be imagined because of the limited size, resource bases and development of the countries it overran, not just the nations of Eastern Europe, but in Western Europe (Norway and Denmark, the Low Countries, and even France) as well.2 The result was that even the conquests fell far short of putting Germany on an even footing with the combined Big Three, the disparity still so wide as to moot many of the claims about how this or that more careful use of Axis resources by the Nazi leadership might have turned the tide of the war.3 Tooze reinforces this contention by showing that, for the most part, the Germans generally did what could be done with what was at their disposal (puncturing the "myth" of Albert Speer).4

A few questionable analogies apart (as when Tooze compares Germany's economic development in the 1930s to that of mid-income countries like South Africa today) Tooze offers a comprehensive and convincing image of the German economy, and especially its critical weaknesses. Moreover, while he pays less detailed attention to the other side of the balance sheet (it seemed to me that he was insufficiently attentive not just to France's demographic and industrial vulnerabilities but to Britain's financial and industrial weaknesses and problems of military overstretch in the late '30s), he is quite correct to contend that Germany, and even a Germany in control of much of continental Europe, was at a grave disadvantage in a prolonged war with a U.S.-U.K.-Soviet alliance.

Nonetheless, this emphasis on the inability of Germany to hold its own against the combined Allied Big Three obscures a weakness much more important than the deficiencies of the economic analysis--namely his depiction of the anti-Nazi alliance's coming together the way it did as a virtual certainty, his case for which is superficial and conventional compared with his economic analysis.5 He slights the leeriness of the U.K. and France about allying themselves with the Soviet Union that actually did prevent their coming together in 1939, with disastrous consequences. (Even after Poland, French conservatives were much more interested in fighting the Soviets than fighting Hitler. Indeed, the response of many of them to French defeat appears to have been a George Costanza-like "restrained jubilation" at the prospect of dispensing with democracy and setting up a fascist state in William Shirer's account of the Third Republic's collapse.)

Continuing along these lines, Tooze then goes on to slight the reality that, after the shock of the Battle of France (which brought Italy into the war on Germany's side), the hard-pressed, cash-strapped U.K. might have sought a negotiated peace, or that isolationism might have won the day in the United States, depriving Britain of American support. (In its absence the country would have had to settle with Germany--and the U.S. would have had few plausible options against Germany afterward, in spite of Germany's failures in the Battle of Britain and its naval limitations.) These developments would not have changed Germany's asset base--but they would have taken Britain out of the war, and likely kept Germany from having to fight the United States as well, leaving it to focus on just the Soviet Union, which would have changed the balance between them dramatically. Equally Tooze does not acknowledge, let alone refute, the possibility that in spite of the balance of power being so heavily against it as it was different German military decisions (most obviously "the Mediterranean strategy") could have produced a different outcome.

Additionally, these oft-studied "What ifs" aside, it is worth remembering the way the war did go. After Britain decided to stick it out against Germany, American support was exceedingly slow in coming, Britain effectively bankrupt by March 1941 before the aid spigot was turned on fully. Moreover, even with Britain still against it, and even after a delay of one crucial month due to the German invasion of the Balkans, mid-course changes to the plan of attack that cost additional time, and famously inimical weather, German troops managed to come within sight of Moscow before the invasion's first really significant repulse. And even afterward a depleted Soviet Union remained vulnerable (while any Anglo-American "second front" remained a long way off), permitting Germany to continue taking the offensive through the year, up to and even after the turning point of Stalingrad--following which it was another two-and-a-half years of fighting before V-E Day.

One should also remember the cost of this protracted effort. The Soviet Union lost some twenty-five million lives. Britain lost what remained of the foundations of its status as a world empire, a great power, and even an independent force in world affairs (in such a weak position that it had to take a $4 billion U.S. loan immediately after the war). The U.S. did not suffer anything comparable, but it is worth remembering that its extraordinary industrial effort went far beyond what even the optimists thought possible at the time--and was a thing the Allies could not have taken for granted in the earlier part of the war, even after the American entry.

Consequently, while in hindsight it appears clear that Germany's chances of attaining anything resembling its maximum war aims were never more than low (indeed, they would probably be deemed delusional but for the astonishing Allied timidity and incompetence of the key September 1939-May 1940 period), became still more implausible after the failures of German forces against the Soviet Union in late 1941, and were virtually inconceivable after 1942 came to a close, a victory so long, so horrifically costly and so deeply exhausting cannot be regarded with the complacency Tooze shows. Indeed, in his failure to critically examine this side of the issue, it appears that while Tooze takes on, and debunks, one myth of World War II (that of German economic might), he promotes others that have themselves already been debunked--most importantly, that of the political establishments of the future members of the wartime United Nations having been steadfast opponents of Nazism from the very beginning.6

1. The initial plans had called for an air force of 21,000 planes when Germany went to war. The Luftwaffe never had more than 5,000, however.
2. Paul Bairoch's oft-cited 1982 data set on world manufacturing output through history gives Germany 13 percent of world industrial capacity in 1938. All the rest of non-Soviet, continental Europe together accounted for another 13 percent, widely dispersed among its various nations, not all of which Germany controlled (Sweden and Switzerland remained independent), while the productivity of Germany and occupied Europe was hampered by the cut-off of imported inputs resulting from the Allies' control of the oceans. By contrast, Bairoch's data gives the U.S. alone 31 percent of world manufacturing output in 1938 (more than all non-Soviet Europe combined).
3. To return to Bairoch's figures, the U.S. in coalition with the U.K. (another 11 percent) and the Soviet Union (another 9 percent) possessed 51 percent of world manufacturing capacity--a 4-to-1 advantage over Germany, and a 2-to-1 advantage over all of non-Soviet, continental Europe combined (independent states that merely traded with Germany included). Moreover, even these figures understate the margin of Allied advantage, given how the U.S. economy boomed during the war years (its output growing by about three-quarters), and given that Britain brought its broader Empire, with its additional resources, into the alliance. Factoring in access to the food, energy and raw materials needed to keep all that capacity humming would seem likely to widen the disparity rather than diminish it.
In fairness, this was somewhat offset by two factors: the German conquest of much of the western Soviet Union, and the absorption of a portion of Allied resources by the simultaneous war in the Asia-Pacific region. However, much of the Soviet industrial base in the conquered areas was hurriedly relocated eastward ahead of the German advance, and contributed to the Soviet productive effort (with more still denied to the Germans). At the same time the diversion of Allied resources by the Pacific War was limited by Japan's more modest military-industrial capability (it had just 5 percent of world manufacturing in 1938), China's weight on the Allied side (much of the Japanese army and air force being tied up fighting the Chinese), and the Allies' prioritization of the war in Europe (the "Europe First" or "Germany First" policy).
4. Tooze contends that the rise of German output in the last part of the war was largely the result of decisions taken earlier in it, and that Speer simply happened to be in charge when those decisions paid off in the form of production gains. Tooze also offers a critical view of those initiatives Speer did undertake, like the effort to mass-produce U-boats late in the war.
5. While the critique of German strategy is solid, Tooze treats Allied strategy much less critically--a particular weakness when it comes to his assessment of the Combined Bomber Offensive.
6. The body of work regarding Britain in this respect can be found distilled in Clive Ponting's impressive 1940: Myth and Reality.

Reading Revisionism

It seems to me that I have been running across a great deal of revisionist World War II-era economic history as of late, focused primarily on how the balance stood between two of the war's major participants--Britain and Germany.

The old view was that the British Empire was stodgy and enfeebled, and looked it when it went up against sleek, ultra-modern, ultra-efficient Germany.

Much recent historiography has taken a different view--that Germany was not really so tough, nor Britain's performance so shabby. That it was really Germany that was the economic underdog against the trading and financial colossus--and, for all its dings, the techno-industrial colossus--that Britain happened to be.

Of course, there is nothing wrong with revisionism as such. New information, and the availability of new ways of looking at old evidence, should prompt rethinkings of old perceptions and assumptions. However, revisionism also has a way of pandering to the fashion of the times--and this seems to be one of those cases.

This line of argument seems to say that Britain didn't do so badly out of free trade. That the country's weaknesses in manufacturing just didn't matter that much--that the emphasis on finance instead was just fine. That, by implication, the conservative establishment did just fine overseeing the economy, managing the country's foreign affairs, and then fighting the war--not needing any help from a bunch of lefties and Laborites, thank you very much, and certainly not disgracing itself the way its critics claim it so clearly did post-Dunkirk.

It seems more than coincidence that this is all very congenial to a conservative outlook--in particular the one that today says free trade, financialization, deindustrialization, the balance of payments (and all the rest of the consequences the neoliberal turn has had for the economies of the developed world) simply do not matter, that our future can be trusted to economic orthodoxy, that critical and dissenting views have nothing to offer. And also the one that says Britain can do just fine on its own, without any allies, continental or otherwise.

It is no more convincing in regard to the past than it is in regard to the present.

Still, even if one doubts the more extreme claims (there is simply no refuting Britain's failures as a manufacturing and trading nation), there is a considerable body of hard fact making it clear that Germany had some grave weaknesses--and Britain, some strengths that have to be acknowledged in any proper appraisal of the way the war was fought. So it goes in two books I have just reviewed here--Adam Tooze's study of the German economy in the Nazi period, The Wages of Destruction, and David Edgerton's more self-explanatorily titled Britain's War Machine.
