Monday, May 15, 2017

The Hikikomori Phenomenon: A Sociological View

I don't think I've watched the Fusion TV channel before this past week--but while scanning the TV schedule I did recently notice that they were running a documentary on hikikomori, which was something of a surprise. Anime and manga fans who look beyond the hardcore science fiction/fantasy-action stuff that dominates the American market to the more slice-of-life, comedic series hear quite a bit about the phenomenon, but this kind of coverage seemed new.

Watching the documentary, I can't say that it offered anything I hadn't seen or read about before, but it did set me thinking about the issue again, and especially those aspects not addressed in it in an overt way--like the matter of social background (rarely commented upon, especially beyond the banality that the child of an impoverished household, lacking their own room, without a family able to bear the economic burden, cannot become hikikomori).

One exception to this tendency was this paper by Tatsuya Kameda and Keigo Inukai, which raised the possibility of a correlation between a lower middle class background and the hikikomori pattern of behavior.1 Kameda and Inukai note evidence of greater "emotional blunting" (reactions, and expressions of those reactions, muted) and lower social involvement (more time at home, less time out with friends) among lower middle class young persons as compared with those in more affluent groups--and observe that this correlates strongly with observations of hikikomori, suggesting a great many "undiagnosed" cases at this level.

Going further than Kameda and Inukai in approaching the hikikomori phenomenon as a sociological issue rather than a psychological one was another paper by Felipe Gonzales Noguchi, "Hikikomori: An Individual, Psychological Problem or a Social Concern?" Noguchi's paper refers to the "strain theory" of Robert K. Merton, which Merton elaborates brilliantly in his classic paper, "Social Structure and Anomie."2

Merton's theory holds that society sets certain goals for its members, and certain means for realizing those goals. In the America of his own time (and our time as well, and also in contemporary Japan) the goal is upward economic striving on an individual basis. The means, by and large, is doing well in school and then getting "the good job" (or, alternatively, personal entrepreneurship, though this is for myriad reasons the road less traveled today).

Someone accepting goal and means--grinding hard at school in the hopes of entering the most prestigious college they can, on entry into said college single-mindedly chasing the most "practical" degree (i.e. best odds for most money) consistent with their talents (e.g. business or engineering rather than music or anthropology), seeking out the highest-paying job they can get as graduation approaches and then grinding hard at that job in the expectation of raises and promotion (and keeping their ear to the ground just in case something still better comes along)--"conforms" perfectly.

However, there are alternatives to this acceptance of goal and means together. Some accept the goal but not the means. They want to get ahead, but try another way--like being a criminal in the most narrow, conventional sense of the term--which Merton called "innovation."

Some do not have much hope for the goal, but nonetheless abide by the approved means, going through the motions, at least, in what Merton called "ritualism." (The unenthusiastic bureaucrat doing their job adequately but no more, and counting the days until they get their pension, is a ritualist.)

Of course, some reject both goal and means, and this takes two forms. One is to try and escape the whole thing--"retreat," just drop out of the system. The other is to try and change the system--"rebellion."

Merton observed that "it is in the lower middle class that parents typically exert continuous pressure upon children to abide by the moral mandates of the society," a "severe training [that] leads many to carry a heavy burden of anxiety." This is made worse by the reality that the "social climb upward" stressed more severely here than anywhere else "is less likely to meet with success than among the upper middle class." (All other things being equal, the lower middle class kid has less access to the "good school," the "good job," than their more affluent, better connected peers do, precisely because those peers are more affluent and better connected.)

In short, lower middle class youth are under heavier pressure to conform, and at the same time, have less hope of a payoff at the end of the privations that conformity demands (while those privations may be more severe--college a different thing for the rich kid with the legacy than the scholarship kid, even if both take their studies seriously). The pressure and privation would seem more severe, and the hope dimmer, in a period where the economy is stagnant, the prospect of upward social mobility is declining, and the middle class is stressed, all features of Japanese life in the past generation.

Meanwhile, if Kameda and Inukai's observations are valid, those kids are also less able to express and thus cope with the feelings with which all this leaves them.

Of course, Merton suggests that the most likely response for the stressed but not very hopeful lower middle class sufferer from this situation is "ritualism." And ritualism may well be the most typical response. However, that by no means rules out "retreat," which (as Noguchi observed) seems to be exactly the hikikomori response. Indeed, the relative disconnect from life outside the home that Kameda and Inukai note as a feature of lower middle class life may further encourage this--especially if the mounting pressure, and the discrediting of conformity as a life path, make the bleakness of ritualism intolerable (while rebellion appears unworkable as an option meaningful to the individual in the short term).

Even in taking the sociological view of the matter (i.e. approaching the hikikomori phenomenon as a retreat in the Mertonian sense, to which lower middle class young people may be disproportionately driven), however, it is worth remembering that Western observers tend to make much of what are presumably unique features of Japanese life--its notoriously demanding education system, for example, or its low tolerance of nonconformity (while we Westerners congratulate ourselves on our freer and more tolerant ways). Still, as Lars Nesser asked,
can it be proved that the pressure Japanese youth experiences is any different from what American youth, or youths from other industrialized societies feel? Is the pressure to follow social norms so exceptionally strong in Japan compared to other countries? I do not think so.
The view of life as a demanding economic contest first and anything else a distant second at best; the combination of rising pressure and declining prospects for the lower middle class; the economic stagnation of recent decades--all of these have been more pronounced in Japan than in other places (the sharp shift from boom to bust circa 1990, notably), but none of them are unique to Japan by any means. And the truth is that there is nowhere in the world where nonconformity in any sense of the term, let alone this one (a life not dedicated to individual economic advancement), is easy. Nor, for that matter, is social isolation or withdrawal unknown as a response to the pains that go along with all this. (Merton could hardly have rounded out his paper the way he did otherwise.)

Indeed, I suspect a significant difference may be that where people are ready to accept that the hikikomori reflect a social problem in Japan, we look at their counterparts closer to home and simply mock them--making them the butt of smug jokes and sanctimonious social criticism about a generation "refusing to grow up" and just "needing a push" to do so, rather than seriously considering whether there is something bigger going on here.

1. This is, again, a situation where the word "lifestyle" is hugely inappropriate, and so I will pointedly not be using it.
2. For the purposes of this post I am using the longer version of the paper in his collection Social Theory and Social Structure. Both that version of the paper, and the whole collection, are highly recommended. Those looking for a quick overview can also get the essentials from the Wikipedia article discussing strain theory, which seems to me to give a good round-up of Merton's variant of it.

Review: The Power House, by William Haggard

London: Cassell, 1966, pp. 186.

As I have remarked here before, William Haggard's novels rarely get mentioned today--but when reference is made to Haggard, and certainly to a specific Haggard book, The Power House (1966) is likely to be it. Its story of a plot to overthrow a Labor PM named Harry, as it happens, is seen by many as echoing actual intrigues of the period--the reported plots against Harold Wilson by the political right during the late 1960s and 1970s described by, among others, journalist David Leigh in The Wilson Plot (1988).

Still, the story's interest goes well beyond any resemblance it may or may not bear to these events (which strikes me as slight at best). At the start of Haggard's novel far left Labor MP Victor Demuth plans to defect to the Soviet Union. Labor Prime Minister Harry Fletcher is understandably anxious to squelch the attempt with as little fuss as possible, and seeks help from his friend, casino owner Jimmy Mott. As it happens, one of Mott's croupiers, Bob Snake, is a relation of Colonel Russell of the Special Executive, and engaged to Demuth's daughter Gina.

Defections are, of course, standard Cold War fare. Still, as might be expected by those who have previously read Haggard, the East-West antagonism is a slight, remote thing next to the domestic ambitions and hostilities and treacheries--in which there is more than a little ordinary crime involved, and some wildly irrational personal enmity too. The PM's anxiety about Demuth has less to do with his slipping away with defense secrets and such than with the embarrassment to Fletcher and the party, which has an election coming up very soon. As it happens, the Soviets, uninterested in the man, toss him back (on the advice of Colonel Russell's Soviet counterpart and later friend, the Colonel-General), after which Russell finds it a simple enough matter to have Demuth locked up in a nursing home, away from nosy reporters.1 Nonetheless, that particular can of worms is almost immediately opened up again--by "the Squire," a rich, crippled Tory who harbors a seething hatred of the PM, and whose own casino interests make Mott a rival. He sees in the defection a tool with which he can bring Fletcher down (while also eliminating the competition for gamblers' business presented by Mott).2 Naturally Russell, and young Snake, wind up in the middle, with Russell's near-legendary status as head of the Executive, and his mastery of bureaucratic politics, much more than any knack for gunplay or anything else of the sort, his principal instruments for dealing with the crisis.

As might be expected by those familiar with other entries in the series, what follows is a good deal of intrigue, entailing a certain amount of violence, in which Russell's involvement is rather slight. What is less expected is the extent to which all this gets personal for the usually urbane, aloof, ironic Russell, his likes and dislikes, his loyalty to some and lack of loyalty to others figuring in the story. Indeed, by way of Snake, The Power House offers a good deal more about Russell's personal past than the other books do--as one might guess of a novel in which he comes to the rescue of a family member, especially one so unlikely-seeming as Bob Snake (an aunt of Russell's ran away with "a bog-Irishman . . . worse than marrying a black man," with Snake the issue of the relationship).

The plot mechanics this time around seemed rather more engaging than they were in Slow Burner, while the relative novelty of the ways in which Russell gets caught up in the events made it all rather a brisk read, irrespective of its vague relation to historical events.

1. It seems worth remarking that this genial relationship came along years before M and James Bond became friendly with General Gogol in the Bond films.
2. To the Squire Fletcher is a "traitor, destroying a natural order in the name of some half-baked progress . . . subtopia, the television and the bingo hall." Interestingly the sentiment is not all that different from what we see in Fleming's novels--or for that matter, Kingsley Amis' pastiche Colonel Sun, which opens with Bond driving through television aerialled subtopia, and expressing similar distaste for it and its residents.

Saturday, May 13, 2017

On the Cusp of a Post-Scarcity Age?

It is very clear that we have not entered a post-scarcity age with regard to the more material essentials of life--energy, food, housing. We are already consuming the resources of 1.6 Earths, and the pace is accelerating--even as a vast majority of the planet remains mired in poverty, much of it poverty beyond the painfully limited imaginations of those who form respectable opinion. The potentials of known technology and organizational methods are a source of hope--but the point is that the job has not yet been done (and the politics of the last four decades are, to most of those both deeply concerned and deeply informed, cause for despair about the status quo).

Yet, it may be that we are seeing something of a post-scarcity economic reality in other areas, particularly those relating to information of certain kinds. Of course, quality is one thing, quantity another--and vast asymmetries still exist and seem likely to endure. Still, simply searching for the written word, for sound, for images still and moving, we can now access a super-abundance of them at virtually no cost.

To take one example--the one I am really concerned with here--consider fiction. Let us, for the time being, not concern ourselves with the corporate bogeyman of piracy, or even the massive availability of commercial work released under watered-down versions of copyright ("copyleft") or work with copyright intact but given away for free for promotional purposes (as by working authors hoping to build up a readership), and just focus on the work that is normally available for free, like public domain work (basically, everything ever written in human history until the early twentieth century, and a surprising chunk of the work even after that), and work that was non-commercial to begin with (like fan fiction). Single subsets of the latter (like Harry Potter fan fiction) are so prolific that an avid reader could take more than a lifetime going through just a small portion of what is already there.

The abundance may not necessarily be convenient to sift, or suit every conceivable taste. Nonetheless, someone just looking for something to read need never pay for content. This fact, along with the relentless competition from other media continually offering more and more alternatives to reading as a use of spare time, continually widens the gap between supply and demand in favor of supply--enough that I suspect this, much more than piracy, to be the greatest challenge facing the would-be professional writer today. And how we deal with it may be a test of how we will deal with other, larger, more material questions of scarcity and abundance in the years to come.

Friday, May 12, 2017

Review: Village of Stars, by Paul Stanton

New York: M.S. Mill & Company and William Morrow & Co., 1960, pp. 241.

Paul Stanton's Village of Stars is interesting as an example of the military techno-thrillers (or if you prefer, "proto-techno-thrillers") that appeared between World War I and the genre's revival in the 1970s in the hands of writers like Martin Caidin, Craig Thomas, John Hackett and (in a different way) Frederick Forsyth.

Stanton's Village is set in the dozen years between Suez (1956) and the announcement of the end of British military commitments east of Suez (1968), when Britain had clearly been relegated to a tier below the U.S. and Soviet Union in the global power rankings, but was still considered "a world power and a world influence" in its own right, without any other peer on that level.

As it was, Britain had the world's only nuclear arsenal apart from the superpowers'; and after France and then China got the bomb, still the "number three" arsenal for quite some time, its considerable stockpile carried in its fleet of over a hundred strategic bombers (the V-bombers). It was still imagined that exceptional statesmanship, or perhaps technological "innovation," might further narrow the gap between Britain and the U.S. and Soviets--perhaps a breakthrough in the nuclear field (a theme seen in such prior novels as William Haggard's Slow Burner). In the meantime, the "policing" of the Indian Ocean region, because of the many colonies and closely associated ex-colonies about the region, from Australia and Malaya to Kuwait and Aden to Kenya and Tanzania, was regarded by both Britain and the U.S. as the country's special role within the Western military alliance.1

All of this, of course, is at the heart of the plot, in which Britain has developed a 100-megaton hydrogen bomb, a weapon neither of the superpowers has, and one which therefore makes it stand a bit taller than it otherwise might.2 It also happens that Britain is the key ally of Kanjistan--a (fake but real-sounding) country in the southwestern Caucasus, with the Soviet Union to its north, the Black Sea to its west and Iran to its south, in which a Soviet-backed revolt against the monarchy is underway, in support of which Soviet tanks are rolling south across the border. The United States is unsupportive of British military action there (shades of Suez), but Britain nonetheless redirects ships to the area and sends paratroops to the country, while the Air Force loads one of the 100-megaton bombs into a V-bomber dispatched to patrol off the Crimean peninsula, to make clear to the Soviets that Britain is quite serious about protecting its client.

As the crisis threatens to escalate the bomber crew gets orders to fuze the device, and does so--but when the matter is resolved diplomatically and they are told to unfuze it, they cannot, which is problematic because the device is set to go off if the plane descends below 5,500 feet. Naturally the question becomes how to prevent the giant bomb from going off disastrously, given that the aircraft must eventually stop flying. (I remember how, before the movie came out, 1996's Broken Arrow was once described to me as "Speed on an airplane." It wasn't quite that, of course, but the premise of Village suggests how it could have been.)

In all this the focus is overwhelmingly on the men in the bomber at the story's center, rather than the larger picture. Tellingly the first chapter offers five pages of oblique glimpses at the emerging situation--and then seventeen about Air Vice-Marshal Chatterton's personal assistant Helen Durrant attending a base dance the night of their arrival at the facility, where she happens to meet the bomber's pilot, John Falkner. Subsequent chapters continue in much the same fashion, showing the personal lives of Falkner and especially his crewmates, through lengthy scenes of their home lives with their wives and children (copilot Dick Beauchamp's failing marriage, Canadian-born Electronic Officer McQuade's happier one, Pinkney's struggle to bring up his children on his own), with only an occasional glance at the crisis in Kanjistan and the events in 10 Downing Street (let alone Washington, New York, Moscow), much more often sketched than painted--even as the narrative increasingly emphasizes the thriller plot.

I must admit that this is the opposite of what I expect of techno-thrillers. Certainly in my days as a fan snapping up the genre's latest releases, I personally favored the much more big picture-oriented style of the older Larry Bond (seen in his works up through Cauldron)--and Stanton's taking his very different approach did make for rather a slow start to the story. Nonetheless, some of the characterization was interesting--even if it was the relatively minor figure of the expert on the British super-bomb that held my attention (the physicist Dr. Marcus Zweig, his careerism and compromises, his interactions with the military personnel he must work with and the gulf of misunderstanding between them, all rather intelligently written), rather than the interactions of the principals. Additionally, if its treatment often felt overlong the relationship between Helen and John proves to be a bit more than just a love story tossed into the mix to pad out the book and broaden the market.

Meanwhile, even if the time devoted to the characters was not matched by its contribution to the interest of the story, the unfolding of the premise did provide the requisite suspense, which, if less well-detailed in many respects than I would have liked, struck me as competently thought out. In fact, the aspects of the story to which Stanton was more attentive seemed drawn with greater verisimilitude than in any comparable thriller dealing with the '60s-era British armed forces of which I know.3 The interest of all this as novelty was enhanced by the novel's time capsule-like quality--a portrayal of a tanker navigator guiding his plane by the sun in particular coming to mind. Similarly conventional to the genre, but unique in being presented in a Britain-centered work, is the stress on the world power standing of the protagonist nation--as a country which has forces active all around the globe, which takes the lead in dealing with crises so far from home, which develops super-bombs other nations cannot match.4 All that seems to me plenty to give it a larger place in the history of its genre than the book seems to enjoy.

1. This string of possessions originally grew out of the old imperial interest in India, now lapsed, but they had since acquired an importance in their own right. (Malayan rubber became surprisingly important to Britain's balance of payments, while the value of the Persian Gulf oilfields exploded.) Plus the U.S. was too absorbed in East Asia (dealing with Korea, China, Vietnam) to pay this region so much mind.
2. At the time 100-megaton bombs were a genuine preoccupation of the participants in the nuclear arms race, though the largest device ever actually detonated was the 50-megaton Soviet Tsar Bomba.
3. By contrast, in Ian Fleming's story of the hijacking of a V-bomber, Thunderball, some aspects of the technology's handling appear very credible, as with hijacker Giuseppe Petacchi's navigation of his plane, while others--like the seating arrangement in the aircraft--come off as awfully hazy.
4. As the bomber bearing the deadly burden makes its way from Britain to the Crimean peninsula, Stanton rather strikingly describes the lands it flies over, the people who look up in awe at this display of British power and reach.

Remember Paul Stanton?

Arthur David Beaty flew with the Royal Air Force in World War II and then after the war became a pilot with the British Overseas Airways Corporation. He also became a psychologist, and penned a pioneering study of the human factor in aircraft accidents--titled, with poetic obliquity, The Human Factor in Aircraft Accidents (1969).

However, he was perhaps most widely known under the name Paul Stanton, the pseudonym under which he wrote twenty novels, mostly aviation-themed tales, many of which, if not quite classifiable as techno-thrillers, are at the very least recognizable prototypes of the form.

One of these, Cone of Silence, had been inspired by the tragic fate of the British Comet airliner, and became a major motion picture of the same name (released in the U.S. as Trouble in the Sky), starring Grand Moff Tarkin (Peter Cushing) and M (Bernard Lee).

The same year Trouble in the Sky hit theaters, Stanton put out another aviation-themed novel, Village of Stars, which also got some attention from the film industry--Alfred Hitchcock buying the rights, but, alas, never making the movie.

Village of Stars is not just a techno-thriller, but specifically a military techno-thriller from well before the field became fashionable--the tale of a British V-bomber crew who find themselves at the heart of a nuclear crisis circa 1960.

You can read my review of the book here.

Thursday, May 11, 2017

What Ever Happened to Gold Eagle Publishing?

The Mack "the Executioner" Bolan novels that began with 1969's War Against the Mafia are regularly credited with founding the contemporary action-adventure genre. Today the series has over 400 novels in print. Most of these novels, and most of the better-known paperback series of the action-adventure type (like Warren Murphy and Richard Sapir's The Destroyer, or James Axler's post-apocalyptic Deathlands), have been published by Gold Eagle, a division of romance novel giant Harlequin dedicated to action-adventure fiction.

This sort of fiction, naturally, did not get much attention from the more prestigious reviews, and still less from highbrow critics. All the same, it accounted for a major chunk of the market in its heyday, the Los Angeles Times reporting that in 1987 Gold Eagle "shipped nearly 500 million copies of titles in its five leading men's adventure series alone."

Five hundred million copies in five series in one year!

Even if there is one too many zeroes in the number quoted above, one would imagine these to be record book-caliber sales.

However, when News Corp bought out Harlequin a few years back and decided to shut down this particular division, the whole genre and its publishers had already become so obscure that, despite the high profile of the firms involved, the fate of Gold Eagle was not noted by a single major news outlet (at least, so far as I have been able to find).

What happened?

Precisely because the genre never seems to have got much attention from analysts, and still less than that in more recent years, I haven't had much to work with in trying to figure this out. But I suspect a number of things.

One reason may be that in this age of ever-widening entertainment options, paperback fiction--not just paramilitary action but every genre--has been subject to something like the pressure filmmaking faced with the advent of TV. Over the '50s and '60s, movie attendance fell by almost an order of magnitude (from averaging something like 30 times a year to 4). Some of what had previously been B-movie content migrated to television, or up into A-picture territory (like thrillers, which were increasingly big-budget and polished), putting the squeeze on the B-grade stuff at the lower end of the film market.

Similarly the pulpy paperback novel would seem to have been squeezed by our own ever-widening, ever-more portable entertainment options. On the bus, the train, the plane, you can listen to music, play a video game, watch a TV show or movie, talk on your phone, text, peruse social media. And in fact it seems that the people doing all that far outnumber those who read--while the readers have ever more options themselves, many of them totally free. (The person reading off their screen next to you may be looking at fan fiction.) And now self-published writers are making bigger strides at the pulpy end of the book market than they are anywhere else--rarely selling very many copies individually, but the sheer number of people each selling just a few copies adds up to take a bite out of this little-studied end of the market. Meanwhile, the "A-picture" equivalent of the book market--the major press hardcover release--offers ever more of the same kind of content, "big-budget" and polished. (There's no shortage of thrillers or romance or anything else in that form; and certainly I can't think of any Mack Bolan or equivalent book that gives us quite the over-the-top ride that Matthew Reilly's Michael Bay-like successions of fifty-page action sequences serve up.)

Having so many other things to do besides read, and so many other things offering pleasures comparable to reading's, massively shrinks the audience for the old-style paperback heroes.

There is, too, the attachment of many of the highest profile paperback lines to a particular style of writing that has since dated. Today it seems a commonplace that Harlequin is looking old-fashioned next to E.L. James, and suffering for the fact. Where paperback action-adventure is concerned, it might be remembered the genre was strongly bound up with a style of paramilitary fiction that exploded in the '70s and '80s, but has since declined--right along with cinematic derivatives like Dirty Harry and Rambo and the Punisher, whose two 21st century films are comparative black marks on Marvel's record of commercial success. (Instead people expect superpowered superheroes and supervillains.)

However, some of the problems would seem to attach to the action-adventure genre specifically. Romantic movies would still seem to leave a niche for romantic novels, because of how much more scope novels have for getting into characters' heads and exploring their feelings, and the importance of this to their appeal. Video gaming certainly does not seem to have eaten very much into the romance market. (Japan has visual novels and dating simulators, but I'm not sure how big a factor they are relative to publishing, and they've certainly not caught on in the States to anything like the same degree.) But the action-adventure genre is something else, because of its stress on outwardly-directed, highly visual, highly complex action, portrayed with adrenaline-pumping immediacy and forcefulness; and on pacing brisk enough to keep us from noticing how silly the content usually is. Astonishingly close as a Matthew Reilly gets to that, in the end the fact remains that movies generally do this better than books, and video games better than movies--so why read a shoot 'em up when you can just watch one, or play one (and, again, do it anywhere, anytime now)? Reilly's novels have sold millions of copies--but it has to be admitted that this sales record falls far short of the first rank of commercially successful novelists.

In any event, this is all speculative, and my purpose in writing this post has not just been to share my guesses, but to invite yours. Does it sound like there's anything I've overlooked here? Please feel free to share your ideas in the comments thread below.

Thursday, May 4, 2017

Understanding the Word "Cool"

"Cool" is certainly one of the more frustrating of the words we hear every minute of every day for anyone trying to work out what such words actually mean--at least, when it is taken apart from its not very illuminating use as a generalized term of approval. The lengthy Wikipedia article on the subject, for example, offers no clear answer, but many possible answers of differing quality.

The closest it comes to a general explanation is in the article's first, introductory paragraph:
Coolness is an aesthetic of attitude, behavior, comportment, appearance and style which is generally admired. Because of the varied and changing connotations of cool, as well as its subjective nature, the word has no single meaning. It has associations of composure and self-control (cf. the OED definition) and often is used as an expression of admiration or approval.1
Still, difficult as many have found it to pin down the word's meaning, it seems safe to say that "coolness" is an aura of indifference rooted in a sense of untouchability, infallibility, invulnerability--all of it owed to social position, personal prowess, personal resources at least partially material. (By contrast, someone equally projecting an aura of indifference, but not "untouchable," not privileged, is seen as just a crank--lest anyone think coolness is just about "attitude.")

In short, coolness is the more or less universal "leisure class" aesthetic--and indeed, this reading would seem to be substantiated by the discussions of variations on the cool aesthetic in Africa, Europe, East Asia and elsewhere in the article cited above.

It also seems safe to say that those who manage to project the cool (i.e., leisure class) aura are imbued by it with certain privileges. They do not necessarily have to do what everyone else does; they get to be individuals; they are the ones who set the standard. And the element of distinctiveness or genuineness or innovation, if not displeasing, makes them seem cooler still.

Nonetheless, because it is a matter of aura and, frankly, delusion (such untouchability, infallibility, invulnerability are not real, cannot be), inextricable from the vagaries of social status and the pretensions of the leisure class way of looking at the world, coolness is very fragile. True, one does not have to do what everyone else does, one gets to be an individual and set the standard--but only within certain bounds. A cool person might make an occasional unconventional choice in the things they wear or consume or use--but too much of this sort of thing and others might wonder. If the expression of their individuality clashes with the foundations of their standing as "cool"--if they cease to seem indifferent because they are obviously passionate about something--then their status as a cool person is jeopardized. And of course, being truly nonconformist or unconventional (in their political opinions, for example) similarly jeopardizes their social position, and therefore their status. Their coolness might survive that--but more often they will be demoted from cool to crank in the view of others.

In the end, it would seem, the aura of the indifferent cool individual generally belongs not to the genuinely innovative or radical or rebellious, but simply to a conformist, conventional person who happens to be richer, freer, more permissive and permitted more than the rest, and hasn't yet made any really significant misstep. And so it goes with the "cool kids"--kids from more affluent and more permissive homes who get to flaunt the fashionable labels and brands, and equally flaunt "pseudomature" behavior.

All of which seems much ado about very little--and frankly a bit depressing to anyone who thinks social hierarchy, superficiality and general backwardness of these kinds are less than glorious features of the human story.

1. For what it is worth, I retrieved this quote on March 12, 2017.

Toward a History of Video Gaming

While researching the history of genre science fiction I found that its historians manage to produce a relatively coherent picture of it through the '70s. There is, for the half century up to that point, a series of centers on which to focus--key editors, publications, themes, styles, subgenres, movements, authors, works. Not everything is tidily reducible to these centers. Still, they are a helpful starting point for an analysis--and even the outliers can often be related to them, whether reacting against or paralleling them. (Arthur C. Clarke, for example, is a Golden Age giant--but he was not cultivated by John Campbell as part of the Astounding crowd the way Isaac Asimov and Robert Heinlein were, being influenced instead by Olaf Stapledon, and coming out of a related but different tradition.)

After the '70s--"after the New Wave"--the picture becomes much more confusing, the field lacking such centers, and it proves more difficult for anyone to get a handle on it all, even with the decades of perspective we now have. People talk about, for example, cyberpunk as having been important, but even that term's use is contentious and confused in a way that "Golden Age" or "New Wave" are not. In fact, after many, many years of thinking about the issue the best I was able to do for Cyberpunk, Steampunk and Wizardry was find a series of key themes running through the last four decades or so:

1. The rise of science fiction as a mass market genre. (This was a commercial, business change rather than a strictly artistic development, but hugely important for all that.)
2. Postmodernist science fiction. (Postmodernism in science fiction goes back at least to Philip K. Dick--but amid talk of "radical hard science fiction" it became an explicit, self-conscious object for an influential coterie of writers, and cyberpunk and steampunk are best understood through this lens, even as some of their elements, and the labels themselves, entered into much wider usage.)
3. Alternate history. (Again not new, but it became more commonplace, and actually started to become a genre in its own right.)
4. The blending of science fiction and fantasy. (Also not new, but again more common and more self-conscious--as evident in the "New Weird" and so forth.)

Right now the history of video gaming seems comparable. It appears relatively easy to get a coherent history up to a point--the '90s in this case--but after that coverage of the subject gets much more chaotic. Before then, the arcade and the home console/computer were centers--and closely connected ones at that, with arcade hits regularly going on to become successes on the home console. However, now we have a good deal more fragmentation. There is the traditional, solitary experience--but there is also massively multiplayer online gaming. We had mobile gaming before, but now there is the division between users of dedicated devices accommodating complex play, and gaming on the cell phones and tablets "everyone" has. There are the divisions between hardcore and casual gamers, and between those who grab the latest game right away and retro gamers. Meanwhile, the near-dominance of gaming by Japan (and indeed, by Nintendo) has given way not just to a renaissance of Western gaming but, by way of the relative ease and low cost of producing games for the more casual player, to a more globalized market (with Angry Birds coming out of Finland, and Flappy Bird out of Vietnam).

There are simply too many different technologies, markets, subcultures for any one analyst to feel themselves in command of it all. Or so it seems to me.

Does anyone have a different take on the situation?

Wednesday, May 3, 2017

Thoughts on the Box Office: XXX: The Return of Xander Cage

The performance of XXX: The Return of Xander Cage at the box office is a familiar story now--a film not very warmly received in the U.S. did much better in China, enough to turn a potential flop into what looks like a solid earner.

In China the movie made more in its first weekend than it made in its whole American run (over $60 million, versus its $45 million total stateside take), and went on to rake in a staggering $164 million. That nearly matched its earnings just about everywhere else, bringing the total gross up to $346 million, which means the film has now made roughly four times its stated production budget, at which point it seems safe to assume a profit is being turned.
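For what it's worth, the figures above can be checked with a little back-of-the-envelope arithmetic (all sums in millions of dollars; note that the production budget is not stated in this post, so the last number below is merely the budget implied by the "four times" claim, not a reported figure):

```python
# Figures cited above, in millions of dollars.
china_total = 164
us_total = 45
worldwide_total = 346

# China's total "nearly matched its earnings just about everywhere else":
rest_of_world = worldwide_total - china_total - us_total
print(rest_of_world)  # 137, indeed in the neighborhood of China's 164

# The worldwide gross is said to be roughly four times the (unstated)
# production budget, implying a budget of about:
print(worldwide_total / 4)  # 86.5
```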

So I guess I was right about the series' U.S. gross, more or less (that it would not come close to the heights of the original), but wrong about how well it would play overseas, and especially in China, where Diesel seems especially popular--benefiting, perhaps, more than in other places from the fortunes of the Fast and Furious franchise, which has performed exceptionally well there. (Furious 7's China gross exceeded its U.S. earnings--$390 million to the $353 million it took in the U.S.--while so far Fate of the Furious is doing better, having pulled in $320 million to date.)

I guess, too, that while the talk of Paramount's interest in a fourth XXX film before the movie's release seemed overly bullish to me, the sequel now seems very likely to happen, and sooner rather than later.

Tuesday, March 21, 2017

What Ever Happened to the Japanese Microchip Industry?

David E. Sanger wrote in the New York Times in 1986 that the
American electronics industry . . . has virtually lost the entire memory market to low-cost Japanese competitors . . . Dozens of American manufacturers have fled the commodity memory chip business, unable to match Japan's remarkable manufacturing efficiencies or constant price cutting . . . By most estimates, Japanese manufacturers have seized a remarkable 85 percent of the market for the current generation of chips . . .
Moreover, Japan's lead showed signs of widening. By 1988 the country accounted for over half the world's microchip production (51 percent), with the top three makers (NEC, Hitachi, Toshiba) all Japanese. At the end of the decade Japanese chip makers were expected to put 16-megabit chips onto the market while U.S. companies were just getting around to making 4-megabit ones (two doublings behind), and Japan was the first to unveil the prototypes of far more advanced chips than those--the 64-megabit chip in 1990 and the 256-megabit in 1992 (64 times--six doublings--as powerful as the 4-megabit chip standard at the time), events which got considerable press coverage. Indeed, Japan's position in the sector was regarded as emblematic of its industrial prowess, and even a basis for claims of superpower status.1
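The generational gaps mentioned above are simply powers of two, and can be sketched in a few lines (a minimal illustration of the arithmetic, not anything drawn from the sources cited):

```python
import math

def doublings(base_mbit: int, target_mbit: int) -> int:
    """Number of capacity doublings separating two memory chip generations."""
    return int(math.log2(target_mbit / base_mbit))

# Japanese 16-megabit chips vs. U.S. firms' 4-megabit ones:
print(doublings(4, 16))   # 2, i.e. two doublings behind

# The 256-megabit prototype vs. the then-standard 4-megabit chip:
print(doublings(4, 256))  # 6 doublings, i.e. 2**6 = 64 times the capacity
```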

Such talk evaporated during the '90s, and seems virtually forgotten today.

What happened?

The conventional wisdom chalks it up to the idea that the Japanese companies weren't doing so well as they had been given credit for, and the American ones better--that Japanese business had got about as far as its practices could take it and was too rigid to change, in contrast with a freewheeling, endlessly self-reinventing America, epitomized by that subject of endless libertarian paeans, the Silicon Valley start-up.

However, while American firms rallied (with Intel the outstanding success story) and Japanese firms made mistakes in coping with the resulting, more competitive market (like the flawed restructuring efforts that spun off firms too undercapitalized to compete), the reality is more complex. Such a disproportionate share of any market as Japan enjoyed in the late '80s generally tends to be fleeting--especially when the product is evolving rapidly, and the market rapidly growing. Both of these considerations applied here, as rapidly growing chip capabilities meant rapid changes in the productive plant, and chip consumption rose exponentially--making advantage temporary, and creating opportunity for those looking to grab a piece of the action.

There was, too, the question of how Japanese chipmakers came to enjoy their extraordinary '80s-era position in the first place--not only through the quality of their chips, but the success of their most important customers in their own lines. These happened to be Japanese makers of consumer electronics (like Sony), who had the dominant global position in their own sectors in those years--which, since they bought their chips from a few Japanese firms, made those industrial giants' squarely orienting themselves to their chip needs a plausible strategy. However, as the share of the world consumer electronics market these companies enjoyed declined, so did the share of the potential customer base for microchips they constituted--just as new, lower-cost chip producers entered the world market (like South Korea's Samsung).

Still, for all the changes, and the missteps, Japan remains a significant producer of chips today--accounting for 14 percent of the world total in 2013, behind only the United States and South Korea.

Additionally, there is the matter of the market for the inputs needed to make the chips: the raw materials from which the chips are made (like silicon wafers) and the manufacturing equipment needed to print circuits on them (like photolithography equipment). Japan's production accounts for an extraordinary 50 percent of the former, and 30 percent of the latter. While less often publicized, such shares of these difficult, exacting markets are arguably even more impressive testaments to Japan's manufacturing prowess than the chip sales of three decades ago.

1. Shintaro Ishihara famously declared in his book The Japan That Can Say No that Japan's lead in the technology put the nuclear balance in its hands, because that nation alone could make the chips needed for accurate nuclear warhead guidance, while the victory of U.S. forces in the 1991 Gulf War was hailed as actually a triumph of Japanese technology, because there were Japanese components guiding its weapons.

James Bond for the YA Crowd?

It has long been impossible to pay much attention to contemporary science fiction and fantasy and not be hugely aware of the presence of young adult fiction within it. The historic commercial success of such franchises as Harry Potter, Twilight, Percy Jackson, The Hunger Games and Divergent (among others) has loomed large within not just the genre landscape, but popular culture as a whole. Indeed, Game of Thrones apart, virtually every really major publishing success of the genre has been a YA phenomenon. And while the esteem for such works among the hard core of science fiction and fantasy fans, and especially the "higherbrow" among them, has not been on a par with their commercial success, YA has attracted the attention of such critical darlings as Cory Doctorow, Paul di Filippo and Scott Westerfeld, each writing heavily in this area in recent years, and even getting some critical recognition for the results, as with the Hugo nomination Doctorow got for Little Brother.

The same cannot be said of other genres. Indeed, science fiction and fantasy have done as well as they have in YA partly because of the lingering prejudice that they are kids' stuff anyway, and ironically, because of the way in which the adult stuff has become so adult--so involved, so dense, so literary, often all at the same time--that someone who has not been a longtime reader of contemporary science fiction and fantasy has a hard time getting into it, or even getting it at all. (That a book like The Hunger Games is so apt to seem derivative and undemanding and unimpressive to someone who reads full-blown literary science fiction for grown-ups is an asset, not a liability, in the marketplace.)

By contrast, a YA thriller is necessarily a toned-down thriller--which does not mean that it cannot be entertaining, but it must eschew the easier ways of achieving its effects, showing restraint in handling violence and other, rougher fare. All the same, writers do write thrillers aimed at the YA market, and these have long included spin-offs of 007--going back at least to R.D. Mascott's The Adventures of James Bond Junior 003 1/2 (1967)--as well as imitations, one of the more successful of which has been the Alex Rider novels of Anthony Horowitz, of which there are presently ten in print, with an eleventh reportedly on the way this year.

Personally I have taken little interest in these efforts, giving them only cursory attention even while tracking down and reading every one of the regular continuation novels. Still, having recently reviewed Horowitz's Bond continuation novel Trigger Mortis (2015), I decided to give the first Alex Rider book, Stormbreaker, a look--not because I thought he had done anything really new with the concept (the tradition was already well-worn when Fleming came up with Bond, himself just an update of a half century of clubland heroes), but because I was curious as to how he would cram Bondian adventure into the life of a young adult, and whether very much of such adventure would remain in it when he was done. You can read my thoughts on that book here.

Thursday, February 23, 2017

Review: Power-Up: How Japanese Video Games Gave the World an Extra Life, by Chris Kohler

Indianapolis, IN: Bradygames, 2004, pp. 312.

Chris Kohler's book, as the title promises, is concerned with the revolution wrought by Japanese video game makers between the 1970s and the 1990s, in which Nintendo and its most famous titles (from Donkey Kong to Pokemon) loomed so large. The subject is complex enough that, rather than attempting to offer a relatively linear history of the field, the book goes through it subject by subject, generally with good results. Kohler displays a robust interest in the formal aspects of the games--their appearance, structure, modes of play and storytelling--and explains these incisively, both in cases of specific, earlier pioneering games, and the larger history of the form. This extends to minute analysis of classics from Donkey Kong to the original Final Fantasy with the aid of numerous screen captures arranged in flow chart-like fashion, affording not just something of a lesson in the Poetics of Video Gaming, but enabling us to look at these old games with fresh eyes.

In doing justice to his subject matter Kohler's discussion extends well beyond gaming to its interconnections with manga, anime, pop music and other corners of Japanese pop culture--which did so much to make Japanese video games what they were. The creative stars were not techies, but had interests and backgrounds in the visual arts and storytelling media (like manga), as was the case with Pac-Man creator Toru Iwatani; Donkey Kong, Mario and Zelda creator Shigeru Miyamoto; and Dragon Quest creator Yuji Horii. And this was crucial to the contributions that they did make to their field--the appealing characters, narratives, visuals, music and gameplay experience, and the associated techniques and refinements (like cinematic cut scenes) that lifted the genre above the minimalist sports and shooting games that defined the early, more narrowly programmer-driven history of the field. This extends even to glances at intriguing but oft-overlooked games like Shigesato Itoi's RPG Mother and Fumito Ueda's ICO. There is also a good deal of material on how the product had to be translated and localized for sale in foreign markets.

It might be added that Kohler's book contains enough material on the history of the business to be interesting as a recounting of the development of the information technology sector--the more so for how different this recounting is from the Silicon Valley mythology to which discussion of the American gaming industry (all too predictably) conforms. Start-ups by computer programmers (like INIS) do have their part in the story. However, the businesses that played the central role were established companies in other fields that took a chance on the new sector (Nintendo, for example, was an almost century-old manufacturer of playing cards), and the stars of the story, artists though they were, were Company Men rather than entrepreneurs striking out on their own.

Moreover, while Kohler's book has undoubtedly dated in its coverage of a relatively fast-changing field, much of what it says still seems relevant--in particular, Nintendo's accent on sheer fun (which in the years since stood the Wii in good stead, relative to the high cost and hardcore gaming orientation of other consoles). Additionally, the timing of Power-Up's release in the early 2000s now affords a certain amount of perspective on the major changes in gaming since then, namely the decline of the standing of Japanese firms relative to Western ones in the field; the claims that Nintendo may have gone from cutting-edge to backward-looking; and the possibility that this is just part of a still larger story, specifically the transition of gaming away from consoles to online and mobile devices.

That said, it could be argued that, important as Nintendo is in this story, the discussion may be a bit too Nintendo-centric to be taken as a history of Japanese gaming. (Despite some very real successes, Sega, for example, does not even rate an index listing here.1) It should also be remembered that while Kohler is generally knowledgeable about and respectful of the culture of which he is writing, this is nonetheless an American book aimed at an American reader. Anyone looking for much discussion of the reception for Japanese games in any foreign market but the U.S. would have to look elsewhere. On some occasions, he also approaches his subject through American stereotypes about the two countries (e.g. conformist, stifling Japan vs. everybody-chases-their-dreams America), as when he recounts the career of Pokemon creator Satoshi Tajiri, declaring that
had Japanese societal norms had their way, Pokemon would never have been born, its creator not given the freedom to follow his own path . . . he wouldn't go on to college, but spent two years at a technical school. His father got him a job as an electrical technician. He refused to take it.
The expectation that one go on to college and take the workaday job rather than devote oneself to a creative career is hardly some foreign exoticism, but similarly the norm in America (where parents are also apt to be far from pleased to hear their kid tell them they mean to be an artist rather than go for the practical degree and the steady eight-to-six).

There are also instances in which the presentation of his information could have been improved. Despite this distinctly American viewpoint, Kohler does not always provide the clarifying notes that an American audience would expect. (The Super Mario Brothers 2 he discusses in his overview of the Super Mario series is not the one North Americans remember, but an earlier, more derivative sequel that was not released in the U.S. at the time--a fact that only gets proper acknowledgment in a much later chapter.) Additionally, there are sometimes listings of information within the main text of a chapter that might have been better reserved for tables or appendices--as with a seven-page listing of soundtrack releases of the music of the Final Fantasy game franchise.

However, these are comparative quibbles in regard to a book that I, for one, found to be well worth my while all these years later, a continued relevance reflected in Kohler's releasing an updated and expanded edition of the book in 2016 (24 pages longer, according to its page on Amazon). You can read about it here.

1. Sega's Master System, after all, was the closest thing Nintendo had to a rival in the 8-bit era, and its Genesis console virtually on a par as a competitor in the 16-bit era, while its later consoles met with varying degrees of success up to the Dreamcast. Additionally, Sega produced one of the few video game characters that can be compared with Mario as a pop culture icon, Sonic the Hedgehog.

Sunday, February 5, 2017

A Fragment on Fan Writing

In researching Star Wars in Context I was time and again surprised to find that answers to many fairly obvious questions about the Star Wars franchise--even discussion of those questions--were awfully scarce on the web. Some were simply nonexistent.

For example, "everyone" knows that George Lucas based Star Wars on Akira Kurosawa's The Hidden Fortress to a considerable degree. Indeed, in a typically shallow and pointless display of erudition, Ted Mosby even dropped this little factoid on Stella while sitting her down for her first viewing of the movie on How I Met Your Mother.

Yet just how did one film inspire the other? And no less important, what are the significant differences? Actual discussion of the connections and parallels between the two films, even of the most superficial kind, is quite elusive for a Googler--or for anyone else.

This was not a problem for me because I had plenty of my own to say about that; and anyway, to the extent that I was saying something others weren't, well, that said to me that writing the book wasn't so pointless as I'd feared.

All the same, why is this kind of thing so often the case?

Simply put, it's a lot easier to serve up generalizations than home in on the finer details, easier to offer impressions than analysis, easier to assert than to really explain--and while generalizations, impressions and assertions are not necessarily uninformative or unhelpful (some of them actually are informative and helpful), we could generally do with a whole lot more detail, analysis and explanation than we are getting in our online chatter.1

It probably doesn't help that the comparative few capable of doing better--who have the grasp of the history and technique of the medium they want to write about, who really know something about film and actually see films like Hidden Fortress (black and white, subtitles, etc.) so that they understand them and can communicate that understanding--rarely (not never, but rarely) write about films like Star Wars. And when they bother, they rarely take the same trouble that they do with more "serious" subjects.

And so a Mosby can get away with such a pointless display of erudition as dropping the name of Kurosawa's film--pointless because I am not sure what significance it could have had for Stella in that scene unless she was familiar with the other film and it was some sort of cinematic touchstone for her, which we were never given reason to think; and because making too much of the connection has doubtless confused the issue for many.

Review: Britain's War Machine: Weapons, Resources and Experts in the Second World War, by David Edgerton

New York: Oxford University Press, 2011, pp. 445.

In his book Warfare State, David Edgerton made the case that British historiography has tended to overlook the fact of a military-industrial complex as a massive presence in the country's life at mid-century, distorting understandings of matters like government support for industry and science, and the welfare state that gets far more attention.

Edgerton followed up this study with a book concentrating on the British warfare state in the World War II period, Britain's War Machine.

Rather than a comprehensive history of Britain's war effort, the book focuses on particular aspects of that effort, and makes a number of contentions, among the most important the following:

* Far from being finished as an economic power by Victorian decline, World War I and the Depression, Britain remained a considerable economic power in the 1930s, a central element of which was its still being a considerable industrial power. Moreover, that industrial base was not just a matter of strength in old, "declining" sectors like coal, iron, steel and shipbuilding, but also the emergence of major British players in the new high-tech sectors; while in contrast with the derisory view of British productivity, the country was actually quite up-to-date in this respect--often superior to Germany, and in some respects on a level with the U.S.

* British technological prowess extended beyond the civilian sphere to the military. Moreover, military innovation was not, as is often imagined, largely a matter of civilians (e.g. inspired individual outsiders) who had to fight against conservative authorities. Rather there was a vast military establishment actively initiating programs seen through by state scientists. Indeed, even those thought of as outsiders fighting the establishment, like jet engine pioneer Frank Whittle, were often insiders--Whittle an air force officer sent to Cambridge by his service, which then seconded him to a company set up specifically to develop his idea. If anything, British decisionmakers may have been too quick to place their faith in technical fixes for the problems they faced. That we think otherwise is due to the civilian academics having been in a better position to tell their story.

* More generally, Britain translated its considerable economic, industrial, technological strength into commensurate military strength. Far from being disarmed, Britain had the world's largest navy, a first-rank air force with an unmatched bombing arm and defensive radar system, and the world's most thoroughly mechanized and motorized army through the interwar period. Reflecting the strength of its military-industrial base, it was also the world's largest arms exporter of the interwar years--while this base was massively expanded in the years of rearmament, beginning at the relatively early date of 1935 (the same time as Germany's rearmament from a much weaker position and smaller economic base).

* To the extent that Britain was bent on appeasement during the 1930s, it was not a matter of an anti-military left, but a pro-military right which rearmed while conciliating Hitler, even as the normally more pacifistic left sought a harder line in dealings with him.

* When Britain did finally enter the fight, the initial expectation was not one of a hopeless conflict, but that its superior wealth and techno-industrial capability, in contact with the larger world's resources by way of the sea, gave it confidence of eventual victory. This is not belied but affirmed by the manner in which Britain went to war: partnering with continental allies backed up by a British contingent, while relying on its naval and economic instruments to bring the aggressor to heel.

* This confidence was not extirpated by the fall of France, in part because of Britain's considerable resources; and in part because at the time Britain never considered itself alone, even in the June 1940-June 1941 period--having as it did an empire covering a quarter of the world, the backing of exiled governments which brought over significant assets (like Norway's huge fleet of merchant ships), and access to the production and resources of the Americas. (The most that could be said was that it was the only great power directly engaging the Nazis.)

* Rather than a period of national unity (or leftist triumph) which unprecedentedly brought Labor into government, and the Left more generally, the war was largely an affair of the conservative political Establishment and its military-industrial complex.

* When the war ended, Britain--validating the optimism about its ability to win its war--was less damaged than is widely appreciated. Certainly it suffered far less loss of blood and treasure than its continental counterparts, even at the height of the war. (In the 1940-1943 period when the U-boat war was raging, Britain actually managed to get by fairly well, by enlarging domestic production and making more effective use of its shipping.) Rather its position relative to the rest of the world was diminished mainly by the extraordinary rebound of the U.S. from the Depression, combined with the decision of the U.S. to remain engaged in Eurasian and world affairs in the way it had opted not to be after World War I.

I see little room for argument with many of these claims. As Edgerton argues, Britain did remain a substantial economic, industrial and military power that went into the war well-armed rather than unprepared, thanks in part to an inventive and highly productive military-industrial complex. Appeasement was more a reflection of the will of the right than the left, and the country then went to war not under the anti-Hitler left but an essentially Establishment regime. There was wide expectation that, utilizing its traditional military approach, the country would see the war through to victory with its allies (Britain was never alone, even after the fall of France); the country was never close to being broken by the U-boat attacks; and its economic-military capacity came out of the war less diminished in absolute terms than in relative ones, thanks to the extraordinary American growth of the 1930s.

Indeed, to the extent that Edgerton sets the record straight on the "While Britain Slept" image of a country that could have avoided the war but for its failure to rearm; on the actuality and weight of a military-industrial complex in British political life; on just who was really promoting appeasement; on the consistency in Britain's pattern of war-making; and on the real limits of the Left's influence and accomplishments in this era; he does the historiography a considerable service. To a lesser extent, one may say the same for his putting the British experience during the war into perspective. (Others had it far, far worse.)

However, his study also has its weaknesses. His characterization of the strategic situation is particularly flawed. While he compares how Britain and Germany stacked up against one another, in the 1930s that was far from the only relevant balance of power. For British planners, the concern was Britain against Germany in Europe, against Italy in the Mediterranean, and against Japan in the Far East, with the nightmare scenario being Britain having to fight all three at the same time--as was actually the case by December 1941.1

Still more significant is his often superficial treatment of the macroeconomic picture, and the way that side of the situation evolved during the conflict. Edgerton seems to me correct about the country's large, sophisticated military-industrial complex--but slights the important matter of the rest of the industrial base. As it happens, he actually makes a favorable comparison of the arms factories with the coal mines, cotton mills and civilian shipyards that he himself notes had received little investment since the early 1920s--but avoids drawing the obvious conclusion from that lack of investment. Equally, he shows very little sense of nuance in discussing the country's newer, high-tech sectors, taking no interest in whether Britain's firms were world-class companies or mere second-stringers unable to compete outside a protected home and sterling area market; or in whether the British divisions of foreign firms were low-end assembly units putting together imported parts as a way of circumventing the tariff barrier, or genuine high-end production capacity testifying to and developing a broader and deeper British know-how.

Still less does the book consider what any or all of these facts meant for even the narrower question of the military-industrial complex, let alone the larger matter of financing the war. After all, a robust defense sector still needs metal products, machine tools and other goods not strictly in its line, so that a really first-class military-industrial capacity requires a first-class industrial capacity generally--and Britain's position was problematic there. To a very great degree the steel and the machine tools it needed to make its weapons had to be imported from the United States (and even from Germany), while many of the components of successes like the Supermarine Spitfire had to be imported as well (the plane not just made with American tools, but packing American instruments and machine guns). And as it was ultimately the civilian economy that had to pay for such efforts, all of this meant that, coming on top of an already deteriorating export position, trade balance and balance of payments, the country's economy was under serious strain before the war even began (less severe than Germany's, but an unsustainable strain all the same).

The war, of course, made matters much worse--a fact again given short shrift in the book. The section of Chapter Three considering the matter is headed "SAVED BY THE U.S.A.?" with the question mark conspicuous and significant. He emphasizes that Britain paid its way up to that point in dollars and declares that "it was buying from the USA without heed to its longer-term economic needs . . . because it knew from the end of 1940 that U.S.-financed help was likely to be available for the future." However, there is no way to take this as anything but a slighting of the hard fact of British bankruptcy a year and a half into the war, when the country's hard currency reserve was down to nearly nothing while the conflict was far from won. There is also no conclusion other than that had it not been for America's turning on the money spigot, Britain would have had to make peace with the Axis powers in early 1941, a peace that would have left the Nazis dominant in Western Europe, and free to turn east, after which the Soviet Union really would have been alone. Still less does this refute the equally hard fact of Britain's weakened financial condition after the war, when it was dependent on massive U.S. backing (a billion-pound loan in 1946, support for its currency and outright bail-outs for decades, techno-military transfers like the nuclear submarine and Polaris missile), which at times came at high cost (like the painful post-war devaluation of sterling), despite which it consistently fell short of realizing its governments' schemes for reinvigorating its economy, expanding its welfare state or retaining its world political and military role.

All of this testified to a very real weakness on Britain's part, specifically the deficiency of its manufacturing sector when broadly approached, with all its economic consequences--not least for the country's ability to bear the stresses of war, which it did neither as well as the larger U.S. nor as well as it had itself done in the World War I era. The result is that what Edgerton really does in this part of the discussion is remind the reader that Britain had strengths as well as weaknesses, successes as well as failures, in its economic life in the interwar era and its economic effort during the war, rather than integrate the two to create a more satisfying whole. As a result Britain's War Machine works less well as a new history than as a corrective to some of the conventional wisdom--needed as that may be.

1. This is, of course, all the more important because of the implications of Japan's military victories for the endurance of Britain's south and east Asian empire--and of that, in turn, for its standing as a world power.

Notes on Kirby Buckets Warped

I'm just as surprised as anyone to be writing about this show.

I was scarcely aware of the existence of Kirby Buckets until just a few weeks ago, and had seen very little of it prior to the recent third season, which caught my attention because of how unusual it has been for broadcast television--the sharp shift of the show in genre and structure (the episodic tween sitcom about an aspiring cartoonist becoming a 13-episode story of interdimensional hopping, heavy on science fiction parody), and the unique airing schedule (the 13 episodes airing on 13 weekday mornings over three weeks).

Alas, the writing rarely rises above the level of the mildly amusing. In fact, the heavy reliance on gross-out humor reflects a certain laziness in its pursuit of its target demographic. All the same, the makers of the show actually do serve up an arc, rather than just tease the audience with the prospect of one--and manage to have some fun with the science fiction clichés they evoke. (Of course there's a post-apocalyptic dimension where the characters meet the Mad Max versions of the people they know; here they come complete with Australian accents.)

Additionally, the cast is a pleasant surprise, accomplishing a lot even when given just a little to work with, with three of its members standing out. Suzi Barrett shows a good deal of comedic flair in the role of Kirby's mom, getting her fair share of laughs. Olivia Stuck somehow makes Kirby's sister-from-hell Dawn sympathetic (or at least pitiable). And of course, improv master, veteran voice actor and "Simlish" cocreator Stephen Kearin's Principal Mitchell is a memorable mass of eccentricities, sufficient to (almost) singlehandedly make the hackiest of shows watchable.

And so it went down to the finale (aired Thursday), which, to the creative team's credit, actually wrapped up a storyline, and in the process offered the sense of a bigger tale ending as Kirby, Mitchell, Dawn and the rest closed one chapter in their lives and began another. However, whether all this will be enough to lead to a fourth season is a different matter. The show, poorly rated to begin with (one reason for the change, I suspect), has seen its viewership plunge to abysmal levels--under 200,000 if I read the numbers right, rather worse than the shows Disney XD so recently axed, Lab Rats: Elite Force and Gamer's Guide to Pretty Much Everything. Whether it survives will depend, I suppose, on whether viewership picks up during the reruns this weekend (and further airings of the show), whether the executives feel bothered by the way they are running out of live-action shows to put on the air, or whether the creators can sell them on another sharp shift of course. And maybe on all of these together.
