Thursday, May 11, 2017

What Ever Happened to Gold Eagle Publishing?

The Mack "the Executioner" Bolan novels that began with 1969's War Against the Mafia are regularly credited with founding the contemporary action-adventure genre. Today the series has over 400 novels in print. Most of these novels, and most of the better-known paperback series of the action-adventure type (like Warren Murphy and Richard Sapir's The Destroyer, or James Axler's post-apocalyptic Deathlands), have been published by Gold Eagle, a division of romance novel giant Harlequin dedicated to action-adventure fiction.

This sort of fiction, naturally, did not get much attention from the more prestigious reviews, and still less from highbrow critics. All the same, it accounted for a major chunk of the market in its heyday, the Los Angeles Times reporting that in 1987 Gold Eagle "shipped nearly 500 million copies of titles in its five leading men's adventure series alone."

Five hundred million copies in five series in one year!

Even if there is one too many zeroes in the number quoted above, one would imagine these to be record-book-caliber sales.

However, when News Corp bought out Harlequin a few years back and decided to shut down this particular division, the whole genre and its publishers had already become so obscure that, despite the high profile of the firms involved, the fate of Gold Eagle was not noted by a single major news outlet (at least, so far as I have been able to find).

What happened?

Precisely because the genre never seems to have got much attention from analysts, and still less than that in more recent years, I haven't had much to work with in trying to figure this out. But I suspect a number of things.

One reason may be that paperback fiction, not just in paramilitary action but any genre, has been subject to something like the pressure filmmaking faced with the advent of TV in this age of ever-widening entertainment options. Over the '50s and '60s, movie attendance fell by almost an order of magnitude (from averaging something like 30 times a year to 4). Some of what had previously been B-movie content migrated to television, or up into A-picture territory (like thrillers, which were increasingly big-budget and polished), putting the squeeze on the B-grade stuff at the lower end of the film market.

Similarly the pulpy paperback novel would seem to have been squeezed by our own ever-widening, ever-more portable entertainment options. On the bus, the train, the plane, you can listen to music, play a video game, watch a TV show or movie, talk on your phone, text, peruse social media. And in fact it seems that the people doing all that far outnumber those who read--while the readers have ever more options themselves, many of them totally free. (The person reading off their screen next to you may be looking at fan fiction.) And now self-published writers are making bigger strides at the pulpy end of the book market than they are anywhere else--rarely selling very many copies individually, but the sheer number of people each selling just a few copies adds up to take a bite out of this little-studied end of the market. Meanwhile, the "A-picture" equivalent of the book market--the major press hardcover release--offers ever more of the same kind of content, "big-budget" and polished. (There's no shortage of thrillers or romance or anything else in that form; and certainly I can't think of any Mack Bolan or equivalent book that gives us quite the over-the-top ride that Matthew Reilly's Michael Bay-like successions of fifty-page action sequences serve up.)

Having so many things to do other than read, and so many other things offering pleasures comparable to those of reading, massively shrinks the audience for the old-style paperback heroes.

There is, too, the attachment of many of the highest profile paperback lines to a particular style of writing that has since dated. Today it seems a commonplace that Harlequin is looking old-fashioned next to E.L. James, and suffering for the fact. Where paperback action-adventure is concerned, it might be remembered the genre was strongly bound up with a style of paramilitary fiction that exploded in the '70s and '80s, but has since declined--right along with cinematic derivatives like Dirty Harry and Rambo and the Punisher, whose two 21st century films are comparative black marks on Marvel's record of commercial success. (Instead people expect superpowered superheroes and supervillains.)

However, some of the problems would seem to attach to the action-adventure genre specifically. Romantic movies would still seem to leave a niche for romantic novels, because of how much more scope novels have for getting into characters' heads and exploring their feelings, and the importance of this to their appeal. Video gaming certainly does not seem to have eaten very much into the romance market. (Japan has visual novels and dating simulators, but I'm not sure how big a factor they are relative to publishing, and they've certainly not caught on in the States to anything like the same degree.) But the action-adventure genre is something else, because of its stress on outwardly-directed, highly visual, highly complex action, portrayed with adrenaline-pumping immediacy and forcefulness; and on pacing brisk enough to keep us from noticing how silly the content usually is. Astonishingly close as a Matthew Reilly gets to that, in the end the fact remains that movies generally do this better than books, video games better than movies, so why read a shoot 'em up when you can just watch one--or play one? (And again, do it anywhere, anytime now? Reilly's novels have sold millions of copies--but it has to be admitted that this sales record falls far short of that first rank of commercially successful novelists.)

In any event, this is all speculative, and my purpose in writing this post has not just been to share my guesses, but to invite yours. Does it sound like there's anything I've overlooked here? Please feel free to share your ideas in the comments thread below.

Thursday, May 4, 2017

Understanding the Word "Cool"

Certainly one of the more frustrating words for anyone trying to work out what the words we hear every minute of every day actually mean is "cool"--at least, when it is taken apart from its not very illuminating use as a generalized term of approval. The lengthy Wikipedia article on the subject, for example, offers no clear answer, but many possible answers of differing quality.

The closest it comes to a general explanation is in the article's first, introductory paragraph:
Coolness is an aesthetic of attitude, behavior, comportment, appearance and style which is generally admired. Because of the varied and changing connotations of cool, as well as its subjective nature, the word has no single meaning. It has associations of composure and self-control (cf. the OED definition) and often is used as an expression of admiration or approval.1
Still, difficult as many have found it to pin down the word's meaning, it seems safe to say that "coolness" is an aura of indifference rooted in a sense of untouchability, infallibility, invulnerability--all of which is due to social position, personal prowess, and personal resources at least partially material. (By contrast, someone equally projecting an aura of indifference, but not "untouchable," not privileged, is seen as just a crank--lest anyone think coolness is just about "attitude.")

In short, coolness is the more or less universal "leisure class" aesthetic--and indeed, this reading would seem to be substantiated by the discussions of variations on the cool aesthetic in Africa, Europe, East Asia and elsewhere in the article cited above.

It also seems safe to say that those who manage to project the cool (e.g. leisure class) aura are imbued by it with certain privileges. They do not necessarily have to do what everyone else does; they get to be individuals; they are the ones who set the standard. And the element of distinctiveness or genuineness or innovation, if not displeasing, makes them seem cooler still.

Nonetheless, because it is a matter of aura and, frankly, delusion (such untouchability, infallibility and invulnerability are not real, and cannot be), inextricable from the vagaries of social status and the pretensions of the leisure class way of looking at the world, coolness is very fragile. True, one does not have to do what everyone else does, one gets to be an individual and set the standard--but only within certain bounds. A cool person might make an occasional unconventional choice in the things they wear or consume or use--but too much of this sort of thing and one might wonder. If the expression of their individuality clashes with those foundations of their standing as "cool"--if they cease to seem indifferent because they are obviously passionate about something--then their status as a cool person is jeopardized. And of course, being truly nonconformist or unconventional (in their political opinions, for example) similarly jeopardizes their social position, and therefore their status. Their coolness might survive that--but more often they will be demoted from cool to crank in the view of others.

In the end, it would seem, the aura of the indifferent cool individual is generally not that of the genuinely innovative or radical or rebellious, but simply that of a conformist, conventional person who happens to be richer, freer, more permissive and permitted more than the rest, and hasn't yet made any really significant misstep. And so it goes with the "cool kids"--kids from more affluent and more permissive homes who get to flaunt the fashionable labels and brands, and equally flaunt "pseudomature" behavior.

In the end it all seems much ado about very little--and frankly a bit depressing to anyone who thinks social hierarchy and superficiality and general backwardness of these kinds are less than glorious features of the human story.

1. For what it is worth, I got this quote March 12, 2017.

Toward a History of Video Gaming

While researching the history of genre science fiction I found that its historians manage to produce a relatively coherent picture of it through the '70s. There is, for the half century up to that point, a series of centers on which to focus--key editors, publications, themes, styles, subgenres, movements, authors, works. Not everything is tidily reducible to these centers. Still, the center is a helpful starting point for an analysis--and even the outliers can often be related to it, if reacting against or paralleling it. (Arthur C. Clarke, for example, is a Golden Age giant--but he was not cultivated by John Campbell as part of the Astounding crowd the way Isaac Asimov and Robert Heinlein were, instead influenced by Olaf Stapledon, coming out of a related but different tradition.)

After the '70s--"after the New Wave"--the picture becomes much more confusing, the field lacking such centers, and it proves more difficult for anyone to get a handle on it all, even with the decades of perspective we now have. People talk about, for example, cyberpunk as having been important, but even that term's use is contentious and confused in a way that "Golden Age" or "New Wave" are not. In fact, after many, many years of thinking about the issue the best I was able to do for Cyberpunk, Steampunk and Wizardry was find a series of key themes running through the last four decades or so:

1. The rise of science fiction as a mass market genre. (This was a commercial, business change rather than a strictly artistic development, but hugely important for all that.)
2. Postmodernist science fiction. (Postmodernism in science fiction goes back at least to Philip K. Dick--but amid talk of "radical hard science fiction" it became an explicit, self-conscious object for an influential coterie of writers, and cyberpunk and steampunk are best understood through this lens, even as some of their elements, and the labels themselves, entered into much wider usage.)
3. Alternate history. (Again not new, but it became more commonplace, and actually started to become a genre in its own right.)
4. The blending of science fiction and fantasy. (Also not new, but again more common and more self-conscious--as evident in the "New Weird" and so forth.)

Right now the history of video gaming seems comparable. It appears relatively easy to get a coherent history up to a point--the '90s in this case--but after that coverage of the subject gets much more chaotic. Before then, the arcade and the home console/computer were centers--and closely connected ones at that, with arcade hits regularly going on to become successes on the home console. However, now we have a good deal more fragmentation. There is the traditional, solitary experience--but there is also massively multiplayer online gaming. We had mobile gaming before, but now there is the division between users of dedicated devices accommodating complex play, and gaming on the cell phones and tablets "everyone" has. There are the divisions between hardcore and casual gamers, between those who grab the latest game right away and retro gamers. Meanwhile, the near-dominance of gaming by Japan (and indeed, Nintendo) has given way to not just a renaissance of Western gaming, but, by way of the relative ease and low cost of producing games for the more casual player, a more globalized market (with Angry Birds coming out of Finland, Flappy Bird coming out of Vietnam).

There are simply too many different technologies, markets, subcultures for any one analyst to feel themselves in command of it all. Or so it seems to me.

Does anyone have a different take on the situation?

Wednesday, May 3, 2017

Thoughts on the Box Office: XXX: The Return of Xander Cage

The performance of XXX: The Return of Xander Cage at the box office is a familiar story now--a film not very warmly received in the U.S. did much better in China, enough to turn a potential flop into what looks like a solid earner.

In China the movie made more in its first weekend than it made in its whole American run (over $60 million, versus its $45 million total stateside take), and went on to rake in a staggering $164 million. That nearly matched its earnings just about everywhere else, bringing the total gross up to $346 million, which means the film has now made roughly four times its stated production budget, at which point it seems safe to assume a profit is being turned.

So I guess I was right about the series' U.S. gross, more or less (that it would not come close to the heights of the original), but wrong about how well it would play overseas, and especially in China, where Diesel seems especially popular--perhaps benefiting from the fortunes of the Fast and Furious franchise, which has performed exceptionally well there. (Furious 7's China gross exceeded its U.S. earnings--$390 million to the $353 million it took in the U.S.--while Fate of the Furious is so far doing even better, having already pulled in $320 million.)

I guess, too, that I was wrong to regard the pre-release talk of Paramount's interest in a fourth XXX film as overly bullish: the sequel now seems very likely to happen, and sooner rather than later.

Tuesday, March 21, 2017

What Ever Happened to the Japanese Microchip Industry?

David E. Sanger wrote in the New York Times in 1986 that the
American electronics industry . . . has virtually lost the entire memory market to low-cost Japanese competitors . . . Dozens of American manufacturers have fled the commodity memory chip business, unable to match Japan's remarkable manufacturing efficiencies or constant price cutting . . . By most estimates, Japanese manufacturers have seized a remarkable 85 percent of the market for the current generation of chips . . .
Moreover, Japan's lead showed signs of widening. By 1988 the country accounted for over half the world's microchip production (51 percent), with the top three makers (NEC, Hitachi, Toshiba) all Japanese. At the end of the decade Japanese chip makers were expected to put 16-megabit chips onto the market while U.S. companies were just getting around to making 4-megabit ones (two doublings behind), and Japan was the first to unveil the prototypes of far more advanced chips than those--the 64-megabit chip in 1990 and the 256-megabit in 1992 (64 times--six doublings--as powerful as the 4-megabit chip standard at the time), events which got considerable press coverage. Indeed, Japan's position in the sector was regarded as emblematic of its industrial prowess, and even a basis for claims of superpower status.1

Such talk evaporated during the '90s, and seems virtually forgotten today.

What happened?

The conventional wisdom chalks it up to the idea that the Japanese companies had never been doing so well as they had been given credit for, and the American ones better--that Japanese business had got about as far as its practices could take it and was too rigid to change, in contrast with a freewheeling, endlessly self-reinventing America, epitomized by that subject of endless libertarian paeans, the Silicon Valley start-up.

However, while American firms rallied (with Intel the outstanding success story) and Japanese firms made mistakes in coping with the resulting, more competitive market (like the flawed restructuring efforts that spun off firms too undercapitalized to compete), the reality is more complex. Such a disproportionate share of any market as Japan enjoyed in the late '80s generally tends to be fleeting--especially when the product is evolving rapidly, and the market rapidly growing. Both of these considerations applied here, as rapidly growing chip capabilities meant rapid changes in the productive plant, and chip consumption rose exponentially--making advantage temporary, and creating opportunity for those looking to grab a piece of the action.

There was, too, the question of how Japanese chipmakers came to enjoy their extraordinary '80s-era position in the first place--not only through the quality of their chips, but the success of their most important customers in their own lines. These happened to be Japanese makers of consumer electronics (like Sony), who had the dominant global position in their own sectors in those years--which, since they bought their chips from a few Japanese firms, made those industrial giants' squarely orienting themselves to their chip needs a plausible strategy. However, as the share of the world consumer electronics market these companies enjoyed declined, so did the share of the potential customer base for microchips they constituted--just as new, lower-cost chip producers entered the world market (like South Korea's Samsung).

Still, for all the changes, and the missteps, Japan remains a significant producer of chips today--accounting for 14 percent of the world total in 2013, behind only the United States and South Korea.

Additionally, there is the matter of the market for the inputs needed to make the chips: the raw materials from which the chips are made (like silicon wafers), and the manufacturing equipment needed to print circuits on them (like photolithography equipment). Japan's production accounts for an extraordinary 50 percent of the former, and 30 percent of the latter. While less often publicized, such shares of these difficult, exacting markets are arguably even more impressive testaments to Japan's manufacturing prowess than the chip sales of three decades ago.

1. Shintaro Ishihara famously declared in his book The Japan That Can Say No that Japan's lead in the technology put the nuclear balance in its hands, because that nation alone could make the chips needed for accurate nuclear warhead guidance, while the victory of U.S. forces in the 1991 Gulf War was hailed as actually a triumph of Japanese technology, because there were Japanese components guiding its weapons.

James Bond for the YA Crowd?

It has long been impossible to pay much attention to contemporary science fiction and fantasy and not be hugely aware of the presence of young adult fiction within it. The historic commercial success of such franchises as Harry Potter, Twilight, Percy Jackson, The Hunger Games and Divergent (among others) has loomed large within not just the genre landscape, but popular culture as a whole. Indeed, Game of Thrones apart, virtually every really major publishing success of the genre has been a YA phenomenon. And while the esteem for such works among the hard core of science fiction and fantasy fans, and especially the "higherbrow" among them, has not been on a par with their commercial success, YA has attracted the attention of such critical darlings as Cory Doctorow, Paul di Filippo and Scott Westerfeld, each writing heavily in this area in recent years, and even getting some critical recognition for the results, as with the Hugo nomination Doctorow got for Little Brother.

The same cannot be said of other genres. Indeed, science fiction and fantasy have done as well as they have in YA because of the lingering prejudice that they are kid's stuff anyway, and ironically, because of the way in which the adult stuff has become so adult--so involved, so dense, so literary, often at the same time, that someone who has not been a longtime reader of contemporary science fiction and fantasy has a hard time getting into it, or even getting it at all. (That a book like The Hunger Games is so apt to seem derivative and undemanding and unimpressive to someone who reads full-blown literary science fiction for grown-ups is an asset, not a liability, in the marketplace.)

By contrast, a YA thriller is necessarily a toned-down thriller--which does not mean that it cannot be entertaining, but that it must eschew the easier ways of achieving its effects, being restrained in handling violence and other, rougher fare. All the same, writers do write thrillers aimed at the YA market, and these have long included spin-offs of 007--going back at least to R.D. Mascott's The Adventures of James Bond Junior 003 1/2 (1967)--as well as imitations, one of the more successful of which has been the Alex Rider series of Anthony Horowitz, of which there are presently ten novels in print, with an eleventh reportedly on the way this year.

Personally I have taken little interest in these efforts, giving them only cursory attention even while tracking down and reading every one of the regular continuation novels. Still, having recently reviewed Horowitz's Bond continuation novel Trigger Mortis (2015), I decided to give the first Alex Rider book, Stormbreaker, a look--not because I thought he had done anything really new with the concept (the tradition was already well-worn when Fleming came up with Bond, himself just an update of a half century of clubland heroes), but because I was curious as to how he would cram Bondian adventure into the life of a young adult, and whether very much of such adventure would remain in it when he was done. You can read my thoughts on that book here.

Thursday, February 23, 2017

Review: Power-Up: How Japanese Video Games Gave the World an Extra Life, by Chris Kohler

Indianapolis, IN: Bradygames, 2004, pp. 312.

Chris Kohler's book, as the title promises, is concerned with the revolution wrought by Japanese video game makers between the 1970s and the 1990s, in which Nintendo and its most famous titles (from Donkey Kong to Pokemon) loomed so large. The subject is complex enough that, rather than attempting to offer a relatively linear history of the field, the book goes through it subject by subject, generally with good results. Kohler displays a robust interest in the formal aspects of the games--their appearance, structure, modes of play and storytelling--and explains these incisively, both in cases of specific, earlier pioneering games, and the larger history of the form. This extends to minute analysis of classics from Donkey Kong to the original Final Fantasy with the aid of numerous screen captures arranged in flow chart-like fashion, affording not just something of a lesson in the Poetics of Video Gaming, but enabling us to look at these old games with fresh eyes.

In doing justice to his subject matter Kohler's discussion extends well beyond gaming to its interconnections with manga, anime, pop music and other corners of Japanese pop culture--which did so much to make Japanese video games what they were. The creative stars were not techies, but had interests and backgrounds in the visual arts and storytelling media (like manga), as was the case with Pac-Man creator Toru Iwatani; Donkey Kong, Mario and Zelda creator Shigeru Miyamoto; and Dragon Quest creator Yuji Horii. And this was crucial to the contributions that they did make to their field--the appealing characters, narratives, visuals, music and gameplay experience, and the associated techniques and refinements (like cinematic cut scenes) that lifted the genre above the minimalist sports and shooting games that defined the early, more narrowly programmer-driven history of the field. This extends even to glances at intriguing but oft-overlooked games like Shigesato Itoi's RPG Mother and Fumito Ueda's ICO. There is also a good deal of material on how the product had to be translated and localized for sale in foreign markets.

It might be added that Kohler's book contains enough material on the history of the business to be interesting as a recounting of the development of the information technology sector--the more so for how different this recounting is from the Silicon Valley mythology to which discussion of the American gaming industry (all too predictably) conforms. Start-ups by computer programmers (like INIS) do have their part in the story. However, the businesses that played the central role were established companies in other fields that took a chance on the new sector (Nintendo, for example, was an almost century-old manufacturer of playing cards), and the stars of the story, artists though they were, were Company Men rather than entrepreneurs striking out on their own.

Moreover, while Kohler's book has undoubtedly dated in its coverage of a relatively fast-changing field, much of what it says still seems relevant--in particular, Nintendo's accent on sheer fun (which in the years since stood the Wii in good stead, relative to the high cost and hardcore gaming orientation of other consoles). Additionally, the timing of Power-Up's release--in the early 2000s--is still recent enough to provide a certain amount of perspective on the major changes in gaming since then, namely the decline of the standing of Japanese firms relative to Western ones in the field; the claims that Nintendo may have gone from cutting-edge to backward-looking; and the possibility that this is just part of a still larger story, specifically the transition of gaming away from consoles to online and mobile devices.

That said, it could be argued that, important as Nintendo is in this story, the discussion may be a bit too Nintendo-centric to be taken as a history of Japanese gaming. (Despite some very real successes, Sega, for example, does not even rate an index listing here.1) It should also be remembered that while Kohler is generally knowledgeable about and respectful of the culture of which he is writing, this is nonetheless an American book aimed at an American reader. Anyone looking for much discussion of the reception for Japanese games in any foreign market but the U.S. would have to look elsewhere. On some occasions, he also approaches his subject through American stereotypes about the two countries (e.g. conformist, stifling Japan vs. everybody-chases-their-dreams America), as when he recounts the career of Pokemon creator Satoshi Tajiri, declaring that
had Japanese societal norms had their way, Pokemon would never have been born, its creator not given the freedom to follow his own path . . . he wouldn't go on to college, but spent two years at a technical school. His father got him a job as an electrical technician. He refused to take it.
The expectation that one go on to college and take the workaday job rather than devote oneself to a creative career is hardly some foreign exoticism, but similarly the norm in America (where parents are also apt to be far from pleased to hear their kid tell them they mean to be an artist rather than go for the practical degree and the steady eight-to-six).

There are also instances in which the presentation of his information could have been improved. Despite the distinctly American view, Kohler does not always provide the clarifying notes that an American audience would expect. (The Super Mario Brothers 2 he discusses in his overview of the Super Mario series is not the one we recall in North America, but an earlier game, more derivative of the original, that was not released in the U.S. at the time--and that fact only gets proper acknowledgment in a much later chapter.) Additionally, there are sometimes listings of information within the main text of a chapter that might have been better reserved for tables or appendices--as with a seven-page listing of releases of soundtracks of the music of the Final Fantasy game franchise.

However, these are comparative quibbles in regard to a book that I, for one, found to be well worth my while all these years later, a continued relevance reflected in Kohler's releasing an updated and expanded edition of the book in 2016 (24 pages longer, according to its page on Amazon). You can read about it here.

1. Sega's Master System, after all, was the closest thing Nintendo had to a rival in the 8-bit era, and its Genesis console virtually on a par as a competitor in the 16-bit era, while its later consoles met with varying degrees of success up to the Dreamcast. Additionally, Sega produced one of the few video game characters that can be compared with Mario as a pop culture icon, Sonic the Hedgehog.

Sunday, February 5, 2017

A Fragment on Fan Writing

In researching Star Wars in Context I was time and again surprised to find that answers to many fairly obvious questions about the Star Wars franchise--even discussion of those questions--were awfully scarce on the web. Some of it was even nonexistent.

For example, "everyone" knows that George Lucas based Star Wars on Akira Kurosawa's The Hidden Fortress to a considerable degree. Indeed, in a typically shallow and pointless display of erudition, Ted Mosby even dropped this little factoid on Stella while sitting her down for her first viewing of the movie on How I Met Your Mother.

Yet, just how did one film inspire the other? And not less important, what are the significant differences? Actual discussion of the connections and parallels between one film and the other, even of the most superficial kind, is actually quite elusive for a Googler--and for anyone in general.

This was not a problem for me because I had plenty of my own to say about that; and anyway, to the extent that I was saying something others weren't, well, that said to me that writing the book wasn't so pointless as I'd feared.

All the same, why is this kind of thing so often the case?

Simply put, it's a lot easier to serve up generalizations than home in on the finer details, easier to offer impressions than analysis, easier to assert than to really explain--and while generalizations, impressions and assertions are not necessarily uninformative or unhelpful (some of them actually are informative and helpful), we could generally do with a whole lot more detail, analysis and explanation than we are getting in our online chatter.1

It probably doesn't help that the comparative few capable of doing better--who have the grasp of the history and technique of the medium they want to write about, who really know something about film and actually see films like Hidden Fortress (black and white, subtitles, etc.) so that they understand them and can communicate that understanding--rarely (not never, but rarely) write about films like Star Wars. And when they bother, they rarely take the same trouble that they do with more "serious" subjects.

And so a Mosby can get away with such a pointless display of erudition as dropping the name of Kurosawa's film--pointless because I'm not sure what significance this could have had for Stella in the scene, unless she was familiar with the other film, and it was some sort of cinematic touchstone for her, and we were never given reason to think that she was; and because making too much of the connection has doubtless confused the issue for many.

Book Review: Britain's War Machine: Weapons, Resources and Experts in the Second World War, by David Edgerton

New York: Oxford University Press, 2011, 445 pp.

In his book Warfare State, David Edgerton made the case that British historiography has tended to overlook the fact of a military-industrial complex as a massive presence in the country's life at mid-century, distorting understandings of matters like government support for industry and science, and the welfare state that gets far more attention.

Edgerton followed up this study with a book concentrating on the British warfare state in the World War II period, Britain's War Machine.

Rather than a comprehensive history of Britain's war effort, the book focuses on particular aspects of that effort, and makes a number of contentions, among the most important the following:

* Far from being finished as an economic power by Victorian decline, World War I and the Depression, Britain remained a considerable economic power in the 1930s, a central element of which was its still being a considerable industrial power. Moreover, that industrial base was not just a matter of strength in old, "declining" sectors like coal, iron, steel and shipbuilding, but also the emergence of major British players in the new high-tech sectors; while in contrast with the derisory view of British productivity, the country was actually quite up-to-date in this respect--often superior to Germany, and in some respects on a level with the U.S.

* British technological prowess extended beyond the civilian sphere to the military. Moreover, military innovation was not, as is often imagined, largely a matter of civilians (e.g. inspired individual outsiders) who had to fight against conservative authorities. Rather there was a vast military establishment actively initiating programs seen through by state scientists. Indeed, even those thought of as outsiders fighting the establishment, like jet engine pioneer Frank Whittle, were often insiders--Whittle "an air force officer" sent to Cambridge by his service which then seconded him to a company set up specifically to develop his idea. If anything, British decisionmakers may have been too quick to place their faith in technical fixes for the problems they faced. That we think otherwise is due to the civilian academics having been in a better position to tell their story.

* More generally, Britain translated its considerable economic, industrial, technological strength into commensurate military strength. Far from being disarmed, Britain had the world's largest navy, a first-rank air force with an unmatched bombing arm and defensive radar system, and the world's most thoroughly mechanized and motorized army through the interwar period. Reflecting the strength of its military-industrial base, it was also the world's largest arms exporter of the interwar years--while this base was massively expanded in the years of rearmament, beginning at the relatively early date of 1935 (the same time as Germany's rearmament from a much weaker position and smaller economic base).

* To the extent that Britain was bent on appeasement during the 1930s, it was not a matter of an anti-military left, but a pro-military right which rearmed while conciliating Hitler, even as the normally more pacifistic left sought a harder line in dealings with him.

* When Britain did finally enter the fight, the initial expectation was not one of a hopeless conflict, but that its superior wealth and techno-industrial capability, in contact with the larger world's resources by way of the sea, gave it confidence of eventual victory. This is not belied but affirmed by the manner in which Britain went to war: partnering with continental allies backed up by a British contingent, while relying on its naval and economic instrument to bring the aggressor to heel.

* This confidence was not extirpated by the fall of France, in part because of Britain's considerable resources; and in part because at the time Britain never considered itself alone, even in the June 1940-June 1941 period--having as it did an empire covering a quarter of the world, the backing of exiled governments which brought over significant assets (like Norway's huge fleet of merchant ships), and access to the production and resources of the Americas. (The most that could be said was that it was the only great power directly engaging the Nazis.)

* Rather than a period of national unity (or leftist triumph) that unprecedentedly brought Labour, and the Left more generally, into government, the war was largely an affair of the conservative political Establishment and its military-industrial complex.

* When the war ended, Britain--validating the optimism about its ability to win its war--was less damaged than is widely appreciated. Certainly it suffered far less loss of blood and treasure than its continental counterparts, even at the height of the war. (In the 1940-1943 period when the U-boat war was raging, Britain, despite the U-boats, actually managed to get by fairly well by enlarging domestic production and making more effective use of its shipping.) Rather its position relative to the rest of the world was diminished mainly by the extraordinary rebound of the U.S. from the Depression, combined with the decision of the U.S. to remain engaged in Eurasian and world affairs in the way it had opted not to be after World War I.

I see little room for argument with many of these claims. As Edgerton argues, Britain did remain a substantial economic, industrial and military power that went into the war very well-armed rather than unprepared, thanks in part to an inventive and highly productive military-industrial complex. Appeasement was more a reflection of the will of the right than the left, and the country then went to war not under the anti-Hitler left but an essentially Establishment regime. There was wide expectation that the country, utilizing its traditional military approach, would see the war through to victory with its allies (Britain was never alone even after the fall of France); the country was never close to being broken by the U-boat attacks; and its economic-military capacity came out of the war less diminished in absolute terms than in relative ones, thanks to the extraordinary American growth of the 1930s.

Indeed, to the extent that Edgerton sets the record straight on the "While Britain Slept" image of a country that could have avoided the war but for its failure to rearm; on the actuality and weight of a military-industrial complex in British political life; on just who was really promoting appeasement; on the consistency in Britain's pattern of war-making; and on the real limits of the Left's influence and accomplishments in this era; he does the historiography a considerable service. To a lesser extent, one may say the same for his putting the British experience during the war into perspective. (Others had it far, far worse.)

However, his study also has its weaknesses. His characterization of the strategic situation is particularly flawed. While he compares how Britain and Germany stacked up against one another, in the 1930s that was far from the only relevant balance of power. For British planners, the concern was Britain and Germany in Europe, Britain and Italy in the Mediterranean, and Japan in the Far East, with the nightmare scenario Britain having to fight all three at the same time--as was actually the case by December 1941.1

Still more significant is his often superficial treatment of the macroeconomic picture, and the way that side of the situation evolved during the conflict. Edgerton seems to me correct about the country's large, sophisticated military-industrial complex--but slights the important matter of the rest of the industrial base. As it happens, he actually makes favorable comparison of the arms factories with the coal mines, cotton mills and civilian shipyards that he himself notes had received little investment since the early 1920s--but avoids drawing the conclusion about that lack of investment. Equally, he shows very little sense of nuance in discussing the country's newer, high-tech sectors, taking no interest in whether Britain's firms were world-class companies, or mere second-stringers unable to compete outside a protected home and sterling area market; or in whether the British divisions of foreign firms were low-end assembly units putting together imported parts as a way of circumventing the tariff barrier, or genuine high-end production capacity testifying to and developing a broader and deeper British know-how.

Still less does the book consider what any or all of these facts meant for even the narrower question of the military-industrial complex, let alone the larger matter of financing the war. After all, a robust defense sector still needs metal products, machine tools and other goods not strictly in its line, so that a really first-class military-industrial capacity requires a first-class industrial capacity generally--and Britain's position was problematic there. To a very great degree the steel and the machine tools it needed to make its weapons had to be imported from the United States (and even the Germans), while many of the components of successes like the Supermarine Spitfire had to be imported also (the plane not just made with American tools, but packing American instruments and machine guns). And as it was ultimately the civilian economy that had to pay for such efforts, all of this meant that, coming on top of an already deteriorating export position, trade balance, balance of payments, the country's economy was under serious strain before the war even began (less severe than Germany's, but an unsustainable strain all the same).

The war, of course, made matters much worse--a fact again given short shrift in the book. The section of Chapter Three considering the matter is headed "SAVED BY THE U.S.A.?" with the question mark conspicuous and significant. He emphasizes that Britain paid its way up to that point in dollars and declares that "it was buying from the USA without heed to its longer-term economic needs . . . because it knew from the end of 1940 that U.S.-financed help was likely to be available for the future." However, there is no way to take this as anything but a slighting of the hard fact of British bankruptcy a year and a half into the war, when the country's hard currency reserve was down to nearly nothing while the conflict was far from won. There is also no conclusion other than that had it not been for America's turning on the money spigot, Britain would have had to make peace with the Axis powers in early 1941, a peace that would have left the Nazis dominant in Western Europe, and free to turn east, after which the Soviet Union really would have been alone. Still less does this refute the equally hard fact of Britain's weakened financial condition after the war, when it was dependent on massive U.S. backing (a billion-pound loan in 1946, support for its currency and outright bail-outs for decades, techno-military transfers like the nuclear submarine and Polaris missile), which at times came at high cost (like the painful post-war devaluation of sterling), despite which it consistently fell short of realizing its governments' schemes for reinvigorating its economy, expanding its welfare state or retaining its world political and military role.

All of this testified to a very real weakness on Britain's part, specifically the deficiency of its manufacturing sector broadly considered, with all its economic consequences--not least for its ability to bear the stresses of war, which it did not only less well than the larger U.S., but less well than it had itself done in the World War I era. The result is that what Edgerton really does in this part of the discussion is remind the reader that Britain had strengths as well as weaknesses, successes as well as failures, in its economic life in the interwar era and its economic effort during the war, rather than integrate the two into a more satisfying whole. As a result Britain's War Machine works less well as a new history than as a corrective to some of the conventional wisdom--needed as that may be.

1. This is, of course, the more important because of the implications of Japan's military victories for the endurance of Britain's south and east Asian empire--and that, in turn, for its standing as a world power.

Notes on Kirby Buckets Warped

I'm just as surprised as anyone else to be writing about this show.

I was scarcely aware of the existence of Kirby Buckets until just a few weeks ago, and have as yet seen very little of it prior to the recent third season, which caught my attention because of how unusual it has been for broadcast television--the sharp shift of the show in genre and structure (the episodic tween sitcom about an aspiring cartoonist becoming a 13-episode story of interdimensional hopping, heavy on science fiction parody), and the unique airing schedule (the 13 episodes airing on 13 weekday mornings over three weeks).

Alas, the writing rarely rises above the level of the mildly amusing. In fact, the heavy reliance on gross-out humor reflects a certain laziness in its pursuit of its target demographic. All the same, the makers of the show actually do serve up an arc, rather than just tease the audience with the prospect of one--and manage to have some fun with the science fiction clichés they evoke. (Of course there's a post-apocalyptic dimension where the characters meet the Mad Max versions of the people they know; here they come complete with Australian accents.)

Additionally, the cast is a pleasant surprise, accomplishing a lot even when they have just a little to work with, three of its members especially. Suzi Barrett shows a good deal of comedic flair in the role of Kirby's mom, getting her fair share of laughs. Olivia Stuck somehow makes Kirby's sister-from-hell Dawn sympathetic (or at least, pitiable). And of course, improv master, veteran voice actor and "Simlish" cocreator Stephen Kearin's Principal Mitchell is a memorable mass of eccentricities (almost) sufficient to singlehandedly make the hackiest of shows watchable.

And so it went down to the finale (aired Thursday), which, to the creative team's credit, actually wrapped up a storyline, and in the process, offered the sense of a bigger tale ending as Kirby, Mitchell, Dawn and the rest closed one chapter in their lives and began another. However, whether all this will be enough to lead to a fourth season is a different matter. The show, poorly rated to begin with (one reason for the change, I suspect), has seen its viewership plunge to abysmal levels--under 200,000 if I read the numbers right, rather worse than the shows Disney XD so recently axed, Lab Rats: Elite Force and Gamer's Guide to Pretty Much Everything. Whether it will survive that will depend, I suppose, on whether viewership picks up during the reruns this weekend (and further airings of the show), whether the executives feel bothered by the way they are running out of live-action shows to put on the air, or whether the creators can sell them on another sharp change of course. And maybe all of them together.

SFTV for the Younger Crowd: A Few Thoughts

When writing my article "The Golden Age of Science Fiction Television: Looking Back at SFTV of the Long 1990s" (or revising it for reissue in my book After the New Wave) I specifically concentrated on North American-produced live-action television oriented toward adult audiences, leaving out animation, and children-oriented programming of both the animated and live-action varieties. This was partly because I'd seen very little of it since elementary school (certainly nowhere near enough to even think about writing anything comprehensive), but also because I didn't think that the scope I set for the article was overly narrow, and I didn't think that broadening it would have changed the picture it presented very much.

I'm still not sure that it would have. Given how long and how heavily science fiction's tropes have been mined, and how dependent the genre has become on Baroque handling of old concepts and inside jokes, TV aimed at a young audience seems an unlikely place for fresh ideas.

Still, in hindsight it does seem fair to note that fiction supposedly for children often isn't really that--fiction genuinely about children and their experiences, or at least engaging the way they really see the world. Rather it's just the same stuff written for the regular, "adult" audience, sanitized sufficiently to pass a more stringent censor. This isn't altogether new--the early versions of the fairy tales children grow up with were not the stuff of Disney films. Still, this seems to have become more conspicuous in recent years, partly as pop culture has gone increasingly metafictional. I remember, for example, an episode of Animaniacs that was an extended parody of Apocalypse Now (and, more obscure still, of the making-of documentary Hearts of Darkness). Now on the Disney Channel they have a sitcom about a summer camp run by a family with the last name Swearengen, where one of the campers quotes Omar from The Wire ("You come at the king . . .") in an episode that is basically Scarface with candy and video games instead of the original product.

And as all this might suggest, I have noticed some relatively sophisticated genre content here and there. An obvious instance is Jimmy Neutron. That show can seem like merely another iteration of the century-and-a-half-old tradition of scrawny boy genius inventors.1 Still, it is notable for its consciously retro handling of that retro idea--Jimmy growing up in a household where everything from mom's hairdo to the design of the family television set looks like it came out of the '50s, in a town called, of all things, Retroville (complete with a Clark Gable lookalike for mayor).2 Steampunk has remained a genre of cult rather than real mass appeal--but Harvey Beaks included an eccentric steampunk enthusiast in its cast of characters, and served up an hour-long musical steampunk special.

Something of this has been evident in live-action programming as well. Television has been awash in superheroes for years--but Disney XD offered a more original take than most in developing a show about a hospital where they go when they need medical treatment and the two kids who stumble into it in Mighty Med. More recently the same channel's Kirby Buckets, for its third season, shifted from sitcom wackiness set (mostly) in the mundane, regular world, to a season-long, dimension-hopping adventure through alternate timelines.3 A good deal more tightly written than most of the season-length stories we get (arcs on American TV remain mostly about stringing audiences along), it actually is a story with a real beginning, middle and end, and the experiment has been all the more striking for its 13 episodes being aired in a mere three weeks (an exceptionally rare move for broadcast television).

Whatever else one may say about it, very little of the comparable stuff pitched at grown-ups in prime time as of late has been as knowing or risky or innovative or audacious. (And when these shows are at their best, rarely as much fun.)

1. The original "Edisonade" was Edward Sylvester Ellis' 1868 dime novel The Huge Hunter; or The Steam Man of the Prairies.
2. While Ellis celebrated the young, lone amateur inventor, the red-brick universities, Land Grant colleges and Massachusetts Institute of Technology were promoting national science policies by graduating their first candidates, while scarcely a decade later Thomas Edison established the world's first big corporate lab.

Making Sense of B.O.

We hear all the time about the economics of filmmaking--that this movie cost this much, that it made that much. But a lot of the talk strikes me as far, far removed from even the best known realities of the business.

Movies cost far more than their production budgets--and their backers make less than their grosses at the box office. For one thing, marketing and promotion budgets are much less often announced than production budgets, but are often just as large. The result is that the producers of a $150 million movie may well have spent twice that much once distribution costs are taken into account.

Meanwhile, studios get maybe forty to fifty percent of the gross of a film.

This means that instead of a $150 million movie that grosses $200 million making a solid $50 million profit, the film may actually have left its backers $200 million in the red--because they spent $300 million, and got back only $100 million, or even less.
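The back-of-the-envelope arithmetic above can be put in a few lines of code. This is only a sketch of the rough assumptions already stated--marketing spend about equal to the production budget, and a studio share of roughly half the gross--not a model of actual studio accounting, and the function name and parameters are my own.

```python
def studio_net(production_budget, gross, marketing_ratio=1.0, studio_share=0.5):
    """Rough estimate of a studio's profit (or loss) on a film, in millions.

    marketing_ratio: marketing/promotion spend as a fraction of the
        production budget (assumed here to be roughly equal to it)
    studio_share: fraction of the box-office gross returned to the studio
        (assumed here to be about half)
    """
    total_cost = production_budget * (1 + marketing_ratio)  # e.g. 150 -> 300
    revenue = gross * studio_share                          # e.g. 200 -> 100
    return revenue - total_cost

# The example from the text: a $150 million movie grossing $200 million
print(studio_net(150, 200))  # -200.0, i.e. $200 million in the red
```

By the same arithmetic, such a film would need to gross about $600 million at the box office just to break even--which is why the other revenue streams discussed below matter so much.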

Yet, it is worth remembering that film production and other costs are often offset from other sources--like product placement, and even government subsidy. (The makers of Thor 2 got a tax rebate worth some $30 million from the British government.) It is worth remembering, too, that domestic and foreign box office are not always clearly distinguished by commentators, and the foreign markets ever more important, particularly as key large markets (e.g. China) have become more affluent. (Warcraft made $47 million in the U.S., almost ten times as much globally--with about half the total gross earned in China alone.) At the same time, the box office is not the only source of revenue--there being video and TV rights, and merchandising, which can add mightily to the total, and so turn a flop into a moneymaker much more often than might be imagined, especially if one thinks in longer than "this-quarter, next-quarter" terms. (Hudson Hawk, the most notorious flop of the summer of '91, managed a profit five years after its release--but the point is that it did manage it.)

And all that is without getting into the creative bookkeeping in which studios have been known to engage regarding the costs--which reminds all involved to take their cut of the gross, not the net, if they can. (Paramount's claims about the profitability of Forrest Gump were quite the story at the time.)

All this is a reminder that, like many another industry, for all we hear about it, filmmaking is less transparent than it appears. It is also a reminder not to expect a streak of underperforming films--of sequels nobody asked for and which it turned out the public really didn't want, like 2016's rather long streak of them after the hit-packed spring (X-Men, Alice, the Turtles, Independence Day, etc.)--to have too much effect on how Hollywood does things. The sheer variety of ways of defraying costs, and of revenue streams to tap, as well as the lack of any other business model Hollywood might find remotely attractive given its bottom-line priorities and the reality of a film business that makes the high-concept blockbuster more than just an exercise in business cynicism, are likely to keep it at the present game for a long time to come.

Monday, January 9, 2017

On The Word "Deserve"

Like "lifestyle" the word "deserve" has come to be grossly misused, overused and abused, replacing many another word and concept in the process – and consequently, diminishing the average person's already vanishingly small ability to think.

The word "deserve" properly refers to those things that have actually been earned (like an award recognizing particular accomplishments). To say one deserves something is to make an indisputable moral claim on their behalf. However, the term is being used with mind-numbing regularity in place of words like "want," and "need," and "right" (as in "have a right to"). Certainly we all have wants. (You might want a private jet.) We all have needs. (You need food and oxygen to live.) We all have rights of varying kinds. (Free speech is an inalienable human right, while someone might have a right to the inheritance of a particular property or the award of damages following some injury, given the laws prevailing at a particular time and place.)

The upshot of this is that one may want, need or have a right to the things they deserve – but they do not necessarily deserve the things they want, need or claim as a right. Yet, there seems an increasing insistence on dressing up want, need and right in the moralistic language of deserts. Take, for instance, the immediate cause of this post, which was my hearing an anchor on The Weather Channel say last winter that ski resorts in a particular region were finally getting the snow they "deserve." The owners of ski resorts want and need snow because it enables them to operate their establishments, and a particular resort owner may have done the things ordinarily seen as meriting business success, but it is nothing short of bizarre to say instead that resorts "deserve" snow.

I suppose this particular butchery of the English language contains something of the tendency to view every outcome in a person's life as a matter of their own, personal morality – an idea very much in line with the self-help/religious ideas that have long enjoyed wide currency in the United States. There is, too, what Thorstein Veblen in his classic The Theory of the Leisure Class called the "habit of invidious distinction," because where there are the deserving there are also the undeserving.

It would seem that at the bottom of such things is a deeply conservative impulse to justify – and sanctify – everything as it is, not least the inequalities of wealth and the callousness toward the poor and disenfranchised that increasingly characterize American life. The CEOs of Fortune 500 companies are assumed to "deserve" their seven and eight figure compensation packages (even as they make disastrous decisions), while many vehemently deny that the people who perform the work without which their companies could not possibly remain going concerns "deserve" a living wage. Some people "deserve" megayachts, while others do not "deserve" health care – or even food and shelter.

In other words, the mind-boggling greed of some is not merely excused but justified, celebrated, exalted on the grounds of what they "deserve," while the claims of others to having their most basic physical needs met (and many would say, their most basic rights as human beings recognized) are dismissed on the very same grounds, which happens to be not the content of one's character, but the content of one's bank account, personal worth equated with "net worth." One person is "worth" fifty billion dollars and another "worth" nothing – in effect, worthless.

All of this is a revolting tissue of absurdities which only confuses and cheapens the idea of morality itself. But I don't think it's going away any time soon. If anything, the way the political winds are blowing, it seems likely the tendency will only get stronger.

Wednesday, December 21, 2016

Slow Burner and '50s Britain

The plot of William Haggard's Slow Burner (reviewed here) centers on a British program to develop civil nuclear power in the hopes of scoring a major economic coup.

While nuclear energy has never stopped being topical, this idea was especially so at the time because in the '50s the British government really did bet heavily on its scientists achieving a breakthrough in civil nuclear power, both for the sake of cheap domestic power, and as a source of export income with which to achieve a healthy balance of payments (as a country dependent on massive food and energy imports, while its manufacturing and financial position slipped).1 The object of those hopes was the Magnox reactor, which never justified such a confidence (the world generally preferred the American pressurized water reactor, today still the mainstay of civil nuclear power), but the expectations do come to pass in Slow Burner, specifically in the titular, very different technology. A nuclear power source compact enough to be installed in a suburban attic and packed up and driven about in the boot of a car, it put Britain twenty years ahead of the rest of the world in that field--the only way, the book says, that the British economy was not twenty years behind it.

Moreover, the implications of this for Britain's economic life are repeatedly underlined within the story, so much so that the characters worry that the thief might be running West as much as East, and seem more concerned about the implications of losing perhaps their only prospect for a healthy balance of payments than they are about the Soviets upsetting the Cold War balance of power. Indeed, it is Britain's economic predicament that Sir Jeremy Bates has in mind ("Fifty or sixty million . . . and food, at the level of subsistence, for perhaps forty"; manufacturing plant "a generation out of date") when he thinks to himself that a "man who could consider going abroad, selling his knowledge, was worse than a danger, worse than an apostate" (114).

The point comes up in smaller ways, too--a burglar enlisted by Colonel Russell's people for an illegal black bag job told that if things go badly he could be resettled where he likes in the Sterling Area.

Sterling Area? he wonders, surprised by the qualification.

Yes, he's told, because just now dollars are hard to come by.

Next to this any menace from the Soviets in the book appears vague, shadowy.

Unsurprisingly, a good part of the book's interest for me was in its quality of being a time capsule from '50s Britain, capable of surprising in such ways.

1. While it doesn't say much about the government's specifically nuclear ambitions, David Edgerton's Warfare State nonetheless has a good deal to say about British policy regarding R & D in these years, and the view of some of its critics that it was too devoted to big-ticket prestige projects (of which the Concorde supersonic transport was another example).

Tuesday, December 20, 2016

An Un-Bond: William Haggard's Colonel Charles Russell

I remember that when reading Ian Fleming's James Bond novels I was struck by how much the character and his outlook, his associated image of glamour, the basis of his existence as a globe-trotting British agent (policing the Empire in its last days, playing junior but more skillful partner to the Americans in the Cold War contest) was very much a product of a particular period that has since passed. This has been so much the case that novelists working in the series, less able to rely on brand name and flashy filmmaking than their cinematic counterparts, have in recent years so often seen no option but to go back to those days--Sebastian Faulks and William Boyd taking Bond back to the '60s, Anthony Horowitz taking Bond all the way back to the '50s (or in the case of Jeffrey Deaver, starting with the character again from scratch in the present day).

Still, Fleming's creation has held up considerably better than Haggard's novels and their longtime protagonist, Colonel Charles Russell, head of the imaginary Special Executive.

Russell, like Bond, is a formidable ex-military counterintelligence operative; like Bond, handsome, suave, urbane; like Bond a man who enjoys good food, drink and the other luxuries. Still, his version of luxury is different. Instead of the dated image of jet air travel and casinos, there is the even more dated image of the club, the country house, enjoyed by this old officer with his regimental moustache and pipe. Bond might brood about the taxi driver's manner, but Russell never has occasion to, appearing to exist untouched by the changes of the world surrounding him.

Indeed, at times Haggard's characters can almost seem caricatures (at least, to an American rather sensitized to "stage Englishmen" by terrible Hollywood writing). This is particularly the case when Haggard writes figures like Sir Jeremy Bates in Slow Burner, with the following line exemplary: "it was barely four. It would be unheard of for Sir Jeremy Bates to leave his office at tea-time" (116); or better still, when Bates comes home after a drunk to a valet who offers no question or comment:
Confound and damn the fellow! His lack of interest was an insult. A gentleman's gentleman--the convention had survived, the convention of the English manservant, secret, uninquiring, impersonal (111-112).
It is much the same with William Nichol, who after nearly getting run over by a would-be assassin in a big truck, and losing his hat, takes a cab because he "was not a man to walk hatless in London" (127).

The contrast is even more evident in the books' social attitudes, the bigotry and snobbery fiercer and more bluntly expressed in Haggard's. While Fleming wrote many a foreign villain, he generally eschewed depicting treachery by Britons. The French unions may have been a Soviet fifth column in Casino Royale--but the British unions (even the Jamaican unions), however much Fleming disliked organized labor, were in the end no traitors to the nation. Many a scientist can be counted among Fleming's villains--but scientists as such were not presented as a perverse lot.

Not so with Haggard, who condemns scientists as a class as untrustworthy, their affinity for reason making them a bunch of damned crypto-Communists; and when the traitor is exposed, the scientist is indeed a man of "the far, far Left," while also being "no gentleman," not solely as a matter of his less than upper-crust social origin but of his "character" in his dealings with his wife.1 (And that is not only all we know about him but, it seems, in Haggard's rather old-fashioned view, all that we need to know.)

All that makes it seem less surprising that there was never a Colonel Russell film franchise--and why, even by the standards of series that never landed successful movie franchises to keep them on people's minds, the books have slipped into comparative obscurity.

1. "Take a clever boy . . . and put him into a laboratory for the next seven or eight years. What emerged inevitably was a materialist . . . a man who would assume without question that the methods of science could be applied to human societies" (58)--a prospect the narrator clearly regarded with horror.
