The names of Fyodor Dostoyevsky and Theodore Dreiser do not seem to be spoken together very often, and it is easy to understand why given the immense differences of period, country, political and social perspective, and prose style--Dostoyevsky a nineteenth century Russian who, for most of his career, was Christian (specifically, Orthodox Christian) and Slavophile and inclined to a particular blend of romantic realism, Dreiser a twentieth century American of leftist, eventually Communist, politics who as a storyteller favored Zolaesque naturalism. Still, I often find myself comparing what seem to be generally accounted their greatest works, The Karamazov Brothers (1880) and An American Tragedy (1925).
In those two books those two authors each take a murder case in a small provincial town and ultimately produce from it a thousand-page epic about everything. I can think of nothing like them since--indeed, that pop cultural supernova of the '90s, John Berendt's "nonfiction novel" Midnight in the Garden of Good and Evil, only negatively demonstrates the magnitude of those other authors' achievement. Characteristic of a decade that reveled in "quirkiness" for its own sake (we see it in shows like Northern Exposure and Picket Fences, and in the indie films of the time), Berendt's book is in the end small-time, shallow, unilluminating (in a word, postmodern) stuff entirely unworthy of its grandiose title.
By contrast "Midnight in the Garden of Good and Evil" would fit as the title of Dostoyevsky's book perfectly, his
Karamazov Brothers seeing in that one provincial murder the struggle over the future of Russia, humanity and even the universe. (Don't forget--Dostoyevsky was personally acquainted with, and apparently influenced by, Nikolai Fedorov, and yes, it mattered in this book.)
But, as one might guess, it would not fit Dreiser's, for whom gardens of good and evil would not really be a relevant, helpful concept--a reminder of the compelling differences as well as the similarities. The social science-minded, naturalistic perspective of Dreiser had him bringing the entirety of America's social reality (class and work and economics and prejudice and sexual attitudes and the urban-rural divide, the realities and the illusions, everything) into the murder at the heart of An American Tragedy--his book about everything reflecting a different understanding of what "everything" happened to consist of.
However, the differences hardly end with that distinction between the mystical religious author and the materialist socialist, one that I find myself thinking about often being the casts of characters, and in particular the position of the central figure in relation to the rest. In The Karamazov Brothers the tragedy is distinctly a family tragedy, in which four brothers (of whom Alyosha is presented as the protagonist), whose lives are closely interconnected with those of each other and their father, are all implicated in the murder of the father in some way, if only through inaction, only in thought. By contrast Dreiser's Clyde Griffiths is alone, adrift, throughout the story--alienated from father and mother and siblings in the early part of the book, forced to leave his family behind entirely after involvement in a deadly hit-and-run, and then, after a long-shot encounter with his rich uncle in Chicago and going to Lycurgus, New York to try his chances with him, kept at more than arm's length by that uncle and his family even as they bar him from association with anyone else--with that alienation and isolation and loneliness at every turn playing its part in the disaster that ultimately befalls all concerned, the infamous atomization of American life ("emotional destitutes, as poor in their family connections as Afghans or Sudanese are in money," none other than Edward Luttwak was to write) making it entirely fitting that such atomization plays so great a part in Clyde's story.
Wednesday, March 30, 2022
Are Moral Panics Less Common Than They Used to Be?
Recently revisiting the moral panic over the release of Joker, it struck me that such panics seem less common than they used to be--in comparison with the 1990s, for example.
Where explanation of the matter is concerned my guess would be that this is because of the fragmentation and polarization of the public--or at least, because that fragmentation and polarization have become much more difficult to overlook. It seems to me fairly obvious that any really large group of people--and certainly a country of hundreds of millions--can only in the hyperbole of court poets singing for their suppers be imagined as at all usefully generalizable as collectively attentive to the same thing in the same moment and reacting to it in the same, conventional way.
Did we all attend to, for example, the grotesque stupidities of the Tonya Harding-Nancy Kerrigan melodrama with bated breath in the 1990s? I can most assuredly tell you that many of us were more interested in just about anything else you may care to name (and appalled when the venal idiots who run the mainstream news happily let it crowd other, more important events out of the headlines in their gleeful attentiveness to it).
But now in an age of digital media, in which the most extreme or idiosyncratic views have some chance to enter the public's consciousness, the pretense looks that much more foolish.
The result is that even the court poets (not an impressive bunch--court poets generally aren't) find it far more difficult to sing songs of a nation united in alarums and excursions, especially over the stupid things that usually inspire such. Rather the panics, which I suspect are as frequent as ever in this age of ever-escalating cyber-mainlined fear, shock and outrage, and perhaps even more so, simply happen to be the panic of a particular portion of society, not the whole.
Monday, March 28, 2022
Of Isaac Asimov's Foundation Saga and H.G. Wells' The Shape of Things to Come: The Figure of Hari Seldon
In a prior post I discussed some of the parallels and differences between Isaac Asimov's Foundation novels and H.G. Wells' The Shape of Things to Come--not least both tales envisioning the restoration of civilization by a science- and engineering-minded elite, with a significant difference being Asimov's reimagining of the Medieval Church as the Empire-restoring Foundation, whereas Wells' more radical book discarded religious pretenses and emphasized a new World State which would move past the weaknesses of the pre-Fall order.
Another significant difference would seem to be the sheer elitism of the conception, with the Foundation the mover of galactic events, and above and in back of it the psychohistorian Hari Seldon, who saw and planned it all out mathematically in a manner that can seem more divine than human, the more so for his odd technological deification--the opening of the Vault containing his recorded messages in moments of crisis having the quality of religious revelation intended to speed the fulfillment of a "Divine Plan." (Indeed, in the second book, Foundation and Empire, Asimov has a character remark on the fact of "a whole culture brought up to a blind, blubbering belief that a folk hero of the past has everything all planned out and is taking care of every little piece of their . . . lives," a "thought-pattern [that] evoked religious characteristics . . . strong faith reactions.")
I can think of nothing comparable to Seldon in Wells, and certainly not in The Shape of Things to Come. Certainly Wells is not without respect for individual achievement, and the movement establishing the World State in his saga has its leaders, among whom one can find personal greatness. (Indeed, he is not without his elitist tendencies--it was, in fact, a significant element in his complex and increasingly unfriendly attitude toward Marxism and its reliance on the working class, of which he did not think much, a fact which is a significant theme of his Experiment in Autobiography.) Yet the protagonists of Shape, taken as individuals, are in the end humans--more intelligent than others, better educated, more responsible, but still just humans and not gods--who faced facts that had been increasingly apparent to a great many people for a very long time, and acted accordingly. Unsurprisingly the trend of history, and within it the collective efforts of the Control, tower above any one figure. (Indeed, writing this post I easily enough recalled the Air and Sea Control, but I had to look up the names of specific characters in the book in order to recall them.)
Altogether I suspect that the conception of Seldon owes far less to Wells than to American science fiction's taste for scientific supermen who can singlehandedly bend the course of history, exceedingly evident in the Edisonades of the nineteenth century (in the American-style "invasion stories," for example), with which idea John Campbell seemed obsessed--not merely as a teller and editor of stories, but in actual life, as he went from one movement to another in pursuit of something that would render the vision real (Scientology, psi, etc.).
Today, looking at the worship of "tech billionaires" as saviors, and the way tales of superheroes fill our screens, it would seem that this particular legacy of older American science fiction remains very much with us now.
Of Isaac Asimov's Foundation Saga and H.G. Wells' The Shape of Things to Come
Looking back at the Golden Age of science fiction it has seemed to me that H.G. Wells was a hugely important influence. Certainly John Campbell's derivation of inspiration from Wells' The Time Machine in the early story "Twilight" is much discussed--while reading Campbell's discussion of his own ideas about the genre, and how others have interpreted them, has persuaded me that he largely adapted Wells' ideas to his own inclinations (a case I make in Chapter Two of Cyberpunk, Steampunk and Wizardry).
Perhaps unsurprisingly I found myself often thinking of Wells, and in particular what I have tended to view as the summary work capping off his career, The Shape of Things to Come, when recently reading Isaac Asimov's Foundation trilogy, with which Wells, and in particular that then-recent book--scarcely a decade old when Asimov undertook the writing of the Foundation saga--has been less associated. (Instead it is Edward Gibbon's The History of the Decline and Fall of the Roman Empire of which Asimov was to speak.)
In both of those tales civilization fell in a manner all too predictable to intelligent observers. And in both a small elite left over from "Before the Fall," with a greater understanding of social reality and a command of critical technical skills, built an organization that sustained the old knowledge and connections, and through its control of essential technology exerted a civilizing influence (the Air and Sea Control in Wells), not least because that control let it consistently outmaneuver the backward, violent elements that attempted to oppose or exploit it for their selfish ends (which maneuvering made the reconstruction of order a far less bloody matter than it would otherwise have been). There is also what both those premises bespeak, a hope for rational control of the course of human life and human history, with such control exercised by scientists and engineers who can apply to society the same kind of foresight and control they apply to physical forces, not least by leveraging their various forms of expertise (technological as well as social) into political power.
There even seems some significant resemblance in the narrative structure, if not with Wells' book, then the cinematic adaptation of it, Things to Come. The 1936 movie took Wells' work (which had been written as a precognitive glimpse of an advanced history textbook from the twenty-second century, and read like one) and dramatized it by way of a series of illustrative and allegorical vignettes from the various stages of the history it covered--with the first presenting the experiences of a flier in the world war that wrecks civilization, the second a member of the Control (here known as "Wings Over the World") confronting and defeating a grubby warlord, and then, following a montage depicting the rebuilding of the world on clean, modern lines, the controversy that ensues in the new World State over a planned moon mission. Thus does Asimov's Foundation likewise proceed, not as a history textbook, but through such vignettes showing the work of Foundation founder Hari Seldon, the Foundation's first confrontation with the space age barbarians of Anacreon after the Empire's effective collapse a half century after the first vignette, etc., etc. . . . (entirely befitting, I think, Asimov's view of science fiction as a genre distinguished by its concern for humanity's collective destiny over the destiny of any one individual, a sentiment with which, I think, Wells would have been sympathetic).
Yet along with these impressive similarities I was struck by at least one significant difference reflecting the intellectual inspirations and the politics to which the two writers were permitted to give expression. Asimov reimagined the fall of Rome and the Dark and Middle Ages on a galactic scale, with his Foundation early on appearing a variation on the Catholic Church, using its monopoly of old knowledge and its religious authority to bend the barbarians to its will, with an eye to the restoration of Empire, all as Asimov stressed the specifically intellectual failings of late Galactic Empire culture ("the damming of curiosity," the decline of the scientific ethos, with which Mayor Hardin charged the Foundation's scientists themselves). Certainly the Foundation does evolve, taking on new forms as the galactic situation changes in ways bespeaking a materialist historical sense (with its Church-like aspect giving way to a commercial empire when the "money power" becomes more important), and by the second book there is even an expectation that the new Empire will somehow be better than the version which preceded it--but of that later, better state we are given no details. By contrast Wells, who was more openly radical than any writer had the opportunity to be in the pages of John Campbell's Astounding, and for that matter any really major science fiction writer in Red Scare-era Cold War America, described not a process of pseudo-Roman decline and Medieval restoration of civilization and its benefits, but explicitly offered a radical critique of the social system that had gone before--directed at capitalism, the nation-state, and the traditional culture of religiosity, nationalism and the other elements accompanying them, as out of step with the practical governance of human affairs, as demonstrated when it ineluctably led to economic slump, war and civilizational collapse. The point was therefore not to put what had fallen apart back together, even in tweaked form, but to fill the vacuum with the kind of government the world had really needed all along, and suffered for lacking (explicitly depicted as a scientific, socialist World State clearly, profoundly different from what had stood before).
Still, even allowing for the difference that can itself seem to sum up the tight boundaries within which American genre science fiction developed ("stories based 10,000 years in the future where all the sciences have progressed fabulously except for one," as Mack Reynolds protested in Who Killed Science Fiction?, specifically drawing attention to the feudalism of the Foundation stories), there remained something of the Wellsian regard for science, reason and their potential to order human affairs in a better way--a regard that has become much, much less evident in science fiction since, as seen in its proneness to treat the apocalypse not as a challenge to the guardians of civilization, but as an occasion to wallow in reptile-brained misanthropy, whether amid the survivalism of the immediate aftermath of civilization's downfall, or the dystopias that, as in The Hunger Games, rose from the ashes to bring the survivors new nightmares.
Saturday, March 26, 2022
The Rules of Post-Apocalyptic Fiction?
I recall some years ago encountering a page on some web site that purported to offer advice to writers of post-apocalyptic fiction. The purveyor of said advice flatly told the reader that the characters in fiction of this type should have no time to think of anything but immediate physical survival among the ruins.
This piece of "advice" annoyed me greatly. Admittedly I am generally suspicious of one-size-fits-all advice broadcast to the world at large as, even when actually, technically, correct (as it is not much of the time), overwhelmingly likely to either be so obvious that no one really needs to be told that advice (like "Don't take a nap in the middle of a highway at rush hour"), or be completely unusable by the vast majority of people in their actual life (like "If you care to abide by the proper protocol, do not bow from the waist when you meet the Queen of England in person for the first time").
I am particularly suspicious of the sort of writing "instruction" which, in an area that (in contrast with, for example, the forms of the business letter, or the conventions of academic papers) is supposed to be all about creativity and self-expression, barrages its students with "shoulds," and still worse, "musts." Certainly fiction tends to adhere to certain principles, with particular techniques achieving particular effects, and we can discuss them in rational ways in at least some degree, and it can certainly be useful to be familiar with them on that level. There is undeniably such a thing as craft knowledge that one will (probably) have to get one way or the other in order to produce something worthwhile. But even there one ought to be careful of "should" and "must," especially at such a basic narrative level as the aforementioned author discussed. This is not least because the fact that particular techniques achieve particular effects does not necessarily settle for us which effects we should go for, especially if we are not playing the usually self-defeating game of chasing after "What's hot now" (or the still worse game Oscar Wilde described of "degrad[ing] the classics into authorities," and using them "as a means for checking the progress of art"--of which cultural crime I only hesitate to accuse the dubious instructor precisely because they failed to show any knowledge of the classics, post-apocalyptic or otherwise, whatsoever).
Making matters worse, the author here seemed to me demonstrably wrong, and in ways all too telling of a present-minded shallowness. After all, there is much more to post-apocalyptic fiction than reptile-minded survivalism. There is also the struggle to move past survivalism--to save what remains of civilization, or rebuild it, perhaps along new and better lines. And this has seemed to me the far more compelling kind of story, as I have recently been reminded when picking up once more Isaac Asimov's classic, Foundation, most assuredly a piece of post-apocalyptic fiction in its recounting the struggle to abbreviate the Dark Age following civilization's collapse and relapse into barbarism. It is, of course, hardly a fashionable work these days, but that is as much, or more, because of its considerable virtues (its scope, its interest in society, its rationalism) as any failings it may supposedly have--virtues which have me rethinking yet again my old appraisal of the book, such that I expect I will have more to say about it and the original trilogy of which it was a part in the coming days.
This piece of "advice" annoyed me greatly. Admittedly I am generally suspicious of one-size-fits-all advice broadcast to the world at large as, even when actually, technically, correct (as it is not much of the time), overwhelmingly likely to either be so obvious that no one really needs to be told that advice (like "Don't take a nap in the middle of a highway at rush hour"), or be completely unusable by the vast majority of people in their actual life (like "If you care to abide by the proper protocol, do not bow from the waist when you meet the Queen of England in person for the first time").
I am particularly suspicious of the sort of writing "instruction" which in an area that (in contrast with, for example, the forms of writing a business letter, or the conventions of academic papers) is supposed to be all about creativity and self-expression, barrages the students with "shoulds," and still worse, "musts." Certainly fiction tends to adhere to certain principles, with particular techniques achieving particular effects, and we can discuss them in rational ways in at least some degree, and it can certainly be useful to be familiar with them on that level. There is undeniably such a thing as craft knowledge that one will (probably) have to get one way or the other in order to produce something worthwhile. But even there one ought to be careful of "should" and "must," especially at such a basic narrative level as the aforementioned author discussed. This not least because of the fact that particular techniques achieve particular effects does not necessarily settle for us which effects we should go far, especially if we are not playing the usually self-defeating game of chasing after "What's hot now" (or the still worse game Oscar Wilde described of "degrad[ing] the classics into authorities," and using them "as a means for checking the progress of art"--of which cultural crime I only hesitate to accuse the dubious instructor precisely because they failed to show any knowledge of the classics, post-apocalyptic or otherwise, whatsoever).
Making matters worse the author here seemed to me demonstrably wrong, and in ways all too telling of a present-minded shallowness. After all, there is much more to post-apocalyptic fiction than reptile-minded survivalism. There is also the struggle to move past survivalism--to save what remains of civlization, or rebuild it, perhaps along new and better lines. And this has seemed to me the far more compelling kind of story, as I have recently been reminded when picking up once more Isaac Asimov's classic, Foundation, most assuredly a piece of post-apocalyptic fiction in its recounting the struggle to abbreviate the Dark Age following civilization's collapse and relapse into barbarism. It is, of course, hardly a fashionable work these days, but that is as much, or more, because of its considerable virtues (its scope, its interest in society, its rationalism) as any failings it may supposedly have--virtues which have me rethinking yet again my old appraisal of the book, such that I expect I will have more to say about it and the original trilogy of which it was a part in the coming days.
The Small Screen Generation Gap: Why Old Folks Go for Comfort Food
It is an old pop cultural cliché that CBS is an older person's channel, apparently rooted in how that channel was king in the '70s and early '80s (home of Dallas and M*A*S*H and Norman Lear's sitcoms and much, much else), and then saw other channels pretty much eat its lunch in the following years. This gave the impression that people still watching CBS were overwhelmingly loyalists from its boom years, reaffirmed by the way such hits as it managed to have in the late '80s and '90s looked like shows for the old folks--Murder, She Wrote, Diagnosis Murder, Touched by an Angel (the kind of stuff that has since ended up staples of the Hallmark channels)--while the cool kids were watching FOX.
Now network TV's audience generally looks the way CBS' used to--one reason, it would seem, why the network that may well have catered best to older viewers is the current ratings champ.
Much of this generation gap may be a matter of simple habits--like, at the end of the day, when looking to relax, picking up the remote and turning on the TV and looking at a particular channel rather than going online and sifting through streaming offerings for something to watch (arguably a more choice-intensive process)--but it also reflects the difference in what the old and young, respectively, look for from TV. Older people grew up on TV-as-comfort-food, which on some level remains their expectation and good enough for them most of the time. This is all the more the case as they are more likely than younger folks to find richer content elsewhere--more likely to CRACK OPEN A BOOK, or just watch a movie that may not be a Big Dumb Blockbuster (such as they have so many of on, for instance, Turner Classic Movies)--and, again, prone to treat TV watching as distraction rather than vocation, as younger folks seem to do.
It may also be that these generational differences are not just a matter of growing up in different media eras, but of being at a different point in life. I suspect that young people, especially if they have had relatively comfortable, sheltered, constrained existences, are particularly attracted to material which is the exact opposite--the bigger, wider world, the sensational, the intense. After all, even where their lives are comfortable they don't have much freedom or power, and freedom and power are what they want to experience vicariously, while the combination of a certain age with a certain mentality means trying to show off, if only to oneself, just how tough and worldly and mature one is (when one really isn't). The edgelord and the grimdark have a certain appeal in that place.
It isn't quite the same for older folks, who may not be wholly averse to the bigger, wider world, the sensational, the intense, but probably mostly just want a break--the more so for their having learned to find entertainment in the familiar, the everyday. (Once I couldn't have imagined finding much pleasure in those fat "realist" nineteenth century novels, but a couple of decades on I was happily reading Balzac and Trollope.) Older persons have also moved past, and maybe become ironic toward, the fantasies of youth--not least those of freedom and power, having seen through them to the illusions they are. And certainly they have nothing to prove to themselves or anyone else about how tough and worldly and mature they are--the wiser of them having realized they aren't that much of any of those things, that very few of us if any ever get to be so, and that we'd be happier not going through what it takes to get there. ("Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world . . ." Neal Stephenson wrote in Snow Crash's most memorable line. Well, they're not twenty-five anymore and they're damned glad the "right circumstances" never came up, even if they really could have been so bad.) Edginess? Darkness? At this stage of things they have probably had more edgy experiences than they would like--had their fill of darkness, especially if they have learned just what a dark place that everyday world is, noticed and processed the things the young don't notice and haven't processed. And so they may be up for an adventure or a drama every now and then--but perhaps more often would prefer to have a laugh instead.
Reflections on Network TV as Comfort Food
Not long ago a TV critic who, like most of his ilk, worships at the altar of "prestige TV," reported eschewing it for a mere week and watching network TV instead as an experiment (the Nobel committee must be informed at once!) and then discussed his findings. One (for him, anyway) significant theme of those findings is that network TV tends toward "comfort food." Of course, I am not sure how much that can be said of network TV today as against before. He talked quite a bit about reality TV, which I have regarded as nasty stuff from the start, while sitcoms certainly seem to have got much meaner over the years--with the asshole behavior on Friends giving way to the still more thoroughly asshole behavior on How I Met Your Mother or The Office (which contributed much to the current, wildly excessive use of the word "cringe"). And when it isn't reality TV or sitcoms it is more often than not procedurals dealing with serious, life-and-death stuff--crime, medical emergencies, and the rest, in increasingly graphic fashion (CSI a long way from the old Quincy episodes).
Still, precisely because of the seriousness of the subject matter the procedurals seem especially worth mentioning. Even as the images got increasingly graphic they still tended to deal with ugly things in a way that saw order upheld by episode's end--making comfort food (or at least, what some were ready to accept as such) out of ingredients that looked unpromising at first glance. And it seems worth thinking about why this was so characteristic of TV. The most basic thing seems to me to be that, given the limited resources early TV had to work with, and the tightness of the censorship under which the makers of television had to work in the form's early days (you couldn't even show a pregnant woman on TV), comfort food was the medium's most plausible offering--a fact reinforced by the structure of the market. The competition was limited among a few outlets each going for mammoth shares of the market, while "prime time" was king--which is to say people sitting on a comfortable piece of furniture at the end of the day and staring at their screen like cavemen at the fire in their hearth.
Between the limited options, and sheer weariness--TV was what you watched when you didn't have the energy for anything else, like reading a book, or going out for a movie--people easily became habituated not to seeking out the scintillating, but to leaving on what was least distasteful, with the name of the game being to keep them from changing the channel at the end of each show, one show leading to another and another so that you had them through the night, week after week, year after year. Where the broader audience is concerned it's easier to get by, especially if your material isn't all that great, by not taking yourself too seriously. (Indeed, looking at old reruns of Magnum P.I., The A-Team, MacGyver, I am constantly reminded of just how much those action-adventure shows got by on their sense of humor.) Meanwhile the practical difficulty of accessing the content (if you didn't catch it when it was aired, that was pretty much it), and the importance of summer reruns and a lucrative syndication market, meant there was a heavy stress on "rewatch" value, on things people would be pleased, or at least willing, to see again and again. This, too, encouraged the making of material people could take easily in bite-sized pieces, again and again. (Think of how we watched certain episodes of, for instance, the original Star Trek, or The Simpsons, until even their minutiae became part of the common frame of reference--like a beard that wasn't there before denoting the evil parallel universe version of an individual, or the way just about every line of dialogue in the first eight seasons or so of The Simpsons seems to have its own Reddit page, its own memes.) All of that encouraged a lowest common denominator logic that, again, pushed producers toward offering up comfort food, and audiences toward taking it.
All of that has since changed. The resources available to producers expanded, the censorship to which they were subject by no means disappeared but became more relaxed in respects that were not insignificant, the competition intensified, and people started treating TV-watching as a serious cultural activity, the more so as it became so much more accessible--rather than something looked at only at the end of the day, it became something they watched on the smartphone on the way to work, or even at work when they should have been working. Meanwhile, their consumption of other media changed. Certainly, no matter how much some in the press deny it, people read less, and what they read is, well, not what it once was. (I still can't get over the fact that John le Carré was one of the biggest bestsellers around back in the '80s. Today even Robert Ludlum is likely too tough, as indicated by how a "young adult" version of Dan Brown is thought to be called for.) At the movies what we get are "Big Dumb Blockbusters" that can make us, as Idiocracy's Joe Bauers did, look back longingly to a day and age that had "movies that had stories so you cared whose ass it was and why it was farting." Thus TV has become that much more important as their cultural and intellectual "nourishment"--all as people have been positioned to see what they want when they want it by conveniences ranging from programmable VCRs to streaming, and the concern for the rewatch value of the show in reruns has fallen by the wayside, again, because there is so much out there from which to choose.
The result is that there is much more opportunity--and incentive--to produce TV which is not comfort food. But that hardly means there is no audience for the old comfort food stuff, even if there would seem to be a generation gap here, with the Big Three/Big Four networks counting on those older audiences (one reason, I suppose, why they have been cranking out "new seasons" of The X-Files, Roseanne and Law & Order), while young people eat up the rather less comfortable stuff the cable channels, streaming and the rest serve up as a matter of course.
Friday, March 11, 2022
Sam & Cat & the Techno-Hype of the ‘10s
Those who were attentive to children's television in the late '00s and early '10s are likely to remember Dan Schneider's Nickelodeon sitcoms iCarly and VICTORiOUS--the former one of the bigger hits of those glory days for original basic cable programming, drawing in as many as 11 million viewers at its peak (numbers a Big Four network would have been happy to see in its own prime time programming). Of course, even those hits ran their course, and a predictable recourse on Schneider's part was pairing two particularly well-received characters, Sam Puckett from iCarly and Cat Valentine from VICTORiOUS, in a spin-off show, Sam & Cat (2013-2014).
I suspect that for most the show is memorable merely as a minor element in the Schneiderverse that added a little more iCarly and VICTORiOUS content, and a footnote in the careers of the personages involved. However, recalling Sam & Cat a few years on, it interests me as a time capsule from the prior round of techno-hype. After all, wacky as the adventures of the two leads could be, the show was never presented as science fiction in the manner of, for example, Schneider's superhero-themed sitcom Henry Danger. It was set in the "mundane" present. But all the same, visits to a restaurant with robot waiters were a regular feature of the episodes (the eatery "Bots" appeared in 21 of the 35 episodes), while one episode revolved around the chaos that ensued when a drone delivery went very badly (the drone carrying off a baby the two leads, who make their living with a babysitting service, were watching for a client). That was a reflection of how near at hand the technologies were thought to be--an impression the show itself seems to have furthered, inspiring a restaurateur in the Chinese metropolis of Ningbo to use robot waiters himself.
Of course, eight years after the show's end what was supposed to be just around the corner remains remote--our restaurants still reliant on human servers to the point that robotized service remains sufficiently a novelty to bring in tourists and, when one shows up in your town, warrant mention in the local news, while you are very unlikely to see a drone bringing you your next retail order (and that in spite of two years and counting of pandemic-related disruption and a flood of inflationary cheap credit that gave business ample incentive and opportunity to aim for more automated operations). Just as the technology was "not yet there" in 2014, it does not seem to quite be here in 2022 either, with the result that Sam & Cat unintentionally looks a bit the way The Jetsons has to later viewers: like yesteryear's vision of tomorrow.
A Note on the Directorial Career of Elaine May
Where the history of the "New Hollywood" is concerned some figures, of course, get far more attention than others, in ways that do not seem wholly explicable simply on the basis of their presence on the scene or the influence of their contributions (for better and worse), with two factors seeming to me worth noting.
The first would seem to be the particular importance accorded directors in that era, auteur theory-minded chroniclers often making it a story of auteur directors--a Robert Altman, a Peter Bogdanovich, a William Friedkin, a Francis Ford Coppola, a Hal Ashby, a Martin Scorsese, a Michael Cimino. As a result those New Hollywood figures who were a huge part of the scene, and even directed notable films, but on the whole made a bigger mark as screenwriters than as directors--a John Milius or a Robert Towne or a Paul Schrader--get that much less attention.
The second is that the New Hollywood tends to be identified with particular themes and a particular aesthetic--work which was socially critical, genuinely gritty, and taboo-breaking and envelope-pushing with regard to what could be put on the screen, with this extending not only to sex, violence and politics, but to cinematic technique (with Peter Biskind remarking the "Brechtian" tendency in '70s Hollywood). The result was that an auteur New Hollywood movie that did not fit the pattern has been less likely to be recognized as part of that scene--with an obvious example George Lucas' Star Wars (very much a New Hollywood film in a lot of ways, but very far removed from this stereotype, and indeed much more likely to be talked about as the beginning of the post-New Hollywood of crowd-pleasing merchandised, franchised action movie blockbusters).
Considering the career of Elaine May it seems to me that attention to that career--one which had her very much a part of that scene, from her time as Mike Nichols' comedy partner forward, and her collaboration with Warren Beatty, as well as her making of three major feature films in her own right in the 1970s (complete with typically New Hollywood battles over "final cut") and the very New Hollywood denouement to her time as a director with her fourth, Ishtar (bad buzz full of accusations of a crazed director going wildly overbudget on an unworthy project, movie critics dutifully bashing it in an atmosphere of frenzy, etc.)--has suffered in the same ways. As with Towne and Schrader, her status as a screenwriter (with credits on, among other films, Heaven Can Wait and Reds and Tootsie, and even in the '90s Wolf and The Birdcage and Primary Colors) has overshadowed the reception for the movies she helmed. And as with Star Wars her films, like A New Leaf and The Heartbreak Kid, while deservedly regarded as classics, are a far cry from the New Hollywood stereotype, while 1976's Mikey and Nicky, if perhaps coming closer as a crime drama, was badly chopped up by the studio and practically buried (so much so that her career as a director could be considered to have already ended once before)--all as 1987's Ishtar got an undeservedly bad reception, precisely because of the ways in which it was actually New Hollywood (not least, its politics). Still, recent years have seen Ishtar undergoing a rehabilitation (very belated, and perhaps limited in comparison with other New Hollywood classics, but a rehabilitation all the same), which may, besides leading to a reevaluation of her work, also contribute to a fuller understanding of New Hollywood's history extending beyond the clichés--and with it, of just how Hollywood has ended up in its current, artistically none-too-fortunate state.
Ishtar, Vindicated?
In the waning days of the New Hollywood, when the Suits were getting their own back and breaking the artists to their Neuordnung of executive-controlled, high-concept guck, the press--then as now dominated by sycophants, if perhaps not quite so totally--followed a particular pattern in regard to the release of movie after movie. Simply put, a filmmaker who had rubbed the execs the wrong way would, in the midst of a major production, be accused of going "out of control" in the pursuit of something not worth pursuing--of going wildly overbudget in their obsession with some half-baked vision. Afterward the critics would dutifully say the movie was terrible, the film would be deemed a box office flop, and the filmmaker would never get another chance like that again.
Thus did it go with Francis Ford Coppola on Apocalypse Now, and Michael Cimino on Heaven's Gate. Of course, the reputation of those films has improved greatly since they got that treatment, to such a degree that it is they and not their duck-talking detractors who have been vindicated by history. It may have been too little, too late, to save their careers (whereas the careers of said hecklers, of course, went unscathed), but for whatever it may be worth that late vindication demonstrated the primacy of industry politics, and often politics politics, as "liberal Hollywood" cracked down on those artists who were even a little too far to the left for the liking of those in charge. (Heaven's Gate star Kris Kristofferson, certainly, has been very frank about what he believes happened with that movie, declaring that in a political atmosphere in which the Attorney General of the United States would go so far as to tell "studio heads that 'there should be no more pictures made with a negative view of American history'" the "right-wing revolutionaries" of the Reagan period "assassinated my last real movie.")
As it went with Apocalypse Now and Heaven's Gate so did it go with Ishtar, an Elaine May-helmed film costarring Warren Beatty (the folks who brought you Reds), with both a big budget and an outlook that can, from the standpoint of the Rambo-Top Gun '80s, seem audaciously leftish, with the CIA a backer of brutal despots who live in luxury while keeping their people in grinding poverty ("The dome of the emir's palace is gold, and the people of Ishtar have never seen a refrigerator"), and the Agency a picture of both murderousness and incompetence, and the good guys the rebels fighting their client, who win in the end.
I have to admit that there seems less prospect of this movie ever enjoying the "unfairly maligned masterpiece" status Apocalypse Now or Heaven's Gate has acquired over the years. May's film included some clever ideas, like the casting of Warren Beatty as the confidence-lacking, forlorn "loser" and Dustin Hoffman as the self-confident "ladies' man," but this was probably too "'60s inside joke" to go over well with the general audience in 1987 (or after), while the movie never comes close to the comic flair of May's prior A New Leaf or The Heartbreak Kid. Those movies' best scenes had a tightness about the writing and direction--literally tight, in that they tended to take place among people clustered around a small table (think, for example, of Charles Grodin's Lenny Cantrow asking Eddie Albert for Cybill Shepherd's hand in marriage, or trying to break up with his wife Lila on their honeymoon, in The Heartbreak Kid), with this reflecting the source of the comedic tension, namely the collision between the aggressively one track-minded stupidity of the protagonist and the hapless individual unfortunately in his path with no way to turn. Alas, May is working on a broader canvas here--as wide as the Sahara in which the story is set--while the characters as written lack that forcefulness. (Looking at Dustin Hoffman's Chuck Clarke and Warren Beatty's Lyle Rogers I found myself remembering what Roger Ebert said of A Night at the Roxbury, calling it "the first comedy I've attended where you feel that to laugh would be cruel to the characters." Thus did it seem in regard to Clarke and Rogers--the latter of whom in an early scene attempts suicide after his girlfriend breaks up with him.) The approach is different, the characters different (even the ladies-man-in-his-own-mind Clarke is no Cantrow), and I came away with the impression that May, while admirable in attempting something different, had in this case undertaken an experiment that did not entirely work. Still, that leaves Ishtar a good deal better than its longtime "one of the worst movies ever" status would suggest, and, even if, again, this is happening after the damage to her career (and Beatty's) was done, it seems only fair that the movie's reputation has been improving in recent years--a reminder of the real politics of Hollywood, and the entertainment press, in the '80s and since.
The Hypocrisy of Respectful Disagreement--and the Scarcity of Genuine "Respect" in the World
I have previously remarked the reality that, especially if one takes the word respect to mean what it has long meant--deference--then "respectful disagreement" is a contradiction in terms.
Yet people insist that there can be such a thing.
Apart from the not inconsiderable reason that they don't know the meanings of simple words, why do so many insist on that?
Simply put, they do not want to give up their disagreement, but neither do they want to admit that they are being disrespectful--while, perhaps, determinedly overlooking the disrespect with which they themselves are being treated for the sake of avoiding confrontation.
The pretension of respectful disagreement also enables them to overlook the reality that the "respect" of which we hear so much overwhelmingly derives from sources other than genuine esteem for those upon whom respect is bestowed. "The essence of any hierarchy is retaliation," Samuel Shem wrote in The House of God, and so does it go with the "respect for authority" incessantly propounded by so many as the highest of virtues--respect far, far more likely than anything else to be a matter of tribute extorted by the more-powerful from the less-powerful in a manner that, like so much else we take for granted, gives the lie to all our talk of freedom, equality, individual dignity and the rest.
It also reminds us that what respect is supposed to mean in many minds is a rare thing indeed. After all, respectfulness as something other than an outward show of deference to another out of fear of a blow requires humility, patience, self-restraint on our part. It means admitting that perhaps, just perhaps, another person knows better than we do (as in "I respect your knowledge" or "I respect your judgment")--or that even if we are sure we really do know better we have no right to dispute their decision (as in "I respect your privacy"). It means that because of this we listen rather than speak, and resign our right to a say.
That said, now may be a good time to ask yourself honestly just how many people in your life have actually displayed such qualities, as against how many loudmouthed idiots think they know best about all things and insist that everyone hear what they have to say, all the time, on everything, whether anyone wants to hear it or not, commonly inflicting on those around them not only their insufferable disrespect but what may be their even more insufferable stupidity. If there is even one person in your life who genuinely has these qualities you can count yourself lucky--and, I dare say, you owe them at least a measure of genuine respect for having them.
'90s Ludlum: Notes on The Apocalypse Watch
In thinking about Robert Ludlum's career I have tended to compare the Ludlum of the '70s, and especially the early and mid-'70s, with the later Ludlum--who produced a lower output of longer and more formulaic books that significantly reused older ideas, and which relied less on political edge, or plain and simple suspense, and more on shoot-'em-up action for their interest (books like The Bourne Identity and The Parsifal Mosaic rather than Trevayne or The Chancellor Manuscript). Still, reconsidering what I think of as Ludlum's three "Fourth Reich" novels--the '70s-era The Holcroft Covenant, the '80s-era The Aquitaine Progression, and the '90s-era The Apocalypse Watch--it seemed to me that there was something worth saying specifically about the Ludlum of the '90s, whose work did, indeed, seem distinctly of the decade, certainly to go by that particular book.
A logical starting point is that particular ‘90s trait, the thinness of the political premise of The Apocalypse Watch. What are these "Nazis" all about? What exactly are they looking to do and what is driving them, and enabling them? In contrast with the prior books' evocations of history, the sense they offered of the villains' demons, the high places in which those villains were situated, the sense of society's vulnerability to the madmen and their manipulations, this time around the villains' objects and sponsors are discussed in only the vaguest manner, which along with their less formidable situation leaves them looking a hazier, weaker foe than in Ludlum's preceding uses of the theme, the more so for this book's particular plot structure. In those earlier books the villains seemed like giants, and the hero very small next to them, because of how untouchable the villains were, and how these untouchable villains put a target on his back, forcing him on the run. In line with this pattern, when writing professional intelligence operatives Ludlum made a point of alienating them from their organization--to the point that their own people are pursuing them with a readiness to kill (The Matarese Circle, The Bourne Identity, The Parsifal Mosaic). By contrast The Apocalypse Watch's Drew Latham comes off as merely a cop walking a beat, and in line with that duty tackling a pack of particularly nasty but essentially commonplace gangsters. (It factors into the sense of cop drama-ness about it that in contrast with the usual globetrotting Ludlum narratives Latham's activities are mostly confined to a single metro area--while it is all too apt that in the midst of his running about his superior calls him a "loose cannon.")
As one might guess the scenario, even beyond its familiarity, has less edge, and the author endeavors to make up for it with action, the reliance on which is exceedingly heavy even by late Ludlum standards. The book really piles shootout atop shootout atop shootout--indeed, piles commando raid atop commando raid atop commando raid--through its six hundred pages. At their best serviceable rather than, to use that descriptor Ludlum liked so much, "brilliant," the sequences' details become repetitious, two separate scenes of the type entailing the heroes' stumbling upon prostitutes who just so happen to have been hired by their targets, and whose disgust with their "johns" makes them amenable to aiding the commandos' infiltration of the target's residence. Connected with this was another break with Ludlum's usual pattern, namely the techno-thriller/sci-fi touches--from mind-controlling computer chips implanted in people's heads, to giant World War II-era Messerschmitt gliders, to a "They saved Hitler's brain!"-type twist at the end--which, unfortunately, are of a piece with the comparative sloppiness of the plotting (with much in the overstuffed concoction amounting to little, and much not properly followed up, like, you know, the Nazis setting the Bundestag on fire).
Meanwhile, in those portions of the book in which there isn't actually shooting (there are some) the author offers up the kind of "snappy" dialogue that drags out a scene without enriching it, and which, while far from unprecedented in Ludlum's work, still felt as if it belonged to a different "voice." Where Ludlum himself remarked particular characters of his older works as sounding like they came from the "thirties . . . [of] the late night television films" (The Matlock Paper) or "thirties Harvard with . . . pretentious emphasis" (Trevayne), in apparent obliviousness to how much his books in general sound like that, here what the dialogue recalled was stereotyped hard-boiled crime fiction, with the evocation displaying a self-consciousness and show-offiness and glee about its anachronism. This seemed all the more telling given the metafictional indulgences (the pseudo-hardboiled dialogue apart, a minor plot point centering on a character's attempt to buy an Aston Martin DB4, which car's appearance in Goldfinger is mentioned more than once, while Latham's father's work as an archaeologist inevitably meant an Indiana Jones reference).
All in all picking up this Ludlum novel felt like picking up a big Jack Ryan hardback and finding a really long Op-Center novel inside. In fact, I would not be surprised to find that Ludlum had started the book (there is a genuinely Ludlumesque feel to some of the earlier material--something of his prose and his sensibility evident in the initial scene in the Nazi facility in Austria, and later the Jean-Pierre Villiers subplot), that the author had for whatever reason been unable to meet his schedule (burning out, perhaps, after so many years and books and amid the collapsing market for novels like these in the '90s?), and that he then handed the work over to some jobbing writer to finish, who either decided, or was told, to go for a book that was more "contemporary" or "hipper"--this seeming the more plausible given how Ludlum was so soon to be "co-authoring" books in great numbers, by and large blander, virtually mass-manufactured books of the kind that have characterized so much more of popular fiction since--and which, when I find myself looking for a piece of light reading, send me running back to the more satisfying thrillers of an earlier day.
Monday, March 7, 2022
The Sopranos and the Decline of Television Criticism
As I have remarked in the past the country was clearly going off the rails mentally back in the ‘90s--but it knew it was going off the rails, and at least gave some sign of caring that it was going off the rails, in part because it occasionally criticized itself for doing so. And the main thing that distinguishes the cultural commentary of the time from where we are now is the demise of that self-awareness and that capacity for self-criticism (even if by that point they had been reduced to a mere flicker of what they once were).
One small example of that would seem to be how television criticism has changed. I remember, for instance, how unhinged the TV critics got in their "rapturous praises" of The Sopranos--and also remember a hilarious Saturday Night Live sketch that presented a commercial for the show’s upcoming season, saturated with quotes plucked from (mock) reviews claiming that The Sopranos "will one day replace oxygen as the thing we breathe in order to stay alive" (!).
Since then it seems that this kind of over-the-top praise has become a common mode for TV criticism, particularly for the more prestigious cable/streaming productions (They're mad for Mad Men, euphoric about Euphoria! Breaking Bad! House of Cards! Ad nauseam.)--and no one thinks twice about it. Instead "everyone" insists that this is the Golden Age of TV--because the critics tell them it is, and they have lost their own critical perspective on those critics.
What happened to make it so? I suspect one factor was that so much TV became such a "middlebrow," "Midcult" affair--using the "modern idiom in the service of the banal," as Dwight Macdonald put it, doing arthousey things that made the typical "educated" person think it was art even when it wasn’t. An aspect of this is how television became so much more complicated even when it wasn’t being particularly intelligent--with Arrow a good example of this, I think. What I saw of the show was pretty standard stuff as far as comic book action-adventure goes. (Thus did season four, for the millionth time, have a villain whose big plan for "regenerating" the world was instigating a massive nuclear exchange.) But the nonlinearity of the episodes, the fact that not just the "present day" material of the episodes was connected with season-length arcs but so were the constant flashbacks, with the former significantly evoking the latter, made the show demanding viewing, even when one could unkindly characterize it as stale and stupid.
If you will forgive what may seem a pretentious allusion--putting in the brain-work to follow the tale told by an idiot, to other idiots, for the delight of idiots, can make people who may not necessarily be idiots miss the fact that the tale signifies nothing. Thus do English students go on battering their heads against the middle finger to the principle of coherent, rational meaning that is T.S. Eliot's "The Waste Land" a century after the publication of that unfortunate poem, in spite of the complete frankness of Eliot about just what it was that he did in writing it, while convalescing after a "nervous breakdown" ("just sayin'," as they say).
The students, of course, would not be doing this were they not unfortunate in having instructors who are monomaniacally fixated on claiming to find meanings in works other academics told them were "important" (they usually don't make that judgment themselves), and who, accepting the sheer, nihilistic depth of Modernism's irrationality and anti-rationality in all its absurdity and contradiction without really understanding it, prove they don't understand it by presuming to dig such meanings out of such works in quasi-rational fashion, with points given for abstruseness and implausibility in what can seem more parlor game than scholarly enterprise. (John Crowe Ransom, whose The New Criticism, frustrating as it is, was for me indispensable to getting this stuff, wrote that the Modernists, for whom it seemed "the bottom [had been knocked] out of history and language and [they] become early Greeks again," thought a work's "ontological density . . . proves itself by logical obscurity"--its incoherence from the logical, rational standpoint proof that it is the truth about the world in which we live. So yeah, if this stuff seems crazy to you it's not you, it's them, definitely them.)
The critics at the more upmarket and mainstream outlets, being middlebrows, are those who accepted the instructors' beliefs unquestioningly, or have come to imitate those who did. Naturally they fall for the trick every time, and, a scale ranging from "good to excellent" not being satisfactory anymore, they grade it all from "excellent to outstanding," taking the fulsomeness of their praises totally for granted.
Who's Reading Clive Cussler?
Once again it occurred to me to go to Goodreads to check out the evidence it offered of interest in one of those authors I so often find myself writing about--Clive Cussler, in this case.
As I expected (because it seems to have been particularly popular with fans--it was certainly my personal favorite--and because of the movie version) Sahara headlined the list with regard to the number of ratings.
What I should have expected, but didn't, was the fewness of those ratings--the book having fewer than 58,000.
I also didn't expect (though I should have) that the next most popular book was the other one with a movie made out of it (and the word "Titanic" in the title), Raise the Titanic!, with less than half Sahara's unprepossessing number of ratings--a mere 26,000--with the numbers falling from there. Next on the list was the follow-up to Sahara, Inca Gold, followed by Atlantis Found--boosted by the Atlantis theme, I guess--and then Valhalla Rising, which, because of its uncanny coincidence with the horrific events of September 2001, got a sales bump that gave it the only place any Dirk Pitt novel enjoys on the Publishers Weekly top-ten-of-the-year lists, standing at #8.
Where the Dirk Pitt books authored by Clive Cussler alone (and not by Clive and Dirk Cussler) were concerned it seemed that older books tended to do less well than newer ones. Treasure and Cyclops (two books I held in particularly high regard as a fan) trailed the later (I thought, repetitive and tired) Shock Wave and Flood Tide and even the particularly disappointing Trojan Odyssey. Still, it struck me that the more recent, Goodreads-era continuations and tie-ins, the latest installments of which one still finds on grocery store paperback racks, did not better them. (The most "successful" of the post-Trojan Odyssey books was Treasure of Khan, the most successful of the tie-ins the Grant Blackwood-coauthored Spartan Gold, with about half of what Raise the Titanic! got.)
Altogether the pattern seems to me to fit what I saw with Robert Ludlum not so long ago: the biggest hits generally outdoing the less successful ones, later books doing better than earlier ones, and the books that got adaptations to other media doing better than those that did not, all of which combined to put a particularly popular later novel that got a major twenty-first century film adaptation far and away at the top--and almost everything else in the range of 20,000 ratings or less. Given that they were roughly contemporaneous authors producing broadly similar fiction (writers who came along in the '70s and quickly made their mark with espionage stories of the more action-adventure-oriented type) the parallel may not be so surprising, but I also find myself thinking of how different Cussler could be from Ludlum. Cussler's novels were, generally, more simply plotted and more plainly written than Ludlum's, and especially by the time of books like Cyclops, rather more summer blockbuster-like in subject matter and feel--more accessible, more fun for most, I would imagine. Certainly Cussler had his politics, same as the other genre authors did, but it probably mattered that those politics, if probably not too different from his colleague Tom Clancy's, were treated relatively casually in comparison with Clancy's stridency in such matters--and that, because of the way the whole was put together (the plot points revolving around things like secret bases on the moon), one did not take him so seriously--with the result that the books still dated, but perhaps less bothersomely for many who do not necessarily agree with him (so long as they do not look too closely, as one is reminded when recalling the furor over Sahara). Nevertheless, to go by what I see here, the fact would not seem to have helped him much where his readership was concerned.
Joker and the Cult of Intelligence
I remember being struck by the intensity of the moral panic that surrounded the release of Todd Phillips' Joker back in October 2019. It had been some time since we last saw anything quite like it--not since the '90s, perhaps--with the panic the more remarkable because, rather than this being a matter of "concerned parents" and the like, it seemed to be "liberal" film critics, whose duty should ordinarily be the defense of artistic freedom, leading the calls for censorship.
The panic has since turned to disdainful dismissal of the panic. This is partly because this is the media's usual way of overcompensating after a more than usually conspicuous display of the extreme stupidity of its personnel. (Consider, for example, how it went from extreme credulousness about self-driving cars being almost here circa 2015 to its present sneering about the prospect of such vehicles ever happening.)
Yet it is also a matter of the film's limitations in handling the issues that it dared to take up (even as its taking them up was clearly the real reason so many critics had it in for the movie). Where those weaknesses were concerned a particularly significant one seemed to be the filmmakers' overreliance on Martin Scorsese's The King of Comedy, and what went with it, the blurring of the line between reality and fantasy--leaving at least room for that infamous cop-out, that it was all "just a dream." However, more significant than that was the conception of the character of Arthur Fleck himself--in ways going beyond how the depiction of his mental illness allowed for that possibility. Consider the contrast between what Fleck is, and what the Joker is. In every incarnation of the Joker of which I know the Joker is not merely a murderous criminal, but a criminal "genius." Fleck is the opposite, a man who seems a good deal less than mediocre--consistently awkward in manner, inarticulate in speech, foolish in his choices large and small. Of course, people do have their hidden depths, and some rather unprepossessing individuals surprise us with their thoughtfulness, acumen, knowledge and other gifts--in which we may well catch the flash of genius--but I cannot think of any moment in which the film hinted at anything of the kind.
Naturally it is hard to imagine that such a man could ever become the archnemesis of the formidable Batman. And considering that, I cannot help suspecting this is because Arthur Fleck was imagined as a beaten-down working class man. One might imagine the filmmakers thought so because of the genuinely smaller opportunities people in that social stratum have for acquiring education and culture and developing their abilities and talents than those born in more affluent circumstances, and because it is part of the cruelty of poverty, material insecurity and other such hardships that they are mentally stultifying. Yet even allowing for that unhappy reality something could have been done with the character here. Fleck might have been imagined as one of those poor but bright kids who missed the very limited chance at a better life society offers them; perhaps as an avid reader; perhaps simply as one of those unassuming people who now and then manage to say something that catches the people looking down their noses at them off guard and leaves their supposed "betters" spluttering or speechless. However, he wasn't. And it seems to me more plausible that the writers of the movie could not imagine that such a person could be intelligent, articulate, or forceful in any way prior to his deciding that he had nothing left to lose and picking up a gun; that they thought that had he been any of those things he would have somehow ceased to be working class, as if only people who deserve to be "losers" can be found at the bottom.
The result is that, at least on this one point, when taking up subjects very little touched upon by American cinema the filmmakers' work ended up significantly colored by the prevailing prejudices about social class generally, working people particularly--and very likely, the presumption of meritocracy--to its cost.
Remembering Angus Calder's The People's War
For as long as I can remember the narrative about World War II prevailing in the United States (which received a massive boost in the late '90s from Tom Brokaw and "Stephen Ambrose") has been a distinctly conservative-nationalist one, and even a jingoistic-militarist one--a lesson in the ever-presence of monsters in the world, and the evil that "ideologues" do; the cowardice and foolishness of appeasement, invariably identified with leftie peaceniks, at whose every expression of reticence about going to war, any war, the word "Munich" is to be sneered; the necessity of national unity and deference to leaders in a cheerful "can-do," "get-it-done" spirit, above all in what is so often made to seem the sole legitimate collective endeavor, war; the eternal and supreme value of the old-time patriotic and martial attitudes, and the idolization of men of the type of George Patton and Douglas MacArthur, not in spite of but because of what persons of liberal sensibility are prone to see as their failings; the inherent goodness of the English-speaking peoples and the depravity of the continentals and the world's need for America to lead it; and of course, the superiority of the elder generation, the superhuman "Greatest Generation," to its spoiled, flabby, worthless children and grandchildren and great-grandchildren.
Moreover, this understanding corresponded to how Americans think of others as experiencing the war, especially those "nearest others," the British. Passing more or less the same concept through their "stage Englishman" stereotypes, of course, they picture a stiff upper-lipped elite Carrying on and Making Do as the bombs fell--whether in those mansions to which children like the Pevensies were evacuated, or at the front, where Cockney enlisted men cheerfully followed their public school-graduated officers to death in a display of "convenient social virtue" that made even "the Yanks" as described here look so egalitarian that the Brits could only shake their heads.
Of course, anyone who gets beyond the most superficial knowledge of the conflict quickly begins noticing a great deal that does not fit in with this narrative. Still, to catch the inconsistencies is one thing, to add them up to create a different and possibly truer image is quite another, and that made discovering Angus Calder's The People's War quite the revelation when I first happened upon it way back when--rather than a history of the conflict in which those on top tell those below them what they would like them to think, an attempt at a history of the conflict from below. The intent apart, there was what it showed--a deeply unequal and deeply divided nation which brought all its pre-war baggage with it into the conflict, in which the elite, not so different from its "better Hitler than Blum"-minded counterparts on the continent, profoundly ignorant and profoundly disdainful of its poorer countrymen, and thinking first, last and always of the preservation of their own privileges at any cost to the country and the world, proved reluctant and feckless war leaders, and discredited themselves in the wake of defeat after defeat in Norway, the Low Countries and France--where the "miracle" of Dunkirk was really a partial escape in the wake of a disastrous defeat that would never have been allowed to happen at all by an even slightly determined and competent Anglo-French leadership, and which turned what might have been a limited post-World War I blow-up into an even greater catastrophe. It also showed coming out of it a war waged by the people, and won by the people, the close of which, befitting a "people's war," brought with it a new social contract--with the Beveridge report, the election of '45, the first Attlee government, and the beginning of a period of reform rather than the end that the war proved to be elsewhere (in America, the end of the New Deal, with the reformist impulse not to make much of a comeback until the Great Society era a generation later).
Radically leftish as all this sounds, there is a critique of this view from the left--not least of the extent to which the war never ceased to be an Establishment war, the sharp limits on the post-war reforms at home, and the extent to which even very limited reform at home was matched by constancy in policy abroad. Rather than leading the way to a world where there would never be another war--to the left's vision of global governance that would spare the world another conflict like the one that had just ended--that Labour government took the role of number two member of the Western alliance in the Cold War, while fighting to hold on to the Empire as much as it could for as long as it could. And in line with all that, as David Edgerton points out, the welfare state was consistently second to the warfare state in British post-war life. In short, if it was well that the Allies defeated the Axis and Britain's people did their part, and well that the British people had their welfare state, it was no happy ending to the tale--with the fact all the more obvious in the age of Thatcher, and Blair, and Cameron, and Johnson.
As one might guess in considering how little remembered the "people's war"-type understanding of the conflict seems to be these days among the British public (which, contrary to the incessant output of Heritage drama, seems no more historically-minded than its American peers), it is not the leftist critique that has shunted it aside, but quite the opposite--the triumph of the rightist-nationalist version of the war's memory, which, to go by what we have seen on the big screen, has been promoted vehemently in recent years by Euro-disdaining Brexiteers and Blackadder-bashing right-wing revisionists. In an age in which the danger not only of war but of full-blown great power war presses, this can seem a most inopportune forgetting of the painful lessons of what has been widely accounted the bloodiest century in human history.
Sunday, March 6, 2022
Steve Shagan’s The Formula and the Future of the Coal Industry
It seems the name Steve Shagan is not much heard these days, but once upon a time he was fairly prominent as both a novelist and a screenwriter. He wrote the novel Save the Tiger, and the screenplay for its cinematic adaptation (1973), the film for which Jack Lemmon won his Oscar, and the Oscar-nominated screenplay for the historical drama about the S.S. St. Louis, Voyage of the Damned (1976). His crime novel City of Angels became the film Hustle (1975), while his later scripts included those for Michael Cimino’s adaptation of Mario Puzo’s 1984 novel The Sicilian (film 1987) and, in those years in which legal thrillers and Richard Gere both remained robust box office draws, Primal Fear (1996).
As it happened these works, if in many cases commercially and critically successful at the time, have not endured. (Looking back I can’t help but be struck by how while dozens of Lemmon’s films have become classics--the Billy Wilder movies like The Apartment or The Fortune Cookie, or the movies produced from Neil Simon’s material like The Prisoner of Second Avenue, or Blake Edwards’ Days of Wine and Roses, or Costa-Gavras’ Missing, or The China Syndrome, or The Odd Couple--the movie for which he actually got an Academy Award is practically forgotten. I remember Hustle mainly for the unlikely pairing of Burt Reynolds and Catherine Deneuve as the leads, which the film itself seemed to underline with their leaving a screening of Claude Lelouch’s A Man and a Woman together.) But all the same I do remember reading a number of Shagan's thrillers--among them 1979's The Formula (which, it might be added, became yet another feature film).
Those thrillers, The Formula included, were very much of their time--high-stakes, jet-setting international thrillers where the accent was more on danger and suspense than big, minutely detailed action film-type set pieces. (They seemed especially ’70s in their pre-Reagan-era sex-and-drugs-partyness; their blend of essentially conventional politics with a hint of post-Watergate cynicism to keep a modicum of credibility; and their Los Angeles-ness in an era in which that city was still the locus of American filmmaking--their period cop drama clichés, and especially their L.A. cop clichés, of a kind then particularly prominent. "Goddamn ulcer!" yelled Tactical Chief John Nolan before he "popped a Gelusil from the open bottle on his desk"--and I pictured it as if it were on a movie screen in front of me.)
The results were satisfactory enough as the thrillers they were meant to be (at least, to a reader like myself who preferred the ’70s stuff to the ’90s with its psychobabblers and pathologists and such), but certainly not masterpieces, and even less satisfactory when one moved away from their effectiveness as mere diversion to their handling of their themes. (The big "discovery" in Shagan's rather Dan Brown-ish 1984 novel The Discovery, for example, was particularly underwhelming, not least because of the weakness of the historical premise.) Still, The Formula had its interest on that level. Set against the backdrop of the then-current energy crisis, the book revolved around the pursuit by various interests of a much-coveted technique the Third Reich had developed for converting coal into oil, forcing the hero on a danger-filled international journey to recover it. Some years after reading the book, when there was another energy crisis upon the world evocative of the ’70s (that of 2003-2008), I found myself thinking of how that theme still seemed depressingly relevant as people spoke of "peak" oil as a very near-term possibility, and some speculated about the conversion of some of our more abundant coal into scarcer oil as a way to relieve the pressure. The intense concern about peak oil, and possible coping mechanisms such as that, passed amid the combination of global economic stagnation (since the Great Recession growth has been in the toilet, with all that implies for the rate at which the demand for energy has increased), a shale boom that after many false starts produced a relatively cheap expansion of fossil fuel supplies (such that with gas so readily available it’s actually become tougher to sell coal-fired electricity), and the advances renewables have made (however much a Big Coal, Big Gas, Big Oil, Big Nuclear-boosting mainstream press continues to sneer at them)--while ever more dire assessments of the severity of the climate crisis have the ecological harm our fossil fuels will do seeming a more pressing danger than any shortfall of production. (Tellingly, "Leave it in the Ground!" has become a slogan.)
Of course, the fact remains that, even in the most favorable scenario, transport is shifting away from fossil fuels more slowly than electricity production. We may already have electric cars, but the speed at which the world will replace internal combustion engine-powered vehicles remains open to question--while the replacement of oil in aircraft and ships is a somewhat more distant prospect still. Might the coal industry, currently tottering because of its uncompetitiveness in providing electricity, survive by supplying that lingering need for oil? The prospect seems dim these days. Yet if a peak oil-induced supply crunch has only been postponed, as some suggest, and a method for converting coal to oil becomes cost-effective in the resulting market (the way coal-fired electricity has, temporarily, become salable amid an energy crunch), we could see coal remain a part of the scene for as long as it takes to move past oil altogether--especially should the energy transition drag.
Hopefully, it won't--any longer than it already has.
Do Bad Reviews Have More Impact Than Good Ones?
One of those unexplained, unsubstantiated bits of Internet "wisdom" I have run across holds that where books are concerned it takes a dozen good reviews to cancel out the effect of one bad one.
How did they come to that conclusion? As I said, it was unexplained and unsubstantiated. Yet simple math shows something like it on Amazon: with one one-star review and n five-star reviews the average is (1 + 5n)/(n + 1), so that following a single one-star review it takes eight five-star reviews just to bring the average back above 4.5 stars (41/9 ≈ 4.56), and sixteen such reviews to bring it closer to 5 stars than to 4.5 (81/17 ≈ 4.76).
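For anyone inclined to check the arithmetic, here is a minimal sketch in Python (the helper names are my own invention; the only assumption is that the site reports the plain mean of all star ratings):

# Sanity-check the star-average arithmetic above, assuming the
# displayed rating is the plain mean of all star ratings.

def average(one_stars, five_stars):
    # Mean rating given counts of one-star and five-star reviews.
    return (1 * one_stars + 5 * five_stars) / (one_stars + five_stars)

def five_stars_needed(threshold, one_stars=1):
    # Smallest number of five-star reviews lifting the mean above the threshold.
    n = 0
    while average(one_stars, n) <= threshold:
        n += 1
    return n

print(five_stars_needed(4.5))   # 8  (mean 41/9, about 4.56)
print(five_stars_needed(4.75))  # 16 (mean 81/17, about 4.76)

The same arithmetic also shows that the damage is worst early on: one one-star review among a handful of ratings drags the mean down far more than one among hundreds.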
Of course, star-based rating systems are an attempt to quantify something only awkwardly quantifiable, but I have found myself wondering if there is not in those numbers some reflection of the reality. At the very least it seems to me that, all other things being equal, positive reviews individually have less effect than bad ones, and I can think of at least three reasons why this is the case.
1. Our personal experience--in many cases, long and painful personal experience--teaches us that most of what is out there runs the gamut from mediocre to plain dreck. Reviewers, who seem to consider it their job to grade content on a scale running from good to excellent--and increasingly, it seems, from excellent to outstanding--suggest the complete opposite. Therefore the odds are that anything we are being told is great is likely overpraised--with the result that, all other things being equal, a less than glowing review is the more likely to be accurate from a purely statistical standpoint than a breathlessly praise-filled one. This may be reinforced by another aspect of the situation, namely that reviewers often seem to be incentivized to produce positive reviews--it is less common that they have an incentive to produce negative ones, and it would be my guess that most readers are less alert to those occasions when that incentive exists. (Considering popular works, for example, I have often noticed that reviewers tend to go out of their way to say negative things about works which are in some way critical or satirical of the political status quo--with the critical opprobrium directed at the recent Don’t Look Up an obvious example.)
2. It is more often the case that the critical, denigrating review will be specific in its comments, and support its assertions. Putting it bluntly, when we are pleased by something--perhaps the more so the more pleased we are by it--we are not moved to examine it very closely. Our impulse is simply to enjoy it.
Our reaction is the opposite when we are displeased--and when we communicate our displeasure to others. It helps, too, that we are more likely to see a need to demonstrate the validity of our reaction to others when we are hostile.
3. It may be that bad reviews are more entertaining than good ones--even when they do not necessarily work well as reviews. A positive review which is well-written is apt to really be about the thing being reviewed, rather than about itself or its author, and we respond to it accordingly. When someone does a good job writing a positive review we may respect the quality of the writing and the intelligence of the assessment, but our principal interest is in the item under review. Even if the reviewer shows some wit the entertainment value of the review is likely to be slight, because the review is about the entertainment value of something else.
By contrast many an author of a negative review makes the review about itself--and for that matter, about themselves--rather than what they reviewed. They put on a show of tearing apart the work they so disliked (or at any rate, of tearing apart something, since they are likely to get far removed from the thing they are actually reviewing). They become an insult comic giving a performance. Of course, this is a rather narcissistic and mean-spirited approach, but if twenty-first century pop culture teaches anything (and I’m not sure that it teaches much else) it is that narcissism and meanness for meanness’ sake have colossal audiences. And a reviewer who has entertained their audience is likely to be heeded even when not really reviewing the thing at all. (Indeed, this may have much to do with how people are, on average, quicker to volunteer a negative comment than a positive one--with obvious implications for the more casual sort of reviewer who supplies so much of the customer reviews at sites like Amazon.)
Up against all that it seems only too predictable that one would need legions of positive reviews to subdue a handful of negative ones. For those sending their work out into the world, especially when they do not have a colossal media machine behind them, this is not a happy thing--a single troll can easily do a great deal of damage, often entirely undeserved, to the already slight chances of their work's being seen, and simply for the pleasure they derive from having caused harm. Indeed, one may wonder if for many it is worth bothering to pursue reviews at all. If it takes sixteen five-star reviews to substantially wipe out the effect of a single one-star rating, then the odds are massively against even the best work getting a fair hearing--and it may be well worth reconsidering the conventional wisdom about what publicity will do for a great deal of work.
The Unevenness of Technological Development, and the Vision of RethinkX
Amid the steady analytical output from the RethinkX think tank the principal document tying together the implications of their studies of energy, transport, cellular agriculture and other technologies remains their Rethinking Humanity report of September 2020, which argued that order-of-magnitude drops in the cost and natural resource inputs required to produce the goods of five "foundational sectors" of economic life--food, energy, transport, materials and information--will, with virtually mathematical certainty, mean a crisis in human existence as radical as the transition from prehistoric to civilized times, a crisis the authors predict will arrive in the next two decades. As the cliché has it, such a crisis spells not only the possibility of bitter conflict and great danger but also opportunity, with the optimistic outcome the authors envision being the end of civilization’s unequal and brutal "Age of Extraction," with its hierarchies and exploitation, and the beginning of an "Age of Freedom."
There is, of course, no need to rehash the argument here. Apart from the fact that anyone who would like can download a copy of the sophisticated but highly readable document free of charge from the authors’ own web site I have already got in my two cents on the matter. I only raise all this as background to my mention here of the particular dimension of the issue I find myself thinking about now, namely the profoundly different rates of change across those five sectors. After all, there is no question but that the pace of technological change, the drop in costs, in information, has far, far outpaced any such trend in the other areas. Indeed, even as the price of producing, copying, storing, processing and transmitting information has fallen at mind-boggling rates over recent decades the prices of all the rest have remained stubbornly high, as we were painfully reminded during the food-and-fuel crisis of ‘06-’08, and are reminded yet again during what may be the even sharper inflationary shock we are all now living through.
The fact can seem a cruel irony. After all, while there is no questioning the great practical amenities the digital age has provided, many of which benefit those who have been particularly disadvantaged in various ways (cellular telephony, for example, may well have sped the development of modern communications infrastructures in developing nations), it would seem that from the standpoint of the world’s ill-fed, ill-clothed, ill-housed billions, and even the budgets of all but the most affluent "First Worlders," there is far more desperate need of cheap food, materials, energy and transport than of those amenities. It may also seem that the revolutionizing of information so far ahead of the others, with society so little changed, has meant that even the potentials of revolutionizing merely the informational sector are far from being as fully realized as they might have been. Consider, for example, how once upon a time it was hoped that Internet access might be a public good, but instead one must pay vast sums to service providers, and get in return a heavily surveilled, algorithm-fiddled, ad-choked Internet where we can’t avoid running into paywall after paywall after paywall. Consider how, as the pandemic recently showed, telecommuting has made so little advance after a generation of everyday Internet usage.
Still, as anyone familiar with the history knows the progress of technology has never been even (thus in the nineteenth century did we have newfangled steam locomotives delivering goods to train stations to be picked up by drivers of the same sort of horse-drawn carts people had been using for millennia), and alas, never revolved around the needs of those who have least, and most need their lot ameliorated. More significantly, the developments RethinkX anticipates in the other four sectors would seem to depend on the gains in information technology discussed here, from the vision of cheaper transport summed up in Transportation-as-a-Service (reliant on advances in artificial intelligence), to the prospect of cheaper production and fabrication of materials (which utilizes computerized 3-D printing). The result is that the successes in information technology may well prove to have been a prerequisite for what follows, reminding me of something claimed by a famed futurologist whose ideas I have been considering for decades--Ray Kurzweil. As he acknowledged, the changes in technology were overwhelmingly coming in the area of computers--but computers, he argued, would change everything. Should that prove true RethinkX may turn out to have done an even better job than Kurzweil of explaining how such a thing came to pass.