Earlier this week journalist Ezra Klein penned an opinion piece on artificial intelligence (AI) in the New York Times.
What the piece offers the reader is predictable enough--a hazy sense that "Big Things" are happening, bolstered by an example or two (DeepMind "building a system" that "predict[ed] the 3-D structure of tens of thousands of proteins," etc.), after which, faced with the "Singularity," we get not the enthusiastic response of the Singularitarian but the more commonplace, negative reaction. This includes plenty about our inability to actually understand what is going on with our feeble, merely human minds (citing Meghan O'Gieblyn repeatedly, not least her remark that "If there were gods, they would surely be laughing their heads off at the inconsistency of our logic"). It goes so far as to characterize artificial intelligence research as magic ("an act of summoning. The coders casting these spells have no idea what will stumble through the portal"), and to declare that, confronted with what he sees as the menace of AI to our sense of self, we may feel compelled to retreat into the irrational ("the subjective experience of consciousness"), though we may hardly be safe even that way (Mr. Klein citing a survey reporting that AI researchers themselves were fearful of the species being wiped out by a technology they do not control). Indeed, the author concludes that there are only two possible courses: either some kind of "accelerate[d] adaptation to these technologies or a collective, enforceable decision . . . made to slow the development of these technologies." Meanwhile he basically dismisses any skepticism toward his view of present AI research--a meddling with forces we cannot understand, out of Lovecraftian horror--as an ostrich-like burial of one's head in the sand (closing with Erik Davis' remark that "[i]n the court of the mind, skepticism makes a great grand vizier, but a lousy lord").
And so: epistemological nihilism, elevation of the irrational and anti-rational and even obscurantist (he actually talked about this as magic, folks!), misanthropy, Frankenstein complex-flavored Luddism, despair and the brush-off for any alternative view. In other words, same old same old for this take on the subject, old already in Isaac Asimov's day, so old and so pervasive that he made a career of Robot stories opposing the cliché (to regrettably little effect to go by how so many cite an Arnold Schwarzenegger B-movie from 1984 as some kind of philosophical treatise). Same old same old, too, for the discussion of the products of human reason going back to the dark, dark beginnings of the Counter-Enlightenment (on which those who find themselves "overwhelmed" have for so long been prone to fall back).
Still, it is a sign of the times that the piece ran in this "paper of record," especially at this moment in which so many of us (the recent, perhaps exaggerated, excitement over chatbots notwithstanding) feel that artificial intelligence has massively under-delivered, certainly when one goes by the predictions of a decade ago (at least, in their wildly misunderstood and misrepresented popular form, next to which chatbots slapping together scarcely readable prose does not seem so impressive). There is, too, a somewhat more original comparison Mr. Klein makes on the way to dismissing the "skeptical" view, namely between AI and cryptocurrency, raising the possibility of enthusiasm for the former replacing the enthusiasm for the latter as, in the wake of the criminality and catastrophe at FTX, investors pour their money into artificial intelligence.
Personally I have my doubts that AI will become anywhere near the darling of investors that "crypto" was. However much the press looks full of stories of startups raising capital, the plain and simple truth is that Wall Street's love affair with "tech" has never, since the '90s, been anything close to what it was then. (Indeed, investors have preferred that oldest, least tech-y commodity, real estate--hence the bubble and bust and other turmoil of this whole stagnant yet crisis-ridden century.) I do not see that behavior changing now--all the more so as money is getting tighter and the economy slower.
Still, the activity in that area may be worth watching by those curious as to how far things may go, and how fast. Meanwhile I suggest that anyone really concerned for humanity's survival direct their attention not to fantasies of robot rebellion, but to the fact that such scenarios seem a way of overlooking the very numerous, deadly serious and entirely real conflicts among humans of profoundly unequal power and opposed interest (not AI vs. human, but human vs. human with AI in the mix, the more probable danger being the way some humans may use AI against others). I suggest, too, that those concerned for human survival worry less about artificial intelligences generally than about nuclear weapons, fossil fuels, "forever chemicals," etc.--while remembering that besides the dangers caused by the technologies we have, there is the danger of our failing to produce, or properly use, the technologies we need in order to cope with the problems of today.