In his important rebuttal to the New Economy hype of the 1990s, Robert Gordon pointed to "hand-and-eye coordination" as a significant barrier to the fuller, more thoroughly productivity-enhancing computerization of the economy, while in 2013 Carl Benedikt Frey and Michael Osborne's Future of Employment study likewise identified "perception and manipulation" as one of the most significant "bottlenecks" to the automation of work. Accordingly, the overcoming of these barriers by the designers of artificial intelligence systems--the production of computer-guided machines able to perform such eye-hand coordination-demanding, perception-and-manipulation-intensive tasks adequately and reliably--would betoken a great advance in the prospects for automation, and in the technological state-of-the-art in the field of "AI."
Of course, a generation after Gordon wrote, and a decade after Frey and Osborne wrote, progress here has been . . . unimpressive. Indeed, after a surge of hype in the mid-'10s, during which self-driving cars and the beginnings of a tidal wave of workplace automation were supposed to change everything, just about NONE OF IT HAPPENED (even flipping burgers proved a bigger challenge than appreciated), and the excitement about AI waned these past few years.
However, in recent months OpenAI's new chatbot, built on "GPT-3.5," and then, just this month, the follow-up, GPT-4, got (some) people excited again. Indeed, reading Ezra Klein's New York Times opinion piece on the matter--among many, many others--I was consistently struck by the sense so many had not only of the GPTs' approximation of human capacities, but of a profound acceleration in the rate of progress in that emulation.
It was the kind of acceleration that had me typing "Is GPT-4 the Singularity?" into Google and finding the search engine's autocomplete finishing the thought for me; and after I hit ENTER, the results showed that this is exactly the question lots and lots of people seem to be asking--with many answering it in the affirmative. Calum Chace, for example, argues in Forbes that yes, this is a significant step in that direction.
Looking at all that, there seems little doubt that many are greatly impressed by the chatbots' level of functionality--or at least by what they are being told about it. (An oft-cited talking point, all the more significant for not being so straightforward as it sounds, is GPT-4's scoring in the 90th percentile on the Bar Exam, where the GPT-3.5 version had scored only in the 10th.) But it can also seem that, after the comparative bust of the mid-'10s wave of techno-hype, which may have led them to write off certain kinds of automation (e.g. the eye-hand coordination-requiring stuff) as not worth thinking about, they are easily impressed by developments in an area to which they paid less attention, and about which they have accordingly not become so cynical as they are about, for instance, self-driving cars--with the Silicon Valley-to-Wall Street boosters, their courtiers and claqueurs in the media, and even the "criti-hype" of those who shout "Oh no, it's Skynet! We're doomed! DOOMED!" at every development all making the most of that readiness to believe.