This greater-than-human intelligence would not only exceed the human capacity for mental activity but, particularly to the extent that it is an AI, be capable of designing still-smarter AI, which could in turn create AI smarter than that, and so on, in a chain of events far outstripping the technology's original human creators – and, in the process, exploding the quantity of intelligence on the planet.
This "intelligence explosion," theoreticians of the Singularity predict, will result in the acceleration of technological change past a point of singularity, analogous to the term's mathematical meaning – a point beyond which the curve of our accelerating technological progress explodes – resulting in
a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century.

The idea of an intelligence explosion was not new in 1993, having quite old roots in science fiction, as Vinge, a science fiction writer himself, acknowledged. (To give but one example, something of the kind happens in the course of the stories collected in Isaac Asimov's I, Robot, with "The Evitable Conflict" describing such a sequence of events quite presciently.) Additionally, as Vinge also acknowledged, he was preceded by the Bletchley Park veteran Irving John Good, who is often credited with giving the idea its first rigorous nonfictional treatment in his 1965 paper, "Speculations Concerning the First Ultraintelligent Machine."
However, Vinge's particular presentation has been highly influential, in part, perhaps, because of its timing, converging as it did with similarly spectacular predictions regarding progress in computing, robotics, genetic engineering and nanotechnology (Eric Drexler's Engines of Creation came out in 1986, Hans Moravec's Mind Children in 1988, Ray Kurzweil's Age of Intelligent Machines in 1990) in what some term the "molecular" or GNR (Genetics, Nanotechnology and Robotics) revolution (which many now take to be synonymous with the Singularity concept). That intellectual ferment would shortly be turbo-charged by the expectations the "tech boom" of the late '90s aroused, and perhaps the pre-millennial buzz in evidence during the approach to the year 2000 as well.
In the years since then, the idea has not only become the predominant theme in "hard" (technologically and futuristically oriented) science fiction, but has made increasing inroads into mainstream discussion of a wide range of topics – for instance, by way of Peter Singer's highly publicized book on robotics and warfare, 2009's Wired for War (reviewed here).
There is, of course, a range of outlooks regarding the implications of a Singularity. Some are negative, like Bill Joy's well-known essay "Why the Future Doesn't Need Us," or, for that matter, the darker possibilities Vinge has touched on, like technological accidents or a turn to hostility toward human beings on the part of those intelligences. However, there are also "Singularitarians" who believe not only that the Singularity is possible, but who commit themselves to working to bring about a benign intelligence explosion. They expect it (especially in combination with advances in biotechnology, nanotechnology and robotics) to bestow on humanity a whole array of epoch-changing, species-redefining goods: unprecedented prosperity as vast artificial brainpower explodes our economic productivity while vaulting over the inevitable resource scarcities and cleaning up our ecological messes, and even a transition to the posthuman, complete with the conquest of death (a theme which also occupies much of Kurzweil's writing).
The Case For The Singularity's Plausibility
As stated before, the argument for the possibility or likelihood of the Singularity is largely founded on expectations of continuing geometric growth in computing power. Singularitarians like Kurzweil commonly extrapolate from the well-known "Moore's Law" – the observation that the number of components that can be fitted onto a chip doubles roughly every two years (or less), with corresponding gains in speed. At the same time, they reject a mystical (and certainly a dualistic, mind-body) view of cognition, treating the basic stuff of the brain as "hardware," the performance of which computers could plausibly overtake.
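To make the arithmetic of that extrapolation concrete, here is a minimal sketch of what a sustained doubling implies over a few decades. The two-year doubling period is simply the popular reading of Moore's Law cited above – an assumption for illustration, not a law of nature:

```python
# Purely illustrative: the growth factor implied by a fixed doubling period.
# The two-year figure is the popular reading of Moore's Law, not a physical law.
doubling_years = 2
for horizon in (10, 20, 40):  # years into the future
    factor = 2 ** (horizon / doubling_years)
    print(f"{horizon} years -> roughly {factor:,.0f}x today's computing power")
```

At that pace, capacity grows roughly thirty-fold in a decade and a thousand-fold in two – the kind of curve on which the Singularitarian case leans.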
This raises the question of what the hardware can do. Kurzweil estimates the human brain's performance as equivalent to 10 petaflops – 10 quadrillion calculations per second. As it happens, the IBM Blue Gene supercomputer passed the 1 petaflop milestone back in 2008, while the fastest computer in the world today, the Fujitsu-built "K" computer, is capable of 8.2 petaflops at present and expected to attain the 10 petaflop mark when it becomes fully operational in November 2012. Nor does this mark the outer limit of present plans, with an exaflop-capable supercomputer (a hundred times as fast as that) popularly projected to appear by 2019.
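For a rough sense of what those figures imply, the following sketch estimates how long a doubling trend would take to reach the thresholds just cited. It assumes the 8.2 petaflop baseline above and borrows the two-year doubling period from the Moore's Law reading mentioned earlier; actual supercomputer progress may run faster or slower:

```python
from math import log2

def years_to_reach(target_pflops, current_pflops=8.2, doubling_years=2.0):
    """Years needed to grow from the current to the target capability,
    assuming a fixed doubling period (the key assumption here)."""
    return doubling_years * log2(target_pflops / current_pflops)

# Figures as cited above: the K computer's 8.2 petaflops as a baseline,
# Kurzweil's 10-petaflop brain estimate, and an exaflop (1,000 petaflops).
for label, target in [("10 petaflops (brain estimate)", 10),
                      ("1,000 petaflops (an exaflop)", 1000)]:
    print(f"{label}: about {years_to_reach(target):.1f} years at a two-year doubling")
```

On that arithmetic the 10 petaflop mark is essentially at hand, while an exaflop would be roughly fourteen years off – which suggests that the popular 2019 projection implicitly assumes top supercomputers improving considerably faster than a simple two-year doubling.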
Of course, it can be argued that merely increasing the capacity of computers will not necessarily deliver the strong AI on which the Singularity is premised. As Dr. Todd Hylton, director of the Defense Advanced Research Projects Agency's SYNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program puts it,
Today's programmable machines are limited not only by their computational capacity, but also by an architecture requiring human-derived algorithms to both describe and process information from their environment.

Accordingly, some propose the "reverse-engineering" of the human brain, the object of several high-profile research programs around the world (SYNAPSE being only one of them), some of which their proponents expect to achieve results in the fairly near term. Henry Markram, director of the "Blue Brain" project (which runs on IBM supercomputing hardware), claims that an artificial brain may be a mere ten years away.
Those arguing for the possibility of human-level or greater artificial intelligence can also point to the improving performance of computers and robots at everything from chess-playing to maneuvering cars along obstacle courses, as in the Defense Advanced Research Projects Agency's 2007 Urban Challenge, as well as the proliferation of technologies like the irritating and clunky voice-recognition software that customer service lines now routinely inflict on callers.
The Case Against The Singularity's Plausibility
Writers like Kurzweil and Moravec can make the argument for the Singularity's likelihood in the first half of the twenty-first century seem overwhelming, but that claim has plenty of knowledgeable critics – including, ironically enough, Gordon Moore of Moore's Law fame himself, given how often his prediction regarding the density of components on semiconductor chips is cited on the idea's behalf.
Some might respond by citing Arthur C. Clarke's "First Law of Prediction," which holds that "When a distinguished but elderly scientist states that something is possible, he is almost certainly right," whereas "When he states that something is impossible, he is very probably wrong." Moore's objection appears more intuitive than anything else. But beyond the epistemological, ontological and other philosophical problems involved in defining consciousness and verifying its existence – raised by John Searle's "Chinese Room" argument, among many others (we don't know what consciousness is; and even if we did create it, how could we be sure we had?) – skeptics have offered several cogent criticisms of Singularitarian assumptions, which generally fall into one of two categories.
The first category of argument posits that it may simply be impossible to develop genuinely human-like artificial intelligence. Physicist Roger Penrose's 1989 The Emperor's New Mind, in which he argues the inadequacy of known physical laws to account for human consciousness – and, by extension, disputes the prospects for human-equivalent machine-based intelligence – is perhaps the best-known argument of this kind.
The second is the position that, while artificial intelligence of this kind may be theoretically feasible, we are unlikely to realize the possibility in the foreseeable future. This could be because the growth in computing power slows down sharply before the point of the Singularity (given our likely reliance on as-yet-undeveloped computer architectures to sustain Moore's Law past – or even through – this decade), or because the "software" of intelligence proves elusive, a subtler problem than building a faster computer. (Certainly those whose expertise is in biological systems rather than computer hardware and software tend to be more skeptical about the possibility, pointing to the complexity of human neurology, as well as how little we actually understand about the workings of the brain.)
Less directly relevant, but telling nonetheless, is the fact that, far from accelerating, our technological development may actually have slowed as the twentieth century progressed, as Theodore Modis and Jonathan Huebner have each argued on theoretical and empirical grounds.
Indeed, there has been a tendency on the part of "Singularitarian" prognosticators to push the dates of their predictions farther and farther into the future as the years go by without the Big Moment happening. I.J. Good declared it "more probable than not that, within the twentieth century, an ultraintelligent machine will be built" (a guess which influenced Stanley Kubrick's 2001). Of course, the century ended and the titular date came and went without anything like the expected result. Later expectations reflect this, Vinge guessing in the early 1990s that the Singularity would occur in the generation-length period between 2005 and 2030, while Kurzweil suggested 2045 as the big date in The Singularity Is Near.
Notably, Kurzweil made more modest but also more specific predictions about what would happen prior to the event, including a list of developments, offered in his 1999 book The Age of Spiritual Machines, for 2009. A good many observers revisited his claims when the time came (myself included). Kurzweil himself discussed the results in a 2008 interview with the newspaper Ha'aretz, and while he remained sanguine about his record, others disagreed, and the same newspaper mentioned him in its New Year's Eve list of recent prediction bloopers. (Indeed, Kurzweil has since conceded on some points, bumping what he had predicted for 2009 regarding virtual reality and automobiles over to 2020, if not later, in an article he published in the New York Daily News in December of that year.)
It may be noteworthy, too, that the production of original ideas about the Singularity seems to have fallen off. Surveys of the field, like the aforementioned book by Peter Singer, increasingly seem to go over and over the same list of authors and works (Vinge, Moravec, Kurzweil, Joy, et al.), most of which are a decade old at this point, rather than the rush of new ideas one might expect were we racing toward a post-Singularity reality. (Indeed, I have found the vigor of the critics more striking of late.) Even science fiction has taken less interest, the genre actually seeing something of a backlash against the concept.
In short, there is abundant reason to think not just that the claims made for the Singularity are open to question, but that the kinds of technological change on which those predictions are founded are coming along rather more slowly than many of its advocates argue. Yet even if the last decade has tended to validate the skeptics, that is a far different thing from saying it has discredited the idea – especially given how much can still change in the time frame Kurzweil has suggested for the Big Event (to which he has stuck, even after moderating his smaller predictions). At the very least we can continue to expect faster computers, more sophisticated software and more versatile robotics – and that, for as long as human beings continue to dream of transcendence, some will continue to envision technology as the means by which we attain it.