Making his case that the rapid pace of artificial intelligence research carries potentially dire consequences not only in the long run but in the fairly near term, Ezra Klein has called for an international regime for controlling research in such technology, to the end of slowing it down--this in the absence of some acceleration of effective human adaptation to its results (or even alongside such adaptation).
While I do not share Mr. Klein's view on this particular matter, his premise that something he sees as a problem may be redressed is certainly healthier than the apathy-encouraging doomism that an exceedingly crass and corrupt media delights in inflicting on the public every minute of the day. Still, I have to confess that the record of international cooperation in recent decades on any issue of importance has not been . . . inspiring; that the trend in political life for the moment seems to be away from the circumstances conducive to such cooperation, rather than toward it; and that the nature of this issue may make it particularly tough to address in this manner.
Consider, for example, those problems that have most seemed to demand global cooperation--where a potentially existential danger to human civilization was pressing, and nothing less than the whole world working together would really do in meeting that danger. The two most obvious are nuclear weapons and climate change--with the failure the more pointed because of what states officially committed themselves to in the most public way. Take the text of the 1968 Treaty on the Non-Proliferation of Nuclear Weapons, which has the undersigned declaring "their intention to achieve at the earliest possible date the cessation of the nuclear arms race and to undertake effective measures in the direction of nuclear disarmament," with this explicitly including agreeing to work toward
the cessation of the manufacture of nuclear weapons, the liquidation of all their existing stockpiles, and the elimination from national arsenals of nuclear weapons and the means of their delivery pursuant to a Treaty on general and complete disarmament under strict and effective international control.

Over a half century on, just how seriously would you say "the undersigned" took all this? How seriously would you say they are taking it now--as the bilateral arms control agreements between the U.S. and the Soviet Union of the Cold War era, at best steps in the right direction, fall by the wayside with the most uncertain consequences, and acknowledged nuclear powers like Britain and China expand their arsenals while racing to field a new generation of super-weapons straight out of a comic book super-villain's laboratory? Meanwhile, the latest United Nations report on climate change making the rounds of the headlines makes all too clear (among much else) how little governments have even attempted to do relative to the agreements they have signed, never mind what the scale of the problem would seem to demand.
Where artificial intelligence is concerned, by contrast, we would seem a much longer way from even such gestures in the direction of action as the 2015 Paris Climate Agreement--comparatively few persuaded that any threat it poses is nearly so dire, with, not insignificantly, broad attitudes toward the issue differing greatly across cultures and nations. Significantly, it has often been remarked that in East Asia--a particularly critical region for the development of the technology--attitudes toward AI are more positive than in the West, with a Pew Research Center survey from December 2020 providing some recent statistical confirmation. (In Japan, Taiwan and South Korea, 65 percent or more of those polled said the development of AI was a "good thing" for society, and no more than 22 percent said it was a bad one--with, one might add, the Pew Center's polling in India producing similar numbers. By contrast, in the U.S. 47 percent identified it as a "good thing" and 44 percent as a bad one, with the responses in fellow G-7 members Britain, Germany and Canada similar, and respondents in France even more negative. Even those Westerners most positively disposed toward AI, the Swedes, among whom the positive/negative split was 60-24, still fell short of the poll numbers of all of the Asian countries named here.)
Were major Asian countries such as Japan and India--or a China that plausibly sees the matter similarly--to refuse to sign such an agreement, it would be effectively a dead letter. However, I suspect that we would be unlikely to get anywhere near the point of such a consensus even in the more AI-phobic West. It matters, for example, that positive views on the matter are correlated with education--suggesting that elites, with all this implies for policy, are more open to the idea of AI than the population at large. (In the U.S., for instance, 50 percent of those polled with "more education" thought AI a good thing, as against 41 percent of those with "less education.") It matters, too, that the young are more inclined to positive views, suggesting such attitudes will become more rather than less common in the years ahead. (Those below median age were more likely than those above it to think AI a good thing, by a margin of 49 to 44 percent.) The result is that increasing openness to AI, rather than the opposite, seems to be the trend in the West. (Meanwhile, the trajectory in Asia appears to parallel that in the West by at least as large a margin--in Japan the more educated were 10 percent more likely than the less educated, and those below median age 17 percent more likely than their elders, to think AI a good thing--suggesting that attitudes there will grow more positive still, with all that implies for the region's receptiveness to the technology.)
I suspect it would take a great deal to change attitudes here--especially among, again, the more favorably inclined policymaking groups, the more so as governments and business presently seem to hope for so much from artificial intelligence (which, if experience in areas like climate change is anything to go by, will easily outweigh even very strong reservations among the broad public). Certainly in the United States a Wall Street embrace of AI such as some now speculate about would make any action seriously restricting it all but unimaginable, to go by recent political history, while the deference of officials to such interests aside, concern for plain and simple state power will also be operative around the world. After all, we live in an age in which realpolitik is not declining but intensifying, one dimension of this being intensifying military competition--and susceptibility to technological hype in the military sphere, with at least one world leader of note having publicly expressed the expectation that the leader in artificial intelligence will be "ruler of the world." Such a statement is not to be taken lightly, the more so as he has simply said what others are doubtless thinking, not least because countries facing a situation of geopolitical disadvantage they regard as intolerable (and many do) may see in AI their best hope of "leveling the field." Meanwhile, there is the humbler matter of economic productivity, perhaps the more urgent at a time of economic stagnation at best (or, depending on who does the math, collapsing output) and mounting economic "headwinds," with, again, East Asia worth mentioning here given its stagnant, aging, even shrinking populations. To cite simply the most dramatic example, Japan has seen its working-age population collapse over this past generation, falling from 87 million in 1994-1995 to 73 million in 2021, with no end in sight. For a country in such straits, the prospect of AI-powered productivity coming to its economic rescue cannot be overlooked.
All that, too, will factor into the likelihood of an international AI control regime--or the opposite--and the likelihood of states abiding by it. Moreover, even were it possible to institute an agreement in relative good faith, enforcement would be extremely challenging given the character of AI research. While CNBC remarked that the technology's development was "expensive," the figure cited for developing and training a program like GPT-3 may have been in the vicinity of $4 million.
That's it. Just $4 million. This may mean that the average person cannot afford to finance a "startup" such as this out of pocket, however much they would like to "get in on the ground floor" here. But by the standards of high-technology corporate R & D this is nothing. Repeat, nothing. (According to a report from a few months ago, some $75 billion--enough to fund nearly 20,000 GPT-3s--has been spent on self-driving technology, likely with less to show for it in the way of a useful product.) Still more is it nothing next to the resources governments have and are willing to apply to those ends they prioritize. (The Pentagon is requesting $130 billion for a single year's R & D.)
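For anyone inclined to check the arithmetic, a quick back-of-the-envelope sketch in Python follows--treating the roughly $4 million GPT-3 figure, the $75 billion self-driving total and the $130 billion Pentagon request as the rough, reported estimates they are:

# A rough comparison of reported AI development costs (all figures approximate,
# taken from the sources mentioned above).
gpt3_training_cost = 4_000_000         # ~$4 million to develop and train a GPT-3-class program
self_driving_spend = 75_000_000_000    # ~$75 billion reportedly spent on self-driving technology
pentagon_rd_request = 130_000_000_000  # ~$130 billion requested for a single year's Pentagon R & D

print(self_driving_spend / gpt3_training_cost)   # 18750.0 -- on the order of 20,000 GPT-3s
print(pentagon_rd_request / gpt3_training_cost)  # 32500.0 -- over 30,000 GPT-3-scale efforts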
When the "barrier to entry" is that low it is that much more difficult for an international regime to effectively enforce the rules. Indeed, any number of wealthy individuals, who think little of spending nine figures on grand homes and gigayachts; and many of whom are famed for their transhumanist inclinations; could easily afford to fund such research on a personal basis even where they could not do so via their command of their business empires. Moreover, a program of this kind would be relatively easy to keep secret, given the nature of computer research--in contrast with the physical facilities required for the development, production and deployment of, for example, an arsenal of nuclear missiles, with all the temptation that the possibility of discretion, as well as cheapness, affords.
Of course, it may be that none of these obstacles is wholly insuperable. However, separately, and still more together, they are extremely formidable--enough so to have consistently defeated aspirations to meaningful cooperation in the past. Moreover, if Mr. Klein is right, the proverbial clock is ticking--with perhaps not very much time left before the question of the "opening of the portal" becomes mooted by the actual event.
If one takes that prospect seriously, of course.