Of course, in the mid-'10s there was a new burst of excitement based on progress in exactly the key area, neural nets and their training to recognize patterns. At my most optimistic I admitted to
find[ing] myself increasingly suspecting that Kurzweil was accurate enough in guessing what would happen, and how, but, due to overoptimism about the rate of improvement in the underlying technology, was off the mark in regard to when by a rough decade.

Suspecting it enough, at least, to be more watchful of a scene to which I had paid little heed for many years--with, I thought, the performance of self-driving cars a particularly good indicator of just how substantial, or insubstantial, the progress was. Whether or not an artificial intelligence could autonomously, reliably, safely navigate a large vehicle through a complex, variable, unpredictable urban environment seemed to me a meaningful test of progress in this area--and its passing the test reason to take the discussion seriously.
Alas, here we are in 2023 with, even after years of extended time on the test, such vehicles decidedly not passing it, and not showing much sign of doing so very soon--with the hype, in fact, backfiring here. Meanwhile other technologies that might have afforded the foundations of later advance have similarly disappointed (the carbon nanotube-based computer chip remaining elusive, and nothing else any more likely to provide a near-term successor to silicon). Indeed, the pandemic drove the fact home. In a moment when business strongly desired automation there was just not much help on offer from that direction, with the event in fact a reminder of how little replacement there has yet been for human beings physically showing up to their workplace to perform innumerable tasks with their own hands.
The resulting sense of a very strong, very recent bust made it very surprising when, these past few months, the media started buzzing again about AI--and a tribute to the unsophistication of our journalists, given the cause of the fuss. Rather than some breakthrough in computer chip design promising much more powerful computers with all they might do, or in the fundamental design of neural nets such as might make them radically more efficient learners (enabling them to overcome such critical "bottlenecks" to automation as "perception and manipulation" or "social intelligence"), the excitement is about "chatbots," and in particular a chatbot based less on fundamental design innovation in the underlying neural network than on simply making the network bigger for the sake of "language prediction." Indeed, those less than impressed with the chatbots term them, unflatteringly but so far as I understand it not unfairly, glorified autocomplete functions.
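(For the reader wondering what "language prediction" amounts to in practice, the toy sketch below--my own illustration, not anyone's actual implementation--shows the idea in miniature: given the words so far, guess the most likely next word and repeat. The real chatbots do this with enormous neural networks trained on vast amounts of text rather than a simple count table, but the task being performed is the same.)

    # A toy "autocomplete": predict the next word as whichever word most often
    # followed the current one in the training text, then repeat. Illustrative
    # sketch only; actual chatbots use large neural networks rather than count
    # tables, but the underlying task--next-word prediction--is the same.
    from collections import Counter, defaultdict

    training_text = "the market is up the market is down the market is up again"

    # For each word, count which words followed it.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def autocomplete(prompt, length=5):
        """Extend the prompt by repeatedly choosing the most likely next word."""
        out = prompt.split()
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(candidates.most_common(1)[0][0])
        return " ".join(out)

    print(autocomplete("the market"))  # prints: the market is up the market is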
Of course, it may well be that a better autocomplete function will surprise us with just what it can do. We are, for example, being told that it can learn to code. Others go further and say that such chatbots may be so versatile as to "destroy" the white-collar salariat in a decade's time--or even warn that artificial intelligence research has reached the point of the Lovecraftian horror story, "meddling with forces we cannot possibly comprehend."
In that there seems to be the kind of hype that creates bubbles, though admittedly many other factors have to be considered, not all of which fit the bubble-making pattern. After all, this has been a period of rising interest rates, tightening monetary policy and falling asset values, relative to liabilities that have come to be piled very high indeed amid cheap borrowing and economic catastrophe--hardly the sort of thing that encourages speculative euphoria. Still, as banks start failing it remains to be seen just how stalwart the central bankers will prove in their current course (a course on which, to judge by their "stress tests" for the banks, they never seem to have planned to go, and which they doubtless follow only with great reluctance).

Moreover, there is, just as before, a very great deal of money out there that has to be parked somewhere, and still very few attractive places to put it (this is, for instance, why so much money was poured into those crappy mortgage-backed securities!). And even as the press remains addicted to calling anyone with a large sum of money or high corporate office a "genius" (Ken Lay was a "genius," Jeffrey Epstein was a "genius," Elizabeth Holmes was a "genius," Sam Bankman-Fried was a "genius," on and on, ad nauseam), such people consistently prove to be the extreme opposite, with that going all the more for those who so eagerly jump on the bandwagon every time. It matters here that said persons tend to be perfectly okay with the prioritization of producing high share values over the creation of an actual product, long central to such scenes ("The object of Fisker, Montague, and Montague was not to make a railway to Vera Cruz, but to float a company"); expect that if worse comes to worst there will always be a greater fool; and believe that if they run into any real trouble they will have the benefit of a bailout one way or the other--usually rightly.
The result is that, even if the money reportedly beginning to pour into artificial intelligence really is a case of people getting excited over an autocomplete function--and even if, I will add, there is very little prospect of anything here really comparing with the insanity of the '90s dot-com bubble for the time being--I still wouldn't advise betting against there being a good deal of action at this particular table in the global casino in the coming months and years.