Tuesday, April 4, 2023

Julian Jaynes' Theory of Consciousness and Artificial Intelligence Research Today

Back in the 1970s, psychologist Julian Jaynes argued that human consciousness as we know it only emerged in the second millennium B.C. His book on the subject, The Origin of Consciousness in the Breakdown of the Bicameral Mind, details his argument about what consciousness is, and is not; how humans existed and functioned--indeed, functioned to the point of founding civilization, with its agriculture, cities, letters, science, empires--without consciousness; the process by which the "bicameralism" he describes broke down and gave way to a world of conscious human beings of the kind we take for granted; and the legacies of the earlier bicameralism still all around us (especially evident in the major world religions).

I cannot say that Jaynes convinced me of the rightness of his hypothesis--but I nonetheless found the argument intriguing, and still do, with elements of it, at least, seeming to me to have a great deal of explanatory power. Of particular interest to me at the moment is the challenge he presented to our (admittedly fuzzy) notions of consciousness, and especially his argument that consciousness is not a capacity for sensation and observation, nor the ability to learn, recall, think, and reason, all of which he holds to be mainly unconscious processes. Rather, he holds that consciousness is (to use the words in which I summed it up earlier) a model of the world around us that we carry around in our heads, enabling us to introspect, visualize, and take decisions in a self-aware manner. Built upon the capacity of language to create metaphors, it consists of selected ("excerpted") details, arranged according to a sense of space and "spatialized" time, and organized in narrative form, with dissonant information reconciled around an image of the self (as seen by oneself and by others)--and it is only a small part of our mental life.

Considering the matter of artificial intelligence with Jaynes' theory in mind, I find myself thinking that if consciousness is not inherently human, but rather a late development in human history; and if it is a relatively small part of our mental lives, without which we can and do sense, observe, learn, remember, think and reason; then it is less useful than it might seem as a basis for delineating and defining the human as against the non-human--or even for delineating and defining "intelligence." That is to say that a machine which did not display consciousness could indeed be functioning in the ways the human mind does--and that we might less quickly dismiss "a mere autocomplete function" as doing something other than what we ourselves do most of the time.
