AI Cognition – The Next Nut To Crack

By MIKE MAGEE

OpenAI says its new GPT-4o is “a step towards much more natural human-computer interaction,” capable of responding to your inquiry “with an average 320 millisecond (delay), which is similar to a human response time.” So it can speak human, but can it think human?

The “concept of cognition” has been a scholarly football for the past two decades, centered primarily on “Darwin’s claim that other species share the same ‘mental powers’ as humans, but to different degrees.” But what about genAI-powered machines? Do they think?

The first academician to attempt to define the word “cognition” was Ulric Neisser, in the first-ever textbook of cognitive psychology, published in 1967. He wrote that “the term ‘cognition’ refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation…”

The word cognition is derived from “Latin cognoscere ‘to get to know, recognize,’ from assimilated form of com ‘together’ + gnoscere ‘to know’ …”

Knowledge and recognition would not seem to be highly charged terms. And yet, in the years since Neisser’s publication, there has been a progressively intense, and sometimes heated, debate between psychologists and neuroscientists over the definition of cognition.

The focal point of the disagreement has (until recently) revolved around whether the behaviors observed in non-human species are “cognitive” in the human sense of the word. In recent years the discourse has bled over into the fringes, to include the belief by some that plants “think” even though they possess no nervous system, or that ants communicating with one another in a colony are an example of “distributed cognition.”

What scholars in the field do seem to agree on is that no definition of cognition exists that will satisfy everyone. But most agree that the term encompasses “thinking, reasoning, perceiving, imagining, and remembering.” Tim Bayne, PhD, a Melbourne-based professor of philosophy, adds that these various capacities must be able to be “systematically recombined with each other,” not simply triggered by some provocative stimulus.

Allen Newell, PhD, a professor of computer science at Carnegie Mellon, sought to bridge the gap between human and machine cognition in a 1958 paper that proposed “a description of a theory of problem-solving in terms of information processes amenable for use in a digital computer.”

Machines have a leg up in the company of some evolutionary biologists who believe that true cognition involves acquiring new information from various sources and combining it in new and unique ways.

Developmental psychologists carry their own unique insights from observing and studying the evolution of cognition in young children. What exactly is evolving in their young minds, and how does it differ from, yet eventually lead to, adult cognition? And what about the explosion of screen time?

Pediatric researchers, confronted with AI-obsessed youngsters and worried parents, are coming at it from the opposite direction. With 95% of 13- to 17-year-olds now using social media platforms, machines are a developmental force, according to the American Academy of Child and Adolescent Psychiatry. The machine has risen in status and influence from a sideline assistant coach to an on-field teammate.

Scholars admit, “It is unclear at what point a child may be developmentally ready to engage with these machines.” At the same time, they are forced to admit that the technological tidal wave leaves few alternatives: “Conversely, it is likely that completely shielding children from these technologies may stunt their readiness for a technological world.”

Bence P. Ölveczky, an evolutionary biologist at Harvard, is pretty certain about what cognition is and is not. He says it “requires learning; isn’t a reflex; depends on internally generated brain dynamics; needs access to stored models and relationships; and relies on spatial maps.”

Thomas Suddendorf, PhD, a research psychologist from New Zealand who specializes in early childhood and animal cognition, takes a more fluid and nuanced approach. He says, “Cognitive psychology distinguishes intentional and unintentional, conscious and unconscious, effortful and automatic, slow and fast processes (for example), and humans deploy these in diverse domains from foresight to communication, and from theory-of-mind to morality.”

Perhaps the last word on this should go to Descartes, who believed that humans’ mastery of thoughts and feelings separated them from animals, which he considered “mere machines.”

Were he with us today, witnessing generative AI’s insatiable appetite for data, its hidden recesses of learning, the speed and power of its insurgency, and human uncertainty about how to turn the thing off, perhaps his judgment of these machines would be less disparaging; more akin to that of Mira Murati, OpenAI’s chief technology officer, who announced with some degree of understatement this month, “We are looking at the future of the interaction between ourselves and machines.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)
