
Artificial Intelligence: Amazing and Alarming, but Is It Intelligence?

Exploring the limits of cognition in a brave new synthetic digital world.

Language itself is a grand illusion because words are always inherently imperfect or incomplete representations of what they purport to stand for or convey. All words, even concrete nouns referring to discrete physical objects, are always simplifications or excerpts of larger conceptual realities. Words can point toward, specify, or differentiate things from one another, allowing us to exchange information, and perhaps even to refine our thought processes. But they can never encompass the totality of our lived perceptions, much less express the ultimate realities that may, like something akin to Plato’s ideal forms, lurk silently behind the limited reality we can access through the senses or process mentally as abstract concepts. This in no way diminishes the immense importance of language to the human species, but merely creating words to denote or specify physical entities or express abstract ideas is not the same as understanding them or comprehending the totality of their essential natures, or even the full scope of their contextual meanings.

For example, if I use a simple concrete noun like “table” in a sentence, it may call forth a mental image of a physical archetype in the mind of a reader—perhaps a plain four-legged wooden table with a flat top, etc. That image accords with “Definition 1” of the word “table” given in many dictionaries: “a piece of furniture with a flat top and one or more legs, providing a level surface on which objects may be placed, and that can be used for such purposes as eating, writing, working, or playing games.” Setting aside for the moment the myriad alternate uses of this common word, e.g., water tables, logarithmic tables, actuarial tables, or laying a congressional bill on the table, the word embraces an infinite number of possible tables. Depending on the degree of specificity needed, describing a particular table requires adding myriad additional details, typically adjectives denoting its color, size, composition, construction, use profile, etc. Despite this wealth of detail, even the most astute, punctilious, and extensive verbal description of a table can never equal the information contained in a 2- or 3-dimensional visual image, such as a clear, sharp photograph or a photorealistic artist’s rendering or sculpture. The proverb “a picture is worth 10,000 words,” attributed to the ancient Chinese, certainly points in that direction, but it’s a vast understatement! That’s because the truncated and interpolated informational content of a verbal description is of a different order than the much broader non-verbal spectrum of information that can be conveyed by looking at a detailed image. Of course, looking at an actual table directly is the best way to apprehend it in all its glorious specificity. But even that impression is attenuated by the limits of the human perceptual system: what reaches consciousness is a representation of reality, not the thing itself.

Is that a table set before me?

To understand what a table truly is, or whether the object placed in front of you meets the criteria of a table, you must consider the essential concept of a table and its defining and limiting elements—namely, what characteristics an object must possess to qualify as a table, and what features would disqualify it or call its status into question. As always, it’s helpful to understand the word’s origin. “Table” derives from the Latin word tabula, meaning a plank, a tablet, or a list (as in tabulation), and was memorably used in the Latin phrase tabula rasa, a blank tablet or slate, the state or condition erroneously attributed to the minds and brains of newborn babies.

While the largest percentage of tables that can be called “furniture” in the broadest sense have four legs, a significant percentage of existing tables have only two legs or one leg (e.g., pedestal tables), typically with suitable lateral extensions or a wider platform at the base of the leg(s) to help stabilize the top surface. A smaller percentage have 3, 5, 6, or more legs, and tables can also be smooth solid structures shaped like cubes or rectangular prisms, with a continuous surface or “apron” supporting the top in lieu of discrete legs. Could a flat metal surface electromagnetically suspended in the air parallel to the floor without any legs or base at all be considered a table? How about a similar electromagnetic field upon which objects can be placed as though they were on a table—in effect, an invisible table? The most fundamental question is whether the definition of a table is functional (if it’s usable as a table, it’s a table), taxonomic (if it’s the same as or similar to any existing table, it’s a table), or philosophical (if it has all the defining features ascribed to tables and no disqualifying features, such as a squishy, vibrating top surface that would prevent you from using it as a table, it’s a table). The point of all this is not to split hairs, settle any disputes, or come down on one side or the other, but to illustrate the complexities and amorphous nature of words and their inherent limitations even when they’re used for what is arguably their most straightforward purpose—naming common physical objects.

Determinism and its discontents

Things get a lot more challenging when dealing with abstract concepts like cause and effect. When we use these two inextricably conjoined words, we tend to think that we’re referring to a specific, definable event, A, the cause, that, after a finite, measurable time interval, brings about the occurrence of event B, the effect. The cue ball strikes the 8 ball, so it goes into the side pocket. But in fact, even this “simple” event only occurs when a constellation of “causes” aligns perfectly: the strike angle and impetus of the cue, the amount of chalk on the end of the stick, the coefficient of friction of the surface of the pool table, etc., must all be within rather tight tolerances for the “effect” to occur. Even so, there will inevitably be instances where “indeterminate causal factors” intervene, preventing a seemingly predicted event from happening. In quantum physics, entangled particles can exhibit correlated “events” simultaneously at great distances, an occurrence that would appear to collapse the fundamental concept of cause and effect. In other words, the very concept of “cause and effect” is a kind of metaphor, a shorthand system for correlating observed events, not a brace of “ultimate reality factors” that exist independently in a pure and inviolate state in the cosmos. The key is that both terms are relative rather than absolute, and their meaning is inherently contextual or assigned, so neither has any meaning without the other! Cause and effect is not a closed loop or web of interconnected loops that dooms us to living in a deterministic universe in which we have no choices or agency. “Cause” in its ultimate sense is, like “infinity,” a word that points toward something beyond itself, something that cannot be fully encapsulated in words. And while the ambit of our “free will” may be much more constrained than we’d like to imagine, it is not zero.

The current kerfuffle over Artificial Intelligence, or A.I., is a striking example of the power and limitations of words. Well before A.I. was a thing, computers and computer systems could outperform humans at a wide variety of tasks well beyond mere information retrieval and mathematical calculations. By the ’90s there were chess-playing programs that could trounce the average duffer every time and often edge out even high-ranked chess players. In 1997, IBM’s Deep Blue supercomputer defeated reigning World Chess Champion Garry Kasparov, a grandmaster brave enough to take up the challenge. Does this mean computers are now “more intelligent” than humans at playing chess? No, but chess programs can process huge numbers of possible scenarios almost instantaneously, all integrated with algorithms that represent the board, the various pieces, and the rules of the game. That gives them a tremendous advantage in calculating the optimal next move, and unlike humans, they are not impeded by emotions or fatigue.
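
To make that concrete, here is a minimal, purely illustrative sketch in Python of the exhaustive game-tree search (minimax) idea that underlies such programs. It is applied to a toy take-away game rather than chess, and the game, function names, and numbers are assumptions chosen for brevity; a real chess engine would add a board representation, an evaluation function, alpha-beta pruning, opening books, and much more.

```python
# A toy illustration of exhaustive game-tree search (minimax), the core idea
# behind chess programs. The game here is deliberately tiny (Nim-style:
# players alternately remove 1-3 stones; whoever takes the last stone wins)
# so the whole search fits in a few lines. This is a sketch for illustration
# only, not code from any actual chess engine.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the side we favor can force a win from this position, else -1."""
    if stones == 0:
        # No stones left: the previous player took the last one and won,
        # so the player now to move has lost.
        return -1 if maximizing else 1
    outcomes = []
    for take in (1, 2, 3):          # enumerate every legal move...
        if take <= stones:
            outcomes.append(minimax(stones - take, not maximizing))  # ...and recurse
    # Our side picks the best outcome; the opponent picks the worst one for us.
    return max(outcomes) if maximizing else min(outcomes)

def best_move(stones: int) -> int:
    """Choose the move that leads to the best guaranteed outcome."""
    legal = [t for t in (1, 2, 3) if t <= stones]
    return max(legal, key=lambda t: minimax(stones - t, maximizing=False))

if __name__ == "__main__":
    # With 10 stones, taking 2 leaves the opponent a losing position (a multiple of 4).
    print(best_move(10))  # prints 2
```

Even this toy version shows where the machine’s advantage comes from: it checks every legal continuation exhaustively, without boredom, bias, or fatigue.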

The term “artificial intelligence” was coined by John McCarthy in his 1955 proposal (!) for a workshop on the subject at Dartmouth, held the following summer, but the expression started gaining popularity only about 20 years ago. In a way, this pervasive new nomenclature makes sense, since the capabilities of current A.I. systems go well beyond what was possible even a decade ago. One can now generate impressive photorealistic still photos and movies with engaging plots, plausible term papers and PhD theses, and deepfake clips of speeches and tirades seemingly delivered by political allies or opponents, simply by inputting and refining sophisticated verbal or written commands. According to the old maxim that “differences in degree, taken beyond certain limits, become differences in kind,” a new designation may be justified. But however sympathetic I may be to how the term “A.I.” is used today, I must call BS on the current penchant for indiscriminate use of the term “intelligence” to refer to anything that seems to mimic cognitive processes, or can extrapolate novel solutions, narratives, and outcomes by accessing petabytes of stored data. The semblance of intelligence, however convincing, useful, and persuasive it may be, is not the same as real intelligence: an open-ended process of decision-making and dynamic response to changing real-life situations that is usually attributed only to sentient creatures, including us.

The word intelligence derives from the Latin nouns intelligentia and intellectus, which, in turn, stem from the verb intelligere, to comprehend or perceive. According to Wikipedia, “Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can also be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.” The artificial intelligence systems embedded in some of the latest A.I.-capable computers and machines can do many of these things, and execute them much more efficiently than ever before. But self-awareness, emotional knowledge, and true artistic creativity (which always has an emotional component) remain elusive. Though current A.I. systems can spit out images, videos, and texts that have a compelling emotional impact, the visceral response and the feelings they generate reside not in A.I. or its output, but in the hearts and minds of the humans who receive them, and in the huge database of human output and experience that has been mined to create them. Indeed, the definition of intelligence, and whether one can even define intelligence, are controversial topics. There is considerable disagreement about which abilities it encompasses, and whether any or all of these elements are quantifiable.

There have been countless attempts to quantify intelligence by testing, notably the Intelligence Quotient (IQ) tests that were first developed in the early 20th century and are still used today, albeit in significantly revised forms. Until recently, all verbal IQ tests were notorious for their social and intellectual bias, reflecting the knowledge base, life experiences, and social constructs of the (mostly) well-educated white folks who created them. However, many psychologists and neuroscience researchers question whether IQ tests, even in their allegedly less biased current forms, can accurately measure intelligence.

Animal vs. Human Intelligence

There have also been myriad attempts to measure animal intelligence, and the literature is replete with unsubstantiated claims that dogs are more intelligent than cats, cows are more intelligent than horses, or that the intelligence of whales surpasses that of many primates. Virtually all these determinations are based on applying anthropocentric criteria—namely, that the more closely an animal’s intelligence resembles human intelligence, the higher its ranking. This completely disregards the fact that a fundamental aspect of intelligence is the ability to make good (that is, life-sustaining) decisions in the context of one’s environment, and that the systems various creatures have evolved to achieve this may differ substantially from our own. In short, when it comes to surviving in Australian rivers, the evolved intelligence and physical form of the platypus may be superior to our own. We humans are ultimately more adaptable to a much wider variety of environments, and that’s why we excel in the broader context of dominating the planet—provided our greed and arrogance don’t destroy the biosphere, making our beautiful and biologically diverse earth uninhabitable before we develop the technological wherewithal to “get out of Dodge” and colonize the universe.

The ultimate question is whether, or when, computers or any other devices or information-processing systems can ever attain something as dynamic, fluid, multifaceted, self-referential, and ultimately emotional as actual consciousness. Only by having some skin in the game, that is, by becoming or being embedded in a mortal, self-replicating life form, can the disembodied “brain” of an A.I.-enabled supercomputer transcend its limitations as a spectacular, ingenious, open-ended tool and attain something akin to real intelligence. Human intelligence is not merely a cold cognitive process based on inspecting, comparing, categorizing, and extrapolating—it is directly tied to emotions ranging from anger to fear to love, and, above all, empathy.

Many years ago, I interviewed the middle-aged CEO of a successful Silicon Valley software company with annual sales of around 40 million dollars. Although I was merely an acquaintance, he decided to reveal his incredible life story, which I will paraphrase here in condensed form:

“When I founded this company decades ago, I realized that I was the only person with the knowledge base and strategic vision to make it happen—I couldn’t delegate that responsibility. The problem was that I’d been diagnosed with Asperger’s Syndrome (which is no longer classed as a discrete disorder, but as part of the general autism spectrum, Ed.). In short, I was utterly lacking in social skills, and I had no empathy. So, I decided to make a study of the elements that comprise empathy, and to create a virtual empathy that I could then project to my employees and associates, much like an actor playing a role or a classic example of ‘fake it till you make it.’ I knew that I’d have to be really convincing to succeed, and I worked very hard at it. Now, decades later, I think I’ve finally begun to develop a few shreds of actual empathy, and I sure hope I can continue moving in that direction.” I was practically moved to tears when I realized what that poor man had to go through, and I think his story is a poignant reminder that human intelligence entails a lot more than just intellectual prowess.

Until artificial intelligence attains a fully human dimension, it will always be subordinate to human intelligence and subject to the vagaries of human behavior and morality. You may rest assured that whatever humans create will ultimately be no better than us, and however artificial intelligence and real intelligence converge going forward, our species, Homo sapiens, bears the sole responsibility for the kind of future that unfolds. A.I. is an immensely powerful tool with great upside potential for propelling advances in medicine and science, and great downside potential for rendering whole categories of jobs obsolete or even undermining human civilization when used by malign actors for intrusive surveillance, disinformation, and ever more monstrous weapons. We can only hope that humans of goodwill and compassion prevail, because the alternatives are truly terrifying.