TMA Associates

Reprinted from Speech Strategy News, Nov 2011

Editor's Notes:

Artificial Intelligence versus Computer Intelligence
William Meisel

John McCarthy, the influential technology pioneer who died in October (p. 37), coined the term Artificial Intelligence (AI) in 1956 and defined it as “the science and engineering of making intelligent machines.” Since “intelligent” appears in the definition itself, it sidesteps the challenge of defining what we mean by “intelligence.” Most would agree that AI means having a computer emulate things usually associated with human abilities, e.g., understanding language at the level humans do. There is presumably no requirement that a computer evincing AI perform its magic with the same mechanisms brains use. Alan Turing famously proposed his “Turing test” in 1950, suggesting that, if a human interacting with a machine through language couldn’t tell whether it was a human or a machine, the machine could be said to “think.”


Whatever defines human intelligence, it is perhaps not the goal we should be targeting with computers. We want computers to help humans, not emulate them. How does a machine understand the subtle implications of a statement like Shakespeare’s “That which we call a rose by any other name would smell as sweet” in the context of Juliet’s love for Romeo despite his being a member of the wrong family? That level of understanding may be a legitimate research goal, but should it be the goal of commercial products? Today’s speech recognition and “natural language understanding” are achieved largely through statistical means that, it could be argued, have little “understanding” of their conclusions.


To the degree that we want computers to help with tasks such as speech understanding, we are best served when we recognize the difference between computer processing and human processing and take advantage of what the computer does best. Computers do some things well beyond human capabilities, e.g., storing maps that include almost every street in many countries, with little danger of forgetting what they have “learned.” Computers are also good at certain types of pattern recognition from examples. They are probably better at comparing fingerprints than humans, for example, and certainly faster at comparing a fingerprint to a large database of fingerprints. This is more than just comparing images bit-by-bit; key features of the fingerprints that humans would call “patterns” are used to narrow the search. Computers also excel at analyzing large databases to find the most consistent explanation of the data and at presenting that explanation in a form useful to humans, as well as at retrieving specific information from large databases, or at least narrowing the search. Let’s call capabilities of this nature “Computer Intelligence” (CI).
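To make the fingerprint example concrete, here is a minimal sketch in Python of feature-based narrowing. The feature names, tolerance, and database below are invented for illustration; a real system uses far richer features and a detailed matcher after this pruning step.

    # A toy sketch of feature-based narrowing: cheap, human-inspired
    # features prune the database before any expensive detailed comparison.
    # All names, values, and the tolerance are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class PrintFeatures:
        pattern: str       # coarse class an examiner would name: "loop", "whorl", "arch"
        ridge_count: int   # a simple numeric summary of the print

    def coarse_match(query, candidate, ridge_tolerance=3):
        """Cheap test used only to prune the search, not to decide a match."""
        return (query.pattern == candidate.pattern
                and abs(query.ridge_count - candidate.ridge_count) <= ridge_tolerance)

    def narrow_search(query, database):
        """Return the IDs worth passing to a detailed (expensive) comparison."""
        return [pid for pid, feats in database.items() if coarse_match(query, feats)]

    db = {"A": PrintFeatures("whorl", 14),
          "B": PrintFeatures("loop", 9),
          "C": PrintFeatures("whorl", 15)}
    print(narrow_search(PrintFeatures("whorl", 13), db))   # -> ['A', 'C']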


Most have forgotten, or never knew about, a disagreement early in AI development over how computers should approach tasks we associate with human intelligence. My 1972 book, Computer-Oriented Approaches to Pattern Recognition, took the position, in both title and content, that we should use statistical methods to recognize patterns with computers, learning from examples of patterns and their classifications, rather than trying to copy the way humans did things. This general approach is hardly controversial today; it has produced breakthroughs such as the Hidden Markov Models and statistical language models at the core of modern speech recognition technology.
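As a toy illustration of that statistical position (my own sketch here, not an example from the book): a program can label a new pattern purely from labeled examples, with no attempt to model how a human would recognize it. The two-feature data below is invented.

    # A minimal statistical classifier: learn per-class average feature
    # vectors from labeled examples, then label a new pattern by the
    # nearest average.
    import math
    from collections import defaultdict

    def train(examples):
        """examples: list of ((x, y) features, label); returns class centroids."""
        sums = defaultdict(lambda: [0.0, 0.0])
        counts = defaultdict(int)
        for (x, y), label in examples:
            sums[label][0] += x
            sums[label][1] += y
            counts[label] += 1
        return {label: (s[0] / counts[label], s[1] / counts[label])
                for label, s in sums.items()}

    def classify(point, centroids):
        """Assign the label whose centroid is closest to the point."""
        return min(centroids, key=lambda label: math.dist(point, centroids[label]))

    examples = [((1.0, 1.2), "class1"), ((0.8, 1.0), "class1"),
                ((3.0, 2.9), "class2"), ((3.2, 3.1), "class2")]
    print(classify((0.9, 1.1), train(examples)))   # -> class1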


To be clear, CI often requires human intelligence as input to its analysis. Humans recognized that patterns in fingerprints such as “loops, whorls, and arches” could be used to summarize an otherwise complex image. Computer scientists later used mathematical representations of these features in fingerprint classification software. The human insight that speech is composed of a finite number of phonemes was used in creating today’s speech recognition systems. And human transcriptions of speech are used to create most of the statistical language models used in speech-to-text transcription.
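To sketch that last point: a statistical language model is, at its simplest, a set of word-sequence counts taken from human transcriptions, used to score which candidate output is more probable. The tiny “corpus” below is invented, and real systems train on vast transcript collections with far more sophisticated smoothing.

    # Count word bigrams in (hypothetical) human transcriptions, then use
    # the counts, with simple add-one smoothing, to compare two candidate
    # recognition outputs.
    from collections import Counter

    def train_bigrams(transcriptions):
        bigrams, unigrams = Counter(), Counter()
        for sentence in transcriptions:
            words = ["<s>"] + sentence.lower().split()
            unigrams.update(words)
            bigrams.update(zip(words, words[1:]))
        return bigrams, unigrams

    def score(sentence, bigrams, unigrams):
        """Relative likelihood of the word sequence under the bigram model."""
        words = ["<s>"] + sentence.lower().split()
        vocab = len(unigrams)
        p = 1.0
        for prev, word in zip(words, words[1:]):
            p *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        return p

    corpus = ["recognize speech", "recognize speech easily", "wreck a nice beach"]
    bigrams, unigrams = train_bigrams(corpus)
    print(score("recognize speech", bigrams, unigrams) >
          score("wreck a nice speech", bigrams, unigrams))   # -> True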


Can CI move closer to AI? Is the classic science fiction image of the intelligent robot, be it friendly or evil, likely to become real someday? Will computers simulate intelligence so well that they can become beloved “pets” or “friends”? In the sense of the Turing test, I suspect we will see behavior that would be difficult to distinguish from human interaction if we ask the computer typical questions we would ask a human. It will fail the Turing test only in that it will be able to answer obscure questions consistently, beyond the capability of any single human (unless it is deliberately designed to limit its knowledge). “Assistant” features on mobile phones will target “natural” interaction with users, but by methods that are not truly analogous to human thought.


It is questionable whether the goal of AI should be to have the computer truly “understand” humans and react as they would. How can a machine meaningfully react to a question such as “Are you hungry?” or “Do you like to play tennis?” A computer can address the Turing test with canned answers to such questions, but true understanding of human feelings requires a human body and years of experience living in it, and the best way to manufacture a human is the old-fashioned way. And who wants to raise a computer from “birth” for 16 years or so before it is useful?


CI is a more useful concept than the ambiguous AI. CI can continually expand its capabilities and its accessibility to humans through continuing hardware and software innovation. CI has the potential to enhance the human experience if humans can use it to add to their intrinsic reasoning capabilities. Navigation systems, search engines, and even computer games allow us to do more by augmenting our abilities.


The major limitation of CI as an expansion of human capabilities is giving people easy access to the results of computer analysis. Advances in the user interface, such as speech technology, allow us a tighter connection with CI. Mobile devices and network connectivity make it possible, in effect, to take huge computing capability wherever we go. As these trends continue, computer intelligence will expand our human intelligence rather than mimic it.