The AI Delusion, by Gary Smith (Oxford, 249 pp., $27.95)

Artificial intelligence may prove more dangerous as it advances, but it will never generate actual intelligence so long as the basic assumptions of the field remain unchanged. In The AI Delusion, Gary Smith reveals why, and assesses the technology's problems from an economist's perspective.

AI's basic problem concerns how computers process symbols, from the series of English letters one types on a keyboard to, more fundamentally, the strings of 0's and 1's into which those letters are encoded. The meanings of these symbols—indeed, even the fact that they are symbols—are not something the computer knows. A computer no more understands what it processes than a slide rule comprehends the numbers and lines written on its surface. It's the user of a slide rule who does the calculations, not the instrument itself. Similarly, it's the designers and users of a computer who understand the symbols it processes. The intelligence is in them, not in the machine.

As Smith observes, a computer can be programmed to detect instances of the word "betrayal" in scanned texts, but it lacks the concept of betrayal. Therefore, if a computer scans a story about betrayal that happens not to use the actual word "betrayal," it will fail to detect the story's theme. And if it scans a text that contains the word but is not in fact about betrayal, it will erroneously classify that text as a story about betrayal. Because the contexts in which the word "betrayal" appears roughly correlate with the contexts in which the concept is deployed, the computer will loosely simulate the behavior of someone who understands the word—but, says Smith, to suppose such a simulation amounts to real intelligence is like supposing that climbing a tree amounts to flying.
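A toy sketch (mine, not Smith's; the sample sentences are invented) makes the gap between string matching and concept possession concrete:

```python
def mentions_betrayal(text: str) -> bool:
    """Flag texts containing the token 'betrayal' -- no concept, just a string test."""
    return "betrayal" in text.lower()

# A story plainly about betrayal that never uses the word:
story = "Brutus embraced Caesar warmly, then drove the dagger home."
# A sentence that uses the word without telling such a story:
mention = "Her thesis catalogs how often 'betrayal' appears in Victorian novel titles."

print(mentions_betrayal(story))    # False: the theme goes undetected
print(mentions_betrayal(mention))  # True: a false positive
```

The detector tracks the token, not the theme, so it fails in exactly the two directions Smith describes.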

Similarly, image-recognition software is sensitive to fine-grained details of colors, shapes, and other features recurring in large samples of photos of various objects: faces, animals, vehicles, and so on. Yet it never sees something as a face, for example, because it lacks the concept of a face. It merely registers the presence or absence of certain statistically common elements. Such processing produces bizarre outcomes, from misidentifying a man merely because he's wearing oddly colored glasses to identifying a simple series of black and yellow lines as a school bus.

It would miss the point to suggest that further software refinements can eliminate such glitches, because the glitches demonstrate that software is not doing the same kind of thing we do when we perceive objects. The software doesn't grasp an image as a whole or conceptualize its object but merely responds to certain pixel arrangements. A human being, by contrast, perceives an image as a face—even when he can't make out individual pixels. Sensitivity to pixel arrangements no more amounts to visual perception than detecting the word "betrayal" amounts to possessing the concept of betrayal.

The implications of AI’s shortcomings, Smith shows, are not merely philosophical. Failure to see how computers merely manipulate symbols without understanding them can have serious economic, medical, and other practical consequences. Most examples involve data mining—poring over vast bodies of information to detect trends, patterns, and correlations. The speed of modern computers greatly facilitates this practice. But as Smith maintains, the conclusions that result are often fallacious, and the prestige that computers have lent data mining only makes it easier to commit the fallacies.

In any vast body of data, many statistical correlations are bound to exist due to coincidence—they don't merit special attention. For example, a correlation may exist between changes in temperature in an obscure Australian town and price changes in the U.S. stock market. A person would know that the two events have no connection. Human beings, unlike computers, can conceptualize the phenomena in question, and thereby judge that—given the nature of Australian weather patterns and of U.S. stock prices—no plausible causal link connects them. However, because a computer merely processes symbols without having the concepts we associate with them, it can't do this. Accordingly, it cannot distinguish a meaningful correlation from a coincidental one.
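A small simulation (my sketch, not Smith's) shows why coincidental correlations are inevitable: mine enough unrelated random series and one of them will correlate impressively with any target, though no causal link exists.

```python
import random

random.seed(0)
n, num_series = 20, 1000

# A "target" series -- a stand-in for, say, daily stock-price changes.
target = [random.gauss(0, 1) for _ in range(n)]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Data-mine 1,000 independent random series for the best match to the target.
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(n)], target))
    for _ in range(num_series)
)
print(f"strongest correlation found by chance: {best:.2f}")
```

Every series here is noise by construction, yet the best match looks striking. A person who knew what the numbers represented would dismiss it; a data-mining program, lacking the concepts, cannot.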
