Superintelligent machines have long been a trope in science fiction. Recent advances in AI have made them a topic for nonfiction debate, and even planning. And that makes sense. Although the Singularity is not imminent (you can go ahead and buy that economy-size container of yogurt), it seems to me almost certain that machine intelligence will surpass ours eventually, and quite possibly within our lifetimes.
Arguments to the contrary don’t seem convincing. Kevin Kelly’s recent essay in Backchannel is a good example. His subtitle, “The AI Cargo Cult: The Myth of a Superhuman AI,” implies that AI of superhuman intelligence will not occur. His argument centers on five “myths”:
- Artificial intelligence is already getting smarter than us, at an exponential rate.
- We’ll make AIs into a general purpose intelligence, like our own.
- We can make human intelligence in silicon.
- Intelligence can be expanded without limit.
- Once we have exploding superintelligence it can solve most of our problems.
He rebuts these “myths” with five “heresies”:
- Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
- Humans do not have general purpose minds, and neither will AIs.
- Emulation of human thinking in other media will be constrained by cost.
- Dimensions of intelligence are not infinite.
- Intelligences are only one factor in progress.
This is all fine, but notice that even if all five “myths” are false, and all five “heresies” are true, superintelligence could still exist. For example, superintelligence need not be “like our own,” “human,” or “without limit”; it only needs to outperform us.
The most interesting item on Kelly’s lists is heresy #1, that intelligence is not a single dimension, so “smarter than humans” is a meaningless concept. This is really two claims, so let’s consider them one at a time.
First, is intelligence a single dimension, or are there different aspects or skills involved in intelligence? This is an old debate in human psychology, on which I don’t have an informed opinion. But whatever the nature and mechanisms of human intelligence might be, we shouldn’t assume that machine intelligence will be the same.
So far, AI practice has mostly treated intelligence as multi-dimensional, building distinct solutions to different cognitive challenges. Perhaps this is fundamental, and machine intelligence will always be a bundle of different capabilities. Or perhaps there will be a future unification of some sort, to create a single capability that can outperform people on all or nearly all cognitive tasks. At this point it seems like an open question whether machine intelligence is inherently multi-dimensional.
The second part of Kelly’s claim is that, assuming intelligence is multi-dimensional, “smarter than humans” is a meaningless concept. This, to put it bluntly, is not correct.
To see why, consider that playing center field in baseball requires multi-dimensional skills: running, throwing, distinguishing balls from strikes, hitting accurately, hitting with power, and so on. Yet every single major league center fielder is vastly better than I am at playing center field, because they dominate me by far in every one of the component skills.
Like playing center field, intelligence may be multi-dimensional, and yet one entity can be more intelligent than another by being superior in every dimension.
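To make this concrete, here is a minimal sketch of how “smarter than humans” can be well defined even when intelligence is a vector of skills rather than a single number. The comparison rule is dominance: one entity outranks another if it scores at least as well on every dimension and strictly better on at least one. The skill names and scores below are hypothetical, chosen only to mirror the center-fielder example.

```python
# Illustrative sketch: multi-dimensional "better than" via dominance.
# An entity dominates another if it is at least as good on every skill
# dimension and strictly better on at least one. All names and numbers
# here are hypothetical.

def dominates(a: dict, b: dict) -> bool:
    """True if `a` is at least as good as `b` on every dimension
    and strictly better on at least one."""
    assert a.keys() == b.keys(), "compare the same skill dimensions"
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

pro_center_fielder = {"running": 95, "throwing": 92, "pitch_recognition": 90,
                      "contact_hitting": 88, "power_hitting": 85}
me = {"running": 40, "throwing": 35, "pitch_recognition": 30,
      "contact_hitting": 25, "power_hitting": 20}

print(dominates(pro_center_fielder, me))  # True: better on every dimension
print(dominates(me, pro_center_fielder))  # False
```

Note that dominance gives only a partial order: when neither side dominates, there is no clear answer to “which is smarter.” That ambiguous regime is exactly where humans and machines stand today, as the next paragraph argues.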
What this suggests about the future of machine intelligence is that we may live for quite a while in a state where machines are better than us at some aspects of intelligence and we are better than them at others. Indeed, that is the case now, and has been for years.
If machine intelligence remains multi-dimensional, then machines will surpass our intelligence not at a single point in time, but gradually, and in more and more dimensions of intelligence.