Saturday, June 16, 2012

There is no singularity

To all of you who are waiting for "the singularity", whether it be a Skynet-like hellscape or a Star Trek utopia: I'm sorry to burst your bubble. There ain't gonna be one.

I've been hearing about this for a while now. "The technological singularity is the hypothetical future emergence of greater-than-human intelligence through technological means" - leading to ever-greater intelligences, based on the idea that if humans can create an intelligence greater than ourselves, then that intelligence will create an intelligence greater than itself, and so forth, with the level of intelligence increasing exponentially in an unpredictable "intelligence explosion."

Not only will there be no singularity in the 21st century; I'm confident we will not even be able to create AIs as smart as us, in any time frame we can imagine. While it seems premature to talk about the 22nd century, I doubt we'll create machines that exceed our intelligence within the next thousand years--if such a thing is even possible.

I've been programming since I was 11, and I have never seen anything remotely comparable to human intelligence in a computer, or even animal intelligence. Virtually all the clever things our computers do are merely things their creators designed them to do. And everything I've heard about the human brain leads me to believe that we cannot replicate it in the foreseeable future.

A lot of people are misinformed about the brain. Some people still seem to believe the 19th-century notion that the brain starts at birth as a blank slate that, through the magic of "intelligence", gets filled with stuff. In reality the brain has a particular and intricate structure, much like a modern computer chip. There are dozens of parts, each with numerous subsystems designed to do particular tasks. All of these parts are in place at birth and are required for us to do what we do. We really have no clue how to replicate what the brain does; in fact, for the most part we have very little idea what each of the parts does.

And yet they keep making movies and video games with all these AIs in them. Usually these AIs are developed by some kind of super-genius, or by a shady government agency through unknown means. TV AIs usually exhibit emotions, curiosity, music appreciation, and other things that are really human traits, and the traits the writers insert to make an AI seem more machine-like are either highly implausible or mere human traits in disguise. The most implausible Hollywood AI trait is the desire to kill everyone, a popular cliché that I need not dignify by discussing further.

For instance, Data on Star Trek appeared not to have any emotions, yet he clearly demonstrated loyalty, curiosity, desire (for how can one make choices without some desired outcome?), and other je ne sais quoi traits that may be hard to name but seem distinctly human. One of his more implausible traits was that he would listen to several musical scores at once. Since there is no easy way to distinguish which notes belong to which piece, it's not really possible to enjoy several pieces at once, even if your positronic brain contains five "music appreciation units" for some reason (or one fancy multiplexed unit).

Well, in case you didn't know this already, Hollywood is full of totally clueless writers. In novel writing, I always hear authors say "write about what you know," because this makes your stories as plausible and genuine as possible. Hollywood, however, is full to the brim with people writing, acting and directing about topics they know nothing about*. So in the land of Hollywood and sci-fi we have these clichés about AIs, computers, hacking, science (and so on) that are nonsensical fantasy, with virtually no connection to reality.

* I recognize the irony that I am writing an opinion piece about AI when I am not an expert in AI nor in the human brain. But hey, it's not like anyone reads my blog.

Even if we could make a computer do all the useful things a human does--walking, talking, object recognition, navigation, reliable context-sensitive voice recognition, language manipulation and problem-solving in general--we'd still be only halfway there. The computer would still just be a tool. It wouldn't really be "an intelligence", or rather it wouldn't be what we think of as an intelligence, that is, an intelligent free agent. To be an AI like we see in the movies, it would also need a suite of emotions (from joy to embarrassment to boredom), curiosity, a sense of purpose, self-awareness (note that no one seems to know what, exactly, self-awareness is anyway), and of course a generalized ability to learn (and there are many different ways that we learn). The key point is, there is no magic formula for any of this. Each of the hundreds or thousands of human traits, if it can be replicated at all, must be individually analyzed and studied before we can replicate it, and we'll probably only produce a sucky cheap knock-off of each trait at first. And once we have figured out how to replicate each of the parts, we'll still have to spend a century or two figuring out how to put the parts together.

If it's even possible.

I mean, why should it be possible? It really shouldn't be possible. Because I have a soul. Or more to the point, I am a soul. I don't know what a soul is, I just know that I am one. And surely there is no way to duplicate one. I know that the brain is a machine, and after much soul-searching I have to admit that I might be 99% machine. But isn't the 1% important? Without the soul, a machine can never really be alive.

I find the difference between humans and computers fascinating. The supposed similarity between brains and computers gets a lot of attention, but the differences deserve more.

Consider some of the things humans can do. We can casually recognize objects, animals and people from any angle almost instantly. We can learn language merely by hearing it (as children), we can effortlessly navigate our bodies around obstacles, and we can solve arbitrary riddles (some better than others). The smartest people in the world have toiled for decades trying to figure out how to make computers and robots do the same things, with only limited success, and some of these tasks still require supercomputers. Possibly the most impressive "human" thing a computer has ever done is play Jeopardy, which IBM's Watson supercomputer played quite well, albeit using 2,880 processor cores and 16 TB (that's 16,000 GB) of RAM across 90 server machines to do it. And note that Watson does not go out and study encyclopedias of its own volition; every action it takes is chosen within a framework carefully programmed by a team of humans.

Even the human ability to do mathematics is not really something computers are any good at. Computers are good at computation. Pure mathematics, on the other hand, is a field in which computers have limited abilities. You can program them to manipulate symbols, but the things researchers do in mathematics are creative, exploratory, fascinating activities that computers don't do at all.

And yet we humans can never, ever learn to instantly add 10-digit numbers, let alone multiply them or take their square roots or do other numeric tasks that even the simplest computer chips can do instantaneously, with perfect reliability, using almost no energy. How's that? Manipulating numbers is trivially simple compared to the kinds of things humans are good at; indeed, it is the only thing that the earliest mechanical computers were good for. Even analog computers, used (for example) in World Wars I and II to help aim big guns, are better at arithmetic than we are.

[Figure: a 1-bit full adder circuit. Chain a few together and you can add binary numbers. It may not look simple, but there are probably protein molecules more complicated than this.]
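If you're curious just how little machinery is involved, here's a minimal sketch of the same idea in Python--my own illustration, not real circuit code: a full adder for one bit, chained into a ripple-carry adder.

# A 1-bit full adder expressed in Python (an illustration, not hardware).
def full_adder(a, b, carry_in):
    # The sum bit is 1 when an odd number of inputs are 1 (XOR of all three).
    s = a ^ b ^ carry_in
    # The carry-out is 1 when at least two of the three inputs are 1.
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

# Chain full adders to add two bit-lists (least significant bit first),
# just like chaining the circuits in the figure.
def ripple_carry_add(x_bits, y_bits):
    carry = 0
    result = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)  # the final carry-out becomes the top bit
    return result

# 6 + 3 = 9: with the low bit first, 6 is [0,1,1] and 3 is [1,1,0];
# the result [1,0,0,1] reads as binary 1001, i.e. 9.
print(ripple_carry_add([0, 1, 1], [1, 1, 0]))

Each full adder is a handful of AND, OR and XOR gates; that's the entire trick behind binary addition.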

Why is this? The immediate answer is that our brains simply don't have a circuit for adding numbers. Such a circuit would be small and simple and God could have easily put it in there, but it just ain't there. Nor do we have anything well-equipped to do the task. An addition circuit can be simulated with our memories, but our memories are notoriously unreliable and very slow. Our ability to form mental habits can be exploited to perform arithmetic, but no matter how much you practice, you will always be ridiculously slow and unreliable compared to the world's simplest calculators. You may have seen the mathemagician. He can do some operations faster than the audience can type them into a calculator, but he's still far slower than the calculator itself since 98% of the calculator's time is spent doing nothing, waiting for the user to press "equals".

There are three basic things computers can do easily: computations, memorization, and running programs. Curiously, humans are bad at all three of those things.

Computers can do the same things over and over and over, perfectly and extremely fast, which enables them to run programs, and almost everything computers can do is accomplished with programs--including simulations of human traits! And yet a human cannot run any kind of program. In real life there are no "Manchurian Candidates". In our brains, the closest thing we have to programs are habits, but these are quite different from computer programs. Habits are created through training (repetitive action), and a computer has no habits except to the extent we figure out how to write programs to deliberately develop them. But while we could certainly teach a computer to learn certain types of habits, humans fundamentally can't run programs, because we seem to be positively allergic to repetitive action. Have you ever tried doing a complex action repetitively? We can't do it! Every time we try to do something, it comes out a little different. In fact, the more times we try to do something in a row, the worse our performance gets!* Bowling strikes me as a sport that would seem utterly ridiculous to a robot. The whole goal is to do a simple task reliably, and it takes years of practice to even come close. A purpose-built robot would have little difficulty, I think, bowling 300. But a human cannot even walk in a straight line without a frame of reference.

* In the long run we may improve, but our performance gets worse as we repeat an action many times without pausing.

Thus, most likely any impressive intelligence we do produce will not resemble us very much, owing first to our lack of understanding of ourselves, and second to the fact that computers and brains are not well suited for the same tasks. One way or another, Hollywood's ideas about AIs will be proven faulty.

So, don't titillate yourself too much about the prospect of the singularity. There isn't one.

Also, there is no spoon
