To paraphrase, Jeff Hawkins is saying that an artificial intelligence singularity is not on any foreseeable horizon because an AGI would be subject to a finite learning rate:
For an intelligent machine to discover new truths, to learn new skills, to extend knowledge beyond what has been achieved before, it will need to go through the same difficult and slow discovery process humans do. Yes, an intelligent machine could think much faster than a biological brain, but the slower process of discovery still has to occur.
(See Misconception #3 in The Terminator Is Not Coming.)
Any interpretation of his statement can be shown to be untrue; I will focus on two of them.
But before I get into it, let me say that I know Jeff. We’ve drunk margaritas together. He’s a good guy, super smart, well-meaning, and I don’t believe for a minute that he’s trying to pull the wool over anyone’s eyes. I was surprised when I first read that piece, and I tried to work out an interpretation under which it made sense, but to no avail. I can only assume that he hasn’t thought through how it is that computers might learn. Beyond how his software does it now, of course.
OK, so first, perhaps Jeff is talking about the rate at which computers might learn all that there is to know today, i.e. absorb the sum of human knowledge. All that we know about how intelligent computers will learn suggests that the faster the computer that does the learning, the faster the learning. Sure, how the learning is done may be very complicated, and perhaps the more that is learned the slower the learning becomes, but the same learning done on a Raspberry Pi will be slower than when done on a supercomputer. If the latter isn’t fast enough, build a better supercomputer. When humans can’t figure out how to do that anymore, let the computer figure it out itself. There is no known computational speed limit, and even if there were, the computations could always be done in parallel. Suffice it to say that, even if the learning rate limit were only 1000 times greater than a human’s (i.e. the computer could get an MIT undergraduate degree in 12 hours – ask me how I calculated that), many humans might not see the computer as very limited. (I suspect that a computer will be the first intelligent being to read my whole blog.)
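To make the scaling argument concrete, here’s a toy back-of-the-envelope sketch in Python. It assumes, purely for illustration, that learning time scales inversely with per-core speed and with however many workers can usefully share the task; the function and all of its numbers are made up, not a model of how learning actually works.

```python
# Toy back-of-the-envelope sketch: assume learning time scales inversely with
# per-core speed and with the number of workers that can usefully share the
# task (Amdahl-style). Every number here is invented for illustration.

def machine_learning_hours(human_hours, speedup, workers=1, parallel_fraction=1.0):
    """Wall-clock hours for a machine to do `human_hours` of human-equivalent
    learning, given a per-core `speedup` over a human, `workers` cores, and the
    fraction of the work that parallelizes."""
    serial_part = human_hours * (1.0 - parallel_fraction)
    parallel_part = human_hours * parallel_fraction / workers
    return (serial_part + parallel_part) / speedup

# Example: 10,000 hours of human study, a machine 1,000x faster per core,
# 100 cores, 90% of the work parallelizable.
print(machine_learning_hours(10_000, speedup=1_000, workers=100, parallel_fraction=0.9))
# -> 1.09 hours of wall-clock time
```

The point is only that wall-clock learning time is a function of how much compute you throw at it, not a fixed human constant.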
So, second, maybe Jeff means learning things that humans don’t know yet, in particular basic research. Goodness knows that scientists and graduate students spend enormous amounts of time on research. Experiments might involve combining chemical compounds in millions of different combinations to determine whether just one of them has some interesting properties. But how do humans speed up this research now? Some just hire more researchers. Others use big machines that can do hundreds of combinations in minutes. There are lots of ways, but two of them I find particularly interesting.
The first way is to simulate the necessary physics and run the experiments in a computer. For example, consider Folding@Home. No longer do we have to create any real proteins or maintain an elaborate laboratory. We’re not even constrained with respect to time, since we can run the simulation as fast as the host can manage. An intelligent computer will likely be able to create such simulations very quickly, run them, and then do only as much real-world testing as is necessary to verify the results. If the results are off, fix the simulation and repeat.
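Here’s a minimal, runnable sketch of that loop: a made-up “real experiment,” an equally made-up simulation that starts out systematically wrong, and a loop that only touches the real world to check its best guess. Nothing in it comes from Folding@Home; it’s just the shape of the idea.

```python
# Toy sketch of the simulate-first loop: screen candidates in a cheap
# simulation, run a real experiment only on the most promising one, and
# recalibrate the simulation whenever it disagrees with reality.
# The "real world" here is just a hidden function; everything is illustrative.

def real_experiment(x):
    """Stand-in for a slow, expensive lab measurement."""
    return 3.0 * x - x ** 2                 # hidden ground truth

def simulate(x, bias):
    """Stand-in for a fast but imperfect physics simulation."""
    return 3.0 * x - x ** 2 + bias          # systematically off by `bias`

def research_loop(candidates, bias=1.5, tolerance=0.1, max_rounds=10):
    real_tests = 0
    for _ in range(max_rounds):
        predictions = {c: simulate(c, bias) for c in candidates}
        best = max(predictions, key=predictions.get)    # most promising candidate
        measured = real_experiment(best)                # one real test per round
        real_tests += 1
        error = measured - predictions[best]
        if abs(error) <= tolerance:                     # simulation agrees with reality
            return best, real_tests
        bias += error                                   # "fix the simulation and repeat"
    return best, real_tests

best, real_tests = research_loop(candidates=[x / 10 for x in range(40)])
print(best, real_tests)   # finds the best candidate with only a couple of real tests
```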
The second way computers can speed up basic research is the same way that good scientists do it now: think through a good hypothesis and test only that. Usually (but not always), the smarter the scientist, the better the hypothesis. So we can expect a superintelligent computer (by Bostrom’s definition) to come up with some pretty good hypotheses. Combine that with a few physics simulations and the amount of real-world testing it needs to do could become minimal indeed. And the more it learns, the smarter it becomes, the better hypotheses it makes, the faster it verifies, and before you know it, BAM!: singularity.
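One last toy sketch, to contrast brute-force screening with hypothesis-driven testing. Again, the “compounds,” the plausibility model, and every number are invented; the only point is how few real experiments a decent hypothesis leaves you with.

```python
import random

# Toy contrast between brute-force screening and hypothesis-driven testing.
# The "compounds", the plausibility model, and all numbers are invented.

random.seed(1)
N = 100_000
compounds = list(range(N))
interesting = 73_912                      # the one compound worth finding

# Brute force: test compounds in arbitrary order until the interesting one turns up.
brute_force_tests = compounds.index(interesting) + 1

# Hypothesis-driven: a (hypothetical) model scores every candidate first, and
# real experiments are run in order of decreasing plausibility. The model is
# noisy but correlated with the truth -- i.e. a decent hypothesis.
def plausibility(c):
    return -abs(c - interesting) + random.uniform(-200, 200)

ranked = sorted(compounds, key=plausibility, reverse=True)
hypothesis_tests = ranked.index(interesting) + 1

print(f"brute force:       {brute_force_tests} real experiments")
print(f"hypothesis-driven: {hypothesis_tests} real experiments")
# The hypothesis-driven searcher needs far fewer real experiments; a smarter
# model (a better hypothesis) would need fewer still.
```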
This is not to say that I’m worried. I am a little, but not like Bostrom, Musk, Hawking, etc. (There is another Bostrom post coming up, BTW.) I see cures for cancer and arbitrary lifespans and minimal (but not zero) scarcity. I agree with Jeff that the future will thank us. But they will do so sooner than he thinks.