Why Jeff Hawkins is wrong

To paraphrase, Jeff Hawkins is basically saying that an artificial intelligence singularity is not on any foreseeable horizon because the AGI would be subject to a finite learning rate:

For an intelligent machine to discover new truths, to learn new skills, to extend knowledge beyond what has been achieved before, it will need to go through the same difficult and slow discovery process humans do. Yes, an intelligent machine could think much faster than a biological brain, but the slower process of discovery still has to occur.

(See Misconception #3 in The Terminator Is Not Coming.)

Every interpretation of his statement that I can come up with can be shown to be untrue, but I will focus on two.

But before I get into it, let me say that I know Jeff. We’ve drunk margaritas together. He’s a good guy, super smart, well meaning, and I don’t believe for a minute that he’s trying to pull the wool over anyone’s eyes. I was surprised when I first read that piece, and I tried to think through how he meant it such that it made sense, but to no avail. I can only assume that he hasn’t thought through how it is that computers might learn. Beyond how his software does it now, of course.

Ok, so first, perhaps Jeff is talking about the rate at which computers might learn all that there is to know today, i.e. absorb the sum of human knowledge. All that we know about how intelligent computers will learn suggests that the faster the computer that does the learning, the faster the learning. Sure, how the learning is done may be very complicated, and perhaps the more that is learned the slower the learning becomes, but the same learning done on a Raspberry Pi will be slower than when done on a supercomputer. If the latter isn’t fast enough, build a better supercomputer. When humans can’t figure out how to do that anymore, let the computer figure it out itself. There is no known computational speed limit, and even if there were, the computations could always be done in parallel. Suffice to say that, even if the learning rate limit were only 1000 times greater than a human’s (i.e. the computer could get an MIT undergraduate degree in 12 hours – ask me how I calculated that), many humans might not see the computer as very limited. (I suspect that a computer will be the first intelligent being to read my whole blog.)
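For the impatient, here is one set of assumptions that lands on roughly that figure. The 8 hours per day of effective learning is illustrative, not the actual input I used:

```python
# Back-of-envelope: a four-year degree at 8 hours/day of effective
# learning (an illustrative assumption), compressed by a 1000x speedup.
years = 4
days_per_year = 365
learning_hours_per_day = 8

human_hours = years * days_per_year * learning_hours_per_day  # 11680
speedup = 1000
machine_hours = human_hours / speedup

print(round(machine_hours, 1))  # 11.7: call it 12 hours
```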

So, second, maybe Jeff means learning things that humans don’t know yet, in particular basic research. Goodness knows that scientists and graduate students spend enormous amounts of time on research. Experiments might involve combining chemical compounds together in millions of different permutations to determine if just one of them has some interesting properties. But how do humans speed up this research now? Some just hire more researchers. Others use big machines that can do hundreds of combinations in minutes. There are lots of ways, but here are two that I find particularly interesting.

The first way is to simulate the necessary physics, and run the experiments in a computer. For example, consider Folding@Home. No longer do we actually have to create any real proteins or maintain any elaborate laboratory. We’re not even constrained with respect to time since we can run the simulation as fast as the host can manage it. An intelligent computer will likely be able to create such simulations very quickly, run them, and then do only as much real-world testing as is necessary to verify the results. If the results are off, fix the simulation and repeat.
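A minimal sketch of that simulate-then-verify loop. The toy simulator, its calibration rule, and the “real” experiment are all invented for illustration:

```python
class ToySimulator:
    """Hypothetical stand-in for a physics simulation with a tunable bias."""
    def __init__(self):
        self.bias = 0.3  # starts mis-calibrated

    def predict(self, x):
        return x * x + self.bias

    def calibrate(self, x, measured):
        # Nudge the model toward what the real experiment showed.
        self.bias += 0.5 * (measured - self.predict(x))

def real_experiment(x):
    return x * x  # "ground truth" physics; expensive to run in real life

sim = ToySimulator()
for trial in [1.0, 2.0, 3.0, 4.0]:
    predicted = sim.predict(trial)
    measured = real_experiment(trial)   # only as much real testing as needed
    if abs(measured - predicted) > 0.01:
        sim.calibrate(trial, measured)  # results are off: fix the simulation

# After a few corrections the simulator tracks reality closely,
# and subsequent experiments can stay in silico.
```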

The second way computers can speed up basic research is the same way that good scientists do it now: think through a good hypothesis and only test that. Usually (but not always), the smarter the scientist, the better the hypothesis. So, we can expect a superintelligent computer (by Bostrom’s definition) to come up with some pretty good hypotheses. Combine that with a few physics simulations and the amount of real world testing that it needs to do could become minimal indeed. And the more it learns, the smarter it becomes, the better hypotheses it makes, the faster it verifies, and before you know it, BAM!: singularity.
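The payoff of a good hypothesis is easy to quantify with a toy example. Suppose “real” experiments are expensive, and the hypothesis is simply that the property we want varies monotonically with the candidate. A blind scan and a hypothesis-driven bisection need wildly different numbers of experiments (the experiment itself is a stand-in, of course):

```python
def blind_search(test, candidates):
    """No hypothesis: run real experiments until one works."""
    tests = 0
    for c in candidates:
        tests += 1
        if test(c) == 0:
            return c, tests

def informed_search(test, lo, hi):
    """Hypothesis: the outcome is monotone in c, so bisect."""
    tests = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        tests += 1
        sign = test(mid)
        if sign == 0:
            return mid, tests
        if sign < 0:
            lo = mid + 1
        else:
            hi = mid - 1

target = 761  # the unknown the experiments are probing for

def experiment(c):
    return (c > target) - (c < target)  # -1, 0, or +1

_, blind_tests = blind_search(experiment, range(1, 1001))
_, informed_tests = informed_search(experiment, 1, 1000)
# blind: 761 experiments; informed: fewer than 10
```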

This is not to say that I’m worried. I am a little, but not like Bostrom, Musk, Hawking, etc. (There is another Bostrom post coming up, BTW.) I see cures for cancer and arbitrary lifespans and minimal (but not zero) scarcity. I agree with Jeff that the future will thank us. But they will do so sooner than he thinks.

The Fountain of Youth

It’s been about 15 months now since my radiation treatment ended. Many flavours are still depressingly flat, and saliva production, while much better than a year ago, is still low. I don’t go running anymore because mouth breathing dries out my mouth pretty quickly, and shortly after that my tongue goes numb. (Carrying water to sip doesn’t help much, and I have to stop to do the sipping because I can easily choke on the water otherwise.)

Naturally I looked around for anything that helps salivary function beyond Biotene products and Xylitol chewing gum. (The latter is about the only thing that works for longer than 5 minutes BTW.) I found that successful gland regeneration was done in mice a few years ago, but there’s no sign that anything has been done with that work since, much less anything to do with getting it to work on humans. This reminded me of senescence research and how it might help in general with healing the body. It also got me thinking about the sorts of things that senescence treatment might not help with.

Any general treatment that reverses aging should only be expected to help with conditions that arise at late-middle or old age. (It’s unlikely treatments would just stop aging. Seems like balancing on the edge of a knife to me.) The thinking is that if your body was able to live into its forties without knee or shoulder problems (not including those that are trauma related, but not necessarily excluding them either) then likely your body has a way of healing the damage it incurs. But there is a long list of conditions that may not fit this criterion.

For example, everyone knows that breasts start to sag because the tissue that connects them to the bones starts to stretch. While it’s normal to see this in middle to old age, it can also happen much sooner if protective measures are not taken to prevent it. And even then it can happen anyway. Also, it turns out this is sadly not just a problem for women.

Neither is hair loss. Again, it can happen at any age, or not, and so likely is not just age-related. And it’s pretty clear that no one’s teeth are going to grow back or become whiter on their own, scars won’t fade, wrinkles probably won’t smooth, gout won’t go away, and livers will not return to a more reasonable size. Also, did you know noses continue to grow our entire lives? Good lord, what will they look like after 200 years? Even if senescence treatment becomes a reality in my lifetime, I’m still going to have a bunch of things to complain about with my cronies at the pub. But hey, at least I’ll be there to do the complaining right? Being alive will not be one of my troubles. (See My Life Everlasting for caveats.)

But poor salivary function will very likely be one of them. My radiation oncologist is hopeful that I still have time to recover a bit more, but since radiation damage could only be an age-related condition for the cast of Baywatch, I’ll certainly continue to live with it even after I start to live forever. But when that happens, hell, I’ll have all the time in the world to cure it myself.


Keen readers may already know that the next generation of GoID is coming, hopefully soon. See http://mlohbihler.github.io/goid/ for more information on that, but the spoiler is that there is no code there at the moment, just promises. I’m hoping to have real code there in a matter of weeks though.

In the meantime, feast your eyes on what MIT has been doing.

Nice, no? It’s not robot tennis or soccer players (see older posts for allusion resolution), but it beats the crap out of inverse kinematics (see really old post). I don’t have the technical details, but they pretty much say they are using very fast feedback loops. Hey, kind of like what GoID is! Sure it took a decade to get to this level (assuming they scooped some approaches from the Big Dog project), but it provides more than a little confidence that the basic approach is a good one. The problem remains that its continued development will not have the technical acceleration that software will have, and so we are much more likely to see AGI come out of a software environment than a physical one like this. But, no worries. To the folks at MIT: carry on!

A fine balance

An interesting discussion took place a week or so ago on the Numenta mailing list. I was a frequent contributor, for the first time since I was dragged over the coals for my opinions about using NuPIC to optimize network traffic. This time I got to do a bit of dragging myself, mostly to do with the opinions of another participant that I can only reasonably describe, using a line from Dawn of the Planet of the Apes, as “hippy dippy bullshit”.

But never mind that. Most people who might be reading this will be familiar with the theory that an AGI might reasonably be indifferent to humanity, and would carry on satisfying its goal function without regard to us. During the discussion I had another thought. What if a blossoming AGI looked into the future and, say by using a fairly accurate simulation, determined that the universe is going to die a slow burnout death in a few billion years, decided that as such there was no point in going on, and shut itself down? (For argument’s sake, let’s assume that the machine didn’t decide to shut us all down with it, presumably for our own good.) What if it turned out that a big problem is convincing the AGI to just keep running?

Perhaps it will actually prove tricky to balance the psychology of the AGI between a suicidal depression and a psychopathic apathy.

A higher level of consciousness

These results provide compelling evidence that awareness is associated with truly global changes in the brain’s functional connectivity.

So concludes http://www.pnas.org/content/112/12/3799.abstract. It’s funny to me that we still need research to show that there isn’t a “consciousness module” in the brain. This has seemed obvious to me for a long time. The implications are interesting though, and I hadn’t really thought about them before.

If consciousness is a whole brain activity, it means that all animals are potentially capable of being conscious. No biggie there, but this means that there are various levels or degrees of consciousness. (This is assuming – from empirical observations – that e.g. a cat is not as conscious as a person, a bird is not as conscious as a cat, and a fly is not as conscious as a bird. This also assumes that we all agree on what consciousness is, which is most certainly not true. I’m taking it as roughly meaning one’s awareness of the environment, at multiple levels.)

But this means that there are potentially, and probably, many levels of consciousness above a typical human’s, which is an intriguing thought. What might that be like?

World models

Lately I’ve been thinking about building control systems. I don’t know much about the specifics, even though I’ve been doing software in this business for a while now, but I know enough about the generalities to be dangerous, as they say. I was wondering whether it would be possible to take the data stream from such a system – the “trends” or “histories” as they’re called, recordings of pieces of equipment turning on and off, regular samples of temperatures and pressures, etc. – and recreate a model of the system from which it came. For simple systems it wouldn’t be too hard, I don’t think, but the more complex the system gets, the more difficult the problem.

Of course, this is something that brains do too, and so once again I find myself in the AGI world. One particular problem that I kept returning to is how to utilize streaming data in a world model. As my dear readers probably already know, it’s one thing to take a data set and analyze it for patterns, and another thing to try to find such patterns in streaming data. The random accessibility in a given data set means that you can determine ranges, averages and stuff like that first, and then do more informed analysis, whereas with streaming you’re forced to do analysis with only what you know so far, which is very likely to change in the future.
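To make the contrast concrete: with a full data set you can scan everything and then compute your statistics, but on a stream you are stuck with running estimates, updated one sample at a time. Welford’s online algorithm for mean and variance is the classic example (the sample values below are made up):

```python
class OnlineStats:
    """Running mean/variance (Welford's algorithm): all you can keep
    on a stream, where the data set can't be scanned up front."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

# e.g. a temperature trend, with one sample that overturns the "known" range
samples = [21.0, 21.5, 22.0, 21.8, 40.0, 21.9]
stats = OnlineStats()
for s in samples:
    stats.update(s)  # each update uses only what has been seen so far
```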

Now, maybe this is obvious to others, but I’m not sure I’ve had the same amount of clarity on this before. I think the streaming data can be handled by building an explicit world model, against which the data is compared to either confirm the model or adjust it as necessary. I guess this is what I’ve always been trying to do, but not with that specific intention. The benefit is to have a relatively static (or at least dynamic in known ways) model that can be analyzed in a random access manner in order to make predictions, plan actions, and all of that good stuff. The trick is folding the streaming data in somehow. Building controls is a nice domain for this research because it can be both arbitrarily simple or complex.
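A minimal sketch of the idea, with all of the particulars (the smoothing rule, the tolerance) invented for illustration: hold an explicit estimate of the world’s state, compare each streamed sample against the model’s prediction, and either confirm the model or adjust it:

```python
class WorldModel:
    """Toy explicit world model: one scalar state estimate, confirmed
    or adjusted by each sample that streams in."""
    def __init__(self, initial, learning_rate=0.2, tolerance=1.0):
        self.estimate = initial
        self.learning_rate = learning_rate
        self.tolerance = tolerance
        self.surprises = 0  # times the stream contradicted the model

    def observe(self, sample):
        error = sample - self.estimate
        if abs(error) > self.tolerance:
            self.surprises += 1          # the world disagreed with the model
        self.estimate += self.learning_rate * error  # adjust as necessary

    def predict(self):
        # Random-access queries (prediction, planning) go against the
        # model, not the raw stream.
        return self.estimate

model = WorldModel(initial=21.0)
for temp in [21.2, 21.4, 25.0, 24.8, 24.9]:  # a set-point change mid-stream
    model.observe(temp)
```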

So smart, it’s stupid

I’ve been reading Superintelligence, by Nick Bostrom. I’m about half way through now. (It’s not the easiest read.) And though I probably should forgo commenting on it until I’m done – since authors have a tendency to address the questions that they raise later on – I have to say that so far it is thoroughly depressing. It seems that, to paraphrase, computer software is inevitably going to reach singularity-level intelligence and then turn the entire accessible universe into paperclips. Which isn’t quite the outcome I was hoping for. I hope the rest of the book will take a happier tone.

But at the moment, I have to say that the doomsday scenarios that are provided seem to completely ignore an annoyingly obvious retort. In every case, the computer achieves “superpowers” to do with intelligence amplification, strategizing, social manipulation, hacking, technology research, and economic productivity, meaning that the computer is able to far outdo even the smartest humans in each of these things. I don’t have a problem with this. It’s just that at the same time the computer is also bound to operate strictly within the constraints of its programming. So, after smooth-talking its operators into giving it access to the internet and thus “escaping”, and then hacking its way to commandeering the world’s computing assets, and then developing unimaginable technologies, etc etc etc, it is still such a slave to its evaluation function that it will interpret the goal, “Make us happy”, to mean, “Implant electrodes into the pleasure centers of our brains”, turning us all into a race of smiling idiots. Think Star Trek’s V’Ger on superpower steroids.

But if it was smart enough to sweet-talk its operators into letting it escape – operators who presumably were aware that the software would attempt exactly such a thing – and indeed was smart enough to interpret such a vaguely worded goal in the first place, surely it is trivial for it to understand not just what was meant by that goal, but also to have a deeper understanding of what makes humans happy than humans do themselves, and to act to achieve that. Even if you think I’m just being hopeful (and certainly I am), you must admit that a decent probability has to be assigned to my way of thinking about this.

Of course an AI might be evil by our definition, and of course, far more likely, it may be indifferent (as humans are to, say, ant colonies living on the land where we want to build our house). But to my mind it wouldn’t take much to tell an AI that it can feel free to expand through the universe as it likes, but that it should also use a little bit of its asymptotically infinite power to make human lives comfortable and happy in the ways that each individual prefers. It couldn’t possibly be so awesomely smart and so woefully stupid at the same time, could it?

How I, personally, would really really try

I was glad to see from the picture that Ben Goertzel still has his hat. Someone has to wear it, and he does so with pride, and I love that. He has a new book out as of just over a month ago. (For those interested, there is the PDF and the dead tree version.) I haven’t read the book (as of time of writing), but I have read the talk it is based upon. And now I probably won’t read the book, because to me there is little that is controversial in it.

In particular I applaud his idea of funding a good number of AGI projects with differing approaches (as opposed to a single Manhattan Project-ish project), and not just because it is also my idea, but because it is the right thing to do. If history can teach us anything about AGI, it is that there are a large number of approaches that don’t work, and an unknown, but presumably small, number (greater than zero) that do. And since I, like everyone else, have not yet discovered one of the working approaches, it’s not my place to say that anyone else’s is wrong. (Except of course if the approach is similar enough to something we already know doesn’t work. I’m looking at you, neural net guys.)

So, assuming that some gentle benefactor decides to put up some dough to test Ben’s theory, one thing that I would like to know is how the lucky recipients of funding would be chosen. I assume it would be based on something flimsy, like having a PhD in something or other, as if a deep knowledge of stuff that doesn’t work is going to make someone more qualified to discover what does. If this is the case, and I give it a 98.4% chance, I will personally receive exactly nothing. The same goes for many others like me.

But even though my chances of being funded are small, they are not quite zero, and so I will finish this post with a summary of what my approach would be. Even if I don’t end up with any of the cash, there is hope that some reader out there may like some idea or other, which would make me happy. Even more so if I got some credit.

So here goes. I am not going to go into detail on a lot of these points because I already have in other posts. If, dear reader, you are curious about something, you might consider reading more entries from this blog. But even if you don’t, feel free to post questions and I will gladly explain.

All existing forms of intelligence on the planet have one thing in common: they all have nervous systems. Nervous systems, whether they happen to reside in intelligent animals or not, were originally intended to facilitate movement. Therefore, movement is at least the foundation of the only form of intelligence we know of. It may be that an AGI independent of movement can be developed, but I submit that we might as well follow whatever breadcrumb trails the universe has grudgingly provided.

I believe that the size of the repertoire of behaviours in a species closely matches the intelligence of that species, and further that intelligence increases were necessary to facilitate the expansion of behavioural repertoires in ways that aided survival. I also believe that the intellectual abilities of most animals beyond movement are probably relatively simple extensions of the mechanisms that are needed for movement. Think about walking along a difficult hiking trail. You are constantly subconsciously scanning the path in front of you and devising strategies for extending your leg and placing your foot so as to maintain balance and conserve energy in a manner that provides acceptable pace. After only a little research into how this might work I can attest that it is fabulously complicated. And it’s not hard to see how that complexity could be repurposed for other intellectual tasks. If you take the sensory-action loop involved in walking and stretch out the temporal period, with a few – perhaps not trivial – adjustments and some hierarchical layers you can turn it into something like business strategizing. It should not be a surprise that, as Steven Pinker details in a few of his books, humans very often use movement metaphors to explain non-movement concepts. (“I’m going to tell you something about your momma.” “Oooh, don’t go there.” Ok, bad example, but you get the idea, right?)
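The sensory-action loop itself is nothing exotic. Here is a toy version of the balance part of the problem: sense the lean, act with a corrective push, repeat quickly. The simplified physics and the controller gains are made up for illustration:

```python
import math

# Toy sensory-action loop: keeping a stick upright with a fast
# proportional-derivative feedback loop. Gains and physics are
# illustrative, not from any real robot.
g, dt = 9.8, 0.01   # gravity, loop period (100 Hz)
kp, kd = 40.0, 8.0  # controller gains

angle, velocity = 0.2, 0.0  # start leaning 0.2 rad
for _ in range(500):  # 5 simulated seconds
    torque = -kp * angle - kd * velocity  # act on what we sense
    accel = g * math.sin(angle) + torque  # simplified stick dynamics
    velocity += accel * dt
    angle += velocity * dt
# after a few seconds of fast feedback the lean has been corrected
```

Stretch the loop period out, add hierarchy, and the same sense-compare-act shape starts to look like planning.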

So, my AGI development approach would be to start by recreating the movement mechanisms of, first, very simple animals, and reusing the learnings (a point important enough to emphasize) to apply to more and more complicated animals, eventually resulting in, say, an agent that can walk on two legs. It might not even need to get that far, because it’s likely that the architecture for movement will be well enough understood before then to apply to other manifestations of intelligence. And that’s it. My approach is that simple (although not easy). It effectively follows the path of evolution. It worked once, didn’t it?

Ok, it doesn’t actually start where I said. First we need a very accurate physics simulation. I started my previous research using JBox2D because I already knew it, and I didn’t think (and still don’t think) that using only 2 dimensions to start would keep me from discovering some of the basics. But I did quickly run up against some accuracy problems. A very good 3D physics library would be essential to a quick development cycle. If you tried to do this with real life robots, you’d, for one thing, spend a ton of time making physical sensors.

Again, I could expand greatly upon any of the individual points above, which I know may not seem very convincing in this bare form.


I was pretty excited after watching a recent Jeff Hawkins talk about sensory-motor integration into NuPIC. (Thank you John B for the link!) And especially after my last post about making machines with animal-like movement. One particularly interesting idea was that lower-level motor commands are routed not only to muscles et al, but also as afferents to higher hierarchy levels, allowing brains to associate their behaviours with outcomes in the world. I like to think that this would have been obvious once I started thinking more about behaviour selection, but hey, I’ll take good ideas from wherever. So it seems I might have to start playing with NuPIC. See you on the message boards.