I was glad to see from the picture that Ben Goertzel still has his hat. Someone has to wear it, and he does so with pride, and i love that. He has a new book out as of just over a month ago. (For those interested, there is the PDF and the dead tree version.) I haven’t read the book (as of time of writing), but i have read the talk it is based upon. And now i probably won’t read the book, because to me there is little that is controversial in it.
In particular i applaud his idea of funding a good number of AGI projects with differing approaches (as opposed to a single Manhattan Project-ish project), and not just because it is also my idea, but because it is the right thing to do. If history can teach us anything about AGI, it is that there are a large number of approaches that don’t work, and an unknown, but presumably small, number (greater than zero) that do. And since i, like everyone else, have not yet discovered one of the working approaches, it’s not my place to say that anyone else’s is wrong. (Except of course if the approach is similar enough to something we already know doesn’t work. I’m looking at you, neural net guys.)
So, assuming that some gentle benefactor decides to put up some dough to test Ben’s theory, one thing that i would like to know is how the lucky recipients of funding would be chosen. I assume it would be based on something flimsy, like having a PhD in something or other, as if a deep knowledge of stuff that doesn’t work is going to make someone more qualified to discover what does. If this is the case, and i give it a 98.4% chance, i will personally receive exactly nothing. The same goes for many others in my position.
But even though my chances of being funded are small, they are not quite zero, and so i will finish this post with a summary of what my approach would be. Even if i don’t end up with any of the cash, there is hope that some reader out there may like some idea or other, which would make me happy. Even more so if i got some credit.
So here goes. I am not going to go into detail on a lot of these points because i already have in other posts. If, dear reader, you are curious about something, you might consider reading more entries from this blog. But even if you don’t, feel free to post questions and i will gladly explain.
All existing forms of intelligence on the planet have one thing in common: they all have nervous systems. Nervous systems, whether they happen to reside in intelligent animals or not, originally evolved to facilitate movement. Therefore, movement is at least the foundation of the only form of intelligence we know of. It may be that an AGI independent of movement can be developed, but i submit that we might as well follow whatever breadcrumb trails the universe has grudgingly provided.
I believe that the size of the repertoire of behaviours in a species closely matches the intelligence of that species, and further that intelligence increases were necessary to facilitate the expansion of behavioural repertoires in ways that aided survival. I also believe that the intellectual abilities of most animals beyond movement are probably relatively simple extensions of the mechanisms that are needed for movement. Think about walking along a difficult hiking trail. You are constantly subconsciously scanning the path in front of you and devising strategies for extending your leg and placing your foot so as to maintain balance and conserve energy in a manner that provides acceptable pace. After only a little research into how this might work i can attest that it is fabulously complicated. And it’s not hard to see how that complexity could be repurposed for other intellectual tasks. If you take the sensory-action loop involved in walking and stretch out the temporal period, with a few – perhaps not trivial – adjustments and some hierarchical layers you can turn it into something like business strategizing. It should not be a surprise that, as Steven Pinker details in a few of his books, humans very often use movement metaphors to explain non-movement concepts. (“I’m going to tell you something about your momma.” “Oooh, don’t go there.” Ok, bad example, but you get the idea, right?)
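To make the "stretch out the sensory-action loop" idea concrete, here is a toy sketch. This is my illustration, not anything from the book or talk: a low-level loop senses the ground just ahead and picks a stride, and a higher layer reuses the exact same loop over a longer horizon to reach a goal. All the function names and the terrain representation are invented for the example.

```python
def sense(terrain, position):
    """Read the slope just ahead (a stand-in for real sensors)."""
    return terrain[position % len(terrain)]

def choose_step(slope):
    """Pick a stride that trades pace against stability:
    shorten the stride when the ground gets rough."""
    return 2 if abs(slope) < 0.2 else 1

def walk(terrain, steps):
    """The basic sensory-action loop: sense, decide, act, repeat."""
    position = 0
    for _ in range(steps):
        position += choose_step(sense(terrain, position))
    return position

def hike(terrain, goal):
    """The same loop stretched over a longer horizon: a higher layer
    holds the goal, the lower layer still handles each step -- a very
    crude hierarchical controller."""
    position = 0
    while position < goal:
        position += choose_step(sense(terrain, position))
    return position
```

The point of the sketch is only that `hike` adds nothing new at the step level; the "strategy" lives entirely in the extra layer wrapped around the same loop, which is the repurposing i am gesturing at.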
So, my AGI development approach would be to start by recreating the movement mechanisms of, first, very simple animals, and reusing the learnings (a point important enough to emphasize) to apply to more and more complicated animals, eventually resulting in, say, an agent that can walk on two legs. It might not even need to get that far, because it’s likely that the architecture for movement will be well enough understood before then to apply to other manifestations of intelligence. And that’s it. My approach is that simple (although not easy). It effectively is following the path of evolution. It worked once, didn’t it?
Ok, it doesn’t actually start where i said. First we need a very accurate physics simulation. I started my previous research using JBox2D because i already knew it, and i didn’t think (and still don’t think) that using only 2 dimensions to start would keep me from discovering some of the basics. But i did quickly run up against some accuracy problems. A very good 3D physics library would be essential to a quick development cycle. If you tried to do this with real-life robots, you’d, for one thing, spend a ton of time building physical sensors.
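For readers who haven’t run into the accuracy problem firsthand, here is a toy illustration of the kind of thing i mean (this is my own example, not JBox2D code). A mass on a spring simulated with naive explicit Euler integration steadily gains energy out of nowhere, while the semi-implicit (symplectic) variant, at identical cost per step, stays near the true energy:

```python
def explicit_euler(x, v, dt):
    """Naive integration: position update uses the *old* velocity."""
    return x + v * dt, v - x * dt

def semi_implicit_euler(x, v, dt):
    """Symplectic variant: position update uses the *new* velocity.
    Same cost per step, far better long-run energy behaviour."""
    v_new = v - x * dt
    return x + v_new * dt, v_new

def final_energy(step, n_steps, dt=0.01):
    """Mass on a unit spring (acceleration = -x), started at x=1, v=0.
    The true total energy is 0.5 for all time."""
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        x, v = step(x, v, dt)
    return 0.5 * v * v + 0.5 * x * x  # kinetic + potential
```

After 10,000 steps the explicit integrator has pumped the energy well past its true value of 0.5, while the semi-implicit one stays close to it. A walking agent simulated with the former would effectively be getting free energy from its simulator, and anything it "learned" about balance would be learned against broken physics. This is why the quality of the physics library matters so much.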
Again, i could expand greatly upon any of the individual points above, which i know may not seem very convincing in this bare form.