I was as perturbed as anyone trying to help the AGI effort after reading the piece by David Deutsch with the same name as this post. Like many, I thought he made a number of fair points, but many unfair ones as well. In particular, I find it odd that he seemingly ignores the progress in psychology, where people such as Bernard Baars have developed very compelling theories of mind and consciousness, as if to say that such things are not contributing to AGI at all, when in another sense this appears to be exactly what he says is missing. He says AGI needs a grand idea, but it's unlikely that some researcher is going to solve the AGI problem with a shower-time epiphany. More likely we are doing a whole bunch of things wrong, and there will need to be a kind of aligning of the stars to get us going in the right direction.
Personally, I think a great deal of aligning was done by Baars, and Stan Franklin et al. at the University of Memphis' Cognitive Computing Research Group (CCRG) have done the field a great service by implementing Baars' concepts in the LIDA framework. I've had some time recently to play with it a bit, and must say that the work is impressive. As a Java developer I can confirm that the technical architecture is sound and well written. As an AGI researcher I'm excited by the potential, because not only does it embody a compelling theory of consciousness, but it does so using a cognitive cycle, i.e. the temporal stuff that I've been banging on about for years now. What is most interesting is that it takes the massive problem of AGI and divides it up into still-very-difficult-but-far-more-manageable chunks: modules like sensory memory, workspaces, declarative memory, procedural memory, action selection, etc. The framework wires them all together based upon Baars' theories (there are a great number of implementation issues that the CCRG had to solve themselves) and also provides a number of module implementations, such as a slipnet for perceptual associative memory. Some modules are not implemented (like sensory motor memory), and some have very basic implementations that will certainly need to be expanded upon. But still, if you agree with me that the approach makes sense you will probably agree that the work is a significant advancement of the field.

Arguably the most important feature of the framework is that, as an AGI researcher, your work is now defined in a much more concrete way. If you have an interest in transient episodic memory, for example, you can now look at the API for it, and you will understand the constraints under which your code has to work in order to integrate with the rest of the system, rather than having to build an entire system from scratch.
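To make the "modules wired together by a cognitive cycle" idea concrete, here is a toy sketch of that architecture. To be clear: the `Module` and `Blackboard` types below are my own invention for illustration, not the actual LIDA API, which defines its modules, listeners, and cycle in far richer detail.

```java
import java.util.ArrayList;
import java.util.List;

// A shared workspace that modules read from and write to each cycle.
class Blackboard {
    final List<String> contents = new ArrayList<>();
    void post(String item) { contents.add(item); }
}

// Every module exposes one step of work per cognitive cycle.
interface Module {
    void tick(Blackboard workspace);
}

// A trivial stand-in for sensory memory: posts a percept.
class SensoryMemory implements Module {
    public void tick(Blackboard w) { w.post("percept:light"); }
}

// A trivial stand-in for action selection: reacts to the percept.
class ActionSelection implements Module {
    public void tick(Blackboard w) {
        if (w.contents.contains("percept:light")) w.post("action:approach");
    }
}

public class ToyCycle {
    public static void main(String[] args) {
        Blackboard workspace = new Blackboard();
        List<Module> modules = List.of(new SensoryMemory(), new ActionSelection());
        // One pass of the cycle: each module gets a turn on the shared workspace.
        for (Module m : modules) m.tick(workspace);
        System.out.println(workspace.contents);
    }
}
```

The point of the sketch is the division of labour: a researcher interested only in action selection can work behind the `Module` boundary without building the rest of the system.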
I have to thank Ben Goertzel for pointing me towards LIDA. I also recommend reading his well-penned response to Deutsch. And in an attempt to emulate the fine way Ben has of gently placing laurels on the heads of those he's about to contradict, I must commend him for all of his extensive and tireless efforts – many effective – to forward the much-maligned field of AGI over what must be multiple decades now.
But when he mentions the paper Mapping the Landscape of Human-Level General Intelligence, I think he ended up scoring an own goal. This paper is chock-full of all sorts of rephrasings of the Turing Test, and certainly not in any useful way. The gist of it is to mark out a path to AGI by following the psychological development of a child. The authors are careful not to specify an order, but some of the milestones include motor control development such as building block towers, story comprehension and retelling, and school learning. Of course no such paper would be complete without including video game playing – it's part of every complete childhood. Just to round out the weirdness, the Wozniak Test is thrown in too, in which the embodied agent has to knock on the door of an unfamiliar house, be let in, and then go to the kitchen and make a cup of coffee. No 10-year-old, artificial or otherwise, has yet knocked on my door to make me a coffee, so although I can't speak for the realism of the test, I can appreciate the technical difficulty.
But the difficulty is precisely the problem: none of the tasks mentioned in the paper – much less the milestones – are even remotely on the engineering horizon for AGI right now. It was all well and good of Turing to provide us with his test back in the 1950s, but here we are still scratching our heads. The existence of his test has not helped a whit, and neither will the well-intentioned but pie-in-the-sky Landscape paper.
Rather, what is needed is a path to AGI that at least begins with goals that are achievable using incremental improvements to the engineering we have now. The Landscape paper decries this approach as tempting researchers toward narrowly defined solutions, but I disagree. If we start by saying that solutions must be part of an overall AGI framework such as LIDA (or perhaps OpenCog, although I don't know enough about it to say), then researchers will be compelled to at least give a solemn nod towards generality. If we go further and say that qualifying solutions must address at least two tasks of different classes, the benefit of generality will eventually overcome the temptation of narrowness.
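The "two tasks of different classes" criterion can itself be made mechanical. Below is a hypothetical harness of my own devising (none of these names come from LIDA or any existing benchmark) whose one hard rule is that the same agent object is run on every task, with no code swapped in between.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// The only capability a candidate must expose: map an observation to an action.
interface Agent {
    String act(String observation);
}

// A task pairs an observation with the behaviour that counts as success.
class Task {
    final String name, observation, expected;
    Task(String name, String observation, String expected) {
        this.name = name; this.observation = observation; this.expected = expected;
    }
}

public class Qualifier {
    // One agent instance, never modified between tasks, is scored on all of them.
    static Map<String, Boolean> evaluate(Agent agent, List<Task> tasks) {
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (Task t : tasks)
            results.put(t.name, agent.act(t.observation).equals(t.expected));
        return results;
    }

    public static void main(String[] args) {
        // A trivially "general" toy agent: it simply carries out any "do:" request.
        Agent agent = obs -> obs.startsWith("do:") ? obs.substring(3) : "?";
        List<Task> tasks = List.of(
            new Task("navigation", "do:turn-left", "turn-left"),
            new Task("language",   "do:say-hello", "say-hello"));
        System.out.println(evaluate(agent, tasks));
    }
}
```

A qualifying solution would need a true result on at least two tasks drawn from different classes; a narrow solution tuned to one class fails the other by construction.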
We must discard the notion of AGI development goals based upon ontogenetic development. A newborn child is not a blank slate on which knowledge and skills are written. There is staggering complexity in the built-in wetware, and we are only beginning to scratch the surface of understanding it. It's nonsense to assume we can casually step over this massive knowledge chasm and merely concern ourselves with how we can get a computer to understand a children's book. Besides, the way in which a child understands a book is a product of the entire brain; there is no "book comprehension" lobe to which we can neatly restrict our work. Without simulating an entire human brain, however young, you will never have human-like comprehension.
Instead, we should be focusing on the phylogenetic development of the brain. Tasks should begin with the challenges that the first nervous systems faced. It is well within our current capabilities to build a virtual world that is rich enough to provide a reasonable simulation. (This was the intention behind GoiD, although it never got to where I really wanted it to be, and without help from others it likely never will. I should say, "more help": many thanks to Max Harms for his contributions.) It is reasonable to assume that, as we make the environment more difficult and otherwise add more challenges, we will clearly see the reasons why brains evolved the way that they did, and why they work now the way they do. This approach also has the great benefit of providing clear grounding to early ideas, which has the same effect as the adoption of the LIDA framework: you understand the constraints under which you need to make improvements, which turns research problems into engineering problems. Basically, we need to start doing less AGI research, and more AGI engineering in the form of incrementally improving on an agreed-upon general framework.
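As an example of the kind of early-nervous-system challenge I mean, consider chemotaxis: climbing a chemical gradient toward food, one of the first problems nervous systems solved. The toy world below is my own sketch (it has nothing to do with GoiD's actual implementation), but it shows how small the starting point can be while still being fully grounded.

```java
public class Chemotaxis {
    // Chemical signal strength at position x when food sits at foodX:
    // strongest at the food, falling off with distance.
    static double signal(int x, int foodX) {
        return 1.0 / (1 + Math.abs(x - foodX));
    }

    public static void main(String[] args) {
        int foodX = 7, x = 0;
        // The entire "nervous system": sample the signal on either side
        // and step toward the stronger reading, until the food is reached.
        for (int step = 0; step < 20 && x != foodX; step++) {
            x += signal(x + 1, foodX) > signal(x - 1, foodX) ? 1 : -1;
        }
        System.out.println("reached food at x=" + x);
    }
}
```

Harder versions follow naturally: noisy signals, multiple gradients, predators, memory of depleted food sources. Each increment keeps the agent grounded in its environment while forcing more brain-like machinery to emerge.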
In a future post I will write up some ideas for phylogenetically based tasks. Individually they will sound like fodder for narrowness, but the important point to remember is that the same, programmatically unmodified agent must be able to solve multiple tasks.