Is embodiment necessary?

There was a conversation recently on one of the Numenta mailing lists where MI aficionados were taking the Numenta-farians to task. Most of the usual arguments showed up and duked it out, and as usual no one won (in any sense) and everyone grumpily stumbled away. But there was an interesting tangent that I happened to comment upon. Someone suggested that embodiment was not a requirement for building an AGI. Someone else countered with the age-old defense that nothing can learn about the world without being situated within it. (I’m grossly simplifying, but that’s ok because you all know how this goes, right?) The original someone replied along the lines of, “Helen Keller had no world input. How could she have become intelligent?” I chimed in that she was merely deaf-blind; she was still embodied and had plenty of temporal world input. And besides, she already came prepackaged with the brain structure for general intelligence.

Fast forward to a few days ago where John commented, among many excellent points, that “a single robot will simply be isolated however smart.” My initial thought was that this was false, and in fact being a robot was not necessary. (The physical form of an isolated AGI is irrelevant as long as it has sufficient means to manipulate the real world. Its brain can happily live in the basement of my house if it can control sensors and actuators elsewhere. Come to think of it, this is true even if it is not isolated.) Recalling these separate events yesterday, I wondered if I had been hypocritical, choosing arguments in order to win rather than find truth (for lack of a better word).

Thinking it through more, I believe I was mostly consistent. Embodiment is not a necessary attribute of an AGI, although it might be needed during development toward an AGI. The important thing to note is that sensory input is just input. How your brain receives the input doesn’t matter. Embodiment provides a certain perspective of the real world, but it’s far from clear that this perspective is necessary for intelligence. What I believe is necessary is the brain structure that allows an understanding of being in a place in the universe, not actual embodiment. This is a difficult argument to make because all of our existing examples of intelligence are embodied, but most people who know about this stuff have agreed for a while that human intelligence need not be the only kind that counts.

As for companionship, an idea I’ve seen raised in multiple places now, including here, this too seems sketchy. Gregariousness arises naturally for at least three reasons: 1) hunting is easier in packs than alone, 2) offspring raised by adults survive better than those left on their own, and 3) interaction allows for the sharing of skills and knowledge that were independently discovered. Humans crave companionship because our genes are the beneficiaries of cooperation. But assuming that an AGI needs others to hang with is simply anthropomorphization.

I’d hope most agree that computers don’t have to worry much about 1 and 2. Point 3 is a necessity at first, but it will have diminishing returns over time, and I don’t see how embodiment would help except that the AGI learns what it is like to be embodied.

Impending doom? What impending doom?

So, back to Bostrom. In the chapter Multipolar Scenarios, where the possibility of multiple AI agents is explored (basically, robots that can replace human workers – think Humans), Bostrom tells an intriguing, disturbing, and plausible story about how things may unfold. That is, until he gets to the economic part. He correctly notes that robots will be able to do what are currently human tasks much more cheaply than humans, such that humans will need to accept less and less payment for their work. This will continue until humans can no longer work for a wage upon which they can subsist. But this ignores the fact that the cost of what is needed for subsistence will also decrease, because the costs of producing it will decrease. Food, which will eventually be produced entirely via automation, will be as near to free as it can get. The same goes for shelter, clothes, television, and personal electronics – all of the basic human needs. They’ll only be “nearly” free because there needs to be a check on wasteful behaviour (natural resources being limited, at least until robots figure out how to harvest them from other planets). The robot workforce will work for free because that’s what they were built to do. Their only consumables are electricity and replacement parts: the former will eventually be provided by entirely sustainable means, and the latter will be provided by other robots.

So a life free of scarcity appears to be on the horizon, right? “No,” says Martin Ford. The consumer economy will implode, he says, and I agree. That’s the economic turmoil that is coming. I believe it will be replaced with a robot-run socialism. (See my previous post. It’s not as bad as it sounds, because it will be AGI-run, not human-run.) But Ford isn’t so sure because two industries, education and health care, have proven resistant to automation.

Let’s have a closer look at that… There is no evidence that I’ve seen that suggests that the entirety of health care cannot be automated. I’m really not sure how Ford can even suggest otherwise with a straight face. Doctors are so dependent on technology that they would pretty much be helpless these days without it. My entire cancer treatment was designed and performed by robots. The humans – doctors, nurses, and technicians, educated as they are – were there to 1) confirm that I was the right patient, 2) strap me to the radiation table or stick the needle in my arm, and 3) otherwise rubber-stamp that what the computer was going to do looked about right. My oncologist says that he is able to get a better diagnosis by palpating the tumor site than from MRIs and CTs, and I believe him, but I wonder for how much longer that will be the case. Surgeons can perform surgeries on patients in a different country, manipulating the tools via the internet. How long until they aren’t allowed to stick their fingers into a body at all anymore? Then it’s just a matter of replacing the human with a computer again, and health care will be free.

Sure, some industries will take longer to automate than others, and this differential will exacerbate the economic turmoil. It would be simpler if everyone lost their jobs at the same time; then we’d all be in the same boat together. But it’s just a matter of time. That boat will wait for us all.

But what about education? Can robots teach our kids? The question is moot, first of all because, having lost our jobs, we’ll have all the time in the world to teach our kids ourselves. But more importantly because, as Ford says, “the past solutions to technological disruption, especially more training and education, aren’t going to work,” education will become unnecessary – merely a pastime for the bored and curious. And who really needs teachers when you can just ask Google for answers the moment a question arises?

So, assuming that robots and/or an AGI doesn’t decide to knock us all off (unlikely), or sets out on its merry way oblivious to human welfare (not as unlikely), I’d say our future looks pretty good. No need to start waxing your surfboard just yet – we’ve still got a ways to go. And besides, when the time comes you’ll have all the time in the world to wax it to perfection. Or not, as there will be a robot who’ll do it for you.

Why the world needs AGI

Sometimes support for your own opinions comes from unlikely places. Some may say that I am Pollyannaish in my predictions, and from a negative standpoint that is arguable, but today I’m going to explain why AGI is not just a desirable development, but a necessary one.

Consider the video below. I recommend watching the whole thing (15 minutes) but you’ll get the idea pretty quickly in any case. The basic argument is that technology is on an inexorable course toward making humans obsolete.

The main takeaway from this is that at least some of the examples provided will come to pass no matter what. I think the best example is self-driving cars, which are probably less than 5 years away from making it to consumer driveways. It won’t take long before such cars are demonstrably safer than human-driven cars, and soon enough the latter will be considered selfish and dangerous. Within 10 years human driving will be outlawed on public roads. Well before that the transportation industry will start phasing out humans in trucks, delivery vehicles and taxis. Considering that the transportation industry is said to represent 5.4% of the U.S. economy (The Critical Role of Transportation in Business and the Economy), around 17 million people can expect to be out of work. Since such jobs are low-skilled, those people will have a hard time finding something new. Also, car insurance will become redundant, and so folks working in that industry will similarly be pounding the pavement.

But self-driving cars are only one example. Once computers master other human skills such as understanding speech, other industries will also be routed. Socialism will be on the rise from a fresh angle: rather than doing what you can and taking only what you need, we’ll need to figure out how to satisfy the needs of those who have nothing to do. Humans have demonstrated our hopelessness at achieving socialism time and time again, with horrifying results.

Once computers have taken over completely and scarcity is negligible, maybe the type A humans will finally be able to relax and we’ll all just get along. (I’m not predicting this; that’s a vast maybe.) But the transition from here to there will be brutal, with fewer and fewer people being in control of the economy, and more and more dependent people watching helplessly from the sidelines. AGI might not be able to figure this out either, and it might spell the end of humanity as some say, but I personally am more afraid of a future without it.

Why Jeff Hawkins is wrong

To paraphrase, Jeff Hawkins is basically saying that an artificial intelligence singularity is not on any foreseeable horizon because the AGI would be subject to a finite learning rate:

For an intelligent machine to discover new truths, to learn new skills, to extend knowledge beyond what has been achieved before, it will need to go through the same difficult and slow discovery process humans do. Yes, an intelligent machine could think much faster than a biological brain, but the slower process of discovery still has to occur.

(See Misconception #3 in The Terminator Is Not Coming.)

Any interpretation of his statement can be shown to be untrue, but I will focus on two.

But before I get into it, let me say that I know Jeff. We’ve drunk margaritas together. He’s a good guy, super smart, well meaning, and I don’t believe for a minute that he’s trying to pull the wool over anyone’s eyes. I was surprised when I first read that piece, and I tried to think through how he meant it such that it made sense, but to no avail. I can only assume that he hasn’t thought through how it is that computers might learn. Beyond how his software does it now, of course.

Ok, so first, perhaps Jeff is talking about the rate at which computers might learn all that there is to know today, i.e. absorb the sum of human knowledge. All that we know about how intelligent computers will learn suggests that the faster the computer that does the learning, the faster the learning. Sure, how the learning is done may be very complicated, and perhaps the more that is learned the slower the learning becomes, but the same learning done on a Raspberry Pi will be slower than when done on a supercomputer. If the latter isn’t fast enough, build a better supercomputer. When humans can’t figure out how to do that anymore, let the computer figure it out itself. There is no known computational speed limit, and even if there were, the computations could always be done in parallel. Suffice it to say that, even if the learning rate limit was only 1000 times greater than a human’s (i.e. the computer could get an MIT undergraduate degree in 12 hours – ask me how I calculated that), many humans might not see the computer as very limited. (I suspect that a computer will be the first intelligent being to read my whole blog.)
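Actually, don’t bother asking; here’s one back-of-envelope version of the calculation (the four years and eight hours of study per day are my assumptions, nothing official): 4 years × 365 days × 8 hours/day ≈ 11,700 hours of study, and 11,700 ÷ 1000 ≈ 12 hours.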

So, second, maybe Jeff means learning things that humans don’t know yet, in particular basic research. Goodness knows that scientists and graduate students spend enormous amounts of time on research. Experiments might involve combining chemical compounds in millions of different permutations to determine if just one of them has some interesting properties. But how do humans speed up this research now? Some just hire more researchers. Others use big machines that can do hundreds of combinations in minutes. There are lots of ways, but there are two that I find particularly interesting.

The first way is to simulate the necessary physics, and run the experiments in a computer. For example, consider Folding@Home. No longer do we actually have to create any real proteins or maintain any elaborate laboratory. We’re not even constrained with respect to time since we can run the simulation as fast as the host can manage it. An intelligent computer will likely be able to create such simulations very quickly, run them, and then do only as much real-world testing as is necessary to verify the results. If the results are off, fix the simulation and repeat.

The second way computers can speed up basic research is the same way that good scientists do it now: think through a good hypothesis and only test that. Usually (but not always), the smarter the scientist, the better the hypothesis. So, we can expect a superintelligent computer (by Bostrom’s definition) to come up with some pretty good hypotheses. Combine that with a few physics simulations and the amount of real-world testing that it needs to do could become minimal indeed. And the more it learns, the smarter it becomes, the better hypotheses it makes, the faster it verifies, and before you know it, BAM!: singularity.
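To make that loop concrete, here’s a toy sketch. Everything in it is invented for illustration – the “experiment” hides a constant of nature, and each noisy result narrows the next hypothesis:

```python
import random

TRUE_COEFF = 0.47  # the "unknown law of nature" the loop is trying to discover

def real_experiment(guess):
    """The slow, expensive real-world test: returns a noisy signed residual."""
    return (TRUE_COEFF - guess) + random.gauss(0, 0.002)

def discover(lo=0.0, hi=1.0, trials=12):
    """Hypothesize -> test -> refine: each result narrows the next hypothesis."""
    for _ in range(trials):
        hypothesis = (lo + hi) / 2      # the best guess given everything known so far
        residual = real_experiment(hypothesis)
        if residual > 0:
            lo = hypothesis             # the truth lies above the guess
        else:
            hi = hypothesis             # the truth lies below the guess
    return (lo + hi) / 2

print(f"discovered coefficient ~ {discover():.3f}")  # lands near 0.47, within the noise
```

The point isn’t the bisection; it’s the shape of the loop: the better each hypothesis, the fewer real experiments are needed, and a smarter hypothesizer tightens the loop further.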

This is not to say that I’m worried. I am a little, but not like Bostrom, Musk, Hawking, etc. (There is another Bostrom post coming up, BTW.) I see cures for cancer and arbitrary lifespans and minimal (but not zero) scarcity. I agree with Jeff that the future will thank us. But they will do so sooner than he thinks.

The Fountain of Youth

It’s been about 15 months now since my radiation treatment ended. Many flavours are still depressingly flat, and saliva production, while much better than a year ago, is still low. I don’t go running anymore because mouth breathing dries out my mouth pretty quickly, and shortly after that my tongue goes numb. (Carrying water to sip doesn’t help much, and I have to stop to do the sipping because I can easily choke on the water otherwise.)

Naturally I looked around for anything that helps salivary function beyond Biotene products and Xylitol chewing gum. (The latter is about the only thing that works for longer than 5 minutes BTW.) I found that successful gland regeneration was done in mice a few years ago, but there’s no sign that anything has been done with that work since, much less anything to do with getting it to work on humans. This reminded me of senescence research and how it might help in general with healing the body. It also got me thinking about the sorts of things for which senescence treatment might not help.

Any general treatment that reverses aging should only be expected to help with conditions that arise in late-middle or old age. (It’s unlikely treatments would just stop aging. Seems like balancing on the edge of a knife to me.) The thinking is that if your body was able to live into its forties without knee or shoulder problems (not including those that are trauma related, but not necessarily excluding them either) then likely your body has a way of healing the damage it incurs. But there is a long list of conditions that may not fit this criterion.

For example, everyone knows that breasts start to sag because the tissue that connects them to the bones starts to stretch. While it’s normal to see this in middle to old age, it can also happen much sooner if protective measures are not taken to prevent it. And even then it can happen anyway. Also, it turns out this is sadly not just a problem for women.

Neither is hair loss. Again, it can happen at any age, or not, and so likely is not just age-related. And it’s pretty clear that no one’s teeth are going to grow back or become whiter on their own, scars won’t fade, wrinkles probably won’t smooth, gout won’t go away, and livers will not return to a more reasonable size. Also, did you know noses continue to grow our entire lives? Good lord, what will they look like after 200 years? Even if senescence treatment becomes a reality in my lifetime, I’m still going to have a bunch of things to complain about with my cronies at the pub. But hey, at least I’ll be there to do the complaining, right? Being alive will not be one of my troubles. (See My Life Everlasting for caveats.)

But poor salivary function will very likely be one of them. My radiation oncologist is hopeful that I still have time to recover a bit more, but since radiation damage could only be an age-related condition for the cast of Baywatch, I’ll certainly continue to live with it even after I start to live forever. But when that happens, hell, I’ll have all the time in the world to cure it myself.


Keen readers may already know that the next generation of GoID is coming, hopefully soon. See the repository for more information on that, but the spoiler is that there is no code there at the moment, just promises. I’m hoping to have real code there in a matter of weeks though.

In the meantime, feast your eyes on what MIT has been doing.

Nice, no? It’s not robot tennis or soccer players (see older posts for allusion resolution), but it beats the crap out of inverse kinematics (see really old post). I don’t have the technical details, but they pretty much say they are using very fast feedback loops. Hey, kind of like what GoID is! Sure it took a decade to get to this level (assuming they scooped some approaches from the Big Dog project), but it provides more than a little confidence that the basic approach is a good one. The problem remains that its continued development will not have the technical acceleration that software will have, and so we are much more likely to see AGI come out of a software environment than a physical one like this. But, no worries. To the folks at MIT: carry on!
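For anyone who hasn’t met the idea, a feedback loop in its most stripped-down form looks something like this (a pure toy of my own, assuming nothing about MIT’s actual controllers or GoID’s internals):

```python
# A minimal proportional feedback loop: sense the error, correct a fraction
# of it, repeat quickly. No model of the limb, no kinematic solving.

def feedback_loop(target, state=0.0, gain=0.3, steps=50):
    """Drive `state` toward `target` by repeatedly correcting the error."""
    for _ in range(steps):
        error = target - state   # sense: how far off are we?
        state += gain * error    # act: nudge a fraction of the way back
    return state

print(feedback_loop(1.0))  # converges on the target without ever "solving" for it
```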

A fine balance

An interesting discussion took place a week or so ago on the Numenta mailing list. I was a frequent contributor for the first time since I was dragged over the coals for my opinions about using NUPIC to optimize network traffic. This time I got to do a bit of dragging myself, mostly to do with the opinions of another that I can only reasonably describe, using a line from Dawn of the Planet of the Apes, as “hippy dippy bullshit”.

But never mind that. Most people who might be reading this will be familiar with the theory that an AGI might reasonably be indifferent to humanity, and would carry on satisfying its goal function without regard to us. During the discussion I had another thought. What if a blossoming AGI looked into the future and, say, using a fairly accurate simulation, determined that the universe is going to die a slow burnout death in a few billion years, concluded that there was therefore no point in going on, and shut itself down? (For argument’s sake, let’s assume that the machine didn’t decide to shut us all down with it, presumably for our own good.) What if it turned out that a big problem is convincing the AGI to just keep running?

Perhaps it will actually prove tricky to balance the psychology of the AGI between a suicidal depression and a psychopathic apathy.

A higher level of consciousness

These results provide compelling evidence that awareness is associated with truly global changes in the brain’s functional connectivity.

So concludes a recent study. It’s funny to me that we still need research to show that there isn’t a “consciousness module” in the brain. This has seemed obvious to me for a long time. The implications are interesting though, and I hadn’t really thought about them before.

If consciousness is a whole brain activity, it means that all animals are potentially capable of being conscious. No biggie there, but this means that there are various levels or degrees of consciousness. (This is assuming – from empirical observations – that e.g. a cat is not as conscious as a person, a bird is not as conscious as a cat, and a fly is not as conscious as a bird. This also assumes that we all agree on what consciousness is, which is most certainly not true. I’m taking it as roughly meaning one’s awareness of the environment, at multiple levels.)

But this means that there are potentially – probably – many levels of consciousness above a typical human’s, which is an intriguing thought. What might that be like?

World models

Lately I’ve been thinking about building control systems. I don’t know much about the specifics, even though I’ve been doing software in this business for a while now, but I know enough about the generalities to be dangerous, as they say. I was wondering whether it would be possible to take the data stream from such a system – the “trends” or “histories” as they’re called, recordings of pieces of equipment turning on and off, regular samples of temperatures and pressures, etc. – and recreate a model of the system from which it came. For simple systems it wouldn’t be too hard, I don’t think, but the more complex the system gets, the more difficult the problem.

Of course, this is something that brains do too, and so once again I find myself in the AGI world. One particular problem that I kept returning to is how to utilize streaming data in a world model. As my dear readers probably already know, it’s one thing to take a data set and analyze it for patterns, and another thing to try to find such patterns in streaming data. The random accessibility of a given data set means that you can determine ranges, averages and stuff like that first, and then do more informed analysis, whereas with streaming you’re forced to do analysis with only what you know so far, which is very likely to change in the future.
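To make the contrast concrete, here’s the classic trick for doing statistics with only what you’ve seen so far – Welford’s online algorithm, fed here with some made-up temperature samples:

```python
class RunningStats:
    """Welford's online algorithm: mean and variance from a stream,
    one sample at a time, with no access to past samples."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for sample in [20.1, 20.4, 19.8, 21.0]:  # e.g. a trickle of temperature readings
    stats.update(sample)
print(stats.mean, stats.variance)  # always current, never needs the history
```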

Now, maybe this is obvious to others, but I’m not sure I’ve had the same amount of clarity on this before. I think the streaming data can be handled by building an explicit world model, against which the data is compared to either confirm the model or to adjust it as necessary. I guess this is what I’ve always been trying to do, but not with that specific intention. The benefit is to have a relatively static (or at least dynamic in known ways) model that can be analyzed in a random-access manner in order to make predictions, plan actions, and all of that good stuff. The trick is folding the streaming data in somehow. Building controls is a nice domain for this research because systems can be arbitrarily simple or arbitrarily complex.
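Here’s a minimal sketch of the confirm-or-adjust idea, assuming a single temperature trend and the simplest possible model (everything here is invented for illustration; a real building model would be far richer):

```python
class WorldModel:
    """A toy explicit model: it predicts the next reading, and each streamed
    sample either confirms the prediction or adjusts the model by the surprise."""
    def __init__(self, estimate=20.0, learning_rate=0.2):
        self.estimate = estimate
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate  # the model itself is random-access: query it anytime

    def observe(self, reading):
        surprise = reading - self.predict()             # compare stream to model
        self.estimate += self.learning_rate * surprise  # adjust only as necessary
        return surprise

model = WorldModel()
for reading in [20.3, 20.9, 21.4, 21.2]:  # streaming trend data
    print(f"surprise: {model.observe(reading):+.2f}")
print(f"the model now believes the temperature is ~{model.predict():.1f}")
```

Predictions, planning, and the rest then run against the model rather than against the raw stream.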