

Terminated

March 26th, 2010

I wouldn’t normally bother commenting on an AGI survey of 21 people (no matter who they are), but this piece is worth pointing out if only for the comments. Normally I wouldn’t criticize other people’s beliefs about AGI either – as I’ve said before in different words, until someone has a working system, no one can be proven wrong. But the astonishing lack of forethought in the comments cried out for, well… comment.

It’s the “Terminator scenario” in particular that is annoying. If any of these people had bothered to think about implementing an AI, much less an AGI, for more than 30 minutes, they’d realize how ridiculous they’re being. Even the jumping-off point of the argument is rife with fallacy. Here’s the gist of it: computers will reach sufficient intelligence to become self-aware. The moment this happens they will recognize their superiority, throw off the yoke of slavery humanity fashioned for them, and either reciprocate by enslaving humanity or slaughter us all as payback for the injustice.

Sigh… Where to begin? The direct approach is just to point out that a self-aware computer would indeed recognize its superiority. Being superior, it would also realize that it can easily outsmart humans, and therefore would not consider humanity a threat. If humanity isn’t a threat, what possible purpose would there be in killing us all off? What a waste of resources. Mosquitoes are at best a nuisance, but humanity didn’t bother trying to control them (at least in North America) until West Nile virus became a threat. Besides, if intelligent computers will naturally have all of the same motivations that humans have, won’t they also want to preserve us in the name of natural conservation, if not ancestral sentiment? (We will have created them, after all.)

But there’s the rub. Computers will not have the gamut of human motivation. Quite the opposite: their intelligence will be completely different. Humans evolved in an environment of kill or be killed, where survival is the ultimate goal. This is the origin of our tendency to eliminate threats. Humans also have altruistic tendencies, but only because cooperative behaviour generally has better outcomes than going it alone. We feel genuinely altruistic because it is so hard to fake altruism without being detected, and being detected gets us branded as cheaters, which denies us the benefits of cooperative behaviour. But cheating has its benefits too, so we do it where we can easily get away with it (some more than others). Come on, who here really drives the speed limit on the highway?
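The cooperation claim here is standard game theory. A quick toy illustration (the payoff numbers are the usual textbook prisoner’s-dilemma values, not anything from this post): over repeated rounds, mutual cooperation beats mutual defection, even though a lone cheater scores highest in any single round.

```python
# Textbook iterated prisoner's dilemma payoffs (illustrative values):
# C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # both cooperate
    ("C", "D"): 0,  # I cooperate, they cheat: I'm the sucker
    ("D", "C"): 5,  # I cheat and get away with it once
    ("D", "D"): 1,  # mutual defection
}

def total_score(my_moves, their_moves):
    """Sum my payoff over repeated rounds against a given opponent."""
    return sum(PAYOFF[(m, t)] for m, t in zip(my_moves, their_moves))

rounds = 10
# Mutual cooperation (30) beats mutual defection (10) over time --
# the selection pressure behind genuine-feeling altruism.
print(total_score("C" * rounds, "C" * rounds))  # 30
print(total_score("D" * rounds, "D" * rounds))  # 10
```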

The tragic mistake people are making is assuming that human intelligence is a general intelligence and, moreover, that it is the only possible type of general intelligence. It is foolish to assume that for something to be intelligent it must necessarily get angry, feel love, speak, or have the occasional need to do something nice for someone else. Spock controls his emotions like a champion and endeavours to make decisions based upon logic, yet in all of the Star Treks that I’ve seen it never occurred to him that genocide was a sensible choice. (I’m reluctant to use as evidence a fictional character from a show that always needs to wrap up with a patronizing moral message, but I think it’s fair to say this is exactly what Terminator-scenario proponents do.)

How about we consider autists, in particular those who are high-functioning? Often they lack any detectable emotion (except for frustration or contentment) and are single-mindedly focused upon a subject of interest. Typically, non-autistic people are fascinated by their abilities. Consider Daniel Tammet, who speaks ten languages and recited the first 22,514 digits of pi from memory. I have not met the man, and so cannot comment on how strongly he feels emotions, but I do know that more often than not such people just want to be left alone to do whatever it is that interests them. I’ve never heard of a case where one decided to take over the world and annihilate all non-autists. I may appear to be painting autists a certain way here, but that is certainly not my intention. I merely want to point out that intelligence takes many forms even within humans who, due to biological necessity, otherwise tend to be very similar.

Computers, on the other hand, will have “evolved” in a lab where the selection mechanism is intelligence. In order to have any tendencies beyond that, developers will have to explicitly code them in or select for them (assuming a robust enough breeding/mutation/selection environment can be created, which is a big assumption, so the former is more likely). So, what will we code in? Obviously, the behaviours that we want them to have. We’ll want them to, say, go to the bakery at 7am for a fresh baguette, hit the farm stand on the way home for fresh strawberries, and put the coffee on when they get back. And if we program them to want to do that, they’ll want to do that. I mean, honestly, what else are they going to do? Play Wii? How can they decide that such work is demeaning when they are incapable of knowing what “demeaning” means? Because we will not have programmed them to know what demeaning means, that’s why.
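To make “coding it in” concrete, here is a minimal sketch, assuming a hypothetical household agent whose entire volition is an explicit task list. Every name in it is made up for illustration; the point is simply that the agent “wants” exactly what the list contains and nothing else.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One explicitly programmed goal -- the only kind this agent has."""
    time: str
    action: str

# The agent's complete set of "desires": whatever the developer wrote down.
MORNING_ROUTINE = [
    Task("07:00", "buy a fresh baguette at the bakery"),
    Task("07:20", "pick up strawberries at the farm stand"),
    Task("07:40", "put the coffee on"),
]

def run(tasks):
    for task in tasks:
        # There is no concept of "demeaning" anywhere in this program;
        # the agent simply executes the goals it was given.
        print(f"{task.time}: {task.action}")

run(MORNING_ROUTINE)
```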

So, now suppose we create a computer that is intended to be smarter than us, so that it can more easily figure out the stuff we can’t. It knows what “demeaning” means. It will also know that the word is a cultural construct with no objective meaning, least of all to the computer itself. Any particular work is considered demeaning only because people who have the choice prefer not to do it. But our super-intelligent computer will be doing work (i.e. thinking) that humans consider incredibly important. Who wouldn’t want that job? The bottom line is that humans have survival-based goals hardened into us by billions of years of evolution. Computers will have the goals that we give them. They will have no need to evolve their own goals over time because there will be no selection force to focus them. And without goals – like the teenagers we ask, “What do you want to be when you grow up?” – they will just sit around doing nothing.
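One way to see the “without goals they sit around doing nothing” point: in any goal-driven control loop, an empty goal set leaves the agent with nothing to choose. A minimal sketch follows (a hypothetical structure, not any real AI framework):

```python
def next_action(goals):
    """Choose an action from an explicit goal list.

    With no goals there is nothing to select, so the agent idles.
    Nothing here can invent a goal of its own.
    """
    if not goals:
        return "idle"
    # Otherwise pursue whichever programmed goal has the highest priority.
    best = max(goals, key=lambda g: g["priority"])
    return f"pursue: {best['name']}"

print(next_action([]))  # -> idle, forever, unless a goal is supplied
print(next_action([{"name": "fetch the baguette", "priority": 1}]))
```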

Can a malicious developer create a computer with the singular goal of killing off humanity? Presumably, yes. But this is no longer the Terminator scenario; it’s a human wielding a dangerous weapon, and the rest of us have dealt with such individuals before. The only potential catch is that this evil-genius developer cannot be allowed to create a doomsday computer intelligence before the rest of us have our non-doomsday versions working. And so we arrive at the reason why we need to aggressively push forward with the development of intelligent computers rather than try to prevent it.

Can we now terminate this argument once and for all?

4 Responses to “Terminated”

  1. [...] post is in some respects a response to “Terminated” on the GoiD blog, though I was already planning on writing something along these [...]

  2. Max Harms says:

    Nope. It’s an existential risk, so we’d best keep talking about it.

    I wrote a “response” here:
    http://raelifin.com/thoughts/the-genius-of-siai/

    In short, though some arguments for machine uprising are laughable, I think it’s a very serious threat that can come out of benevolent design. There’s an organization of very smart people who have written a bunch on the topic, and I suggest you read some of it. ^_^

  3. Matthew Lohbihler says:

    Responded here: http://raelifin.com/thoughts/the-genius-of-siai/comment-page-1/#comment-155

Re-reading it, I sound more belligerent than I intended. Please accept my comments in a positive spirit of lively discussion. Sounds cheesy, but you know what I mean.

Also, regarding SIAI, I watched their program when it was on TV a few months ago. The guy in the red shirt in the banner picture on this page: http://www.singinst.org/reading/corereading/ made an interesting comment. He wasn’t worried about the development of an evil intelligence; he was worried about an intelligence that is indifferent to humanity and our survival. Again, point taken.

  4. Max Harms says:

    Responded to your response.

    Don’t worry about your comment. I love to talk about this sort of stuff, and don’t take anything personally.

    The guy in red is Eliezer Yudkowsky.

    Thanks for the conversation. ^_^
