

AGI Leapfrog

March 21st, 2014

Call it crazy or ignorant if you want. For a long time i purposefully avoided learning the technical details of narrow AI implementations. I stayed happily unaware of machine learning algorithms and their motivations. I avoided reading case studies of successful narrow AI work (which means not reading anything of the sort, because there are no case studies of successful AGI work).

The reason for this is: no narrow AI work has ever achieved anything near an AGI, so clearly there is nothing to learn there. Moreover, i didn’t want knowledge of AI/MI techniques to guide my own attempts at solving AGI, thereby falling into the same traps as other researchers. An obvious fault with this thinking: if i don’t know AI/MI techniques, how do i know i’m not just re-inventing those wheels? Well, it turns out that in some ways i did, but the stubborn insistence on temporal situatedness appears to have been enough to remain substantially original.

Lately i’ve decided i’m tired of building web applications, and want to get more into data science (KNN) instead (at least as a means of paying the bills). So far it seems to have been the right choice. Thanks to kind folks who have generously offered me data to play with, i’ve probably spent the happiest week or two of my last two or three professional years. As part of learning about data science though, i’ve learned the technical details of several MI algorithms. Nothing very surprising, and indeed – in particular in the case of nearest-neighbour searching – i did reinvent some wheels (albeit with some interesting additions of my own, if i say so myself).

Anyway, a friend and i got to talking about KNN over lunch the other day, and an interesting idea came up. As we all know, we’ve had some difficulty recreating human intelligence in a computer. We’ve had a lot of success in creating narrow AI though. Lately, with the proliferation of big data over the past decade or so, it seems we are seriously banging up against the limits of human intelligence. Human intuition, as powerful as it is, no longer trumps the revelations that computers can discover within vast quantities of data. When doing supervised modelling the computer is really just confirming a relationship that human intuition has already suspected, so it’s not terribly interesting for the purposes of our lunch topic.

More interesting is unsupervised modelling, in which algorithms are sent off on their own to discover unknown relationships. Even in relatively simple data sets, the relationships that are discovered (say, using clustering) are not necessarily semantically meaningful to humans. It takes people with a deep knowledge of the data and its related field to look at the clusters and try to label them somehow. But often labels are elusive.
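
To make that concrete, here is a toy sketch in plain Java: a bare-bones k-means happily finds clusters in made-up data, but the output is just anonymous group indices, and whether they mean anything is left to a human who knows the data. The data, the choice of k, and the iteration count are all arbitrary assumptions for illustration.

```java
import java.util.Arrays;

// Plain k-means over made-up 2-D records. The algorithm finds groups readily
// enough, but nothing in the output says what the groups *mean*.
public class UnlabelledClusters {
    public static void main(String[] args) {
        double[][] points = {
            {1.0, 2.1}, {1.2, 1.9}, {0.8, 2.3},   // one blob
            {5.1, 5.0}, {4.9, 5.3}, {5.2, 4.8},   // another blob
            {9.0, 1.0}, {9.2, 0.8}, {8.8, 1.2}    // a third
        };
        int k = 3, iterations = 20;
        double[][] centroids = { points[0].clone(), points[3].clone(), points[6].clone() };
        int[] assignment = new int[points.length];

        for (int it = 0; it < iterations; it++) {
            // Assign each point to its nearest centroid.
            for (int i = 0; i < points.length; i++) {
                double best = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = sqDist(points[i], centroids[c]);
                    if (d < best) { best = d; assignment[i] = c; }
                }
            }
            // Move each centroid to the mean of its assigned points.
            for (int c = 0; c < k; c++) {
                double sx = 0, sy = 0; int n = 0;
                for (int i = 0; i < points.length; i++)
                    if (assignment[i] == c) { sx += points[i][0]; sy += points[i][1]; n++; }
                if (n > 0) { centroids[c][0] = sx / n; centroids[c][1] = sy / n; }
            }
        }
        // The clusters come out as anonymous indices; no semantics attached.
        System.out.println("assignments: " + Arrays.toString(assignment));
        System.out.println("centroids:   " + Arrays.deepToString(centroids));
    }

    // Squared Euclidean distance (fine for comparisons).
    static double sqDist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }
}
```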

Does this perhaps indicate that the machines have a better understanding of the data than humans? It’s a dodgy term to use, i admit. Saying that a clustering algorithm “understands” anything is a non-starter. And my dear readers know that if the software isn’t temporally situated i’m not going to give it any human qualities. But more and more KNN implementations do in fact run in real time. Credit card fraud detection systems have been doing so for decades. The ads that you see on web pages are an obvious example. The coupon emails you get when you enter grocery stores are another. And let’s not forget automated stock trading, which doesn’t always behave entirely in our interests.

These are relatively simple examples, but more and more of these kinds of systems will be built, with greatly increasing sophistication. Is software just going to leapfrog humans and achieve intelligence in areas that we can’t understand? Will we even know if these systems are intelligent, if we can’t even understand them? Will we know what they are doing? Will we be able to communicate with them? Eventually, will we be able to control them?

Steps toward a solution

March 12th, 2014

I went for a run yesterday. I thought I’d take advantage of an unexpected mild winter day during this never-ending “Farch” to experience a few naturally-occurring endorphins before treatment starts in about a week. MF. Still a bit bitter about the whole thing…

Anyway, one of the old curiosities came up again. Have you ever noticed that, while walking or running, you can look ahead down the path, see something you don’t want to step in, and adjust your steps so that you neatly avoid it? I don’t mean simply walking around it. I mean that you shorten or lengthen your stride so that the thing-to-be-avoided falls pretty much equidistant between your feet. You can do this from probably 10 to 20 feet away (3 to 6 meters). I’ve tried it many times and the feeling is that it is pretty much automatic: I don’t know how I do it, but it works every time.

I suspect that something like what I talked about in the Pong post is at work, where essentially your brain does a lookup in a database of distance-to-go and stride-length-effort to moderate your pace. At each time step (whatever that is in a real brain) the current readings adjust the moderation (fixing sensor errors) so that when the critical moment arrives, the result is as near to perfect as it can be.

The work that I’ve been doing lately is on the concepts that I presented in the Pong post. (Let’s hereafter call it Pong+ just to differentiate it from the original game.) It’s tricky stuff, but with some effort to wrap your head around what needs to be done, I believe it could actually work. The hardest part – for my brain – is to get the temporal problem straight. One of the things that a Pong+ player has to do is figure out from a given trajectory where the ball is going to be when it can play a return hit. That’s the easy part. A harder problem is figuring out what return hit the player should make (based upon success or failure of previous returns). Much harder is figuring out how to get the player to where it needs to be in order to make the selected return play.

Let’s say the player has a single actuator: a single-dimensional force which pushes it up (positive value) or down (negative value). Recall that we’re using JBox2D here, so the player has position, velocity, and drag: linear for surface and non-linear for air. The point being that it’s probably too difficult to accurately calculate an instantaneous kinematics-style value for force that gets us to where we need to be. Let’s also say that we’re currently at position y=-30 with a velocity of -10 (let’s ignore units), and at the moment of impact with the ball we want to be at position 20 with a velocity of -20 (say, to apply some spin to the ball). We have to apply at least two force values over time: one positive to reverse our direction and get us near position 20, and a second to reverse our velocity so that we’re going -20 when we are at position 20. There are infinitely many solutions to this problem, but if we say that we will apply a single absolute force (e.g. 50 for the first part, and -50 for the second), and then constrain it all to happen within a certain time (i.e. a set number of time steps, which we need to do anyway to intersect our trajectory with the ball’s), then we can compute a solution. Or, probably more like real brains, we can look up the solution in our wet-ware database that we’ve built up from experience.
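
To show how the constraints pin things down, here is a minimal sketch in plain Java (not JBox2D): it integrates a crude linear-drag model forward and searches over the force magnitude and the step at which the sign flips, keeping whichever combination ends closest to the target position and velocity. The drag coefficient, time step, and time budget are assumptions for illustration only, not Pong+ values.

```java
// Fixed-magnitude force with a single sign flip: find the magnitude and flip
// step that land nearest the target (y, v) within the time budget.
public class BangBangSearch {
    static final double DT = 1.0 / 60.0;   // assumed time step
    static final double DRAG = 0.2;        // assumed linear drag coefficient
    static final double MASS = 1.0;

    // Apply +f for switchStep steps, then -f; return {finalY, finalV}.
    static double[] simulate(double y, double v, double f, int switchStep, int totalSteps) {
        for (int t = 0; t < totalSteps; t++) {
            double force = (t < switchStep) ? f : -f;
            v += ((force - DRAG * v) / MASS) * DT;
            y += v * DT;
        }
        return new double[] { y, v };
    }

    public static void main(String[] args) {
        double startY = -30, startV = -10;    // current state (from the example above)
        double targetY = 20, targetV = -20;   // desired state at the moment of impact
        int totalSteps = 120;                 // assumed time budget: 2 seconds at 60 Hz

        double bestF = 0, bestErr = Double.MAX_VALUE;
        int bestSwitch = 0;
        for (double f = 10; f <= 200; f += 1) {
            for (int s = 0; s <= totalSteps; s++) {
                double[] end = simulate(startY, startV, f, s, totalSteps);
                double err = Math.abs(end[0] - targetY) + Math.abs(end[1] - targetV);
                if (err < bestErr) { bestErr = err; bestF = f; bestSwitch = s; }
            }
        }
        System.out.printf("f=%.0f, flip at step %d, residual error %.2f%n",
                bestF, bestSwitch, bestErr);
    }
}
```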

The problem is that we need to know what force to apply at this time step, acknowledging that our sensors could be inaccurate, a gust of wind could change the ball’s trajectory (not in Pong+, but you see what i mean), or any number of other things could change the situation; say, seeing our doubles partner also going for the ball, in which case we should back off altogether. As such, the force that we apply now may not be the force that we apply at the next time step.

To solve this problem, I made two versions of the player: one that “practices”, and one that “plays”. The purpose of the first is to have the player explicitly experience the effect of applying forces over time, and at each time step record in an R*Tree what the effect was. For example, you might start an “experience” by giving the player a position and velocity and telling it to apply a force for some number of time steps, and then reverse the force for some number of steps. Having done this, choose another force and run the same experience. What you want to do is accumulate a database that answers this question: given a current position (x), current velocity (v), a target x and target v, and a number of steps in which to get there (t), what force (f) should be applied at this moment to get me there? The purpose of the second “player” version is to test that this database works. I.e. given a starting x and v, a target x and v, and a t, will a lookup at every time step and application of the force – recognizing that for whatever reason the resulting force values could be different each time – result in the player being where it needs to be, in both target x and v, when t finally reaches 0?
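
Here is a minimal sketch of that practice/play split in plain Java. A flat list with a linear nearest-neighbour scan stands in for the R*Tree, and the equal weighting of the five key dimensions is an assumption; the point is only the shape of the idea, not the actual Pong+ code.

```java
import java.util.ArrayList;
import java.util.List;

// "Practice" records what a given force did in a given situation; "play" looks
// up the nearest recorded situation at every time step and reapplies its force.
public class ExperienceDb {
    record Sample(double x, double v, double targetX, double targetV, int stepsLeft, double force) {}

    private final List<Sample> samples = new ArrayList<>();

    // Called during "practice": remember what was done in this situation.
    void record(double x, double v, double targetX, double targetV, int stepsLeft, double force) {
        samples.add(new Sample(x, v, targetX, targetV, stepsLeft, force));
    }

    // Called during "play", once per time step: what force did we apply in the
    // most similar remembered situation?
    double lookupForce(double x, double v, double targetX, double targetV, int stepsLeft) {
        Sample best = null;
        double bestDist = Double.MAX_VALUE;
        for (Sample s : samples) {
            double d = sq(s.x() - x) + sq(s.v() - v) + sq(s.targetX() - targetX)
                     + sq(s.targetV() - targetV) + sq(s.stepsLeft() - stepsLeft);
            if (d < bestDist) { bestDist = d; best = s; }
        }
        return best == null ? 0.0 : best.force();
    }

    private static double sq(double a) { return a * a; }
}
```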

Ultimately, practising and playing will be the same, as the player will learn from actually playing all that it should need to know. But we can now see why good tennis players seek out other players with styles with which they are unfamiliar: there is no other way to gather experience in the empty areas of their vast playing solution space.

Eet eez ah toomah!

February 25th, 2014

The irony is edible. I admit i had some suspicions around the time that i wrote My Life Everlasting, but there was still some credible opinion that arthritis or TMJD was to blame. The final diagnosis though is throat cancer; specifically a stage 3 HPV-related oropharyngeal squamous cell carcinoma. The “stage 3” part is a happy assumption, since the MRI still needs to be done to look for metastasis; that is, it could be stage 4. Regardless, the treatment is the same: about 6 weeks of image-guided radiation sprinkled with the occasional chemo cocktail party.

Seems appropriate for a sinner like me. But then i know of a guy (1 degree of separation) who had his spine severed in an industrial accident and will never walk again. I can think of some people who might deserve that, but i doubt this guy was one of them. So there’s no point to be made. It’s simply a matter of getting through. As Patrick Swayze said in Road House, “things will get worse before they get better”. (At least one very kind-hearted friend still likens me to Mr. Swayze, so quoting him here is partly due to that, and the rest due to his ultimate condition, although i hope i don’t quite share his fate.)

Forgive me for this post without a point, or even a punch line. Neither comes easily to me at the moment.

 

This is your brain on Pong

November 8th, 2013

Many, many times, I’ve sworn that I would give up AGI research and dedicate my time to things that are more productive. I’ve probably worked on nearly 100 projects. Some were bigger than others, but a quick calculation of the hours that I’ve probably spent coding, reading, thinking, or banging my head against the wall is depressing. None of the work, of course, produced an AGI, but I don’t actually feel too bad about that part because no one else’s work has either, to my knowledge. What’s upsetting is that I could have spent that time snowboarding, or learning violin, or building an internet company. And yet despite this, I’m no better than the village drunk who swears to never drink again: it seems like a good idea and even do-able the morning after a project has been given up, but sooner or later I’m drinking the AGI punch again.

Of late I’ve run out of shiny new ideas, and so I did a mental survey of what I’ve already tried with the intent to pick what was probably the most successful of my projects and try to expand upon it. GoID players may already be familiar with what I chose: the robotic arm. (This is one of the GoID tasks. Read the description there for the gory details.)

The essential problem is that you need to get the robotic wrist to a particular Cartesian point, only using angular impulses to do so, compensating for gravity, and you need to do it quickly and efficiently. The sensor values that are provided are accurate enough that inverse kinematics could be used, but this is supposed to be an AGI problem, and no one believes that real life uses kinematics, so we’ll have none of that here thank you. The solution that I wrote (which is an enhanced version of the sample script) uses experience gathered by brute force and ignorance to build up a mapping of pairs of “where I am” and “where I want to be”s, against “what I need to do to get there”s. This approach creates a multi-dimensional problem space that needs to be reasonably populated with samples before it becomes useful. It takes a fair amount of time to search the space, and it takes a great deal of memory to remember all of the samples. Regardless, the result is that this relatively simple script starts off mostly just flailing about, but in time is realistically reaching for targets with determination. For those of you with children, I won’t need to make the obvious analogy.
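
For flavour, here is a rough sketch of that explore-then-exploit loop in Java. The Arm interface and its method names are hypothetical stand-ins, not the actual GoID API, and the nearest-neighbour scoring is an assumption; it is a simplified illustration of the idea, not the script itself.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// During exploration the arm flails with random impulses and every observed
// (before, after) pair is stored; when reaching, the stored sample whose "before"
// matches the current state and whose "after" is closest to the target is replayed.
public class FlailAndReach {
    interface Arm {                        // hypothetical stand-in for the GoID arm
        double[] readAngles();             // current joint angles
        void applyImpulses(double[] imp);  // angular impulses, one per joint
        double[] wristPosition();          // Cartesian wrist position
    }

    record Experience(double[] anglesBefore, double[] wristAfter, double[] impulses) {}

    private final List<Experience> memory = new ArrayList<>();
    private final Random rng = new Random();

    // Exploration: apply random impulses and remember what they did.
    void practice(Arm arm, int trials, int joints) {
        for (int i = 0; i < trials; i++) {
            double[] before = arm.readAngles();
            double[] imp = new double[joints];
            for (int j = 0; j < joints; j++) imp[j] = rng.nextGaussian();
            arm.applyImpulses(imp);
            memory.add(new Experience(before, arm.wristPosition(), imp));
        }
    }

    // Exploitation: pick the remembered impulses whose starting pose is like ours
    // and whose outcome landed nearest the target.
    double[] impulsesToward(double[] currentAngles, double[] targetWrist) {
        Experience best = null;
        double bestScore = Double.MAX_VALUE;
        for (Experience e : memory) {
            double score = sqDist(e.anglesBefore(), currentAngles) + sqDist(e.wristAfter(), targetWrist);
            if (score < bestScore) { bestScore = score; best = e; }
        }
        return best == null ? new double[currentAngles.length] : best.impulses();
    }

    private static double sqDist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }
}
```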

And what are our brains but massive storage? Ok, a bit more than just that. But in particular our cerebellum, where movement plans get carried out moderated by the stream of sensory input, contains more than half of our brain’s neurons even though it takes up only a fraction of the space. I believe that this part of our brain uses a learning and execution strategy that is similar to what I implemented in the robotic arm task. Come on, after all my years of research, if only by probability, I had to get something right.

On a related note, I started playing tennis again, and so naturally began wondering how it is I learn to hit the ball over the net so that it lands in play. I mean seriously, when you think about it, what are the odds? Sure enough, when you watch beginners play they rarely even get the racket to touch the ball, much less hit a winner. And there are so many variables at work from the moment your opponent hits the ball to the moment you hit it back. (Some would even say from before your opponent hits the ball.) You use your projections of the trajectory of the ball to run into place and position your body. You watch very carefully as the ball hits the ground because that initial information about how it bounces is critical. You might even have already incorporated the ball’s spin – if you can make it out – into how it will bounce. Then you wind up for your swing, which again is taking the ball’s spin into account, if only subconsciously because things are now happening so quickly. But you also note from your peripheral vision where your opponent is so that you don’t make your shot too easy to return. And finally, you start your swing and hope for the best your muscle memory can do, because it’s impossible to make conscious decisions in the milliseconds it takes for the swing to complete.

In fact, the better you are, the fewer conscious decisions you make during an entire point. You might note that your opponent is in a weak position and rush to the net hoping to make a quick volley, all without conscious help because your subconscious has learned to recognize the combination of variables that leads to this winning conclusion.

You may be thinking that I want to build a tennis-playing robot. I do, but I won’t, although it would be so very cool. It would also be very expensive. (As you all know, all of my research is self-funded, which is to say not funded.) The plan is to create a version of Pong that bots can play. One key initial problem to solve is how a bot can learn movement patterns that play out in real time. So, initially the physical model will be trivial until mechanisms that solve such problems are worked out. Such a model might just have a mass-less ball that bounces off of walls and paddles at basic reflection angles. Improvements to the game could have a ball that can spin, or paddles that are rounded, or mechanisms on the paddles that can add or remove energy from the ball, or bots that can move horizontally from the baseline. The “real” environment should be complex enough that no bot is able to predict too far in advance with any accuracy. The intent is to build a bot that can eventually play the game with some skill that has been refined from continued experience.
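
The trivial starting physics is about this much code. A minimal sketch in plain Java, paddles left out and the court dimensions assumed, just to show what “basic reflection angles” amounts to:

```java
// A mass-less ball reflecting off the court walls; paddles omitted for brevity.
public class PongBall {
    double x = 0, y = 0;          // position
    double vx = 3, vy = 2;        // velocity per step
    static final double WIDTH = 100, HEIGHT = 60;

    void step() {
        x += vx;
        y += vy;
        // Basic reflection: flip the velocity component normal to the wall.
        if (x < 0 || x > WIDTH)  { vx = -vx; x = Math.max(0, Math.min(WIDTH, x)); }
        if (y < 0 || y > HEIGHT) { vy = -vy; y = Math.max(0, Math.min(HEIGHT, y)); }
    }

    public static void main(String[] args) {
        PongBall ball = new PongBall();
        for (int i = 0; i < 5; i++) { ball.step(); System.out.println(ball.x + ", " + ball.y); }
    }
}
```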

I suspect that the most interesting work will be in determining optimizations in the learning process, experience retention, and retrieval. It will probably be necessary early on to implement some manner of hierarchy so that general strategies of play can be established that break down into real movement plans.

But the first thing to do is create the simulation of the court. Box2D might work, but I don’t know yet if it handles things like spinning balls bouncing off of walls, or ball spin causing a curved trajectory.

Work in progress. Comments welcome.

My Life Everlasting

November 6th, 2013

For Halloween, my newspaper issued a special edition on death. It’s an excellent paper, and the columnists, journalists, designers, and the rest produced their usual thought-provoking, high-quality output. But despite this and many efforts to put the work into a Mexican-style death-is-a-celebration-or-at-least-not-so-bad kind of mood, the issue was still thoroughly depressing. Death was one thing when I was young; dead people were scary, but death itself didn’t seem so bad, if only because it was so distant, so remote to my reality, that it was only an abstraction. There was even a kind of romanticism about it that only the youthful can have: dying for love, dying a glorious military death, or sacrificing yourself for others.

But the worst part of the Halloween everything-about-death issue was the piece that tried to debunk senescence research, or more specifically research to prevent senescence. As much as I want to be in Kurzweil’s corner, especially regarding his timelines, it doesn’t seem to me as if research progress is accelerating as quickly as he hoped. (His predictions of life-saving technology emergence always seemed to strangely align with actuarial predictions of his moment of death anyway.) Still, considering people live perfectly well from the beginning of adulthood (say, 16 or so) until around mid-thirties – a good 2 decades – without much change, why shouldn’t this be able to continue indefinitely? There may very well be a good reason, but I doubt it. Eventually medical science will be able to halt aging – it’s just a matter of when.

There are other endless life possibilities, of course, such as uploading your consciousness into a computer. If medical science takes too long, I hope this alternative will be a possibility, but to be perfectly honest my preference would be to continue life as a human (albeit possibly enhanced, but that’s another matter). I’ve always been in good health and have enjoyed most aspects of living (high school being a notable exception), and I’m in no hurry to give it up, if it’s all the same to everyone else.

The annoying part about the you-are-going-to-die-deal-with-it article wasn’t the scientific debunking though. Lots of people have arguments about that which have never swayed my faith in science. The worst part was its claim that no one actually wants to live forever anyway. Oh sure, you think you do now, but look at all of the fictional characters that were immortal; they all just wanted to die. I mean, we’ve got the Struldbrugs, Tithonus, and let’s not forget the Wolverine. The first two are doomed to grow older and older forever. Well, what the hell, who would want that? But that’s not what we’re talking about, is it? The Wolverine is a category 5 X-Man, doomed with eternal youth and serious ass-kicking abilities. But, wait… Sorry, what’s so bad about that? That sounds like a lot of fun, actually. Yeah, he killed his girlfriend, it’s true, but at the end of the last movie (I watched it on the flight to San Fran) he didn’t seem to be too wigged out about that anymore. In fact, it seemed he had found a whole new purpose in life.

But I’ve fallen into the trap the article writer set: trying to base a real-life argument on fictional characters (something thoroughly ridiculous, but surprisingly common). You might say that in this case there’s no choice but to use fiction, since there are no real-world cases of people living forever, which is true. But, on the other hand, there are indeed cases of people who have chosen to end their lives. We label them suicidal. Sometimes we also use the term euthanasia, but a reasonable definition of that is to put someone out of their misery, like the Struldbrugs or Tithonus. Being suicidal is typically considered a case of mental illness in people who are otherwise healthy.

So, do people really want to die? Would no one choose to live forever, even if they could do so in health? Today this is still a rhetorical question, but to me the answer is obvious. Go ahead: choose the moment of your death. When will it be? When your kids are old enough to take care of themselves? When you’ve earned your ninth-dan aikido belt? When you finally own that house overlooking the ocean? Pick a date, maybe 20 years into the future, when the thought of death is still romantically abstract. But what will happen when that date is 6 months off, or 2 weeks off, or 1 day off, or today? Will you happily accept your demise? Or will you say: no, wait, I’m not done yet. I’ve still got some stuff to do. I need a bit more time. Who is it that neatly wraps up their lives in preparation for their death except those that live in pain, either physically or mentally? And remember, there is a big difference between those who cannot die, and those who choose not to die.

Me, if given the choice, I would choose to live indefinitely, even if not all of it is in perfect health. I would even be ok with the “doom” of endless life (i.e. no choice of death) if it were a reasonably healthy and free one. If I’m not locked in an isolation chamber or floating through space in a tiny capsule, I’m certain that I could keep myself occupied forever. And I suspect most other sane, healthy people would say the same.

Returning

November 6th, 2013

I apologize for being silent of late. Things have been hectic and uncertain, making the kind of peace of mind necessary for contemplative writing difficult to achieve. But thanks to some relative employment security i’ve been able to spend a bit of time staring upward while contemplating our shared neural structure, among other things. To keep this blog going, i’ve decided to expand its mandate to include topics beyond AGI that i hope will be interesting, or at least amusing, to you, my dear reader(s). So, stay tuned. As usual, comments are not merely welcome; they are compulsory. Don’t just sit there reading. Get your discussion on.

Why yes, the hotel i’m at does have a complimentary happy hour. Two hours in fact. Why do you ask?

Schankian Remindings

November 6th, 2012

Steven Pinker is a linguist, and so you might think it natural for him to be coining new terms regularly. But in fact, as he explained in The Stuff of Thought (which i’ve finally finished, and so this may be my last post on it – but don’t count on it), coinages rarely catch on, and so he knows enough not to bother trying. (Consider, courtesy of Rich Hall, “peppier”, n.: the waiter at a fancy restaurant whose sole purpose seems to be walking around asking diners if they want ground pepper. How perfect is that? I’ll try to remember to use it, but i probably won’t. If you just heard this for the first time, it’s because no one else did either.)

The truth is, i doubt that Pinker even meant to coin a term by invoking “Schankian Remindings”. Like so much else in his books, it probably just sounded clever or appropriate at the time. And sure enough, when i ask Google about it, the only real results are references to the book. (Making it not quite a googlenope, a rare coinage that did actually catch on.)

But i was so intrigued by the idea that this is now my official term for episodes in which one event reminds us of another. It was coined thus because Roger Schank gave some examples of them during a talk. Schank’s main area of interest appears to be learning, and i didn’t find any evidence of him exploring the idea too deeply. Pinker gives about three pages over to the topic, which is pretty insignificant in a book of 439 (without references and notes), especially when he justifiably labours over metaphors in general for 44. I’m inclined to rectify this, and i believe the easiest place to start is to reproduce the examples in the book.

Someone told me about an experience of waiting in a long line at the post office and noticing that the person ahead had been waiting all that time to buy one stamp. This reminded me of people who buy a dollar or two of gas in a gas station.

X described how his wife would never make his steak as rare as he liked it. When this was told to Y, it reminded Y of a time, 30 years earlier, when he tried to get his hair cut in a short style in England, and the barber would not cut it as short as he wanted it.

X’s daughter was diving for sand dollars. X pointed out where there were a great many sand dollars, but X’s daughter continued to dive where she was. X asked why. She said that the water was shallower where she was diving. This reminded X of the joke about the drunk who was searching for his keys under the lamppost because the light was better there even though he had lost the keys elsewhere.

While jogging, I was listening to songs on my iPod, which were being selected at random by the “shuffle” function. I kept hitting the “skip” button until I got a song with a suitable tempo. This reminded me of how a baseball pitcher on the mound signals to the catcher at the plate what he intends to pitch: the catcher offers a series of finger-codes for different pitches, and the pitcher shakes his head until he sees the one he intends to use.

While touching up a digital photo on my computer, I tried to burn in a light patch, but that made it conspicuously darker than a neighboring patch, so I burned that patch in, which in turn required me to burn in yet another patch, and so on. This reminded me of sawing bits off the legs of a table to stop it from wobbling.

A colleague said she was impressed by the sophistication of a speaker because she couldn’t understand a part of his talk. Another colleague replied that maybe he just wasn’t a very good speaker. This reminded me of the joke about a Texan who visited his cousin’s ranch in Israel. “You call this a ranch?” he said. “Back in Texas, I can set out in my car in the morning and not reach the other end of my land by sunset.” The Israeli replied, “Yes, I once had a car like that.”

In college, a friend and I once sat through a painful performance by a singer with laryngitis. When she returned from intermission and began the first song of the set, her voice was passably clear. My friend whispered, “When you put the cap back on a dried-out felt pen, it writes again, but not for long.”

Later in the book, Pinker provides another example, although he doesn’t identify it as such. He is speaking about how children’s names go in and out of fashion.

… Then the next generation of parents will react to the [now-common names] by looking for a new new thing… The dynamic is reminiscent of Yogi Berra’s restaurant review: “No one goes there anymore. It’s too crowded.”

Such remindings might not be much to write home about if they were idiosyncratic. If one only made sense to me, it could be explained away as a quirk of my own experience, or as evidence of mental issues. But each of the above examples, even though i know little or nothing about the people who originally uttered them, is meaningful to me in a way that is either funny or profound or both.

In a non-Schankian way, these remindings called to mind Stan Franklin’s presentation on sparse distributed memory (available here), and in general ways of creating relational memory systems. The problem of knowledge representation has been around since day one, i.e. how do you take the input to a system and store it so that it can be retrieved in useful ways. It’s been well known for decades that humans do not literally store visual input as if the brain were a movie camera. The input is processed through dozens of systems that decompose it into objects and relationships, tag these with labels and multiple types of timestamps, and then submit the result to multiple types of memory systems, in which the content may or may not stick. Memories are probably retrieved by snippets of the content, which cause the matching objects to activate, which causes their relationships to activate, which causes the relationships’ relationships to activate, eventually recomposing the original input, at least in its objectified version.
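
As a toy illustration of that retrieval-by-activation idea, here is a sketch in Java: memory as a graph of labelled nodes, where activating a few cue nodes spreads activation to their neighbours for a few rounds, and the most active nodes are taken as the recomposed memory. The decay factor and number of rounds are arbitrary assumptions, and real proposals like sparse distributed memory are far more sophisticated.

```java
import java.util.*;

// Memory as a graph of objects and relationships; a cue activates its
// neighbours, which activate theirs, recomposing the stored scene.
public class SpreadingActivation {
    static class Node {
        final String label;
        final List<Node> links = new ArrayList<>();
        double activation = 0;
        Node(String label) { this.label = label; }
    }

    final List<Node> nodes = new ArrayList<>();

    Node add(String label) { Node n = new Node(label); nodes.add(n); return n; }
    void link(Node a, Node b) { a.links.add(b); b.links.add(a); }

    // Spread activation from the cues for a few rounds, decaying with distance.
    List<Node> recall(List<Node> cues, int rounds, double decay) {
        for (Node cue : cues) cue.activation = 1.0;
        for (int r = 0; r < rounds; r++) {
            Map<Node, Double> incoming = new HashMap<>();
            for (Node n : nodes)
                for (Node m : n.links)
                    incoming.merge(m, n.activation * decay, Double::sum);
            incoming.forEach((n, a) -> n.activation = Math.max(n.activation, a));
        }
        List<Node> result = new ArrayList<>(nodes);
        result.sort((a, b) -> Double.compare(b.activation, a.activation));
        return result;   // most active nodes first: the recomposed memory
    }
}
```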

The question is: what are the bottom-line concepts that all humans understand that allow us to share things like Schankian remindings? We already know that all humans have a very similar sense of 3-dimensional space, and that we all separate humans from non-humans and animals from non-animals. We all have similar understandings of causation, and universally our relationships fall into the categories of sharing, ranking, and trading. So, it’s not far out there to assume that there are basic concepts into which we decompose ideas. And in the same way that we combine a finite set of words to express infinite ideas, the bottom-line concepts can be combined in novel ways to prevent us from falling into conceptual ruts. Also, i believe we can have a finite set of concepts (they could number in the hundreds or thousands) that we can decorate in order to give an idea just the right colour or texture out of a practically infinite set of choices.

So, what are the bottom-line concepts that humans use? If we knew, of course, we’d already be building relational memory systems that behave in ways similar to humans. The reason the Schankian remindings were so intriguing to me is because, i believe, they provide the necessary clues to solving the puzzle. The examples above each provide two scenes that are widely divergent in their sensory content; the ways in which they relate may reveal the ways that the brain most deeply stores the content. This is because it is only through at least one common feature that one scene can recall the other.

Take the case of the singer with laryngitis. We’d have to reach pretty deep into our trick bag to figure out how a woman reminds us of a felt-tip pen, and even if we did find something i doubt that any rationale would, for most people, pass the giggle test. So what is the similarity? There is an element of time to the story: the singer took an intermission; the felt pen had its cap put back on for some period. But time isn’t enough. There is also an element of recovery. So, perhaps the basic concept is recovery after a period of time. But then why didn’t the singer remind the hearer of a flu-sufferer, or of how a battery recovers its charge if you turn off the device for a minute or two?

I think the similarity has to be a bit deeper, or in each scene the core similarity has to be decorated with the same traits. The closer the trait match the better. In the case of the felt pen, the period during which the cap should be left on and the typical time for an intermission are similar. Plus, the length of time before the pen stops writing and the time until the singer’s hoarseness returns are also similar. Taking this second feature into account, perhaps the core concept is fatigue from exertion and recovery after time, something all humans are familiar with. By matching the respective time periods in each scene, we have the basis of our reminding. Even still, there is another decoration: the singer has laryngitis, and the felt pen is dried out. This comparison is not exact, because the singer will presumably recover from the illness, while the pen is done for (unless it is refilled, which no one ever does), but we can still finger infirmity as the concept, something else all humans know well enough.

I’ll go one speculative step further and suggest that the analogy gets better (funnier and more profound) the more the core similarity matches, and the less everything else does. Comparing the singer with laryngitis to a flu-sufferer is like calling a rock a stone: the comparison is trite. But comparing her to a dying battery is actually pretty good. The time periods are similar, and the infirmity is pretty much the same as for the pen. Actually, the battery compares better to the pen than either does to the singer, come to think of it.
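
Just to pin down the speculation, here is a toy scoring model in Java: two scenes remind us of each other strongly when they share a core concept whose decorations (things like the recovery times) match closely, while their surface content barely overlaps. The concept names, traits, and weights are all illustrative assumptions, nothing more.

```java
import java.util.Map;
import java.util.Set;

// A reminding is strong when the core concept and its decorations match
// closely while the literal surface content does not.
public class RemindingScore {
    record Scene(String coreConcept, Map<String, Double> decorations, Set<String> surfaceContent) {}

    static double score(Scene a, Scene b) {
        if (!a.coreConcept().equals(b.coreConcept())) return 0;   // no shared core, no reminding
        // Decoration match: how close the shared traits are (e.g. recovery times).
        double decorationMatch = 0;
        int shared = 0;
        for (var e : a.decorations().entrySet()) {
            Double other = b.decorations().get(e.getKey());
            if (other != null) {
                shared++;
                decorationMatch += 1.0 / (1.0 + Math.abs(e.getValue() - other));
            }
        }
        if (shared == 0) return 0;
        decorationMatch /= shared;
        // Surface overlap: shared literal content (a woman and a felt pen share nothing).
        long overlap = a.surfaceContent().stream().filter(b.surfaceContent()::contains).count();
        double surfaceOverlap = (double) overlap
                / Math.max(1, Math.max(a.surfaceContent().size(), b.surfaceContent().size()));
        // Better analogies: high core/decoration match, low surface overlap.
        return decorationMatch * (1.0 - surfaceOverlap);
    }

    public static void main(String[] args) {
        Scene singer = new Scene("fatigue-and-recovery",
                Map.of("restMinutes", 20.0, "relapseMinutes", 10.0),
                Set.of("person", "voice", "stage"));
        Scene pen = new Scene("fatigue-and-recovery",
                Map.of("restMinutes", 30.0, "relapseMinutes", 5.0),
                Set.of("pen", "ink", "cap"));
        System.out.printf("singer vs felt pen: %.2f%n", score(singer, pen));
    }
}
```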

So, the only question – at least for this example – that remains is: why should fatigue and recovery and infirmity be bottom-line human concepts? I think it’s obvious, but i’m in a pedagogical mood, so i’ll go on. It’s critical for all animals to understand at least their own physical limitations and recovery, and in the case of social animals, prey, and predators that of others as well. We gauge many of our actions, and our resolve, on our stamina vs that of our competitors and collaborators. And it is critical to be able to recognize infirmity as a source of both danger (don’t catch the disease) and opportunity (go after the wounded member of the herd because it will be easier to catch). Now, how these concepts get shaken out of visual input is a big question, but hopefully a more technical one once we agree that that is what we need to do.

Dear reader(s), i would be delighted if you could, 1) provide your own Schankian remindings, and/or 2) provide your analysis for those given here or provided by others. Maybe if there is enough interest in this topic – and i for one can’t imagine how there could not be, but i’ve been wrong before – a web site could be spun up to manage the information.

 

And another reason we don’t have AGI yet

November 1st, 2012

My Iraqi friend (whether he knows it or not) was on TV again tonight. God bless the man (his god or mine or whatever) because i always get an idea or two out of his broadcasts. For the record, his name is Jim Al-Khalili, and this time the show was called Shock and Awe: The History of Electricity. I can’t help but throw the man some props for his choice of show title, him being from Iraq and all, although i’m not sure if he attended the second run of Desert Storm or not.

Anyway, lately i’ve been playing around a bit more with the LIDA framework, which i’ve described in my last post. After several discussions with Ryan McCall (PhD student at U of Memphis and apparently the current primary LIDA maintainer), i’m still very hopeful about the potential of the work. But i do now, i think, have a better handle on where it stands. The framework as it is represents a good implementation of a good theory of cognition. It has at least interfaces for all of the major components, and in many cases concrete implementations of those components. It is an excellent start. Of course, there are components that are really just stubs at the moment, and other components that are very basic, and they will need work before any serious run at AGI (with LIDA) can be made. And even when such a run is made, many of the more complete components will need substantial rework, and it’s pretty much certain that the overall design of the framework will go through many iterations (which, each time, will have major implications on all of the existing components). All of which brought me to the conclusion that there is a lot of work to do. A lot. Like years. Decades even. Maybe lots of decades.

Kind of like electricity. I know that the typical AGI comparison is with human flight, but after watching only a bit of Jim’s show i wonder if electricity is a better one. Flight had all of its naysayers in the face of existence proofs and all, and people wondering what the point of it all could be anyway, but after only a bit of research i think the study of electricity provides an analogy that better parallels the profound scientific mysteries of the subject.

According to Wikipedia, the earliest recorded recognition of something like electricity was from ancient Egyptian texts that described the “Thunderers of the Nile” – now known as electric fish – back in 2750 BC. The Egyptians either killed all the little buggers off or shrugged their shoulders and stayed out of the water, because there doesn’t seem to be much that happened on the topic for around 4.5 millennia, until William Gilbert in 1600 described the difference between the lodestone effect and rubbing a balloon on his head. If we take this as the real start of electrical research, we need only decide on when we feel we had a decent handle on the topic to determine how long the whole effort took. Maxwell’s work in 1861/62 seems like a reasonable choice except that it was only theoretical. Again, Wikipedia provides great satisfaction to me if only by its choice of words: “… the late 19th century would see the greatest progress in electrical engineering” (my emphasis, on “engineering”).

We can then conclude that it took about 300 years to convert electrical study from research into engineering. Readers of my last post will recall that i made a big deal about the difference between treating AGI as research vs engineering. When i first started working with LIDA i had hoped that we were entering the engineering stage, but it appears my hope was premature. There is still a lot of research to do. Those who are frustrated that after over 60 years we still have little to show might take some comfort in knowing that something as mysterious as electricity, and relatively simple at that, took nearly 3 centuries for some very big-headed fellows to get their heads around.

Ray Kurzweil was most likely optimistic in predicting the existence of AGI by 2035. Which sucks, really, because i was very much looking forward to climbing into my Vanilla Sky reality simulation rather than being spoon-fed by R2D2 prototypes in a seniors’ home.

To round off… The other reason we don’t have AGI yet? It’s really, really hard.

The real reasons we don’t have AGI yet

October 22nd, 2012

I was as perturbed as anyone trying to help the AGI effort after reading the piece by David Deutsch with the same name as this post. Like many i thought he made a number of fair points, but many unfair ones as well. In particular, i think it’s odd to seemingly ignore the progress in psychology where people such as Bernard Baars have developed very compelling theories of mind and consciousness, as if to say that such things are not contributing to AGI at all, when in another sense this appears to be exactly what he says is missing. He refers to AGI needing a grand idea, but it’s unlikely that some researcher is going to solve the AGI problem with a shower-time epiphany. More likely we are doing a whole bunch of things wrong, and there will need to be a kind of aligning of the stars to get us going in the right direction.

Personally, i think a great deal of aligning was done by Baars, and Stan Franklin et al. at the University of Memphis’ Cognitive Computing Research Group (CCRG) have done the field a great service by implementing Baars’ concepts into the LIDA framework. I’ve had some time recently to play with it a bit, and must say that the work is impressive. As a Java developer i can confirm that the technical architecture is sound and well written. As an AGI researcher i’m excited by the potential, because not only does it embody a compelling theory of consciousness, but it does so using a cognitive cycle, i.e. the temporal stuff that i’ve been banging on about for years now. What is most interesting is that it takes the massive problem of AGI and divides it up into still-very-difficult-but-far-more-manageable chunks: modules like sensory memory, workspaces, declarative memory, procedural memory, action selection, etc. The framework wires them all together based upon Baars’ theories (there are a great number of implementation issues that the CCRG had to solve themselves) and also provides a number of module implementations, such as a slip-net for perceptual associative memory. Some modules are not implemented (like sensory motor memory), and some have very basic implementations that will certainly need to be expanded upon. But still, if you agree with me that the approach makes sense, you will probably agree that the work is a significant advancement of the field. Arguably the most important feature of the framework is that, as an AGI researcher, your work is now much more defined in a concrete way. If you have an interest in transient episodic memory, for example, you can now look at the API for it, and you will understand the constraints under which your code has to work in order to integrate with the rest of the system, rather than having to build an entire system from scratch.

I have to thank Ben Goertzel for pointing me towards LIDA. And i also recommend reading his well-penned response to Deutsch. And in an attempt to emulate the fine way Ben has of gently placing laurels on the heads of those he’s about to contradict, i must commend him for all of his extensive and tireless efforts – many effective – to forward the much-maligned field of AGI over what must be multiple decades now.

But when he mentions the paper Mapping the Landscape of Human-Level General Intelligence, i think he ended up scoring an own goal. This paper is chock full of all sorts of rephrasings of the Turing Test, and certainly not in any useful way. The gist of it is to mark out a path to AGI by following the psychological development of a child. The authors are careful not to specify an order, but some of the milestones include motor control development such as building block towers, story comprehension and retelling, and school learning. Of course no such paper would be complete without including video game playing – it’s part of every complete childhood. Just to round out the weirdness the Wozniak Test is thrown in too, in which the embodied agent has to knock on the door of an unfamiliar house, be let in, and then go to the kitchen and make a cup of coffee. No 10 year old, artificial or otherwise, has yet knocked on my door to make me a coffee, so although i can’t speak for the realism of the test, i can appreciate the technical difficulty.

But the difficulty is precisely the problem: none of the tasks that are mentioned in the paper – much less the milestones – are even remotely on the engineering horizon for AGI right now. It was all well and good of Turing to provide us with his test back in the 1950s, but here we are still scratching our heads. The existence of his test has not helped a whit, and neither will the well-intentioned but pie in the sky Landscape paper.

Rather, what is needed is a path to AGI that at least begins with goals that are achievable using incremental improvements to the engineering that we have now. The Landscape paper decries this approach as an invitation to narrowly defined solutions, but i disagree. If we start by saying that solutions must be part of an overall AGI framework such as LIDA (or perhaps OpenCog, although i don’t know enough about it to say), then researchers will be compelled to at least give a solemn nod towards generality. If we go further and say that qualifying solutions must address at least two tasks of different classes, the benefit of generality will eventually overcome the temptation of narrowness.

We must discard the notion of AGI development goals based upon ontogenetic development. A newborn child is not a blank slate on which knowledge and skills are written. There is staggering complexity in the built-in wetware, whose surface we are only beginning to scratch. It’s nonsense to assume we can casually step over this massive knowledge chasm and merely concern ourselves with how we can get a computer to understand a children’s book. Besides, the way in which a child understands a book is a product of the entire brain; there is no “book comprehension” lobe to which we can neatly restrict our work. Without simulating an entire human brain, however young, you will never have human-like comprehension.

Instead, we should be focusing on the phylogenetic development of the brain. Tasks should begin with the challenges that the first nervous systems faced. It is well within our current capabilities to build a virtual world that is rich enough to provide a reasonable simulation. (This was the intention behind GoID, although it never got to where i really wanted it to be, and without help from others likely never will. I should say, “more help”: many thanks to Max Harms for his contributions.) It is reasonable to assume that, as we make the environment more difficult and otherwise add more challenges, we will clearly see the reasons why brains evolved the way that they did, and why they work now the way they do. This approach also has the great benefit of providing clear grounding to early ideas, which has the same effect as the adoption of the LIDA framework: you understand the constraints under which you need to make improvements, which turns research problems into engineering problems. Basically, we need to start doing less AGI research, and more AGI engineering in the form of incrementally improving on an agreed-upon general framework.

In a future post i will write some ideas for phylogenetically based tasks. Individually they will sound like fodder for narrowness, but the important point to remember is that the same, programmatically unmodified agent must be able to solve multiple tasks.

Relevancy and common sense

September 21st, 2012

So, it was that time of the year again, and i was watching TV. This time it was a space show hosted by an Iraqi fellow with a British accent. Yeah, him. He was talking about how vast the universe is, and how Einstein figured out that gravity bends three-dimensional space into a fourth dimension. There was also talk about how the universe is expanding, which naturally leads (for me anyway) into the question of: expanding into what? Just like back in high school when i thought about this stuff, my brain felt like it was going to hull-breach in multiple places. I can just manage to visualize a 3D lemon in my head and turn it around, roll it, flip it, etc. (This is something a dyslexic – lysdexic? – friend of mine said he couldn’t do. I told him he wasn’t really missing much.) But, as logical as four dimensions is – three plus one, right? How simple is that? – i can’t convince my brain to visualize it.

Turns out this is a common thing: even Stephen Hawking complains of the same inability, and here’s a guy whose thing pretty much is trying to visualize stuff. What gives? Why is there such an obvious disconnect between our logical faculties, which can so easily conceive of a fourth dimension, along with our mathematical knowledge that so easily incorporates it, and our visual faculties, which refuse to even entertain the possibility? (If you actually can visualize more than three dimensions, you are not normal, and you should go volunteer for studies at your nearest neurological research institute immediately.)

I submit that the reason is our brains’ built-in architecture, which was born and raised over millions of years in a 3D world, and until very recently (very, genetically speaking) had no need of any more dimensions. We are incapable of such visualizations because our visual processing is physically incapable of it. This is unfortunate at the moment, but that’s what evolution is like: if you don’t need it to survive phylogenetically, then it’s just taking up valuable space, no pun intended.

I’ll go further and say that these kinds of neurological built-ins are often referred to as “common sense”. Along with our innate conceptions of space and time, we are also born with linguistic structure. All of the languages in the world – i’m told, i don’t know them all – treat verbs not simply as an action, but as an action that has an agonist and an antagonist, from which we can deduce causality. But the actors that are the agonist and antagonist can’t just be anything. It’s not the match that lit the campfire; nor was it the heat from the strike nor the oxygen in the air nor the carbohydrates in the kindling nor any other necessary ingredient. All of that stuff was already naturally there or a downstream result, and so is not relevant to causal determination. The causal pathways lead back to the person who struck the match and applied the flame to the kindling. The reason is that, of all of the necessary elements in the scenario, only the person can be influenced either by praise or censure to modify future behaviour. You can’t teach the oxygen to burn, or the match to strike. But you can thank the nice ranger for lighting a safe, toasty warm campfire, and hope that by doing so she will do it again tomorrow night. It’s common sense. Only that which can be changed is relevant to causation. And identifying what can be influenced is very important to the achievement of human goals.

In general it is necessary to be able to home in on relevant information within the torrent of data that enters the mind. What is relevant depends entirely upon the goals of the agent. Some people look at trees and see shade. Some see fruit, some see wood. Most people do not count the individual leaves, even though they could.

This reveals a problem with AGI approaches that are too general. When presented with pictures of trees, what patterns will such a system extract? If counting leaves is only one of the things it does, then it’s very likely wasting its resources. There is typically only a very small amount of data in the world that is relevant to humans, and our innate common sense structures help us tease it out and attend to it. You might conceive of an AGI with such wide open goals that it includes a requirement to count tree leaves, but i’d suggest that that might not be a practical place to start.

An AGI will require some manner of common sense, e.g. built-in pattern recognizers, mechanisms that work on given temporal intervals, DSPs that only work on frequencies in the range of human speech, that sort of thing. The specifics will depend upon the goals of the AGI, which must also be specified.