Time and time again

“Do not squander time,” said Benjamin Franklin, “for that is the stuff life is made of.” This quote appears midway through chapter 4 of The Stuff of Thought, by Steven Pinker. Once again i find myself banging on about the importance of basing AGI development on cognitive loops instead of on-event algorithms. My apologies if i’m preaching to the choir, but some folks seem to need more convincing. I’ll let Dr. Pinker take over, quoting from the same spot.

Our consciousness, even more than it is posted in space, unrolls in time. I can imagine abolishing space from my awareness – if, say, I were floating in a sensory deprivation tank or became blind and paralyzed – while still continuing to think as usual. But it’s almost impossible to imagine abolishing time [Pinker’s emphasis] from one’s awareness, leaving the last thought immobilized like a stuck car horn, while continuing to have a mind at all. For Descartes the distinction between the physical and the mental depended on this difference. Matter is extended in space, but consciousness exists in time as surely as it proceeds from “I think” to “I am”.

Pinker later drops a William James quote:

The practically cognized present is no knife-edge, but a saddleback, with a certain breadth of its own on which we sit perched, and from which we look in two directions into time. The unit of composition of our perception of time is a duration, with a bow and a stern, as it were – a rearward- and forward-looking end…. We do not first feel one end and then feel the other after it, and from the perception of the succession infer an interval of time between, but we seem to feel the interval of time as a whole, with its two ends embedded in it.

James called this the “specious present”. Pinker elaborates:

How long is the specious present? The neuroscientist Ernst Pöppel has proposed an answer in a law: “We take life three seconds at a time.” That interval, more or less, is the duration of an intentional movement like a handshake; of the immediate planning of a precise movement, like hitting a golf ball; of the flips and flops of an ambiguous figure [referring to optical illusions elsewhere in the book]; of the span within which we can accurately reproduce an interval; of the decay of unrehearsed short-term memory; of the time to make a quick decision, such as when we’re channel-surfing; and of the duration of an utterance, a line of poetry, or a musical motif, like the opening of Beethoven’s Fifth Symphony.

Practically speaking, at a very low level: if an AGI implementation were monitoring a data stream, a purely event-based implementation (i.e. one that only runs an algorithm upon receipt of data) would be incapable of acting upon the absence of data. This was the original reason i started to focus more on cognitive loops, but these days, the more i read things like Pinker’s books, the more i’m convinced that this is the only way to go.
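
To make that concrete, here is a minimal sketch in Python of the difference. All the names here – cognitive_loop, SILENCE_LIMIT, and so on – are invented for illustration and aren’t taken from any particular project: a pure event handler only ever runs when data arrives, whereas a loop that wakes on its own schedule can notice that nothing has arrived and act on that.

    import queue
    import time

    TICK_SECONDS = 0.5    # how often the loop wakes up (arbitrary illustrative value)
    SILENCE_LIMIT = 3.0   # how much silence before the agent treats it as significant

    def on_data(item):
        """Event-style handler: runs only when data actually arrives."""
        print("handling", item)

    def cognitive_loop(inbox):
        """Loop-style agent: wakes every tick whether or not data arrived,
        so it can also react to the absence of input."""
        last_seen = time.monotonic()
        while True:
            try:
                item = inbox.get(timeout=TICK_SECONDS)
                last_seen = time.monotonic()
                on_data(item)                 # same work the event handler would do
            except queue.Empty:
                pass                          # nothing arrived this tick; the loop still ran
            if time.monotonic() - last_seen > SILENCE_LIMIT:
                print("no input for a while: acting on the silence itself")
                last_seen = time.monotonic()  # reset so the reaction isn't repeated every tick

An event-driven system can of course bolt a watchdog timer onto the side to get the same effect, but at that point the timer is the loop.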

I am personally unaware of any serious AGI project that uses cognitive looping as a basic architectural feature. Do any readers know of any?

8 thoughts on “Time and time again”

  1. >I am personally unaware of any serious AGI project that uses cognitive looping as a basic architectural feature. Do any readers know of any?

    Yes, mine: http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html.
    There are two ways I implement “Cognitive looping”:
    1: Parallel search on higher levels of generalization, across correspondingly extended temporal range of accumulated selected inputs, &
    2: Higher-level feedback, which redefines search range & input resolution on lower levels, ultimately through actuators.
    That’s what my architecture is all about.

  2. Thanks Boris. Your description of your approach is interesting but, if i may, it might benefit from some examples. Terminology is great if everyone agrees on definitions, but i found myself repeatedly trying to work out how you were using a given word or phrase. I know i am guilty of this too, which is why i plan to start providing code snippets and such to concretely illustrate concepts.

    On the other hand, your approach appears to be highly centered on pattern discovery – please correct me if i’m wrong. (That is, discovery with the intent of choosing actuation based upon predictions, for which i assume you are also developing implementations.) I dropped my attempts at discovery a while back because they were taking up too much time and i didn’t have a decent breadth of data to work with (and because i assumed there were enough others working specifically on that). Instead, i now assume that discovery is already done (by providing the patterns to the system ahead of time) and focus on the runtime architecture. What occurs to me is that if our respective ideas conform sufficiently, we could build separate parts of the same system. Perhaps this could be the start of a collaborative effort, such as the one a recent commenter on this blog felt was missing in the AGI world?
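
    As a first small example of the kind of snippet i have in mind (everything below is invented purely for illustration – it isn’t my actual system): once discovery is assumed done, the patterns can be handed to the runtime up front as plain data, and the loop’s job reduces to matching against them and choosing an actuation.

        # Toy sketch: patterns supplied ahead of time instead of being discovered.
        # The runtime only has to match them and pick an actuation.
        PATTERNS = {
            ("doorbell", "nobody_home"): "record_message",
            ("doorbell", "someone_home"): "announce_visitor",
            ("silence", "expecting_input"): "raise_alert",  # acting on absence again
        }

        def choose_action(percepts):
            """Look up the pre-supplied pattern; fall back to doing nothing."""
            return PATTERNS.get(percepts, "no_op")

        print(choose_action(("doorbell", "nobody_home")))  # -> record_message
        print(choose_action(("sunrise", "nobody_home")))   # -> no_op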

  3. Thanks for checking it out, Matthew. I use my terms in the most general sense I can think of, & define many of them explicitly. It seems that the problems with interpretation arise mainly when people assume some specific context, while I operate in an utterly decontextualized atmosphere, which this problem requires. Let me know which of my concepts could use more defining.

    I am a bit mystified as to how you can work on GI while avoiding pattern discovery. From my perspective, if the discovery is done, then you don’t need any intelligence. It’s the very essence of learning, which is what GI is all about; everything else should be learnable. Sure, you can & should manually add all kinds of heuristic shortcuts, especially related to sensorimotor & processor architecture. But that would be task-specific, & hard to do if you don’t understand the cognitive algorithm you add them to.

    The problems I work on now are theoretical, mostly related to how the syntax of patterns should develop. I don’t run any simulations because they won’t help – interpreting the results would be as difficult as arriving at solutions theoretically.

    But thanks for the suggestion, I’ll be in touch if I feel that my algorithm is ready for implementation :).

  4. Ok. Now it makes a bit more sense. I’m all about implementation: the nature of the input data is a key consideration, as is the concrete definition of the knowledge representation, among other things. So, i was trying to take the description of your algorithm in the context of how to implement it. I had assumed you were already doing that. Do you have an implementation timeline in mind?

    An explanation of why i’ve (temporarily) put aside discovery is worthy of a blog entry itself. I’ll try to get to that soon.

  5. Even random video clips off the web should be fine as data, as long as you accumulate enough of them. Re. the concrete definition of the knowledge representation – that’s my syntax, but it expands with each level of search / generalization. What must be defined is the way it expands, which is what the algorithm is all about.

    I don’t do timelines – the best way to predict the future is to invent it :). Timelines only make sense in routine work, & mine is as far from routine as it gets. I’ll implement when all my loose ends are worked out theoretically. Basically, I don’t believe in brute force as a tool to solve the most theoretical problem ever. The best tool I’ve got is my own brain, *until* I have a working AGI.

  6. I’m a software developer by trade, so while i agree that it’s possible to think through all of the details eventually, once you have a handle on the overall application structure it’s generally more efficient to just implement and let the computer tell you what you got wrong. More often than not you also discover details/problems you never would have considered. As they say: “In theory, theory and practice are the same. In practice, they’re not.”

    Regardless, it would be great to hear when you consider your work done. Should be interesting.

  7. According to Ben Goertzel, OpenCog and Stan Franklin’s LIDA also use a “cognitive cycle”. With LIDA it appears to be a fundamental feature, so yay! With OpenCog it is a feature of CogPrime, which is implemented within OpenCog, so yay again!
