Tuesday, March 10, 2009

My Brain Is Very Strange

Once again, I'm not sleeping. I don't have a clue why I sleep so irregularly these days. It could be latent stress or a subtle indication that I need to structure my life better. Who knows?

In this late/early hour, I can only sit, hope I get tired, and distract myself in the process. I watched "Brain Smasher: A Love Story" and now I'm watching Disney's "Beauty and the Beast." Somewhere in this bizarre movie marathon, my brain started chewing on non-monotonic reasoning, a.k.a. defeasible logic, and the implications it has for our aspirations toward true artificial intelligence.

For the longest time, we've been trying to get computers to be like us: we want them to be intelligent. Well, think about that. Computers, as they're built now, are nothing more than symbolic logic systems. Zeros and ones, arranged in distinct patterns, shuffle data values that, at their core, are also zeros and ones. The mid-level abstractions over those 0's and 1's are commands/rules that execute line by line. Zero or one? True or false? These concepts are at the core of all computers.

True and false. That's logic in a nutshell. We like to think that with tools as powerful as deduction, predicate calculus, Peano arithmetic, blah blah blah, we have everything necessary to describe things such as concepts and ideas... knowledge! And we've had some no-shit success stories. These successes have given birth to entire sub-disciplines of AI like Expert Systems and Knowledge Representation.

Well THERE is my beef. The representation of knowledge, as we do it now (most commonly), is inherently logical. Rule-based. It's a basic tenet of linguistics that our syntax can be 'adequately' described with basic axioms and recursive rules. That's the rub. Just because we CAN describe aspects of our intelligence logically, with rules, with facts, quite literally SPELLED OUT, certainly does NOT mean this is the way it should be done.

Sounds all well and good, yes? So where is my problem? Well, consider a logical system. It has to be deductively sound and valid. The true things are true, the false things are false, and everything is consistent with each other. There's no such thing as a deductively valid system that holds both P and not-P. Furthermore, a given logic system won't let me introduce a new fact or rule that is inconsistent with the rules and facts it already contains. For example, if the system contains the rules "Birds fly" and "Penguins do not fly," the damn thing will simply not allow me to tell it "A penguin is a bird."

Quick interjection: I think the beast was better looking before he got turned back into a human.

Yikes, right? Right. If I threw "A penguin is a bird" into my already sound and consistent knowledge base, it would forward-chain through its rules with this new fact and deduce "Penguins fly." But then it would hold both "Penguins fly" and "Penguins do not fly," which causes it to blow up (fail).
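
Here's the explosion as a toy Python sketch, nothing rigorous, just my own made-up mini knowledge base with "~" standing in for negation:

```python
# A toy forward-chainer over a monotonic rule set. Everything here is
# invented for illustration; "~" marks negation.

def forward_chain(facts, rules):
    """Keep firing rules until no new facts appear (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def inconsistent(facts):
    """The KB 'blows up' if it holds both P and ~P."""
    return any("~" + f in facts for f in facts)

rules = [("bird", "flies"),      # Birds fly
         ("penguin", "~flies")]  # Penguins do not fly

kb = forward_chain({"penguin"}, rules)
print(inconsistent(kb))                        # False: just penguin, ~flies

kb = forward_chain({"penguin", "bird"}, rules) # now tell it "a penguin is a bird"
print(sorted(kb))                              # ['bird', 'flies', 'penguin', '~flies']
print(inconsistent(kb))                        # True: boom
```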

Well, shit. Ok. Here we are on the edge of reason, quite literally. One of my favorite professors, Dr. Nute, was the father of the logical solution to this problem. It's called defeasible logic, a flavor of non-monotonic reasoning, in which it's possible to have rules that are always true, rules that are usually true, and rules that are true under different circumstances. So when one course of deduction leads to an explosion (failure), we backtrack and try again with another rule that may let us keep our knowledge base consistent. Cool, yes?
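
A back-of-the-napkin version of the idea (this is NOT Dr. Nute's actual d-Prolog, just my own cartoon of it): give each defeasible rule a specificity rank, and let the more specific rule defeat the less specific one instead of exploding.

```python
# A cartoon of defeasible reasoning: rules carry a rank, and for each
# atom the highest-ranked applicable rule wins. No explosion, just defeat.

def conclude(facts, defeasible_rules):
    """Fire every rule whose premise holds; keep the winner per atom."""
    best = {}  # atom -> (rank, winning literal)
    for premise, conclusion, rank in defeasible_rules:
        if premise in facts:
            atom = conclusion.lstrip("~")
            if atom not in best or rank > best[atom][0]:
                best[atom] = (rank, conclusion)
    return set(facts) | {literal for _, literal in best.values()}

rules = [("bird",    "flies",  1),   # birds usually fly
         ("penguin", "~flies", 2)]   # penguins are the exception: more specific

print(conclude({"bird"}, rules))             # {'bird', 'flies'}
print(conclude({"bird", "penguin"}, rules))  # {'bird', 'penguin', '~flies'}
```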

Yes, it is cool.

But what the fuck, people? Seriously? I mean, this sounds about right, doesn't it? Defeasible logic. Exceptions to rules. Default rules and alternatives. Isn't this the way we would describe our thought processes? Hell. Even the Closed World Assumption, another basic premise of logic programming, fits with our way of thinking. If we have no evidence that something is true, then it must be false. In other words, if I don't have a fact or a rule that asserts "Penguins can fly," I must assume they can't. This is called negation as failure. If we ask about the truth or falsity of the premise "Penguins can fly" or the premise "Bigfoot is a fabulous dancer" and come up with ZILCH, we assume that penguins CAN'T fly and that Bigfoot has two left big feet. In this case we would be right, of course.
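
And here's negation as failure in the same toy style (names made up again): the backward chainer just fails on anything it can't prove, and failure is read as falsehood.

```python
# Negation as failure in miniature: a goal with no proof is assumed false.

facts = {"bird(tweety)"}
rules = {"flies(tweety)": ["bird(tweety)"]}   # flies(tweety) :- bird(tweety).

def provable(goal):
    """Backward chain; coming up with ZILCH means the goal fails."""
    if goal in facts:
        return True
    body = rules.get(goal)
    return body is not None and all(provable(g) for g in body)

print(provable("flies(tweety)"))    # True: derived from the rule
print(provable("flies(pingu)"))     # False: no fact, no rule -> assumed false
print(provable("dances(bigfoot)"))  # False: ZILCH, so two left big feet
```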

Like I said, it all sounds groovy. All this 'adequately' describes the way we think, right? Rules. Facts. Predicated knowledge. True and false (one and zero).

Let's take a survey: how many people out there have ever cut into a sheep's brain? Relax, it was for a class. I didn't hop a fence at a sheep farm with a buzz saw and a surgical kit. I studied all sorts of neurological structures, most of which have several names, and most of which we have some inkling about functionally. We have mapped out areas responsible for vision, language, muscle control, sensory input, and yes, to a certain extent, MEMORY.

A cry out to all you neuroscientists: point out to me the part of the brain where all of these facts and rules are systematically stored, i.e., take a scalpel and cut to the part of the brain where the "food is good" predicate is stored. It would be especially helpful if Mother Nature left them in there in either a numbered or a bullet-pointed list. Keep it nice and organized.

Well, you can't! Our brains are connectionist models, a.k.a. neural networks. The processing is parallel, not serial (as it is with computers and logic systems). When we have a thought, recall a memory or a fact, or drive a car, we don't traverse a list of stored knowledge, ignoring the information that is inapplicable until we finally hit the bit we want. Nuh-uh.
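
To make the contrast concrete, here it is in miniature (weights and sizes invented, this is a crude sketch, not a brain): connectionist "lookup" is one parallel pass where every unit responds at once, versus a serial walk down a rule list.

```python
# Made-up numbers, but the shape of the contrast is real.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))   # 5 inputs fan out to 8 units at once
cue = rng.normal(size=5)      # a cue pattern presented to the network

# Connectionist retrieval: one parallel pass; the whole activation
# pattern IS the response. No list, no traversal.
response = np.tanh(W @ cue)

# The serial picture: step through stored rules one by one until a match.
rule_list = [("penguin", "~flies"), ("fish", "swims"), ("bird", "flies")]
for premise, conclusion in rule_list:
    if premise == "bird":
        print(conclusion)     # reached only after stepping past the rest
        break
```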

Defeasible logic is one HELL of a great way to get around the problem. My brain can wrap itself around the penguin issue without blowing up. At least I hope so. So why then am I so worked up this early/late? Easy. Just because we can DESCRIBE our knowledge and language etc. etc. etc. (adequately) in terms of rules and facts (as if they're stored in the knowledge base of our minds), that doesn't mean THAT is how they are stored! Or executed, for that matter.

We're stumbling around, frantically trying to define new systems of logic that account for the simplest of counterexamples to the working theories we have. To that I say TOUGH SHIT. We may be barking up the wrong tree here. Sure, we've gotten incredible results with knowledge-based systems as they are described. But what about the penguin case? What about US? To put it bluntly, the penguins don't blow our minds. I'm going to sleep just fine (eventually) knowing that "Birds fly," "Penguins can't fly," and "Penguins are birds."

I think it was Penrose who said any parallel process can be "flattened out" in time. Behind that is the cold, hard truth that life is just a series of moments. He was getting at the fact that any parallel process, when examined at small enough increments of time, can be defined serially, because inarguably, at the SMALLEST increments of time, no two events occur at precisely the same moment.
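
In code, that point is almost trivially true (same invented setup as above): the parallel update and the one-unit-at-a-time update land on exactly the same answer.

```python
# "Flattening" a parallel update into a serial one changes nothing about
# the result, only about when each piece gets computed.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
x = rng.normal(size=3)

parallel = np.tanh(W @ x)                                  # all units at once
serial = np.array([np.tanh(W[i] @ x) for i in range(4)])   # one unit at a time

print(np.allclose(parallel, serial))                       # True
```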

So do it. Crack open a skull and put your right index finger on their stored knowledge.

Where in the brain is knowledge stored serially and logically? AGAIN, like I've said a hundred times, just because we can 'adequately' DESCRIBE a phenomenon a certain way doesn't mean that duplicating the description will duplicate the phenomenon. We need parallel processing in a given AI system, or we'll suffer for it. I think Penrose would do better to argue that our current logical systems (computers) are adequate platforms for simulating neural networks and connectionist models, not that just because we CAN flatten things out we necessarily SHOULD.

So if there is any hope, any hope AT ALL, our knowledge representation, and ESPECIALLY our knowledge bases of facts and rules that we don't question, must somehow be encoded into neural networks... I'm not entirely sure how to do this just yet, but I'm willing to take a stab at it. I just need to figure out how to standardize a list of predicates into a training data set. In the meantime, I need to clean off my buzz saw and apologize to the sheep farmer.
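
Here's my first, very hypothetical stab at the "standardize a list of predicates into a training data set" part (every name and number below is invented): each (subject, predicate) fact becomes a concatenated one-hot vector with a 0/1 truth label, and a single logistic unit is trained to reproduce the knowledge base.

```python
# Pure speculation in code form. Facts become one-hot (subject, predicate)
# vectors with 0/1 truth labels; a single logistic unit learns to answer.
import numpy as np

subjects   = ["penguin", "robin", "bigfoot"]
predicates = ["is_bird", "can_fly", "dances_well"]
facts = {  # (subject, predicate) -> 1 (true) or 0 (false)
    ("penguin", "is_bird"): 1, ("penguin", "can_fly"): 0,
    ("robin",   "is_bird"): 1, ("robin",   "can_fly"): 1,
    ("bigfoot", "is_bird"): 0, ("bigfoot", "dances_well"): 0,
}

def encode(subj, pred):
    """Concatenate one-hot codes for the subject and the predicate."""
    v = np.zeros(len(subjects) + len(predicates))
    v[subjects.index(subj)] = 1.0
    v[len(subjects) + predicates.index(pred)] = 1.0
    return v

X = np.array([encode(s, p) for (s, p) in facts])
y = np.array(list(facts.values()), dtype=float)

w = np.zeros(X.shape[1])          # one logistic unit, plain gradient descent
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

answers = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
print(answers)        # matches the labels below
print(y.astype(int))
```

A lone linear unit like this can only memorize linearly separable fact patterns; the defeasible cases, where exceptions cut across the defaults, are exactly where a hidden layer would have to earn its keep.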

9 comments:

  1. I had to edit this today because I realized I was half asleep while writing it. The bottom line is that we need to find a connectionist way of encoding our knowledge bases that allows for defeasible, non-monotonic situations.

  2. Have you studied Optimality Theory yet? We talked about it in Phonology. Isn't that kind of what you're talking about: a set of constraints that can be violated in order to satisfy another, more important constraint?

  3. I've seen a bit of OT from Dr. Kawahara before he jumped ship last year, but just a hair. So yes, I know that it's QUITE similar to logic programming with defeasible concepts worked in. And it's nice we can describe all of this stuff... slowly, painfully, and iteratively. But we don't have "Kawahara's Guide to OT" spelled out inside our skulls. I'm thinking lower level. If I can come up with a way to feed a neural network a training set of logic (or OT) principles so that the proper inputs match up to the proper outputs, then I'll be rich and famous.

  4. Ahh Tony. I've got a lot to say about this topic. I don't even know where to start (I've started this comment 4 times now, actually). I'll just throw out some random brain diarrhea, if you don't mind.

    First, I would say we don't necessarily need a connectionist model of knowledge... we just need something very fuzzy. NNs provide that, but they're not really made for that. We like to think of concepts as discrete, but let's not forget that they're far from it (mostly because the thing concepts refer to, i.e. *reality*, is itself super fuzzy). We've imposed logical structures on our own conceptual system because literacy has trained our brains to think like that (see Walter Ong's 'Orality and Literacy'). It's not natural. What we need is a more naturalistic AI... *embodiment* is the key, I think.

    I think we store knowledge in sensory and motor areas... in other words, knowledge is indistinguishable, neurally, from experience. I have no sources for this, just a hunch. Abstract concepts are stored as metaphors (check out http://en.wikipedia.org/wiki/Metaphors_We_Live_By). Concepts are organized hierarchically... that's about as formal as it gets in there. Conceptual abstraction (into metaphor, into motor/sensory storage) is like defragmentation: vast conceptual spaces are reduced to smallish sets of categories and extensions. Of course this is all done in the brain (and the rest of the body, of course, but even then, it's in the brain), but as you say, the brain does not store concepts. It senses, it reacts, it plans and organizes, but there isn't a dedicated neuron that tells us whether or not a penguin is a bird.

    So... long story short... connectionist model, yes, *but* said model needs to be able to learn, including learning its own conceptual representation. It needs to evolve, perhaps, like we did?

    I'll stop there. Time to get back to work!

  5. Holy shit, that second book looks like a Rubik's Cube. I'm all over it. I also found this one on Amazon today:

    Neural-Symbolic Learning Systems

    Check out the table of contents. THAT is what I'm talking about.

  6. One more:

    Associative Engines: Connectionism, Concepts, and Representational Change by Andy Clark. He's very well regarded in cog sci, philosophy, etc.

  7. Dude! I need to stop buying books! I won't have a chance to read the FIVE I just got on Amazon until the summer.

    I don't want to learn anymore!
