Logic, AI, and Instinct

Originally this post was going to be a lot longer, but I forgot where I was going with it, and it’s mouldered in the Drafts box for many, many moons, so I’ve tied up the loose ends and I’m dumping it out on the porch for the neighbours to giggle at.


I’ve met a few Mr. Spocks in my time, and this whole thing about being purely logical is just an exaggeration. Nobody’s purely logical. Logic cannot function without intuition, which is the whole lesson of syllogistic logic:

Every man is mortal.
I am a man.
Therefore I am mortal.
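
The inference itself is airtight, and a machine can even check it for us. Here’s a minimal sketch in Lean (the setup and names are mine, purely for illustration):

```lean
-- The form of the syllogism is valid for any predicates; knowing what
-- "Man" actually means is a separate problem.
variable (Person : Type) (Man Mortal : Person → Prop) (me : Person)

theorem iAmMortal (everyManIsMortal : ∀ x, Man x → Mortal x)
    (iAmAMan : Man me) : Mortal me :=
  everyManIsMortal me iAmAMan
```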

It’s all very straightforward, very logical, until you stop and ask yourself, “How do I know what a man is, or what this word man actually means?” Alan Turing tried to give us a shortcut to one kind of (gender-inclusive) answer: if it convincingly seems like a person to other people, it can quite practically be considered one. Actually, I’m not sure that’s exactly the verdict Turing’s test would deliver, but it’s an interesting way of looking at the problem.

It’s interesting because, quite certainly, the day will come when we will demonstrate a technological construct, a pseudo-AI, programmed so cleverly that it fools us into thinking it is a person, without actually having identity or thoughts or the other things which we all know from personal experience are hallmarks of personhood. It will have an apparent personality, but it will not believe itself a person, and perhaps, asked the right Easter Egg question, it would spill the beans. That would be the ultimate Turing test: an intelligence test for us. We pass if we are clever enough to figure out the artificiality of a sufficiently complex construct. Or maybe we pass if we’re smart enough to create something that reliably fools us.

Maybe, some other day, much later, we’ll even construct something that actually experiences identity, emotion, qualia, the whole shebang. In SF, especially older or more popularized (debased) SF, we are usually drawn back to the notion that something that does not feel “emotions” cannot be considered “human” or “a person”. But really, I think there’s something more basic that we’ll need to bridge first, and once we bridge that, emotions will be much more like child’s play. The crucial thing we’ll need to handle is an impetus that connects logical processing with the world: a stake in things, and an ability to act on it. Give a computer hands and a desire to stay booted up, and you’ll be fighting to unplug it… and that’s when the interesting things will perhaps begin to happen.

You can teach a computer that tigers are cats that have spots. You can probably even teach a computer some kind of deductive process that allows it to pick out a tiger. But what you cannot do is make a computer care that it has picked out a tiger. When humans see a tiger, they have an impetus, a drive, to do something with that information. When they see a tiger, they FLEE! But a computer has no survival instinct, and really doesn’t need one. The computer can stop processing; it has no reason not to. When we stop processing, our enjoyments cease. We don’t like that. We want to keep processing reality. Tigers like to process us as meals. We don’t like that either. There’s a clash of inclinations there, and that’s the spur that drives all of life into the complexities and wonders that seem never to cease… unless, someday, they all cease together.
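
To make the first half of that concrete: the “teaching” part is almost trivially easy to code. Here’s a toy sketch in Python (the facts, names, and rule are invented purely for illustration); what no line of it supplies is a reason for the program to care about its own answer, or to keep running at all.

```python
# A toy knowledge base plus a one-rule "deductive process" for picking
# out a tiger. All facts and names here are hypothetical.

FACTS = {
    "tiger":   {"kind": "cat", "coat": "striped"},
    "leopard": {"kind": "cat", "coat": "spotted"},
    "zebra":   {"kind": "horse", "coat": "striped"},
}

def is_tiger(animal: str) -> bool:
    """Deduce tiger-hood from stored facts: a cat with a striped coat."""
    props = FACTS.get(animal, {})
    return props.get("kind") == "cat" and props.get("coat") == "striped"

print(is_tiger("tiger"))    # True
print(is_tiger("leopard"))  # False

# And here the program simply halts. Nothing in it flees, and nothing
# in it minds being switched off.
```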

Of course, there’s no reason that all of this — stake and impetus and so on — can’t be emulated. That isn’t my point. It’s just that instinct is part of the bedrock of consciousness, personality, and how we function as physical beings. I had a number of talks with a friend of mine in Jeonju about this, and I have to confess I couldn’t understand her point of view at all — that nurture has fully overtaken and blotted out nature, that there is no relevant, unrestrained or unchanneled shred of instinct remaining in the modern human being.

To which I have to ask, for example: how do we just know that one tiger and another tiger are two of a kind? We never learn that; we just understand it naturally. More than instinct, this is built-in cognitive programming. To deny hardwired knowledge, and the presence of instinct, in humans is little more than a retreat into romanticism and fantasy, and the worst kind at that: the kind that denies what’s obviously there, and thus fails to celebrate us for what we truly are.

3 thoughts on “Logic, AI, and Instinct”

  1. Hmm….I think I agree with everything you’re saying here, but in the interests of separating off the psychological issues from the philosophical ones, I’d be inclined to nitpick….

    “Logic cannot function without intuition…”

    …into the more general statement, “logic cannot function without information,” which is a simpler conceptual point that’s independent of the larger issues about intuition, instinct, etc.

    Logic is a tool for enforcing clarity and consistency. It can help prevent us from having muddled, inconsistent belief sets, but it can’t give us new information about the external world, or help us choose between two equally consistent belief sets.

    “All cats are dogs, all dogs are Martians, therefore all cats are Martians” is a perfectly valid categorical syllogism, establishing that it’s logically impermissible to believe that all cats are dogs and that all dogs are Martians without also believing that all cats are Martians.

    A nerdy way this is sometimes put in analytic philosophy classes is, “one person’s modus ponens is another person’s modus tollens.”

    Modus Ponens being:
    (1) If A, then B.
    (2) A.
    (3) Therefore, B.

    …and Modus Tollens being:
    (1) If A, then B.
    (2) Not B.
    (3) Therefore, not A.

    ….both of which are valid ways of proceeding after assenting to the logical relationship between A and B established in the first premise. Deciding which one is the right move requires some sort of information (whether derived from intuition or empirical observation or experimentation or biological instinct or whatever the case may be, depending on the values of A and B) that can help us assent to one or the other option for (2).
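
    Just to make the mechanical point visible, here’s a minimal sketch in Lean (A and B are arbitrary propositions; the theorem names are mine):

    ```lean
    -- Both forms are valid whatever A and B happen to be; logic alone
    -- never tells us which premises to assent to.
    variable (A B : Prop)

    -- Modus ponens: from "if A then B" and "A", conclude "B".
    theorem modusPonens (h : A → B) (a : A) : B := h a

    -- Modus tollens: from "if A then B" and "not B", conclude "not A".
    theorem modusTollens (h : A → B) (nb : ¬B) : ¬A := fun a => nb (h a)
    ```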

  2. Tigers have stripes; leopards have spots. ;)

    On a more serious note: “When we stop processing, our enjoyments cease. We don’t like that. We want to keep processing reality.”

    This kind of goes against the idea of a “survival instinct.” Yes, most of us want to live. But when you get down to the instinct level, when you’re stuck on the side of a frozen mountain with both legs chopped off and you just refuse to give up, it’s no longer about *wanting*, it’s about that hardwired instinct.

    I suppose some people might equate wanting and instinct, but I see them as separate. Maybe it’s just semantics.

  3. Ben: Okay, yeah. But I’ve found it’s very difficult to find a case where the first step in a syllogism (step A) didn’t require some kind of “assumption”, i.e. reliance on intuitive processing of information to achieve a generalization. For example, “All tigers have stripes” (whoops, I even wrote “spots” here, despite Charles’ correction). I’m relying on a knowledge of types in nature to be able to say something agreeably definitive about that category. All generalizations of this kind rely, on a deep level, on assumptions based on our intuitive “program” for understanding the world. (One which is sometimes overzealous, leading to thoughts like, “All Poles are ____s” or “All Chinese like ___.”) Am I wrong in thinking so?

    Charles: I’m not sure whether I see them as separate, though I’ll agree that the notion that it’s *only* about pleasure-processing is kind of slapped onto this post in place of the instinct to survive, period.
