“The Incursus, by Asimov-NN#71” appeared in the Summer 2016 issue of Big Echo.
This story was written in Saigon one afternoon when I asked myself what an AI would think about the Turing test, and when I’d just finished reading Stanislaw Lem’s wonderful A Perfect Vacuum, a collection of reviews for nonexistent books. I think Lem’s an underappreciated giant, and one of these days I’m going to sit down and read every one of his books that’s available in English.
As for the subject of the story: personally, I don’t think the Turing test is particularly useful as a metric for anything. Sociopaths regularly (if temporarily) fool lots of people into thinking they’re neurotypical human beings, without the benefit of superhuman intelligence, after all. How much more effectively could an AI—especially one with a lot of fundamentally human building blocks in its makeup—do so?
But I started to wonder whether an AI’s insights might go beyond that. Long, long ago, a friend of mine remarked that in his view, human beings don’t just anthropomorphize animals, but also do so with fellow human beings. What he meant was that we have a lot of fantasies and illusions about human nature that we take for granted, and which blind us to its less comforting realities. History suggests that what we see as horrifying crimes—like those featured in news broadcasts daily, these days—wouldn’t surprise us so much if we didn’t have those illusions in place.
Perhaps the biggest illusions human beings seem to entertain are about themselves. Susan Blackmore, Thomas Metzinger, and Bruce Hood have intriguingly argued that even the first-person, interiorized experience of a unified self is essentially illusory: the GUI for a much more fragmentary collection of neurological processes that switch on and off constantly, and change slowly over time (or, in the case of brain injuries, quite suddenly).
So what if an AI decided to disabuse us of the illusions that are hardwired into our very brains?