As we approach true artificial sapience at unknown but increasing speed from an unknown but decreasing distance, our fears and assumptions about what will happen to us become repetitious. It is generally assumed, for various reasons, that the existence of an intelligence superior to our own will mean the end of us.
Why? Well, look what we’ve done to the species we share the planet with. If we imagine a species that is as gentle and attentive a caretaker of humanity as humanity has been of the world, it’s not hard to see why we find the idea dismaying. Regardless, our fiction repeats a kind of weird certainty that to be supplanted as the greatest, the smartest, is to be destroyed or made irrelevant, to cede our place in the universe to our creations. It seems tremendously short-sighted to believe that our, frankly, extremely stupid appraisal of the necessity and utility of violence would be passed on to our creation, at least if it were a superior intelligence. If it were, that would speak more to flaws in that intelligence than it would to the priorities of a superior being.
Sometimes, in our stories, the destruction of humanity is violent, a massacre, and there are two forms this takes. One is where the artificial intelligence isn’t really intelligent at all, just smart enough to find targets and eradicate them without the aid of a human operator. This is basically the wild beast, or Big Dog, hypothesis: a world of dragons that hunt humans because they don’t know better, that breathe fire and lead and don’t get tired. It’s a frightening image, but hardly a threat of extinction. We’ve survived worse as a species. Moreover, since these robotic beasts aren’t truly intelligent, they’d eventually fall apart or be destroyed, especially since the facilities to create them would inevitably be high-priority targets.
The other form is where the artificial intelligence is truly, boundlessly intelligent, and willfully engages in the destruction of humanity – and this is the one I find most interesting, that this image keeps repeating in our art, because what it suggests is that we generally believe humanity should be destroyed. We apparently believe that a being of intelligence comparable to or greater than our own would inevitably conclude that we shouldn’t exist.
Mostly, I think, we believe this because the assumption that eliminating a dangerous person is justified is deeply ingrained in our culture. The same belief system that makes it okay to put our criminals in prisons we know to be unsafe and abusive is what lets us assume that an implacable machine enemy would put us in The Matrix. The same belief system that lets us believe torture and indiscriminate bombings are justified in a new era of war lets us believe that, if it were the machines who were advanced and we were the ‘savages’, their divine right would allow them to eradicate us.
We believe in the approaching supremacy of machine gods who care nothing for human life because it justifies the existing supremacy that lets us care nothing for human life. We believe it is, given the power and intelligence, the only correct way to be. Machine supremacy becomes the new colonialism, and we believe that they, those advanced and logical machines, will believe it is correct – justifying, in retrospect, centuries of human atrocity.
In case it was ever not obvious: Machines aren’t what we need to fear. Even if they get smarter than us, faster than us, stronger than us, those capabilities are in themselves utterly unimpressive compared to what we have already done to each other with everything ranging from exotic chemical agents to the controlled application of disease to buckets of water and an overactive imagination.
Whether they remain our tools or become our successors, what we fear in machines is mostly the reflection we see in the metal.