In games everything has a clear cause and effect. If you get your face stomped by a cyclops, it’s because you forgot to re-equip your Axe of Extravagant Evisceration after using the Feather Duster of Tidiness for that side-quest, or because you didn’t level your dude on enough giant rats and/or sleeping hobos, or because you drank that Tequila of Fffffffuck the night before. The point is, it’s a tenet of ‘good game design’ that everything that happens within the systems of a game, and particularly anything that stands a good chance of screwing over the player, has a clear cause.
Conversely, in life, we find ourselves struggling to get out of bed for no good goddamn reason.
In some sense I feel damned lucky when life gives me something as concrete as a cyclops or a dinosaur or a gaggle of ornery and unsober ninjas– though, to be fair, none of those have actually come up yet, and I’m not quite sure how I’d handle it if they did. The most formidable obstacles in my life seem to exist largely in my mind, beasts of focus and fatigue and fear, impossible to defeat for long. Or maybe not: maybe my problems are concrete, and I just like to think of them as stemming from my mental state because it lets me blame myself for more of them.
It’s a kind of control.
The behavior of game entities reminds me of economic theory, and the comparison between how game entities act and how people act reminds me of why economic theory often fails. In economics, it’s assumed that people will always behave in the way which procures them the most benefit. Even allowing for distorted perceptions of benefit, and even when we attempt to account for less tangible forms of benefit, this assumption doesn’t seem to hold true. The forces of ethics, pride, and schadenfreude also affect the marketplace in manifold and unpredictable ways.
And yet we often get frustrated when our computer opponents and allies behave illogically. There’s a phenomenon AI developers talk about, where players infer a motivation for the random choices made by an AI– I’ve heard it called ‘constellationing’ before, but I’m not sure if that’s the preferred term. From what I’ve observed, though, the motivations that people infer when an AI does something strange are always tactical: “he thought I was going to go around this way so he went here to cut me off” or “he hid behind that wall so I wouldn’t see him when I took the high path” or whatever. I’ve never seen someone infer a more emotional motivation, such as “he got tired of putting up with my shit”, “he forgot about the shortcut”, or “he doesn’t like that painting”.
So, apparently we’re entirely willing to believe in incredibly tactically advanced AI, but the idea of even extremely primitive emotional responses being programmed into them is something that is beyond our comprehension, or at least something that seems extraordinarily unlikely. To be fair, that assumption is basically correct, but why is it correct? Are AI developers starting from the same set of assumptions that players are, and favoring an AI that can make advanced tactical decisions over a rash robot asshole, even when the latter might make more sense in context?
Intriguingly, one of the only games to take emotion and personality into account, and still probably the most noteworthy game to do so, is older than I am: 32 years ago, Pac-Man featured 4 ghosts of different colors and wildly divergent personalities and behavior patterns. I’m not going to get into the details here, but there’s a lot of interesting reading online about exactly how ghost behavior is differentiated. Since then, the progress of expressing emotion and personality through AI programming has been… basically non-existent.
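For the curious, the differentiation is worth seeing concretely. Here’s a minimal sketch, in Python, of the four ghosts’ commonly documented chase-mode targeting rules– the tile coordinates, direction vectors, and scatter corner below are illustrative assumptions, not the actual arcade values, but the targeting logic itself is the well-known behavior:

```python
# Sketch of Pac-Man's chase-mode ghost "personalities" as targeting rules.
# Coordinates are (x, y) tile positions; pac_dir is a unit direction vector.
# The specific coordinates and scatter corner here are made up for illustration.

def blinky_target(pac_pos, pac_dir, blinky_pos):
    # Blinky, the aggressor: targets Pac-Man's tile directly.
    return pac_pos

def pinky_target(pac_pos, pac_dir, blinky_pos):
    # Pinky, the ambusher: targets four tiles ahead of Pac-Man's facing,
    # trying to cut him off rather than chase him.
    px, py = pac_pos
    dx, dy = pac_dir
    return (px + 4 * dx, py + 4 * dy)

def inky_target(pac_pos, pac_dir, blinky_pos):
    # Inky, the wildcard: take the point two tiles ahead of Pac-Man,
    # then double the vector from Blinky to that point -- so his target
    # depends on where Blinky is, which reads as erratic.
    px, py = pac_pos
    dx, dy = pac_dir
    ax, ay = px + 2 * dx, py + 2 * dy  # pivot point two tiles ahead
    bx, by = blinky_pos
    return (bx + 2 * (ax - bx), by + 2 * (ay - by))

def clyde_target(pac_pos, clyde_pos, scatter_corner=(0, 30)):
    # Clyde, the coward: chases like Blinky when far away, but bails
    # toward his scatter corner once within eight tiles of Pac-Man.
    px, py = pac_pos
    cx, cy = clyde_pos
    if (px - cx) ** 2 + (py - cy) ** 2 >= 8 ** 2:
        return pac_pos
    return scatter_corner
```

Four tiny functions, and players in 1980 read them as four distinct temperaments– which suggests that legible “emotional” AI doesn’t require much machinery at all.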
Why? Is it just too frustrating when our opponents behave irrationally? Or is it just too much of an embarrassing insight into our own irrationality?