Playing Games

Games are about progress – I can state this with some confidence since it’s a natural outgrowth of games both as narrative works and games as systems which generate results from input. Even games that are technically endless are built on this idea of time passing and things changing and of dealing with whatever emerges afterwards. However, let’s disregard narrative progress for the moment – after all, moving the plot forward is in no way a concern unique to games, and most of the techniques used to do so were established elsewhere. Let’s also disregard the progression of player skill, since all the game designer can do to affect player skill progression is to give that player an interesting space in which to develop those skills, and tools to understand the necessary steps in doing so.

With those two safely out of mind for the moment, let’s talk about Power Progression: Leveling up, finding sweet loot, going to a skill trainer, whatever. There’s a lot of it in games now, much more so than there used to be: Early arcade games were mostly about player skill progression, though they did have a few levels and some nominal narrative that also progressed. Still, the future that the player imagined when they played those games was not that of a thrilling conclusion or a cool super ability, but of becoming better at the skill of playing the game. Power progression really started taking off, though, when it became possible to save a game in progress. Buying items and leveling up became a lot more appealing when those items and levels didn’t disappear whenever you took a break or had a power outage. Players loved it, because it gave them an extra reason to come back, a sense of having invested something into the game which they could recoup later. Designers loved it because it gave them a long-term design space to work with, where the way the systems interacted evolved over time.

The Problem emerged when the suits started to love it. Capitalists love power progression because it makes games addictive, incentivizes players to drop extra money in, and incidentally serves to reinforce the propaganda of capitalism: Stick with the system, work hard for the system, and eventually you will be rewarded. It turns out that, like adding a nauseating amount of salt, almost any game can be cheaply made more compelling just by adding some sort of long-term progression on top of whatever game is already there. A sense of progress is an easy emotion to exploit, because it makes even activities that are soulless and unappealing feel worthwhile: Most of the tech sector is founded on selling a sense of progress, even though the objects that are supposedly emblematic of this progress are trinkets of questionable utility at best and Looney-Tunes-esque smart house paranoia fuel at worst.

So we add upgrades and levels and purchasable boosts to upgrades and levels and we add prestige and unlockable skins and so on and so forth, just to make the player feel that their time-wasters aren’t wasting their time. That this time is an investment.

With this all in mind, the sort of power progression you want to put in your game becomes quite a pressing question, one that interrogates both what experience you want to impart to the player and what is an ethically sound way of offering that experience. Even if not quite as dystopian as some of the free-to-play scenarios we’ve already seen play out, the tendency of RPGs to sub in progression of the player character for any meaningful player skill progression and, in the case of MMORPGs, any meaningful plot progression as well, also raises the question of how many of these empty calories we can feel okay about feeding to players.

This sort of player progression faked through creating a more powerful player avatar could be taken so much further, to an extent that’s creepy to think about. Like an inversion of the adaptive difficulty systems that were trendy in games a while back, you could design a game that quietly became easier and easier the more it was played by the same person, perfectly faking the experience of them improving. With thumbprint and facial recognition and constant internet connections, you could even make it so the game matches difficulty with its player across devices – you could make it profile the player’s play style to sense discrepancies in case you tried to spoof it. Imagine a version of Super Hexagon that slowly went slower and slower the more you played it, like Mrs. Twit in the Roald Dahl book The Twits, who is fooled by her husband into believing she is shrinking when he gradually elongates her cane and the legs of her chair with little slivers of wood. Once there’s one such game, there will be many such games, making the sense of progression we feel from experience points seem negligible in comparison. We’d be fooled into thinking we were giants. Everyone would be an expert, regardless of expertise.
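
Sketched as code, the trick is almost embarrassingly simple. Here’s a minimal illustration of what such an inverted difficulty curve might look like – every name and constant is invented, and a real implementation would presumably hide the decay far more carefully:

```python
import math

# Hypothetical sketch of inverted adaptive difficulty: the game's speed
# drifts downward as playtime accumulates, slowly enough that the player
# reads the easier game as their own improvement. All numbers invented.

def game_speed(hours_played: float, base: float = 1.0,
               floor: float = 0.6, decay: float = 0.05) -> float:
    """Speed multiplier that decays from `base` toward `floor` with playtime."""
    return floor + (base - floor) * math.exp(-decay * hours_played)

for hours in (0, 10, 50, 200):
    print(f"{hours:>3} hours in -> speed x{game_speed(hours):.3f}")
# Drifts from x1.000 at 0 hours toward x0.600 at 200 hours,
# while the player feels like a giant.
```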

The scariest thing about this future is in some capacity I feel it has already come to pass.

The term role-playing is applied very loosely to games. Not only has it come to mean something completely different when used to describe video games than the pen-and-paper games that originated it, but it has drifted away from its obvious meaning in those games as well. Every game is about playing some sort of role – even when there’s no explicit narrative role (which there usually is), we still take on a role defined by the rules of the game – the role of the intelligence who places the pieces in a jigsaw or who builds the Tetris to eliminate four lines of blocks, the role of pitcher or quarterback or referee. This sort of role-playing is in many ways closer to the sort of play that early RPGs were meant to capture, tactical miniature play inspired by the battles in the Lord of the Rings books, than what modern enthusiasts of the genre mean by the term, which is more akin to playing a part in a play – and, crucially, a part that one writes for oneself.

This is a topic we could dig deeper into, what role-playing has come to mean in different contexts, but at the moment I’m more interested in the way that playing a role, or choosing not to play a role, appeals to us. One of the core conflicts of my life is my simultaneous desires to have a place in the world and to not be constrained to do any single thing: These desires are flagrantly contradictory, and yet I feel them both frequently. At one moment I wish people would just tell me what they want from me; at another I wish I could pursue interests with no regard for anyone’s expectations of me. I can even feel both of these at the same time. It’s a sort of talent, I suppose.

Both of these, finding a niche in which we excel or choosing any path for ourselves and having it work out, are sorts of power fantasies, and different sorts of games like to cater to both of them. Whether these games are called “Role-Playing Games” or not has very little bearing on this. Most MMORPGs favor casting the player fairly narrowly, where they pick a class and have to play to the strengths of this class in a very specific way, while games like Skyrim are built to allow the player to do basically anything they want to with no negative consequence of any sort.

If you don’t like the role the game casts you in, you probably won’t like the game. If you don’t feel like the game gives you enough room to perform your role in your own way, you probably won’t like the game – for much the same reason people don’t like jobs that don’t give them any freedom to tackle tasks with their own methods. For a few days I went back to playing Team Fortress 2, and somehow there I have the best of both worlds – probably one reason I played so much of it. I have a list of 9 roles (or perhaps more, with all the ways equipment can change a class’s role) which I can pick at a whim. Maybe today I feel like getting into the thick of things and causing a lot of trouble, so I play Soldier, or I feel like moving around and harrying, so I take Scout, or I feel like being an asshole, in which case I roll Spy.

I usually play Spy.

Out in the world, though, we seldom are afforded the opportunity not to be defined by the roles we are cast in. Usually, in order to survive, we are forced to live the role we are given. Others of us, bereft of such a role, struggle to define ourselves in terms that are understandable to others, socially approachable, economically viable. In the end, we have to either accept a pre-made role, or learn to make our own – and, to make our own, first we have to have some idea of what sort of role could be both desirable and viable.

It’s easy to be led astray. I generally want to be an artist and thinking person, and what are the traits that we have used to define these sorts of people? Lonely. Mentally unstable. Self-destructive. We paint doom on our thinkers and artists, even though there’s no particular reason to believe in any real correlation outside of the feedback loop caused by this stereotype. How have these cues affected the way I live my life? How can I learn to define myself as a creator outside of this toxic worldview?

I can’t help but stand back and look at the motivations behind this toxicity. Who stands to profit from making artists believe they are worth more dead than alive? Who stands to profit when inventors are forced to sell their inventions for pocket change?

Those who have written the roles we are cast in may not have our best interests at heart.

I don’t consider myself exceptionally awkward in social situations, but I don’t think I’m particularly comfortable in them either. Much of the time, particularly in emotionally loaded moments, I have no idea what to say – no idea what an appropriate sentiment is for the occasion, no idea how to express something that isn’t hollow or tone-deaf. My usual tendency when I don’t know what to say is to say nothing, but sometimes nothing is just not an okay thing to say, and that tends to be when I run into issues.

These sorts of ambiguous situations, where anything could be expressed and all expressions seem insufficient, exist everywhere in life. However, when we make games, even when we try to simulate some aspect of life, this ambiguity is flattened. Dialogue is expressed through branching trees of pre-written choices, or in more ambitious attempts through some text parser or abstract sentiment generator – in the long run, no matter how the player expresses the sentiment, it is interpreted by a machine, chopped up into something quantifiable for the game’s systems to react to. There is inevitably a Right Choice, a correct thing to say in that circumstance to progress the game, to get the ‘good’ ending, to see the bonus cutscene.
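
To make that flattening concrete, here’s a toy version of the quantification I’m describing – the dialogue lines, keywords, and flags are all invented for illustration:

```python
# Toy sketch of dialogue disambiguation: however the player 'speaks',
# the parser reduces it to one of a few pre-authored branches.
# All lines, keywords, and flags here are invented.

DIALOGUE_TREE = {
    "comfort": ("You'll get through this.", "unlocks_good_ending"),
    "joke":    ("Well, at least it can't rain forever.", None),
    "silence": ("...", "misses_bonus_scene"),
}

def interpret(player_input: str) -> str:
    """Quantize free-form input into one of the tree's branches."""
    text = player_input.lower()
    if any(word in text for word in ("sorry", "here for you")):
        return "comfort"
    if any(word in text for word in ("funny", "rain", "cheer")):
        return "joke"
    return "silence"  # everything else collapses into the same node

branch = interpret("I'm so sorry. I'm here for you.")
response, flag = DIALOGUE_TREE[branch]
print(f"{branch}: {response} ({flag})")
```

However heartfelt the input, the game only ever hears one of three things – and one of them is the Right Choice.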

Dialogue, in a game, is a control mechanism, not communication – or, if it’s communication, it’s the game’s designer speaking to you rather than you speaking to the game. You don’t care if the game understands, you don’t care how the game feels, you just care about how it responds to your input. It really isn’t much like speech at all – which is fine, it doesn’t really need to be, but having it constantly presented as speech, being treated as though the player is genuinely expressing something in the way they would to another person, probably has some strange effects on how we understand speech to actually work.

This, though, is just a specific instance of the process of disambiguation that happens when we try to emulate the vast mess that is reality in our goofy little electronic worlds. To play, as a child, is to imagine scenario after scenario with no logical connection or overriding ruleset – you have been shot, but you are bulletproof, but the bullets are armor-piercing, but you’re actually bulletproof times infinity plus one. To play in a video game, though, even an open-ended one, means that there must be a logical connection from one moment to the next, since the game, being a computer program, has to operate on logic. There’s still lots of room for self-expression in a well-made and open-ended game, but the fidelity of that expression is mediated by the granularity of the simulation. Or, at least, the fidelity of the part of that expression that exists within the game – because there’s also the part that exists within the minds of the players, and that could be as unbounded as ever. In theory, at least – do kids pretend they’re pirates in games that aren’t about pirates? Ninjas in games about vikings? Wizards in games about soldiers?

Maybe that’s why we like to play games, though. The infinite possibility and ambiguity of life and human interrelation is incredibly overwhelming. How relaxing it is to be provided an environment where only a few choices can be made – and, even if those particular choices end up being wrong, they are wrong for reasons which are explainable and quantifiable, albeit sometimes quite complex. The games industry keeps trying to make games look more and more realistic, though, while maintaining this simplicity of input and response, and it builds a myth – a myth of a world where each action and consequence is mapped directly and predictably, and anyone who’s clever can find the action and the consequence. The ‘just-world hypothesis’, the belief that everyone gets what they deserve based on the actions they have taken, is much easier to convince yourself of if you can build it on a belief that every action and reaction are directly mapped, straightforward, and quantifiable.

If the causal relationship between action and reaction is completely predictable, any suboptimal outcome can be blamed 100% on poor decision-making. Every tragedy becomes a justification that bad things happen to bad people, where in this case ‘bad people’ means people who have made any choice that is subsequently followed by a bad outcome. In this way, games as they have traditionally been structured have a radical conservative bias.

Maybe there’s some other way for them to be structured – but without some huge leap forward in technology that creates worlds too complex for predictable causality, or some sort of ongoing responsive content created by another person (as in a tabletop RPG), this is always going to be a systemic bias of the technology. The only way to push back against that is explicitly through the content of the game, and that’s going to be difficult to do without alienating players, since rewarding ‘optimal play’ is a foundation of game design.

I want to be good at things. Obviously I would like to be good at art and music and such in order to make good art and to make money to support myself – and, yes, there’s the darker aspect to it, that I described before, where sometimes we improve ourselves just so we can consider ourselves better than other people – but I also just have a need to be good, or to keep becoming better until I find out what good actually is. I want to be an expert. I want to be a pro. I think expertise might be a carrot that’s dangling from a stick that’s tied to the back of our heads, that keeps step with us no matter how fast we move forward – and yet, once you have it in your sights, it’s hard to back down.

I’m not sure where this need actually comes from. Perhaps it’s part of how we’re wired, a need to feel useful, a need to feel that we are contributing to something. Perhaps it’s part of our capitalistic culture, demanding that at any moment we prove our value, prove our worth as an asset. Or, I guess, maybe we just feel a need for a purpose, some sort of guiding premise to our lives, some sort of narrative thread, and being an expert in something seems like the most approachable way to manage that. I don’t know. Whatever.

So, for a certain period of time, a decade or so ago, video games were constructed primarily as a way to feed this need for expertise and mastery with empty calories. For a certain period of time, we decided that all games had to be fun, and that ‘fun’ meant that they made you feel like you were amazing. The standard format of the video game was a simple, easily learned and mastered challenge, presented with a layer of fiction that portrayed it as some amazing and rare skill. Most games are still like this to one degree or another – even a difficult game like Dark Souls is still much easier to complete than it would actually be, presumably, to go on a quest to beat the shit out of an aging deity.

I am very glad that video games are no longer all made to this specification. If they were I probably wouldn’t be playing them, and possibly wouldn’t be making them. If I were still writing about them, my already-notably-grouchy writing would be far grouchier.

Once you know what empty calories taste like, in terms of expertise, it’s hard to be satisfied with them. You want to become actually good at something, which is much harder than just buying a machine to tell you you’re good at something. Perhaps the most difficult part is that, in order to improve at a skill, you have to accept that you have room for improvement. In order to learn, you must accept that you are not all-knowing. In other words, in order to obtain expertise, you must abandon the idea that you’re an expert.

This remains the case even if you are, in fact, an expert. This part of the process doesn’t change. As Socrates suggested, you must be wise enough to admit that you know nothing – at least, nothing relative to what there is to learn, which is an awful lot.

So we say humility is a virtue. There’s nothing wrong with being proud of what you’ve accomplished – actually, it’s also an important part of the process, because pride is what drives you to define a ‘better’ to strive towards – but being humble enough to know that you are imperfect and can still improve is necessary as well. Know that you can do things others cannot. Know that others can do that which you cannot.

If you refuse to do that, you are trapped, and will never find a place beyond the one you’re at right now.

Much as I’d like to think of time spent enjoying good art as a sort of exercise of the mind and the spirit, there’s an assumption there that I wonder about sometimes – no, not the mental or spiritual benefits of art, I am generally convinced of those, but the benefits of good art in particular, as compared to bad art. Surely, while learning about another artist’s carefully conceived and expressed world view is worthwhile, so is picking apart a poorly formed piece of claptrap to discover aspects of your own worldview. Bad art, acknowledged as such, can be a path to self-discovery – simply finding the words to describe what it was you disliked about something can be as beneficial as any other experience engaging with art.

This is why I hesitate to class the experiences we can have with art into any sort of hierarchy of quality. The movie or book or game may have been clumsy and naive, but it might still have genuine insights which were not heretofore available to me – or maybe it was a masterpiece, but still contained niggling flaws which I am compelled to catalog and describe. This is all valuable. What is not valuable is deciding partway through what the experience I am having is and ceasing to engage with the work – to decide 10 minutes in that because I understood the particular narrative trick at play I have nothing to learn, or that because I didn’t understand how it was done there was nothing I can do but gawp in awe. It’s tempting, though, to dismiss something as beneath notice or embrace it as beyond knowledge. It’s freeing, being able to enjoy something solely as an experience, in the moment – but it’s also constraining, believing most things to always be beneath notice or out of reach.

I guess if I could distill my general philosophy it would be this: Pay Attention. This doesn’t stop at art. People who are contemptible and unwise often follow some rule of behavior, and even if it’s a foolish and destructive rule it’s better to know what it is, and why it is, than not to. Every friend and ally and mentor and hero carries deep flaws and unseen scars: We are all different, and no one can really live someone else’s life or create their art. We can’t trace, we can’t copy, we can’t merely emulate, we have to actually learn how to make our own art and our own lives. No role can be sufficiently modeled before the fact: Eventually you have to become whoever you are.

All we can do is our best to learn what we can and give what we can. None of this can happen if my understanding stops at friend, ally, mentor, hero, just as it can’t if I write off someone as loser, idiot, asshole, enemy: Understanding cannot stop there, even if it’s easier that way.

We have to look closer. We have to not turn away. We have to see.

One of the most common ways to evaluate a game design decision is in terms of “risk and reward”. Usually we assume that whenever the player takes a risk it should be to attain a commensurate reward, and so we try to encourage the player towards risky play by offering such rewards. Risk-taking is something worth encouraging, so the logic goes, because it increases the tension and therefore the excitement of the gameplay.

This assumption raises some questions. Does risk actually make the game more exciting? Since there’s always a threat of failure in any challenge-based game, ‘risks’ that provide rewards which increase the long-term chances of success aren’t actually risky overall – they’re just the most dangerous inflection point of a strategy. If they’re risky because they have a random chance to fail, they likely fall into one of two categories: Either an unnecessary chance of creating a failure where none exists, or a necessary gamble to take in order to gather the resources needed for success. Either way, the risk is usually either always worth taking or never worth taking, and the game becomes just a test of luck and of the insight to know whether the coveted resources are necessary to victory. Conversely, if the risk is a test of skill, then it becomes something similar to the luck test but with unknown odds of success – but, again, the player either needs the resources or they don’t, and they either have the skill to mitigate the risk or they don’t, and in either case the strategy is simple and straightforward.
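
To put numbers on the ‘always or never’ point: once the odds and stakes of a fixed risk are known, its expected value is the same every time it comes up, so the decision is made once and then merely repeated. A toy calculation, with all the numbers invented:

```python
# Toy expected-value check for a fixed-odds risk. The numbers are
# invented; the point is that with p, reward, and penalty fixed,
# the comparison comes out the same way every single time.

def expected_value(p_success: float, reward: float, penalty: float) -> float:
    return p_success * reward - (1 - p_success) * penalty

risky_line = expected_value(p_success=0.7, reward=10.0, penalty=20.0)
safe_line = 2.0  # guaranteed payoff of the cautious play

# 1.0 < 2.0, so this particular 'risk' is never worth taking -- and if
# the reward were bumped to 15 it would always be worth taking instead.
print("take the risk" if risky_line > safe_line else "never take the risk")
```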

The trade-off of risk and reward is, by itself, an incredibly tedious way of balancing a game. Once you know how a statistical game is optimally played, it stops being very interesting: Blackjack is not interesting because it’s a good game, it’s interesting because there’s money on the line, so unless you want to ratchet up the stakes of your game to include real life consequences (beyond wasted time), the risk/reward model exemplified by the casino is not one to emulate.

There are a lot of tools that are useful to describe some aspect of game design, but they are hazardous to use prescriptively as a blueprint for what a design should look like. Genres, as well, are great for describing fiction, but sticking too closely to their conventions is anathema to the imagination. The issue with “risk and reward” is that the risks and the rewards aren’t actually what’s interesting about the challenge of a game. There are two things that are interesting about game challenges: planning and mastery. The most satisfying experience in a game is coming up with a plan and then executing it – or failing to execute it and having to improvise a new plan and execute that. While viewing a player choice as a risk and a reward can give an insight into how these strategies will take shape, it almost never shows the whole picture.

You might be wondering what specific tree the branch up my ass came off of at this point – that is to say, you may be wondering what actual game design decisions I have in mind when I say that this faulty metric has led designers astray. The first example I have is probably a contentious one, because I know lots of people really like it, but I think that parrying in the Dark Souls games is garbage. You have a game that rewards careful analysis, positional play, and timing, and then you also include a mini-game that lets the player ignore all of those things if they can hit the button at the right time. “Do or do not, there is no try” may be helpful advice for space wizards, but it is a pretty lousy way to design a game. By the metric of “risk and reward” parrying looks like great game design – you take a risk of eating an attack to the face for the reward of distributing an attack to someone else’s face! – but in terms of giving the player something interesting to do it fails. It’s Guitar Hero with the sound turned off. It’s a Quicktime Event with no button prompts.

Shields in Dead Cells share most of the problems with parrying in Dark Souls (which makes sense since that’s what they were clearly inspired by), but a much bigger issue is the cursed chests. In Dead Cells, a roguelite game where each run is unique, you frequently find cursed chests. These chests contain a fairly useful reward – a bit of money and item-unlocking currency, a high-level weapon, and the equivalent of a level up – but in return they curse you, which means that if you take any damage before the curse is lifted you instantly die (the curse is lifted after you kill 10 enemies). These become an incredibly awkward piece of design, though, since both the risks and the consequences of those risks increase rapidly to the point where there’s essentially no way for the rewards to keep pace. Early on, if you find a cursed chest there’s very little reason not to take it: If you die you don’t lose very much, and it might give you just the item you need to pull your run into shape. Past that point, though, you start to risk completely losing 30 minutes or more of gameplay, and having to completely redo the relatively rote early levels, in order to get an item you probably already have something more useful than, plus some currency you don’t need. So, in this case, not only is the trade-off not very interesting, but the choice is usually obvious based on your situation.
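
As a back-of-the-envelope model of why this trade collapses – all of these numbers are made up, only the shape matters – the downside of a cursed chest scales with how much run you stand to lose, while the marginal value of one more item flattens out:

```python
# Rough, invented model of the cursed chest decision over a run.
# Reward: the marginal value of one more item, shrinking as the build
# fills out. Cost: chance of dying cursed times the minutes replayed.

def chest_ev(minutes_in: float, p_death_cursed: float = 0.2) -> float:
    reward = 10.0 / (1.0 + minutes_in / 10.0)
    cost = p_death_cursed * minutes_in
    return reward - cost

for minutes in (2, 10, 30, 60):
    print(f"{minutes:>2} min into the run: EV = {chest_ev(minutes):+6.2f}")
# The sign flips from clearly positive to clearly negative: the choice
# is obvious at both ends, which is exactly the complaint above.
```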

So how do we try to make the choices in the game interesting, if not by measuring their risks and rewards? The key to whether a choice is compelling usually lies not in what we risk or sacrifice, but in what we need to take into account to make that decision. If any given choice could be good or bad based on the situation, that generates an interesting thread of thought to follow – assuming those externalities themselves are interesting to navigate. If a choice will always be great in a particular scenario and you know that the scenario will be in play when you take the choice – i.e. if fire weapons are extremely useful against the ice monsters and the next level is populated entirely with ice monsters – then it’s not really an interesting choice whether or not to take it, since you know it’s optimal. These sorts of obvious best choice situations can be good for pushing the player to try a new mechanic, but aren’t interesting in and of themselves. Conversely, if you know the next level could have ice monsters or robot monsters or a dark labyrinth, and while the fire sword isn’t great against the robots it’s fantastic against the ice monsters and can also help light the way through the labyrinth, but the laser is more generally viable against the robots and ice monsters but has limited ammunition, but you’re really most comfortable using the poison scythe and generally prefer it – this starts to become a really interesting choice, one generated from the specific combination of the situation and how you in particular feel comfortable playing the game.

A great example of this kind of decision-making is the choice of whether or not to take a given card in Slay the Spire. When presented with a set of potential cards to take, you weigh them in terms of their general usefulness, their usefulness in the deck you have now, their usefulness in combination with other cards you might get in the future, the likelihood of getting those cards, what boss you’re expecting to fight, and more. Every decision has a risk and a reward, sure, but the designer didn’t determine what the risks or the rewards were in an Excel document: These risks emerged from the nature of building a deck, and the reward is that of seeing a machine you have built work flawlessly.

There’s a lot you can learn from thinking about a given player choice as a risk and a reward, but there’s even more that can be obscured if you trap yourself into seeing it only through that lens. Every player decision has to have context, has to have its place in an overall strategy that emerges from the player’s engagement with the game’s situations and tools. If it does not, it’s a coin flip or a Quicktime Event.

I’ve continued to play a ton of Slay the Spire over the last few weeks, though I’ve dropped off a bit. Even when I’m not playing the game I like to think about the game. I’ve been thinking about one card in particular quite a bit recently: Prepared.

In order to talk about this card and why it’s interesting I’ll have to talk a bit about how the game works. Put briefly, each turn you draw five cards, and at the end of the turn you discard your entire hand. During each turn you have a certain amount of energy with which to play cards: You start with three energy and most cards cost one to play. In between battles, you get a choice to take one or none of three cards, and judicious selection of these is key to achieving success as the battles get more and more difficult.

Now that you have some context: Prepared is a card that costs 0 energy to play, draws an extra card, and discards a card. It is commonly regarded as a bad card. I don’t think it is.

There’s a couple of naive ways of looking at Prepared. The first is just looking at the 0 cost and the card draw and saying “ooh, free card draw!” Of course, it isn’t actually free draw, because the card you end up drawing is just the next card in your deck, which you would have drawn anyway if you’d never bothered to take Prepared. The second naive perspective is the reaction to the first, saying essentially “well, this does nothing but discard a card, so it’s bad”. The first part of this is true; the second part isn’t.

There’s two assumptions that go into this read. The first assumption is that discarding a card is bad, which it frequently is not. Once you’ve spent all your energy for the turn, extra cards are useless, so until you start generating enough energy that you have some left over after using all the cards in your hand, Prepared costs you nothing at all. The second assumption is that discarding a card is not good, which sounds a lot like the first one but I think is subtly distinct: One is the understanding that much of the time there is no penalty for discarding, the other is the understanding that there is frequently utility in discarding. On the more obvious end of this, there are a number of cards and relics which take advantage of discarding in various ways – in describing Prepared as a bad card people often stipulate “except in a discard deck” because of these. However, there are also curses and negative status effect cards that you want to discard before the end of your turn to keep from taking damage or other negative effects, giving it a utility outside of these specialized synergies. If you have 3 energy and most of your cards cost 1, as at the beginning of the game, Prepared is almost never bad and is occasionally quite good.
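
A toy way to see the ‘costs you nothing’ case, using the early-game numbers above (3 energy, a hand of 5, everything costing 1) – the card ‘values’ here are invented:

```python
# Toy illustration of early-game Prepared: 3 energy, 5 cards drawn
# (one of them Prepared), every other card costing 1. The usefulness
# 'values' assigned to cards are invented for illustration.

others = [3, 1, 4, 1]   # the 4 cards drawn alongside Prepared
next_card = 9            # the card Prepared would dig up

# Leave Prepared unplayed: pick your 3 plays from the other 4 cards.
best_without = sum(sorted(others, reverse=True)[:3])

# Play Prepared (0 energy: draw 1, then discard 1): you now pick your
# 3 plays from 5 candidates, and the forced discard eats a card that
# was going to be discarded at end of turn anyway.
best_with = sum(sorted(others + [next_card], reverse=True)[:3])

print(best_without, best_with)  # 8 16 -- never worse when energy-bound
```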

Then there are even more obscure and surprising uses. I’ve used Prepared as nothing but an empty card that costs 0 alongside abilities that activate with every card played. I’ve used it to dump cards which were perfectly fine but not good enough to justify playing when I could empty my hand and use a relic to draw more cards. I’ve used it to pull an extra card so that next turn I would have 0 cards left in my deck and be able to use an ability that’s only usable under those circumstances.

This is what’s wonderful about Slay the Spire. Everything has a use and everything works together, sometimes in delightful and unexpected ways.

I do think it’s interesting, though, seeing how these concepts of card advantage and deck thinning sometimes fail to transfer from games like Magic and Hearthstone, where most people learn them, to a game that is, on the surface, very similar. Thinning your deck is if anything even more important, since you end up cycling through it multiple times in a combat – however, the utility of cards that only serve to grind through the deck faster is questionable, since by adding them they themselves thicken the deck, which is less of a concern in a game with a set deck size. Drawing more cards in Slay the Spire doesn’t control your long-term prospects the way it does in those games either, since you’re going to draw a new hand next turn anyway; it solely offers you a way to burn energy to achieve advantages on the current turn.
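
A quick sanity check on the thickening point, under a rough with-replacement approximation (the deck numbers are invented): a pure ‘draw a card’ cycler replaces itself when drawn, but it also pads the deck, and the two effects cancel out in terms of how many key cards you expect to see each turn:

```python
# Rough density check (with-replacement approximation, invented deck):
# a pure cycler draws its own replacement, so it speeds up cycling...
# but it also pads the deck, diluting every draw. The effects cancel.

def key_cards_seen_per_turn(deck_size: int, key_cards: int,
                            cyclers: int, hand_size: int = 5) -> float:
    # Each cycler drawn replaces itself, inflating total draws per turn.
    draws = hand_size * deck_size / (deck_size - cyclers)
    # Each draw is a key card with probability key_cards / deck_size.
    return draws * key_cards / deck_size

print(key_cards_seen_per_turn(deck_size=10, key_cards=3, cyclers=0))  # 1.5
print(key_cards_seen_per_turn(deck_size=12, key_cards=3, cyclers=2))  # 1.5
```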

In all honesty, Prepared still isn’t a very good card. It’s bad in a lot of decks, and in a lot of others it makes no real difference one way or the other. Still, the specifics of when and how it’s good, and the assumptions that people bring into the game about why it is useful, or fails to be so, are interesting.