I’m thinking about how I think; I’m processing my process. For whatever reason I seem to be a weirdo who ends up thinking about things a bit too much – most of the time this is super inconvenient because it makes it difficult to communicate with people who aren’t me and aren’t related to me. Sometimes it also lets me make interesting things, perhaps finding a novel angle or perspective someone else might not find. Or maybe I have the cause and effect in reverse here: it could be that the act of creating interesting things has tweaked my brain away from the standards of human discourse and into weird and specialized pathways. Likely some combination of both.

Perhaps these approaches will be interesting to you. Perhaps not. I’m trying to spend a little bit more time understanding how other people approach problems and creative tasks, so perhaps you may be interested in learning my approach as well.

What do I do? I ask a lot of questions. From any one fact, you can start pulling at threads, start prodding at what has to be true as prerequisite to this being true and what must be true as consequence of this truth. From any one fact you can then unearth a network of facts, from any one question a cluster of further questions. This is helpful for the direct problem-solving stuff, figuring out what logistically needs to be lined up in order to make something work, but it’s also useful for philosophical exploration.

Really what I’m talking about is applying programming logic to situations more complex and nuanced than a program. I’m used to programming, so I always try to find the most general case. If there’s something that is common knowledge in one field, maybe it has more general applications – helpful metaphors are often born this way, such as when we take our knowledge of the way rot spreads through produce and helpfully inform people that one bad apple spoils the bunch. Unfortunately, people then use the helpful phrase “one bad apple” to say oh, it’s just one bad apple, so it’s not a big problem – which is the exact opposite of the intended meaning of the phrase. It may be that we don’t have much call to skin cats any more, but there’s probably still more than one way to do that if we really want to, and probably more than one way to do most other things as well. Metaphorical aphorisms are usually just using a specific description of the best practice within a field to describe a more general case. In, um, exactly the way that I just did by beginning this paragraph with an analogy to programming.

Of course, these metaphors are often bullshit. That is, what’s good for the goose may not actually, in practice, be good for the gander. So somewhere in this spectrum, from the specific application of an idea out to the most general application of the idea, there’s usually a point where it stops actually being useful and becomes extremely bad advice. Somewhere there’s a discontinuity. These breakdowns are also an interesting point to start exploring from, to trace out from the specific case to the general until we find where it breaks down, to then describe the consequences of using this concept, that has been accepted as generally good advice from a specific application, outside of the scope of its utility.

Sometimes something just feels wrong, or disproportionately satisfying, in ways that aren’t readily described, and these are interesting places to begin to explore as well. Why does this feel different than I’d expect it to? What does it say about the thing that it instigates these emotions? What does it say about me that these emotions are instigated? Or, inversely, if others seem to be reacting unusually strongly to something, what does that signify? What sort of unfed hungers are indicated when something becomes explosively popular, what kind of unspoken rage when something becomes a locus of incandescent fury?

The through-line here is that we probe using emotion and intuition and then dig deeper using logic. Neither of these tools is honestly especially useful on its own – most people who think they’re being purely logical are actually being guided and biased by emotions they fail to acknowledge, while people who sell themselves as in tune with pure emotional truth are guided by logic that they pretend isn’t logic by avoiding giving it any concrete description. Everyone is guided by emotions and by logic, and the only way to navigate with any goddamn clarity is to acknowledge the presence of both and harness them, rather than to try to reject one or the other. The above approaches are really just frameworks for attempting that – likely your own personal creative and analytical approaches are as well.


I feel that this is probably an essay that I should preface with the disclaimer that I don’t really know what I’m talking about here. This is a chain of thoughts and suppositions which grazes on some touchy subjects, and I could be way off base. Nevertheless I feel that these may be thoughts worth sharing.

Okay. Something that seems a bit odd to me is that we describe mental illness as being a qualitative aberration rather than a quantitative aberration. That is, we say that depression is categorically unlike mere sadness, ADHD categorically unlike mere restlessness, narcissism categorically unlike mere pride, and so forth. I think we’re doing this a lot right now to emphasize the fact that mental illness is real, it’s an actual thing that happens that can ruin lives and kill people. Unfortunately, I feel like this framing leaves a lot of more marginal cases in a terrible position. What of those of us who fail to meet the standards of depression but have mere everyday crippling melancholy? What of those of us who are merely distractible, proud, irritable, impetuous… Are we therefore completely fine, even if we find coping difficult?

I don’t think separating mental illness from mere emotional difficulty is always a beneficial viewpoint, or even frequently so. I think of emotions as being like allergies – they’re a response in our body that exists to protect us from irritants, to make us healthier. They usually do. However, occasionally they can be counterproductive – and sometimes they can kill us.

Context and quantity are the only things that separate a medicine from a poison.

There are often dialogues comparing mental illness to emotion in an attempt to discredit the concept, acting like depression etc. are new inventions or that the only necessary solution to these sorts of imbalances is a nice jog through the woods or some other horseshit. These idiocies in particular tend to make this a territory that people, particularly those directly suffering from mental illness, get defensive about. The popular conception is that emotions are fundamentally controllable, and that being overwhelmed by them is a sort of personal weakness – therefore, since mental illness is not a personal weakness (except perhaps in the same literal sense as a paralyzed limb), emotions and mental illnesses are categorically dissimilar.

In describing how mental illness is fundamentally unlike emotion, people often describe it as a chemical imbalance. Again, I’m speaking as a complete layman here, but isn’t this still completely consonant with the conception of it as an overwhelmingly strong emotion? All emotions are chemical, and if a sensation is so overwhelming that it derails your life, that’s obviously imbalanced.

If we understand emotional balance as being situational, delicate, and on a spectrum, then it becomes clear that a) we probably all need a tune-up and b) there’s little hard division between the everyday overemotional and the dangerously ill. Unfortunately, the mechanisms of emotion are extremely complicated and can easily be knocked out of alignment, and it’s incredibly difficult to tell when this happens. Everything that governs who we are and how we act is a machine that is a black box to us, and its machinations are occasionally extremely volatile. I guess my point is… don’t wait until you’re undeniably, obviously, and perhaps irrevocably sick before you start thinking about your mental health. Those things which become illness often start as something smaller, and it’s better to go to a doctor than to an emergency room. And, I guess, also, don’t assume that just because someone hasn’t crossed the threshold to mental illness they are totally fine.

I’ve been thinking about ideas and about plagiarism. The concept of stealing ideas is a bit of a tricky one, since it conceives of ideas as having a single origin point, or of being discrete and quantifiable things – but so much of the life-cycle of ideas is of them being misinterpreted, reinterpreted, hazily remembered, recombined, turned into new ideas. I have not plagiarized and have no intention to – no one has ever accused me and I hope no one ever will – but I’m sure that some bits and pieces of other ideas which have originated elsewhere have found their way into my work. How could they not?

There’s so many ways for this to go wrong and I have honestly zero idea how to approach this as a problem. People should get credit for their work, but also it should be possible to accumulate knowledge without strenuously and probably erroneously tracking the source for every tidbit we happen to pick up. I can see the endpoints of both sides of this chain of logic though, and neither is satisfactory: Either we regard everything that goes on in our minds as wholly our own, and the credit and benefit of all ideation goes to the highest profile person to convincingly present those ideas, or we regard everything that goes on in our minds as coming from elsewhere, at which point presentation of ideas stagnates as everyone is certain that their ideas must have been presented elsewhere and better before. I feel like I’ve been on both sides of this, and found both deeply unsatisfactory.

Fortunately we are not restrained to only the extremes of every possible divide. There’s probably a few guidelines worth following here:

  1. If you know an idea came from elsewhere, give credit. If you’re not sure and it’s searchable, do a search. If you’re not sure and don’t find anything, but someone points it out later, update to give credit.

  2. Independent discovery happens all the time, but there’s no reason not to give someone credit if they happen to come up with the same idea. Go ahead and link to their work as further reading if it comes to your attention later.

  3. Try to add something new. Personally this is always my goal when writing, but it’s a worthy goal as well to just seek to share knowledge – a fact I sometimes forget to my detriment. If you’re just sharing knowledge, though, try to share some of the context where you got that knowledge as well – context is everything, and sometimes this context will make it easier to give credit where it’s due, especially when that credit is experience gained as part of a community or culture. It will also, incidentally, probably make the knowledge more useful.

  4. Boost the voices of others when they say something that you feel is worthwhile. Giving credit for the ideas that inspire you is great when it comes to the piece that has been inspired, but maybe even better if you do it every day.

When it comes to that last point I feel rather remiss. I think I feel hesitant to try to boost other people’s voices because I usually feel that I don’t have much voice of my own, and that it feels weirdly presumptuous to do so, but I definitely should get over it. In the spirit of trying to do better, here’s a few voices that I think have had a particular influence on me:

This current reflection, in particular, I started writing as a consideration of Liz Ryerson’s recent thoughts on the economy of ideas in games writing. She’s a particularly incisive voice in critique around games culture, though she’s now distanced herself from that scene. Also I think her game Problem Attic is quite interesting, particularly in how it harnesses the mechanics of glitchiness into metaphor, and it has likely inspired several of my pieces.

A lot of the original inspiration for starting a blog to talk about game design came from the lectures of Jon Blow – though I often disagree with his perspective nowadays, I still feel his critiques of mainstream game design were interesting and edifying. Unfortunately I unfollowed his Twitter feed after his unimpressive and unnecessary response to the Google memo shit show, but I can’t deny that his ideas have greatly influenced my own.

I was also inspired to begin this blog by the insightful and hilarious discussions on the Idle Thumbs Podcast, where the experience of playing games is taken apart, put back together again, and extrapolated out into absurd scenarios exploring why we love ridiculous video game bullshit.

There’s probably more I should put here, but I don’t expect to solve this all at once. I’ll just try to boost more voices in the future – and, perhaps, the first step in doing that is to believe that my own voice can be heard.

Among other ways to think about games, one that I rarely hear spoken of is their capacity as attention engines. Among the many social and emotional needs we have as human beings is our desire to be heard – even more than to say anything specific, we just yearn to be able to jam a flag into the dirt for people to see. Having someone listen to you, even passively, can be hugely rewarding, emotionally and even intellectually, as you feel connected with the world.

Before most games, we had ELIZA. ELIZA is a simple chatbot made to emulate a psychotherapist, one who answers every question with a question. “What are you thinking of?” “How did that make you feel?” “Why do you say that?” Despite being created with an almost parodic intent, to show the superficiality of human-machine communication, people felt a genuine connection with ELIZA, and sometimes even a degree of therapeutic benefit. Now, in 2017, there are lots of ways to be listened to by machines. Most of us have a machine in our pocket who will answer our questions as best as it is able, and will listen and respond to anything we say no matter how inane – though, perhaps, not in a very satisfactory way. Still, unsatisfactory answers are not necessarily too different from what we’ve come to expect from genuine human social contact either.
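To make it concrete just how little machinery it takes to make someone feel heard, here’s a toy sketch of the ELIZA trick in Python – reflect the speaker’s pronouns and hand their own statement back to them as a question. This is not Weizenbaum’s actual script, just the shape of it; the word lists and templates here are mine.

```python
import random

# Pronoun reflections: "I am sad" becomes "you are sad" before being echoed back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Question templates that wrap the reflected statement.
TEMPLATES = [
    "Why do you say {}?",
    "How does it feel that {}?",
    "What makes you think {}?",
]

def reflect(statement: str) -> str:
    """Swap first- and second-person words so the echo points back at the speaker."""
    words = statement.lower().strip(" .!?").split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    """Answer any statement with a question built around a reflection of it."""
    return random.choice(TEMPLATES).format(reflect(statement))

print(respond("I am worried about my work."))
# e.g. "Why do you say you are worried about your work?"
```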

A lot of what we want from games is for them to just respond to what we say to them. A game lives or dies on its ability to react to us, to listen to what we are trying to say. Because real artificial intelligence is a very long way away still, our methods of communication are usually greatly restrained in games to enable them to react in a satisfying way. A game’s controls are the language we use to speak to it: Frustration ensues when a game misunderstands what we are trying to communicate, or when it doesn’t allow us to communicate the thing that we desperately want to. We describe these sorts of problems as control issues, which I suppose says as much about us as it does about them.

This is what people really want a lot of the time when they ask for non-linearity. They don’t care about replay value and they don’t care about getting the sex scene for the character they like, they just want the sensation that they are being listened to. They want the video game equivalent of someone nodding along and saying “Mm, wow. Interesting. Huh. I see.” That those multiple branching paths and endings and romantic partners are available is chiefly valuable because it reifies that sensation, makes it feel solid and responsive – that there is, indeed, someone listening. And, perhaps, there was, 18 months earlier, a designer who listened to them by way of the player-proxy voice that lives in his head, the player he designs for.

A little while back the game Passpartout: The Starving Artist became a small-scale hit. Passpartout is a game where, playing as the titular starving artist, you paint using an extremely simple art program, and passersby choose to either buy your paintings or not, occasionally giving explanations as to what they like or don’t like about the picture. Now, the engine for evaluating these paintings is seemingly pretty simple, apparently rewarding the artist more frequently for time spent than for technical skill, but it serves its purpose, effectively convincing the player that the game is paying attention to what they are creating, that they’re not just creating into a void – which is how it all too often feels for artists.

Unfortunately, Passpartout is quite similar to a prototype made by Jon Blow called Painter. He released this prototype for free on his site, and in talking about it he described it as a failed prototype because it failed to realize his vision of a strategy game where you create paintings to appeal to different gallery owners and curators and achieve success. Some statements made on Twitter suggest that he was dismayed that a game so similar to his failure could be a success – but there was no actual reason for the Painter prototype to be a failure except that it failed to achieve the vision he’d had in mind. The element of trying to appeal to tastes was never actually very interesting. What was interesting was making a painting and having it be seen and acknowledged, being told that something you’d made had worth. All the judging algorithm had to do was reasonably reliably determine which paintings you’d worked hard on and which you’d hastily crapped out, and evaluate them accordingly, just to ensure that you knew it was paying attention.
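I have no idea what either Passpartout or Painter actually does under the hood, but the kind of attention-checking I mean doesn’t need to be clever. Here’s a sketch with made-up signals and thresholds – the point is that effort gets noticed, not that quality gets measured.

```python
from dataclasses import dataclass

@dataclass
class Painting:
    # Hypothetical effort signals a paint program could easily record.
    seconds_spent: float
    stroke_count: int
    distinct_colors: int

def effort_score(painting: Painting) -> float:
    """A crude 'did you actually work on this?' heuristic – not a measure of quality."""
    # Each signal is capped so no single one can be farmed endlessly.
    time_part = min(painting.seconds_spent / 120.0, 1.0)
    stroke_part = min(painting.stroke_count / 200.0, 1.0)
    color_part = min(painting.distinct_colors / 8.0, 1.0)
    return (time_part + stroke_part + color_part) / 3.0

def buyer_reaction(painting: Painting) -> str:
    """Turn the score into the acknowledgement the player is really after."""
    score = effort_score(painting)
    if score > 0.7:
        return "I can tell you poured yourself into this. I'll take it."
    if score > 0.3:
        return "Hmm. Interesting. Maybe."
    return "Did you even try?"

print(buyer_reaction(Painting(seconds_spent=300, stroke_count=450, distinct_colors=12)))
print(buyer_reaction(Painting(seconds_spent=5, stroke_count=3, distinct_colors=1)))
```

The numbers are arbitrary; the acknowledgement is the point.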

This may seem fake or trivial, but this loop of the player communicating something and the game responding is the core of what a game is. It doesn’t need to be a real or detailed response, it just has to be real enough to show the player that someone, or something, is listening.

A game is a collection of symbols and rules for how those symbols can interact with one another. I described these as being obstacles and tools, one of which defines the parameters of success and the other of which is used to navigate through those parameters – but the division between the two isn’t necessarily as harsh as I implied. Sometimes obstacles and tools are the same – that is, sometimes you can use one obstacle to navigate another.

There are a number of interesting examples of this – you could jump off of an opponent’s head to reach a higher platform or, inversely, you could jump to a higher platform to cut off an opponent’s pursuit. You could lead a group of enemies from one faction to encounter a group of enemies from another faction so they start fighting each other, or you might reposition a trap to activate another trap safely. You might even intentionally jump on an exploding trap to boost yourself to a normally unreachable rooftop. In addition to obstacles sometimes behaving like tools, on occasion a tool will take on the characteristics of an obstacle. The most common example that we encounter in games is probably the hand grenade, which is both an extremely potent weapon and also a convenient way to quickly separate the component parts of your bloody carcass.

There’s no reason for tools and obstacles to be different at all, in fact, when they all just boil down to being a set of physical properties and behaviors. There’s no reason for an object to know what its purpose is, only for it to behave in accordance with a set of instructions. In Spelunky, for example, you can pick up and throw almost everything in the game – rocks, pots, laser turrets, unconscious yeti – and, once an object has been thrown, it affects the environment in pretty much the same way no matter what it happens to be – well, unless it happens to be explosive, in which case its effects will be more dramatic. It’s actually fairly commonplace to throw a rock to set off an explosive only to have the rock blasted back in your face by the explosion and hit you… into a yeti who throws you onto a collapsing platform which falls on top of a mine which blasts you into a pit.

When items are agnostic of their origins and purpose, surprisingly intricate interactions become possible. In many other games, rocks would only be useful for hurting opponents or for setting off traps – in many other games mines would only be activated by the player, enemy bodies would be inert, explosions would only affect things which could be damaged. It’s fascinating what can be achieved when we allow items to be exactly what they are, instead of designing them specifically to fit a particular role in the game.
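This is easier to see in code than in prose. The following is not Spelunky’s code, just a minimal sketch of what it means for an object not to know its purpose: everything is a bundle of physical properties, and a thrown rock, a thrown mine, and a thrown yeti all go down the same path.

```python
from dataclasses import dataclass

@dataclass
class Thing:
    # Every object is just a bundle of properties; nothing here says "weapon" or "obstacle".
    name: str
    mass: float
    explosive: bool = False
    alive: bool = False

def on_impact(thrown: Thing, target: Thing, speed: float) -> list[str]:
    """Resolve a collision purely from properties, with no notion of what each thing is 'for'."""
    events = [f"{thrown.name} hits {target.name}"]
    if target.alive and thrown.mass * speed > 5:
        events.append(f"{target.name} is stunned")
    if target.explosive:
        # Same rule whether the target is a mine, a powder keg, or anything else flagged explosive.
        events.append(f"{target.name} explodes")
    if thrown.explosive:
        events.append(f"{thrown.name} explodes")
    return events

rock = Thing("rock", mass=2.0)
mine = Thing("mine", mass=1.0, explosive=True)
yeti = Thing("yeti", mass=8.0, alive=True)

print(on_impact(rock, mine, speed=4.0))  # throwing a rock at a mine sets it off
print(on_impact(yeti, rock, speed=4.0))  # a thrown yeti is just a heavy projectile
```

Nothing in on_impact knows whether it’s resolving a weapon, a trap, or an unconscious enemy – which is exactly why chain reactions like the one above can happen.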

Another interesting example is Phantom Brave, a tactical RPG from Nippon Ichi Software: In this game, the main character Marona is the only living character you control, with all of your teammates being ghosts she has befriended and which she can summon to help. The catch is, to summon a ghost she has to have something to summon them into, which can be something as ordinary as an everyday bush or rock or as ornate as a cursed sword. Whatever they get summoned into, they gain properties of that object so, for instance, if you summon someone into a tree they’ll probably be more vulnerable to fire damage. The other catch is that literally any object on the battlefield, including other characters, can be picked up and used as a weapon. These systems, interacting with others such as a system where every item comes with a stat-modifying adjective before it, enable some really strange and intriguing strategies. Sometimes it’s necessary to pick up one of your characters and toss them across a pit, pick up an enemy and beat up his teammate with his body, or summon a weak character holding a really nice rock you’ve found and have them drop it there for you to summon a more important fighter into. Later in the game some enemies are even scripted to start picking up and throwing powerful items off of the stage to keep you from summoning allies into them.
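The numbers below are invented and the mechanics heavily simplified, but the structure of Phantom Brave’s system is roughly this: a ghost’s effective stats come from their own base stats filtered through whatever object they were confined into and whatever adjective that object happens to carry.

```python
from dataclasses import dataclass, field

@dataclass
class Vessel:
    # Any object on the field can serve as a vessel; bonuses and resistances here are invented examples.
    name: str
    stat_bonus: dict = field(default_factory=dict)  # e.g. {"def": 10}
    fire_resist: int = 0                            # a tree would be negative here

@dataclass
class Adjective:
    # Stand-in for the stat-modifying adjectives attached to items; the multiplier is made up.
    name: str
    multiplier: float

@dataclass
class Ghost:
    name: str
    base: dict  # base stats before confinement

def confine(ghost: Ghost, vessel: Vessel, adjective: Adjective) -> dict:
    """Effective stats of a ghost summoned ('confined') into an object on the battlefield."""
    stats = {}
    for stat, value in ghost.base.items():
        stats[stat] = int((value + vessel.stat_bonus.get(stat, 0)) * adjective.multiplier)
    stats["fire_resist"] = vessel.fire_resist
    return stats

ally = Ghost("Ash", base={"atk": 20, "def": 15})
nice_rock = Vessel("rock", stat_bonus={"def": 10}, fire_resist=5)
tree = Vessel("tree", stat_bonus={"atk": 2}, fire_resist=-10)

print(confine(ally, nice_rock, Adjective("Sturdy", 1.2)))  # tanky
print(confine(ally, tree, Adjective("Plain", 1.0)))        # more vulnerable to fire
```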

In some ways, this functional agnosticism of game objects is the default – when I say a game is a collection of symbols and rules, I’m not just speaking conceptually, but also in terms of how games are made, of the internal programming logic that drives their operation. So, one might ask, if this is the natural state of the game, and if this creates so many interesting and emergent situations, why do so few designers allow their games this kind of leeway? Unfortunately, when you increase the possibility space in a game this way, you also increase the odds of something going haywire, of an uncontrolled feedback loop or absurd dominant strategy that completely undermines the intended game design. Part of why this openness was possible in Spelunky and Phantom Brave is that these are very tightly controlled designs – Spelunky through having a small set of game elements with only a couple of methods of interaction and a straightforward and minimal progress path, and Phantom Brave through presenting a restrained and traditional tactical RPG interface. For a sense of what this looks like without that sort of restraint, take a look at Dwarf Fortress – which is an important and fascinating game, to be sure, but is not easily accessible to most audiences and results in scenarios which, though they are amazing stories, frequently represent bizarre and illogical breakdowns of the symbolic logic as the system recursively interacts with itself.

Still, it’s worth considering: How am I restraining this object, making it behave in a more constrained way than it has to – and a less interesting way than it could? How would it affect the overall design if these constraints were to just, maybe… disappear?

My energy ebbs and flows. I have good days, bad days, and runs of each which last weeks and sometimes months. Every time I have a few good days or a few bad days in a row I feel like I’ve figured a bit more out about how I work, how to optimize and improve, and perhaps gradually approach a better version of my life by trial and error. I’ve also been thinking about studies where animals were given rewards on a random schedule, and the weird, random, irrational behaviors they began to exhibit as they tried to determine how the reward was ‘earned’. How much of self-improvement boils down to superstition, to trying to behave the way you did that one time you felt good instead of the way you did that time you felt crappy? Sometimes it’s hard to tell the difference between wearing a lucky t-shirt and getting 8 hours of sleep. They’re both supposed to help, and proponents have a way of writing it off when they don’t.

Okay, yeah, I suppose this is what science is for, analyzing empirical evidence while accounting for secondary factors, but that’s really only helpful for analyzing trends across a population. When it comes to what works for me, what makes me feel better or worse, what makes me more or less productive and satisfied, I have a sample size of one with an unknown time delay between input and output. There’s a lot of noise in my signal, and it takes a lot of samples before I can be satisfied that I feel shitty and tired because of something I ate, something I did, or something I failed to do – any and all of that may just be background noise on top of a signal which may simply indicate that I feel bad sometimes.

Trying to debug a system that you live in is difficult. It’s difficult when it’s your body and it’s difficult when it’s your culture. Everything you change changes you, and everything that changes you changes your capacity to observe the system. It’s easy to get discouraged. The only advantage we have is the depth and quality of information we receive: We don’t have to wait for something to break, we can just tell when something feels wrong. In some ways that really sucks, because it means we spend a lot of the time feeling something is wrong without knowing what. Sometimes we’re afraid it’s nothing – sometimes it is nothing. Usually, though, if we feel there’s something wrong, there’s something wrong.

Many people prioritize the logical over the emotional, deriding those who would say that something just ‘feels’ wrong. A lot feels wrong at this point – another thing making the systems we live in difficult to analyze. But telling people to ignore these feelings is as shortsighted as telling them to ignore any other pain – pain is an indication that something is cut or broken, and even if we sometimes experience it for no good reason there’s never a good reason to ignore it.

Just because there is noise does not mean there is no signal.

Your lips are intriguing, a sheath for your teeth,
a curled pearl snarl or wrap around rocks,
singing a song or intoning a poem,
bent up in symbols known only to you.

Cheeks rising with lips and eclipsing the eyes,
similar shapes shared in joy and dismay,
skull sockets packed with radial muscles,
and dark skin draped eyelids to hold in the ball.

Brows beetling and bristling make facial creases,
rising in surprise and bowing with frowns,
indicating which way your day is now going,
an arrow pointing up or one pointing down.

Your nose in repose stays smooth as breath flows,
sides flare with your temper, your bridge draws together,
in anger and sadness, over troubled waters,
sorrows salty like seas and easier to drown in.

Lines define the edge of the mouth,
up to the nose, lips out like a bell,
then around down the sides and under the chin
sometimes invisible, but seen when you yell.

Skeletons and paintings can't choose when to smile,
but you do,
these lines can't choose to lie,
but you can.