Monthly Archives: November 2017

This is the last of a month of daily Problem Machine blog posts. It’s been a tiring month. I’m looking forward to never writing another word for the rest of my life, or at least a few days. I guess this is the time to reflect back over what I’ve learned.

  1. Ideas are not rare

I worry sometimes that I’ve already thought of every topic that I’m going to think of, that the barrel is dry and I’m just scraping out splinters. I don’t consider that a reasonable worry but also I don’t consider it an escapable one. What’s been driven home over the last month is that not coming up with any ideas has more to do with where I’m at on that day – that when I can’t think of anything it’s not a permanent affliction, but just one day where my brain is interested in doing different things that aren’t coming up with ideas for something to write.

Unfortunately, when I’ve committed myself to doing daily essays I can’t really allow my mind the extra time it wants to come up with something, so I end up having to push myself to write after several hours of thinking and false starts. This is the most exhausting part: The actual writing is usually (not always) fairly effortless, comparatively.

  2. Ideas do, nevertheless, become scarcer

The first 10 days or so were fairly forthcoming and exhilarating, though it still took a certain amount of pushing to get myself to come up with concepts, and a while to build up momentum. The next 10 days were probably the easiest, where I had my habits built up and still had a creative reservoir, but I started feeling the strain.

The last 10 started really taking a toll. It might also be the weather changing for Winter I suppose, but I’ve been very tired. Nearly every post now takes a few hours of sitting and thinking and reworking before I can turn it into anything, and this isn’t leaving me a ton of time and energy for other work. Fortunately, for today’s post I had the incredibly convenient pre-made topic of this being the last daily post to write about!

  3. Super Hexagon is a good video game

I’ve written in the past about how I like to use Super Hexagon as a creative tool, almost a form of meditation, since it requires such acute spatial concentration it really leaves the verbal/abstract parts of my brain free to think about this and that. Thus for the last month, as I try to write every day, I have been playing approximately one shitload of Super Hexagon – enough to actually get good at the game again and beat most of the best times on my friends list.

It’s a relief, when I’m drilling myself on the abstract ideals of improvement at art and what that means in this world, at the unsolvable dilemmas of game design and how to do better, to spend time in bits and pieces in something that I can definitely and quantifiably improve at. Many games promise this idea of visible improvement, but few single-player games can satisfyingly offer it – frequently offering upgrades to equipment and characters instead of instilling a direct change in the player’s skill. The aspirational goal being measured in mere seconds is pleasing in both its straightforwardness and its limitedness: Even an amazing time, for me, would be at most a few minutes, which is something I can definitely fit in my schedule. Even though I described the last month as having contained one shitload of Super Hexagon, in fact I think I’ve spent less than 10 hours actually playing it over the last 30 days – it’s just so dense and active that it felt like many more.

What’s next? I think I’m going to be going back to weekly posts for the immediate future, though I’ll probably be skipping this Saturday for obvious reasons and will probably be a bit spotty through December for other obvious reasons. At one point I was considering going twice-weekly and starting a Patreon to support my writing, but though the readership seems to have increased a bit – around 30 views a day, which is encouraging but not astounding – I don’t think I have a readership base sufficient to offer significant support. Feel free to pipe up in the comments if you feel differently.

That said, I do generally feel more confident in both the quality and consistency of my writing ability now, so I’ll probably be working on collating a bunch of past Problem Machine posts into some sort of structure and begin the process of converting that into a book. At a rough estimate, I think I probably have about 5 years of weekly 500 word blog posts, and between overlap and unsuitability I figure I’ll probably be able to use maybe half of these, so this book will start with around 60,000-70,000 words, which I can then revise and supplement to round it out to probably around 100,000-150,000 – pretty substantial. We’ll see when I get there, but I think it could be something I can be really proud of when it’s done, and encompass a lot of the philosophy I’ve put into this blog.

Part of the reason, as well, that I think I’d like to put a book together is pursuant to one of the ideas I’ve been talking about recently: The idea that to be a good artist is to be a good promoter of your art. It’s not an approach that comes easily to me, but I think as a naturally cautious person I have a much easier time promoting the idea that this thing I have made is good than the idea that this thing I will make will be good – I am generally very chary of making promises about what will happen in the future. Having one discrete thing that I can promote as my work sounds very appealing. If people then take that work as evidence that I can produce work of similar quality in the future, that’s on them – even if I, too, hope and believe that they are correct in that presumption.

I will probably also do another month of daily work in the near future, even if this one made me want to die a little bit. December’s no good, and I will need to stabilize my money situation a bit – this writing-binge was enabled by a small windfall I received a few months ago, which I’ve tried to be careful with but half of which has already eroded. Probably next up will be a daily music project: I’ll post the results here probably in weekly digests. This is all up in the air, but I thought y’all might be interested in hearing where I’m going with this.

So, to close out this month, here’s some of my other stuff you can check out:

As I just mentioned, I write music. Here’s where most of it is:

I also stream on Twitch! My current schedule is Tuesday, Thursday, Friday at 8pm Pacific time, Sunday at 6pm Pacific time:

I’m also working on a game! I’ve been having to dial back my efforts on this recently due to increased focus on the blog, but I post about my progress on that project here as well.

Thanks for checking out my work. Every view and every like means a lot to me, since it’s so easy to feel isolated and powerless in the world today. I hope I’ve brightened your day or broadened your perspective a bit, as well, through the work I’ve put in over the last month, and the last five years.

It’s a tough banana to split, knowing how much better you could be while trying to convince yourself you’re good enough. The more one improves the more capable one becomes of seeing room for improvement. Now, the Dunning-Kruger effect suggests that at the highest level of skill one becomes able to confidently assess one’s ability as being extremely high. I don’t think I’ve ever met anyone at this level of skill. I’m not sure it’s reasonable to expect to ever reach that point, at least not in the near future. The experience of art that I have now is probably close to the experience that I can expect for the next decade: Being better than many, but also just good enough to see how much worse I am than I could be.

The good news is that this is one of the exact traits – along with enthusiasm, patience, and, I dunno, talent if that’s a real thing – that I will need to improve. The bad news is that it’s real fucking annoying.

Skill isn’t everything. I mean, when it comes to art it’s hard to even quantify what skill means. The idea that being a skilled painter equated to perfect photo-realism went out of style when cameras came in and did that job better. Who the hell even knows what being a good writer means? We just know it when we read it. Except we usually don’t, considering the career of Dan Brown, who I’ve never read but also I don’t want to because I’ve heard he sucks and I believe it. We have the production of near-identical ‘good’ movies down to such a science that people hunger for less competently made films in the hope that they at least provide something new and interesting. Good art and bad art are mostly just signifiers of what we value, nothing intrinsic to the work. Skill is the ability to produce the thing that’s closest to what you think of as good art.

It’s a real pain in the ass if what you think of as good doesn’t line up with what other people think of as good. When that happens, the better you get, the less you rely on cliche, the further away you drift from what people want. Poor Van Gogh, making the best paintings he could in a style only he could achieve, and no one wanted them. Only later did the definition of good art shift enough to make room for his work.

That’s the third rail in this banana split: Even if one were to somehow achieve perfection, to perfectly realize the dream art floating in your brain, to really pour yourself onto paper or canvas or celluloid, whether that’s ‘good’ or not depends more on the world than it does on you. Which is why most of the job, the actual work of being an artist, if you want an audience, if you want money, is to convince people that whatever it is you’re doing is ‘good’ – to bring their idea of good art into alignment with your own by any means available.

It’s bad news for those of us who have just been locking ourselves away and practicing. We got to the late game and realized we leveled up the wrong skills. Of course, if food and medicine and shelter weren’t issues, we could roll with it, hope that maybe someday the world’s tastes would coincidentally come along and align with our own, just like they did too late for Van Gogh. Unfortunately, we don’t have that sort of leeway.

Maybe not by nature, but by necessity, making art is a sales position.

There’s a proverb, “genius is the infinite capacity for taking pains.” The main thing that determines our capacity to do well at a task is how much we work at that task; the main things that determine how much we work at a task are our interest in the work and our freedom to pursue it. If, as a society, we want to see a lot of great art made – as well as, incidentally, a lot of great everything else – the path to making that happen lies in creating environments which foster a deep and powerful interest in pursuing the act of creation – and which allow the freedom to pursue that creation.

This is not the environment we live in. Most of the education we provide is shaped to develop a keen interest in the rewards work can bring rather than an interest in the work itself. People who show an interest in work for its own sake are usually branded nerds or dweebs of one variety or another. Moreover, we seldom have the resources to wholly pursue our interests, being forced away from them by the pressures of financial solvency or other obligations to society. This is why most successful artists come from wealth: It’s not that wealth enables them to train more effectively at art, it’s that wealth enables them to focus entirely on art in a way not available to others.

Both of these are problems that need fixing. The latter, of how to provide everyone with the freedom to pursue their own interests, is at this point a matter of fierce debate – though not generally couched in those terms. We talk about this kind of freedom in terms of health care, in terms of food and shelter security, in terms of basic income: These are the tools that might grant freedom to people by shielding them from the elements long enough for them to do work they actually care about.

The former is a bit trickier. How do you make people care? Why is it we think it’s uncool to care? Is it even ethical to lead people to care in a world where precarity makes taking an interest in more abstract subjects potentially dangerous?

There’s something vital at stake here. If we stop creating, we are done. We are in a desert, and though we are starving and dying of thirst, it is the worst time to stand in place. We have to keep moving – and, to move, we have to want to move. We must create, so we must find our passion for creation.

Or maybe that’s just self-serving. Maybe I just want us to keep creating because that’s what I care about, and this passion is a path to our self-destruction. The point remains, though, that interest, or passion, is a limited and vital resource, and that we so often just let it drain out through our fingers without even noticing it’s there. When you pay attention, there are no refunds.

Way back when I started this blog, one of the first essays I did was about conceiving of a game as the combination of three related spaces – physical, mechanical, and narrative – and gameplay as the act of allowing the player to explore these spaces. I think this perspective is still useful, though sorely in need of revision now, five years later (I’ll likely return to it at some point in the future). However, whenever I think about how different games emphasize one or the other of these attributes, whenever I try to draw a hard line between where one space begins or ends, I run into a bit of difficulty making that division. The mechanical and narrative spaces are fairly easy to delineate – one is the actions you can take in an environment and how that environment reacts to those actions, the other is the story that is told about those actions and the context in which they take place. However, the physical space, which one would expect to be the most intuitive of the three, is a bit more difficult to delineate – and I think it has to do with how we create physical space in games.

The problem I keep running into is that the physical space of a game is actually created by means of mechanical and narrative elements. The mechanical aspect of the space is your ability to move around on some parts of it and to have your movement blocked by others, and the narrative aspect is the colors and textures and what they suggest about the world you’re in. Together they create something that feels like a chunk of physical world to explore, but there’s nothing actually physical there. A sense of physical space is created, but it is not separable from the mechanical and narrative elements that contribute to it – not in the same way that the mechanical and narrative elements are separable from each other.

It may be extremely obvious that the physical reality of the game world doesn’t exist, but it’s suggestive that we create the illusion of a physical reality through recreating the parts of reality which interest us most as humans. That is, when we encounter an object, our concerns are a) what can I do with this? and b) what does it mean that this is here? This, of course, has very little to do with the actual material world, where objects are made of many different bits and pieces, covered with bits and pieces of everything else, subjected to forces we have an incomplete understanding of. What’s noteworthy is not that we are simulating a reality, but that we are simulating outwards in, out from the superficial aspects we find ourselves most interested in, down into the more fundamental aspects such as mass and warmth only as we find those necessary to power the superficial simulation.

Tangentially, I am now quite certain that if we had any way to simulate texture and taste in games we would have done so long ago, as these are also superficial aspects of great interest to human perception.

It’s fascinating that so many of us consider what we’re doing to be realistic. What we do with games is render exclusively that which can be seen: every 3d object is an empty shell, every character who is modeled is simply their exterior with nothing inside, with interior parts created only as they become necessary to render, when they are ripped apart from the exterior (a common scenario in games). We see what a human, or a house, or a rock, looks like, and reproduce what we see, when that is inherently only the most superficial possible version of that thing.

Something from physical reality is translated into signals for our brain, is stored as a symbol representing that object; then our brain conducts our body to create an object that can reproduce those signals in another brain. That is what we call art – or, at least, representational art.

So, with games, we started from the simplest version of the most superficial reality, and from there we’ve managed to make more detailed and convincing forms of that representation. Perhaps we could simulate a reality based more on what we know to be there than what we see to be there? Even a primitive simulation of a more complete reality could lead to new and interesting artistic pursuits. Or, perhaps, since we are unmoored from the physical basis of reality, we could create a simulated world far wilder and stranger than we can while paying lip service to material reality.

Mostly, though, I just find it amusing how much we like to act like we’ve come anywhere near a reality simulation when our approach is in essence purely superficial. How very human of us.

There’s an idea I have a hard time getting myself away from: The idea that it is necessary to create. The idea that it is necessary to add value, to contribute, to build up. That is, the idea that our purpose is our contribution, the things we make for society. And this is such an overwhelmingly entrenched maxim in my mind that looking at it in print I already feel like it makes me look bad to even be questioning this idea, but I feel like if we actually spend a bit of time to dissect it, it starts to look pretty fucked up.

Let’s just say outright: Contributing is good! Doing things which make more people’s lives better is good, doing things which advance human knowledge is good, doing things which broaden our understanding is good! These are all great! That does not, however, make them, on a person by person basis, necessary. Your humanity does not rest on your ability to contribute. It doesn’t even rest on your ability to not do harm. These are good things to try to do: But they don’t make you you. No one thing makes you yourself except for your actual presence in the world.

The part that struck me most forcibly just now is the phrase “the value of human life.” Is this actually an okay way to think about human life? As something that can be valued? Is this how deeply the idea of competitive economics has drilled itself into us? The word ‘priceless’ was made to describe the idea of something being not measurable in terms of value, of having a deep significance, of being irreplaceable, but nowadays we just use it to mean extremely expensive. It feels, too, that when we speak of human life having value that is what we are saying, that our bodies and minds have value, that we might be expensive but we can still be bought and sold.

Fuck every life being precious, every life is more than that – it’s life.

It gets hard to live it when that life is spent trying to calculate how to maximize its own value.

This valuation of human life maybe made sense at one point, when we had enough food to keep half the village alive through winter and we had to make some hard choices about who got fed, who would be able to best keep the village going after winter ended. We have enough now to feed everyone. The problem isn’t that it’s too hard or too expensive to keep people alive, it’s just too unpopular. The only reason we continue to evaluate ourselves this way is that it’s advantageous to those who would extract that value. It’s better for those on top if those below spend the rest of the time fretting over how to make themselves ‘better’, how to produce more for less, how to be a bargain.

I’m tired of trying to be better. I still want to be, desperately, but I’m tired. Tomorrow I’ll probably continue to practice, to create, to expand: Like it or not, this is who I am now. But I still need to remember that it’s not all of who I am: I am also me.

I’m thinking about how I think; I’m processing my process. For whatever reason I seem to be a weirdo who ends up thinking about things a bit too much – most of the time this is super inconvenient because it makes it difficult to communicate with people who aren’t me and aren’t related to me. Sometimes, though, it lets me make interesting things, perhaps finding a novel angle or perspective someone else might not find. Or maybe I have the cause and effect in reverse here: it could be that the act of creating interesting things has tweaked my brain away from the standards of human discourse and into weird and specialized pathways. Likely some combination of both.

Perhaps these approaches will be interesting to you. Perhaps not. I’m trying to spend a little bit more time understanding how other people approach problems and creative tasks, so perhaps you’ll be interested in learning my approach as well.

What do I do? I ask a lot of questions. From any one fact, you can start pulling at threads, start prodding at what has to be true as prerequisite to this being true and what must be true as consequence of this truth. From any one fact you can then unearth a network of facts, from any one question a cluster of further questions. This is helpful for the direct problem-solving stuff, figuring out what logistically needs to be lined up in order to make something work, but it’s also useful for philosophical exploration.

Really what I’m talking about is applying programming logic to situations more complex and nuanced than a program. I’m used to programming, so I always try to find the most general case. If there’s something that is common knowledge in one field, maybe it has more general applications – helpful metaphors are often born this way, such as when we take our knowledge of the way rot spreads through produce and helpfully inform people that one bad apple spoils the bunch. Unfortunately then people use the helpful phrase “one bad apple” to say oh it’s just one bad apple so it’s not a big problem, which is the exact opposite of the intended meaning of the phrase. It may be that we don’t have much call to skin cats any more, but there’s probably still more than one way to do that if we really want to, and probably more than one way to do most other things as well. Metaphorical aphorisms usually just use a specific description of the best practice within a field to describe a more general case. In, um, exactly the way that I just did by beginning this paragraph with an analogy to programming.

Of course, these metaphors are often bullshit. That is, what’s good for the goose may not actually, in practice, be good for the gander. So somewhere in this spectrum, from the specific application of an idea out to the most general application of the idea, there’s usually a point where it stops actually being useful and becomes extremely bad advice. Somewhere there’s a discontinuity. These breakdowns are also an interesting point to start exploring from, to trace out from the specific case to the general until we find where it breaks down, to then describe the consequences of using this concept, that has been accepted as generally good advice from a specific application, outside of the scope of its utility.

Sometimes something just feels wrong, or disproportionately satisfying, in ways that aren’t readily described, and these are interesting places to begin to explore as well. Why does this feel different than I’d expect it to? What does it say about the thing that it instigates these emotions? What does it say about me that these emotions are instigated? Or, inversely, if others seem to be reacting unusually strongly to something, what does that signify? What sort of unfed hungers are indicated when something becomes explosively popular, what kind of unspoken rage when something becomes a locus of incandescent fury?

The through-line here is that we probe using emotion and intuition and then dig deeper using logic. Neither of these tools is honestly especially useful on its own – most people who think they’re being purely logical are actually being guided and biased by emotions they fail to acknowledge, and people who sell themselves as in tune with pure emotional truth are guided by logic that they pretend isn’t logic by avoiding giving it any concrete description. Everyone is guided by emotions and by logic, and the only way to navigate with any goddamn clarity is to acknowledge the presence of both and harness them, rather than to try to reject one or the other. The above approaches are really just frameworks for attempting that – likely your own personal creative and analytical approaches are as well.

I feel that this is probably an essay that I should preface with the disclaimer that I don’t really know what I’m talking about here. This is a chain of thoughts and suppositions which grazes on some touchy subjects, and I could be way off base. Nevertheless I feel that these may be thoughts worth sharing.

Okay. Something that seems a bit odd to me is that we describe mental illness as being a qualitative aberration rather than a quantitative aberration. That is, we say that depression is categorically unlike mere sadness, ADHD categorically unlike mere restlessness, narcissism categorically unlike mere pride, and so forth. I think we’re doing this a lot right now to emphasize the fact that mental illness is real, it’s an actual thing that happens that can ruin lives and kill people. Unfortunately, I feel like this framing leaves a lot of more marginal cases in a terrible position. What of those of us who fail to meet the standards of depression but have mere everyday crippling melancholy? What of those of us who are merely distractible, proud, irritable, impetuous… Are we therefore completely fine, even if we find coping difficult?

I don’t think separating mental illness from mere emotional difficulty is always a beneficial viewpoint, or even frequently so. I think of emotions as being like allergies – they’re a response in our body that exists to protect us from irritants, to make us healthier. They usually do. However, occasionally they can be counterproductive – and sometimes they can kill us.

Context and quantity are the only things that separate a medicine from a poison.

There are often dialogues comparing mental illness to emotion in an attempt to discredit the concept, acting like depression etc. are new inventions or that the only necessary solution to these sorts of imbalances is a nice jog through the woods or some other horseshit. These idiocies in particular tend to make this a territory that people, particularly those directly suffering from mental illness, get defensive about. The popular conception is that emotions are fundamentally controllable, and that being overwhelmed by them is a sort of personal weakness – therefore, since mental illness is not a personal weakness (except perhaps in the same literal sense as a paralyzed limb), emotions and mental illnesses are categorically dissimilar.

In describing how mental illness is fundamentally unlike emotion, people often describe it as a chemical imbalance. Again, I’m speaking as a complete layman here, but isn’t this still completely consonant with the conception of them as an overwhelmingly strong emotion? All emotions are chemical, and if it’s such an overwhelming sensation that it derails your life that’s obviously imbalanced.

If we understand emotional balance as being situational, delicate, and on a spectrum, then it becomes clear that a) we probably all need a tune-up and b) there’s little hard division between everyday overemotional and dangerously ill. Unfortunately, the mechanisms of emotion are extremely complicated and can easily be knocked out of alignment, and it’s incredibly difficult to tell when this happens. Everything that governs who we are and how we act is a black-box machine whose machinations are occasionally extremely volatile. I guess my point is… don’t wait until you’re undeniably, obviously, and perhaps irrevocably sick before you start thinking about your mental health. Those things which become illness often start as something smaller, and it’s better to go to a doctor than to an emergency room. And, I guess, also, don’t assume that just because someone hasn’t crossed the threshold to mental illness they are totally fine.

I’ve been thinking about ideas and about plagiarism. The concept of stealing ideas is a bit of a tricky one, since it conceives of ideas as having a single origin point, or of being discrete and quantifiable things – but so much of the life-cycle of ideas is of them being misinterpreted, reinterpreted, hazily remembered, recombined, turned into new ideas. I have not plagiarized and have no intention to; no one has ever accused me and I hope no one ever will, but I’m sure that some bits and pieces of other ideas which have originated elsewhere have found their way into my work. How could they not?

There’s so many ways for this to go wrong and I have honestly zero idea how to approach this as a problem. People should get credit for their work, but also it should be possible to accumulate knowledge without strenuously and probably erroneously tracking the source for every tidbit we happen to pick up. I can see the endpoints of both sides of this chain of logic though, and neither is satisfactory: Either we regard everything that goes on in our minds as wholly our own, and the credit and benefit of all ideation goes to the highest profile person to convincingly present those ideas, or we regard everything that goes on in our minds as coming from elsewhere, at which point presentation of ideas stagnates as everyone is certain that their ideas must have been presented elsewhere and better before. I feel like I’ve been on both sides of this, and found both deeply unsatisfactory.

Fortunately we are not restricted to the extremes of every possible divide. There’s probably a few guidelines worth following here:

  1. If you know an idea came from elsewhere, give credit. If you’re not sure and it’s searchable, do a search. If you’re not sure and don’t find anything, but someone points it out later, update to give credit.

  2. Independent discovery happens all the time, but there’s no reason not to give someone credit if they happen to come up with the same idea. Go ahead and link to their work as further reading if it comes to your attention later.

  3. Try to add something new. Personally this is always my goal when writing, but it’s a worthy goal as well to just seek to share knowledge – a fact I sometimes forget to my detriment. If you’re just sharing knowledge, though, try to share some of the context where you got that knowledge as well – context is everything, and sometimes this context will make it easier to give credit where it’s due, especially when that credit is experience gained as part of a community or culture. It will also, incidentally, probably make the knowledge more useful.

  4. Boost the voices of others when they say something that you feel is worthwhile. Giving credit for the ideas that inspire you is great when it comes to the piece that has been inspired, but maybe even better if you do it every day.

When it comes to that last point I feel rather remiss. I think I feel hesitant to try to boost other people’s voices because I usually feel that I don’t have much voice of my own, and that it feels weirdly presumptuous to do so, but I definitely should get over it. In the spirit of trying to do better, here’s a few voices that I think have had a particular influence on me:

This current reflection, in particular, I started writing as a consideration of Liz Ryerson’s recent thoughts on the economy of ideas in games writing. She’s a particularly incisive voice in critique around games culture, though she’s now distanced herself from that scene. Also, I think her game Problem Attic is quite interesting, particularly in how it harnesses the mechanics of glitchiness into metaphor, and it has likely inspired several of my pieces.

A lot of the original inspiration for starting a blog to talk about game design came from the lectures of Jon Blow – though I often disagree with his perspective nowadays, I still feel his critiques of mainstream game design were interesting and edifying. Unfortunately I unfollowed his Twitter feed after his unimpressive and unnecessary response to the Google memo shit show, but I can’t deny that his ideas have greatly influenced my own.

I was also inspired to begin this blog by the insightful and hilarious discussions on the Idle Thumbs Podcast, where the experience of playing games is taken apart, put back together again, and extrapolated out into absurd scenarios exploring why we love ridiculous video game bullshit.

There’s probably more I should put here, but I don’t expect to solve this all at once. I’ll just try to boost more voices in the future – and, perhaps, the first step in doing that is to believe that my own voice can be heard.

Among other ways to think about games, one that I rarely hear spoken of is their capacity as attention engines. Among the many social and emotional needs we have as human beings is our desire to be heard – even more than to say anything specific, we just yearn to be able to jam a flag into the dirt for people to see. Having someone listen to you, even passively, can be hugely rewarding, emotionally and even intellectually, as you feel connected with the world.

Before most games, we had ELIZA. ELIZA is a simple chatbot made to emulate a psychotherapist, one who answers every question with a question. “What are you thinking of?” “How did that make you feel?” “Why do you say that?” Despite being created with an almost parodic intent – to show the superficiality of human-machine communication – people felt a genuine connection with ELIZA, and sometimes even gained a degree of therapeutic benefit. Now, in 2017, there are lots of ways to be listened to by machines. Most of us have a machine in our pocket that will answer our questions as best it is able, and will listen and respond to anything we say no matter how inane – though, perhaps, not in a very satisfying way. Still, unsatisfactory answers are not necessarily so different from what we’ve come to expect from genuine human social contact either.
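The answer-every-question-with-a-question trick is simple enough to fit in a few lines. Here's a toy sketch of the idea – not Weizenbaum's original pattern-matching script, just an illustration of how little machinery is needed to produce the sensation of being listened to (the word table and templates are my own invention):

```python
# A toy ELIZA-style responder: it never answers, only reflects the
# user's statement back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

TEMPLATES = [
    "Why do you say that {}?",
    "How does it feel that {}?",
    "What makes you think {}?",
]

def respond(statement: str) -> str:
    # Swap pronouns so "I am X" comes back as "you are X".
    words = statement.lower().strip(".!?").split()
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    # Pick a template deterministically, keyed on length, so the bot
    # feels varied without any real understanding.
    return TEMPLATES[len(words) % len(TEMPLATES)].format(reflected)

print(respond("I am tired of writing"))
# What makes you think you are tired of writing?
```

That's the whole engine: no comprehension, just reflection – and yet it's recognizably the same loop of speak-and-be-acknowledged that the rest of this piece is about.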

A lot of what we want from games is for them to just respond to what we say to them. A game lives or dies on its ability to react to us, to listen to what we are trying to say. Because real artificial intelligence is still a very long way away, our methods of communication within games are usually greatly restricted, so that the game can react in a satisfying way. A game’s controls are the language we use to speak to it: Frustration ensues when a game misunderstands what we are trying to communicate, or when it doesn’t allow us to communicate the thing we desperately want to. We describe these sorts of problems as control issues, which I suppose says as much about us as it does about them.

This is what people really want a lot of the time when they ask for non-linearity. They don’t care about replay value and they don’t care about getting the sex scene for the character they like, they just want the sensation that they are being listened to. They want the video game equivalent of someone nodding along and saying “Mm, wow. Interesting. Huh. I see.” That those multiple branching paths and endings and romantic partners are available is chiefly valuable because it reifies that sensation, makes it feel solid and responsive – because there is, indeed, someone listening. And, perhaps, there was, 18 months earlier, a designer who listened to them by way of the player-proxy voice who lives in their head, the one they design for.

A little while back the game Passpartout: The Starving Artist became a small-scale hit. Passpartout is a game where, playing as the titular starving artist, you paint using an extremely simple art program, and passersby choose to either buy your paintings or not, occasionally giving explanations as to what they like or don’t like about the picture. Now, the engine for evaluating these paintings is seemingly pretty simple, apparently rewarding the artist more for time spent than for technical skill, but it serves its purpose, effectively convincing the player that the game is paying attention to what they are creating, that they’re not just creating into a void – which is all too often how it feels to be an artist.

Unfortunately, Passpartout is quite similar to a prototype made by Jon Blow called Painter. He released this prototype for free on his site, and in talking about it he described it as a failed prototype because it failed to realize his vision of a strategy game where you create paintings to appeal to different gallery owners and curators and achieve success. Some statements he made on Twitter suggest that he was dismayed that a game so similar to his failure could be a success – but there was no actual reason for the Painter prototype to be a failure except that it didn’t achieve the vision he’d had in mind. The element of trying to appeal to tastes was never actually very interesting. What was interesting was making a painting and having it be seen and acknowledged, being told that something you’d made had worth. All the judging algorithm had to do was reliably distinguish the paintings you’d worked hard on from the ones you’d hastily crapped out, and evaluate them accordingly, just to ensure that you knew it was paying attention.
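A judge of that kind doesn't need to evaluate aesthetics at all – it only needs rough proxies for attention spent. Here's a hypothetical sketch in that spirit; this is pure speculation about what such an algorithm could look like, not Passpartout's actual code, and all the thresholds are made up:

```python
# A hypothetical effort-based judge: score a painting by proxies for
# time and care, not technical quality.
from dataclasses import dataclass

@dataclass
class Painting:
    seconds_spent: float
    stroke_count: int
    colors_used: int

def effort_score(p: Painting) -> float:
    # Cap each proxy at 1.0 so grinding a single axis can't dominate.
    time_part = min(p.seconds_spent / 120.0, 1.0)
    stroke_part = min(p.stroke_count / 200.0, 1.0)
    color_part = min(p.colors_used / 8.0, 1.0)
    return round((time_part + stroke_part + color_part) / 3, 2)

dashed_off = Painting(seconds_spent=10, stroke_count=5, colors_used=1)
labored = Painting(seconds_spent=300, stroke_count=400, colors_used=6)
print(effort_score(dashed_off))  # 0.08
print(effort_score(labored))     # 0.92
```

Crude as it is, a scorer like this is enough to tell the hasty crap from the labored-over canvas – which, per the argument above, is all the acknowledgment loop actually requires.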

This may seem fake or trivial, but this loop of the player communicating something and the game responding is the core of what a game is. It doesn’t need to be a real or detailed response, it just has to be real enough to show the player that someone, or something, is listening.

A game is a collection of symbols and rules for how those symbols can interact with one another. I described these as being obstacles and tools, one of which defines the parameters of success and the other of which is used to navigate through those parameters – but the division between the two isn’t necessarily as harsh as I implied. Sometimes obstacles and tools are the same – that is, sometimes you can use one obstacle to navigate another.

There are a number of interesting examples of this – you could jump off of an opponent’s head to reach a higher platform or, inversely, you could jump to a higher platform to cut off an opponent’s pursuit. You could lead a group of enemies from one faction to encounter a group of enemies from another faction so they start fighting each other, or you might reposition a trap to activate another trap safely. You might even intentionally jump on an exploding trap to boost yourself to a normally unreachable rooftop. In addition to obstacles sometimes behaving like tools, on occasion a tool will take on the characteristics of an obstacle. The most common example that we encounter in games is probably the hand grenade, which is both an extremely potent weapon and also a convenient way to quickly separate the component parts of your bloody carcass.

There’s no reason for tools and obstacles to be different at all, in fact, when they all just boil down to being a set of physical properties and behaviors. There’s no reason for an object to know what its purpose is, only for it to behave in accordance with a set of instructions. In Spelunky, for example, you can pick up and throw almost everything in the game – rocks, pots, laser turrets, unconscious yeti – and, once an object has been thrown, it affects the environment in pretty much the same way no matter what it happens to be – well, unless it happens to be explosive, in which case its effects will be more dramatic. It’s actually fairly commonplace to throw a rock to set off an explosive only to have the rock blasted back in your face by the explosion and hit you… into a yeti who throws you onto a collapsing platform which falls on top of a mine which blasts you into a pit.
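One way to picture this purpose-agnosticism in code: every entity is just a bag of physical properties, and the throw-resolution logic doesn't care what it's handling. This is an illustrative sketch only – not Spelunky's actual code, and the names and numbers are invented:

```python
# Purpose-agnostic objects: a rock, a pot, and a bomb all go through
# the same throw logic, differing only in their physical properties.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    mass: float
    explosive: bool = False

def throw_impact(e: Entity, speed: float) -> dict:
    # Resolve a thrown entity hitting something using only its
    # properties; no "this is a weapon" / "this is scenery" branching.
    return {
        "damage": e.mass * speed,
        "explodes": e.explosive,  # the one property with dramatic effects
    }

rock = Entity("rock", mass=2.0)
bomb = Entity("bomb", mass=1.5, explosive=True)
print(throw_impact(rock, speed=5.0))  # {'damage': 10.0, 'explodes': False}
print(throw_impact(bomb, speed=5.0))  # {'damage': 7.5, 'explodes': True}
```

The emergent chain reactions come for free: because the rock never knows it's "supposed" to be a weapon, it can just as easily be blast debris, a trap trigger, or the thing that knocks you into the yeti.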

When items are agnostic of their origins and purpose, surprisingly intricate interactions become possible. In many other games, rocks would only be useful for hurting opponents or for setting off traps – in many other games mines would only be activated by the player, enemy bodies would be inert, explosions would only affect things which could be damaged. It’s fascinating what can be achieved when we allow items to be exactly what they are, instead of designing them specifically to fit a particular role in the game.

Another interesting example is Phantom Brave, a tactical RPG from Nippon Ichi Software: In this game, the main character Marona is the only living character you control, with all of your teammates being ghosts she has befriended and can summon to help. The catch is, to summon a ghost she has to have something to summon them into, which can be something as ordinary as an everyday bush or rock or as ornate as a cursed sword. Whatever they get summoned into, they gain properties of that object – for instance, if you summon someone into a tree they’ll probably be more vulnerable to fire damage. The other catch is that literally any object on the battlefield, including other characters, can be picked up and used as a weapon. These systems, interacting with others such as a system where every item comes with a stat-modifying adjective before it, enable some really strange and intriguing strategies. Sometimes it’s necessary to pick up one of your characters and toss them across a pit, pick up an enemy and beat up his teammate with his body, or summon a weak character holding a really nice rock you’ve found and have them drop it there, so you can summon a more important fighter into it. Later in the game some enemies are even scripted to start picking up and throwing powerful items off of the stage to keep you from summoning allies into them.
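The confinement idea – a summoned ghost inheriting modifiers from its vessel – can be sketched in a few lines. These stats and numbers are hypothetical, not the game's real formulas; the point is just how small the rule is relative to the strategies it opens up:

```python
# Sketch of a confine-style summon: each of the ghost's stats is
# multiplied by the vessel object's modifier for that stat
# (defaulting to 1.0 when the vessel doesn't care about it).
def summon(ghost: dict, vessel: dict) -> dict:
    return {stat: value * vessel.get(stat, 1.0)
            for stat, value in ghost.items()}

ghost = {"atk": 10.0, "fire_resist": 1.0}
tree = {"atk": 1.5, "fire_resist": 0.5}  # sturdy club, but it burns
print(summon(ghost, tree))  # {'atk': 15.0, 'fire_resist': 0.5}
```

Because the vessel is just another dict of properties, anything on the battlefield – bush, sword, or nice rock – plugs into the same rule, which is exactly the kind of functional agnosticism discussed below.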

In some ways, this functional agnosticism of game objects is the default – when I say a game is a collection of symbols and rules, I’m not just speaking conceptually, but also in terms of how games are made, of the internal programming logic that drives their operation. So, one might ask, if this is the natural state of the game, and if it creates so many interesting and emergent situations, why do so few designers allow their games this kind of leeway? Unfortunately, when you increase the possibility space of a game this way, you also increase the odds of something going haywire – of an uncontrolled feedback loop or an absurd dominant strategy that completely undermines the intended game design. Part of why this openness was possible in Spelunky and Phantom Brave is that these are very tightly controlled designs – Spelunky through having a small set of game elements with only a couple of methods of interaction and a straightforward, minimal progress path, and Phantom Brave through presenting a restrained and traditional tactical RPG interface. For a look at what happens without this sort of restraint, consider Dwarf Fortress – an important and fascinating game, to be sure, but one that is not easily accessible to most audiences, and that produces scenarios which, though they make amazing stories, frequently represent bizarre and illogical breakdowns of the symbolic logic as the system recursively interacts with itself.

Still, it’s worth considering: How am I restraining this object, making it behave in a more constrained way than it has to – and a less interesting way than it could? How would it affect the overall design if these constraints were to just, maybe… disappear?