Archive

Last month I had just started working on a system for transforming the color of sprites. I finished it pretty early on into this month, and immediately afterward made a post-processing effect which applied the same transformation across the entire screen. Although color matrices can do some really interesting and nearly arbitrary things to process one set of colors into another, for now I settled for creating a set of interfaces for the most common effects I’d need: Hue, Saturation, Lightness, and Brightness. If you’re familiar with these sorts of operations you might have expected “Contrast” to be listed there, and it was at one point – however, I realized its effects were easy to approximate with the lightness and brightness sliders, so I removed it.

Perhaps a word of explanation about what a color matrix is would be useful. A color matrix is a way of mixing the color channels into each other – each output channel becomes a weighted sum of the input channels – and then offsetting the result. So, for instance, I could rotate the colors by making the red channel equal to the green channel and the green channel equal to the blue channel and so forth, or I could zero out the blue channel to tint the image yellow, or I could add to all the channels to increase its overall lightness. With that in mind, the operations do the following:

  • Hue: Rotate the RGB channels by degrees from -180 to 180
  • Saturation: Multiply the RGB channels while offsetting them enough to keep the overall average value the same
  • Lightness: Add to the RGB channels, moving them towards white or black
  • Brightness: Multiply the RGB channels, making colors more vibrant
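To make that concrete, here’s a rough sketch of how such a matrix might be assembled from those sliders in Unity-style C# – the class and method names are purely illustrative, not the actual implementation, and hue is left out since it’s a somewhat hairier rotation about the grey axis:

```csharp
using UnityEngine;

// Illustrative sketch only: builds a matrix and offset such that
// output.rgb = M * input.rgb + offset, from the saturation/brightness/lightness sliders.
public static class ColorMatrixSketch
{
    // Standard luminance weights, so desaturating keeps the overall value roughly the same.
    static readonly Vector3 Lum = new Vector3(0.299f, 0.587f, 0.114f);

    public static void Build(float saturation, float brightness, float lightness,
                             out Matrix4x4 m, out Vector4 offset)
    {
        m = Matrix4x4.identity;

        // Saturation: blend each channel between its luminance (0) and itself (1).
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                m[row, col] = Lum[col] * (1f - saturation) + (row == col ? saturation : 0f);

        // Brightness: multiply the RGB channels.
        m = Matrix4x4.Scale(Vector3.one * brightness) * m;

        // Lightness: add a flat offset, pushing the image towards white or black.
        offset = new Vector4(lightness, lightness, lightness, 0f);
    }
}
```

The shader side then just multiplies each pixel’s color by the matrix and adds the offset.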

This is a pretty powerful set of commands and is probably all you’d need under most circumstances. Nevertheless, it is tempting to add an interface for setting the matrix directly at some point, which would be much clunkier but would allow for more unusual effects such as negative imaging, mapping the image’s transparency to its RGB brightness, or performing operations on only certain color channels. I’ll likely add it either when I need that functionality or if I decide to sell this on the asset store.

Another feature I added, one I’m not sure I’ll ever use but which seemed like an obvious addition, was a mask texture that controls to what degree the color matrix is applied. I have no idea if or when this will become significant, but the idea of modifying both this texture property and the color matrix values simultaneously opens the door to some really unusual and powerful effects. Another possibility I’m tempted to explore in the future is loading a bunch of color matrices into a 3d texture and applying that as a post-processing effect… but I digress. The range of possibilities is exciting, but I need to keep moving.

This is something I could clearly go on about at length, but in practice it only took me a week or so of work and is more or less complete now. After completing the color transform feature, I was feeling stressed out by all of the work I’d been doing on the game and decided to take a week off… sort of. I ended up spending it working on music composition, something I’d been missing. This isn’t strictly part of the DevBlog, since this piece won’t feature in the game, but I’m quite pleased with how it turned out and felt like posting it here anyway.

I’d also, at the end of last month, finished implementing a system for changing between rooms in the game. This system was more or less functional but not really tested, and I’ve been developing it further over the last month. Aside from debugging, I’ve made it so each room has a set of appearance options – default lighting, background image, post-processing effects – and whenever you move between rooms the game transitions between them. I went a step further and made these values also change inside the editor depending on which room you’re currently looking at, so I can see the appearance modifications without starting the game and running to the room.
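The shape this takes is roughly the following – a sketch with illustrative field names, not the actual code – where each room carries an appearance block that gets applied both at runtime and, thanks to ExecuteAlways, in the editor:

```csharp
using UnityEngine;

// Illustrative sketch: per-room appearance settings applied when the room becomes active.
// [ExecuteAlways] plus OnValidate means the values also take effect in the editor,
// so the room can be previewed without entering play mode.
[ExecuteAlways]
public class RoomAppearance : MonoBehaviour
{
    public Color ambientLight = Color.white;
    public Sprite backgroundImage;
    public SpriteRenderer backgroundRenderer;
    [Range(0f, 1f)] public float postProcessStrength = 1f;

    void OnEnable()   { Apply(); }
    void OnValidate() { Apply(); }

    public void Apply()
    {
        RenderSettings.ambientLight = ambientLight;
        if (backgroundRenderer != null)
            backgroundRenderer.sprite = backgroundImage;
        // The screen-space color matrix and any other post-processing values would be pushed here.
    }
}
```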

I spent some time at this point making some improvements to my work-space. It turns out that for $70 in monitor mounts you can drastically increase your available desk-space, and also just make a working environment feel generally more professional. For whatever reason, I’ve never felt as much like I was living in the future as I did after setting up my room with monitor mounts. Unfortunately for some reason my phone camera is incredibly terrible and this was the best picture I could take (so much for the future), but you get the idea.

In between all these other things I was making slight tweaks and improvements to the test rooms, shifting the terrain around, improving the visual effects, adding a mist particle effect and so forth.

I’m very pleased with the overall appearance of this section so far. I am not, however, pleased with its performance: There’s a little bit of choppiness in the first couple of areas, but not enough to worry me too much. If I move further on, though, into areas with larger particle systems, performance becomes truly terrible. At first I thought this was due to core issues with the way the Unity particle system renderer worked, and started developing a replacement – but once the supposed replacement was a ways along, I tested it against the basic particle system and found its performance was actually slightly worse. You know, the same sort of performance hit you’d expect from a system doing basically the same thing as what was already there, but made by someone with less internal access and less experience with this kind of programming. Replacing the particle renderer was a bust, but all hope is not lost: Further brief diagnostics revealed that the worst issues seemed to disappear if I disabled my custom particle animation effect, which could mean either that that effect is far more processor-intensive than I had believed, or that enabling it forces the particle system into a state where it can’t operate as efficiently. Between that, the possibility of shader efficiency improvements, and the potential to replace some of the particle effects with cheaper and simpler animated textures, I think I’ll be able to resolve this to my satisfaction… eventually.

I’m going to be out of town for the next couple of weeks but will probably still have plenty of time to work on the project, so the question is how much of this work I can effectively do on my old laptop. Considering that the areas with really bad performance drop down to something like 2 frames a second even on this fairly recent desktop, it may be infeasible to tackle these performance improvements right away, just due to the amount of time I’d have to spend waiting for clicks to register while testing changes. Still, I’ll give it a shot – and even if it doesn’t work I still have a ton of animation work to do, which I should have no problem doing on that equipment. Even if that, too, proves infeasible for some reason, I can plan out what I need to do for these first few areas – even if I have to do it with pen and paper.

If you’d like to help support this project or my writing, please consider supporting me on Patreon. Support at any level lets you read new posts one week early and adds your name to the list of supporters on the sidebar.

Decades ago, a friend told me a logic problem that had a big impact on how I see the world. Two workers emerge from the coal mines after a long day of work: One’s face is covered in soot, the other is mostly clean. The clean one wipes his face, the one covered in soot doesn’t. Why?

The answer, of course, is one of perspective. The clean miner sees his friend covered in soot, assumes he must be the same, and wipes his face – likewise, the dirty miner sees his friend and assumes he must have also come out clean. This took me a little while to puzzle out at the time, because from my omniscient perspective I can easily envision what they look like – but I have to work harder to understand what they think they look like. I have all the information, but I have to put it into a certain context before I can actually understand it.

When we learn something new, the context in which we encounter it determines how we categorize it. We determine information’s veracity and meaning based on who communicates it to us, why we believe they are doing so, and how it fits into what we already believe. This is why people from different cultural and political backgrounds often seem to have completely different understandings of the world: The same information conveyed to two different people takes on a drastically different meaning based on the existing context of their lives.

This is the basis of most surprise in art. We are presented with information, and then later on presented with new information which recontextualizes the things we’ve already learned and forces us to reevaluate them. This is what a twist or surprise ending is. This is what most jokes are. This is how characters get developed and fleshed out – and, even in less narrative forms, we achieve moments of the sublime by carefully shifting contexts. We hide details in paintings which completely change the meaning of the scene, we shift harmonic chords under the same melody to completely change its tone and impact. These shifts in meaning extend into the past and future, and when we come to understand the situation in a different way because of them we carry that understanding into interpreting both the antecedents of the situation and what will proceed from it.

But we’re all just mineworkers out here, and our understanding of the situation is necessarily limited. When our friend wipes his face, despite it being clean, what should we infer from that? Change in the world sometimes happens so slowly that only later, comparing it against the context you’ve come to understand it in, do you realize your understanding of how things work must be incomplete. Sometimes we ourselves change over time, slip out of the hole we’ve dug for ourselves and, emerging into the sun, must be reborn and learn everything all over again.

If you enjoyed this essay, please consider supporting me on Patreon. Support at any level lets you read new posts one week early and adds your name to the list of supporters on the sidebar.

We’re at the end of the first official month of development of the current version of EverEnding. That’s a lot of qualifiers, but it’s still a milestone of sorts. Did it go well, you may ask? Did it go poorly? Kind of in-between!

Most of this month was focused on creating the intro sequence, but before I started in on that I wrapped up what I was working on at the end of last month, the crouching animation, even though it really doesn’t need to be done until I start on the first chapter of the game.

Once I have my teeth in a problem, I really don’t like to abandon it until I’ve solved it – a tendency which kind of backfired later on. Having accomplished that, however, I started developing the different graphical elements of the intro area, making a dynamic and playable version of the first test screen, which was previously just a static image placed in the scene. Here’s a comparison of what it looked like before and what it looks like now:

I started with the tree, which I’ve always liked the look of, but which was cut off at the edges on this screen since I’d originally drawn it on a paper pad and run out of space. I extended the top of the tree and added layered systems of branches, then drew several different leaf clumps which I spawn in-game using a particle system. It turns out most of these leaves aren’t actually visible from the main intro area, but I think I’ll probably pan the camera down to this area at the beginning, which will show off the leaves and branches nicely. There are some aspects I still like better in the old version, such as the overall level of saturation and contrast on the tree and the gradient in the sky background, which I’ll probably try to bring back as I develop these assets further.

I quickly drew the night sky background, which was mostly scribbling, and somewhat less quickly constructed fill and edge textures for the dirt, then built the terrain using Unity’s SpriteShape tool. I had been concerned about the quality of results this tool would provide, but I honestly couldn’t be happier with how the ground segments turned out. One point of interest is that SpriteShape, used carelessly, can make the edges of the collision area misleading, so I tried to make it clear via shading where the actual collision edge of the ground surface is.

Previously the main ground area had been grassy, but I figured with the special grass particle effect I developed a short while back I might be better served by leaving it as dirt and then having the grass effect handle all the grass rendering. I may reconsider this at a later date – and the grass effect itself is surely still subject to change – but the dirt ground asset will be useful later regardless. I haven’t created any rocks to replace those in the initial version – they’re not vitally important, but I probably should, especially since I’m certain to need rock elements to place later anyway.

Probably the most time this month was spent wrestling with the rendering system. While the initial simple sprite look was appealing, I couldn’t and can’t stop thinking about how incredible it could look with some extra post-processing and shadowing effects. By copying a shader I didn’t understand well, I could get these effects – but only at the cost of clipping away the transparency at the edges of the sprites in a fairly hideous way, one which will cause much more severe problems as I add more assets with transparency effects later on. I still haven’t figured out a perfect solution to this, and it’s a rabbit hole I could get deep into – after all, people build entire careers specializing in this sort of graphics programming – but I’ll probably keep pursuing it, both because I think the results could be worth it and because I find this kind of work inherently interesting.

The crux of the issue is that in order to properly post-process an image with certain effects, you need to draw to the ‘depth texture’. In order to make objects draw one in front of another, you need to draw to the ‘depth/z buffer’, which it turns out is a completely different thing… And, in order to draw transparent objects, one would normally avoid drawing to either the depth texture or the z buffer, because doing so would overwrite information about objects behind the transparent object – objects which you still want to draw, because your object is transparent! In layman’s terms: In order to know what something should look like, the game needs to know how far away it is, but it can only remember one distance per pixel. So, if we’re holding a piece of semitransparent glass in front of an apple, how far away is the thing it should be drawing? Is it the distance from the glass to the camera, or the distance from the apple to the camera?

The only solution that seems viable to me is to set a transparency threshold: If it’s barely transparent at all, like a piece of stretched rubber, then it counts for depth, and if it’s very transparent indeed, like glass, then it doesn’t. However, just knowing this as an algorithm isn’t enough, because you still need to know how to explain to the graphics hardware what behavior you want – and that’s what I’ve been struggling with, because it has very particular ideas about what information you can feed it and how.
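The rule itself is easy enough to state in code – here’s a hypothetical sketch of it as a material setup in C#, assuming a shader that exposes a _ZWrite toggle and a _Cutoff alpha-clip threshold (in the shader proper this becomes a per-pixel clip against the same threshold):

```csharp
using UnityEngine;

// Hypothetical sketch of the threshold rule described above: a sprite that is nearly opaque
// writes depth (and so participates in depth-based post-processing), while anything more
// transparent than the cutoff stays out of the depth texture / z buffer entirely.
public static class DepthWriteRule
{
    public static void Configure(Material mat, float peakAlpha, float cutoff = 0.9f)
    {
        bool nearlyOpaque = peakAlpha >= cutoff;

        mat.SetInt("_ZWrite", nearlyOpaque ? 1 : 0);          // write depth only if nearly opaque
        mat.SetFloat("_Cutoff", nearlyOpaque ? cutoff : 0f);   // alpha-clip threshold used by the shader
    }
}
```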

I’m not sure how well this problem is coming across in text, so hopefully next month I can just show you a picture of the working version to illustrate what I mean.

I also started getting sound and music implemented. Now, the intro’s sound and music needs are pretty minimal, basically just requiring one music track and one long sound effect to be played, but I started seriously considering what the music system would need to look like to handle future problems. Even in the very first playable area there’s some degree of adaptive music, and later areas have other types of dynamic music planned, from transitional segments to cross-fading alternate tracks and more as I think of them. Altogether, these represent a not-insignificant programming task – and, in Unity, require some rather awkward queuing and loading of audio tracks. I decided it was foolish to try to create a music system like this when there are already incredibly powerful tools made for this specific purpose, so, after a quick assessment of its licensing options, I began integrating FMOD, an adaptive audio middleware for games, into the project. FMOD is free for small-scale projects like this, and in addition to adaptive music it provides great tools for mixing sound effects together and for dynamically tweaking sound parameters based on arbitrary values – so, for instance, one can adapt music not merely by creating alternate tracks, but also by bumping equalizer and filter parameters based on in-game actions. FMOD also provides an actual tool for visualizing these mixes and setting up musical transitions, which is great, because manually plugging values into an XML script on my first attempt at this was a real drag.
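For the curious, the code side of that integration ends up being pretty small – something like this sketch, where the event path and parameter name are made-up examples rather than anything actually in the project:

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Minimal sketch of driving an adaptive FMOD event from gameplay code.
// "event:/Music/Intro" and "Intensity" are hypothetical names used for illustration.
public class IntroMusic : MonoBehaviour
{
    EventInstance music;

    void Start()
    {
        music = RuntimeManager.CreateInstance("event:/Music/Intro");
        music.start();
    }

    // Gameplay code can nudge the mix at runtime, e.g. opening a filter or bringing in a layer.
    public void SetIntensity(float value)
    {
        music.setParameterByName("Intensity", value);
    }

    void OnDestroy()
    {
        music.stop(STOP_MODE.ALLOWFADEOUT);
        music.release();
    }
}
```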

Finally, I started considering how text was going to work. This seems like it should be a freebie – there are some pretty standardized ways of handling text in games at this point – but for this project I want a sort of living storybook feel, with text appearing on the page as you encounter it. First, I needed to figure out how to just get text on the screen, which ended up being fairly easy since Unity includes TextMeshPro, a great solution for exactly this problem in 2d and 3d spaces. However, when text sits on a background that could be any color, plain black print doesn’t really cut it, so I spent quite a while looking through different fonts and rendering styles until I found a couple that worked for the two main ‘voices’ I need at the start of the game – though I’m still undecided whether these parts will also have voice acting.

After this I created a simple class to fade in the text over time – and then worried it should have been even simpler, since for some unfathomable reason I’d made the text fade in over a set total amount of time instead of at a predictable rate, meaning it would be nearly impossible to sync up between fields of different length. As soon as I started thinking about all this, though, I started thinking about the way it should be, about what the optimal interface and feature set ought to be for a tool like this. This is a trap! This exact behavior is why I was talking in the last devblog about trying to treat this project as a series of game-jam-esque sprints, and why I said my tendency to fixate on problems causes issues: this is the sort of thing you don’t do in a game jam. Not only is it a far more refined solution than is immediately needed, but trying to create it pushed me into writing code for Unity’s internal UI system again, which is an invariably soul-crushing exercise since getting anything done in there is such a finicky and arbitrary mess. I realized all this after a couple of days, and left the text fading in a state where it’s not quite as perfect as I’d want it to be if I were selling it as a product – but still quite sufficient for my immediate needs.
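For reference, the ‘predictable rate’ version is only about this much code – a rough sketch with illustrative names, revealing characters at a fixed speed so that fields of different lengths stay in sync (a simple character reveal standing in for the softer fade the real thing uses):

```csharp
using TMPro;
using UnityEngine;

// Rough sketch: reveal text at a constant characters-per-second rate rather than over a
// fixed total duration, so short and long fields advance at the same predictable pace.
public class TextReveal : MonoBehaviour
{
    public TMP_Text text;
    public float charactersPerSecond = 20f;

    float visibleCount;

    void OnEnable()
    {
        text.ForceMeshUpdate();          // make sure textInfo.characterCount is current
        visibleCount = 0f;
        text.maxVisibleCharacters = 0;
    }

    void Update()
    {
        visibleCount = Mathf.Min(visibleCount + charactersPerSecond * Time.deltaTime,
                                 text.textInfo.characterCount);
        text.maxVisibleCharacters = Mathf.FloorToInt(visibleCount);
    }
}
```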

One could reasonably ask: Why focus on creating an introduction before completing any playable areas? I’ll preface the explanation by saying that I don’t believe this is the correct approach, and might in fact argue that it’s a very incorrect approach. Generally speaking, I would prefer to start with the core gameplay, build up playable areas, and expand out from there. However, I’ve already essentially created a gameplay prototype with the initial AIR version of the game: I don’t know if the gameplay is going to work on the macro level – i.e. whether engagements with enemies will be interesting and whether the overall flow of play will hold up – but the prototype has been enough to convince me that the simple act of moving around the world will be satisfying. In addition to these basic gameplay systems, though, there are narrative systems – usually treated as an afterthought, but which also need to be developed and tested. Developing the intro will a) create a distinct chunk of the game, albeit a relatively unimportant one, b) force me to create the structure of the narrative systems, and c) create a free-standing piece that should hopefully build enthusiasm for the project – both for myself and for potential audiences.

Because I got so focused on trying to get specific tasks completed correctly, instead of merely functionally, I didn’t reach my original goal of completing the intro by the beginning of this month – but I don’t think I’m actually that far off. I had originally conceived of this beginning bit as being largely just text, but with a bit of reflection I’ve realized that would be an incredibly boring and tedious start to the game, so I now have a sequence planned where the camera slowly pans down to the intro screen, across text displays, and with cuts to certain illustrations I have yet to make. This is conceptually still pretty simple – and honestly probably doesn’t sound very exciting, described in this bare-bones manner – but should be more exciting and intriguing than just a few lines of suggestive dialogue, and I think with a deft touch could be really cool. The current intro music is almost 2 minutes long, which I’m starting to suspect may be too long to actually fit this sequence without messing up the timing – so I may also need to rewrite it to accommodate this.

Thus, to complete the intro, I need to make 3 illustrations and a couple more minor pieces of art, possibly tweak some assets, animate the camera transitions and text fade-ins, add some sound effects, and then edit the music track to fit. I think all of this is achievable within one week. Once it’s complete I’ll start in on the first area – which, at first, will mostly be animation work while I complete and implement all of the standard character animations.

In all honesty, my mood has been all over the place recently – for obvious reasons. It’s nice to have something concrete to work on, like a ship in a bottle, while being otherwise locked in place.

If you’d like to help support this project or my writing, please consider supporting me on Patreon. Support at any level lets you read new posts one week early and adds your name to the list of supporters on the sidebar.

We’re at the end of another month. For February, I have made… an album!

I honestly didn’t think I was going to have something ready by end of month that I could call an album. The first couple weeks were not very productive for several reasons, ranging from inefficient work methods to fatigue after completing January’s project to the release of Apex Legends, but after that I was able to fairly quickly produce four tracks which I ended up really liking (Cut Adrift, A Letter to the Living, The Forgetter, and Outside of Reason). I also wrote most of another track, but ended up deciding not to finish or include it because it wasn’t quite as strong as the rest.

I combined these four tracks with four more I wrote several years ago (Every Scratch in the Wood, The Incandescence of Your Filaments, They Only Remember Silhouettes, and Amidst her Glorious Device), then pulled in a couple more tracks which were mostly done and just needed a bit of remastering/arranging (My Heat Cuts the Ice, You’d Be Surprised What You Can Accomplish), and all of a sudden I had ten tracks and 50 minutes of music!

I think it all hangs together pretty well, though I’m a bit hesitant about my choice to have My Heat Cuts the Ice as the first track – the slow start of it is well-suited to the position, but it’s definitely not the strongest track on the album, so perhaps I should have cut it in order to put my best foot forward. I suppose if I change my mind someday I can always recut it and include it as a bonus track. In the meanwhile, I just have it so Outside of Reason is the featured track even if it’s not the first, since I think it’s generally more appealing and expect the album as a whole will make a better impression if that’s the first track people hear.

With this project behind me, next month is going to be focused on creating a relatively small and simple 2d platformer in Unity. My goals for the next project are twofold, and perhaps threefold depending on how things work out: First, to create a game that feels more complete and polished than the quick prototypes or ambitious experiments of previous months. Second, to understand how Unity handles 2d development and what’s possible within that context, with the idea of possibly porting EverEnding to it in mind. Third, I’m going to be asking around to see if other people want to work with me on this project, so it may be an opportunity to learn how to work better with a team.

Another month has gone by, and though a short vacation, a nasty little cold, and a number of other minor distractions got in my way, I still managed to make a little bit of progress.

First, and most importantly, I put quite a few hours into writing the music for the first boss of the game. I may have gone a little bit overboard on this one: The concept I wanted to pursue was a track with multiple phases that mapped to different parts of the boss encounter, bouncing back and forth between them until finally reaching a conclusion. I’m not sure if I can possibly create a boss encounter that stays interesting long enough to accompany this track, coming in at almost 9 minutes long, but it will be fun to try once the rest of the chapter is complete.

The phases of the track are:

0:00-1:47 Intro
1:47-4:13 The Chest
4:13-6:16 The Mask
6:16-7:49 The Heart
7:49-8:40 Conclusion

This one honestly got quite a bit out of hand, and I spent a lot more time on it than I’d originally expected to, but I’m quite pleased with how it turned out. I also just enjoyed doing music work again! I’m going to carry on composing the soundtrack even though I’ve effectively completed all the tracks for the first chapter of the game, which is the part I’m focused on finishing. The reasons I’m going to continue doing music work, despite otherwise attempting to contain my efforts to this first chapter, are several-fold: first because, as mentioned, I like making music and want to do more of it; second because, if I can’t make this game in a timely fashion, I can damn sure make its soundtrack, which is a discrete sub-creation I can be proud of in its own right; and third because I find music so compelling that I think just having the soundtrack will motivate me to finish the rest of the game. There’s also a fourth, more pragmatic reason: Inspired by UNDERTALE’s soundtrack, I’m really trying to integrate motifs from different characters and locations into tracks with a narrative connection to those characters and locations. It’s going to be really hard to do that until I know what those motifs, for later parts of the game, actually are! I’m not really going to be able to consider chapter 1’s soundtrack complete until I’ve written the rest of the soundtrack and know better what my overall thematic tools and goals are.

Anyway! Aside from music, I’ve been working on a few things. I’ve been feeling my way around programming the main narrative component of the game, the storyteller. This is going to be something pretty similar to what Supergiant does in their games with an ongoing narration element, except I would like to integrate these narrator lines a little bit more closely with the music, syncing the lines up with particular parts of the track and so forth. Additionally, I want to have text appear in the world synced with the audio, so it’s a bit like playing a storybook. Figuring out how I’m going to pragmatically handle the synchronization of these elements and making them play nice with a player who may or may not be interested in the narrative taking place is going to be a challenge, but I’m getting close to having a simple version ready to test so that I can iterate on it.
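The simple version I’m building towards is, at heart, just a list of timestamped lines checked against the music’s playback position – something like this sketch, where the names and the AudioSource-based timing are placeholders (the real thing would more likely hang off FMOD’s timeline):

```csharp
using UnityEngine;

// Illustrative sketch: narrator lines tagged with a time in the track, fired as playback reaches them.
[System.Serializable]
public struct NarratorLine
{
    public float timeSeconds;
    [TextArea] public string text;
}

public class Storyteller : MonoBehaviour
{
    public AudioSource music;
    public NarratorLine[] lines;   // assumed sorted by timeSeconds
    int next;

    void Update()
    {
        while (next < lines.Length && music.time >= lines[next].timeSeconds)
        {
            // Hand the line off to the in-world storybook text display here.
            Debug.Log(lines[next].text);
            next++;
        }
    }
}
```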

I’ve also been thinking a lot about what the interface of the game is going to look like. There are really only two elements that need to be displayed under normal circumstances: The player’s current health, and how many sparks you’ve collected, which also maps to your max health. I could just have a red bar along one side of the screen, but that felt inelegant. A sphere that fills and empties like the health meter in Diablo might have been a bit more thematic, since there’s some sun/moon symbolism I’m playing with in the game, but it felt like a circle would take up a lot of screen real estate for how much info it would impart and probably wouldn’t look very good. What I’ve come up with instead is an idea that’s… actually a little bit difficult to express here. It’s basically a life bar along the left side of the screen, except it looks like an engraved stone tablet. Only a small part of the tablet is visible early on, but as you gain more power the tablet expands and you can see more of it, and the engravings on it. I can actually directly tie the health meter into the narrative of the game in what I think is a pretty interesting way. However, because you don’t gain power at a constant rate, but instead end up collecting more and more as you defeat more powerful opponents, I’m going to have to figure out a curve that reveals the tablet at a rate that’s satisfying over the course of the game. I have a logarithmic function in mind that may work well, but it will have to be tested. I’ll also need to figure out how to have the tablet build up in such a way that it feels satisfying, and ensure that no matter what its interim shape is it still gives satisfactory feedback as a health meter. This will all take a bit of experimentation, but it’s an idea I’m excited about.
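The curve I have in mind is roughly this shape – purely a sketch of the idea, with everything subject to tuning once I can see it in motion:

```csharp
using UnityEngine;

// Sketch of a logarithmic reveal: early sparks uncover the tablet quickly, while later,
// larger hauls uncover progressively less, approaching fully revealed at the spark cap.
public static class TabletReveal
{
    public static float VisibleFraction(int sparks, int maxSparks)
    {
        if (sparks <= 0 || maxSparks <= 0) return 0f;
        return Mathf.Clamp01(Mathf.Log(1f + sparks) / Mathf.Log(1f + maxSparks));
    }
}
```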

Finally, I’ve been working on the game’s first animation. I mean, I’ve already built several animations, but this is the first one that will play in the game: the player character awakening, standing up, and taking up her weapon at the very beginning. I started creating this animation, then had to start over after working on it for a few hours because my first take on it sucked. I think my second take has potential – though it’s still very rough, the motion feels good to me.

The actual sword-drawing part of the animation still needs to happen, and of course all of the detail and the tween frames need to be added, but I think I’m on the right track this time.

So, the plan for August is to finish working on these things, write the music for the first area of chapter 2 (I’ve already started), create more main-character animations, and maybe get some basic sound design in. Of course something else may capture my fancy and I’ll end up working on that, but as long as I stick to my big task list I think I can maintain forward progress.

This was an eventful month! Following my devblog post last month, I started sharing the project on a couple of game dev forums, and through a logical process which eludes me now, 30 days later, determined that a) I wanted to have the first chapter of the game complete by the end of 2017 and b) in order to do this I should create a complete task list and schedule for the project up to that point. This ended up taking me a few days, but I really feel like it was worth it. I now have, printed across 12 pages, a fairly comprehensive list of the work that needs to be done to complete the first chapter of the game. There are going to be four chapters total, so a lot of work will remain even after all this, but the scope of the work will be determined and I’ll know how much time and effort it takes to create finished content for the game. All major gameplay bugs should get eliminated through this process, and all fundamental design code will be firmly in place.

I broke the schedule up into five three-month blocks, one for the rest of this year and four for next year. Currently, for this year’s block, I have 24/53 tasks completed or otherwise resolved. I also have a few tasks which I had to add to the list and which aren’t accounted for there, as well as a few that are only partially complete, but it’s still good progress and I’m proud of how quickly I’m getting the work done. Now, I expect some future tasks to be quite a bit trickier, and I also expect many unforeseen tasks to crop up, but that’s why I’m trying to get ahead of schedule now – and why I’m acknowledging that December is likely to be so busy with other stuff that I’ll probably only be able to work for half of it.

The biggest task accomplished this month was the attack animations. All right-facing attack animations are complete – well, except for the occasional mistake or two still to be fixed, a few of which I’m noticing as I watch the attack montage below. About half of the left-facing attacks are complete as well, and they should progress more quickly on average now that I have the right-facing attacks to use as a template and so much sprite-creation practice behind me. There were a few big sticking points: I realized after mostly completing them that the original standing attack animation was a) boring and b) functionally redundant with the running attack animation. I’ve since replaced the former with the latter, but fortunately not all was lost: I was able to use the standing attack frames to resolve another issue that had cropped up. When I changed the crouching position of the right arm some time ago, I invalidated the entire swing arc of the primary crouching attack animation prototype. However, the new arm position made perfect sense for the motion of the now-unused standing attack, so I just pulled the torso from those frames into the crouch animation. I still had to redo the leg positions from scratch, but it was a nice shortcut to creating a good, expressive attack.

[Attack animation montage]

I’ve also been working on the music for the game. The first few areas largely have completed music tracks already, since I created music concurrently with them to figure out the tone I was going for, but as I made that music something like five years ago there are a lot of rough edges in those old tracks, and they’re not necessarily well set up to work with the systems I want to have in place for the game. That is to say, I’m not planning on just creating a loop for each area, but on having some degree of adaptive music based on where specifically the player is in an area and what the game state of that area is. Thus, I’ve been remastering the old tracks, making small composition tweaks, and rearranging the parts to make jumping between them work better. Fortunately, I found that by setting timers and jump points I could very elegantly skip between segments of an adaptive track to switch playback to a new section. Less fortunately, I discovered that a track with tempo changes and heavy use of delay effects is probably the least optimal type of track to feed into such a system. Still, it’s functional for now, even if some of the track transitions sound a bit odd. I’ve at least proven out the basic concept and built the architecture: If I need to change things up later to resolve these issues, it should be quite feasible.

[Background art: hills]

I’ve also been working on tilesets and backgrounds here and there. I made this background very desaturated to create a clear delineation between background areas and gameplay areas, and also to reinforce the misty feel I was going for, but I worry a bit about how well it will work with the extremely vivid and saturated caves background I made before. I really love playing with color in unexpected ways, like I did with making the distance in the caves background a dark vivid red, but consistency is important as well. In the end, that’s something I can only figure out by getting the assets into the game and playing around with them and seeing what works. Really, though, changing palettes is incredibly easy compared to creating new assets, so it probably won’t be a big deal at any rate.

In addition to the backgrounds, I’ve created a number of the transitional tilesets necessary to blend different tilesets together. Now I can have grass tiles next to stone tiles next to dirt tiles without them looking like artificial grid-based garbage. There are still some gaps in there – tiles I’ll need that I haven’t yet noticed I’ll need – but I can build out most of the environments I want now, at least at a rough level of detail.

Over November I plan to finish out all of the character animations and start creating detail assets for the first section of the first chapter – mostly just different kinds of grass and stone to start with, but, again, in many of these cases I won’t know what I’ve forgotten until I get there and find I don’t have it. Still, finishing this game, as distant a goal as it remains, feels more concrete and feasible now than it has in a long time.

 

I’ve ended up on an impromptu vacation due to someone else cancelling. This wouldn’t necessarily keep me from writing a post, but no ideas have readily sprung to mind and I haven’t had the motivation to rack my brains for any. That’s fine, though, because it turns out it’s been a long time since I’ve done a music post – which means both that I’ve been good about posting regularly, so I can feel okay about missing one, and that I’ve got a fair amount of music ready to go for just such an occasion!

I got the piece I wanted to post today up to around 1300 words, with two-thirds of the outline left to fill, and realized I wouldn’t be able to finish it in time. That’s okay, though, because I just finished a piece of music, so now I can take the good ol’ musical coward’s way out once more.

While I’d wanted to do vocal tracks this year, I’m finding that more challenging than anticipated. I wrote the lyrics, I wrote the melody, and then when it came time to actually record some singing I hit a wall. I have a very hard time speaking into microphones, to the point where it’s kind of a phobia, and while I was hoping this would be something I could just power through in the moment, it’s clear that I’ll have to take a more circumspect approach. Towards that end, and for other reasons, I’m going to try to get into game streaming, starting with my playthrough of Dark Souls 3. I’ll be doing test streams this week: I’ve already tested my video and it’s working decently as long as my internet connection holds (it gets a bit finicky), but I still need to get my audio set up, something I’ve been kind of dragging my heels on (for some reason). Once I get things set up in a way I’m comfortable with, I’ll start announcing streams on Twitter. In the meanwhile I suppose I’ll be doing more instrumental tracks.

I’ve also been working on getting a new site set up. This is kind of a tricky decision, though: as much as I’d like a more proper and permanent setup through which to promote the game and my work, I already have a fair number of followers here, and I’m concerned that they won’t make the jump to a new site. Alternatively, I could double down on this version of the site – put together some proper front pages, tidy everything up to look nicer and more professional, and then pay to have WordPress drop their part of the domain name and remove the ads. That might be a better solution, but it probably wouldn’t result in as nice a site and might cost me some fine control. I haven’t decided, but by the end of the month I’ll probably have a proper domain set up, on one service or another.

Anyway, sorry there’s no piece this week, but next week’s will probably be a doozy. By my standards, anyway, since I usually keep things pretty short.

 

I feel like there’s a hole in the way we discuss soundtracks. We talk about using certain instruments or techniques to evoke certain kinds of emotions and associations without ever talking about the specifics of how we do that and why it works. I can’t tell if there’s some big conversation going on about this that I’ve somehow managed to miss, or if this is just something that doesn’t get talked about, and a quick Google search has turned up nothing – so let’s just get into it, and if it turns out this is actually its own field of study with its own terminology then I’ll just have to look like an idiot later.

Okay.

So we’ve been trying to use music to evoke complex ideas for some time. Symphonic poems – orchestral works created to evoke a given piece of art or poetry through music – stood apart from earlier forms, which were either meant to accompany opera or ballet or were meant as purely musical exercises, not intended to evoke any particular emotion or idea. Of course, once we had film, we found that it was kind of weird and awkward to sit there and watch something in spooky silence, so we started playing music to accompany it, and soon the music also included rough foley effects. This was the precursor to the modern soundtrack, the music that accompanies non-musical action, baked straight into the movie – not singing, not dancing, just a couple having a heartfelt conversation about where their lives are going or a man walking away from an explosion.*

Music, even without lyrics, has symbolism. The most obvious form of this is mimicry: If you want to evoke an icy environment, use crystalline-sounding instruments like chimes, or thin windy instruments that sound like wind through the snow, or abrupt snapping percussion that sounds like ice cracking. The second form is the onomatopoetic, using sound to evoke an environment or action in a less direct way – this can be difficult to quantify, but the famous Jaws theme is an outstanding example, the slow insistent motion evoking waves and the gradual crescendo to something faster and more insistent evoking something terrifying moving underneath them. The third, and probably the most common, is the associative: We associate sexy ladies with saxophone solos because we associate saxophone solos with sexy ladies. Among other things, this frequently provides a handy musical shortcut to communicate what era a flashback or period piece takes place in, though anything more than 80 years back is likely to be interpreted by a modern audience as “I dunno, in ye olde days sometime.”

A curious effect of the last is that, because it’s quite easy to create an association like this, the composer can create her own internal symbolic logic. For instance, Peter and the Wolf gives each character their own instrument and theme. Giving a character their own theme or leitmotif is a popular compositional choice, something that can communicate bits of plot very easily, such as by using variations on a character’s theme during a scene where they appear in disguise. This was often used to humorous effect in the comedy series Arrested Development, which established a short musical sting for a particular character or plot element and then played it at unexpected moments when that element came up later in discussion (it may be that part of the reason for the show’s lackluster success was that many of its jokes were too subtle to register as jokes to an audience that wasn’t paying attention).

Maybe a big reason this doesn’t get talked about much is that, as with the Arrested Development example, no matter how much the composer thinks about this, no matter how hard they work on it, few in the audience seem to notice the effort. The early areas in Monkey Island 2 have a lovingly crafted adaptive score, where each character has their own variation on the main town theme, with characteristic instrument choices and a bunch of detailed musical transitions, and these are sometimes reprised in later parts of the soundtrack – it’s unlikely we’ll ever see its like again in a game, since it was such a massive effort for a result that almost no one noticed. Inception cleverly used a musical motif, slowed down progressively, to viscerally communicate an idea about how its world operated – a motif since appropriated by other films going for that ‘epic movie feel’ without any appreciation for the original symbolic logic of its use.

Maybe most audience members just don’t give a shit about soundtracks.

The thing is, if soundtracks have meaning, it’s possible for soundtracks to have unfortunate and unintended meanings. I saw The Curious Incident of the Dog in the Night-Time, a play about an autistic teenager: The soundtrack seemed very intentional, using arpeggiated constructs to evoke a sort of mathematics-tinged outlook and loud, overwhelming, distorted sounds to evoke the idea of being painfully overstimulated. It also had a kind of glitchy aesthetic to it, which struck me as odd. Was this intended to suggest that he was like a computer? Or, worse, a poorly programmed computer, or a malfunctioning one? Most likely the composer simply associated these sounds with math and rational thinking, but in that specific context it had some rather unfortunate implications…

To me, anyway. I’m the only one who notices these things, apparently.

*I may be entirely off-base as to the musical history here. As I said, I wasn’t sure what search terms to use to do more research on this stuff.

[Album cover: From Spare Parts and Parts Unknown]

Over the last year, I wrote a piece of music each month with the intent of collating all of them into an album. Most of these I’ve put up on the blog as I went, generally as a sort of consolation prize whenever I felt too tired or uninspired to write a new article. Well, now it’s 2016, and having spent the last couple of weeks ordering these tracks, polishing them up, and mastering them, the time has come.

I decided that all 12 tracks together were a bit unwieldy as an album, though. 75 minutes is a long time to ask someone to sit still and listen to instrumental music, and some of the tracks didn’t really feel like they fit into the overall flow. Thus, alongside the main album, I’m also releasing a free mini-album. These are, respectively, “From Spare Parts and Parts Unknown” and “Please Don’t Make Me Leave”. Both are free to stream, and if you’d like to download the mini-album, for use on portable devices and whatnot, it’s free for that purpose as well.

If you enjoy either or both of these albums, please consider purchasing them and/or recommending them to your friends. I spent a lot of time trying to make these as good as I could, and I hope there’s an audience out there who enjoys listening to them as much as I enjoyed making them!