Tag Archives: Artificial Intelligence

There’s a lot of talk right now around AI art, and I’ve mostly just patiently listened to it. However, more and more I feel the need to yammer at length about it, because several aspects of how this gets discussed are frustrating to me.

“AI” stands for Artificial Intelligence, and is used with all of the exacting precision we’ve come to expect from how the terms “artificial” and “intelligence” are themselves deployed. AI gets used to mean a lot of things – from a carefully programmed set of instructions to control the behavior of entities in games, to science-fiction computer super-intelligences, to algorithms for mashing a bunch of randomized inputs together until they match an expected output. The last of these is what’s referred to as a Machine Learning algorithm (the acronym “ML” gets pretty confusing if you spend a lot of time around left-leaning programmers). There are three parts to this process: First, getting the list of stuff to mash together; second, doing all the mashing; and third, developing a way to assess how close this mishmash is to what you want. Any one of these three steps can introduce unforeseen biases – and, due to the black-box nature of these meta-algorithms, none of these biases are easily identified once they’re introduced.
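
As a concrete illustration of those three parts, here is a deliberately toy sketch in Python – no real ML library involved, and the target string is an arbitrary stand-in for the “expected output”:

```python
import random

# The "expected output" we assess against; in a real system this would be a
# training set, and the assessment a loss function (both are stand-ins here).
TARGET = "a picture of a cat"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def closeness(candidate):
    # Part three: assess how close the mishmash is to what we want.
    return sum(a == b for a, b in zip(candidate, TARGET))

# Part one: the list of stuff to mash together -- here, random characters.
guess = [random.choice(CHARS) for _ in TARGET]

# Part two: keep mashing randomized changes in, keeping whatever scores better.
while closeness(guess) < len(TARGET):
    mutated = list(guess)
    mutated[random.randrange(len(TARGET))] = random.choice(CHARS)
    if closeness(mutated) >= closeness(guess):
        guess = mutated

print("".join(guess))
```

Note where the biases live: in what goes into CHARS and TARGET, and in what closeness chooses to reward – exactly the three places described above.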

Regardless, this technique can be, in certain circumstances, kind of cool. The process of masticating a bunch of input and regurgitating it is not necessarily a creative one, but it can be a generative one: I think most artists have probably played with processes for gleaning inspiration from random occurrences, from looking at passing clouds or from Rorschach tests or from pulling words from a hat, and these processes can be satisfying and elucidating in their own right. ML is a particularly powerful version of such processes, since it has the vast iterative power offered by modern processing, but is still fundamentally pretty similar to a set of dice with words printed on them. Artists also have a number of processes where the specific result doesn’t matter so much as the impression it creates: when rendering the texture of tree bark or of flowing water, it’s seldom necessary to get any particular line or highlight correct, but it is vital to create the right texture and convey the correct impression, and uncontrolled processes like throwing gobs or specks of paint or scraping the canvas might generate this substance – here, too, ML algorithms could be useful in digital art, generating the right kind of noisy detail fills.

What ML is not useful for, what it will never be useful for so long as it uses this methodology, is generating art. The reasons for this get down to the core of what art is: Art is a method of communication, an intentionally engineered solution to the problem of how to convey a complex and abstract idea. ML can never make art because it is a conversation with no one: What it generates could be interesting, but it’s just random noise filtered in a particular way. Whoever interprets meaning from that could find something interesting, could be inspired or moved – but the same is true of sitting in a forest and listening to a river. The process may be beautiful, but without intent it fails the one and only test of art, building a connection between two or more human beings. In other words, art is art because it has an artist.

Perhaps the thing I find most insulting and frustrating about this is I actually love the idea of artificial intelligence art. Imagine a whole other class of sapient being who lives with us, who was born of our effort! What would such a being have to say? What would their perspective be? What concepts would they try to convey to us through art, and how would we receive them? But of course, that isn’t what this is. Nor is this a tool created by an artist to express an idea, some abstracted set of rules for generating meaning, some meta-medium that can generate many artistic experiences by building off a set of authored rules (though, tangentially, it could be argued that this is what a video game is). It is, in practice, simply a system for chewing up and spitting out existing work. I’m not one to be prescriptivist when it comes to art. I think anything a person cares to make and to call art can be art – but they do have to make it, curate it, place it, assign meaning to it. I know the objection that’s springing to every defender’s virtual lips now: “I wrote the prompt, I chose the result, therefore I’m the artist. I provided the intent, the machine just filled in the details – how is this different from an artist using a textured brush or other tool?”

There are two reasons why I still don’t buy this argument. First: The direction these tools are being developed in, towards ever finer detail and rendering, and the way their supposed futurist advocates present the results, build towards a shallow aesthetic beauty, something which looks nice at first glance, rather than towards results which are surprising and expressive – that is, the way ML art is currently rhetorically deployed is the furthest possible thing from the aesthetic goals that define an artist’s approach. The ML tools making the rounds a few years ago, the tools that spat out grotesque nonsense, where you could see the connections the machine was trying to make and failing to articulate, were often fascinating prompts for the imagination, a weird dream slurry of related ideas, but as these tools “improve” they only drift further from the realm of the interesting and expressive. Second: While ML tools may spit up solutions to problems, they are in fact regurgitating existing solutions to problems – an aspect which, as well, is exacerbated by their current rhetorical deployment in “real art” arguments made by people with a rotted view of what real art is supposed to look like. The output that manifests this way is doomed to genericism (in the case of harvesting too many inputs) or plagiarism (in the case of harvesting too few) as an innate property of the methodology of the tool.

Of course, these mistakes aren’t limited to machines. Countless human artists, many of whom should know better, make the mistake of imitating the form of something they liked rather than attempting to understand its construction and apply the lessons that can be inferred from it. So much of art, especially popular art, consists of these endless repetitions of contextless form, because we have come to identify certain shallow traits as the correct way to do things. Waves of movies with the same visual and production style, echoes of the same style of dialogue writing, signaling at fun or importance without bothering to construct anything fun or important; eras of game design with pointless leveling or crafting systems or unnecessary turret sections; novels which mistake maudlin sentiment for profoundness, television shows which mistake meanness for genius, and so forth.

Perhaps what really frustrates me so acutely in this advocacy of a future of machine art is what it reveals under the surface: So much of what people see in art is the immediate presentation, the basic aesthetic, and so little the craftsmanship, the meaning, the message. Bad enough were it limited to art, but the pattern repeats elsewhere: Sham experts citing sham science to convict real people; sham coordinators making sham foam pits for people to shatter their real spines in; sham politicians stoking sham outrages to get their real enemies murdered. It may have always been this way, but it’s moving faster and the stakes are higher, and I will never stop being angry about it. The awareness of impostor syndrome has been inverted into a belief that everyone is an impostor, that doing things well doesn’t matter, that simply mimicking the appearance of competence is identical to competence, and it’s getting people killed and it’s going to get more people killed. All of this is only possible in a world that holds deep contempt for both the concept of education, that which makes one expert, and labor, that which gives one experience – we hold capital to be the sole arbiter of correctness, to decide what’s real and what’s fake, what’s expertise and what’s bias, what’s art and what’s trash.

I still find it fascinating to think how often we make stories about machines rising up to destroy us. There’s probably a few anxieties tangled up together in these narratives, and it’s intriguing pulling on those threads to see where they lead. One leads back to Frankenstein, to the abandoned artificial child growing hard and vengeful absent a father’s love – a story which itself ties in fears of new technological capabilities trespassing in the gods’ domain, and of careless treatment of those we have obligations towards causing far distant tragic consequences. One thread leads back to the nuclear bomb, a fear that we will invent the means of our collective undoing as a species. One thread connects back to our often-unstated understanding that our prosperity, such as it is, is built off of the suffering and exploitation of people all over the world, and that this is unjust and those people might justly cast us down any day. One other thread ties to the understanding that we as well are being exploited, and that technology is providing our exploiters with ever more leverage with which to do the exploiting.

All of these anxieties find a home in feverish imagination as killer robots and malicious artificial intelligences. I’ve begun to suspect, though, that the main thing we see when we look into the terrifying robot future is ourselves. Most automation, as things stand, isn’t actually automatic at all – just a way to allow people to work at a distance, to obscure their presence, to launder and anonymize their labor. On the flip side, most of the decisions that cause the most harm, that cost the most lives, are entirely human in nature, lacking in rigor and data, full of bias and fear and cruelty. What unites these decisions, however, is that they are usually performed in the service of some sort of system. Some army or corporation or government sets a priority for something that must be accomplished, and otherwise normal people begin to completely disregard whatever ethics, whatever regard for human life, they might once have had.

We’re already very good at performing automated labor. We’re good at playing roles, fitting ourselves into systems, and avoiding looking up at the overall effect of our actions because it is in aggregate too complex and too horrifying to be countenanced. We can tell terrible parables of gray goo and terminators, but what makes these stories effective is how familiar they all seem, created in our own image.

I do not believe humans are evil – I used to not believe in evil at all, and I still don’t really believe a person can be evil, or even that an action in isolation can be evil. We are very good at performing social roles, and those roles make us capable of doing great harm no matter what our personal beliefs are. We’re scared of the machines because we’ve been them. We’re scared of the machines because we might be them still, just following orders, just acting as we are programmed to act. I believe now that, inasmuch as evil exists, it exists in the space between us, in the way we organize and understand power, in the way we are taught to treat one another. Evil is a machine, and we are its parts, and the duty is ours to see how it might be dismantled.

After you spend a little while doing creative work, you tend to notice certain themes recurring throughout your creations. I have written here about my current project, EverEnding, but seldom in terms of its story and themes – I haven’t spoken at all about the project I thought of first, and which is on extended hiatus, Mechropolis. The themes of Mechropolis are, as may be evident from the title, artificial intelligence and life after death; following a set of three characters all straddling the boundaries of dead and alive, organic and synthetic, and using them to explore ideas of what it means to be made for a purpose – and then to be discarded.

Incidentally, I suspect that when artificial intelligence does come to manifest itself one way or another, it will take a very long time for us to notice. Consider how poor of an understanding and how little respect we have for animal life and intelligence – hell, consider how poor of an understanding and how little respect we have for the life and intelligence of humans who look slightly dissimilar to us. We convince ourselves of patent racial falsehoods every day, make manifold excuses to not perceive or understand the intelligence of others, and you would have me believe that we have any idea what artificial intelligence looks like? It may have already arrived, for all we know. Given our history of treatment of those we deem our social inferiors, it would certainly be in the best interests of any artificial mind to keep on the down-low.

Anyway. The themes of EverEnding are less immediately obvious, but similarly have to do with beings who were created for a purpose and exist somewhere between life and death, long past the purpose they were originally crafted to serve. Noting themes which you’re consistently drawn to, which have ended up woven into your every idea without your ever making a specific decision to include them, can be mildly unnerving. What anxieties do they reveal? Are you doomed to always circle the drain of the same few grim fascinations?

I think what fascinates me about the idea of artificial life is a sense that we exist in ways that are far more similar to those of an AI, a golem, or an angel than we generally care to admit. Though we weren’t made explicitly to serve a purpose, many of us have had purpose instilled upon us, or claimed purpose for ourselves – and, once you make a decision to be something, it warps everything in your life around that focal point. The question, when meeting someone for the first time, when wanting to quickly understand who they are, is: What do you do? What is your job? What is your function? I have a sense of this as being a particularly American outlook, but I have little basis for comparison, having never left the country.

So we find ourselves a role, and we begin building ourselves to fit into it. We learn skills and forget others, we embrace passions and forget others, make friends and forget others, snip off bits and pieces of our personality that jut outside the mold we’re trying to fit into. We tailor ourselves to suit a purpose, and live defined by that purpose. Then, very often, we outlive that purpose and have to figure out who or what we are afterwards, slowly forgetting the things we learned and remembering the things we forgot, regaining a shape we abandoned long ago. After this little death may come a little rebirth, a new sort of life less shaped by the purpose it must fulfill – but this new life, being defined in different terms, can never entirely coexist with the lives of servitude to purpose that surround it, and always will straddle these invisible boundaries, always be out of place, undeath, unlife.

Both games are about the aftermath of some sort of disaster or collapse – in one case the end of the world and in the other the mere collapse of a nation – and the reshaping that happens afterwards. As we all drift, unmoored and unmanned, captained by greed and idiocy and sailing off the edge of the world, I know now why these are the themes I’ve felt compelled to explore. At this point I only hope I get a chance to actually complete these projects – and that there’s still an audience left for them if I do.

I’ve been thinking about empathy, and about the role it plays in game design. There’s a fair amount of discussion of ‘empathy games’ – games created by and from the perspective of marginalized creators, games created to push you into another person’s proverbial shoes to walk a proverbial mile. Different versions of these have emerged, from games that attempt to simulate a life of one random person on the globe, based on statistical modeling, to games about one specific person’s specific experience and their understanding of it. Much digital ink has been spilled, as well, advocating that these games are the way towards a more understanding and empathetic future, with implied better outcomes for the communities represented by those games.

That’s not really what I want to talk about, though it’s an important thing to touch on. Most of the lofty rhetoric around these games has borne little fruit – it turns out that walking a mile in a person’s shoes doesn’t really tell you much about them, because you’re still walking with your legs and perceiving the world through your eyes. You can tell yourself that you’ve come to understand them, but all you’ve done is constructed an effigy of them for your imagination to occupy. It is far from empathy. Often, if these games involve significant choice, they end up being turned into min-max exercises by the player – coming to understand the single optimal strategy for ‘winning’ the game, trivializing a life full of uncertainties and incomplete information into an obstacle course to be solved. And, of course, the end result of these gameable ‘lives’ is the exact opposite of empathy, feeding directly into a sort of just-world fallacy.

However, even before we encounter these high level issues with the ideas underpinning empathy games, let’s question an even more basic assumption: Does empathy lead to kindness?

Empathy is the process of understanding what another creature is thinking and feeling. This is something we do all the time, and it is a vital survival tool. All interpersonal interaction is empathetic to some degree: we are predicting reactions and trying to feed back into those predictions, verbally or otherwise. All communication could be seen, then, as a sort of formalized empathy, codifying and expressing internal processes to make them easier for others to engage with, while they provide the same service in return.

This is lovely, but it reveals a dark truth: Just as there’s nothing inherently kind or morally good about language itself, there’s nothing inherently kind or morally good about empathy itself. Certainly I believe that those most able and inclined to be empathetic are, on average, better moral actors: They understand the potentially painful outcome of their decisions better and they have a conception of shared moral reality that extends beyond their immediate purview. I also believe the same is, on average, true of people who are good at interpersonal communications, for much the same reason. This is not the same thing as these tools being intrinsically or always good or moral. You can use your bone-deep understanding of another person’s mental state for anticipation, for manipulation, for exploitation. We like to describe these sorts of mental domination tactics as being completely separate from what empathy is, but they seem like two sides of the same coin to me.

Of course, even using empathy aggressively is not inherently immoral. We do it all the time when we play games! In something like a fighting game against a single opponent, you’re constantly trying to understand, predict, and counter their every decision. Fighting game players sometimes call this ‘yomi’, Japanese for ‘reading’, meaning to read the mind of their opponent, but it seems like empathy to me: Every form of understanding the mind, decision-making, and emotional state of another creature seems, to me, to be a form of empathy.

This is part of why we love to compete – for the same reason we love conversation, because it allows us to understand and express bits and pieces of our collective minds to each other. The process of competition is much the same as the process of conversation: “What do you want?” “What do you expect?” “How can I accommodate these desires and expectations?” or even “How can I shape these desires and expectations?” These are questions which, in some form or another, go through one’s mind both in cordial social interactions and during an intense competitive game. Both situations have as well a degree of subterfuge – sometimes you have to conceal your feelings, your desires, your plans, either for some sort of competitive advantage or just to spare someone else discomfort.

Even single-player games interface with this desire for connection. In stealth games, for example, you’re constantly trying to understand where people are around you, where they’re going, what they’re trying to achieve, and how much they know about where you are and what you’re doing. The behaviors controlling these opponents are extremely simple because there tend to be quite a few such opponents and the penalties for failure are often high, but the basic flow of understanding “what is it that this entity understands and desires?” is still present. Because these behaviors are so basic, however, they often end up feeling arbitrary – most enemies can immediately tell the difference between your footsteps and their friends’ from two rooms away, but will observe a door that’s meant to be locked swinging open without reacting.

This is, I think, a very subtle thing that players responded to in the Dark Souls and affiliated series: Though enemy behaviors are extremely simple, they do seem to be placed and scripted as though their intent were to defeat the player rather than, as is the case with many enemies in other games, to be dramatically and satisfyingly defeated. Similarly, the enemy behavior is just sophisticated enough that you can watch them decide on an attack based on your relative positions, then seek to execute that attack and respond to it, creating something akin to a very primitive version of the sensation of fighting games’ ‘yomi’. Of course, many players dislike these traps and ambushes and tactics, seeing in them not a malicious opponent but a malicious game designer – which is, I suppose, also the case.
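
As a rough illustration of that positional decision-making – a hypothetical Python sketch, not anything from the actual games – the core of it can be surprisingly small:

```python
def choose_attack(distance_to_player):
    # Pick an attack by range bucket, then commit to executing it;
    # the player's read of this decision is where the 'yomi' comes from.
    if distance_to_player > 6.0:
        return "lunge"
    if distance_to_player > 2.5:
        return "overhead swing"
    return "shield shove"
```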

All these single-player examples, though, are of games which encourage you to understand the intent of extremely basic characters and creatures. Even if they’re presented as clever entities like humans in the narrative layer, usually they end up coming off as simple-minded simply because of the limitations of their development. Most games don’t bother creating rich interior lives for their characters – for the simple reason that most players wouldn’t notice if they had. It doesn’t take a lot of effort to make a character behave somewhat convincingly, to make them pace and mutter and run towards loud noises and yell and shoot, but it takes a lot to make them understand their world, formulate goals, and act to achieve them.

Still, it’s worth thinking about what we might do to create a simulation sophisticated enough to be worth empathizing with in a deeper and more elaborate way. First, let’s think about how a creature or person takes action:

[Chart: the entity’s decision-making loop – desires and information feed a plan, the plan drives actions, actions change the world, and observations of the world update the information.]

This is a chart of what decision-making might look like to an entity. The entity’s self is represented by the red box: Every creature has innate desires and certain information about the world it occupies. For a living creature these desires would be subject to change based on that information, but let’s just say this simulation is taking place across little enough time that these desires remain more or less constant. The creature’s information changes based on its observations of the world, and that information combines with the creature’s desires to create a plan of action to achieve those desires. The plan results in actions the creature takes, which change the world around it, and so forth. Left to its own devices the creature would steadily achieve its objectives (assuming its plans were any good), but there’s also an unknowable quantity of other creatures whose actions are also affecting the world in ways which change the flow of information, constantly requiring new plans.

Thus, if we wanted to create a compelling set of artificial behaviors, we’d need three tools:

1) Information. How does the creature perceive the world? What can it see and hear, and how does it parse this into usable data?

2) Desire. This is probably the simplest: just create a world state (or set of such states) that the creature wishes to achieve or maintain – the difficulty of this step is in formulating it in such a way that it can be used for the next step.

3) Plan. This is the tough one. How do you synthesize the information and desire into a plan of action? (A minimal sketch of all three pieces follows below.)
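
To make those three tools concrete, here is a minimal sketch – all names and structures are my own hypothetical illustration, not any particular engine’s API – of a guard that perceives, desires, and plans:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Pos = Tuple[int, int]

@dataclass
class Percept:
    # 1) Information: what the creature managed to perceive this tick.
    player_pos: Optional[Pos]  # None if the player is out of sight

@dataclass
class Guard:
    pos: Pos
    post: Pos  # 2) Desire: a world state to maintain (hold my post)
    last_seen: Optional[Pos] = None

    def observe(self, percept: Percept) -> None:
        # Information persists: remember the last place the player was seen.
        if percept.player_pos is not None:
            self.last_seen = percept.player_pos

    def plan(self) -> Tuple[str, Pos]:
        # 3) Plan: synthesize information and desire into an action.
        if self.last_seen is not None:
            return ("investigate", self.last_seen)
        if self.pos != self.post:
            return ("return", self.post)
        return ("idle", self.pos)

# Usage: feed observations in, get intentions out.
guard = Guard(pos=(0, 0), post=(0, 0))
guard.observe(Percept(player_pos=(5, 2)))
print(guard.plan())  # ('investigate', (5, 2))
```

Even this toy version exhibits the seams discussed below: the guard reacts only to what observe is written to notice, and is blind to everything else.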

We usually don’t model anything like that in games because it’s overkill. If you create a set of creature behaviors capable of inferring from subtle information, players will often feel like it’s cheating; whereas if you create a set of random behaviors, players will often infer a reason for that behavior. Sometimes the cruder and more artificial behavior set ends up feeling more real. However, when we don’t bother to emulate any internal life, we create seams: Creatures ignore information which they aren’t scripted to notice and react bizarrely to edge cases, and while these may be nearly impossible problems to effectively solve, I think it would be neat if we tried. If we had, as we strained to make ever more photo-realistic worlds, established methods for giving characters perceived knowledge of their environment, a set of desires, and a method of formulating plans, would it then be very difficult to create opponents which feel real and substantial? Perhaps even human? If players feel AI that is too smart is cheating, is that just an artifact of the strained pseudo-fidelity of modern games, where everything looks photo-real but nothing meaningfully reacts to what happens around it?

There’s a natural, easy joy to competition, to testing each other and understanding each other and exceeding each other, which largely doesn’t exist in single-player experiences. What we have instead of competition is naked challenge, a kind of lurid hyper-competition which strips away all the ‘boring’ parts – and, in the end, gives us targets rather than opponents, conquests rather than contests. This is fine. It’s not like these games can’t be fun and interesting and even thought-provoking. And yet I can’t help but wonder what these games would look like today if we’d ever been taught to look at their casts of characters as anything aside from predators or prey.

Of course, even if we gave every creature motivation and observation, there would still be something missing: Empathy. If we wanted to create realistic behavior, we’d also have to give creatures some capacity to observe each other directly, predict actions, and act preemptively based on those observations. Maybe we’d even start to feel bad for massacring them.

It feels, now, so naive to believe that higher resolutions, faster framerates, and more polygons could ever lead to something which appears truly real. Every step we take toward making things look realer becomes a step into the uncanny valley: The more realistic the texture and motion, the faker the seams start to seem, unraveled at the edges. When we film in high resolution, it can’t fail to become more obvious that we’re building sets and canning dialogue instead of recording a reality – when we render at 60 frames a second, the visual gaps between our animations and the motion of muscle and bone become stark.

In the long run, I don’t believe that we can approximate reality by means of picture quality or polygon pushing – or, even, by means of technique and artistry.

This relates to what I wrote about last week. The first stroke of creation still matters deeply, and determines the final form of the piece in an inescapable manner. The more we try to hew to reality through artificial means, the more the gaps between reality and our representation of reality will begin to show.

As we seek reality, if we continue long enough, our methodology will begin to drift away from mimicry and towards emulation – that is to say, it will no longer be sufficient to create a model, texture, and animation that looks like a creature in motion, but will become necessary to create a simulated creature operating under the rules of reality. Follow the thread long enough and realism becomes simulation, inevitably. There’s no bridge across the uncanny valley, just art on one side and reality on the other – one, a representation of something external to itself; the other a system no longer beholden to aesthetics.

If your standards for realism are set high enough, the only way to fulfill them is to create reality. So, the question is, what is it we actually want when we say we want things to look better, to look realer? Do we really want Turing machines running virtual flesh bodies, ensuring each motion is motivated by an actor with real wants and needs, each muscle jiggling and snapping as limbs flex? Or is what we want, not reality, but the same old fake worlds with more pores, with higher thread counts, where everything is just a little bit shinier and we can pretend that reality is what we make of it?

As we approach true artificial sapience at unknown but increasing speed from an unknown but decreasing distance, our fears and assumptions about what will happen to us become repetitious. It is generally assumed, for various reasons, that the existence of an intelligence superior to our own will mean the end of us.

Why? Well, look what we’ve done to the species we share the planet with. If we imagine a species that is as gentle and attentive a caretaker of humanity as humanity has been of the world, it’s not hard to see why we find the idea dismaying. Regardless, our fiction repeats a kind of weird certainty that to be supplanted as the greatest, the smartest, is to be destroyed or made irrelevant, to cede our place in the universe to our creations. It seems tremendously short-sighted to believe that our, frankly, extremely stupid appraisal of the necessity and utility of violence would be passed on to our creation, at least if it were a superior intelligence. If it were, that would speak more to flaws in that intelligence than it would to the priorities of a superior being.

Sometimes, in our stories, the destruction of humanity is violent, a massacre, and there are two forms these take: One is where the artificial intelligence isn’t really intelligent at all, just smart enough to find targets and eradicate them without the aid of a human operator. This is basically the wild beast, or Big Dog, hypothesis: A world of dragons, that hunt humans because they don’t know better, that breathe fire and lead and don’t get tired. It’s a frightening image, but hardly a threat of extinction. We’ve survived worse as a species. Moreover, since these robotic beasts aren’t truly intelligent, they’d eventually fall apart or be destroyed, especially since the facilities to create them would inevitably be high-priority targets.

The other form is where the artificial intelligence is truly, boundlessly intelligent, and willfully engages in the destruction of humanity – and this is the one I find most interesting, that this image keeps repeating in our art, because what it suggests is that we generally believe that humanity should be destroyed. We apparently believe that a being of comparable or greater intelligence to that of humanity would inevitably come to the conclusion that we shouldn’t exist.

Why?

Mostly, I think, we believe this because the assumption that eliminating a dangerous person is justified is something deeply ingrained in our culture. The same belief system that makes it okay to put our criminals in a prison that we know to be unsafe and abusive is what lets us assume that an implacable machine enemy would put us in The Matrix. The same belief system that lets us believe that torture and indiscriminate bombings are justified in a new era of war lets us believe that, if it were the machines who were advanced and we were the ‘savages’, their divine right would allow them to eradicate us.

We believe in the approaching supremacy of machine gods who care nothing for human life because it justifies the existing supremacy that lets us care nothing for human life. We believe it is, given the power and intelligence, the only correct way to be. Machine supremacy becomes the new colonialism, and we believe that they, those advanced and logical machines, will believe it is correct – justifying, in retrospect, centuries of human atrocity.

In case it was ever not obvious: Machines aren’t what we need to fear. Even if they get smarter than us, faster than us, stronger than us, those capabilities are in themselves utterly unimpressive compared to what we have already done to each other with everything ranging from exotic chemical agents to the controlled application of disease to buckets of water and an overactive imagination.

Whether they remain our tools or become our successors, what we fear in machines is mostly the reflection we see in the metal.

Another slow week, primarily working on animation stuff. Most of the animation work wasn’t very good either but, eh, once I create something it becomes easier to make it good, so it’s still progress.

[animation: MaskWalkTurn00 – walk cycle with turn]

Added turning animation to the walk cycle. This is definitely wrong in a couple of ways: Most obviously, the limbs switch sides after the turn, which is a product of the left-walking animation being just the right-walking animation flipped around. That’s an easy fix. A bit less obviously, both feet stay on the ground during the turn, which makes no sense and will definitely need to be fixed. That’s going to be a bit trickier, but should be feasible. In general I’m a bit displeased with the animation of the left arm in all of these: I’ve gotten used to Eve’s relatively motionless left arm, but these guys need to be a lot more expressive there, so that’s probably something I’ll be generally working on in the animations.

I also did the animation for transitioning between the alert and idle states:

[animation: MaskIdleAlertTransition01 – transition between idle and alert states]

I like this one a lot better. There’s a bit of weightlessness to it that doesn’t quite work, but I like that it jumps off the ground slightly to transition to the alert state, selling that the entity is startled (and meaning I probably don’t need to make a separate turning animation for when it’s alerted by something behind it). It also seems like the left leg is moving naturally both going into and out of the alert state, taking a little step to snap back where it should be. Even better, there’s a chance I may be able to reuse this animation, or a slight variation on it, as the jumping animation. I’ll test that out when I get to it.

Well, I got the general-case solution working, got all of the code I’d already written for the entity behavior copied over to where it needs to be, and built it all, and it all runs, more or less. So I guess now I’m on to debugging and improving it, creating more animations, and creating the alternate versions of this entity. It’s all a lot of work, but I knew it would be: The problem at this point is mostly just that it’s been real hot and I’ve been having trouble keeping my motivation going.

But, you know, my motivation has flagged plenty of times during this project, and it still keeps on chugging along. As long as I get a bit of work in every day, it will continue to move forward. Yes, I’d prefer to get in more than a little, but sometimes it’s going to be hard.

That’s just how it is.

So, this week will be mostly creating new animations to flesh out this entity, implementing them, and fidgeting with the code to make the behavior better and more naturalistic. Hopefully productivity will pick up, but even if it doesn’t progress will get made, just perhaps very slowly.

Another week on programming this entity’s behavior. A few days in, I was getting close to having it all working when I ran into the dread disease programmitis, bane of programmers everywhere. No, I’m not talking about carpal tunnel syndrome. No, I’m not talking about getting a sore butt from sitting in a computer chair all day! I’m talking about the general use case.

Well, to make a long story short, it was pretty easy to create a generalized state machine AI behavior, but it was a lot more difficult to break all of the state code out into individual files, requiring me to separate out all the variables used into each of those classes rather than keeping them all in one central place in a cluttered but easily comprehensible way. It was also a lot of work taking the most general functionality – such as the tests to see whether one entity can see or hear another, and the code to navigate a path – and extracting it into an EntityTools utility class that I can use for all future entity behaviors (something along the lines of the sketch below).
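
For illustration, such a utility class might look something like the following – a hypothetical Python sketch of the sight and hearing tests described, not the project’s actual code:

```python
import math

class EntityTools:
    """Shared perception checks, usable by any entity behavior."""

    @staticmethod
    def can_hear(listener_pos, source_pos, hearing_radius):
        # Heard if the source falls within the listener's hearing radius.
        return math.dist(listener_pos, source_pos) <= hearing_radius

    @staticmethod
    def can_see(viewer_pos, facing_angle, target_pos, view_distance, view_cone):
        # Seen if the target is within range and inside the viewing cone.
        # (A real version would also raycast against the level geometry.)
        dx = target_pos[0] - viewer_pos[0]
        dy = target_pos[1] - viewer_pos[1]
        if math.hypot(dx, dy) > view_distance:
            return False
        # Normalize the angular difference into [-pi, pi] before comparing.
        diff = (math.atan2(dy, dx) - facing_angle + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= view_cone / 2
```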

In other words, all the traditional ways that programmers get sidetracked and waste a ton of time.

Is this time going to be a waste? Dunno! Pretty sure I could have at least done this in a better order, such as finishing getting the entity working, then extracting the general-use code out into an EntityTools class, then generalizing the state machine AI into a reusable behavior. That would have been the smart way to do it, probably. Oh well!

As things stand, if I can focus it should still come together pretty fast and be working within a matter of a day or two. That’s a big ‘if’, though, with the temperatures this week dancing up around the high 90s and low 100s and me stuck in a tiny room with no air conditioning. Well, I’ll try to make steady progress, and maybe if I get lucky I’ll even eventually start making fast progress too.

I started working on all of the behavioral programming stuff, then got completely sidetracked for a couple of days when I realized that I didn’t have any centralized document with all of my story content in plain language. Up until now I’d gotten by pretty much keeping all that stuff in my head and occasionally writing down bits and pieces of it, more often than not in the form of stories which were more metaphorical than accurate to what’s actually supposed to be going on. This, I realized, was making it difficult to append to the story and to plan out how I was going to tell it, because I had no centralized resource to refer back to, to make sure I wasn’t contradicting myself. It took me several hours to write it all down, and it ended up being more than 3,000 words, which is a pretty good sign I should have done it sooner. It’s still somewhat subject to revision, but revisions should be more along the lines of expanding and going into more detail about unclear concepts than changing the specifics (unless I come up with a really good idea, of course).

After that I went back to programming the behavior of these entities, and I found that both my production planning using Trello and the detailed story breakdown of the last week or so of work came in very handy, since I immediately found myself breaking down all of the behavioral specifics that had been confounding me into a set of relatively easy-to-manage behavioral states. Thus, rather than trying to think of the behavior of these enemy types as an impenetrable wall of if-then statements, I can parse this behavior much more readily as a set of simpler behaviors that switch to other simple behaviors based on the input. So, for instance, I can have a patrol state that does nothing except walk forward, and then have it switch to either an idle action state at random intervals, a turn-around-and-patrol-the-other-direction state if it hits a wall or the edge of its patrol radius, or a pursuit state if it sees the player (see the sketch below). At that point, all I have to do is write the five or six lines of code that control each state, and the entity should work.
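
In Python-flavored sketch form (the state names come from the post; everything else is my own hypothetical illustration), each state really can come down to a handful of lines:

```python
import random

class PatrollerBehavior:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, at_patrol_edge):
        # Each state does one simple thing, or hands control to another state.
        if self.state == "patrol":
            if sees_player:
                self.state = "pursue"
            elif at_patrol_edge:
                self.state = "turn"
            elif random.random() < 0.02:  # occasional idle action
                self.state = "idle"
            else:
                self.walk_forward()
        elif self.state == "turn":
            self.face_other_direction()
            self.state = "patrol"
        # ...the idle and pursuit states follow the same pattern.

    def walk_forward(self): ...
    def face_other_direction(self): ...
```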

I have the basic version almost up and running, but the code to handle when and how to attack is still a bit tricky, since it deals with specifics of positioning which vary from attack to attack. All of the movement stuff is pretty much there, though it will inevitably require some debugging, and I still need to generate the timing info for things which rely on a timer (attack recovery, patrol delays, alert time, etc. – see the timer sketch below). I think I can finish up the basic behaviors tomorrow, at which point I go back and do all the prototype animations needed to fully animate those behaviors – after that, I go back in and add all the code for another version of this enemy, such as the scout or rider variants, and then I make the animations for them – and so on, and so forth, until they’re all done.
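
Those timed pieces all reduce to the same small pattern; here is a hypothetical countdown sketch (my own illustration, not the project’s code) that would cover attack recovery, patrol delays, and alert time alike:

```python
class Countdown:
    """A timer that counts down in seconds and reports when it expires."""

    def __init__(self, duration):
        self.duration = duration  # e.g. attack recovery time
        self.remaining = duration

    def restart(self):
        self.remaining = self.duration

    def tick(self, dt):
        # Advance by the frame's delta time; returns True once expired.
        self.remaining = max(0.0, self.remaining - dt)
        return self.remaining <= 0.0

# Usage: an entity stays alert for three seconds after losing sight of you.
alert_timer = Countdown(3.0)
if alert_timer.tick(1 / 60):
    pass  # drop back to the patrol state
```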

Making future enemies should be quite a bit easier after this, since I think these will be by far the most complex of any non-boss enemy in the game. Even for cases as intricate as this, in the future I’ll have these guys as a template to work from, so I expect any future problems to be quite a bit more approachable.