Apokalypsos & A.I.: An Introduction

by Maniac on July 29, 2010

Hi Folks,

First of all, let me introduce myself. I’m Simon Wittenberg and I’m responsible for the A.I. in Apokalypsos. I had this forum created to tell you about the ongoing development in my department. In a (hardly) weekly series I’ll talk about the general aspects of A.I. in game development, the particular approach we will take for Apokalypsos, maybe a little bit about techniques, and about all the other stuff that is, or at least that I think is, interesting (related to A.I., of course, otherwise you’d soon know more about me than you want to).

Ok, so here we go. A.I. stands for Artificial Intelligence. Calling it artificial says nothing about the quality or the complexity of that intelligence, and intelligence itself is a term that isn’t beyond dispute. So the whole term is kinda blurry, and you can get many different opinions on what A.I. is and how far you can take it. Can you create a ‘real’ intelligence? Can programs ‘feel’? I don’t plan on answering those kinds of questions, but I may answer some like these:
How can we create an A.I. that feels like a ‘real’ intelligence? How can we create the appearance of feelings in an NPC’s mind? How do we simulate a mind in the first place?

Since this will be a computer game and not a scientific research project, and since the resources to implement A.I. in this game are limited, we can’t use each and every technique that allows exact reproduction of human or animal perception or learning. Imitating behavior may not sound like a big deal, but that’s just because you have an intuitive sense of how to imitate behaviors to a certain degree. But how do you know what makes a behavior believable, reasonable, ‘real’? And what would rules that produce a behavior with those qualities look like? Of course, this depends on the scope in which the A.I. has to prove itself. A chess A.I. may just need to know the rules of chess, a typical shooter A.I. may just need to know how to use guns and behave like a trigger-happy grunt, and so on. Most modern A.I.s are specialized, and therefore problems (and A.I. is mostly about solving problems, and about problem solving itself) can usually be solved in an appropriate manner. The drawback of such a narrow scope and specialization is that events that weren’t thought of when the scope was defined may cause the A.I. to react (if it reacts at all) with inappropriate behavior.

So the more your world grows in complexity, the more your A.I. has to be capable of. Since you don’t want to spend all of your time thinking up every possible thing your A.I. could face and writing the corresponding code to make it able to handle it, you have to find ways to make problem solving itself work in a much more generic way. Excuse the overuse of ‘generic’ here, but that’s exactly the approach we will take. The generic approach.
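To make that a bit more concrete, here is one way such generic problem solving can look. This is only a rough, GOAP-style sketch in C++; every name in it, from Action to the key-and-door facts, is made up for this post and isn’t taken from our actual code. The idea: actions describe what they need and what they change, and a tiny planner chains together whatever actions fit the current situation.

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    using WorldState = std::map<std::string, bool>;

    struct Action {
        std::string name;
        WorldState preconditions;  // what must be true before the action can run
        WorldState effects;        // what the action makes true (or false)
    };

    // True if every fact in 'conditions' holds in 'state'.
    bool satisfied(const WorldState& state, const WorldState& conditions) {
        for (const auto& [fact, value] : conditions) {
            auto it = state.find(fact);
            if (it == state.end() || it->second != value) return false;
        }
        return true;
    }

    // Extremely naive forward "planner": apply any applicable action whose
    // effects aren't achieved yet, until the goal holds (or we give up).
    // A real planner would search properly; the point here is only that new
    // actions extend what the A.I. can handle without new hand-written logic.
    std::vector<std::string> plan(WorldState state, const WorldState& goal,
                                  const std::vector<Action>& actions) {
        std::vector<std::string> steps;
        for (int i = 0; i < 16 && !satisfied(state, goal); ++i) {
            for (const Action& a : actions) {
                if (satisfied(state, a.effects)) continue;         // already done
                if (!satisfied(state, a.preconditions)) continue;  // can't run yet
                for (const auto& [fact, value] : a.effects) state[fact] = value;
                steps.push_back(a.name);
                break;
            }
        }
        return steps;
    }

    int main() {
        std::vector<Action> actions = {
            {"PickUpKey",  {{"keySpotted", true}},  {{"hasKey", true}}},
            {"UnlockDoor", {{"hasKey", true}},      {{"doorLocked", false}}},
            {"OpenDoor",   {{"doorLocked", false}}, {{"doorOpen", true}}},
        };
        WorldState now = {{"keySpotted", true}, {"doorLocked", true}};
        for (const std::string& step : plan(now, {{"doorOpen", true}}, actions))
            std::cout << step << "\n";  // PickUpKey, UnlockDoor, OpenDoor
    }

The nice thing about a setup like this is that adding a new action (say, a hypothetical ‘BreakWindow’) automatically extends what the A.I. can handle, without anybody writing a new chain of if-statements for it.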

What do I mean when I say ‘generic’? I believe you’ve heard this term more often in a negative context than in a positive one, especially when it is used in relation to game design. Generic worlds feel uninspired, dead and wooden. But behaviors don’t. For example, if you know how to use one doorknob, you know how to use them all, even if they have a different shape, color or door attached to them. If I showed you a picture of a tree in my backyard, you’d recognize it as a tree, even if you had never seen that particular tree before. So the solution is to break up all possible actions, behaviors, perceptions and whatnot into smaller, generic packages that can be used as components to build more complex actions and behaviors. Of course, this isn’t applicable to each and every action and situation an A.I. might encounter in a game, so some of those components have to be bigger and more specialized than others, simply because breaking up specialized behaviors like squad tactics would put more load on the CPU (and the programmers) than it’s worth.
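To give you a rough idea of what such building blocks could look like in code, here is another purely illustrative C++ sketch (again, none of these names are real Apokalypsos code): two small components that don’t care which particular door or knob they’re dealing with, and a composite that glues them together.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // A behaviour is simply "try to do your thing with this target and
    // report whether it worked".
    using Behaviour = std::function<bool(const std::string& target)>;

    // Generic building blocks: they don't care *which* door or knob it is.
    bool moveTo(const std::string& target) {
        std::cout << "moving to " << target << "\n";
        return true;
    }
    bool useDoorknob(const std::string& target) {
        std::cout << "turning the knob of " << target << "\n";
        return true;  // one doorknob works like every other doorknob
    }

    // A composite behaviour: run the children in order, stop at the first failure.
    Behaviour sequence(std::vector<Behaviour> children) {
        return [children](const std::string& target) {
            for (const Behaviour& child : children)
                if (!child(target)) return false;
            return true;
        };
    }

    int main() {
        // "Open a door" isn't hand-written for every door in the world;
        // it is assembled from the same generic pieces every time.
        Behaviour openDoor = sequence({moveTo, useDoorknob});
        openDoor("the red kitchen door");
        openDoor("a rusty bunker hatch");
    }

The payoff is reuse: the same small pieces can show up in dozens of bigger behaviors, while something like full squad tactics would stay a single, more specialized block, exactly as described above.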

More on that topic, and many others, in the next episode. I hope you enjoyed this one and that you’re hungry for more ;-)

or in other words:

To be continued…
