AI (Artificial Intelligence) algorithms provide robustness, flexibility and variety across a wide range of tasks. They are particularly useful when developing software agents: autonomous programs capable of using rational decision-making or reactive methods to interact with features of their environment. Recent research at CIS applies these methods to 'sandbox' environments, typically small worlds with interesting or complex features, and classic video games have proved a rich source of such environments.
While their challenges may seem simple, even trivial, to humans, video games are often highly complex for computers to comprehend and solve. This is due to a human's ability to easily grasp the nature of a game: video games are designed by people, for people, so a human can intuitively abstract the facets of control and gameplay and apply their knowledge or skill appropriately. Computers, on the other hand, cannot immediately make such abstractions, and this is what gives video games their novelty to AI researchers. It is of growing interest to the community whether methods such as reactive systems and automated planning can yield solutions that reason intuitively on the same level as, if not better than, human players. The hidden benefit is that in solving these local (possibly abstract) problems, the techniques applied can then be carried over to more complex scenarios in the real world. Furthermore, developers in the video game industry are beginning to take note and apply these techniques in their own commercial software, providing consumers with a more rewarding and enjoyable experience.
Previous work by Thompson, Levine and Hayes explored the training of agents in the EvoTanks domain, an environment heavily inspired by the game 'Combat' (released for the Atari 2600 in 1977). Given the task of learning how to defeat enemy tanks within the environment, this preliminary research found that by applying a co-evolutionary computational model, in which agents learn to perform a task by playing against each other, agents were capable of learning to defeat a variety of enemy combatants.
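The co-evolutionary loop itself can be sketched in a few lines. The snippet below is a toy stand-in rather than the actual EvoTanks code: the "game" inside `fitness`, the population size and the mutation rate are all invented for illustration. The essential point is that each population is scored only against the other, so the learning task shifts as the opponents improve:

```python
import random

def fitness(agent, opponents):
    # Toy stand-in for playing a game: the agent "wins" a match when its
    # single parameter lands close enough to the opponent's. In EvoTanks
    # this would be the outcome of an actual tank battle.
    return sum(1 for opp in opponents if abs(agent - opp) < 0.5)

def step(pop, opponents, mutation):
    # Rank agents by performance against the opposing population,
    # keep the better half, and refill with mutated copies of them.
    scored = sorted(pop, key=lambda a: fitness(a, opponents), reverse=True)
    parents = scored[: len(pop) // 2]
    children = [p + random.gauss(0, mutation) for p in parents]
    return parents + children

def coevolve(pop_a, pop_b, generations=50, mutation=0.1):
    # Co-evolution: each population is evaluated only against the other,
    # so there is no fixed fitness function to overfit to.
    for _ in range(generations):
        pop_a = step(pop_a, pop_b, mutation)
        pop_b = step(pop_b, pop_a, mutation)
    return pop_a, pop_b
```

The appeal of this arms-race setup is that neither population needs a hand-written opponent to train against.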
Since then this work has continued along several avenues, applying these methods in other environments as well as expanding our previous work. Our work on EvoTanks continues, with the focus on learning how to scale up complex behaviours by combining several smaller facets or features of an agent. For example, we may want a tank that can visit waypoints while avoiding obstacles as well as nearby shells. Recent results have shown that by merging individual features in subsumption architectures (in which 'more important' features override others), we can generate more intelligent agents that react and move based on a variety of data from their game world.
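In code, a subsumption-style controller amounts to little more than a prioritised list of behaviours. The sketch below, whose behaviour and action names are invented for the example rather than taken from the EvoTanks implementation, shows the key mechanism: the first layer to produce an action subsumes (overrides) every layer below it:

```python
def avoid_shell(state):
    # Highest priority: dodge an incoming shell.
    if state.get("shell_nearby"):
        return "dodge"
    return None  # no opinion; defer to lower layers

def avoid_obstacle(state):
    # Middle priority: steer around obstacles.
    if state.get("obstacle_ahead"):
        return "turn"
    return None

def seek_waypoint(state):
    # Lowest priority: the default behaviour when nothing overrides it.
    return "move_to_waypoint"

# Layers listed from most to least important.
LAYERS = [avoid_shell, avoid_obstacle, seek_waypoint]

def act(state):
    # Return the action of the highest-priority layer with an opinion.
    for layer in LAYERS:
        action = layer(state)
        if action is not None:
            return action
```

Because each layer is small and self-contained, new behaviours can be slotted into the priority order without rewriting the others, which is what makes the approach attractive for scaling up.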
We have also expanded our research into other game worlds, intent on taking on new challenges. Two projects of note apply AI techniques to two classic video games: Asteroids (the 1979 arcade game by Atari) and Pac-Man (released by Namco in 1980). Each game provides features and challenges as yet untouched in the EvoTanks research, and both have given students the opportunity to sample work akin to our AI endeavours here in the department.
The Asteroids project was undertaken by Hans Majury, a current final-year student. The project sought to assess the relative effectiveness of 'scripted' (i.e. pre-programmed) agents, learning AI agents and human players at playing Asteroids. Applying an evolution-based learning model, the agents learned how to capably and efficiently destroy nearby asteroids and rack up a high score, while at the same time learning to navigate in a simulated vacuum. The final results show AI agents capable of performing better than human players (well, better than us, anyway).
Lewis McMillan stepped up to the task of developing an AI-based Pac-Man player as part of his final year, the novelty of his research being the means to eat the 'pills' lying across game levels while avoiding (or indeed eating) the enemy ghosts that haunt the game. Using a blend of rule-based systems and graph-traversal algorithms, the agent builds a 'graph' of the game world to learn how to traverse it, and applies a list of rules to react to ghosts as they interfere with Pac-Man's path. While our work does not garner the greatest score ever (held by Billy Mitchell at 3,333,360), we have produced an agent capable of performing better than novice human players using relatively little computational power.
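The general idea can be sketched as follows; the graph encoding, the single flee rule and the function names here are illustrative assumptions rather than details of Lewis's implementation. A breadth-first search over the level graph steers Pac-Man towards the nearest pill, and a reactive rule overrides that plan whenever a ghost is adjacent:

```python
from collections import deque

def bfs_next_step(graph, start, targets):
    # Breadth-first search over the level graph: returns the first move
    # along a shortest path from `start` to any node in `targets`.
    frontier = deque([(start, None)])
    seen = {start}
    while frontier:
        node, first = frontier.popleft()
        if node in targets:
            return first
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, first if first is not None else nbr))
    return None  # no target reachable

def choose_move(graph, pacman, pills, ghosts):
    # Rule layer: if a ghost occupies an adjacent node, flee to any safe
    # neighbour; otherwise follow the graph search towards the nearest pill.
    if any(nbr in ghosts for nbr in graph[pacman]):
        safe = [n for n in graph[pacman] if n not in ghosts]
        return safe[0] if safe else pacman
    return bfs_next_step(graph, pacman, pills)
```

Separating the slow, global pathfinding from the fast, local ghost rules is what keeps the computational cost low: the expensive search only runs when no rule fires.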
Given the success of this year's individual projects, we hope to provide a variety of new challenges for students next year. In the near future our first-year research students hope to embark on a new endeavour, applying AI methods to larger commercial games, possibly in the first-person genre. Should anyone be interested in learning more about this work, please contact Thomas Thompson, Alastair Andrew or John Levine.