Programming Assignment:

Multi-Agent Capture The Flag

Due Date: Friday, November 10


0. Download

Fixes/Updates: Get the Java CTF package: "" for PC, or hw4.tar.gz for Unix.

1. Introduction

This assignment asks you to develop a knowledge-based agent that must act logically in an unfamiliar external world. Though the inner workings of knowledge-based agents are simple and easy to understand, complex and intelligent behavior can emerge. If programmed correctly, such an agent can determine an optimal action given its limited information about the state of the external world. Furthermore, a simple, deterministic knowledge-based approach can be combined with other approaches to yield behavioral results that can be astounding. In this assignment, you will explore these issues. (For more information about knowledge-based agents (e.g., Wumpus World), see Ch. 6-8 in the text.)

2. Background

Capture The Flag (CTF) is a popular outdoor education game in which teams of players attempt to capture the flag of the other team. Each team has a base location, and the team's flag is usually located on or near the base. A player captures the flag of the enemy team by stealing the flag from the enemy base and making it back to his/her own home base without being tagged by the enemy. When a player is tagged, the player must return to his/her home base. CTF for humans is best played on interesting wilderness terrain that contains forests, fields, and creeks. CTF for agents is best played in a two-dimensional grid world.

3. The CTF Framework

To facilitate the development of knowledge-based agents for CTF games, we've developed a Java CTF framework. All code for processing the board state, handling agent moves, and board visualization (GUI) has already been written. Your assignment is to write agent code that can be plugged into this framework. Because you only need to implement one part of the system, the required knowledge of Java is minimal for this assignment. Anyone who is familiar with Java or C-style syntax and can mimic an example implementation should have no problem. We provide you with a graphical environment that allows you to configure game start states and select agent implementations to use for each team. Once you implement your own Agent subclass, you can easily load your agent into the graphical environment to test its performance against other types of agents, or even against itself.

4. CTF Rules

As mentioned above, agents in this framework play CTF in a two-dimensional grid world. The framework supports only two-team games. The environment is made up of an NxN grid of board spaces, and each space in the grid is either open or blocked by an obstacle. Each board has a home base location for each of the two teams, situated at the centers of the far east and west edges of the grid. Each board also has a flag for each team, which starts out at the team's home base location. Agents start the game aligned vertically with their own base/flag, and as far north and south as possible (if teams have more than two agents, the agents are spread so that at most one agent occupies each grid space).

During each round of play, all agents on the board are queried for a move. Possible moves include the following: If the move specified by an agent is not blocked by an obstacle (or by the edge of the board), the agent is moved to the new position.

Play proceeds according to the following rules: To make the agent move process deterministic and fair, during each time step every agent makes a move, but agents' moves are processed in alternating team order. Thus, if team A = { A1, A2, A3 } and team B = { B1, B2, B3 }, one possible ordering of agent move processing would be A1, B1, A2, B2, A3, B3. Between time steps, teams alternate being the first to have a move processed.
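The alternating processing order described above can be sketched in plain Java. This is an illustration only: the class and method names below are ours, not part of the CTF framework.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the alternating move-processing order: teams take
// turns contributing their next agent. "aFirst" says which team has
// its move processed first on this time step; it flips between steps.
public class MoveOrder {
    public static List<String> processingOrder(
            List<String> teamA, List<String> teamB, boolean aFirst ) {
        List<String> order = new ArrayList<String>();
        int n = Math.max( teamA.size(), teamB.size() );
        for ( int i = 0; i < n; i++ ) {
            String first  = aFirst ? get( teamA, i ) : get( teamB, i );
            String second = aFirst ? get( teamB, i ) : get( teamA, i );
            if ( first != null )  order.add( first );
            if ( second != null ) order.add( second );
        }
        return order;
    }

    // Returns null when one team has fewer agents than the other.
    private static String get( List<String> team, int i ) {
        return i < team.size() ? team.get( i ) : null;
    }
}
```

For example, `processingOrder([A1,A2,A3], [B1,B2,B3], true)` yields the ordering A1, B1, A2, B2, A3, B3 from the text; on the next time step the flag flips and team B's agents go first.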

5. Writing Agent Code

We have written an abstract Agent class. For those unfamiliar with Java or object-oriented programming, an abstract class is one that only defines interfaces for certain methods (and doesn't define bodies for these methods). An abstract class is not meant to be instantiated directly (since calling an un-implemented method doesn't make sense), but is meant to be subclassed. Thus, to write code for your agent, you should create a class that extends Agent. For example:
public class Jcr13Agent extends Agent {
The Agent class contains only one abstract method, which means that to implement your agent, you only need to implement one method. The abstract method as declared in Agent is as follows:
public abstract int getMove( AgentEnvironment inEnvironment );
Thus, in our Jcr13Agent class, we can implement this method as follows:
public int getMove( AgentEnvironment inEnvironment ) {
	... our code here ...
}
The getMove method takes an AgentEnvironment object and must return an integer that represents the move the agent is trying to make (one of the six moves listed above, see integer constants defined in AgentAction in the project javadoc). An AgentEnvironment object contains all the information an agent can access about the environment state. The information is agent-centric, in that the point of reference is the agent's location. All information about the environment is obtained from methods in AgentEnvironment that return boolean values describing whether a particular property is true or false. For example, the "isFlagNorth" method returns true iff the flag is located to the north of the agent. For each time step, your agent's getMove method will be called, and your agent will be passed a new AgentEnvironment describing the new environment state. Note that the information provided by AgentEnvironment is quite limited in that it only provides cardinal direction (N, S, E, W) information about various board features. Also, no methods in AgentEnvironment deal with proximity mines, which effectively means the mines are invisible for all agents. To browse a complete list of methods available in AgentEnvironment, see the project javadoc.
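To make the agent-centric, boolean style of decision-making concrete, here is a self-contained sketch of a naive getMove policy. Since the real ctf.common classes are not reproduced here, the stand-in Env interface and the move constants below are hypothetical placeholders; only isFlagNorth is documented above, and the real constants live in AgentAction (see the project javadoc).

```java
// Sketch only: in real code you would extend Agent and receive an
// AgentEnvironment. Env and the constants below are placeholders.
public class GreedyMoveSketch {
    static final int MOVE_NORTH = 0;  // placeholder constants; the real
    static final int MOVE_EAST  = 1;  // ones are defined in AgentAction
    static final int DO_NOTHING = 2;

    // Stand-in for AgentEnvironment's boolean, agent-centric queries.
    interface Env {
        boolean isFlagNorth();  // documented: true iff flag is north of us
        boolean isFlagEast();   // assumed by analogy with isFlagNorth
    }

    // Naive greedy policy: step toward the flag, ignoring obstacles
    // and other agents entirely (much like Jcr13Agent, described later).
    public static int getMove( Env env ) {
        if ( env.isFlagNorth() ) return MOVE_NORTH;
        if ( env.isFlagEast() )  return MOVE_EAST;
        return DO_NOTHING;
    }
}
```

The point of the sketch is the shape of the logic: every decision must be built from yes/no answers about cardinal directions, since that is all the environment reveals.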

You should create only one .java file for this project. You should name the file "NetIDAgent.java", where "NetID" is the net-id of one of your team members. Your Java file should contain a Java class called "NetIDAgent" that extends Agent as mentioned above. Your class should be in the package "ctf.agent". Classes that your agent can access are in "ctf.common". Thus, if your net-id was jwa6, you would create a file called "Jwa6Agent.java", and the top few lines of your file could look like:
package ctf.agent;

import ctf.common.*;

public class Jwa6Agent extends Agent {
Note that your Agent subclass must also have a parameterless constructor. An easy way to meet this requirement would be to write no constructor at all.

For an example class that does all of this correctly, see "Jcr13Agent.java", which is in "ctf/agent". You should save your Java file in the "ctf/agent" directory also. Please DO NOT turn in more than one .java file. Note that Java forbids defining more than one public class in each .java file. If you feel that you need to implement other classes besides the one that extends Agent (e.g., if you want to write advanced data structures), then you should do this with inner classes (basically, a class defined inside the body of your Agent subclass). See the project javadoc for more information about subclassing Agent.
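The inner-class pattern suggested above might look like the following sketch. The names here are illustrative (your real class would extend Agent and live in package ctf.agent); the point is that helper data structures can live inside your one .java file.

```java
// Single-file pattern: helper data structures as inner classes.
public class InnerClassSketch {
    // Inner class acting as a simple helper data structure, e.g. for
    // tracking board positions your agent has inferred.
    private static class Cell {
        final int row, col;
        Cell( int row, int col ) { this.row = row; this.col = col; }
        int manhattanTo( Cell other ) {
            return Math.abs( row - other.row ) + Math.abs( col - other.col );
        }
    }

    // Example use: distance between the two bases on a 10x10 board,
    // which sit at the centers of the far east and west edges.
    public static int demoDistance() {
        Cell westBase = new Cell( 5, 0 );
        Cell eastBase = new Cell( 5, 9 );
        return westBase.manhattanTo( eastBase );  // 9 columns apart
    }
}
```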

For fairness, please DO NOT perform any IO operations in your agent code. In other words, reading from files is prohibited. To make sure your agent class adheres to this rule, we will be grep'ing all incoming .java files for "" and "" import statements.

6. Loading Agents in the Test Environment

To test your agent code, put your "NetIDAgent.java" file in the ctf/agent directory. To compile the file (because it is in package ctf.agent), change to the directory containing "ctf" and type:
javac -cp . ctf/agent/NetIDAgent.java
This tells the java compiler to set the user class path to the current directory (the directory containing "ctf") and to compile your .java file.

After compiling, your "NetIDAgent" class will be loaded each time the test environment is launched. However, you need to tell the environment that your class should be listed as one of the available agents. To do this, edit the file "AgentClasses.txt", and add an entry for your class (follow the pattern from the classes that are already listed in the file). Note that you must increment the number at the top of the file so that it reflects the number of classes listed in the file (the file that we've included has three classes listed in it, so the number "3" is on the first line of the file).

7. Running the Test Environment

To launch the environment, change to the directory containing "ctf" and type:
java -cp . ctf.environment.TestPlaySurface
The board is initialized using default settings, which include a randomized obstacle map. The game starts out paused. The control frame is filled with various options: most of these should be self-explanatory. Note that the board is reset to the game start state every time an option is changed that could affect board state integrity (e.g., switching the obstacle map). When using the "Random" board set, reselecting "Random" from the list will generate a new randomized board state. Playing with the various options should give you a feel for how they work, so we won't explain them any further here. Below is a screen shot of the CTF GUI:

8. Testing Your Agents

Note that though teams larger than two agents each are supported by the test environment, you will be graded only on how your agents perform in teams of two (situations get very complicated when 3 or more agents are on each team).

Three Agent subclasses are provided for testing purposes. Jcr13Agent is a simplistic agent that naively chases after the enemy flag until it gets it, then naively chases its own home base until it gets there. Jcr13Agent doesn't pay attention to any other agents (enemies or teammates), but it does do some rudimentary obstacle navigation (though it gets permanently stuck in certain maps). Jcr13Agent doesn't plant any mines. RandomAgent and BomberAgent are two other test agents that don't need any explanation.

Note that, of the provided agents, you will be graded only on your performance against RandomAgent and Jcr13Agent. BomberAgent is provided merely for your own amusement (or to convince you why planting lots of mines is a bad idea). You will also be graded based on your performance against our secret test agents.

Five fixed-obstacle maps (10x10) are provided. You will be graded based on how your agents perform on these maps, and also on how they perform on one additional map of undisclosed size and obstacle distribution. The "Random" map setting is provided only to give you an idea of the possibilities, but you will not be graded on how your agents perform on random maps (random maps tend not to be fair to both teams).

Maps are stored in the "sets" directory. If you want to create your own maps, add text files to this directory that mimic the format of the files already there (the first number in the file is the dimension of the board). This directory is scanned automatically whenever the test environment is launched, so your custom map will appear in the drop-down list of board sets.
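The only documented fact about the map file format is that the first number is the board dimension; the rest of the layout should be copied from the sample files in the "sets" directory. A minimal sketch of reading just that header (class and method names are ours):

```java
import java.util.Scanner;

// Sketch only: reads the documented part of a map file, the leading
// board dimension. The remainder of the format is not specified in
// this handout, so mimic the existing files in "sets" for the rest.
public class MapHeader {
    public static int readDimension( String fileContents ) {
        Scanner s = new Scanner( fileContents );
        try {
            return s.nextInt();  // first token: N, for an NxN board
        } finally {
            s.close();
        }
    }
}
```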

9. What to Hand In

You should turn in a single "NetIDAgent.java" file (as described above). Make sure your file compiles before you turn it in, and make sure that your class is in package ctf.agent (see above). Turn in a printed version of your code along with homework 4. Please turn it in as a separate part (not stapled to the other parts). Also, for purposes of our tournament and grading, you MUST FTP-upload your .java file IN TEXT MODE to:
login: cs472
pass:  deepblue
To put the DOS-prompt FTP client into text mode, connect to playpen and then type "ascii". If you need to re-upload, name the file "", etc.

Your "NetIDAgent.java" should contain neatly formatted comments that explain how your agent works. These comments should include a block of English text at the top describing your agent's strategy, as well as comments throughout the code as necessary. The first few lines of your file should be a comment listing the names of your group's members.

10. Grading

We will be running your agents against RandomAgent, Jcr13Agent, and SecretAgent on the five provided maps and on one extra, undisclosed map. We will also be running your agents against other agents from the class in a round-robin tournament. Each agent from the class will play each other agent for one round on each of the six maps. A round consists of play up to the first flag capture, with the team capturing the flag winning the round. To prevent these games from going on forever, a step limit of 2 * boardSize^2 steps will be imposed. Thus, on the 10x10 boards, a round will be declared a draw after 200 steps. During the class tournament, draws count as no points for either team. However, during rounds against the test agents, a draw counts as a win for the test agent (the test agent gets 1 point and your agent gets none).
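The step-limit rule above is simple arithmetic; a one-method sketch (names are ours) makes the formula explicit:

```java
// The draw rule: a round is declared a draw after 2 * boardSize^2
// steps, e.g. 200 steps on a 10x10 board.
public class StepLimit {
    public static int stepLimit( int boardSize ) {
        return 2 * boardSize * boardSize;
    }
}
```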

Grading will be out of 100 points with the following distribution:

11. Extras

This assignment requires that you use a knowledge-based approach. However, this doesn't forbid you from using other approaches in addition. You're free to implement any kind of data structure inside your agent, and you can use any method you can think of for determining the correct move to make in a given situation. As a long shot, we should mention that learning algorithms might even be possible if you modify the framework to suit your training needs. Of course, this would require a lot of extra work, and for the finished product you would need to hard-code the learned parameters into your agent code (since you can't turn in a modified framework or read anything from a file).

If you're feeling artistically inclined, you can implement a custom icon for your agent. You can do this by overriding the drawIcon method.