Assignment 2: Ms. Pac-Man

This assignment is worth 20 points and has two parts.

1. Uniform-cost search (10 pts)

Implement uniform-cost search in Java.

To begin, ensure that you have Java version 11 or higher. Clone the ai_1 repository, which contains files for this course's assignments. Look in the src/search subdirectory, which contains the source files for the classes described in this section.

You will write a general-purpose implementation that can search any problem that implements this interface:

// S = state type, A = action type
interface Problem<S, A> {
  S initialState();
  List<A> actions(S state);
  S result(S state, A action);
  boolean isGoal(S state);
  double cost(S state, A action);        
}

Build your implementation in src/Ucs.java, in this class:

// uniform-cost search
class Ucs<S, A> {
  public static <S, A> Solution<S, A> search(Problem<S, A> prob) {
    // your implementation here
  }
}

The search() method should return a Solution<S, A> if a solution was found, otherwise null. Solution<S, A> contains these fields:

class Solution<S, A> {
  public List<A> actions;  // sequence of actions from the start state to the goal state
  public S goalState;      // goal state that was reached
  public double pathCost;  // total cost of the path from the start state to the goal
}

Your search method could be used like this, for example:

  Problem<Integer, Integer> p = new MyPuzzle();
  Solution<Integer, Integer> result = Ucs.search(p);
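
Here, MyPuzzle is any class implementing Problem. As a purely hypothetical illustration (not one of the test problems provided in the repository), here is a toy puzzle whose states are integers, and where each action adds 1 or 2 to the current state:

import java.util.List;

// Toy problem: start at 0 and reach 10; each action adds 1 or 2,
// and the cost of an action is the amount added.
class MyPuzzle implements Problem<Integer, Integer> {
  public Integer initialState() { return 0; }
  public List<Integer> actions(Integer state) { return List.of(1, 2); }
  public Integer result(Integer state, Integer action) { return state + action; }
  public boolean isGoal(Integer state) { return state == 10; }
  public double cost(Integer state, Integer action) { return action; }
}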

The src/problems subdirectory includes several Problem instances that you can use for testing.

It would be wise to ensure that your uniform-cost search implementation works on these problems before attempting to use it in your Ms. Pac-Man agent. Run the tests in UcsTest.main().

hints

When your search generates a state that is already in the frontier, there are several ways you can handle it:

    1. Check whether the new path is cheaper than the existing path. If it is, you want to record it, but you cannot simply update the fields of the existing frontier node, since that will not change its position in the priority queue. Instead, you can remove the existing frontier node from the queue and add a new node representing the new, cheaper path. With this approach, you will never have two nodes in the queue that represent the same state. However, removing an existing node is potentially slow, since the remove method of PriorityQueue runs in O(N).

    2. Check whether the new path is cheaper than the existing path. If it is, then add a new node representing the new and cheaper path, but leave the existing node in place in the queue as well, to avoid the expensive remove operation. With this approach, there may be multiple nodes in the queue that have the same state. So whenever you remove a node from the queue, you must check whether its state is already in the explored set. If so, you have already handled that state, so you should ignore the node.

    3. Always add a new node representing the new path, even if it is more expensive than the existing path. With this approach, you will have more duplicate nodes in the queue than in approach (2). As in approach (2), when you remove a node from the queue you will need to check whether it has already been explored.

    With approach (1) or (2), you will need some way to quickly check whether a given state is in the frontier so that you can find its existing path cost. One way to do that is to use a HashMap<S, Node<S>> to keep a map from states to frontier nodes.

    I recommend approach (2) or (3), since (1) will become slow as the frontier grows large. A sketch of approach (3) appears below.
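
For concreteness, here is a minimal sketch of approach (3), assuming a small helper class Node that records a state, a parent link, the action taken, and the path cost so far. This is one possible way to structure the code, not the required implementation:

import java.util.*;

// uniform-cost search with lazy deletion (approach 3)
class Ucs<S, A> {
  private static class Node<S, A> {
    final S state; final Node<S, A> parent; final A action; final double cost;
    Node(S state, Node<S, A> parent, A action, double cost) {
      this.state = state; this.parent = parent; this.action = action; this.cost = cost;
    }
  }

  public static <S, A> Solution<S, A> search(Problem<S, A> prob) {
    PriorityQueue<Node<S, A>> frontier =
      new PriorityQueue<>((m, n) -> Double.compare(m.cost, n.cost));
    Set<S> explored = new HashSet<>();
    frontier.add(new Node<>(prob.initialState(), null, null, 0.0));

    while (!frontier.isEmpty()) {
      Node<S, A> n = frontier.poll();
      if (!explored.add(n.state))      // stale duplicate: state already expanded
        continue;
      if (prob.isGoal(n.state)) {
        Solution<S, A> sol = new Solution<>();
        sol.actions = new ArrayList<>();
        for (Node<S, A> m = n; m.parent != null; m = m.parent)
          sol.actions.add(0, m.action);  // rebuild the action sequence from parent links
        sol.goalState = n.state;
        sol.pathCost = n.cost;
        return sol;
      }
      for (A a : prob.actions(n.state)) {
        S next = prob.result(n.state, a);
        if (!explored.contains(next))
          frontier.add(new Node<>(next, n, a, n.cost + prob.cost(n.state, a)));
      }
    }
    return null;   // frontier exhausted: no solution
  }
}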

2. Ms. Pac-Man (10 pts)

In this part of the assignment, you will write an agent that plays Ms. Pac-Man. Your uniform-cost search implementation from part (1) above may be helpful.

Clone the Ms. Pac-Man source code from the repository at MsPacMan-vs-Ghosts-AI. When you visit that page on GitHub, be sure to read the entire page – it has lots of useful information about the game, including a link to the API documentation.

Write your agent in src/MyAgent.java, which initially holds a dummy implementation.

Assuming you want to use uniform-cost search, I recommend the following basic approach. In your agent code, create an instance of the Problem<S, A> interface above and call Ucs.search() on every game tick. Perform a uniform-cost search over the physical maze graph, not the abstract tree of game states. You can think of the physical graph as a simplified state space in which each state contains only Ms. Pac-Man's position, not additional game state such as the ghost positions.

In the API, each position in the maze is a node in the maze graph. The initial node for your search should be Ms. Pac-Man's current position. Usually every pellet can be a goal node, except perhaps a pellet that is close to a non-edible ghost. Edible ghosts and fruit also make excellent goals to aim for.

When Ucs.search() returns a Solution, the first action in the solution is the direction that Ms. Pac-Man should go.

Assign costs to graph edges as necessary to avoid going along undesirable paths. Any edge that ends at a non-edible ghost should have a very high cost. You may also want to give high costs to edges that are close to ghost positions, or that move in a direction toward the ghosts, or that reach goal nodes that are too close to ghosts. If you want to be smarter, consider not only where the ghosts are now, but where they are heading.
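
Putting these pieces together, a maze-graph search problem might look roughly like the sketch below. The GameView interface and all of its methods are hypothetical placeholders, not part of the actual framework; map them onto the corresponding methods of the real game API (see the API documentation linked from the project page):

import java.util.List;

// Hypothetical view of the game state; substitute the real API calls.
interface GameView {
  int currentNode();                    // maze node where Ms. Pac-Man stands
  List<Integer> neighborsOf(int node);  // adjacent maze nodes
  boolean hasPellet(int node);          // true if a pellet remains at this node
  int ghostDistance(int node);          // distance to the nearest non-edible ghost
}

// States are maze node indices; an action is the neighboring node to move to.
class MazeProblem implements Problem<Integer, Integer> {
  final GameView game;

  MazeProblem(GameView game) { this.game = game; }

  public Integer initialState() { return game.currentNode(); }

  public List<Integer> actions(Integer node) { return game.neighborsOf(node); }

  public Integer result(Integer node, Integer neighbor) { return neighbor; }

  public boolean isGoal(Integer node) {
    // a pellet is a goal unless a non-edible ghost is too close to it
    return game.hasPellet(node) && game.ghostDistance(node) > 5;
  }

  public double cost(Integer node, Integer neighbor) {
    int d = game.ghostDistance(neighbor);
    if (d == 0) return 1_000_000;  // stepping onto a non-edible ghost
    if (d < 5)  return 1_000;      // uncomfortably close to a ghost
    return 1;                      // ordinary step
  }
}

On each tick, build the problem from the current game state, call Ucs.search() on it, and if the result is non-null, move toward the first node in its actions list. The distance threshold (5) and penalty costs above are arbitrary starting points for you to tune.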

hints

    1. You can copy the files and directories from ai_1/src into MsPacMan-vs-Ghosts-AI/src. Then all the ai_1 sources will be available inside the Ms. Pac-Man project.

      This is the most direct method, though it's a bit inelegant. If you ever run 'git pull' in ai_1 to update the sources there, you may have to copy them into MsPacMan-vs-Ghosts-AI/src again. Furthermore, git will report all of these copied files as new inside the MsPacMan-vs-Ghosts-AI project unless you add them to .gitignore.

    2. Alternatively, you can add the ai_1 project as a dependency of MsPacMan-vs-Ghosts-AI. To take this approach:

      1. Edit MsPacMan-vs-Ghosts-AI/pom.xml and add the dependency there:

<project>
    ...
    <dependencies>
        <dependency>
            <groupId>cz.cuni.mff</groupId>
            <artifactId>ai-1</artifactId>
            <version>0.1</version>
        </dependency>
    </dependencies>
</project>

        With this dependency in place, the Java compiler will be able to find the ai_1 classes when it compiles your Ms. Pac-Man agent. You will need to have both projects (ai_1 and MsPacMan-vs-Ghosts-AI) loaded in your IDE.

        (Alternatively, your IDE may provide other ways to specify a project dependency. However, this approach of adding the dependency to the Maven project file should work in any IDE.)

      2. Edit the mspac (or mspac.bat) startup script and add ../ai_1/target/classes to the classpath there. The mspac script will then look like this:

          java -cp target/classes:../ai_1/target/classes MsPacMan $*

        Or, on Windows, mspac.bat will look like this:

          java -cp target/classes;../ai_1/target/classes MsPacMan %*

scoring

ReCodEx will test your agent by playing 50 games, each with a different random seed. Note that in each game Ms. Pac-Man starts with 3 lives and may progress through many levels.

To score 10 points for this part of the assignment, your agent must score an average of at least 12,000 points per game. ReCodEx will give you partial credit for lower scores, but will award no credit for any agent that averages less than 6,000 points per game.

To run the testing process yourself, run

$ ./mspac MyAgent -sim 50

This will run 50 random games and report your agent's average score on these games.

ReCodEx must be able to evaluate your agent on 50 games within a time limit of 100 seconds, so your agent must not use more than 2 seconds of CPU time per game on average. This means that your average response time will probably have to be significantly less than the 40 ms limit, especially if your agent often lasts through many levels.

All agents submitted before the soft deadline for this assignment will be entered into a tournament. Entries will be ranked by highest average score. In the tournament, I will evaluate agents by playing as many random games as necessary to ensure that the differences in average scores are statistically significant. Also, be aware that I will use a random starting seed, not seed 0, which the -sim option uses by default.