Week 8: Notes

exhaustive search in graphs

Last week we saw how to use an exhaustive search to solve the N queens problem. As another example of a problem that we can solve using exhaustive search, let's consider the 3-coloring problem. Given an undirected graph, we'd like to determine if there is any way to label each of its vertices with one of 3 colors such that no two adjacent vertices have the same color.

The related 2-coloring problem (which is the same as determining whether a graph is bipartite) is computationally easy, and can be solved in linear time via a depth-first or breadth-first search. However 3-coloring appears to be computationally harder. Specifically, the 3-coloring problem is one of hundreds of problems that are known to be NP-complete. You will study NP-completeness in more advanced classes. Here, I'll only say that no polynomial-time algorithm is known for any NP-complete problem, and it seems likely that none exists (though nobody has yet been able to prove that fact).

We can look for a 3-coloring using an exhaustive search. We will not perform a depth-first or breadth-first search of the graph. Instead, we will use recursion to explore a tree of possibilities in which we first assign a color to the first vertex, then assign a color to the second and so on. For each vertex, we will only consider assigning colors that don't violate any constraints, i.e. that aren't the same as any color already assigned to any neighboring vertex.

If a graph has V vertices, then there are 3^V possible ways to label those vertices with 3 colors, so the search will run in O(3^V) time in the worst case. In practice, our search will not consider all 3^V possibilities, because it will backtrack whenever it finds that it cannot assign any possible color to the next vertex.

Here is an implementation:

// Given an undirected graph in adjacency-list representation,
// print a 3-coloring of the graph, if one exists.
void color3(int[][] g) {
    int n = g.Length;
    int[] color = new int[n];

    bool valid(int i, int c) {
        foreach (int j in g[i])
            if (j < i && color[j] == c)
                return false;
        return true;
    }

    // Find colors for vertices i..(n - 1).  Return true if a coloring
    // was found.
    bool go(int i) {
        if (i == n) {
            WriteLine("colors: " + string.Join(" ", color));
            return true;
        } else {
            for (int c = 0 ; c < 3 ; ++c)
                if (valid(i, c)) {
                    color[i] = c;
                    if (go(i + 1))
                        return true;
                }
            return false;
        }
    }

    go(0);
}

Notice that our recursive helper function go() returns a boolean. As soon as it finds a 3-coloring, it returns true, which will cause all nested recursive calls to return immediately. This is often a useful technique.

We can use an exhaustive search to solve many other graph problems, many of which are also known to be NP-complete. One such problem is looking for a Hamiltonian cycle in a graph, i.e. a cycle that contains all vertices in the graph.
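As a sketch of that idea, here is one possible backtracking search for a Hamiltonian cycle (the method and helper names here are arbitrary, not from the original notes):

```csharp
using System;
using static System.Console;

// Given an undirected graph in adjacency-list representation, return a
// Hamiltonian cycle as an array of vertices, or null if none exists.
int[] hamiltonianCycle(int[][] g) {
    int n = g.Length;
    int[] path = new int[n];
    bool[] used = new bool[n];
    used[0] = true;   // a cycle visits every vertex, so we may start at vertex 0

    bool adjacent(int u, int v) => Array.IndexOf(g[u], v) >= 0;

    // Extend path[0 .. (i - 1)] to a full cycle.  Return true on success.
    bool go(int i) {
        if (i == n)
            return adjacent(path[n - 1], path[0]);   // must close the cycle
        for (int v = 1 ; v < n ; ++v)
            if (!used[v] && adjacent(path[i - 1], v)) {
                path[i] = v;
                used[v] = true;
                if (go(i + 1))
                    return true;
                used[v] = false;   // backtrack: unassign v and try another vertex
            }
        return false;
    }

    return go(1) ? path : null;
}

// A 4-cycle: 0 - 1 - 2 - 3 - 0.
int[][] graph = {
    new[] { 1, 3 }, new[] { 0, 2 }, new[] { 1, 3 }, new[] { 2, 0 }
};
WriteLine(string.Join(" ", hamiltonianCycle(graph)));   // 0 1 2 3
```

Note one difference from color3: because used[] records which vertices appear anywhere in the current path, the search must explicitly undo its choice (used[v] = false) when it backtracks. In color3 no such reset was needed, since valid() only ever inspects the colors of vertices before i.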

dynamic programming

Dynamic programming is a general technique that we can use to solve a wide variety of problems. Many of these problems involve optimization, i.e. finding the shortest/longest/best solution to a certain problem.

Problems solvable using dynamic programming generally have the following characteristics:

- Optimal substructure: an optimal solution can be assembled from optimal solutions to smaller subproblems.
- Overlapping subproblems: a naive recursive solution would solve the same subproblems over and over again.

Usually we can dramatically improve the running time by arranging so that each subproblem will be solved only once. There are two ways to do that:

- Top-down dynamic programming (memoization): keep the recursive structure, but cache each subproblem's answer the first time we compute it, so that repeated calls can return the cached value.
- Bottom-up dynamic programming: solve the smallest subproblems first, then use their answers to solve larger and larger subproblems, typically by filling in an array.

Generally we prefer a bottom-up solution, because

- it avoids the overhead of recursive calls, and so is usually somewhat faster;
- the order in which subproblems are solved is explicit, which can make it easier to see how much memory we really need.

We will first study one-dimensional dynamic programming problems, which have a straightforward recursive structure. After that, we will look at two-dimensional problems, which are a bit more complex.

Fibonacci numbers

Computing the Fibonacci number Fn is a trivial example of dynamic programming. Recall that the Fibonacci numbers are defined as

F1 = 1
F2 = 1
Fn = Fn-1 + Fn-2   (for n > 2)

yielding the sequence 1, 1, 2, 3, 5, 8, 13, 21, …

Here is a recursive function that naively computes the n-th Fibonacci number:

  int fib(int n) =>
      n < 3 ? 1 : fib(n - 1) + fib(n - 2);

What is its running time? The running time T(n) obeys the recurrence

T(n) = T(n – 1) + T(n – 2)

This is the recurrence that defines the Fibonacci numbers themselves! In other words,

T(n) = O(Fn)

The Fibonacci numbers themselves increase exponentially. It can be shown mathematically that

Fn = O(φ^n)

where

φ = (1 + √5) / 2

So fib runs in exponential time! That may come as a surprise, since the function looks so direct and simple. Fundamentally it is inefficient because it is solving the same subproblems repeatedly. For example, fib(10) will compute fib(9) + fib(8). The recursive call to fib(9) will compute fib(8) + fib(7), and so we see that we already have two independent calls to fib(8). Each of those calls in turn will solve smaller subproblems over and over again. In a problem with a recursive structure such as this one, the repeated work multiplies exponentially, so that the smallest subproblems (e.g. fib(3)) are computed an enormous number of times.

top-down dynamic programming (memoization)

As mentioned above, one way we can eliminate the repeated work is to use a cache that stores answers to subproblems we have already solved. This technique is called memoization. Here is a memoized implementation of fib, using a local method:

  int fib(int n) {
      int[] cache = new int[n + 1];
      cache[1] = 1;
      cache[2] = 1;
        
      int f(int i) {
          if (cache[i] == 0)
              cache[i] = f(i - 1) + f(i - 2);
          return cache[i];
      }
        
      return f(n);
  }

Above, the cache array holds all the Fibonacci numbers we have already computed. In other words, if we have already computed Fi, then cache[i] = Fi. Otherwise, cache[i] = 0.

This memoized version runs in linear time, because the line

      cache[i] = f(i - 1) + f(i - 2);

runs only once for each value of i. This is a dramatic improvement!

bottom-up dynamic programming

In this particular recursive problem, the subproblem structure is quite straightforward: each Fibonacci number Fn depends only on the two Fibonacci numbers below it, i.e. Fn-1 and Fn-2. Thus, to compute all the Fibonacci numbers up to Fn we may simply start at the bottom, i.e. the values F1 and F2, and work our way upward, first computing F3 using F1 and F2, then F4 and so on. Here is the implementation:

  int fib(int n) {
      int[] a = new int[n + 1];
      a[1] = a[2] = 1;
      for (int k = 3 ; k <= n ; ++k)
          a[k] = a[k - 1] + a[k - 2];
      return a[n];
  }

Clearly this will also run in linear time. Note that this implementation is not even a recursive function. This is typical: a bottom-up solution consists of one or more loops, without recursion.

This example may seem trivial. But the key idea is that we can efficiently solve a recursive problem by solving subproblem instances in a bottom-up fashion, and as we will see we can apply this idea to many other problems.
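In fact, since each Fibonacci number depends only on the two values before it, the bottom-up loop does not even need an array: two variables suffice. Here is that variation (a small sketch, not in the original notes, using long so that larger values of n do not overflow):

```csharp
using static System.Console;

// Bottom-up Fibonacci in O(1) space: keep only the last two values.
long fib(int n) {
    long a = 1, b = 1;        // F(1) and F(2)
    for (int k = 3 ; k <= n ; ++k) {
        long c = a + b;       // F(k) = F(k - 2) + F(k - 1)
        a = b;
        b = c;
    }
    return b;
}

WriteLine(fib(10));   // 55
WriteLine(fib(50));   // 12586269025
```

This works because the bottom-up order makes it obvious which subproblem answers are still needed: only the last two.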

rod cutting

The rod cutting problem is a classic dynamic programming problem. Suppose that we have a rod that is n cm long. We may cut it into any number of pieces that we like, but each piece's length must be an integer. We will sell all the pieces, and we have a table of prices that tells us how much we will receive for a piece of any given length. The problem is to determine how to cut the rod so as to maximize our profit.

We can express an optimal solution recursively. Let prices[i] be the given price for selling a piece of size i. We want to compute profit(n), which is the maximum profit that we can attain by chopping a rod of length n into pieces and selling them. Any partition of the rod will begin with a piece of size i cm for some value 1 ≤ i ≤ n. Selling that piece will yield a profit of prices[i]. The maximum profit for dividing and selling the rest of the rod will be profit(n – i). So profit(n), i.e. the maximum profit for all possible partitions, will equal the maximum value for 1 ≤ i ≤ n of


prices[i] + profit(n – i)

naive recursive solution

Here is a naive recursive solution to the problem:

  // Return the best price for cutting a rod of length n, given a table
  // with the prices for pieces from lengths 1 .. n.
  int profit(int n, int[] prices) {
      int best = 0;
      for (int i = 1 ; i <= n ; ++i)
          best = Max(best, prices[i] + profit(n - i, prices));
      return best;
  }

This solution runs in exponential time, because for each possible size k it will recompute profit(k, prices) many times.

bottom-up dynamic programming

This solution uses bottom-up dynamic programming:

  // Return the best price for cutting a rod of length n, given a table
  // with the prices for pieces from lengths 1 .. n.
  int profit(int n, int[] prices) {
      int[] best = new int[n + 1];  // best possible price for each size

      for (int k = 1 ; k <= n ; ++k) {
          // Compute the best price best[k] for a rod of length k.
          best[k] = 0;
          for (int i = 1 ; i <= k ; ++i)
              best[k] = Max(best[k], prices[i] + best[k - i]);
      }

      return best[n];  // best price for a rod of length n
  }

Once again, this bottom-up solution is not a recursive function. Notice its double loop structure. Each iteration of the outer loop computes best[k], i.e. the solution to the subproblem of finding the best price for a rod of size k. To compute that, the inner loop must loop over all smaller subproblems, which have already been solved, looking for a maximum possible profit.

This version runs in O(N^2).
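A quick check, using a made-up price table (the numbers here are arbitrary; the method is repeated so the example is self-contained):

```csharp
using static System.Console;
using static System.Math;

// profit() as defined in the text.
int profit(int n, int[] prices) {
    int[] best = new int[n + 1];
    for (int k = 1 ; k <= n ; ++k)
        for (int i = 1 ; i <= k ; ++i)
            best[k] = Max(best[k], prices[i] + best[k - i]);
    return best[n];
}

// A hypothetical price table: a piece of length i sells for table[i].
// (table[0] is unused, since every piece has length at least 1.)
int[] table = { 0, 1, 5, 8, 9 };
WriteLine(profit(4, table));   // 10: two pieces of length 2, since 5 + 5 beats 9
```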

bottom-up dynamic programming, extended

Often when we use dynamic programming to solve an optimization problem such as this one, we want not only the value of the optimal solution, but also the solution itself. For example, the function in the previous section tells us the best possible price that we can obtain for a rod of size n, but it doesn't tell us the sizes of the pieces in which the rod should be cut!

So we'd like to extend our solution to return that information. To do that, for each size k < n we must remember not only the best possible prices for a rod of size k, but also the size of the first piece that we should slice off from a rod of that size in order to obtain that price. With that information, we can reconstruct the solution at the end:

  // Return the best price for cutting a rod of length n, given a table
  // with the prices for pieces from lengths 1 .. n.
  int profit(int n, int[] prices, out int[] sizes) {
      int[] best = new int[n + 1];   // best possible price for a given rod size
      int[] cut = new int[n + 1];    // size of first piece to slice from a given rod size

      for (int k = 1 ; k <= n ; ++k) {
          best[k] = 0;
          for (int i = 1 ; i <= k ; ++i) {      // for every size <= k
              int p = prices[i] + best[k - i];
              if (p > best[k]) {
                  best[k] = p;
                  cut[k] = i;  // remember the best size to cut
              }
          }
      }

      List<int> l = new List<int>();
      for (int s = n ; s > 0 ; s -= cut[s])
          l.Add(cut[s]);
      sizes = l.ToArray();
      return best[n];
  }

Study this function to understand how it works. Like the preceding version, it runs in O(N^2).

smallest number of coins

Here is another classic dynamic programming problem. We would like to write a method int numCoins(int sum, int[] coins) that takes an integer and an array of coin denominations. The method should determine the smallest number of coins needed to form the given sum. A sum can contain an arbitrary number of coins of each denomination. For example, if sum = 34 and coins = { 1, 15, 25 }, the method will return 6, since

34 = 15 + 15 + 1 + 1 + 1 + 1

and it is not possible to form this sum with a smaller number of coins.

If we cannot form the given sum using the given denominations of coins, our method will return int.MaxValue.

It would not be difficult to write a recursive program that can generate all possible ways of forming a certain sum from a given set of coins. So one way to find the smallest number of coins could be to generate all those possible ways, and see which one uses the smallest number. However that will be very inefficient, because there may be an exponential number of ways to form the sum. Dynamic programming will give us a much more efficient solution.

naive recursive solution

First, here is a naive recursive solution to the problem:

  // Return the smallest number of coins needed to form (sum), or int.MaxValue if
  // the sum cannot be formed.
  int numCoins(int sum, int[] coins) {
      if (sum == 0) return 0;
    
      int min = int.MaxValue;
      foreach (int c in coins)
          if (c <= sum)
              min = Min(min, numCoins(sum - c, coins));

      return min == int.MaxValue ? int.MaxValue : min + 1;
  }

This will run in exponential time.
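Before moving to the bottom-up version, note that top-down memoization would also work here, just as it did for fib. Here is a sketch (not in the original notes); we use -1 to mark uncached entries, since 0 is the real answer for sum 0:

```csharp
using System;
using static System.Console;
using static System.Math;

// Memoized variant of numCoins.  cache[k] holds the answer for sum k,
// or -1 if we have not computed it yet.
int numCoins(int sum, int[] coins) {
    int[] cache = new int[sum + 1];
    Array.Fill(cache, -1);
    cache[0] = 0;

    int f(int k) {
        if (cache[k] < 0) {
            int min = int.MaxValue;
            foreach (int c in coins)
                if (c <= k)
                    min = Min(min, f(k - c));
            cache[k] = min == int.MaxValue ? int.MaxValue : min + 1;
        }
        return cache[k];
    }

    return f(sum);
}

WriteLine(numCoins(34, new[] { 1, 15, 25 }));   // 6
```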

bottom-up dynamic programming

Here is a solution using bottom-up dynamic programming. Its structure is similar to the previous rod-cutting solution.

  // Return the smallest number of coins needed to form (sum), or int.MaxValue if
  // the sum cannot be formed.
  int numCoins(int sum, int[] coins) {
      int[] min = new int[sum + 1];
    
      for (int k = 1 ; k <= sum ; ++k) {
          // Compute min[k], the minimum number of coins needed to add up to k.
          int m = int.MaxValue;
          foreach (int c in coins)
              if (c <= k)
                  m = Min(m, min[k - c]);
          min[k] = m == int.MaxValue ? int.MaxValue : m + 1;
      }
    
      return min[sum];
  }

This version runs in O(N · C), where N = sum and C is the number of coin denominations.

bottom-up dynamic programming, extended

This version returns an array with the values of the coins that form the sum.

  int[] numCoins2(int sum, int[] coins) {
      int[] min = new int[sum + 1];
      int[] first = new int[sum + 1];
    
      for (int k = 1 ; k <= sum ; ++k) {
          // Compute min[k], the minimum number of coins needed to add up to k.
          // Also remember first[k], the first coin to use in forming the sum k.
          int m = int.MaxValue;
          foreach (int c in coins)
              if (c <= k && min[k - c] < m) {  // best so far
                  m = min[k - c];
                  first[k] = c;
              }
          min[k] = m == int.MaxValue ? int.MaxValue : m + 1;
      }
    
      if (min[sum] == int.MaxValue)
          return null;   // the sum cannot be formed, so there is no coin list

      List<int> l = new List<int>();
      for (int s = sum ; s > 0 ; s -= first[s])
          l.Add(first[s]);
      return l.ToArray();
  }

2-dimensional dynamic programming problems

The dynamic programming problems we saw above had a 1-dimensional structure. That means that each of those problems could be solved by writing a recursive function with 1 argument. The naive recursive solution was exponentially inefficient, since it solved the same subproblems over and over again. We found that we could solve these problems much more efficiently by filling in a 1-dimensional array in a certain order. This array recorded the solution to each subproblem, so that it was immediately available to use in solving other, larger, subproblems.

For example, the naive recursive solution to the rod-cutting problem looked like this:

  // Return the best price for cutting a rod of length n, given a table
  // with the prices for pieces from lengths 1 .. n.
  int profit(int n, int[] prices) {
      int best = 0;
      for (int i = 1 ; i <= n ; ++i)
          best = Max(best, prices[i] + profit(n - i, prices));
      return best;
  }

Notice that profit() is a function of a single argument "int n" (ignoring the constant array prices[], which could just be stored as a field outside the function). To compute the result more efficiently, we used a bottom-up approach that filled in a 1-dimensional array called best[], where best[i] held the best possible profit for a rod of size i, i.e. the value that would be returned by profit(i, prices).

We will now study slightly more challenging dynamic programming problems that have a more general 2-dimensional structure. For these problems, a naive recursive solution will be a function with 2 arguments. Once again, this naive solution will run in exponential time. We can solve these problems more efficiently using bottom-up dynamic programming in which we fill in a 2-dimensional array or array-like data structure. We will need to be careful about the order in which we fill in the array elements. The solution to any subproblem instance will depend on the solutions to smaller instances, so we need to fill in the array in some order that guarantees that the dependent solutions will already be available when we compute any subproblem solution.

Some examples will make these ideas clearer. Let's first consider the problem of finding the longest palindromic subsequence of a given string. For example, consider the string "watermelon stew". The string "wteretw" is a longest palindromic subsequence:

W A T E R M E L O N   S T E W
W   T E R   E           T   W

That's because this subsequence is a palindrome (i.e. it reads the same forwards and backwards), and there is no longer palindromic subsequence in the string. Note that the longest palindromic subsequence is not necessarily unique: "wtemetw" is another possibility.

We can solve this problem using two-dimensional dynamic programming. Suppose that the input string s has length N, with character indices from 0 to (N - 1). Let L(i, j) be the length of the longest palindromic subsequence of s[i .. j].

We must first find a recursive formulation of the problem that will allow us to compute L(i, j) recursively for any i and j. Our base case is straightforward: if i = j, then clearly L(i, j) = 1, since any 1-character string is a palindrome. In fact it will be useful to have an additional base case that is even smaller: if j < i, then s[i .. j] is empty, so L(i, j) = 0.

Now suppose that i < j, which will be the recursive case. We may consider two possibilities:

- If s[i] == s[j], then some longest palindromic subsequence of s[i .. j] includes both of those characters, wrapped around a longest palindromic subsequence of the characters between them: L(i, j) = L(i + 1, j - 1) + 2.
- If s[i] != s[j], then s[i] and s[j] cannot both appear in a palindromic subsequence, so L(i, j) = max(L(i, j - 1), L(i + 1, j)).

Since we have found a recursive pattern, we may now write a recursive function to compute L(i, j):

    // Compute the length of the longest palindromic subsequence of s[i .. j].
    int len(string s, int i, int j) {
        if (j < i) return 0;
        if (i == j) return 1;

        return s[i] == s[j] ? len(s, i + 1, j - 1) + 2
                            : Max(len(s, i, j - 1), len(s, i + 1, j));
    }

It works:

 string s = "watermelon stew";
 WriteLine(len(s, 0, s.Length - 1));   // writes 7

However this function will run in exponential time.
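One way to fix that is top-down memoization with a 2-dimensional cache, just as we did for fib. Here is a sketch (not in the original notes; the name lpsLength is arbitrary):

```csharp
using static System.Console;
using static System.Math;

// Memoized variant of len(): cache[i, j] holds L(i, j), or -1 if unknown.
int lpsLength(string s) {
    int n = s.Length;
    int[,] cache = new int[n, n];
    for (int a = 0 ; a < n ; ++a)
        for (int b = 0 ; b < n ; ++b)
            cache[a, b] = -1;

    int f(int i, int j) {
        if (j < i) return 0;
        if (i == j) return 1;
        if (cache[i, j] < 0)
            cache[i, j] = s[i] == s[j] ? f(i + 1, j - 1) + 2
                                       : Max(f(i, j - 1), f(i + 1, j));
        return cache[i, j];
    }

    return f(0, n - 1);
}

WriteLine(lpsLength("watermelon stew"));   // 7
```

Each pair (i, j) is now computed at most once, so this runs in O(N^2) time.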

To solve this problem more efficiently, we will use bottom-up dynamic programming to fill in a 2-dimensional array len[], where len[i, j] = L(i, j). As mentioned above, we need to be careful about the order in which we fill in the array. Specifically, when we compute len[i, j] we must already know len[i + 1, j] and len[i, j - 1] and len[i + 1, j - 1]. An illustration will help here. Suppose that s = "india". Here is a table showing the values of len[i, j], with rows indexed by i and columns by j:

              j=0    j=1    j=2    j=3    j=4
               i      n      d      i      a
    i=0  i     1      1      1      3      3
    i=1  n     0      1      1      1      1
    i=2  d     0      0      1      1      1
    i=3  i     0      0      0      1      1
    i=4  a     0      0      0      0      1

Notice the following:

- The cells on the main diagonal all hold 1, since any 1-character substring is a palindrome.
- The cells below the diagonal all hold 0, since they correspond to empty substrings (j < i).
- Each cell above the diagonal depends only on its neighbors below it, to its left, and diagonally below and to the left.
- The answer to the whole problem is in the upper-right cell, len[0, s.Length - 1].

Now, we certainly cannot fill in the rows of this table from top to bottom, because then as we computed each row we would not already have the values below it. One possible order for filling in the rows is bottom to top, and within each row left to right:

When we fill in the table in this order, then as we encounter each cell (i, j) we will already know the values of its neighbors below, to the left, and diagonally below and to the left. And so we will be able to compute len[i, j].

Now, there is really no need for us to iterate over all the cells below the diagonal in the table above, since they all have value 0 and will be initialized to that value when we create our array. So instead we can fill in the table from bottom to top, proceeding rightward from the diagonal position in each row:

We now have a strategy, so let's write code to fill in the array:

    // Compute the length of the longest palindromic subsequence of s.
    int longestPalindromic(string s) {
        // len[i, j] is the length of the longest palindromic subsequence of
        // s[i .. j].
        int[,] len = new int[s.Length, s.Length];
        int i, j;
        for (i = s.Length - 1 ; i >= 0 ; --i) {
            len[i, i] = 1;
            for (j = i + 1 ; j < s.Length ; ++j)
                len[i, j] =
                    s[i] == s[j] ? len[i + 1, j - 1] + 2
                                 : Max(len[i, j - 1], len[i + 1, j]);
          }
        return len[0, s.Length - 1];
    }

Our function will run in O(N^2), where N = s.Length. This is a huge improvement over the exponential version.
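A quick check of this version (the method is repeated here so the example is self-contained):

```csharp
using static System.Console;
using static System.Math;

// longestPalindromic() as defined in the text.
int longestPalindromic(string s) {
    int[,] len = new int[s.Length, s.Length];
    for (int i = s.Length - 1 ; i >= 0 ; --i) {
        len[i, i] = 1;
        for (int j = i + 1 ; j < s.Length ; ++j)
            len[i, j] = s[i] == s[j] ? len[i + 1, j - 1] + 2
                                     : Max(len[i, j - 1], len[i + 1, j]);
    }
    return len[0, s.Length - 1];
}

WriteLine(longestPalindromic("india"));             // 3 ("ini" or "idi")
WriteLine(longestPalindromic("watermelon stew"));   // 7 ("wteretw")
```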

Of course, we may want to know not only the length of the longest palindromic subsequence, but also the subsequence itself! So we'd like to extend our code to return that string. Here is one possible approach: after we have filled in the table above we can use the values in it to reconstruct the string we want. To do that, we start at the upper right, i.e. the value len[0, s.Length – 1]. We can reconstruct a path through the table that explains how that value was derived, and that path will reveal the string itself. At any cell (i, j), if s[i] == s[j] then we know that s[i] and s[j] are included in the string we want, and we can record those characters and proceed to (i + 1, j – 1). Otherwise, we proceed either to (i, j – 1) or to (i + 1, j), choosing the cell with the larger value. Here is an extended version of the function above that can reconstruct the string in this way:

  // Compute the longest palindromic subsequence of s.
  string longestPalindromic(string s) {
      // len[i, j] is the length of the longest palindromic subsequence of
      // s[i .. j].
      int[,] len = new int[s.Length, s.Length];
    
      … same code as above for filling in the table … 
    
      // Now len[0, s.Length - 1] is the length of the longest palindromic
      // subsequence.  We now want the subsequence itself.  We need to build
      // the sequence from the outside inward, so we use two strings a and b
      // and will return (a + b).
    
      string a = "", b = "";
      i = 0;
      j = s.Length - 1;
      while (j >= i) {
          if (j == i) {
              a += s[i];
              break;
          }
          if (s[i] == s[j]) {
              a = a + s[i];
              b = s[j] + b;
              ++i;
              --j;
          } else if (len[i, j - 1] > len[i + 1, j])
              --j;
            else ++i;
      }
    
    return a + b;
  }

Study the function to understand how it works.

This code for constructing the longest palindromic subsequence may seem like a chore, and so you may be wondering if there is an easier way. Actually there is. When we build the table, instead of storing the length of the longest palindromic subsequence of s[i .. j] in each table cell, we can store the longest palindromic subsequence itself:

    // Compute the longest palindromic subsequence of s.
    string longestPalindromic(string s) {
        // p[i, j] is the longest palindromic subsequence of s[i .. j].
        string[,] p = new string[s.Length, s.Length];
        int i, j;
        
        for (i = s.Length - 1 ; i >= 0 ; --i) {
            p[i, i] = s[i].ToString();
            for (j = i + 1 ; j < s.Length ; ++j) 
                if (s[i] == s[j])
                    p[i, j] = s[i] + p[i + 1, j - 1] + s[j];
                else {
                    string t = p[i, j - 1], u = p[i + 1, j];
                    p[i, j] = t.Length > u.Length ? t : u;
                }
        }
    
        return p[0, s.Length - 1];
    }

That was easier! This seems like a nicer solution for this problem.
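A quick check of the string-table version (repeated here so the example is self-contained; since a longest palindromic subsequence is not unique, we only note the length of the output rather than one particular string):

```csharp
using static System.Console;

// The string-table version of longestPalindromic() from the text.
string longestPalindromic(string s) {
    string[,] p = new string[s.Length, s.Length];
    for (int i = s.Length - 1 ; i >= 0 ; --i) {
        p[i, i] = s[i].ToString();
        for (int j = i + 1 ; j < s.Length ; ++j)
            if (s[i] == s[j])
                p[i, j] = s[i] + p[i + 1, j - 1] + s[j];
            else {
                string t = p[i, j - 1], u = p[i + 1, j];
                p[i, j] = t.Length > u.Length ? t : u;
            }
    }
    return p[0, s.Length - 1];
}

string r = longestPalindromic("watermelon stew");
WriteLine(r);          // a 7-character palindrome such as "wteretw"
WriteLine(r.Length);   // 7
```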

For some other dynamic programming problems, however, it may be easier or more efficient to store intermediate values in the table and then reconstruct the final solution at the end. (Actually we did that for a couple of one-dimensional dynamic programming problems above.)

The subset sum problem

Let's look at another classic problem that we can solve using two-dimensional dynamic programming: the subset sum problem. Given a set of positive integers and an integer k, we wish to determine whether any subset of the given set has sum k.

As usual, we'd first like to come up with a recursive formulation of the problem. As a clue to how to do that, recall how last week we wrote a function that generated all subsets of a given set. Here is the general approach we took. Given a set S, we can take any element x from the set, and let S' be the remaining elements in S. If we recursively generate all subsets of S', then each subset is itself a subset of S. In addition, we can add x to any of those subsets to form a subset of S.

So now suppose that we are given a set of positive integers S and an integer k, and we'd like to know whether any subset of S has the sum k. Take any integer x from S, and let S' be the remaining integers in S. If any subset of S' has sum k, then that is a subset of S which has sum k. In addition, if any subset of S' has sum (k – x), then we can add x to that subset to obtain a subset of S which has sum k.

We can use this idea to solve the problem recursively:

// Return true if any subset of a[0 .. (i - 1)] has sum k.
bool hasSum(int[] a, int i, int k) {
    if (k == 0)
        return true;   // the empty set is a subset, and has sum 0
    if (i == 0)
        return false;  // set is empty, cannot have non-zero sum

    return hasSum(a, i - 1, k)             // we can make k without a[i - 1]
        || a[i - 1] <= k &&
           hasSum(a, i - 1, k - a[i - 1]); // we can make k by adding a[i - 1]
}
    
bool hasSum(int[] a, int k) => hasSum(a, a.Length, k);

As usual, this naive recursive solution may take exponential time to run.

The function hasSum above takes two parameters i and k (in addition to the constant array a). That is a sign that we can solve this problem using two-dimensional dynamic programming. We will need a two-dimensional array that holds the boolean value hasSum(a, i, k) for every possible value of i and k. In our bottom-up implementation, we will also call this array hasSum. Specifically, hasSum[i, k] will be true if any subset of a[0 .. (i – 1)] has sum k, just like in the recursive function above.

Once again we must consider the order in which we will fill the array elements. When we compute hasSum[i, k], we may need to know the value of hasSum[i – 1, j] for any value j in the range 0 <= j <= k. That shows that we should fill the array in increasing order of i. It does not matter whether we fill each array column in increasing or decreasing order of k, since the computation of hasSum[i, k] does not depend on any other values in the same column.

After we have filled in the array, if it turns out that there was some subset with sum k, we would like to generate that subset. As in other dynamic programming problems, we can use an iterative loop to reconstruct the solution from the array. At the beginning of each loop iteration, we have values i and k such that hasSum[i, k] is true, so we know that some subset of a[0 .. (i - 1)] has sum k.

Combining these ideas, here is our bottom-up solution:

// Does any subset of the integers in a have sum s?  If so, return the subset;
// otherwise return null.
int[] subsetSum(int[] a, int s) {
    // hasSum[i, k] is true if some subset of a[0 .. (i - 1)] adds up to k.
    bool[,] hasSum = new bool[a.Length + 1, s + 1];
    hasSum[0, 0] = true;   // we can make 0 from the empty set
    
    for (int i = 1 ; i <= a.Length ; ++i)
        for (int k = 0; k <= s ; ++k)
            hasSum[i, k] = hasSum[i - 1, k]    // we can make k without a[i - 1]
                        || a[i - 1] <= k &&
                           hasSum[i - 1, k - a[i - 1]];  // we can make k by adding a[i - 1]
    
    if (!hasSum[a.Length, s])
        return null;  // sum is not possible
      
    // Now construct the integers in the set.
    List<int> result = new List<int>();
    for (int i = a.Length ; i > 0 ; --i) {
        if (!hasSum[i - 1, s]) {
            result.Add(a[i - 1]); 
            s -= a[i - 1];
        }
    }
    
    return result.ToArray();
}
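A quick check of subsetSum (the method is repeated here so the example is self-contained; the array contents are an arbitrary example):

```csharp
using System.Collections.Generic;
using static System.Console;

// subsetSum() as defined in the text.
int[] subsetSum(int[] a, int s) {
    bool[,] hasSum = new bool[a.Length + 1, s + 1];
    hasSum[0, 0] = true;
    for (int i = 1 ; i <= a.Length ; ++i)
        for (int k = 0 ; k <= s ; ++k)
            hasSum[i, k] = hasSum[i - 1, k]
                        || a[i - 1] <= k && hasSum[i - 1, k - a[i - 1]];
    if (!hasSum[a.Length, s])
        return null;
    List<int> result = new List<int>();
    for (int i = a.Length ; i > 0 ; --i)
        if (!hasSum[i - 1, s]) {
            result.Add(a[i - 1]);
            s -= a[i - 1];
        }
    return result.ToArray();
}

WriteLine(string.Join(" ", subsetSum(new[] { 3, 34, 4, 12, 5, 2 }, 9)));   // 5 4
WriteLine(subsetSum(new[] { 2, 4, 6 }, 5) == null);                        // True: 5 is odd
```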

Once again, you may feel that having to reconstruct the solution at the end is a bother. Is there an easier way? Well, just as in the previous exercise we could store the solutions themselves in the array we are filling in. For example, instead of a two-dimensional array of booleans, we could store a two-dimensional array of List<int>:

List<int>[,] sumSet = new List<int>[a.Length + 1, s + 1];

And then for any i and k, if there is a subset of a[0 .. i - 1] whose sum is k, then sumSet[i, k] could hold a list of integers containing that subset, and could otherwise be null.

However this solution would be relatively inefficient. If we use lists to represent sets, then every time we want to construct a list that contains all the elements in a previous set plus a new integer, we must make a copy of the previous list. If the problem instance was large, all of these list copies could take a significant amount of time and memory.

If, however, we represent sets using linked lists rather than List objects (which are really arrays), then no copying would be necessary, since we can prepend an element to a linked list (while leaving the existing list intact) in O(1). So that would be a reasonable solution, and you may want to try to code that up as an exercise. Of course, even that solution would be less efficient than our solution above, since it is hard to beat an array of booleans if you are trying to save space or time. So is it worth using linked lists to avoid the extra loop at the end of the method above? You can make your own judgment about that.