Dynamic programming is a general technique that we can use to solve a wide variety of problems. Many of these problems involve optimization, i.e. finding the shortest/longest/best solution to a certain problem.
Problems solvable using dynamic programming generally have the following characteristics:
They have a recursive structure. In other words, the problem's solution can be expressed recursively as a function of the solutions to one or more subproblems. A subproblem is a smaller instance of the same problem.
They have overlapping subproblems. A direct recursive implementation solves the same subproblems over and over again, leading to exponential inefficiency.
Usually we can dramatically improve the running time by arranging for each subproblem to be solved only once. There are two ways to do that:
In a top-down implementation, we keep the same recursive code structure but add a cache of solved subproblems. This technique is called memoization.
In a bottom-up implementation, we also use a data structure (typically an array) to hold subproblem solutions, but we build up these solutions iteratively.
Generally we prefer a bottom-up solution, because
A bottom-up implementation is generally more efficient.
The running time of the bottom-up implementation is usually more obvious.
We will first study one-dimensional dynamic programming problems, which have a straightforward recursive structure. Next week, we will look at two-dimensional problems, which are a bit more complex.
Computing the Fibonacci number F_n is a trivial example of dynamic programming. Recall that the Fibonacci numbers are defined as

F_1 = 1
F_2 = 1
F_n = F_{n-1} + F_{n-2} (n ≥ 3)

yielding the sequence 1, 1, 2, 3, 5, 8, 13, 21, …
Here is a recursive function that naively computes the n-th Fibonacci number:
int fib(int n) =>
    n < 3 ? 1 : fib(n - 1) + fib(n - 2);
What is its running time? The running time T(n) obeys the recurrence

T(n) = T(n - 1) + T(n - 2) + O(1)

Apart from the constant term, this is the recurrence that defines the Fibonacci numbers themselves! In other words,

T(n) = O(F_n)

The Fibonacci numbers themselves increase exponentially. It can be shown mathematically that

F_n = O(φ^n)

where

φ = (1 + √5) / 2 ≈ 1.618

is the golden ratio.
So fib runs in exponential time! That may come as a surprise, since the function looks so direct and simple. Fundamentally it is inefficient because it is solving the same subproblems repeatedly. For example, fib(10) will compute fib(9) + fib(8). The recursive call to fib(9) will compute fib(8) + fib(7), and so we see that we already have two independent calls to fib(8). Each of those calls in turn will solve smaller subproblems over and over again. In a problem with a recursive structure such as this one, the repeated work multiplies exponentially, so that the smallest subproblems (e.g. fib(3)) are computed an enormous number of times.
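We can watch this happen by instrumenting the function with a call counter. The following sketch is our own instrumentation (the counter and the name fibCounted are not part of the original code):

long calls = 0;

int fibCounted(int n) {
    calls++;   // count every invocation
    return n < 3 ? 1 : fibCounted(n - 1) + fibCounted(n - 2);
}

Console.WriteLine(fibCounted(30));   // 832040
Console.WriteLine(calls);            // 1664079 = 2 * F_30 - 1

In fact the total number of calls made to compute F_n this way is exactly 2·F_n - 1, which makes the bound T(n) = O(F_n) concrete.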
As mentioned above, one way we can eliminate the repeated work is to use a cache that stores answers to subproblems we have already solved. This technique is called memoization. Here is a memoized implementation of fib, using a local function:
int fib(int n) {
    // Allocate at least 3 elements so that cache[2] exists even when n < 2.
    int[] cache = new int[Max(n, 2) + 1];
    cache[1] = 1;
    cache[2] = 1;

    int f(int i) {
        if (cache[i] == 0)
            cache[i] = f(i - 1) + f(i - 2);
        return cache[i];
    }

    return f(n);
}
Above, the cache array holds all the Fibonacci numbers we have already computed. In other words, if we have already computed F_i, then cache[i] = F_i. Otherwise, cache[i] = 0.
This memoized version runs in linear time, because the line

cache[i] = f(i - 1) + f(i - 2);

runs only once for each value of i. This is a dramatic improvement!
In this particular recursive problem, the subproblem structure is quite straightforward: each Fibonacci number F_n depends only on the two Fibonacci numbers below it, i.e. F_{n-1} and F_{n-2}. Thus, to compute all the Fibonacci numbers up to F_n we may simply start at the bottom, i.e. the values F_1 and F_2, and work our way upward, first computing F_3 using F_1 and F_2, then F_4 and so on. Here is the implementation:
int fib(int n) {
    int[] a = new int[Max(n, 2) + 1];   // at least 3 elements, so a[2] exists even when n < 2
    a[1] = a[2] = 1;
    for (int k = 3 ; k <= n ; ++k)
        a[k] = a[k - 1] + a[k - 2];
    return a[n];
}
Clearly this will also run in linear time. Note that this implementation is not even a recursive function. This is typical: a bottom-up solution consists of one or more loops, without recursion.
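As an aside, since each value depends only on the two values immediately before it, we don't even need the array: two variables suffice. Here is one possible constant-space variant (a small optimization beyond what the implementation above does):

int fib(int n) {
    int a = 1, b = 1;       // the two most recently computed Fibonacci numbers
    for (int k = 3 ; k <= n ; ++k) {
        int c = a + b;      // c = F_k
        a = b;
        b = c;
    }
    return b;
}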
This example may seem trivial. But the key idea is that we can efficiently solve a recursive problem by solving subproblem instances in a bottom-up fashion, and as we will see we can apply this idea to many other problems.
The rod cutting problem is a classic dynamic programming problem. Suppose that we have a rod that is n cm long. We may cut it into any number of pieces that we like, but each piece's length must be an integer. We will sell all the pieces, and we have a table of prices that tells us how much we will receive for a piece of any given length. The problem is to determine how to cut the rod so as to maximize our profit.
We can express an optimal solution recursively. Let P_i be the given price for selling a piece of size i. We want to compute V_n, which is the maximum profit that we can attain by chopping a rod of length n into pieces and selling them. Any partition of the rod will begin with a piece of size i cm for some value 1 ≤ i ≤ n. Selling that piece will yield a profit of P_i. The maximum profit for dividing and selling the rest of the rod will be V_{n-i}. So V_n, i.e. the maximum profit over all possible partitions, will equal the maximum value for 1 ≤ i ≤ n of

P_i + V_{n-i}

where we take V_0 = 0.
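For example, with the price table used in the code below we have P_1 = 1, P_2 = 5, P_3 = 8 and P_4 = 9, so

V_1 = P_1 + V_0 = 1
V_2 = max(P_1 + V_1, P_2 + V_0) = max(2, 5) = 5
V_3 = max(P_1 + V_2, P_2 + V_1, P_3 + V_0) = max(6, 6, 8) = 8
V_4 = max(P_1 + V_3, P_2 + V_2, P_3 + V_1, P_4 + V_0) = max(9, 10, 9, 9) = 10

The best way to sell a rod of length 4 is therefore to cut it into two pieces of length 2.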
Here is a naive recursive solution to the problem:
// Table of prices for different piece sizes.
int[] prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30];

// Return the largest possible profit for cutting a rod of length n,
// given a table with the prices for pieces of lengths 1 .. n.
int profit(int n) {
    int best = 0;
    for (int i = 1 ; i <= n ; ++i)
        best = Max(best, prices[i] + profit(n - i));
    return best;
}
This solution runs in exponential time, because for each possible size k it will recompute profit(k) many times.
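To see this concretely, let C(n) be the total number of calls that profit(n) makes, including the top-level call itself. The loop calls profit on every smaller size, so

C(n) = 1 + C(n - 1) + C(n - 2) + ⋯ + C(0)

with C(0) = 1, and an easy induction shows that C(n) = 2^n: for example, C(1) = 2, C(2) = 4, C(3) = 8.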
By contrast, the following solution uses bottom-up dynamic programming:
// Return the largest possible profit for cutting a rod of length n,
// given a table with the prices for pieces of lengths 1 .. n.
int profit(int n) {
    int[] best = new int[n + 1];   // best possible price for each size

    for (int k = 1 ; k <= n ; ++k) {
        // Compute the best price best[k] for a rod of length k.
        best[k] = 0;
        for (int i = 1 ; i <= k ; ++i)
            best[k] = Max(best[k], prices[i] + best[k - i]);
    }

    return best[n];   // best price for a rod of length n
}
Once again, this bottom-up solution is not a recursive function. Notice its double loop structure. Each iteration of the outer loop computes best[k], i.e. the solution to the subproblem of finding the best price for a rod of size k. To compute that, the inner loop must loop over all smaller subproblems, which have already been solved, looking for a maximum possible profit.
This version runs in O(N^2): the body of the inner loop executes 1 + 2 + ⋯ + N = N(N + 1)/2 times in total.
Often when we use dynamic programming to solve an optimization problem such as this one, we want not only the value of the optimal solution, but also the solution itself. For example, the function in the previous section tells us the best possible price that we can obtain for a rod of size n, but it doesn't tell us the sizes of the pieces in which the rod should be cut!
So we'd like to extend our solution to return that information. To do that, for each size k ≤ n we must remember not only the best possible price for a rod of size k, but also the size of the first piece that we should slice off from a rod of that size in order to obtain that price. With that information, we can reconstruct the solution at the end:
// Return the largest possible profit for cutting a rod of length n,
// given a table with the prices for pieces of lengths 1 .. n.
// Also return an array of the sizes into which to cut the rod
// in order to achieve that profit.
(int, int[]) profit(int n) {
    int[] best = new int[n + 1];   // best possible price for a given rod size
    int[] cut = new int[n + 1];    // size of first piece to slice from a given rod size

    for (int k = 1 ; k <= n ; ++k) {
        // Compute the best price best[k] for a rod of length k.
        best[k] = 0;
        for (int i = 1 ; i <= k ; ++i) {   // for every size <= k
            int p = prices[i] + best[k - i];
            if (p > best[k]) {
                best[k] = p;
                cut[k] = i;   // remember the best size to cut
            }
        }
    }

    // Walk backward through cut[] to recover the list of piece sizes.
    List<int> l = new List<int>();
    for (int s = n ; s > 0 ; s -= cut[s])
        l.Add(cut[s]);
    int[] sizes = l.ToArray();

    return (best[n], sizes);
}
Study this function to understand how it works. Like the preceding version, it runs in O(N^2).
As another possibility, instead of storing the size of the first piece to slice for any given rod size k, we can store a string containing the sizes of all pieces into which we should cut a rod of size k. In other words, this string will contain the actual solution for rod size k. That will allow us to avoid the extra loop at the end. Our code might look like this:
// Return the largest possible profit for cutting a rod of length n,
// given a table with the prices for pieces of lengths 1 .. n.
// Also return a string containing the sizes into which to cut the rod
// in order to achieve that profit.
(int, string) profit(int n, int[] prices) {
    int[] best = new int[n + 1];          // best possible price for a given rod size
    string[] sizes = new string[n + 1];   // sizes into which to cut a given rod size

    for (int k = 1 ; k <= n ; ++k) {
        // Compute the best price best[k] for a rod of length k.
        best[k] = 0;
        for (int i = 1 ; i <= k ; ++i) {   // for every size <= k
            int p = prices[i] + best[k - i];
            if (p > best[k]) {
                best[k] = p;
                sizes[k] = sizes[k - i] + i + " ";
            }
        }
    }

    return (best[n], sizes[n]);
}
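As a quick usage sketch (the calling code below is our own illustration, not part of the lecture code), running this function on the earlier price table reproduces the result we computed by hand:

int[] prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30];

(int best, string sizes) = profit(4, prices);
Console.WriteLine($"{best}: {sizes}");   // prints "10: 2 2 "

Note the trailing space: the function appends a space after each piece size.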
Recall that last week we saw how to write a recursive function that can print out all ways to form a sum from Czech coins. We wrote two versions of the function. In the first version, order matters, so e.g. 1 + 2 and 2 + 1 are considered to be distinct sums. In the second version, order does not matter, so e.g. 1 + 2 and 2 + 1 are the same sum, and we only want to print one of these. We accomplished that by forcing each sum to contain a non-increasing series of values.
Now consider the related problem of computing only the number of possible ways to form a sum. Let's assume that order matters, so we want to count all permutations of each sum (e.g. 1 + 2 and 2 + 1) separately. Here's a recursive function to perform this computation:
int[] coins = { 1, 2, 5, 10, 20, 50 };

long ways(int n) {
    if (n < 0)
        return 0;
    else if (n == 0)
        return 1;   // 1 way to sum to 0: the empty set
    else {
        long w = 0;
        foreach (int c in coins)
            w += ways(n - c);
        return w;
    }
}
Let's run this function for an increasing sequence of values of n:
n = 0: ways = 1
n = 1: ways = 1
n = 2: ways = 2
n = 3: ways = 3
n = 4: ways = 5
n = 5: ways = 9
...
n = 30: ways = 5,878,212
n = 31: ways = 10,050,981
n = 32: ways = 17,185,883
n = 33: ways = 29,385,638
n = 34: ways = 50,245,647
...
We see that the function's value increases exponentially. Also, it becomes very slow for values of n that are more than about 30.
The function is slow because it's solving the same subproblems over and over again. We can make it much more efficient using dynamic programming. This is a 1-dimensional dynamic programming problem, since ways() takes a single argument. Here is a dynamic programming solution:
int[] coins = { 1, 2, 5, 10, 20, 50 };

long ways_dp(int n) {
    long[] ways = new long[n + 1];
    ways[0] = 1;

    for (int k = 1 ; k <= n ; ++k) {
        long w = 0;
        foreach (int c in coins)
            if (k - c >= 0)
                w += ways[k - c];
        ways[k] = w;
    }

    return ways[n];
}
This function will be far more efficient than the previous version.
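As a quick check, ways_dp computes the same values as ways, since it uses the same recurrence; for example ways_dp(34) = 50,245,647, matching the table above, but the answer now comes back almost instantly.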
In fact ways_dp(N) will run in O(N), since the outer for loop iterates N times and the inner foreach loop iterates only a constant number of times.
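For comparison, we could also have written a top-down version with memoization, just as we did for fib. Here is one possible sketch (the name ways_memo and the -1 sentinel are our own choices, not part of the code above):

long ways_memo(int n) {
    long[] cache = new long[n + 1];
    Array.Fill(cache, -1L);   // -1 means "not yet computed"
    cache[0] = 1;             // 1 way to sum to 0

    long f(int k) {
        if (k < 0)
            return 0;
        if (cache[k] == -1) {
            long w = 0;
            foreach (int c in coins)
                w += f(k - c);
            cache[k] = w;
        }
        return cache[k];
    }

    return f(n);
}

This also runs in O(N), since each value of k is computed only once; but as noted at the start of this section, the O(N) bound is arguably easier to see in the bottom-up version.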