This chapter introduces a new design technique: dynamic programming. Instead of using a greedy algorithm or divide and conquer, we use dynamic programming. The technique has roots in divide and conquer but is, in a sense, the opposite of the greedy approach: it divides a problem into subproblems, then builds up solutions to those subproblems until it reaches the full problem. It is close to brute force in spirit, since it implicitly considers the entire space of solutions, just in a carefully organized way.
Chapter 6.1
Weighted Interval Scheduling: A Recursive Procedure
In this section we see our first example of dynamic programming, applied to a version of the interval scheduling problem we solved previously. There are two stages to our process: first a recursive procedure, then an iterative algorithm. This problem differs from our previous greedy setting because each interval now has a weight/value associated with it. Our problem is to choose a set of nonoverlapping intervals that maximizes the total value of the schedule.
The problem has n intervals 1, …, n. Each request i has a start time s<sub>i</sub>, a finish time f<sub>i</sub>, and a value v<sub>i</sub>. We want to select a set of mutually compatible intervals that maximizes the total value.
The Recursive Algorithm
Our previous greedy approach of choosing the interval that ends earliest is not optimal for this problem. We still order the requests i by increasing finish time. p(j) is the largest index i < j such that intervals i and j are disjoint, meaning they do not overlap: interval i ends before interval j begins. In other words, p(j) is the rightmost interval that ends before j begins. If no request is disjoint from j we set p(j) = 0.
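To make p(j) concrete, here is a minimal Python sketch of this preprocessing step on a small made-up instance (the interval values, the name compute_p, and the use of bisect are my own illustration, not from the book); it assumes the intervals are already sorted by finish time and treats an interval that finishes exactly when another starts as compatible with it.

from bisect import bisect_right

# Made-up instance: (start, finish, value) tuples, already sorted by finish time.
intervals = [(1, 4, 2), (2, 6, 4), (5, 7, 4), (3, 10, 7), (8, 11, 2), (9, 12, 1)]

def compute_p(intervals):
    # p[j] is the largest index i < j whose finish time is <= the start time
    # of interval j, or 0 if no earlier interval is disjoint from j.
    finishes = [f for (_, f, _) in intervals]
    p = [0] * (len(intervals) + 1)  # 1-indexed; p[0] is unused
    for j, (s, _, _) in enumerate(intervals, start=1):
        # Counting how many of the first j-1 intervals finish by time s
        # gives exactly the largest compatible index (or 0 if none do).
        p[j] = bisect_right(finishes, s, 0, j - 1)
    return p

# compute_p(intervals) -> [0, 0, 0, 1, 0, 3, 3]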
Our solution either includes the last interval, n, or it doesn't!
- If n is in our solution then no interval strictly between p(n) and n is included in the solution (because all of those intervals overlap with n). So if n is in our solution, our solution is n plus the optimal solution for the intervals 1 through p(n).
- If n is not in our solution then our solution is the optimal solution of intervals 1 through n-1.
We are looking for the optimal solutions of smaller versions of the problem. Let opt(j) denote the value of the optimal solution for intervals 1 through j. We are looking for the value of opt(n). So again…
- If j is in our solution then opt(j) = v<sub>j</sub> + opt(p(j))
- If j is NOT in our solution then opt(j) = opt(j-1)
6.1 From here we can say that opt(j) = max(v<sub>j</sub> + opt(p(j)), opt(j-1)).
6.2 Interval j belongs to an optimal solution for intervals 1 through j if and only if v<sub>j</sub> + opt(p(j)) >= opt(j-1).
These statements form the recurrence equation that expresses the optimal solution (or its value) in terms of the optimal solutions to smaller subproblems (p. 254)
Compute-Opt(j)
  If j = 0 then
    Return 0
  Else
    Return max(v<sub>j</sub> + Compute-Opt(p(j)), Compute-Opt(j-1))
  Endif
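The same recursion in runnable Python, as a sketch under my own naming (compute_opt, values, p); values and p are assumed to be 1-indexed lists, with index 0 unused, like the ones built above.

def compute_opt(j, values, p):
    # Direct translation of recurrence 6.1. Exponential in the worst case,
    # because it recomputes the same subproblems many times.
    if j == 0:
        return 0
    return max(values[j] + compute_opt(p[j], values, p),
               compute_opt(j - 1, values, p))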
Proof that this algorithm correctly computes opt(j):
By induction on j. The base case opt(0) = 0 holds by definition. Now assume that for some j > 0, Compute-Opt(i) correctly computes opt(i) for all i < j. By this induction hypothesis, Compute-Opt(p(j)) = opt(p(j)) and Compute-Opt(j-1) = opt(j-1), so opt(j) = max(v<sub>j</sub> + Compute-Opt(p(j)), Compute-Opt(j-1)) = Compute-Opt(j). p. 255
This implementation of the algorithm is exponential in the worst case! Because the two recursive branches recompute the same subproblems over and over, the tree of calls can grow exponentially. This does not meet our goal of a polynomial-time solution.
Memoizing the Recursion
We want to eliminate the redundant calls in Compute-Opt by storing each computed value in a globally accessible place and looking it up when we need it again. We use an array M[0…n] and store the value of Compute-Opt(j) in M[j] the first time it is computed.
Memoized Compute-Opt:
M-Compute-Opt(j)
  If j = 0 then
    Return 0
  Else if M[j] is not empty then
    Return M[j]
  Else
    Define M[j] = max(v<sub>j</sub> + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    Return M[j]
  Endif
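A Python sketch of the memoized version, again with my own names; here M is a dictionary passed in by the caller, playing the role of the globally accessible array M[0…n].

def m_compute_opt(j, values, p, M):
    # Each subproblem is solved at most once: the first call for a given j
    # stores the answer in M, and every later call just looks it up.
    if j == 0:
        return 0
    if j not in M:
        M[j] = max(values[j] + m_compute_opt(p[j], values, p, M),
                   m_compute_opt(j - 1, values, p, M))
    return M[j]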
Analysis:
6.4 The running time of M-Compute-Opt(n) is O(n), assuming the intervals are already sorted by finish time.
Apart from its recursive calls, a single call of M-Compute-Opt takes constant time. Each call either returns immediately (the j = 0 case or a value already stored in M) or fills in a new entry of M, and there are only n+1 entries, so the total number of calls, and therefore the running time, is O(n).
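Continuing the made-up instance and sketches from above, a quick illustrative check of the whole pipeline:

values = [0] + [v for (_, _, v) in intervals]  # 1-indexed; values[0] unused
p = compute_p(intervals)
print(m_compute_opt(len(intervals), values, p, {}))  # -> 8 (intervals 1, 3, and 5)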
Finding the Solution…not the value