6.1 - Weighted Interval Scheduling: A Recursive Procedure
The Weighted Interval Scheduling Problem is a strictly more general version of the Interval Scheduling Problem, in which each interval has a certain value (or weight) and we want to accept a set of compatible intervals of maximum total value.
Designing a Recursive Algorithm No natural greedy algorithm is known for this problem, which is why we're looking at dynamic programming. We use the notation from before: we have n requests labeled 1, …, n, with each request i specifying a start time si and a finish time fi. Each interval i now also has a value or weight, vi. Two intervals are compatible if they do not overlap. For requests sorted by finish time, define p(j) to be the largest index i < j such that intervals i and j are compatible (p(j) = 0 if no such i exists).
Finding the optimal solution on intervals {1, 2, …, n} involves looking at the optimal solutions of smaller problems of the form {1, 2, …, j}. Thus, for any value of j between 1 and n, let Oj denote the optimal solution to the problem consisting of requests {1, …, j}, and let OPT(j) denote the value of that solution.
We can say that OPT(j) = max(vj + OPT(p(j)), OPT(j-1)).
Request j belongs to an optimal solution on the set {1, 2, …, j} if and only if vj + OPT(p(j)) ≥ OPT(j-1).
This gives us a recursive algorithm to compute OPT(n), assuming that we have already sorted the requests by finishing time and computed the values of p(j) for each j.
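Computing the values p(j) can be done in O(n log n) time with a binary search over the sorted finish times. A minimal Python sketch (the (start, finish, value) tuple layout and the function name compute_p are assumptions for illustration, not from the text):

```python
from bisect import bisect_right

def compute_p(intervals):
    """Given intervals (start, finish, value) sorted by finish time,
    return a 1-based list p where p[j] is the largest index i < j
    compatible with interval j, or 0 if there is none."""
    finishes = [f for (s, f, v) in intervals]
    p = [0] * (len(intervals) + 1)  # p[0] unused
    for j, (s, f, v) in enumerate(intervals, start=1):
        # number of intervals finishing no later than s
        # = 1-based index of the rightmost compatible interval
        p[j] = bisect_right(finishes, s)
    return p
```

This follows the usual convention that an interval finishing exactly when another starts is compatible with it.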
Compute-Opt(j)
  If j = 0 then
    Return 0
  Else
    Return max(vj+Compute-Opt(p(j)), Compute-Opt(j-1))
  Endif
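Translated directly into Python, the recursion looks as follows. The 1-based lists v and p (with index 0 unused in v) are an assumption to match the text's indexing; note that this direct version recomputes the same subproblems many times:

```python
def compute_opt(j, v, p):
    """Direct translation of Compute-Opt.
    v[1..n] are the weights, p[1..n] the compatibility indices."""
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j], v, p),
               compute_opt(j - 1, v, p))
```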
Compute-Opt(j) correctly computes OPT(j) for each j = 1,2,…,n.
Memoizing the Recursion We note that Compute-Opt is really only solving n + 1 different subproblems. How can we eliminate all the redundancy that's happening? We could store the value of Compute-Opt in a globally accessible place the first time we compute it and then simply use this precomputed value in place of all future recursive calls.
So, to re-implement: we will use an array M[0…n]. M[j] starts out with the value "empty" but holds the value of Compute-Opt(j) as soon as it is first determined. To determine OPT(n), we invoke M-Compute-Opt(n).
M-Compute-Opt(j)
  If j = 0 then
    Return 0
  Else if M[j] is not empty then
    Return M[j]
  Else
    Define M[j] = max(vj+M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    Return M[j]
  Endif
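A Python sketch of the memoized version, using None to represent "empty" (passing M as an argument rather than using a global is a stylistic choice):

```python
def m_compute_opt(j, v, p, M):
    """Memoized Compute-Opt. M[j] is None until OPT(j) is known;
    M[0] should be initialized to 0."""
    if j == 0:
        return 0
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
    return M[j]
```

A typical invocation initializes M = [0] + [None] * n and then calls m_compute_opt(n, v, p, M).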
Analyzing the Memoized Version The running time of M-Compute-Opt(n) is O(n) (assuming the input intervals are sorted by their finish times): each call either returns in constant time or fills in a previously empty entry of M, and there are only n + 1 entries to fill.
Computing a Solution in addition to its Value:
Find-Solution(j)
  If j = 0 then
    Output nothing
  Else
    If vj+M[p(j)] ≥ M[j-1] then
      Output j together with the result of Find-Solution(p(j))
    Else
      Output the result of Find-Solution(j-1)
    Endif
  Endif
Since Find-Solution calls itself recursively only on strictly smaller values, it makes a total of O(n) recursive calls; since it spends constant time per call, it returns an optimal solution in O(n) time, given the filled-in array M.
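A Python sketch of Find-Solution that returns the list of chosen interval indices rather than printing them (a small liberty with the pseudocode's Output):

```python
def find_solution(j, v, p, M):
    """Trace back through the filled array M and return the indices
    of an optimal set of intervals for the subproblem {1, ..., j}."""
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:
        # interval j belongs to an optimal solution
        return find_solution(p[j], v, p, M) + [j]
    return find_solution(j - 1, v, p, M)
```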
6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems
We now use the algorithm from 6.1 to summarize the basic principles of dynamic programming.
Designing the Algorithm The key to the efficient algorithm is really the array M. M[n] contains the value of the optimal solution on the full instance, and Find-Solution can be used to trace back through M efficiently and return an optimal solution itself.
The point to realize is that we can directly compute the entries in M by an iterative algorithm, rather than using memoized recursion. We just start with M[0] = 0 and keep incrementing j. So,
Iterative-Compute-Opt
  M[0] = 0
  For j = 1, 2, ..., n
    M[j] = max(vj+M[p(j)], M[j-1])
  Endfor
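The iterative version in Python, under the same assumed 1-based convention for v and p:

```python
def iterative_compute_opt(v, p):
    """Fill M[0..n] bottom-up and return it.
    v[1..n] are the weights (v[0] unused), p[1..n] as in the text."""
    n = len(v) - 1
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M
```

The returned array M can then be handed to Find-Solution to recover the optimal set itself.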
Analyzing the Algorithm The running time of Iterative-Compute-Opt is O(n), since it explicitly runs for n iterations and spends constant time in each.