

Chapter 6

  • In general, most problems do not have a natural greedy algorithm solution that works
  • Divide and conquer is often not strong enough to reduce exponential brute-force search down to polynomial time
  • Dynamic Programming: one implicitly explores the space of all possible solutions by decomposing the problem into a series of subproblems, and then building up correct solutions to larger and larger subproblems

6.1 Weighted Interval Scheduling: A Recursive Procedure

  • Each interval has a certain value (or weight), and we want to accept a set of maximum value.

Designing a Recursive Algorithm

  • n requests (1,…,n)
  • each request i specifies a start time si and finish time fi.
  • each request i also has a value or weight (vi)
  • compatible intervals: do not overlap
  • goal: select a subset of mutually compatible intervals so as to maximize the sum of the values of the selected intervals
  • sort requests in order of nondecreasing finish time
  • define p(j), for an interval j, to be the largest index i<j such that intervals i and j are disjoint
    • the rightmost interval that ends before j begins
  • p(j) = 0, if no request i<j is disjoint from j
  • Optimal solution O: either interval n belongs to O or it does not
    • if it does, no interval indexed strictly between p(n) and n can belong to O
    • if it does not, O equals the optimal solution for requests {1,…,n-1}
  • For any value of j between 1 and n, let Oj denote the optimal solution to the problem consisting of requests {1,…,j} and let OPT(j) denote the value of this solution
  • OPT(j) = max(vj + OPT(p(j)), OPT(j-1))
    • request j belongs to an optimal solution on the set {1,2,…,j} if and only if vj + OPT(p(j)) >= OPT(j-1)

  • The number of recursive Compute-Opt calls grows very quickly, since the same subproblems are recomputed many times

  • we have not achieved a polynomial-time solution
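
The plain recursion above can be sketched in Python (my illustration, not from the text); I assume each request is a (start, finish, value) tuple already sorted by finish time, and the names compute_p and compute_opt are made up:

```python
import bisect

def compute_p(intervals):
    """intervals: list of (start, finish, value) sorted by finish time.
    Returns p[0..n] (1-based in p[1..n]) where p[j] is the largest index
    i < j whose finish time is <= the start time of j, or 0 if none."""
    finishes = [f for _, f, _ in intervals]
    p = [0] * (len(intervals) + 1)
    for j, (s, _, _) in enumerate(intervals, start=1):
        # rightmost interval among 1..j-1 that ends at or before s
        p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
    return p

def compute_opt(j, intervals, p):
    """Direct recursion: correct, but exponential time, because the
    same subproblems are recomputed over and over."""
    if j == 0:
        return 0
    _, _, v = intervals[j - 1]
    return max(v + compute_opt(p[j], intervals, p),
               compute_opt(j - 1, intervals, p))
```

For example, on the three intervals (1,3,2), (2,5,4), (4,6,4), intervals 1 and 3 are compatible and give the optimum value 6.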

Memoizing the Recursion

  • The previous algorithm runs in exponential time because of the redundancy in its recursive calls: it issues the same Compute-Opt calls over and over
  • Memoization: saving values that have already been computed in a globally-accessible place
    • Have an array M[0…n], where M[j] starts with the value “empty” and then stores the value of Compute-Opt(j) as soon as it is first determined.
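
A minimal memoized sketch in Python (names and the (start, finish, value) tuple representation are my assumptions, not from the text); here the quadratic inner loop computing p(j) is kept simple for clarity:

```python
def weighted_interval_opt(intervals):
    """intervals: (start, finish, value) tuples sorted by finish time.
    Returns OPT(n) using memoized recursion, as in M-Compute-Opt."""
    n = len(intervals)
    # p[j]: largest i < j whose finish time is <= start of j (0 if none);
    # a simple O(n^2) scan, for clarity
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j - 1, 0, -1):
            if intervals[i - 1][1] <= intervals[j - 1][0]:
                p[j] = i
                break
    M = [None] * (n + 1)   # None plays the role of "empty"
    M[0] = 0
    def m_compute_opt(j):
        if M[j] is None:   # each entry is filled in at most once
            v = intervals[j - 1][2]
            M[j] = max(v + m_compute_opt(p[j]), m_compute_opt(j - 1))
        return M[j]
    return m_compute_opt(n)
```

Since each of the n+1 entries of M is filled at most once, and each fill does constant work beyond its recursive calls, the recursion as a whole is O(n) once the intervals are sorted.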

Analyzing the Memoized Version

  • Runtime: O(n) ← assuming the input intervals are sorted by finish time
  • Runtime is bounded by a constant times the number of calls ever issued
  • No explicit upper bound on the number of calls, so we look for a good measure of “progress”
  • Useful progress measure: number of entries in M that are not “empty”
  • M has only n+1 entries ← at most O(n) calls

Computing a Solution in Addition to Its Value

  • We could maintain an array S so that S[i] contains an optimal set of intervals among {1,2,…,i}.
  • If we explicitly maintain S, we increase runtime by an additional factor of O(n)
  • To avoid this, we recover the optimal solution from values saved in the array M after the optimum value has been computed
    • we want to “trace back” through array M to find the set of intervals in the optimal solution
    • Find-Solution: calls itself recursively on strictly smaller values, makes a total of O(n) recursive calls and spends constant time per call.
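
A sketch of the trace-back in Python (my illustration; it assumes M[0…n] already holds the OPT values, p holds the precomputed p(j) values, and intervals are (start, finish, value) tuples):

```python
def find_solution(j, intervals, p, M):
    """Trace back through M to recover the indices (1-based) of an
    optimal set of intervals among {1,...,j}."""
    if j == 0:
        return []
    v = intervals[j - 1][2]
    if v + M[p[j]] >= M[j - 1]:
        # interval j belongs to an optimal solution; continue from p(j)
        return find_solution(p[j], intervals, p, M) + [j]
    # otherwise interval j is not needed; continue from j-1
    return find_solution(j - 1, intervals, p, M)
```

Each call recurses on a strictly smaller index, so there are O(n) calls with constant work per call, matching the bound in the text.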

Personal Thoughts

The actual algorithms in this section were a little bit difficult to follow, but the concept of memoization makes a lot of sense to me. I think this week's problem set will help me understand the concept of finding an optimal solution using dynamic programming better.

Readability: 6.0 Interesting: 6.0

courses/cs211/winter2018/journals/patelk/chapter6.1522073336.txt.gz · Last modified: by patelk