Chapter 6 - Dynamic Programming
Section 6.1
- Summary & Motivations: Section 6.1 is Weighted Interval Scheduling: A Recursive Procedure. In this section, we revisit the interval scheduling problem from Section 1.2. Whereas before we wanted to pick the maximum number of non-overlapping intervals, now each interval has a value and we want to maximize the total value of the intervals we select. The solution to this problem rests on the notion that each interval either is or is not in the optimal solution. If it is, the optimal solution's value equals that interval's value plus the value of the optimal solution to the subproblem ending at the latest interval compatible with the one in question. If the interval is not in the optimal solution, the optimal solution's value equals the optimal solution for the interval just before it (NOT the previous compatible one, just the previous one).
- About the Algorithms: In order to solve this problem, we have to give a few definitions. We say that p(j) is the largest index i < j such that interval i is compatible with interval j (i.e., interval i ends before interval j starts), or 0 if no such interval exists. Further, we say that OPT(j) is the value of the optimal solution to requests 1 through j. Considering the is/is not choice mentioned in the Summary & Motivations section, OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)). We see here the idea of subproblems. Each optimal solution involves knowing the optimal solutions to subproblems. This definition, though it yields a recursive algorithm, yields an algorithm with exponential runtime in the worst case. We can fix that with memoization, though, since the recursion only ever needs n+1 distinct values, OPT(0) through OPT(n), but recomputes them many times over. Assuming the input intervals are sorted by finish time and the p(j) values are precomputed, the memoized algorithm is O(n). We find the specific set of intervals in the optimal solution by backtracking through the memoization array, using the intuition that an interval j is included if and only if v_j + OPT(p(j)) >= OPT(j-1).
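The memoized recursion above can be sketched in Python. This is my own sketch, not the book's pseudocode: I'm assuming intervals arrive as (start, finish, value) tuples already sorted by finish time, and I use 1-based indices so M and p line up with the text's OPT(j) and p(j).

```python
import bisect

def compute_p(intervals):
    """p[j] = largest i < j whose finish time is <= interval j's start (0 if none).

    Assumes intervals = [(start, finish, value), ...] sorted by finish time.
    """
    finishes = [f for (_, f, _) in intervals]  # sorted, since input is sorted by finish
    p = [0] * (len(intervals) + 1)
    for j, (start, _, _) in enumerate(intervals, start=1):
        # number of earlier intervals ending by `start` = 1-based index of the
        # last compatible interval (or 0 if there is none)
        p[j] = bisect.bisect_right(finishes, start, 0, j - 1)
    return p

def opt_memoized(intervals):
    """Value of the optimal solution, via the recurrence with memoization."""
    p = compute_p(intervals)
    M = {0: 0}  # memoization table: M[j] = OPT(j)
    def m(j):
        if j not in M:
            v_j = intervals[j - 1][2]
            M[j] = max(v_j + m(p[j]), m(j - 1))  # include j vs. skip j
        return M[j]
    return m(len(intervals))
```

For example, on the four intervals (1,3,2), (2,5,4), (4,6,4), (6,8,7), the optimum takes intervals 1, 3, and 4 for a total value of 13.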
- My Questions: So far, it's not completely clear to me what the differences are between dynamic programming problems and divide and conquer problems.
- Second Time Around: I didn't understand the concept of p(j) completely the first time in class, but it makes sense now.
- Note to Self: When an algorithm's implementation does not have an obvious or explicit upper bound on the number of calls, look for a measure of progress (p 257).
- Readability: 8 - I thought this was pretty easy to read.
Section 6.2
- Summary & Motivations: Section 6.2 is Principles of Dynamic Programming: Memoization or Iteration over Subproblems. As the title suggests, this short subsection is about iteration vs recursion with memoization for dynamic programming problems. We conclude in this subsection that all dynamic programming problems (or, at least the ones explored in this chapter) can be represented either as a memoized recursive algorithm or as an iterative algorithm, but we choose to henceforth talk about the iterative formulations.
- About the Algorithms: This time, instead of memoizing and recursing, we simply start with M[0] = 0, and fill in the array iteratively using the same rule as before: M[j] = max(v_j + M[p(j)], M[j-1]). We still use Find-Solution from Section 6.1 to find the set of intervals in the optimal solution, as we did when the algorithm was written recursively. This algorithm is “clearly” O(n) (p 259).
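The iterative fill plus the Find-Solution backtrack can be sketched as follows. Again this is my own Python, not the book's pseudocode; the (start, finish, value) tuple format sorted by finish time is my assumed input, and here I compute p(j) with a simple quadratic scan to keep the sketch self-contained.

```python
def weighted_interval_iterative(intervals):
    """Return (optimal value, 1-based indices of chosen intervals).

    Assumes intervals = [(start, finish, value), ...] sorted by finish time.
    """
    n = len(intervals)
    # p[j]: largest i < j with finish_i <= start_j, else 0 (simple O(n^2) scan)
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j - 1, 0, -1):
            if intervals[i - 1][1] <= intervals[j - 1][0]:
                p[j] = i
                break
    # Iterative fill: M[j] = max(v_j + M[p[j]], M[j-1]), starting from M[0] = 0
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(intervals[j - 1][2] + M[p[j]], M[j - 1])
    # Find-Solution: interval j is in the optimum iff v_j + M[p[j]] >= M[j-1]
    chosen, j = [], n
    while j > 0:
        if intervals[j - 1][2] + M[p[j]] >= M[j - 1]:
            chosen.append(j)
            j = p[j]  # jump to the last interval compatible with j
        else:
            j -= 1    # skip j
    return M[n], chosen[::-1]
```

Note the two loops mirror the two halves of the section: the forward pass fills M in O(n) (given p), and the backward pass recovers the actual intervals without storing them during the fill.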
- My Questions: How do we know when we should use a dynamic programming approach to solving a problem? Just when we can think of ways to break the problem into subproblems, and the subproblems satisfy the rules on page 260?
- Second Time Around:
- Note to Self:
- Readability: 9 - this section was short and sweet.
Section 6.3
- Summary & Motivations: Section 6.3 is Segmented Least Squares: Multi-way Choices.
- About the Algorithms:
- My Questions:
- Second Time Around:
- Note to Self:
- Readability:
