Table of Contents
Chapter 6: Dynamic Programming
6.1: Weighted Interval Scheduling: A Recursive Procedure
We can create a more general version of the interval scheduling problem, one that cannot be solved with a greedy algorithm. Using dynamic programming, we can account for the intervals not all having the same value. We still have n requests, each with a start time and a finish time; however, each interval now also carries a value. Two intervals are still compatible if they don't overlap, and our goal is to maximize the total value of the selected intervals.
If we sort the requests by finish time, we get a natural order in which to consider intervals. Any set we select must consist of pairwise disjoint (compatible) intervals. As with a greedy algorithm, we reason about an optimal solution; here, however, finding the optimal solution involves looking at optimal solutions for smaller subproblems. Define p(j) as the largest index i < j such that interval i is compatible with interval j (with p(j) = 0 if no such interval exists). We can then solve the problem recursively, where OPT(j) = max(vj + OPT(p(j)), OPT(j-1)), and interval j belongs to an optimal solution on the first j intervals if and only if vj + OPT(p(j)) >= OPT(j-1). This gives us a recursive algorithm, Compute-Opt, for the optimal value.
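The recurrence above can be sketched directly as a recursive procedure. This is a minimal illustration, not the book's pseudocode; the tuple representation (start, finish, value) and the function names are assumptions for the example.

```python
import bisect

def compute_opt(intervals):
    """Return OPT(n) by direct recursion (exponential time).

    intervals: list of (start, finish, value) tuples; representation
    is illustrative, not taken from the text."""
    # Sort by finish time so that p(j) is well defined.
    ivs = sorted(intervals, key=lambda iv: iv[1])
    starts = [iv[0] for iv in ivs]
    finishes = [iv[1] for iv in ivs]
    values = [iv[2] for iv in ivs]
    n = len(ivs)

    def p(j):
        # Largest i < j with finish(i) <= start(j); 0 if none (1-indexed).
        return bisect.bisect_right(finishes, starts[j - 1], 0, j - 1)

    def opt(j):
        if j == 0:
            return 0
        # OPT(j) = max(v_j + OPT(p(j)), OPT(j-1))
        return max(values[j - 1] + opt(p(j)), opt(j - 1))

    return opt(n)
```

For example, on the four intervals (0,3,2), (2,5,4), (4,7,4), (6,9,7), the procedure selects the second and fourth intervals for a total value of 11.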
This method, however, has an exponential running time, which is very undesirable: the tree of recursive calls made by Compute-Opt can grow exponentially, since the same subproblems are recomputed many times. We can modify the algorithm to run in polynomial time by storing the value of each OPT(j) the first time it is computed in a globally accessible array that all recursive calls can consult. This technique is called memoization, and it brings the running time down to O(n) beyond the initial sorting.
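The memoized version can be sketched as follows, again under the assumed (start, finish, value) representation; the array name M follows the section's convention, while the helper names are illustrative.

```python
import bisect

def compute_opt_memo(intervals):
    """Memoized Compute-Opt: each OPT(j) is computed once."""
    ivs = sorted(intervals, key=lambda iv: iv[1])
    starts = [iv[0] for iv in ivs]
    finishes = [iv[1] for iv in ivs]
    values = [iv[2] for iv in ivs]
    n = len(ivs)
    # Precompute p(j) for j = 1..n (0 means "no compatible interval").
    p = [bisect.bisect_right(finishes, starts[j - 1], 0, j - 1)
         for j in range(1, n + 1)]
    M = [None] * (n + 1)  # globally accessible memo array
    M[0] = 0

    def opt(j):
        if M[j] is None:  # compute each entry at most once
            M[j] = max(values[j - 1] + opt(p[j - 1]), opt(j - 1))
        return M[j]

    return opt(n)
```

Because each of the n + 1 entries of M is filled in at most once, and each fill-in takes constant time, the recursion does only O(n) work in total.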
As of right now, this solution only provides us with the optimal value. If we want the optimal set of intervals as well, we can extend the memoized algorithm to trace back through the filled-in array, using the condition vj + OPT(p(j)) >= OPT(j-1) to decide whether interval j was included. This Find-Solution step also runs in O(n) time.
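The traceback can be sketched as below: build the array of optimal values, then walk backward from n, including interval j exactly when vj + OPT(p(j)) >= OPT(j-1). The function and variable names are illustrative.

```python
import bisect

def find_solution(intervals):
    """Return (optimal value, list of chosen intervals)."""
    ivs = sorted(intervals, key=lambda iv: iv[1])
    starts = [iv[0] for iv in ivs]
    finishes = [iv[1] for iv in ivs]
    values = [iv[2] for iv in ivs]
    n = len(ivs)
    p = [bisect.bisect_right(finishes, starts[j - 1], 0, j - 1)
         for j in range(1, n + 1)]
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(values[j - 1] + M[p[j - 1]], M[j - 1])
    # Trace back: include j exactly when v_j + OPT(p(j)) >= OPT(j-1).
    chosen, j = [], n
    while j > 0:
        if values[j - 1] + M[p[j - 1]] >= M[j - 1]:
            chosen.append(ivs[j - 1])
            j = p[j - 1]
        else:
            j -= 1
    return M[n], list(reversed(chosen))
```

Each step of the traceback moves j strictly downward, so recovering the set takes O(n) time on top of filling in M.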
6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems
We can now develop a general dynamic programming approach that iterates over subproblems. The key is an array M whose entry M[j] records the solution to the j-th subproblem. We can compute the entries of M with an iterative algorithm rather than a recursive one. This algorithm runs in O(n) time, as it performs n iterations and spends constant time on each.
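For weighted interval scheduling, the iterative fill-in of M can be sketched as follows; here the values and the p(j) indices are assumed to be already computed, so only the O(n) loop itself is shown.

```python
def iterative_compute_opt(values, p):
    """Fill M[0..n] bottom-up and return M[n].

    values[j-1] is v_j; p[j-1] is p(j), the largest i < j compatible
    with interval j (0 if none). Both are assumed precomputed.
    """
    n = len(values)
    M = [0] * (n + 1)
    for j in range(1, n + 1):  # n iterations, constant work each
        M[j] = max(values[j - 1] + M[p[j - 1]], M[j - 1])
    return M[n]
```

Because the subproblems are solved in increasing order of j, every entry M[p(j)] and M[j-1] is already available when M[j] is computed, which is exactly the "natural ordering" property discussed next.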
Problems solved by dynamic programming share the following properties: there are only a polynomial number of subproblems; the solution to the overall problem can be easily computed from the solutions to the subproblems; and there is a natural ordering on the subproblems, from smallest to largest. It may be difficult to determine whether it is easier to first figure out how to link the subproblems together, or to first figure out how to solve the subproblems themselves.
6.3: Segmented Least Squares: Multi-way Choices
The problems addressed so far in this chapter use recurrences based on binary choices. It is also possible to use a recurrence with multi-way choices to find an optimal solution. The motivating problem is similar to finding the line of best fit for a set of points in the plane, where the goal is to find the line with minimum error. A natural thought is to pass a set of lines through the points to minimize the error. Without constraining the number of lines, however, this is not a well-formulated problem: with enough lines the error can be driven to zero. Brute-force search over all ways of partitioning the points is also prohibitively expensive.
This well-formulated version is known as the Segmented Least Squares problem. It is the backbone of change detection: given a sequence of data points, we want to find the points where a change occurs. In the Segmented Least Squares problem, we partition the points into contiguous segments and charge each segment a penalty equal to a fixed multiplier C, in addition to its least-squares error, so the objective is the total error plus C times the number of segments.
To solve it, we want a polynomial number of subproblems obtained by partitioning the n points. Letting e(i, j) denote the minimum error of a single line through points i through j, the recurrence OPT(j) = min over 1 <= i <= j of (e(i, j) + C + OPT(i-1)) gives an algorithm that runs in O(n^3) time.
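A sketch of this recurrence follows, assuming the points are given as (x, y) pairs sorted by x and that C is the per-segment penalty; computing each e(i, j) by the standard least-squares formulas takes O(n) time, for O(n^3) overall. The function names are illustrative.

```python
def segmented_least_squares(points, C):
    """O(n^3) DP: OPT(j) = min over i of e(i, j) + C + OPT(i-1).

    points: list of (x, y) pairs sorted by x; C: per-segment penalty.
    Returns the minimum total cost OPT(n)."""
    n = len(points)
    xs = [pt[0] for pt in points]
    ys = [pt[1] for pt in points]

    def err(i, j):
        # Squared error of the best-fit line through points i..j
        # (0-indexed, inclusive), via the closed-form slope/intercept.
        m = j - i + 1
        sx = sum(xs[i:j + 1]); sy = sum(ys[i:j + 1])
        sxx = sum(x * x for x in xs[i:j + 1])
        sxy = sum(x * y for x, y in zip(xs[i:j + 1], ys[i:j + 1]))
        denom = m * sxx - sx * sx
        a = (m * sxy - sx * sy) / denom if denom else 0.0
        b = (sy - a * sx) / m
        return sum((y - a * x - b) ** 2
                   for x, y in zip(xs[i:j + 1], ys[i:j + 1]))

    # Precompute e(i, j) for all pairs: O(n^2) pairs, O(n) work each.
    e = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            e[i][j] = err(i, j)

    # Fill OPT[0..n] bottom-up over the multi-way choice of i.
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(e[i - 1][j - 1] + C + OPT[i - 1]
                     for i in range(1, j + 1))
    return OPT[n]
```

For collinear points the best solution is a single segment with zero error, so the cost is just the one penalty C; each additional genuine "change" in the data adds another segment only when its error savings exceed C.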
