courses:cs211:winter2012:journals:ian:chapter6 — sturdyi, 2012/03/28
====== 6.2 Principles of Dynamic Programming ======
  * This solution can be expressed not only as a recursive calculation with memoization but also as iteration over the subproblems. Instead of recursively solving the subproblems as needed, start with the last (what happens if the second-to-last task is included) and work backwards. The details are the same; the difference is merely in the logic behind scheduling the solution of the subproblems. In general, dynamic programming is useful when there are a polynomial number of subproblems, the solution to the original problem can easily be computed from the solutions to the subproblems, and there is a natural ordering on the subproblems with an easy-to-compute recurrence.
  * How is their criterion (ii) different from the second part of (iii) (the existence of a recurrence)?
  * A note, at least, on the polynomial number of subproblems (and why the iterative setup is not always equivalent to the recursive): in Abstract Algebra we recently got an extra-credit problem implementing a dynamic algorithm finding the number of three-free permutations of the first n natural numbers (rather convenient timing, really). The algorithm determined, given a set of numbers yet to be placed, whether each one of them could be placed next; for each one that could, the algorithm was recursively applied to determine the number of valid solutions that placed that number next. There are really 2^n subproblems, one per subset of numbers still to place, so iterating over all of them would be exponential; the memoized recursion only ever solves the subproblems actually reached, which can be far fewer.
  * No complaints.
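The iterative (bottom-up) formulation described in the first bullet might look like the following sketch for weighted interval scheduling. The function and variable names are mine, not the book's; intervals are assumed to be (start, finish, weight) tuples.

```python
import bisect

def schedule_value(intervals):
    """Max total weight of a set of pairwise non-overlapping intervals."""
    ivs = sorted(intervals, key=lambda t: t[1])  # sort by finish time
    finishes = [f for _, f, _ in ivs]
    n = len(ivs)
    # p[j]: number of intervals finishing no later than interval j starts,
    # i.e. the largest compatible prefix for interval j
    p = [bisect.bisect_right(finishes, ivs[j][0]) for j in range(n)]
    M = [0] * (n + 1)  # M[j] = optimum over the first j intervals
    for j in range(1, n + 1):  # iterate over subproblems instead of recursing
        _, _, w = ivs[j - 1]
        # either skip interval j, or take it plus the best compatible prefix
        M[j] = max(M[j - 1], w + M[p[j - 1]])
    return M[n]
```

The recursive memoized version would compute the same `M` values, just in demand-driven order rather than left to right.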
+ | |||
+ | ====== 6.3 Segmented Least Squares ======= | ||
+ | |||
  * Dynamic programming does not require binary choice. Segmented least squares attempts to determine the natural number of kinks in a set of points, minimizing the sum of the squared errors of the fitted line segments plus a fixed penalty for each segment. One can thus make a pass through the points, recursively determining the best way to fit lines to prefixes of the point set. Computing the least-squares errors for all O(n^2) intervals takes O(n^3); the dynamic program itself takes O(n^2).
  * No questions.
  * No comments.
  * No complaints.
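A sketch of the recurrence above: OPT(j) is the cheapest partition of the first j points into contiguous segments, paying the squared error of the best-fit line on each segment plus a fixed penalty C per segment. The names and the plain-Python least-squares fit are my own, not the book's.

```python
def line_error(pts):
    """Sum of squared residuals of the ordinary-least-squares line through pts."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    denom = n * sxx - sx * sx
    if denom == 0:               # single point (or all x equal): degenerate fit
        a, b = 0.0, sy / n
    else:
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
    return sum((y - a * x - b) ** 2 for x, y in pts)

def seg_least_squares(pts, C):
    """Minimum total cost (squared error + C per segment) over all partitions."""
    n = len(pts)
    # e[i][j]: error of one line through pts[i..j]; O(n^3) to fill in
    e = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            e[i][j] = line_error(pts[i:j + 1])
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        # the last segment is pts[i-1..j-1] for some i; O(n^2) for the DP itself
        OPT[j] = min(e[i - 1][j - 1] + C + OPT[i - 1] for i in range(1, j + 1))
    return OPT[n]
```

With a small penalty, a point set with a kink is split into two segments; collinear points stay in one.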
+ | |||
+ | ====== 6.4 Subset Sums and Knapsacks ====== | ||
+ | |||
  * The knapsack problem is the selection of a subset (without replacement) of items whose total weight is as close as possible to a capacity without exceeding it. Here one cannot recurse merely over items, since unlike in weighted interval scheduling the subproblems for a given choice are not independent of the choices already made (they affect the remaining capacity). Therefore, one needs to define subproblems in two variables, the set of goods still to be considered and the weight still to fill (but one can still make the choices sequentially, considering each item in turn).
  * No questions.
  * No insights.
  * Again with the introduction of the false starts first? I can see their reasons, but still do not much like it.
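The two-variable table described in the first bullet might be sketched as follows for the subset-sum version (names are illustrative, not the book's):

```python
def subset_sum(weights, W):
    """Max achievable total weight of a subset of weights not exceeding W."""
    n = len(weights)
    # M[i][w] = best total using the first i items with capacity w:
    # subproblems are indexed by BOTH items considered and remaining capacity
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi = weights[i - 1]
        for w in range(W + 1):
            M[i][w] = M[i - 1][w]                      # skip item i
            if wi <= w:                                # or take it, shrinking capacity
                M[i][w] = max(M[i][w], wi + M[i - 1][w - wi])
    return M[n][W]
```

The table has (n+1)(W+1) entries, so the running time is O(nW): polynomial in the magnitude of W but not in the size of its binary encoding, hence pseudo-polynomial.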