  * **Summary & Motivations:** Section 6.3 is Segmented Least Squares: Multi-way Choices. In this section, we take a page out of the statistics book. Given a set of data points, ordered by their x-coordinate, we want to determine the best-fit lines (plural!) that minimize the total squared error plus the total cost, where each segment we use has a fixed cost C.
  * **About the Algorithms:** The subproblems for this algorithm are prefixes of the point set: OPT(j) is the optimal value for points p_1, ..., p_j, and the multi-way choice is which segment ends at the last point. **6.6** says that if the last segment of the optimal solution is p_i, ..., p_n, then the value of the optimal solution is OPT(n) = e_{i,n} + C + OPT(i-1) (p. 265). In other words, the value of the optimal solution is the error of a single segment going from point i to point n, plus the cost C of that segment, plus the optimal solution for points 1 through i-1. We get the recurrence from **6.6**. The algorithm, then, uses a memoization array M, set to all zeros initially. For all pairs i ≤ j, we compute the least-squares error e_{i,j} for the segment p_i, ..., p_j, and then we use the recurrence to fill in M[j]. Computing all the least-squares errors is O(n^3) and filling in M is O(n^2), so the algorithm is O(n^3) in total, or O(n^2) once the errors are calculated. (A small code sketch of this appears after the list.)
  * **My Questions:** Not a question, but I wish the authors had not left it to the reader in an exercise to improve the runtime to O(n^2) in total.
  * **Second Time Around:** Now that we've talked about the knapsack problem in class, too, the idea of building up an optimal solution iteratively in dynamic programming is clearer to me: what's the optimal solution for input 1 alone, then inputs 1 and 2, then 1 through 3, and so on.
  * **Note to Self:** O(n^3) is still "polynomial"...but that doesn't mean I like it.
  * **Readability:** 7 - a little more math than the other sections in this chapter so far, but still not bad.
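
Since the recurrence is so compact, here is a minimal sketch (in Python) of the algorithm described in **About the Algorithms** above. The function name, the (x, y) point representation, and the closed-form line-fitting formulas are my own assumptions for illustration; only the recurrence and the memoization array M come from the book.

<code python>
# A minimal sketch of the O(n^3) segmented least squares algorithm, assuming
# points are (x, y) tuples sorted by x and C is the per-segment cost.  The
# names here are my own; the book only specifies the recurrence
# OPT(j) = min over 1 <= i <= j of ( e_{i,j} + C + OPT(i-1) ).
def segmented_least_squares(points, C):
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    # e[i][j] = least-squares error of the best single line through p_i..p_j
    # (1-indexed to match the book; row/column 0 are unused).
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            px = xs[i - 1:j]
            py = ys[i - 1:j]
            m = j - i + 1
            sx, sy = sum(px), sum(py)
            sxx = sum(x * x for x in px)
            sxy = sum(x * y for x, y in zip(px, py))
            denom = m * sxx - sx * sx
            a = (m * sxy - sx * sy) / denom if denom != 0 else 0.0
            b = (sy - a * sx) / m
            e[i][j] = sum((y - a * x - b) ** 2 for x, y in zip(px, py))

    # M[j] = OPT(j), the optimal value for the first j points; M[0] = 0.
    M = [0.0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = min(e[i][j] + C + M[i - 1] for i in range(1, j + 1))
    return M[n]
</code>

As in the text, computing all the e_{i,j} values is the O(n^3) part; once they are in hand, the loop that fills M is only O(n^2).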