====== Chapter 6.2: Memoization ======

The key idea behind an efficient dynamic programming algorithm is the memoized array //M//. It records the values of optimal solutions to the subproblems on the intervals //{1, 2, ..., j}// for each //j//, and it uses the recurrence //M[j] = max(v_j + M[p(j)], M[j-1])// to define each entry in terms of values that come earlier in the array. Once we have //M//, the problem is solved: //M[n]// contains the value of the optimal solution, and Find-Solution can trace back through //M// efficiently and return an optimal solution itself.

===== Iterative Memoization =====

Iterative-Compute-Opt():
  * M[0] = 0
  * For j = 1, 2, ..., n:
    * M[j] = max(v_j + M[p(j)], M[j-1])

This algorithm also runs in O(n) time. (A runnable sketch of it, together with Find-Solution, appears at the bottom of this page.)

To develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
  - There are only a polynomial number of subproblems.
  - The solution to the original problem can be easily computed from the solutions to the subproblems.
  - There is a natural ordering on the subproblems from "smallest" to "largest", together with an easy-to-compute recurrence that determines the solution to a subproblem from the solutions to some number of smaller subproblems.

**NOTE:** Sometimes it may be easier to start designing such an algorithm by formulating a set of subproblems that looks natural, and then figuring out a recurrence that links them together.

I would rate this section an 8. It was very easy to read, but I don't understand why the authors made this its own subsection. I feel like they could have combined it with the previous section, when they first brought up memoization, or just added it as a discussion at the end of 6.1. Other than that, though, I understand what's going on!
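
===== Python Sketch =====

A minimal runnable sketch of Iterative-Compute-Opt together with the Find-Solution trace-back, assuming jobs are given as ''(start, finish, value)'' tuples sorted by finish time. The tuple representation and the ''compute_p'' helper are my own illustrative assumptions; only the two routines themselves come from the chapter.

<code python>
from bisect import bisect_right

def compute_p(jobs):
    """p[j] = largest index i < j whose job is compatible with job j.
    Jobs are (start, finish, value) tuples sorted by finish time.
    Indices are 1-based to match the pseudocode; p[j] = 0 when no
    earlier job is compatible. (Hypothetical helper, not from the book.)"""
    finishes = [f for (_, f, _) in jobs]
    # Count of jobs finishing at or before job j's start time.
    return [0] + [bisect_right(finishes, s) for (s, _, _) in jobs]

def iterative_compute_opt(jobs):
    """Fill the memoized array M, where M[j] is the value of an optimal
    solution using only jobs 1..j. Returns (M, p)."""
    n = len(jobs)
    p = compute_p(jobs)
    M = [0] * (n + 1)                        # M[0] = 0
    for j in range(1, n + 1):
        v_j = jobs[j - 1][2]
        M[j] = max(v_j + M[p[j]], M[j - 1])  # the recurrence
    return M, p

def find_solution(j, M, p, jobs):
    """Trace back through M and return the 1-based indices of an
    optimal set of jobs."""
    chosen = []
    while j > 0:
        v_j = jobs[j - 1][2]
        if v_j + M[p[j]] >= M[j - 1]:        # job j is in some optimal solution
            chosen.append(j)
            j = p[j]                         # skip the jobs that conflict with j
        else:
            j -= 1                           # job j is not needed
    return list(reversed(chosen))

# Example with three jobs sorted by finish time.
jobs = [(0, 3, 2), (1, 5, 4), (5, 9, 4)]     # (start, finish, value)
M, p = iterative_compute_opt(jobs)
print(M[-1], find_solution(len(jobs), M, p, jobs))   # prints: 8 [2, 3]
</code>

Once ''p'' is available, both ''iterative_compute_opt'' and ''find_solution'' do O(1) work per job, matching the chapter's O(n) bound; computing ''p'' itself with binary search adds an O(n log n) preprocessing step.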