The key idea behind an efficient dynamic programming algorithm is the memoized array M. It encodes the values of optimal solutions to the subproblems on the intervals {1, 2, …, j} for each j, and the iterative algorithm uses the recurrence to define the value of M[j] based on values that come earlier in the array. Once we have M, the problem is solved! M[n] contains the value of the optimal solution, and Find-Solution can be used to trace back through M efficiently and return an optimal solution.
Iterative-Compute-Opt:
  M[0] = 0
  For j = 1, 2, …, n
    M[j] = max(v_j + M[p(j)], M[j - 1])
  Endfor
This algorithm also runs in O(n) time.
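For concreteness, here is a minimal Python sketch of this bottom-up computation together with the traceback. It assumes, as in the weighted interval scheduling setup, that intervals are sorted by finish time, that v[j] holds the value of interval j, and that p[j] (the largest index i < j compatible with j) has been precomputed; the 1-based indexing and the padding entries at index 0 are just to match the book's notation.

    def iterative_compute_opt(n, v, p):
        """Fill M bottom-up: M[j] is the optimal value on intervals {1, ..., j}."""
        M = [0] * (n + 1)                    # M[0] = 0: the empty set of intervals
        for j in range(1, n + 1):
            # Either take interval j (plus the optimum on {1, ..., p(j)}) or skip it.
            M[j] = max(v[j] + M[p[j]], M[j - 1])
        return M

    def find_solution(n, M, v, p):
        """Trace back through M to recover one optimal set of intervals."""
        solution, j = [], n
        while j > 0:
            if v[j] + M[p[j]] >= M[j - 1]:   # interval j belongs to some optimum
                solution.append(j)
                j = p[j]
            else:
                j -= 1
        return list(reversed(solution))

    # Hypothetical example with three intervals; v[0] and p[0] are unused padding.
    v = [0, 2, 4, 4]
    p = [0, 0, 0, 1]
    M = iterative_compute_opt(3, v, p)
    print(M[3], find_solution(3, M, v, p))   # -> 6 [1, 3]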
In order to develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
(i) There are only a polynomial number of subproblems.
(ii) The solution to the original problem can be easily computed from the solutions to the subproblems (often the original problem is itself one of the subproblems).
(iii) There is a natural ordering on the subproblems from "smallest" to "largest," together with an easy-to-compute recurrence that determines the solution to a subproblem from the solutions to some number of smaller subproblems.
NOTE: Sometimes it may be easier to start the process of designing such an algorithm by formulating a set of subproblems that looks natural, and then figuring out a recurrence that links them together.
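This subproblems-first approach is exactly what the memoized (top-down) route expresses: write the subproblems OPT(j) as a recursive function with the recurrence inside, and let the memo table fill in lazily. A sketch under the same assumptions as the previous snippet:

    from functools import lru_cache

    def memoized_compute_opt(n, v, p):
        """Top-down version: subproblems OPT(j), memoized as they are first solved."""
        @lru_cache(maxsize=None)
        def opt(j):
            if j == 0:
                return 0
            # Recurrence: interval j is either in the optimal solution or not.
            return max(v[j] + opt(p[j]), opt(j - 1))
        return opt(n)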
I would rate this section an 8. It was very easy to read, but I don't understand why the authors made this its own subsection. I feel like they could have combined this with the previous section, when they first brought up memoization, or just added it as a discussion at the end of 6.1. Other than that, though, I understand what's going on!