courses:cs211:winter2014:journals:alyssa:chapter_6.2 (created 2014/03/24 21:11, current 2014/03/24 21:28, by hardnetta)
====== Chapter 6.2: Memoization ======
The key idea behind an efficient dynamic programming algorithm is the memoized array //M//. It encodes the notion that we are using the values of optimal solutions to the subproblems on intervals //{1, 2, ..., j}// for each //j//, and it uses the recurrence to define the value of //M[j]// in terms of values that come earlier in the array. Once we have //M//, the problem is solved: //M[n]// contains the value of an optimal solution, and Find-Solution can be used to trace back through //M// efficiently and return an optimal solution itself.
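As a sketch of the trace-back, here is one possible Find-Solution in Python. The instance below (values //v//, compatibility pointers //p//, and a pre-filled array //M//) is hypothetical, not from the text; //M// is assumed to satisfy the recurrence M[j] = max(v_j + M[p(j)], M[j-1]).

```python
# Sketch of Find-Solution: trace back through a filled memoized array M.
# The instance (v, p, M) is a made-up example; M[j] is assumed to equal
# max(v[j] + M[p[j]], M[j-1]), with M[0] = 0.

def find_solution(j, v, p, M):
    """Return the jobs in an optimal solution for jobs {1, ..., j}."""
    if j == 0:
        return []                          # no jobs left to consider
    if v[j] + M[p[j]] >= M[j - 1]:
        # Job j belongs to some optimal solution; continue from p(j).
        return find_solution(p[j], v, p, M) + [j]
    # Otherwise job j can be skipped; continue from j - 1.
    return find_solution(j - 1, v, p, M)

# Hypothetical 4-job instance (1-indexed; index 0 is unused padding):
v = [0, 2, 4, 4, 7]    # v[j] = value of job j
p = [0, 0, 0, 1, 2]    # p[j] = last job compatible with job j
M = [0, 2, 4, 6, 11]   # memoized optimal values

print(find_solution(4, v, p, M))  # [2, 4]
```

Each call decreases //j//, so the trace-back takes O(n) time beyond the time already spent filling //M//.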
===== Iterative Memoization =====
Iterative-Compute-Opt():
  * M[0] = 0
  * For j = 1, 2, ..., n:
    * M[j] = max(v_j + M[p(j)], M[j-1])

This algorithm also runs in O(n) time, since each of the n iterations fills in a single array entry with a constant amount of work.
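As a concrete sketch, here is the iterative algorithm in Python on a small hypothetical weighted-interval instance; the jobs, values //v//, and compatibility pointers //p// below are illustrative, not from the text.

```python
# Sketch of Iterative-Compute-Opt for weighted interval scheduling.
# Jobs are 1-indexed and sorted by finish time; p[j] is the largest
# index i < j whose job is compatible with job j (0 if none).

def iterative_compute_opt(v, p):
    """v[j], p[j] for j = 1..n (index 0 unused); returns the array M."""
    n = len(v) - 1
    M = [0] * (n + 1)              # M[0] = 0
    for j in range(1, n + 1):      # fill M left to right
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M

# Hypothetical instance: 4 jobs sorted by finish time.
v = [0, 2, 4, 4, 7]   # v[j] = value of job j
p = [0, 0, 0, 1, 2]   # p[j] = last job compatible with job j

M = iterative_compute_opt(v, p)
print(M)  # [0, 2, 4, 6, 11]; M[4] = 11 is the optimal value
```

Each //M[j]// only reads entries that were already filled in, which is exactly the "earlier in the array" property the recurrence relies on.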

In order to develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
  - There are only a polynomial number of subproblems.
  - The solution to the original problem can be easily computed from the solutions to the subproblems.
  - There is a natural ordering on subproblems from "smallest" to "largest," together with an easy-to-compute recurrence that determines the solution to a subproblem from the solutions to some number of smaller subproblems.
+ | |||
+ | **NOTE:** Sometimes it may be easier to start the process of designing such an algorithm by formulating a set of subproblems that looks natural, and then figuring out a recurrence that links them together. | ||
+ | |||
+ | I would rate this section an 8. It was very easy to read, but I don't understand why the authors made this its own subsection. I feel like they could have combined this with the previous section, when they first brought up memoization, |