

Chapter 6

6.1: Weighted Interval Scheduling: A Recursive Procedure

We have previously discussed Interval Scheduling with all weights equal to 1, but this section delves into a more general case of the problem where the values can vary. Unfortunately, the greedy algorithm we previously discussed for the unweighted case is no longer optimal here, so we need a new algorithm. In fact, we must move away from greedy algorithms altogether; as one may have guessed, we now use dynamic programming. The solution ultimately involves recursion and memoization. Memoization allows us to save each value after calculating it once so that it can be looked up easily in the future without being recalculated over and over. This is especially useful given all of the recursive calls, and it dramatically lowers the runtime: without memoization, the algorithm runs in exponential time, but with memoization it runs in O(n) time (assuming the intervals are already sorted by finish time; the initial sort costs O(n log n)), which is obviously a tremendous improvement. This goes to show the power of memoization in the right situation.
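A minimal sketch of the memoized recursion described above. The function name, the interval representation as (start, finish, value) tuples, and the use of bisect to compute p(j) (the last interval that finishes before interval j starts) are my own choices for illustration, not from the book:

```python
import bisect

def weighted_interval_scheduling(intervals):
    """Return the maximum total value of a set of non-overlapping intervals.

    intervals: list of (start, finish, value) tuples.
    """
    # Sort by finish time so the predecessor index p(j) is well defined.
    jobs = sorted(intervals, key=lambda t: t[1])
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)

    # p[j] = index (1-based) of the rightmost job that finishes at or
    # before job j starts; 0 if no such job exists.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        start = jobs[j - 1][0]
        p[j] = bisect.bisect_right(finishes, start, 0, j - 1)

    memo = {0: 0}  # base case: an empty prefix has value 0

    def opt(j):
        # Each opt(j) is computed once and cached, giving O(n) calls overall.
        if j not in memo:
            value = jobs[j - 1][2]
            # Either include job j (plus the best compatible prefix p[j])
            # or skip it and take the best of the first j-1 jobs.
            memo[j] = max(value + opt(p[j]), opt(j - 1))
        return memo[j]

    return opt(n)
```

Without the memo dictionary, the same recursion would recompute the same subproblems exponentially many times.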

This section was relatively interesting, as dynamic programming is something I saw in Theory of Computation that I really did not understand at the time, probably due to the difficulty of the problem it was used on. Everything in this section, however, makes perfect sense. Overall, I give it a 6/10.

6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems

This section builds off of the specific examples of the previous section to reach a more general understanding of the dynamic programming process. One important distinction it makes is that we will now iterate over subproblems instead of computing solutions recursively. The section then redefines the previous section's algorithm in terms of the more typical ideas of dynamic programming to serve as a building block. The main difference between the two versions of the algorithm is the previously mentioned iteration over subproblems. The section notes that every time this method works, there is an equivalent solution based on memoization. The properties we need satisfied to use dynamic programming are:

1. There are only a polynomial number of subproblems.
2. The solution to the original problem can be easily computed from the solutions to the subproblems.
3. There is a natural ordering of subproblems from smallest to largest, with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions of other, smaller subproblems.
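The iterative version described above can be sketched as follows: instead of recursing from the top, we fill an array M[0..n] in order, so each entry depends only on entries already computed. The names and the bisect-based lookup are my own illustrative choices, not from the text:

```python
import bisect

def weighted_interval_scheduling_iterative(intervals):
    """Bottom-up DP over subproblems for weighted interval scheduling.

    intervals: list of (start, finish, value) tuples.
    M[j] holds the optimal value using only the first j jobs
    (sorted by finish time), so M[n] is the answer.
    """
    jobs = sorted(intervals, key=lambda t: t[1])
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)

    M = [0] * (n + 1)  # M[0] = 0: no jobs, no value
    for j in range(1, n + 1):
        start, _, value = jobs[j - 1]
        # p = rightmost job that finishes at or before job j starts
        p = bisect.bisect_right(finishes, start, 0, j - 1)
        # Recurrence: take job j (plus M[p]) or skip it (M[j-1]).
        M[j] = max(value + M[p], M[j - 1])
    return M[n]
```

The loop makes the evaluation order explicit (smallest subproblem to largest), which is exactly the natural ordering that property 3 above requires.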

This section better begins to explain exactly what dynamic programming is as opposed to just giving a single example of how to do a problem in a specific way. It gets a 6.5/10.

courses/cs211/winter2018/journals/goldm/ch6.1522105756.txt.gz · Last modified: by goldm