

Chapter 6

6.1: Weighted Interval Scheduling: A Recursive Procedure

  • A more general version of the interval scheduling problem we solved with greedy algorithms earlier: each interval now carries a value (weight).
  • Dynamic programming solution: a recurrence equation that expresses the optimal solution (or its value) in terms of the optimal solutions to smaller sub-problems
  • Memoization:
    • Saving values that have already been computed to reduce run time.
    • Analysis on page 257; a memoized sketch appears after this list.
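
A minimal sketch of the memoized recursion, assuming jobs are given as (start, finish, value) tuples; the helper names (compute_opt, p, memo) are my own and not the book's exact pseudocode.

<code python>
import bisect

def weighted_interval_scheduling(jobs):
    # jobs: list of (start, finish, value); sort by finish time.
    jobs = sorted(jobs, key=lambda job: job[1])
    n = len(jobs)
    finishes = [job[1] for job in jobs]
    # p[j] = largest index i < j whose job finishes no later than job j starts, or -1.
    p = [bisect.bisect_right(finishes, jobs[j][0]) - 1 for j in range(n)]

    memo = {}  # saved values of OPT(j), so each subproblem is computed once

    def compute_opt(j):
        # OPT(j) = max(value_j + OPT(p(j)), OPT(j-1)), with OPT of an empty prefix = 0.
        if j < 0:
            return 0
        if j not in memo:
            memo[j] = max(jobs[j][2] + compute_opt(p[j]), compute_opt(j - 1))
        return memo[j]

    return compute_opt(n - 1)

# Example: jobs 1 and 3 are compatible, so the optimum is 2 + 4 = 6.
print(weighted_interval_scheduling([(0, 3, 2), (1, 4, 4), (3, 6, 4)]))
</code>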

6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems

  • Iterating over subproblems instead of computing solutions recursively
  • Uses the array M from the memoized recursive solution
  • We can compute the entries of M directly with an iterative algorithm, rather than through memoized recursion (see the sketch after this list).
  • Analysis on page 259.
  • Second approach to dynamic programming: iterative building up of subproblems
  • The subproblems for this approach must satisfy the following properties:
    • There are only a polynomial number of subproblems
    • The solution to the original problem can be easily computed from the solutions to the subproblems
    • There is a natural ordering on the subproblems from “smallest” to “largest” together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.
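
A minimal sketch of the iterative version for the same weighted interval scheduling problem: build up the array M from the smallest subproblem to the largest. Input format and variable names are my own assumptions.

<code python>
import bisect

def iterative_compute_opt(jobs):
    # jobs: list of (start, finish, value); sort by finish time.
    jobs = sorted(jobs, key=lambda job: job[1])
    n = len(jobs)
    finishes = [job[1] for job in jobs]
    # p[j] = last job (0-indexed) that finishes no later than job j starts, or -1.
    p = [bisect.bisect_right(finishes, jobs[j][0]) - 1 for j in range(n)]

    M = [0] * (n + 1)          # M[j] = optimal value over the first j jobs; M[0] = 0
    for j in range(1, n + 1):
        # Either skip job j, or take it plus the best compatible prefix.
        M[j] = max(M[j - 1], jobs[j - 1][2] + M[p[j - 1] + 1])
    return M[n]

# Same example as before; prints 6 after an O(n log n) sort and an O(n) pass.
print(iterative_compute_opt([(0, 3, 2), (1, 4, 4), (3, 6, 4)]))
</code>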

6.3: Segmented Least Squares: Multi-way Choices

  • Multi-way choices instead of binary choices
  • Deals with fitting line segments through an ordered set of points in the plane
  • Penalty of a partition:
    • The number of segments into which we partition the point set P, times a fixed, given multiplier C > 0, plus the sum of the error values of the optimal line through each segment
  • Design and analysis of the segmented least squares algorithm are on pages 264-266; a sketch of the recurrence appears after this list.
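
A hedged sketch of the recurrence OPT(j) = min over 1 <= i <= j of (e(i, j) + C + OPT(i - 1)), where e(i, j) is the squared error of the best-fit line through points i..j. The naive O(n^3) error computation and the degenerate-case handling are my own simplifications, not the book's exact code.

<code python>
def segmented_least_squares(points, C):
    # points: list of (x, y) pairs sorted by x; C: fixed penalty per segment.
    n = len(points)

    def segment_error(i, j):
        # Squared error of the least-squares line through points[i..j] (inclusive).
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx
        if denom == 0:
            # Single point (or all x equal): treat as zero error for this sketch.
            return 0.0
        a = (m * sxy - sx * sy) / denom
        b = (sy - a * sx) / m
        return sum((y - a * x - b) ** 2 for x, y in pts)

    # OPT[j] = minimum penalty for segmenting the first j points; OPT[0] = 0.
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(segment_error(i - 1, j - 1) + C + OPT[i - 1]
                     for i in range(1, j + 1))
    return OPT[n]
</code>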

6.4: Subset Sums and Knapsacks: Adding a Variable

  • Given a set of items, each with a given weight w_i, and a bound W on how much total weight we can carry
  • Knapsack problem: find a subset of items whose total weight does not exceed W and whose total value is as large as possible.
  • Design and analysis of the dynamic programming algorithm for the knapsack problem run from page 269 through page 271
  • The knapsack problem can be solved in O(nW) time, where n is the number of items and W is the weight bound (see the sketch below)
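
A minimal sketch of the O(nW) table-filling recurrence OPT(i, w) = max(OPT(i-1, w), v_i + OPT(i-1, w - w_i)), assuming integer weights given as lists parallel to the values; the function and variable names are my own.

<code python>
def knapsack(values, weights, W):
    # values, weights: parallel lists for n items; W: integer weight bound.
    n = len(values)
    # M[i][w] = best total value using the first i items within capacity w.
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            M[i][w] = M[i - 1][w]                      # skip item i
            if weights[i - 1] <= w:                    # or take it, if it fits
                M[i][w] = max(M[i][w],
                              values[i - 1] + M[i - 1][w - weights[i - 1]])
    return M[n][W]

# Example: taking the second and third items gives weight 50 and value 220.
print(knapsack([60, 100, 120], [10, 20, 30], 50))
</code>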

Final Thoughts (End of Chap 5, Beg of 6)

This chapter is a little more easily understood than last week's chapter. All in all, the knapsack problem is very intuitive and so is the idea of dynamic programming. Readability: 7/10
