

Chapter 6: Dynamic Programming

Intro

  • Summary:

There isn't always a greedy algorithm or a divide-and-conquer algorithm that solves a problem efficiently. Dynamic programming is a more subtle and more powerful technique. As the book puts it: "…the basic idea is drawn from the intuition of divide and conquer and is essentially the opposite of the greedy strategy: one implicitly explores the space of all of the possible solutions, by carefully decomposing things into a series of subproblems, and then building up correct solutions to larger and larger subproblems." The authors note that it comes "dangerously close to brute-force."

Section 1: Weighted Interval Scheduling: A Recursive Procedure

  • Summary:

This is like the interval scheduling problem, but every job has a weight/value, and we want to maximize the sum of the values of the jobs we select. Again we sort the jobs by finish time, so larger indexes mean later end times (job n finishes last). Notation: for a job j, p(j) is the largest index i < j such that job i is compatible with j (i.e. job i ends before job j starts).

Start off by saying something obvious about an optimal solution: it either includes job n (the one with the latest end time) or it doesn't. If it does include job n, then no job after p(n) can also be in the solution, so the optimal solution consists of job n plus the optimal solution to the smaller problem on jobs 1..p(n). If job n is not in the optimal solution, then the optimal solution is just the optimal solution on jobs 1..n-1. Whichever of those two gives the better value is the actual optimum. I.e. OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)). And then we know job j belongs to an optimal solution iff v_j + OPT(p(j)) >= OPT(j-1).

If we just ran this straight up with recursion (they give an algorithm, but it's pretty clear), it would take exponential time. BUT we can memoize! The recursion is only exponential because of redundancy: we recompute some of those OPT(x)'s a bunch of times. Instead, we compute each one once and store it in an array M[] when we compute it, so every later use is just a lookup instead of a recursive computation. Since each of OPT(0), OPT(1), …, OPT(n) is filled in exactly once, this takes O(n) time after sorting (wayyy better than exponential!). I already kind of mentioned how you'd recover an actual solution (not just its value): job j is in there iff v_j + OPT(p(j)) >= OPT(j-1). They give an algorithm for this traceback too, and show it also takes only O(n) time.
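The recurrence, the memoization array M[], and the traceback above can be sketched in Python. This is my own sketch, not the book's pseudocode: the (start, finish, value) tuple representation, the function name, and the use of binary search to compute p(j) are all my choices.

```python
from bisect import bisect_right

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, value) tuples.
    Returns (optimal value, list of chosen jobs)."""
    jobs = sorted(jobs, key=lambda job: job[1])  # sort by finish time
    n = len(jobs)
    finishes = [f for _, f, _ in jobs]
    # p[j-1] (for the 1-indexed job j) = number of jobs that finish by the
    # time job j starts, i.e. the largest index compatible with job j.
    p = [bisect_right(finishes, jobs[j][0]) for j in range(n)]

    # M[j] = OPT(j), the best value using only the first j jobs.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        _, _, v = jobs[j - 1]
        M[j] = max(v + M[p[j - 1]], M[j - 1])  # the recurrence

    # Traceback: job j is in an optimal solution iff v_j + M[p(j)] >= M[j-1].
    solution, j = [], n
    while j > 0:
        _, _, v = jobs[j - 1]
        if v + M[p[j - 1]] >= M[j - 1]:
            solution.append(jobs[j - 1])
            j = p[j - 1]  # jump to the last compatible job
        else:
            j -= 1
    return M[n], solution[::-1]
```

Both the fill loop and the traceback do O(1) work per job, matching the O(n) bound from the section (the sort and the p(j) computation add an O(n log n) preprocessing step).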

  • Insights and Questions:

None.

  • Readability:

Awesome! This book is great.

courses/cs211/winter2011/journals/camille/chapter6.1300154382.txt.gz · Last modified: 2011/03/15 01:59 by cobbc
CC Attribution-Noncommercial-Share Alike 4.0 International