

6.1 - Weighted Interval Scheduling: A Recursive Procedure

The Weighted Interval Scheduling Problem is a strictly more general version of the Interval Scheduling Problem, in which each interval has a certain value (or weight) and we want to accept a set of mutually compatible intervals of maximum total value.

Designing a Recursive Algorithm No natural greedy algorithm is known for this problem, which is why we're looking at dynamic programming. We use the notation from before, where we have n requests labeled 1, …, n, with each request i specifying a start time si and a finish time fi. Each interval i now also has a value or weight, vi. Two intervals are compatible if they do not overlap. For each interval j, we define p(j) to be the largest index i < j such that intervals i and j are compatible, with p(j) = 0 if no such interval exists.

Finding the optimal solution on intervals {1, 2, …, n} involves looking at the optimal solutions of smaller problems of the form {1, 2, …, j}. Thus, for any value of j between 1 and n, let Oj denote the optimal solution to the problem consisting of requests {1, …, j} and let OPT(j) denote the value of that solution.

We can say that OPT(j) = max(vj + OPT(p(j)), OPT(j-1)).

Request j belongs to an optimal solution on the set {1, 2, …, j} if and only if vj + OPT(p(j)) ≥ OPT(j-1).

This gives us a recursive algorithm to compute OPT(n), assuming that we have already sorted the requests by finishing time and computed the values of p(j) for each j.

 Compute-Opt(j)
  If j = 0 then
    Return 0
  Else
    Return max(vj+Compute-Opt(p(j)), Compute-Opt(j-1))
  Endif
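
To make this concrete, here is a minimal Python sketch of the recursive algorithm. The (start, finish, value) tuple representation and the helper name compute_p are our own choices for illustration, and it assumes the intervals are already sorted by finish time.

  import bisect

  def compute_p(intervals):
      """For each j = 1..n, the largest index i < j whose finish time is at
      most interval j's start time; 0 if none. Assumes intervals is a list
      of (start, finish, value) tuples sorted by finish time."""
      finishes = [f for (s, f, v) in intervals]
      # bisect_right counts how many finish times are <= s_j, which is exactly
      # the (1-indexed) label of the rightmost compatible interval
      return [0] + [bisect.bisect_right(finishes, s) for (s, f, v) in intervals]

  def compute_opt(j, intervals, p):
      """Value of an optimal schedule on intervals 1..j (exponential time)."""
      if j == 0:
          return 0
      v = intervals[j - 1][2]              # interval j lives at list index j-1
      return max(v + compute_opt(p[j], intervals, p),
                 compute_opt(j - 1, intervals, p))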
  

Compute-Opt(j) correctly computes OPT(j) for each j = 1,2,…,n.

Memoizing the Recursion We note that Compute-Opt is really only solving n + 1 different subproblems. How can we eliminate all the redundancy that's happening? We could store the value of Compute-Opt in a globally accessible place the first time we compute it and then simply use this precomputed value in place of all future recursive calls.

So, to re-implement: We will use an array M[0…n]. M[j] will start with the value “empty” but will hold the value of Compute-Opt(j) as soon as it is determined. To determine OPT(n), we invoke M-Compute-Opt(n).

M-Compute-Opt(j)
 If j = 0 then
   Return 0
 Else if M[j] is not empty then 
   Return M[j] 
 Else
   Define M[j] = max(vj+M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
   Return M[j]
 Endif
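
A Python sketch of the memoized version, reusing compute_p from the sketch above; None plays the role of “empty”, and the sample intervals are made up for illustration.

  def m_compute_opt(j, intervals, p, M):
      """Memoized version: M is a list of length n+1, initially all None."""
      if j == 0:
          return 0
      if M[j] is not None:                 # already computed: just look it up
          return M[j]
      v = intervals[j - 1][2]
      M[j] = max(v + m_compute_opt(p[j], intervals, p, M),
                 m_compute_opt(j - 1, intervals, p, M))
      return M[j]

  # Example (hypothetical data), sorted by finish time:
  intervals = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]
  p = compute_p(intervals)                 # [0, 0, 0, 1, 2]
  M = [None] * (len(intervals) + 1)
  print(m_compute_opt(len(intervals), intervals, p, M))   # 11 (intervals 2 and 4)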
 

Analyzing the Memoized Version The running time of M-Compute-Opt(n) is O(n) (assuming the input intervals are sorted by their finish times).

Computing a Solution in Addition to Its Value:

Find-Solution(j)
 If j = 0 then
   Output nothing
 Else
   If vj+M[p(j)]≥M[j-1] then
     Output j together with the result of Find-Solution(p(j))
   Else
     Output the result of Find-Solution(j-1)
   Endif
 Endif
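
A matching Python sketch of Find-Solution. It assumes M has already been filled in (for example by m_compute_opt above) and returns the 1-indexed labels of an optimal set.

  def find_solution(j, intervals, p, M):
      """Trace back through the filled table M to recover an optimal set."""
      if j == 0:
          return []
      opt = lambda i: M[i] if i > 0 else 0   # treat OPT(0) as 0
      v = intervals[j - 1][2]
      # Interval j is in some optimal solution iff v_j + OPT(p(j)) >= OPT(j-1)
      if v + opt(p[j]) >= opt(j - 1):
          return find_solution(p[j], intervals, p, M) + [j]
      else:
          return find_solution(j - 1, intervals, p, M)

  # Continuing the example above: find_solution(4, intervals, p, M) -> [2, 4]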

Since Find-Solution calls itself recursively only on strictly smaller values, it makes a total of O(n) recursive calls, and since it spends constant time per call, it returns an optimal solution in O(n) time.

9/10 - This section made the lecture make more sense.

6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems

We now use the algorithm from 6.1 to summarize the basic principles of dynamic programming.

Designing the Algorithm The key to the efficient algorithm is really the array M. M[n] contains the value of the optimal solution on the full instance, and Find-Solution can be used to trace back through M efficiently and return an optimal solution itself.

The point to realize is that we can directly compute the entries in M by an iterative algorithm, rather than using memoized recursion. We just start with M[0] = 0 and keep incrementing j. So,

Iterative-Compute-Opt
 M[0] = 0
 For j = 1,2,...,n
   M[j] = max(vj+M[p(j)], M[j-1])
 Endfor
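
The same thing in the Python sketch style, again assuming intervals sorted by finish time and the p array from the earlier sketch.

  def iterative_compute_opt(intervals, p):
      """Fill M[0..n] bottom-up; M[n] is the value of an optimal solution."""
      n = len(intervals)
      M = [0] * (n + 1)
      for j in range(1, n + 1):
          v = intervals[j - 1][2]
          M[j] = max(v + M[p[j]], M[j - 1])
      return M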
 

Analyzing the Algorithm The running time of Iterative-Compute-Opt is O(n), since it explicitly runs for n iterations and spends constant time in each.

A Basic Outline To develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties.

  • There are only a polynomial number of subproblems.
  • The solution to the original problem can be easily computed from the solutions to the subproblems.
  • There is a natural ordering on subproblems from “smallest” to “largest”, together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.

8.5/10 - This section was really short and succinct which I appreciated.

6.3 - Segmented Least Squares: Multi-Way Choices

In this problem, the recurrence will involve what might be called “multi-way choices”: at each step, we have a polynomial number of possibilities to consider for the structure of the optimal solution.

The Problem We want to fit a set of points well, using as few lines as possible.

Given a sequence of data points, we want to identify a few points in the sequence at which a discrete change occurs.

Formulating the Problem We are given a set of points P = {(x1, y1), (x2, y2), …, (xn, yn)}, with x1 < x2 < … < xn. We will use pi to denote the point (xi, yi). We must first partition P into some number of segments. Each segment is a subset of P that represents a contiguous set of x-coordinates, that is, a subset of the form {pi, pi+1, …, pj} for some indices i ≤ j. The penalty of a partition is defined to be a sum of the following terms:

  • (i) the number of segments into which we partition P, times a fixed, given multiplier C > 0.
  • (ii) for each segment, the error value of the optimal line through that segment.

Our goal in the Segmented Least Squares Problem is to find a partition of minimum penalty. This minimization captures the trade-offs we discussed earlier. We are allowed to consider partitions into any number of segments; as we increase the number of segments, we reduce the penalty terms in part (ii) of the definition, but we increase the term in part (i).

Designing the Algorithm Suppose we let OPT(i) denote the optimum solution for the points p1, …, pi, and we let ei,j denote the minimum error of any line with respect to pi, pi+1, …, pj.

If the last segment of the optimal partition is pi, …, pn, then the value of the optimal solution is OPT(n) = ei,n + C + OPT(i-1). More generally, OPT(j) = min over 1 ≤ i ≤ j of (ei,j + C + OPT(i-1)); this is the recurrence (6.7) used below.

Segmented-Least-Squares(n)

Array M[0...n]
Set M[0]=0
For all pairs i <= j
   Compute the least squares error eij for the segment pi,...,pj
Endfor
For j = 1,2,...,n
   Use the recurrence (6.7) to compute M[j]
Endfor
Return M[n]
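
A Python sketch of the whole algorithm, assuming the points are sorted by increasing x-coordinate. The closed-form least-squares line is standard; the function names are our own.

  def least_squares_error(points, i, j):
      """Minimum squared error of one line through points[i..j] (0-indexed,
      inclusive), using the closed-form least-squares fit y = ax + b."""
      pts = points[i:j + 1]
      n = len(pts)
      sx = sum(x for x, y in pts); sy = sum(y for x, y in pts)
      sxx = sum(x * x for x, y in pts); sxy = sum(x * y for x, y in pts)
      denom = n * sxx - sx * sx
      a = (n * sxy - sx * sy) / denom if denom else 0.0   # single point: flat line
      b = (sy - a * sx) / n
      return sum((y - a * x - b) ** 2 for x, y in pts)

  def segmented_least_squares(points, C):
      """Return (M, err): M[j] is the minimum penalty for the first j points."""
      n = len(points)
      err = [[0.0] * (n + 1) for _ in range(n + 1)]   # err[i][j], 1-indexed
      for i in range(1, n + 1):
          for j in range(i, n + 1):
              err[i][j] = least_squares_error(points, i - 1, j - 1)
      M = [0.0] * (n + 1)
      for j in range(1, n + 1):
          M[j] = min(err[i][j] + C + M[i - 1] for i in range(1, j + 1))
      return M, err

Note that computing all the eij values naively, as in this sketch, costs more than the O(n^2) dynamic program itself.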

Find-Segments(j)

If j = 0 then
   Output nothing
Else
   Find an i that minimizes eij + C + M[i-1]
   Output the segment {pi,...,pj} and the result of Find-Segments(i-1)
Endif
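
And a sketch of the traceback, taking the M and err tables produced above.

  def find_segments(j, M, err, C):
      """Recover an optimal partition as a list of 1-indexed (i, j) pairs,
      leftmost segment first."""
      if j == 0:
          return []
      # Find an i achieving the minimum in the recurrence for M[j]
      i = min(range(1, j + 1), key=lambda i: err[i][j] + C + M[i - 1])
      return find_segments(i - 1, M, err, C) + [(i, j)]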

Analyzing the Algorithm Once all the eij values have been determined, the running time is O(n^2).

Rating: 8/10

6.4 - Subset Sums and Knapsacks: Adding a Variable

This problem is to select a subset of maximum total value, subject to the restriction that its total weight not exceed W. Each request i has both a value vi and a weight wi. Greedy algorithms don't work here; for dynamic programming, we have to figure out a good set of subproblems.

Designing the Algorithm To find the value of OPT(n) we not only need the value of OPT(n-1), but we also need to know the best solution we can get using a subset of the first n-1 items and total allowed weight W - wn. So, we have a ton of subproblems: one for each i = 0, …, n and each integer 0 ≤ w ≤ W. We will use OPT(i, w) to denote the value of the optimal solution using a subset of the items {1, …, i} with maximum allowed weight w.

If w < wi then OPT(i, w) = OPT(i-1, w). Otherwise OPT(i, w) = max(OPT(i-1, w), wi + OPT(i-1, w - wi)). (In the Subset Sum version considered here, the value of item i is its weight wi.)

Subset-Sum(n, W)

 Array M[0...n, 0...W]
 Initialize M[0, w] = 0 for each w = 0, 1, ..., W
 For i = 1, 2, ..., n
   For w = 0, ..., W
     Use the recurrence above to compute M[i, w]
   Endfor
 Endfor
 Return M[n, W]
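
A Python sketch of the table-filling algorithm plus the O(n) traceback, for the Subset Sum version in which an item's value is its weight; the names are our own.

  def subset_sum(weights, W):
      """M[i][w] = max total weight achievable from items 1..i with capacity w
      (weights[i-1] is the weight of item i)."""
      n = len(weights)
      M = [[0] * (W + 1) for _ in range(n + 1)]
      for i in range(1, n + 1):
          wi = weights[i - 1]
          for w in range(W + 1):
              if wi > w:                   # item i cannot fit within capacity w
                  M[i][w] = M[i - 1][w]
              else:
                  M[i][w] = max(M[i - 1][w], wi + M[i - 1][w - wi])
      return M

  def recover_set(M, weights, W):
      """Walk back through the table, deciding for each item whether it was used."""
      items, w = [], W
      for i in range(len(weights), 0, -1):
          wi = weights[i - 1]
          if wi <= w and M[i][w] == wi + M[i - 1][w - wi]:
              items.append(i)
              w -= wi
      return items

  # Example: subset_sum([2, 3, 4], 6)[3][6] == 6; recover_set(...) -> [3, 1]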

Analyzing the Algorithm Like in weighted interval scheduling, we are building up a table of solutions M, and we compute each of the values M[i, w] in O(1) time using the previous values. Thus the running time is proportional to the number of entries in the table. The Subset-Sum(n, W) algorithm correctly computes the optimal value of the problem and runs in O(nW) time. Given a table M of the optimal values of the subproblems, the optimal set S can be found in O(n) time.
