
6.1 - Weighted Interval Scheduling: A Recursive Procedure

The Weighted Interval Scheduling Problem is a strictly more general version of the Interval Scheduling Problem, in which each interval has a certain value (or weight) and we want to accept a set of intervals of maximum total value.

Designing a Recursive Algorithm No natural greedy algorithm is known for this problem, which is why we're looking at dynamic programming. We use the notation from before, where we have n requests labeled 1, …, n, with each request i specifying a start time si and a finish time fi. Each interval i now also has a value or weight, vi. Two intervals are compatible if they do not overlap.

Finding the optimal solution on intervals {1, 2, …, n} involves looking at the optimal solutions of smaller problems of the form {1, 2, …, j}. Thus, for any value of j between 1 and n, let Oj denote the optimal solution to the problem consisting of requests {1, …, j} and let OPT(j) denote the value of that solution.

Let p(j) denote the largest index i < j such that intervals i and j are disjoint (with p(j) = 0 if no such interval exists). We can then say that OPT(j) = max(vj + OPT(p(j)), OPT(j-1)).

Request j belongs to an optimal solution on the set {1, 2, …, j} if and only if vj + OPT(p(j)) ≥ OPT(j-1).

This gives us a recursive algorithm to compute OPT(n), assuming that we have already sorted the requests by finishing time and computed the values of p(j) for each j.

 Compute-Opt(j)
  If j = 0 then
    Return 0
  Else
    Return max(vj+Compute-Opt(p(j)), Compute-Opt(j-1))
  Endif
  

Compute-Opt(j) correctly computes OPT(j) for each j = 1,2,…,n.
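
As a quick sanity check (not from the text), here is a minimal Python sketch of this recursion, assuming the intervals are already sorted by finish time and the p(j) values are supplied as a list p; index 0 of v and p is unused padding so the code matches the 1-indexed notation.

def compute_opt(j, v, p):
    # Value of the best schedule using intervals 1..j.
    # Exponential time: the same subproblems get recomputed over and over.
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j], v, p),
               compute_opt(j - 1, v, p))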

Memoizing the Recursion We note that Compute-Opt is really only solving n + 1 different subproblems. How can we eliminate all the redundancy that's happening? We could store the value of Compute-Opt in a globally accessible place the first time we compute it and then simply use this precomputed value in place of all future recursive calls.

So, to re-implement: We will use an array M[0…n]. M[j] will start with the value “empty” but will hold the value of Compute-Opt(j) as soon as it is determined. To determine OPT(n), we invoke M-Compute-Opt(n).

M-Compute-Opt(j)
 If j = 0 then
   Return 0
 Else if M[j] is not empty then 
   Return M[j] 
 Else
   Define M[j] = max(vj+M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
   Return M[j]
 Endif
 

Analyzing the Memoized Version The running time of M-Compute-Opt(n) is O(n) (assuming the input intervals are sorted by their finish times).
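
Here is one way the memoized version might look in Python; a sketch under my own conventions (1-indexed lists with a dummy entry at index 0), not the book's code. compute_p builds the p(j) values by binary search over the sorted finish times, and m_compute_opt fills the memo array M.

from bisect import bisect_right

def compute_p(starts, finishes):
    # p[j] = largest i < j with finishes[i] <= starts[j], or 0 if none.
    # Assumes 1-indexed lists (index 0 is padding) sorted by finish time.
    n = len(starts) - 1
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect_right(finishes, starts[j], 1, j) - 1
    return p

def m_compute_opt(n, v, p):
    # M[j] caches OPT(j); each subproblem is solved at most once.
    M = [None] * (n + 1)
    M[0] = 0
    def rec(j):
        if M[j] is None:
            M[j] = max(v[j] + rec(p[j]), rec(j - 1))
        return M[j]
    rec(n)
    return M

For large n the recursion depth can hit Python's default limit, which is one practical reason to prefer the iterative version in 6.2.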

Computing a Solution in Addition to Its Value:

Find-Solution(j)
 If j = 0 then
   Output nothing
 Else
   If vj+M[p(j)]≥M[j-1] then
     Output j together with the result of Find-Solution(p(j))
   Else
     Output the result of Find-Solution(j-1)
   Endif
 Endif

Since Find-Solution calls itself recursively only on strictly smaller values, it makes a total of O(n) recursive calls; and since it spends constant time per call, it returns an optimal solution in O(n) time (given the array M of optimal values).
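
The same traceback, rendered iteratively in Python (a sketch, assuming the M, v, and p lists from the previous snippets):

def find_solution(n, v, p, M):
    # Walk back through M, keeping interval j exactly when it belongs
    # to some optimal solution on {1, ..., j}.
    chosen = []
    j = n
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:
            chosen.append(j)
            j = p[j]
        else:
            j -= 1
    return chosen[::-1]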

9/10 - This section made the lecture make more sense.

6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems

We now use the algorithm from 6.1 to summarize the basic principles of dp.

Designing the Algorithm The key to the efficient algorithm is really the array M. M[n] contains the value of the optimal solution on the full instance, and Find-Solution can be used to trace back through M efficiently and return an optimal solution itself.

The point to realize is that we can directly compute the entries in M by an iterative algorithm, rather than using memoized recursion. We just start with M[0] = 0 and keep incrementing j. So,

Iterative-Compute-Opt
 M[0] = 0
 For j = 1,2,...,n
   M[j] = max(vj+M[p(j)], M[j-1])
 Endfor
 

Analyzing the Algorithm The running time of Iterative-Compute-Opt is O(n), since it explicitly runs for n iterations and spends constant time in each.
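
A direct Python transcription (again assuming the 1-indexed v and p lists from the earlier sketches):

def iterative_compute_opt(n, v, p):
    # Fill M[0..n] bottom-up; M[n] is the value of the optimal solution.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M

Together with compute_p and find_solution above, this gives the whole solution in O(n log n) time, with the sorting and the binary searches as the dominant cost.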

A Basic Outline To develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties: there are only a polynomial number of subproblems; the solution to the original problem can be easily computed from the solutions to the subproblems; and there is a natural ordering on subproblems from "smallest" to "largest," together with an easy-to-compute recurrence that determines a subproblem's solution from the solutions to smaller subproblems.

8.5/10 - This section was really short and succinct which I appreciated.

6.3 - Segmented Least Squares: Multi-Way Choices

In this problem, the recurrence will involve what might be called “multi-way choices”: at each step, we have a polynomial number of possibilities to consider for the structure of the optimal solution.

The Problem We want to fit a set of points well, using as few lines as possible.

Given a sequence of data points, we want to identify a few points in the sequence at which a discrete change occurs.

Formulating the Problem We are given a set of points P = {(x1, y1), (x2, y2), …, (xn, yn)}, with x1 < x2 < … < xn. We will use pi to denote the point (xi, yi). We must first partition P into some number of segments, where each segment is a subset of P that represents a contiguous set of x-coordinates. The penalty of a partition is defined to be a sum of the following terms: (i) the number of segments into which we partition P, times a fixed given multiplier C > 0; and (ii) for each segment, the error value of the optimal line through that segment.

Our goal in the Segmented Least Squares Problem is to find a partition of minimum penalty. This minimization captures the trade-offs we discussed earlier. We are allowed to consider partitions into any number of segments; as we increase the number of segments, we reduce the penalty terms in part (ii) of the definition, but we increase the term in part (i).

Designing the Algorithm Suppose we let OPT(i) denote the optimum solution for the points p1, …, pi, and we let ei,j denote the minimum error of any line with respect to pi, pi+1, …, pj.

If the last segment of the optimal partition is pi, …, pn, then the value of the optimal solution is OPT(n) = ei,n + C + OPT(i-1). For the subproblem on the points p1, …, pj, this gives the recurrence (6.7): OPT(j) = min over 1 ≤ i ≤ j of (ei,j + C + OPT(i-1)).

Segmented-Least-Squares(n)

Array M[0...n]
Set M[0]=0
For all pairs i <= j
   Compute the least squares error eij for the segment pi,...,pj
Endfor
For j = 1,2,...,n
   Use the recurrence (6.7) to compute M[j]
Endfor
Return M[n]

Find-Segments(j)

If j = 0 then
   Output nothing
Else
   Find an i that minimizes eij + C + M[i-1]
   Output the segment {pi,...,pj} and the result of Find-Segments(i-1)
Endif

Analyzing the Algorithm Once all the eij values have been determined, the running time is O(n^2).
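
A self-contained Python sketch of the whole algorithm (names and conventions are mine, not the book's). The helper seg_error uses the closed-form least-squares line; recomputing its sums from scratch makes this version O(n^3) overall, whereas precomputing the eij values with running sums gives the O(n^2) bound quoted above.

def segmented_least_squares(points, C):
    # points: list of (x, y) with strictly increasing x; C: per-segment penalty.
    # Returns (minimum penalty, list of segments as 1-indexed (i, j) pairs).
    n = len(points)
    xs = [0.0] + [x for x, _ in points]
    ys = [0.0] + [y for _, y in points]

    def seg_error(i, j):
        # Squared error of the best-fit line through points i..j.
        m = j - i + 1
        sx, sy = sum(xs[i:j + 1]), sum(ys[i:j + 1])
        sxx = sum(x * x for x in xs[i:j + 1])
        sxy = sum(x * y for x, y in zip(xs[i:j + 1], ys[i:j + 1]))
        denom = m * sxx - sx * sx
        a = (m * sxy - sx * sy) / denom if denom != 0 else 0.0
        b = (sy - a * sx) / m
        return sum((ys[k] - a * xs[k] - b) ** 2 for k in range(i, j + 1))

    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            e[i][j] = seg_error(i, j)

    M = [0.0] * (n + 1)
    back = [0] * (n + 1)          # back[j] = best choice of i for OPT(j)
    for j in range(1, n + 1):
        M[j], back[j] = min((e[i][j] + C + M[i - 1], i) for i in range(1, j + 1))

    # Find-Segments: trace back through the recorded choices.
    segments, j = [], n
    while j > 0:
        i = back[j]
        segments.append((i, j))
        j = i - 1
    return M[n], segments[::-1]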

Rating: 8/10

6.4 - Subset Sums and Knapsacks: Adding a Variable

This problem is to select a subset of max total value, subject to the restriction that its total weight not exceed W. Each request i has both a value vi and a weight wi. Greedy algs don't work; for dp, we have to figure out a good set of subproblems.

Designing the Algorithm To find the value of OPT(n), we not only need the value of OPT(n-1), but we also need to know the best solution we can get using a subset of the first n-1 items and total allowed weight W - wn. So, we have a lot of subproblems: one for each i = 0, …, n (each request) and each integer 0 ≤ w ≤ W. We will use OPT(i, w) to denote the value of the optimal solution using a subset of the items {1, …, i} with maximum allowed weight w.

If w < wi then OPT(i, w) = OPT(i-1, w). Otherwise OPT(i, w) = max(OPT(i-1, w), wi + OPT(i-1, w - wi)).

 Subset-Sum
  array M[0...n, 0....W]
  Initialize M[0,w] = 0 for each w = 0, ..., W
  For i = 1, 2, ..., n
    For w = 0, ..., W
      Use the recurrence above to compute M[i,w]
    Endfor
  Endfor
  Return M[n,W]

Analyzing the Algorithm Like in weighted interval scheduling, we are building up a table of solutions M, and we compute each of the values M[i,w] in O(1) time using the previous values. Thus the running time is proportional to the number of entries in the table. The Subset-Sum(n, W) algorithm correctly computes the optimal value of the problem and runs in O(nW) time. Given a table M of the optimal values of the subproblems, the optimal set S can be found in O(n) time.
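
A Python sketch of the table-filling and the O(n) traceback. This is written for the knapsack variant with separate values and weights, as described at the start of the section; the Subset-Sum case is the special case vi = wi. Names and 1-indexing are my own conventions.

def knapsack(values, weights, W):
    # M[i][w] = OPT(i, w): best value using items 1..i with weight limit w.
    n = len(values)
    v = [0] + list(values)
    w = [0] + list(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            if w[i] > cap:
                M[i][cap] = M[i - 1][cap]
            else:
                M[i][cap] = max(M[i - 1][cap], v[i] + M[i - 1][cap - w[i]])
    # Recover the optimal set S in O(n) time from the filled table.
    chosen, cap = [], W
    for i in range(n, 0, -1):
        if M[i][cap] != M[i - 1][cap]:
            chosen.append(i)
            cap -= w[i]
    return M[n][W], chosen[::-1]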

8.5. The lecture made more sense so this was kind of just a cursory review of that.

7.1 - The Max-Flow Problem and the Ford-Fulkerson Algorithm

Flow Networks - a directed graph G = (V, E) with the following features: a capacity ce ≥ 0 on each edge e, a single source node s, and a single sink node t.

Nodes other than s and t will be called internal nodes.

We will make three assumptions: first, that no edge enters the source s and no edge leaves the sink t; second, that there is at least one edge incident to each node; and third, that all capacities are integers.

Defining Flow - We say an s-t flow is a function f that maps each edge e to a nonnegative real number. The value f(e) intuitively represents the amount of flow carried by edge e. A flow f must satisfy capacity and conservation conditions: the flow on an edge cannot exceed the capacity of the edge, and for every internal node v, the amount of flow entering v equals the amount leaving v. The value of a flow f, denoted v(f), is defined to be the amount of flow generated at the source.

Max-Flow Problem - Given a flow network, find a flow of maximum possible value.

Designing the Algorithm To push flow, we can push forward on edges with leftover capacity and push backward on edges that are already carrying flow, to divert it in a different direction. We define the residual graph Gf of G as follows: it has the same node set as G; for each edge e = (u, v) of G with f(e) < ce, Gf contains a forward edge (u, v) with residual capacity ce - f(e); and for each edge e = (u, v) with f(e) > 0, Gf contains a backward edge (v, u) with residual capacity f(e).

Gf has at most twice as many edges as G.

To augment paths in a residual graph: Let P be a simple s-t path in Gf. We define bottleneck(P, f) to be the minimum residual capacity of any edge on P, with respect to the flow f. Now,

 augment(f,P)
   Let b = bottleneck(P,f)
   For each edge (u,v) ∈ P
      If e = (u, v) is a forward edge then 
         increase f(e) in G by b
      Else ((u,v) is a backward edge and let e = (v,u))
         decrease f(e) in G by b
      Endif
    Endfor
    Return(f)

The result of augment(f,P) is a new flow f' in G, obtained by increasing and decreasing the flow values on edges of P.

Now, the algorithm to compute an s-t flow:

 Max-flow
   Initially f(e) = 0 for all e in G
   While there is an s-t path in the residual graph Gf
      Let P be a simple s-t path in Gf
      f' = augment(f,P)
      Update f to be f'
      Update the residual graph Gf to be Gf'
   Endwhile
   Return f

This is the Ford-Fulkerson algorithm.
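
A compact Python sketch of the algorithm over a capacity dictionary. For simplicity it assumes integer capacities and no pair of antiparallel edges, and it picks augmenting paths by BFS (the Edmonds-Karp choice of P); any way of choosing an s-t path in the residual graph would do for correctness.

from collections import defaultdict, deque

def ford_fulkerson(capacity, s, t):
    # capacity: dict mapping (u, v) -> integer capacity of edge u -> v.
    # Returns (max flow value, flow dict keyed like capacity).
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)              # the backward edge lives in the residual graph

    def augmenting_path():
        # BFS for a simple s-t path with positive residual capacity.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        path, node = [], t
                        while node != s:
                            path.append((parent[node], node))
                            node = parent[node]
                        return path[::-1]
                    queue.append(v)
        return None

    value = 0
    while True:
        path = augmenting_path()
        if path is None:
            break
        b = min(residual[e] for e in path)    # bottleneck(P, f)
        for (u, v) in path:
            residual[(u, v)] -= b             # push forward
            residual[(v, u)] += b             # open up the backward edge
        value += b

    flow = {e: capacity[e] - residual[e] for e in capacity}
    return value, flow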

Analyzing the Algorithm: Termination and Running Time Since all capacities are integers, v(f) increases by at least 1 in every iteration of the While loop, so the loop runs at most C times, where C is the total capacity of the edges leaving s. Each iteration (finding a path in Gf and augmenting) can be done in O(m) time, so the Ford-Fulkerson algorithm terminates and runs in O(mC) time.

This section was a nine out of ten.

7.2 - Maximum Flows and Minimum Cuts in a Network

Analyzing the Algorithm: Flows and Cuts Now we want to show that the flow found by FF has the maximum possible value of any flow in G.

Consider dividing the nodes of the graph into two sets, A and B, so that s ∈ A and t ∈ B. The capacity of a cut (A, B), which we will denote c(A, B), is simply the sum of the capacities of all edges out of A.

Note that if (A, B) is a cut, then the edges into B are precisely the edges out of A. So we have fout(A) = fin(B) and fin(A) = fout(B). So, let f be any s-t flow and (A, B) any s-t cut. Then v(f) = fout(A) - fin(A) = fin(B) - fout(B).

(7.8) Let f be any s-t flow, and (A, B) any s-t cut. Then v(f) ≤ c(A, B).

Analyzing the Algorithm: Max-Flow Equals Min-Cut Let f* denote the flow returned by FF. To show that f* has the maximum possible value of any flow in G, we exhibit an s-t cut (A*, B*) for which v(f*) = c(A*, B*). So f* has the maximum value and (A*, B*) has the minimum capacity of any s-t cut.

If f is an s-t flow such that there is no s-t path in the residual graph Gf, then there is an s-t cut (A*, B*) in G for which v(f) = c(A*, B*). Consequently, f has the maximum value of any flow in G and (A*, B*) has the minimum capacity of any s-t cut in G.

 (7.) Given a flow f of max value, we can compute an s-t cut of min capacity in O(m) time.
 
 In every flow network, the max value of an s-t flow is equal to the min capacity of an s-t cut.
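 
 Following the proof idea, a minimum cut can be read straight off the residual graph of a maximum flow: A* is the set of nodes reachable from s. A sketch that reuses the flow returned by the ford_fulkerson sketch in 7.1:

from collections import defaultdict, deque

def min_cut_side(capacity, flow, s):
    # Returns A*: the nodes reachable from s in the residual graph of `flow`.
    residual = defaultdict(int)
    nodes = set()
    for (u, v), c in capacity.items():
        residual[(u, v)] += c - flow[(u, v)]   # leftover forward capacity
        residual[(v, u)] += flow[(u, v)]       # flow that could be pushed back
        nodes.update((u, v))
    reachable = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in reachable and residual[(u, v)] > 0:
                reachable.add(v)
                queue.append(v)
    return reachable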
 

In terms of interest, I found this section a 7.

7.5 - Bipartite Matching Problem

The Problem We want to find a matching in G of largest possible size.

Designing the Algorithm The graph defining a matching problem is undirected, while flow networks are directed, but we can make it work.

Beginning with a graph G, we construct a flow network G'. First we direct all edges in G from X to Y. We then add a node s and an edge (s,x) from s to each node in X. We add a node t and an edge (y,t) from each node in Y to t. Finally, we give each edge in G' a capacity of one. Then we compute a max s-t flow in this network G'.

Analyzing the Algorithm Let M' be the set of edges from X to Y that carry one unit of flow in an integer-valued maximum flow. Here are three simple facts about the set M': M' contains k edges, where k is the value of the flow; each node in X is the tail of at most one edge in M'; and each node in Y is the head of at most one edge in M'.

The size of the maximum matching in G is equal to the value of the max flow in G'; and the edges in such a matching in G are the edges that carry flow from X to Y in G'.

The FF alg can be used to find a max matching in a bipartite graph in O(mn) time.
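
A sketch of this construction, reusing the ford_fulkerson sketch from 7.1; the node names '_s' and '_t' are placeholders assumed not to clash with the real node names.

def max_bipartite_matching(X, Y, edges):
    # Build the flow network G': s -> X, edges X -> Y, Y -> t, all capacities 1.
    s, t = "_s", "_t"
    capacity = {(s, x): 1 for x in X}
    capacity.update({(y, t): 1 for y in Y})
    capacity.update({(x, y): 1 for (x, y) in edges})
    _, flow = ford_fulkerson(capacity, s, t)
    # The matching consists of the X-Y edges that carry one unit of flow.
    return [(x, y) for (x, y) in edges if flow[(x, y)] == 1]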

I give this section an 8.

7.7 - Extensions to the Max-Flow Problem

Circulations with Demands Suppose there is now a set S of sources generating flow and a set T of sinks. Instead of maximizing the flow value, we will consider a problem where sources have fixed supply values and sinks have fixed demand values. Given a flow network with capacities on the edges, each node v has a demand dv. If dv > 0, the node is a sink; if dv < 0, it is a source. S denotes the set of nodes with negative demands; T is the set of nodes with positive demands.

In this setting, we say that a circulation with demands {dv} is a function f that satisfies the capacity and demand conditions.

So now we just have a feasibility problem.

(7.49) If there exists a feasible circulation with demands {dv}, then Σdv = 0. 

See p 382 for proof of:

There is a feasible circulation with demands {dv} in G iff the max s*-t* flow in G' has value D. If all capacities and demands in G are integers and there is a feasible circulation, then there is a feasible circulation that is integer-valued.
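
A sketch of the reduction as I understand it: attach a super-source s* with an edge of capacity -dv to every source v, attach a super-sink t* with an edge of capacity dv from every sink v, let D be the total positive demand, and check whether the maximum s*-t* flow saturates D. This reuses ford_fulkerson from 7.1; the node names are placeholders.

def feasible_circulation(capacity, demands):
    # demands: dict node -> dv (negative for sources, positive for sinks).
    if sum(demands.values()) != 0:
        return False                      # (7.49): total demand must be zero
    s_star, t_star = "_s*", "_t*"
    cap = dict(capacity)
    for v, d in demands.items():
        if d < 0:
            cap[(s_star, v)] = -d         # source v can supply -dv units
        elif d > 0:
            cap[(v, t_star)] = d          # sink v must absorb dv units
    D = sum(d for d in demands.values() if d > 0)
    value, _ = ford_fulkerson(cap, s_star, t_star)
    return value == D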

Circulations with Demands and Lower Bounds Consider a flow network with a capacity ce and a lower bound le on each edge e.

We simply reduce this problem to the one above. See p383 for proof that

There is a feasible circulation in G iff there is a feasible circulation in G'. If all demands, capacities and lower bounds in G are integers and there is a feasible circulation, then there is a feasible circulation that is integer-valued.
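
A sketch of the lower-bounds reduction: pre-route le units along every edge, shrink the capacity of e to ce - le, adjust the demands to account for the pre-routed flow, and then run the circulation test above. It assumes the dict lower has an entry (possibly 0) for every edge.

def feasible_with_lower_bounds(capacity, lower, demands):
    new_cap = {e: capacity[e] - lower[e] for e in capacity}
    new_dem = dict(demands)
    for (u, v), le in lower.items():
        new_dem[u] = new_dem.get(u, 0) + le   # u already sends le units out
        new_dem[v] = new_dem.get(v, 0) - le   # v already receives le units
    return feasible_circulation(new_cap, new_dem)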

This section wasn't very interesting for me, so only a 6.