
Divide and conquer refers to a class of algorithmic techniques in which one breaks the input into several parts, solves the problem in each part recursively and then combines the solutions to these subproblems into an overall solution.

5.1 - A First Recurrence: The Mergesort Algorithm

Recall: Mergesort sorts a given list of numbers by first dividing them into two equal halves, sorting each half separately by recursion, and then combining the results of these recursive calls using the linear-time algorithm for merging sorted lists that we saw in Chapter 2.

Consider any algorithm that fits this general template, and let T(n) denote its worst-case running time on input instances of size n. Supposing that n is even, the algorithm spends O(n) time to divide the input into two pieces of size n/2 each; it then spends time T(n/2) to solve each piece; and finally it spends O(n) time to combine the solutions from the two recursive calls. Thus the running time T(n) satisfies the following recurrence relation:

 For some constant c, T(n) ≤ 2T(n/2) + cn, when n > 2, and T(2) ≤ c. 
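
To make this concrete, here is a minimal Python sketch (my own, not from the book) of the mergesort pattern that gives rise to this recurrence; the merge step is the linear-time merging of two sorted lists from Chapter 2.

def mergesort(a):
    # Sort a list of numbers; the running time satisfies T(n) <= 2T(n/2) + cn.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = mergesort(a[:mid])     # T(n/2)
    right = mergesort(a[mid:])    # T(n/2)
    return merge(left, right)     # O(n) combine step

def merge(left, right):
    # Linear-time merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out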
 

Approaches to solving recurrences There are two basic ways: unrolling the recursion, and substituting a guessed solution into the recurrence.

Unrolling the recursion Account for the running time across the first few levels, identify a pattern for the work done at each level, and then sum the work over all levels of the recursion (see Figure 5.1 on p. 212).

Any function T(·) satisfying (5.1) (the recurrence above) is bounded by O(n log n), when n > 1.
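
One quick sanity check (mine, not from the book) is to unroll the recurrence numerically: taking equality in the recurrence with c = 1, the values stay below n log2 n.

import math

def T(n, c=1):
    # Unroll T(n) = 2T(n/2) + cn with T(2) = c, taking equality throughout.
    if n <= 2:
        return c
    return 2 * T(n // 2, c) + c * n

for n in [4, 16, 64, 256, 1024]:
    print(n, T(n), n * math.log2(n))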

Substituting a Solution into the Mergesort Recurrence If we have a guess for the running time that we want to verify, we can do so by plugging it into the recurrence.

An Approach Using Partial Substitution A weaker kind of substitution: guess the overall form of the solution without pinning down the exact constants at the outset, and let the inductive argument determine them.
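
For example (my sketch, assuming n is a power of 2): guess T(n) ≤ kn log2 n for some constant k to be determined, and plug this form into the recurrence:

   T(n) ≤ 2T(n/2) + cn
        ≤ 2k(n/2) log2(n/2) + cn
        = kn log2 n - kn + cn
        ≤ kn log2 n, as long as k ≥ c.

So the induction goes through for any k ≥ c, which pins down the constant at the end rather than at the start.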

This section was a 6.5; I don't think I understand this yet.

5.3 - Counting Inversions

The Problem We will consider a problem that arises in the analysis of rankings, which are becoming important to a number of current applications. A core issue in rankings is the problem of comparing two rankings. A natural way to quantify this notion is to count the number of inversions. We say that two indices i < j form an inversion if ai > aj, that is, if the two elements ai and aj are “out of order.” For example, the sequence 2, 4, 1, 3, 5 has three inversions: (2, 1), (4, 1), and (4, 3).

Designing and Analyzing the Algorithm One way is to look at every pair and check whether it is an inversion, but that would take O(n^2) time - we want to do better.

Basic idea: divide the list into two halves, count the number of inversions in each half recursively, and count the number of inversions between the halves while merging them. Merge-and-Count walks through the sorted lists A and B, removing elements from the front and appending them to the sorted list C. In a given step, we have a Current pointer into each list, showing our current position. Suppose the pointers are currently at elements ai and bj. In one step, we compare the elements ai and bj being pointed to in each list, remove the smaller one from its list, and append it to the end of list C. Every time ai is appended to C, no new inversions are encountered, since ai is smaller than everything left in list B, and it comes before all of them. But if bj is appended to list C, then it is smaller than all the remaining items in A and it comes after all of them, so we increase our count of the number of inversions by the number of elements remaining in A.

Merge-and-Count(A,B)
Maintain a Current pointer into each list, initialized to point to the front elements
Maintain a variable Count for the number of inversions, initialized to 0
While both lists are nonempty:
   Let ai and bj be the elements pointed to by the Current pointer
   Append the smaller of these two to the output list
   If bj is the smaller element then
      Increment Count by the number of elements remaining in A
   Endif
   Advance the Current pointer in the list from which the smaller element was selected
Endwhile
Once one list is empty, append the remainder of the other list to the output
Return Count and the merged list

The running time of Merge-and-Count is O(n).
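
A short Python sketch of Merge-and-Count (my own translation of the pseudocode above):

def merge_and_count(A, B):
    # Merge sorted lists A and B; count pairs (a, b) with a in A, b in B, a > b.
    merged, count = [], 0
    i, j = 0, 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            # A[i] is smaller than everything left in B: no new inversions.
            merged.append(A[i])
            i += 1
        else:
            # B[j] is smaller than all remaining elements of A and comes after
            # all of them: one new inversion per element remaining in A.
            merged.append(B[j])
            count += len(A) - i
            j += 1
    merged.extend(A[i:])
    merged.extend(B[j:])
    return count, merged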

Sort-and-Count(L)
If the list has one element then 
   there are no inversions
Else
   Divide the list into two halves:
      A contains the first n/2 elements
      B contains the remaining n/2 elements
   (rA, A) = Sort-and-Count(A)
   (rB, B) = Sort-and-Count(B)
   (r, L) = Merge-and-Count(A,B)
Endif
Return r = rA + rB + r and the sorted list L

The Sort-and-Count algorithm correctly sorts the input list and counts the number of inversions; it runs in O(n log n) time for a list with n elements.
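
In Python, building on the merge_and_count sketch above:

def sort_and_count(L):
    # Return (number of inversions in L, sorted copy of L).
    if len(L) <= 1:
        return 0, list(L)
    mid = len(L) // 2
    rA, A = sort_and_count(L[:mid])
    rB, B = sort_and_count(L[mid:])
    r, merged = merge_and_count(A, B)
    return rA + rB + r, merged

# Example: 2, 4, 1, 3, 5 has the three inversions (2,1), (4,1), (4,3).
assert sort_and_count([2, 4, 1, 3, 5]) == (3, [1, 2, 3, 4, 5])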

This section was a 9 because I liked the topic and method. I understood it when first discussed in class, so the reading wasn't particularly enlightening.

5.4 - Finding the Closest Pair of Points

The Problem Given n points in the plane, find the pair that is closest together.

Designing the Algorithm Let us denote the set of points by P = {p1, …, pn}, where pi has coordinates (xi, yi); and for two points pi, pj ∈ P, we use d(pi, pj) to denote the standard Euclidean distance between them. Our goal is to find a pair of points pi,pj that minimizes d(pi,pj). We will assume that no two points in P have the same x-coordinate or the same y-coordinate.

We'll apply the style of divide and conquer used in Mergesort: we find the closest pair of points in the “left half” of P and the closest pair of points in the “right half,” and then we use this information to get the overall solution in linear time → this will give us an O(n log n) running time. The remaining work is to look at the distances between a point in the left half and a point in the right half; we need to find the smallest such distance in O(n) time after the recursive calls return.

Setting up the recursion: It will be useful if every recursive call on a set P' ⊆ P begins with two lists: a list Px' in which all the points in P' have been sorted by increasing x-coordinate, and a list Py' in which all the points in P' have been sorted by increasing y-coordinate. Before any recursion occurs, we set up lists Px and Py. Attached to every entry in each list is a record of the position of that point in both lists.

The first level of recursion will work as follows, with all further levels working in an analogous way. We define Q to be the set of points in the first n/2 positions of the list Px and R to be the set in the final n/2 positions of the list Px. By a single pass through each of Px and Py in O(n) time, we can create the following four lists: Qx, consisting of the points in Q sorted by increasing x-coordinate; Qy, consisting of the points in Q sorted by increasing y-coordinate; and analogous lists Rx and Ry. For each entry of each list, as before, we record the position of the point in both lists it belongs to.

We now recursively determine a closest pair of points in Q (with access to the lists Qx and Qy). Suppose that q0*, q1* are returned as a closest pair of points in Q. Similarly, we obtain r0*, r1* from R.

Combining the Solutions: Let δ be the minimum of d(q0*, q1*) and d(r0*, r1*). Are there points q ∈ Q and r ∈ R for which d(q,r) < δ? If not, then we have already found the closest pair in one of our recursive calls. But if there are, then the closest such q and r form the closest pair in P.

Let x* denote the x-coordinate of the rightmost point in Q and let L denote the vertical line described by the equation x = x*. This line L “separates” Q from R. If there exists q ∈ Q and r ∈ R for which d(q,r) < δ, then each of q and r lies within a distance δ of L.

Let S denote the set of points in P within distance δ of L, and let Sy be the list S sorted by increasing y-coordinate. Then we can further say: there exist q ∈ Q and r ∈ R for which d(q,r) < δ if and only if there exist s, s' ∈ S for which d(s,s') < δ. Moreover, if s, s' ∈ S satisfy d(s,s') < δ, then s and s' are within 15 positions of each other in Sy - so each point in the strip only needs to be compared against the next 15 points.
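
A minimal Python sketch of this combining step (mine), assuming δ and the y-sorted strip Sy have already been computed and that points are (x, y) tuples:

import math

def closest_in_strip(Sy, delta):
    # Each point in the strip only needs to be compared with the next 15
    # points in y-order; return the best pair at distance < delta, or None.
    best = (delta, None)
    for i, s in enumerate(Sy):
        for t in Sy[i + 1 : i + 16]:
            d = math.dist(s, t)
            if d < best[0]:
                best = (d, (s, t))
    return best[1]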

Summary of the Algorithm

Closest-Pair(P)
   Construct Px and Py        (O(n log n) time) 
   (p0*, p1*) = Closest-Pair-Rec(Px, Py)

Closest-Pair-Rec(Px, Py)
   If |P| ≤ 3 then
      find closest pair by measuring all pairwise distances
   Endif
   Construct Qx, Qy, Rx, Ry           (O(n) time)
   (q0*, q1*) = Closest-Pair-Rec(Qx, Qy)
   (r0*, r1*) = Closest-Pair-Rec(Rx, Ry)
   
   δ = min(d(q0*, q1*), d(r0*, r1*))
   x* =  maximum x-coordinate of a point in set Q
   L = {(x,y) : x = x*}
   S = points in P within distance δ of L
   
   Construct Sy                      (O(n) time)
   For each point s ∈ Sy, compute distance from s to each of next 15 points in Sy
       Let s, s' be pair achieving minimum of these distances               (O(n) time)
   
   If d(s,s') < δ then 
       Return (s, s')
   Else if d(q0*, q1*) < d(r0*, r1*) then
       Return(q0*, q1*)
   Else
       Return (r0*, r1*)
   Endif

Analyzing the Algorithm The algorithm correctly outputs a closest pair of points in P.

The running time of the algorithm is O(n log n).
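
Putting the pieces together, here is a self-contained Python sketch that follows the summary above (names are mine; points are (x, y) tuples with distinct x- and y-coordinates, as assumed in the text):

import math

def closest_pair(P):
    # Closest pair of points in P, a list of at least two (x, y) tuples.
    Px = sorted(P)                          # sorted by x-coordinate
    Py = sorted(P, key=lambda p: p[1])      # sorted by y-coordinate
    return closest_pair_rec(Px, Py)

def closest_pair_rec(Px, Py):
    if len(Px) <= 3:
        # Base case: measure all pairwise distances.
        return min(((p, q) for i, p in enumerate(Px) for q in Px[i + 1:]),
                   key=lambda pq: math.dist(*pq))
    mid = len(Px) // 2
    Qx, Rx = Px[:mid], Px[mid:]             # split around the median x, O(n)
    Q = set(Qx)
    Qy = [p for p in Py if p in Q]          # keep y-order, O(n)
    Ry = [p for p in Py if p not in Q]
    q0, q1 = closest_pair_rec(Qx, Qy)
    r0, r1 = closest_pair_rec(Rx, Ry)

    delta = min(math.dist(q0, q1), math.dist(r0, r1))
    x_star = Qx[-1][0]                      # x-coordinate of rightmost point in Q
    Sy = [p for p in Py if abs(p[0] - x_star) < delta]   # strip, still y-sorted

    best = min((q0, q1), (r0, r1), key=lambda pq: math.dist(*pq))
    for i, s in enumerate(Sy):
        for t in Sy[i + 1 : i + 16]:        # next 15 points suffice
            if math.dist(s, t) < math.dist(*best):
                best = (s, t)
    return best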

8/10.

5.5 - Integer Multiplication

The Problem Consider the problem of multiplying two n-bit integers. Even though this seems elementary, the algorithm we are taught in school takes O(n^2) running time. But we can improve on that!

Designing the Algorithm Let's assume we're in base-2 (but it doesn't really matter) and start by writing x as x1 * 2^(n/2) + x0. In other words, x1 corresponds to the “high-order” n/2 bits and x0 corresponds to the “low-order” n/2 bits. Similarly, we write y = y1 * 2^(n/2) + y0. Thus, we have xy = (x1 * 2^(n/2) + x0)(y1 * 2^(n/2) + y0) = x1y1 * 2^n + (x1y0 + x0y1) * 2^(n/2) + x0y0.

This reduces the problem of solving a single n-bit instance to the problem of solving four n/2-bit instances → recursively compute the results for these four instances and then combine them using the above equation. The combining requires a constant number of additions of O(n)-bit numbers, so it takes time O(n); thus, the running time is bounded by the recurrence T(n) ≤ 4T(n/2) + cn for a constant c. …good enough? NOPE - this recurrence solves to O(n^2), no better than the schoolbook method.

Try using only three recursive calls: q recursive calls on n/2-bit instances give T(n) ≤ qT(n/2) + cn, which solves to O(n^(log2 q)), so q = 3 gives O(n^(log2 3)). The trick is to consider the product (x1+x0)(y1+y0) = x1y1 + x1y0 + x0y1 + x0y0: computing it with a single recursive call, along with x1y1 and x0y0, yields the middle term x1y0 + x0y1 by subtraction.

 Recursive-Multiply(x,y):
 Write x = x1 * 2^(n/2) + x0
       y = y1 * 2^(n/2) + y0
 Compute x1+x0 and y1+y0
 p = Recursive-Multiply(x1+x0, y1+y0)
 x1y1 = Recursive-Multiply(x1,y1)
 x0y0 = Recursive-Multiply(x0,y0)
 Return x1y1 * 2^n + (p - x1y1 - x0y0) * 2^(n/2) + x0y0
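
A Python sketch of Recursive-Multiply (Karatsuba's algorithm) for nonnegative integers, using bit length to pick the split point (my own translation of the pseudocode):

def recursive_multiply(x, y):
    # Multiply nonnegative integers x and y using three recursive calls.
    if x < 2 or y < 2:
        return x * y                               # base case
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)      # high- and low-order bits of x
    y1, y0 = y >> half, y & ((1 << half) - 1)      # high- and low-order bits of y
    p = recursive_multiply(x1 + x0, y1 + y0)
    a = recursive_multiply(x1, y1)                 # x1*y1
    b = recursive_multiply(x0, y0)                 # x0*y0
    # xy = x1y1 * 2^(2*half) + (x1y0 + x0y1) * 2^half + x0y0
    return (a << (2 * half)) + ((p - a - b) << half) + b

assert recursive_multiply(1234, 5678) == 1234 * 5678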

Analyzing the Algorithm Given two n-bit numbers, the algorithm performs a constant number of additions on O(n)-bit numbers, in addition to the three recursive calls.

The running time satisfies: T(n) ≤ 3T(n/2) + cn.

The running time of Recursive-Multiply on two n-bit factors is O(n^log2(3)) = O(n^1.59)