====== 5.1 - A First Recurrence: The Mergesort Algorithm ======
Recall: Mergesort sorts a given list of numbers by first dividing them into two equal halves, sorting each half separately by recursion, and then combining the results of these recursive calls using the linear-time algorithm for merging sorted lists that we saw in Chapter 2.

Consider any algorithm that fits this rough outline, and let //T(n)// denote its worst-case running time on input instances of size //n//. Supposing that //n// is even, the algorithm spends //O(n)// time to divide the input into two pieces of size //n///2 each; it then spends time //T(n/2)// to solve each half recursively, and finally it spends //O(n)// time to combine the solutions from the two recursive calls. So the running time satisfies:
For some constant c, T(n) ≤ 2T(n/2) + cn when n > 2, and T(2) ≤ c.
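As a quick illustration of where this recurrence comes from, here is a minimal Python sketch of Mergesort (my own code, not from the book); the two recursive calls account for the //2T(n/2)// term and the linear-time merge for the //cn// term.

  def mergesort(a):
      # Sort a list of numbers in O(n log n) time.
      if len(a) <= 1:                       # constant-size base case
          return a
      mid = len(a) // 2
      left = mergesort(a[:mid])             # T(n/2)
      right = mergesort(a[mid:])            # T(n/2)
      merged, i, j = [], 0, 0               # merge the halves in O(n) time (the cn term)
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:
              merged.append(left[i]); i += 1
          else:
              merged.append(right[j]); j += 1
      return merged + left[i:] + right[j:]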

**Approaches to solving recurrences**
Two basic ways:

__Unroll the recursion__
Referring to Figure 5.1 on p. 212:
  * Analyzing the first few levels: At the first level of recursion, we have a single problem of size //n//, which takes at most //cn// plus the time spent in all subsequent recursive calls. At the next level, we have two problems, each of size //n///2; each takes at most //cn///2, for a total of at most //cn//, again plus the time in subsequent recursive calls. At the third level, we have four problems, each of size //n///4, each taking at most //cn///4, for a total of at most //cn//.
  * Identifying a pattern: What's going on in general as the recursion expands? At level //j//, the number of subproblems has doubled //j// times, so there are 2^j of them, each of size //n///2^j; each takes at most //cn///2^j, so level //j// contributes a total of at most //cn//.
  * Summing over all levels of recursion: There are log2 //n// levels, and each contributes at most //cn//, so the total running time is at most //cn// log2 //n//.

Any function //T(.)// satisfying (5.1) is bounded by //O(n// log //n)//, when //n// > 1.

__Substituting a Solution into the Mergesort Recurrence__
If we have a guess for the running time that we want to verify, we can do so by plugging it into the recurrence.
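For example (a sketch of the check, in my own wording): suppose we guess that //T(n) ≤ cn// log2 //n// for all //n ≥ 2//. Assuming the bound holds for //n///2, the recurrence gives
  T(n) ≤ 2T(n/2) + cn ≤ 2c(n/2) log2(n/2) + cn = cn(log2 n − 1) + cn = cn log2 n,
so the guessed bound is preserved, and by induction it holds for all //n//.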

__An Approach Using Partial Substitution__
A weaker kind of substitution: guess the overall form of the solution without pinning down the exact constants at the outset.
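Sketching that idea here (again my own illustration): guess that //T(n) ≤ kn// log2 //n// for some constant //k// to be determined. Plugging the guess into the recurrence,
  2k(n/2) log2(n/2) + cn = kn log2 n − kn + cn,
which is at most //kn// log2 //n// as long as //k ≥ c//. So the guessed form works, and the substitution itself tells us how large the constant //k// has to be.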

This section was a 6.5; I don't think I understand this yet.

====== 5.3 - Counting Inversions ======
**The Problem**
We will consider a problem that arises in the analysis of rankings, which are becoming important to a number of current applications. A core issue is the problem of comparing two rankings. A natural way to quantify how different two rankings are is to count the number of inversions. We say that two indices //i < j// form an inversion if //ai > aj//, that is, if the two elements //ai// and //aj// are "out of order." For example, the sequence 2, 4, 1, 3, 5 has three inversions: (2,1), (4,1), and (4,3).

**Designing and Analyzing the Algorithm**
One way is to look at every pair and check whether it's an inversion, but that would take //O(n^2)// time - we want to do better.

Basic idea: divide the list into two halves, recursively count the number of inversions in each half, and count the number of inversions that arise when merging the two halves. Merge-and-Count will walk through the sorted lists //A// and //B//, removing elements from the front and appending them to the sorted list //C//. At a given step, we have a Current pointer into each list, showing our current position. Suppose the pointers are currently at elements //ai// and //bj//. In one step, we compare the elements //ai// and //bj// being pointed to, remove the smaller one from its list, and append it to the end of list //C//. Every time //ai// is appended to //C//, no new inversions are encountered, since //ai// is smaller than everything left in list //B// and it comes before all of them. But if //bj// is appended to //C//, then it is smaller than all the remaining elements of //A// and it comes after all of them, so we increase the count of inversions by the number of elements remaining in //A//.

  Merge-and-Count(A, B)
    Maintain a Current pointer into each list, initialized to point to the front elements
    Maintain a variable Count for the number of inversions, initialized to 0
    While both lists are nonempty:
      Let ai and bj be the elements pointed to by the Current pointers
      Append the smaller of these two to the output list C
      If bj is the smaller element then
        Increment Count by the number of elements remaining in A
      Endif
      Advance the Current pointer in the list from which the smaller element was selected
    Endwhile
    Once one list is empty, append the remainder of the other list to the output
    Return Count and the merged list

The running time of Merge-and-Count is //O(n)//.

  Sort-and-Count(L)
    If the list has one element then
      there are no inversions
    Else
      Divide the list into two halves:
        A contains the first n/2 elements
        B contains the remaining n/2 elements
      (rA, A) = Sort-and-Count(A)
      (rB, B) = Sort-and-Count(B)
      (r, L) = Merge-and-Count(A, B)
    Endif
    Return r = rA + rB + r, and the sorted list L

The Sort-and-Count algorithm correctly sorts the input list and counts the number of inversions; it runs in //O//(//n// log //n//) time for a list with //n// elements.
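To make the pseudocode above concrete, here is a minimal Python sketch of Sort-and-Count and Merge-and-Count (my own illustration, following the pseudocode rather than the book's exact code); for example, sort_and_count([2, 4, 1, 3, 5]) returns (3, [1, 2, 3, 4, 5]).

  def merge_and_count(a, b):
      # Merge two sorted lists, counting pairs (x in a, y in b) with x > y.
      count, i, j, c = 0, 0, 0, []
      while i < len(a) and j < len(b):
          if a[i] <= b[j]:
              c.append(a[i]); i += 1
          else:                             # b[j] is smaller: it forms an inversion
              count += len(a) - i           # with every element remaining in a
              c.append(b[j]); j += 1
      c.extend(a[i:]); c.extend(b[j:])      # append the remainder of the nonempty list
      return count, c
  def sort_and_count(lst):
      # Return (number of inversions, sorted copy of lst); runs in O(n log n) time.
      if len(lst) <= 1:
          return 0, lst[:]
      mid = len(lst) // 2
      ra, a = sort_and_count(lst[:mid])     # inversions within the first half
      rb, b = sort_and_count(lst[mid:])     # inversions within the second half
      r, merged = merge_and_count(a, b)     # inversions between the two halves
      return ra + rb + r, merged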

This section was a 9 because I liked the topic and method. I understood it when first discussed in class, so the reading wasn't particularly enlightening.

====== 5.4 - Finding the Closest Pair of Points ======
**The Problem**
Given //n// points in the plane, find the pair that is closest together.

**Designing the Algorithm**
Let us denote the set of points by //P = {p1, ..., pn}//, where //pi// has coordinates //(xi, yi)//; and for two points //pi, pj ∈ P//, we use //d(pi, pj)// to denote the standard Euclidean distance between them. Our goal is to find a pair of points //pi, pj// that minimizes //d(pi, pj)//.

We'll apply the style of divide and conquer used in Mergesort: we find the closest pair of points in the "left half" of //P// and the closest pair of points in the "right half" of //P//, and then we use this information to get the overall solution in linear time. This will give us an //O(n// log //n)// running time. The trickier part is that we also need to consider distances between a point in the left half and a point in the right half, and we need to find the smallest such distance in //O(n)// time after the recursive calls return.

__Setting up the Recursion:__ Before any recursion begins, we sort all the points once by //x//-coordinate, producing a list //Px//, and once by //y//-coordinate, producing a list //Py//.

The first level of recursion will work as follows, with all further levels working in an analogous way. We define //Q// to be the set of points in the first //n///2 positions of the list //Px// and //R// to be the set of points in the final //n///2 positions of the list //Px//. By a single pass through each of //Px// and //Py// in //O(n)// time, we can create the following four lists: //Qx//, consisting of the points in //Q// sorted by increasing //x//-coordinate; //Qy//, consisting of the points in //Q// sorted by increasing //y//-coordinate; and analogous lists //Rx// and //Ry//.

We now recursively determine a closest pair of points in //Q// (with access to the lists //Qx// and //Qy//). Suppose that //q0*, q1*// are returned as a closest pair of points in //Q//. Similarly, we obtain a closest pair //r0*, r1*// from //R// (with access to //Rx// and //Ry//).

__Combining the Solutions:__ Let //δ// be the minimum of //d(q0*, q1*)// and //d(r0*, r1*)//. The real question is: are there points //q ∈ Q// and //r ∈ R// with //d(q, r) < δ//? If not, we have already found the closest pair in one of the recursive calls.

Let //x*// denote the //x//-coordinate of the rightmost point in //Q//, and let //L// denote the vertical line //x = x*//, which separates //Q// from //R//. Let //S ⊆ P// denote the set of points within distance //δ// of //L//.

Then, we can further say: there exist //q ∈ Q// and //r ∈ R// for which //d(q, r) < δ// if and only if there exist //s, s' ∈ S// for which //d(s, s') < δ//.

**Summary of the Algorithm**
  Closest-Pair(P)
    Construct Px and Py  (O(n log n) time)
    (p0*, p1*) = Closest-Pair-Rec(Px, Py)

  Closest-Pair-Rec(Px, Py)
    If |P| ≤ 3 then
      find closest pair by measuring all pairwise distances
    Endif

    Construct Qx, Qy, Rx, Ry  (O(n) time)
    (q0*, q1*) = Closest-Pair-Rec(Qx, Qy)
    (r0*, r1*) = Closest-Pair-Rec(Rx, Ry)

    δ = min(d(q0*, q1*), d(r0*, r1*))
    x* = maximum x-coordinate of a point in set Q
    L = {(x,y) : x = x*}
    S = points in P within distance δ of L

    Construct Sy  (O(n) time)
    For each point s ∈ Sy, compute distance from s to each of next 15 points in Sy
    Let s, s' be the pair achieving the minimum of these distances  (O(n) time)

    If d(s, s') < δ then
      Return (s, s')
    Else if d(q0*, q1*) < d(r0*, r1*) then
      Return (q0*, q1*)
    Else
      Return (r0*, r1*)
    Endif

**Analyzing the Algorithm**
The algorithm correctly outputs a closest pair of points in //P//.

The running time of the algorithm is //O(n// log //n)//.
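Pulling the pieces together, here is a minimal Python sketch of the whole algorithm (my own illustration of the pseudocode above; it assumes at least two distinct points given as (x, y) tuples).

  from math import hypot
  def closest_pair(points):
      # Closest pair among distinct (x, y) tuples, in O(n log n) time.
      px = sorted(points)                              # sorted by x-coordinate
      py = sorted(points, key=lambda p: (p[1], p[0]))  # sorted by y-coordinate
      def rec(px, py):
          n = len(px)
          if n <= 3:                                   # base case: brute force
              best = (float("inf"), None, None)
              for i in range(n):
                  for j in range(i + 1, n):
                      d = hypot(px[i][0] - px[j][0], px[i][1] - px[j][1])
                      best = min(best, (d, px[i], px[j]))
              return best
          mid = n // 2
          qx, rx = px[:mid], px[mid:]                  # Q = left half, R = right half
          x_star = qx[-1][0]                           # max x-coordinate of a point in Q
          in_q = set(qx)
          qy = [p for p in py if p in in_q]            # Qy and Ry, built in O(n)
          ry = [p for p in py if p not in in_q]
          best = min(rec(qx, qy), rec(rx, ry))
          delta = best[0]
          sy = [p for p in py if abs(p[0] - x_star) < delta]   # S, sorted by y
          for i, s in enumerate(sy):
              for t in sy[i + 1 : i + 16]:             # next 15 points in Sy
                  d = hypot(s[0] - t[0], s[1] - t[1])
                  if d < best[0]:
                      best = (d, s, t)
          return best
      d, p, q = rec(px, py)
      return p, q, d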

8/10.

====== 5.5 - Integer Multiplication ======
**The Problem**
Consider the multiplication of two integers. Even though this seems trivial, the way that we are taught to compute it in school takes //O(n^2)// running time, where //n// is the number of digits (or bits) in each number. But we can improve on that!

**Designing the Algorithm**
Let's assume we're working in base 2 (but it doesn't really matter) and write //x = x1 · 2^(n/2) + x0//, where //x1// corresponds to the "high-order" //n///2 bits and //x0// to the "low-order" //n///2 bits; similarly, write //y = y1 · 2^(n/2) + y0//. Then //xy = x1y1 · 2^n + (x1y0 + x0y1) · 2^(n/2) + x0y0//.

This reduces the problem of solving a single //n//-bit instance to the problem of solving four //n///2-bit instances: recursively compute the results for these four instances and then combine them using the above equation. The combining requires a constant number of additions of //O(n)//-bit numbers, so it takes //O(n)// time; this gives the recurrence //T(n) ≤ 4T(n/2) + cn//, which unfortunately solves to //T(n) = O(n^2)// - no better than the school method.

Try using only three recursive calls instead. In general, //q// subproblems of size //n///2 with //O(n)// combining time leads to //T(n) = O(n^(log2 q))//; with //q = 3// this is //O(n^1.59)//. The trick is that //x1y0 + x0y1 = (x1 + x0)(y1 + y0) − x1y1 − x0y0//, so the three products //(x1 + x0)(y1 + y0)//, //x1y1//, and //x0y0// are enough.

  Recursive-Multiply(x, y):
    Write x = x1 * 2^(n/2) + x0
          y = y1 * 2^(n/2) + y0
    Compute x1 + x0 and y1 + y0
    p = Recursive-Multiply(x1 + x0, y1 + y0)
    x1y1 = Recursive-Multiply(x1, y1)
    x0y0 = Recursive-Multiply(x0, y0)
    Return x1y1 * 2^n + (p − x1y1 − x0y0) * 2^(n/2) + x0y0

**Analyzing the Algorithm**
Given two //n//-bit numbers, the algorithm performs a constant number of additions on //O(n)//-bit numbers, in addition to the three recursive calls.

The running time satisfies: //T(n) ≤ 3T(n/2) + cn//.

The running time of Recursive-Multiply on two //n//-bit factors is //O(n^(log2 3)) = O(n^1.59)//.
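Here is a minimal Python sketch of this recursion (my own illustration; it works on nonnegative Python ints and uses bit shifts for the multiplications by powers of two). For example, recursive_multiply(1234, 5678) returns 7006652.

  def recursive_multiply(x, y):
      # Multiply nonnegative integers with three recursive calls (Karatsuba-style).
      if x < 2 or y < 2:                         # single-bit factors: multiply directly
          return x * y
      half = max(x.bit_length(), y.bit_length()) // 2
      x1, x0 = x >> half, x & ((1 << half) - 1)  # high- and low-order bits of x
      y1, y0 = y >> half, y & ((1 << half) - 1)  # high- and low-order bits of y
      p = recursive_multiply(x1 + x0, y1 + y0)
      x1y1 = recursive_multiply(x1, y1)
      x0y0 = recursive_multiply(x0, y0)
      # xy = x1y1 * 2^(2*half) + (x1y0 + x0y1) * 2^half + x0y0,
      # where x1y0 + x0y1 = p - x1y1 - x0y0
      return (x1y1 << (2 * half)) + ((p - x1y1 - x0y0) << half) + x0y0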