=====Chapter 5 - Divide and Conquer=====
For a divide and conquer algorithm, we break the input into several parts, solve each part recursively, and then combine the partial solutions into an overall solution. To figure out the running time of a divide and conquer algorithm, we analyze its recurrence relation, which depends on the number of recursive calls, the size of each subproblem, and the cost of combining the results. Generally, the brute force method runs in some polynomial time, and the divide and conquer method reduces that time to a lower polynomial.
Readability: 8/10
====5.1 - A First Recurrence: The Mergesort Algorithm====
Mergesort is a method for sorting a list of values. We divide the list in half, sort each half recursively, and finally combine the two sorted halves in linear time. We get the recurrence relation //T(n) ≤ 2T(n/2) + cn// for n > 2, and //T(2) ≤ c// for the base case. There are two ways to solve the recurrence: unrolling the recursion or starting with a guess. Unrolling the recursion means writing out the work done at the first few levels until we see a pattern. Starting with a guess, we substitute it into the recurrence and use induction to verify the solution. A "partial" substitution can also be used, where we leave the constants as variables and figure them out later.
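The divide/recurse/merge structure described above can be sketched as follows; this is a minimal Python version, not the book's pseudocode.

```python
def merge_sort(a):
    """Sort a list by splitting it in half, recursing, and merging in linear time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # first recursive call
    right = merge_sort(a[mid:])   # second recursive call
    # Combine step: merge the two sorted halves in O(n) time (the cn term).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The two recursive calls on inputs of size n/2 plus the linear merge are exactly what produce the recurrence //T(n) ≤ 2T(n/2) + cn//.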
Readability: 5/10 because of the difficult math
====5.2 - Further Recurrence Relations====
We can generalize the problem from section 5.1 by changing the number of recursive calls to a variable //q// instead of 2. Our recurrence relation then becomes //T(n) ≤ qT(n/2) + cn// for n > 2 and //T(2) ≤ c//. The recurrence can again be solved using unrolling or substitution. For the case where q > 2, each level requires more work than the previous level, unlike the case q = 2, where every level requires the same amount of work; with q > 2 we get a running time of O(n^(log_2 q)). For the case where q = 1, the first level performs about half of the total work, and each subsequent level performs half the work of the one before it, giving O(n) overall. If the merging of our recursively found results takes quadratic instead of linear time, we get O(n^2).
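Unrolling can also be done numerically to see these three regimes. A small sketch (my own, assuming n is a power of 2 and c = 1) that evaluates the recurrence directly:

```python
def unroll(q, n, c=1):
    """Evaluate T(n) = q*T(n/2) + c*n with T(2) = c, for n a power of 2.

    q = 1 gives roughly 2cn total (geometrically shrinking levels),
    q = 2 gives roughly cn log n (equal work per level),
    q > 2 gives work growing by a factor of q/2 per level, i.e. O(n^(log_2 q)).
    """
    if n <= 2:
        return c
    return q * unroll(q, n // 2, c) + c * n
```

For example, with q = 2 and n = 8 the levels contribute 8 + 8 + T(2)·4 = 8 + 8 + 4 = 20, matching the "same work at every level" picture.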
Readability: 5/10
====5.3 - Counting Inversions====
The problem we are dealing with is how to measure how similar two rankings are. We do this by counting the number of inversions one has with respect to the other, where an inversion is a pair of elements that appear in opposite orders in the two rankings. To count inversions faster than brute force, we divide our list into two halves and recursively count the inversions within each half. We sort the halves as we go so that we can also count the inversions between them. We merge the sorted halves into a new list using the standard merge algorithm with a slight modification: whenever an element of the second half is appended to the new list, we increment the inversion count by the number of elements remaining in the first half.
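The modified merge described above can be sketched as a "sort-and-count" routine; this is a minimal Python version of the idea, not the book's pseudocode.

```python
def sort_and_count(a):
    """Return (sorted copy of a, number of inversions in a) via modified mergesort."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = sort_and_count(a[:mid])
    right, inv_right = sort_and_count(a[mid:])
    merged, i, j = [], 0, 0
    inversions = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # An element from the second half comes before every element
            # still remaining in the first half: count one inversion per such element.
            inversions += len(left) - i
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions
```

For instance, [2, 4, 1, 3, 5] has the three inversions (2,1), (4,1), and (4,3), and the routine reports exactly those while sorting the list.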
Readability: 7/10 The algorithm became more clear after doing this writeup.
====5.4 - Finding the Closest Pair of Points====
Our problem is how to find the closest pair of points in a plane, which can be solved using brute force in O(n^2) time. In the one-dimensional version of this problem, sorting the points and computing the distance between each pair of consecutive points gives the closest pair in O(n log n) time. In two dimensions, we use a divide and conquer approach to find the closest pair of points in the left and right halves of the set. We maintain two orderings of the points for each half, one by x-coordinate and the other by y-coordinate. Once we have the smallest distance on the left and right sides, we must consider pairs of points that straddle the center line. We only need to consider points within the smallest distance found so far (delta) of the center line, and we order them by increasing y-value. Within this strip, we only need to compute distances between points that are within 15 positions of each other in the y-ordering, because any pair farther apart in the ordering must be more than delta apart. The algorithm runs in O(n log n) time.
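The full algorithm fits in a short sketch. This is my own Python rendering of the outline above (assuming distinct points given as (x, y) tuples), not the book's pseudocode; the inner loop bound of 15 positions comes straight from the strip argument.

```python
import math

def closest_pair(points):
    """Return the smallest distance between any two distinct 2-D points, in O(n log n)."""
    px = sorted(points)                       # points sorted by x-coordinate
    py = sorted(points, key=lambda p: p[1])   # points sorted by y-coordinate

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(px, py):
        n = len(px)
        if n <= 3:  # base case: brute force on a constant number of points
            return min(dist(px[i], px[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        mid_x = px[mid - 1][0]                # x-coordinate of the dividing line
        left_half = set(px[:mid])             # assumes points are distinct
        ly = [p for p in py if p in left_half]
        ry = [p for p in py if p not in left_half]
        delta = min(rec(px[:mid], ly), rec(px[mid:], ry))
        # Strip: points within delta of the dividing line, already in y order.
        strip = [p for p in py if abs(p[0] - mid_x) < delta]
        for i in range(len(strip)):
            # Only points within 15 positions in the y-ordering can be closer than delta.
            for j in range(i + 1, min(i + 16, len(strip))):
                delta = min(delta, dist(strip[i], strip[j]))
        return delta

    return rec(px, py)
```

Note that the y-sorted lists for each half are extracted by filtering //py//, so no re-sorting happens inside the recursion; that is what keeps the work per level linear.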
Readability: 6/10. Rereading this section a few times helped my understanding, so I'd rate my understanding at an 8/10 now.
====5.5 - Integer Multiplication====
The standard elementary-school way of multiplying two n-digit numbers takes O(n^2) time. Our first attempt at a faster solution is to break each number into its higher-order half and its lower-order half. We then compute the products of each pair of halves and add them together, which breaks the problem into 4 recursive calls on inputs of half the size. Unfortunately, this still gives O(n^2). To do better, we need fewer than 4 recursive calls. The trick is to compute the product of the sums of the halves, then subtract the product of the two high halves and the product of the two low halves; what remains is exactly the sum of the two cross terms. This requires only 3 recursive calls and gives a running time of O(n^1.59), where 1.59 ≈ log_2 3.
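The three-recursive-call trick is Karatsuba's algorithm, and it can be sketched in a few lines of Python (working in base 10 for readability; variable names are my own).

```python
def karatsuba(x, y):
    """Multiply non-negative integers with 3 recursive calls instead of 4."""
    if x < 10 or y < 10:
        return x * y  # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)   # split x into high and low halves
    high_y, low_y = divmod(y, 10 ** m)   # split y into high and low halves
    a = karatsuba(high_x, high_y)        # product of the high halves
    b = karatsuba(low_x, low_y)          # product of the low halves
    # One call on the sums, minus a and b, leaves exactly the cross terms:
    # (high_x + low_x)(high_y + low_y) - a - b = high_x*low_y + low_x*high_y
    cross = karatsuba(high_x + low_x, high_y + low_y) - a - b
    return a * 10 ** (2 * m) + cross * 10 ** m + b
```

The recurrence is //T(n) ≤ 3T(n/2) + cn//, which by the q > 2 case of section 5.2 gives O(n^(log_2 3)) = O(n^1.59).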
Readability: 7/10