Chapter 5: Divide and Conquer
This chapter will discuss algorithms that break a given input into parts, solve the problem on each part recursively, and combine the solutions to the parts into an overall solution. We will learn how to derive and understand the recurrence relations that tell us exactly how fast such an algorithm runs.
5.1: A First Recurrence: The Mergesort Algorithm
This section will discuss Mergesort and the motivations and workings behind it. Mergesort works by dividing the input into two halves, sorting each half recursively, and combining the two sorted halves at the end. Splitting the input and merging the results back together take O(n) time, and we keep breaking the input down until we reach pieces of size at most 2, which can be sorted directly. Letting T(n) be the running time on an input of size n, the algorithm spends 2T(n/2) time solving the two halves plus O(n) for the splitting and merging. So the recurrence relation is T(n) ≤ 2T(n/2) + cn.
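The divide-recurse-merge structure can be sketched in a few lines of Python (function names and the base case are my own choices, not the book's):

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list in O(n) time."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # one side is exhausted; append the rest
    out.extend(right[j:])
    return out

def mergesort(a):
    """T(n) <= 2T(n/2) + cn: split, recurse on both halves, merge."""
    if len(a) <= 1:        # base case: already sorted
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```

The O(n) term in the recurrence is visible as the single pass `merge` makes over both halves.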
There are two ways that we can solve recurrences. “Unrolling” is where we look at the first few levels of the recursion, identify a pattern that holds at every level, and then sum this pattern over all levels to get the total running time. The other way is to start with a guess and verify it. To prove that approach, you would need to use an induction proof (doesn't sound like the most fun way to me; I think I'll stick with unrolling).
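Unrolling can be checked numerically. Taking c = 1 and T(1) = 0 (my assumed constants), the pattern is n units of work on each of the log₂ n levels, so the sum should come out to exactly n·log₂ n on powers of two:

```python
from math import log2

def T(n):
    """Evaluate the recurrence T(n) = 2*T(n/2) + n, with T(1) = 0 assumed."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# Unrolling predicts n work per level over log2(n) levels: n * log2(n) total.
for n in (2, 8, 64, 1024):
    assert T(n) == n * log2(n)
```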
Overall, this section was pretty simple and made sense. It was also good review. It has received a respectable score of 7/10 from our judges.
5.2: Further Recurrence Relations
Looking at problems that split into q > 2 subproblems of half the size, we can determine that there will still be log2(n) levels of recursion, but the work per level now grows geometrically, and the total running time comes out to O(n^(log2 q)). This is proven by unrolling and finding the pattern, but also by induction in the book. Also, when q = 1 the work shrinks geometrically down the levels, and the recurrence solves to O(n).
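A quick numeric check of the q > 2 case (base case T(1) = 1 is my own assumption): for q = 4 we have log₂ q = 2, and unrolling the recurrence T(n) = 4T(n/2) + n gives exactly T(n) = 2n² − n, which matches the O(n^(log₂ q)) = O(n²) bound:

```python
def T(n, q):
    """Evaluate T(n) = q*T(n/2) + n, with T(1) = 1 assumed as the base case."""
    if n <= 1:
        return 1
    return q * T(n // 2, q) + n

# For q = 4: log2(q) = 2, and unrolling yields T(n) = 2n^2 - n = O(n^2).
for n in (2, 16, 256):
    assert T(n, 4) == 2 * n * n - n
```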
This section was rather short and the information it held was simple. Alas, it dragged on with pretty exhaustive proofs that were well intended but didn't totally satisfy my hunger and yearning for a deeper knowledge of the subject, so they felt rather unnecessary, at least to the extent to which the conclusions were drawn. 4/10.
5.3: Counting Inversions
This section's main topic was the use of “Merge-and-Count” and “Sort-and-Count.” Both of these are used to count the number of inversions between two given orderings, i.e., pairs of elements that appear in opposite relative order.
Merge-and-Count will start with a pointer to the front of each list, and we will keep track of a count as well. While both lists are nonempty, we compare the elements under the two pointers and append the smaller of the two to our output list. If this element came from list B, we increment our count by the number of elements remaining in list A, since each of them forms an inversion with it. This continues until one list is empty; then we append the rest of the nonempty list to the end of the output (which is valid because we are assuming the lists are presorted, the main pretext of this whole algorithm). This runs in O(n) because it makes a single pass through both lists while building the output.
Sort-and-Count uses Merge-and-Count inside of it. It first checks that the list has more than one element, and if so, it divides the list into two, runs Sort-and-Count on each half, and finally runs Merge-and-Count on the results. The running time T(n) therefore satisfies the recurrence T(n) ≤ 2T(n/2) + O(n), so Sort-and-Count itself has a runtime of O(n log n).
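The two routines can be sketched together in Python (a minimal version under my own naming; the book gives these as pseudocode):

```python
def merge_and_count(a, b):
    """Merge two sorted lists and count inverted pairs (x in a, y in b, x > y)."""
    out, count = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
            count += len(a) - i   # each remaining element of a is inverted with b[j]
    out.extend(a[i:])
    out.extend(b[j:])
    return count, out

def sort_and_count(lst):
    """Return (inversions, sorted list); T(n) <= 2T(n/2) + O(n), so O(n log n)."""
    if len(lst) <= 1:
        return 0, lst
    mid = len(lst) // 2
    ra, a = sort_and_count(lst[:mid])   # inversions within the left half
    rb, b = sort_and_count(lst[mid:])   # inversions within the right half
    r, merged = merge_and_count(a, b)   # inversions across the two halves
    return ra + rb + r, merged
```

Note that the recursive calls count inversions inside each half, and the merge step counts exactly the inversions that cross between the halves, so nothing is counted twice.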
This section was actually pretty interesting, seeing the inner-workings of how these algorithms ran. For some reason, I found them particularly compelling. It started off with a good, real-life example (though rather long-winded), and then got into the meat of the algorithms. It was also relatively short, which is always a plus, and for that reason I am granting this section my first 10/10!
