Divide and conquer algorithms break a problem into smaller subproblems, solve those subproblems, and then combine the solutions back together. This is done in the hope that we will be able to reduce the running time of the algorithm.
The Mergesort algorithm splits a list of numbers in half, recursively sorts each half, and then merges the two sorted halves back together. We designate T(n) to be the worst-case running time on an input of size n. Then, for some constant c, T(n) ≤ 2T(n/2) + cn when n > 2. This is the recurrence relation for the running time. We can solve recurrences in two ways: unroll the recursion, or guess and check. Unrolling the recurrence means we analyze the first few levels of the recursion and see that each level does at most cn work. Then we can find a pattern, and we can sum over the levels to find the total running time, which comes out to O(nlogn). For guessing and checking, we use a proof by induction to show that T(n) ≤ cnlog2n. Since T(n) is the worst-case running time, the guess-and-check method also gives O(nlogn). We can also use a partial substitution method that is a little weaker than the full substitution method but gives the same bound.
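To help myself remember how the merging actually works, here is a quick Python sketch of Mergesort. This is my own version, not the book's pseudocode, and the names are mine:

```python
def mergesort(a):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(a) <= 1:
        return a
    # Split the list in half and recursively sort each half.
    mid = len(a) // 2
    left = mergesort(a[:mid])
    right = mergesort(a[mid:])
    # Merge the two sorted halves back together in O(n) time,
    # always taking the smaller front element.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(mergesort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

The split gives the two T(n/2) terms of the recurrence and the merge loop gives the cn term.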
We are still going over this in class, so it is a little confusing. I also had forgotten what Mergesort was, so this was a good refresher. I'll need to continue to remember things from 112.
I would give this section a 7/10 as it was helpful but a little confusing for me.
This section generalizes recurrence relations and gives us a formula that we can use to determine the running time of later recurrence relations. The generalized recurrence (on page 215 of the text) must be broken down into 3 cases: q = 1, q = 2, and q > 2. (We went over the case where q > 2 on Wednesday in class.) In order to determine the running time in the q > 2 case, we unroll the recurrence relation T(n) ≤ qT(n/2) + cn and find that at any level j we have q^j instances, each of size n/2^j, which together perform (q/2)^j * cn work. Then if we sum over these levels, we find that the running time of this recurrence relation is O(n^(log2 q)). This means that as q increases, so does the running time. This is the method we discussed in class; the book also walks through using partial substitution to determine the same bound. The section then looks at the case when q = 1. After unrolling the recurrence relation, we can see that the running time when q = 1 is O(n). This is the result of a geometric series with a decaying ratio. The section then looks at the recurrence relation T(n) ≤ 2T(n/2) + O(n^2), unrolls it, and determines the running time to be O(n^2). This is less than the original estimate of O(n^2 logn) that we may have gotten from first looking at the relation. The motivation behind this section was to determine ways to figure out the running time of recurrence relations and to find a formula that covers common recurrences.
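To convince myself the unrolling really works, I wrote a tiny Python sketch that adds up the (q/2)^j * cn work across the levels of T(n) ≤ qT(n/2) + cn (my own helper, not from the book):

```python
import math

def total_work(n, q, c=1):
    """Unroll T(n) <= q*T(n/2) + c*n: sum the per-level work
    c*n*(q/2)**j over levels j = 0 .. log2(n)."""
    levels = int(math.log2(n))
    return sum(c * n * (q / 2) ** j for j in range(levels + 1))

# q = 2: every level does c*n work, so the total is c*n*(log2(n) + 1).
# q = 3 (> 2): the bottom level dominates, and the total grows like
# n**log2(3), which is about n**1.585.
for n in (2 ** 10, 2 ** 12):
    print(n, total_work(n, 2), total_work(n, 3))
```

Seeing the q = 3 totals blow past the q = 2 ones as n grows made the "as q increases so does the running time" point concrete for me.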
Practicing unrolling the recurrence relations in class was incredibly helpful, as was having concrete examples of algorithms that have that particular recurrence relation. This section was a good follow-up to our class discussion. I would give it an 8/10.
This section discusses the problem of measuring how similar two people's preferences are. Many websites use this idea in order to best suggest things related to a person's preferences. A natural way to do this is by counting inversions: pairs of items that appear in one order in the first list but in the opposite order in the second. In order to count these inversions in better than O(n^2) time, we split the list in half, recursively count (and sort) each half, and then use Merge-and-Count (on page 224 of the text). This merges the two sorted lists back together while counting the inversions between them. Merge-and-Count runs in O(n) time, and since the recurrence is the same as Mergesort's, the whole inversion count takes O(nlogn) time.
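Here's my Python sketch of the idea (my own rendering, not the book's exact Merge-and-Count pseudocode): whenever an element from the right half jumps ahead of leftover elements from the left half during the merge, each of those leftover elements forms an inversion with it.

```python
def sort_and_count(a):
    """Count inversions (pairs i < j with a[i] > a[j]) in O(n log n)
    by piggybacking the count on a mergesort."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = sort_and_count(a[:mid])
    right, inv_right = sort_and_count(a[mid:])
    merged = []
    i = j = 0
    inversions = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # right[j] is smaller than all remaining left elements,
            # so it forms an inversion with each of them.
            inversions += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions

print(sort_and_count([2, 4, 1, 3, 5])[1])  # 3 inversions: (2,1), (4,1), (4,3)
```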
We went over this algorithm pretty well in class and the section helped back it up. It was relatively helpful and clear. I would give it a 7/10. I felt that it was better explained and written out in the slides.
This algorithm BLEW MY MIND. This section discussed the closest pair problem, where we want to find the closest pair of points in a plane. We first assume that no two points share an x or y coordinate. The general idea is to divide the points into left and right halves, recursively find the closest pair of points in each half, and then determine whether any pair of points straddling the border of the two halves is closer. To make the recursion efficient, we keep the points sorted by both their x and y coordinates. Once we find the closest pair on each side, we take delta to be the smaller of the two distances. We then look within delta of the "border" (the dividing line) for a pair of points closer than delta. Because the points in this strip, taken in y-coordinate order, each only need to be compared against a constant number of their neighbors, the Closest-Pair-Rec algorithm (on page 230 in the text) handles this combining step in O(n) time (this is the MIND-BLOWING part). Then, using this ability to check the mid-section in O(n) time, we can find the closest pair in the whole plane in O(nlogn) time.
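I tried writing the whole thing out in Python to make sure I followed it. This is my own sketch of the recursive structure, not the book's Closest-Pair-Rec line for line, and it assumes distinct points with distinct x coordinates; I compare each strip point to its next 7 neighbors by y, which the constant-neighbors argument allows:

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair distance.
    points: list of (x, y) tuples with distinct x coordinates."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(px, py):
        # Brute force on tiny inputs.
        if len(px) <= 3:
            return min(dist(p, q) for i, p in enumerate(px) for q in px[i + 1:])
        mid = len(px) // 2
        x_bar = px[mid - 1][0]            # rightmost x in the left half
        qx, rx = px[:mid], px[mid:]
        left_set = set(qx)
        qy = [p for p in py if p in left_set]      # left half, still y-sorted
        ry = [p for p in py if p not in left_set]  # right half, still y-sorted
        delta = min(rec(qx, qy), rec(rx, ry))
        # Strip: points within delta of the dividing line, in y order.
        strip = [p for p in py if abs(p[0] - x_bar) < delta]
        best = delta
        for i, p in enumerate(strip):
            # Only a constant number of y-neighbors can beat delta.
            for q in strip[i + 1:i + 8]:
                best = min(best, dist(p, q))
        return best

    px = sorted(points)                      # sorted by x
    py = sorted(points, key=lambda p: p[1])  # sorted by y
    return rec(px, py)

print(closest_pair([(0, 0), (3, 4), (1, 1), (5, 0), (2, 7)]))
```

On that little example the closest pair is (0, 0) and (1, 1), at distance sqrt(2).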
This discussion in class was very helpful and the book was a good reinforcement of the discussion.
I would give this section an 8/10 because this is a pretty interesting algorithm and a cool way to find the closest pair of points.
In this section we look at a way to make the multiplication of two n-bit integers run in less than O(n^2) time, improving on the traditional method, in which you multiply each digit of one integer by each digit of the other integer and then add the results together. To change the algorithm, the section first looks at splitting the multiplication into 4 subproblems of size n/2. However, this still runs in O(n^2) time, which is not an improvement. If we can instead get away with 3 subproblems of size n/2, the algorithm runs in O(n^1.59) time (n^1.59 is roughly n^(log2 3)). The algorithm (on page 233 in the text) finds a way to split the multiplication into these three parts: writing x = x1*2^(n/2) + x0 and y = y1*2^(n/2) + y0, and computing the single extra product p = (x1+x0)(y1+y0), we get xy = x1y1*2^n + (p - x1y1 - x0y0)*2^(n/2) + x0y0, and we achieve our wanted running time.
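Here's a Python sketch of the three-product trick working in base 2 (my own rendering of the section's scheme, using Python's built-in big integers; the splitting details are mine):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three half-size recursive
    products instead of four, per the x1y1 / x0y0 / (x1+x0)(y1+y0) trick."""
    if x < 2 or y < 2:
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    # Split each number into high and low halves: x = x1*2^half + x0.
    x1, x0 = x >> half, x & ((1 << half) - 1)
    y1, y0 = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x1, y1)             # x1*y1
    b = karatsuba(x0, y0)             # x0*y0
    p = karatsuba(x1 + x0, y1 + y0)   # (x1+x0)*(y1+y0)
    # xy = x1y1*2^(2*half) + (p - x1y1 - x0y0)*2^half + x0y0
    return (a << (2 * half)) + ((p - a - b) << half) + b

print(karatsuba(1234, 5678))  # 7006652
```

The middle term works because p - x1y1 - x0y0 = x1y0 + x0y1, so the one product p replaces two of the four cross products.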
This section was short and sweet, which was nice. I did not know that was how you multiplied binary numbers (we learned a different way in 210). I would give it an 8/10.