===== Section 1: Computational Tractability =====
* **Readability:**
Again, this section was very readable. I thought that they did a great job of explaining why we need the mathematical definition of efficiency and how we get to it. And I read this after our classes on the topic, and I think that was good for me, because I already knew what to expect.
* **Sara Says:**
"Wait, I thought our algorithm for the stable matching problem took O(N^2) time?" --> Yes, that's correct. The brute-force approach would be to "look at all possible pairings (N!), find one that is stable and that's the solution." Gale-Shapley is a more efficient algorithm.
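As a concrete illustration (my own sketch, not the book's pseudocode; the dict-based preference format and the function name are assumptions), the proposal loop below runs at most N^2 times, because each man proposes to each woman at most once:

<code python>
# Sketch of Gale-Shapley: at most N^2 proposals instead of checking all N! pairings.
# Assumes complete preference lists: men_prefs/women_prefs map each person to a
# ranked list of everyone on the other side (best first).
def gale_shapley(men_prefs, women_prefs):
    free_men = list(men_prefs)                   # everyone starts unmatched
    next_choice = {m: 0 for m in men_prefs}      # next woman each man will propose to
    # rank[w][m] = position of m on w's list (lower = preferred), for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(ps)} for w, ps in women_prefs.items()}
    engaged = {}                                 # woman -> her current partner

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]         # best woman m hasn't proposed to yet
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                       # w was free, so she accepts
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])          # w trades up; her old partner is free
            engaged[w] = m
        else:
            free_men.append(m)                   # w rejects m
    return {m: w for w, m in engaged.items()}
</code>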
===== Section 2: Asymptotic Order of Growth =====
===== Section 4: A Survey of Common Running Times =====
* **Summary:**
Particular styles of approaches to a problem correspond to particular running-time bounds:
**Linear (O(n)) Time:** Comes from processing the input in a single pass, spending a constant amount of work on each item. Ex. computing the maximum of n numbers, or merging two sorted lists into one sorted list.
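For instance (a sketch of my own, not from the book), merging two sorted lists looks at each element exactly once:

<code python>
# Merge two already-sorted lists in one pass: constant work per element => O(n).
def merge_sorted(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])    # at most one of these still has elements left
    out.extend(b[j:])
    return out
</code>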
**O(nlogn) Time:** You get O(nlogn) time for algorithms that split the input into two equal-sized pieces, solve each piece recursively, and then combine the two solutions in linear time. Ex. mergesort.
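A sketch of that pattern (mine, not the book's), reusing merge_sorted from the sketch above as the linear-time combine step:

<code python>
# Split in half, sort each half recursively, combine with a linear-time merge.
# Recurrence T(n) = 2T(n/2) + O(n) => O(n log n).
def mergesort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    return merge_sorted(mergesort(items[:mid]), mergesort(items[mid:]))
</code>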
**Quadratic Time:** Quadratic time comes from a pair of nested loops. The outside loop runs O(n) times, each time starting another loop that runs in O(n) time. The total running time is O(n^2). Ex. finding the closest pair of points in a plane using a brute-force algorithm. You need to look at all of the pairs of points (O(n^2) of them) and compute the distance between each pair (constant time). That's not the best way to do that problem, though. Later the book will explain how to do it in O(nlogn) and even O(n) time.
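The brute-force version as a sketch (my code; the book only describes the idea):

<code python>
from math import hypot, inf

# Nested loops over all O(n^2) pairs; each distance computation is constant time.
def closest_pair(points):
    best, best_pair = inf, None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = hypot(x1 - x2, y1 - y2)
            if d < best:
                best, best_pair = d, (points[i], points[j])
    return best_pair
</code>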
**Cubic Time:** More elaborate sets of nested loops can lead to O(n^3) time. Ex. given a bunch of subsets of a set, figuring out whether any pair of those subsets is disjoint takes O(n^3) time. The algorithm for that is on p. 52-53.
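A sketch along the lines of that algorithm (the list-of-subsets representation here is my assumption):

<code python>
from itertools import combinations

# O(n^2) pairs of subsets; for each pair, scan up to n elements of one subset
# and test membership in the other (O(1) with a hash set) => O(n^3) total.
def has_disjoint_pair(subsets):
    sets = [set(s) for s in subsets]
    for a, b in combinations(sets, 2):
        if not any(x in b for x in a):   # no shared element => disjoint pair found
            return True
    return False
</code>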
**O(n^k) Time:** It takes O(n^k) time to find k-element subsets of a set of n items (just like it takes O(n^2) time to find pairs). The book goes into an example of finding an independent set of size k in a graph. Finding all of the k-element subsets takes O(n^k) time, and processing each of those subsets (i.e. checking each pair of nodes within the subset to see if there's an edge between them) takes O(k^2) time, for a total of O(k^2 * n^k).

**Beyond Polynomial Time:** Common running times that increase faster than polynomial are 2^n and n!. Ex. finding an independent set of maximum size in a graph - you not only have the O(n^k) time, you have to look at subsets of ALL possible lengths, so that ends up taking O(n^2 * 2^n) time! Even worse are n! running times. n! search spaces arise in 2 ways: (1) It's the number of ways to match n items with n other items (ex. the possible perfect matchings in our stable matching problem) and (2) It's the number of ways to arrange n items in order (ex. the traveling salesman problem - what order should he visit the cities in? --> believed to have no efficient solution).

**Sublinear Time:** Problems that take less time than linear. You don't have to look through all of the input, just need to "query" selected parts of it. Ex. binary search on a sorted array, which probes only O(log n) of the n entries.

* **Insights and Questions:**
I am really interested to see how some of these problems are solved in less time. Also, I am intrigued by the problems that can't be solved efficiently.

* **Readability:**
I thought this section was great. The breakups into different asymptotic running times and the connections between the running times (ex. that the "styles" of approaches correspond to particular running times) made it easy to follow.
===== Section 5: Priority Queues =====
* **Summary:**
Good algorithms are important to get efficient solutions, but sometimes you need more complex data structures in order to implement the algorithms efficiently (or make improvements to the algorithm). Some data structures are designed for specific problems, but some are more broadly applicable - like the priority queue, which is a data structure that "must perform some nontrivial processing each time it is invoked." A priority queue maintains a set of elements, each with a key, and supports adding elements and extracting the element with the smallest key; implementing it with a heap makes each of those operations take O(log n) time.
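A minimal min-heap sketch (my own code; the book likewise stores the heap in an array) showing the Heapify-Up and Heapify-Down operations mentioned below - both are O(log n) because they walk a single root-to-leaf path:

<code python>
# Binary min-heap backed by a list: children of index i live at 2i+1 and 2i+2.
class MinHeap:
    def __init__(self):
        self.h = []

    def insert(self, key):                  # O(log n)
        self.h.append(key)                  # put the new key at the end...
        self._heapify_up(len(self.h) - 1)   # ...then bubble it up

    def extract_min(self):                  # O(log n)
        root, last = self.h[0], self.h.pop()
        if self.h:
            self.h[0] = last                # move the last key to the root...
            self._heapify_down(0)           # ...then sink it back down
        return root

    def _heapify_up(self, i):
        while i > 0 and self.h[i] < self.h[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.h[i], self.h[parent] = self.h[parent], self.h[i]
            i = parent

    def _heapify_down(self, i):
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < len(self.h) and self.h[child] < self.h[smallest]:
                    smallest = child
            if smallest == i:
                return
            self.h[i], self.h[smallest] = self.h[smallest], self.h[i]
            i = smallest
</code>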
* **Insights and Questions:**
Interesting distinction between arrays/lists and more broadly applicable structures like priority queues.
* **Readability:**
This section was a little harder to read than others. It flowed well, but the logic of jumping from arrays to priority queues to heaps wasn't quite as clear or seamless as in other sections, or as clear as it was in class. I think that actually seeing someone do heapify-up/heapify-down made it much easier to understand.