Again, this section was very readable. I thought that they did a great job of explaining why we need the mathematical definition of efficiency and how we get to it. And I read this after our classes on this topic, and I think that was good for me, because I already knew what to expect.
  
    * **Sara Says:** "Wait, I thought our algorithm for the stable matching problem took O(N^2) time?" --> Yes, that's correct. They are saying the simplest algorithm is "Given all possible perfect pairings (N! of them), find one that is stable, and that's the solution." So that's O(N!). But Gale-Shapley is a more efficient algorithm.
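To get a feel for the gap between the two approaches, here is a small sketch (my own illustration, not from the book) tabulating how much work each one does: the brute-force approach examines all n! perfect matchings, while Gale-Shapley makes at most n^2 proposals.

```python
# Compare the search-space sizes of the two approaches for a few n.
# (Illustrative counts only: n! matchings vs. at most n^2 proposals.)
import math

for n in [2, 4, 6, 8, 10]:
    brute = math.factorial(n)  # brute force: check every perfect matching
    gs = n * n                 # Gale-Shapley: at most n^2 proposals
    print(n, brute, gs)
```

Already at n = 10 the brute-force count (3,628,800) dwarfs the n^2 count (100).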
===== Section 2: Asymptotic Order of Growth =====
  
  
    * **Summary:**
Particular styles of approaches to a problem correspond to certain common running times. First we get a sense of what the brute-force solution to the problem would take (which is based on the size of the "search space"). The rest of the book will concentrate on improving upon the brute-force solution.
  
**Linear (O(n)) Time:** Comes from a "one-pass" style of processing the input: each element receives a constant amount of time/attention. Ex. computing the maximum or merging two sorted lists (the book goes into these algorithms, but I think they're pretty easy, and they're in our class notes).
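The "one-pass" idea can be sketched in a few lines (my own version, not the book's code): merging two sorted lists touches each element a constant number of times, so the total work is linear in the combined length.

```python
def merge(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time: each element
    is appended exactly once, so this is "one-pass" processing."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])  # at most one of the two lists has leftovers
    out.extend(b[j:])
    return out

print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]
```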
  
**O(n log n) Time:** You get O(n log n) time from algorithms that split the input into two equal-sized pieces, solve each piece recursively, and combine the two solutions in linear time. Ex. sorting (especially mergesort). Algorithms whose most expensive step is sorting the input also have this running time (ex. problems where you first need to sort the input and THEN do something on it that takes O(n) time, or constant time, or anything less than what the sorting took, still have an O(n log n) running time).
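Mergesort itself follows exactly that split/recurse/combine pattern; here is a minimal sketch of it (my own code, not the book's): the recurrence T(n) = 2T(n/2) + O(n) solves to O(n log n).

```python
def mergesort(xs):
    # Split into two equal halves, sort each recursively, then merge
    # the two sorted halves in linear time.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # linear-time merge step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(mergesort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```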
  
**Quadratic Time:** Quadratic time comes from a pair of nested loops. The outer loop runs O(n) times, each time starting an inner loop that runs in O(n) time, so the total running time is O(n^2). Ex. finding the closest pair of points in a plane using a brute-force algorithm: you examine all of the pairs of points (O(n^2) of them) and compute the distance between each pair (constant time). That's not the best way to do that problem, though. Later the book will explain how to do it in O(n log n) and even O(n) time.
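The brute-force closest-pair idea maps directly onto the nested-loop pattern; a rough sketch (my own, not the book's implementation):

```python
from math import dist  # Euclidean distance, available in Python 3.8+

def closest_pair_brute(points):
    # Nested loops over all O(n^2) pairs; each distance computation
    # is O(1), so the whole thing is O(n^2).
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if best is None or d < best[0]:
                best = (d, points[i], points[j])
    return best

# The closest pair here is (0, 0) and (1, 1).
print(closest_pair_brute([(0, 0), (5, 5), (1, 1), (9, 0)]))
```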
  
**Cubic Time:** More elaborate sets of nested loops can lead to O(n^3) time. Ex. given a bunch of subsets of a set, figuring out whether any pair of those subsets is disjoint takes O(n^3). The algorithm for that is on p. 52-53.
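A sketch of the disjoint-pair check (mine, loosely following the idea on p. 52-53): with constant-time membership tests, checking one pair of subsets is O(n), and there are O(n^2) pairs, giving O(n^3) overall.

```python
def find_disjoint_pair(subsets):
    # Build constant-time membership structures first (the book uses a
    # bit-vector over {1..n}; Python sets play the same role here).
    lookup = [set(s) for s in subsets]
    for i in range(len(subsets)):            # O(n) choices of first set
        for j in range(i + 1, len(subsets)): # O(n) choices of second set
            # O(n) membership checks per pair -> O(n^3) total
            if all(x not in lookup[j] for x in subsets[i]):
                return (i, j)  # indices of a disjoint pair
    return None

print(find_disjoint_pair([[1, 2], [2, 3], [3, 4]]))  # (0, 2)
```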
  
**O(n^k) Time:** It takes O(n^k) time to find k-element subsets of a set of n items (just like it takes O(n^2) time to find pairs). The book goes into an example of finding an independent set of size k in a graph. Finding all of the k-element subsets takes O(n^k) time, and processing each of those subsets (i.e. checking each pair of nodes within the subset to see if there's an edge between them) takes O(k^2) time. That gives us an O(k^2 * n^k) running time, but since k is a constant, k^2 is also constant, so we end up with an O(n^k) running time. Again, this isn't necessarily the best way to go about solving this problem, but the more efficient solutions are for later.
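That search can be sketched directly (my own code; the function name and graph representation are my choices): enumerate the O(n^k) subsets, then spend O(k^2) per subset checking pairs of nodes for edges.

```python
from itertools import combinations

def find_independent_set(nodes, edges, k):
    # O(n^k) candidate subsets; checking one subset looks at O(k^2)
    # node pairs, and k is a constant, so the total is O(n^k).
    edgeset = {frozenset(e) for e in edges}
    for subset in combinations(nodes, k):
        if all(frozenset(p) not in edgeset
               for p in combinations(subset, 2)):
            return subset  # no edge between any pair: independent
    return None

# In the path 1-2-3-4, nodes 1 and 3 have no edge between them.
print(find_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], 2))  # (1, 3)
```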
 +
**Beyond Polynomial Time:** Common running times that grow faster than any polynomial are 2^n and n!. Ex. finding an independent set of maximum size in a graph: you not only have the O(n^k)-style work per subset, you have to look at subsets of ALL possible sizes, so that ends up taking O(n^2 * 2^n) time! Even worse are n! running times. n! search spaces arise in two ways: (1) it's the number of ways to match n items with n other items (ex. the possible perfect matchings in our stable matching problem), and (2) it's the number of ways to arrange n items in order (ex. the traveling salesman problem - what order should he visit the cities in? --> believed to have no efficient solution).
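The maximum-independent-set brute force looks almost identical to the size-k version, except it sweeps over every subset size, which is where the 2^n comes from. A sketch (my own code and naming):

```python
from itertools import combinations

def max_independent_set(nodes, edges):
    # 2^n subsets in total across all sizes, each checked in O(n^2)
    # time, for O(n^2 * 2^n) overall. Trying the biggest sizes first
    # lets us return the first independent set we find.
    edgeset = {frozenset(e) for e in edges}
    for k in range(len(nodes), 0, -1):
        for subset in combinations(nodes, k):
            if all(frozenset(p) not in edgeset
                   for p in combinations(subset, 2)):
                return subset
    return ()

# For the path 1-2-3-4 the maximum independent set has size 2.
print(max_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # (1, 3)
```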
 +
**Sublinear Time:** Problems that take less than linear time. You don't have to look through all of the input, you just need to "query" it, so the problem is minimizing the querying. Ex. the binary search algorithm doesn't look at every piece of input, and it runs in O(log n) time. O(log n) time comes from algorithms that do a constant amount of work to throw away a constant fraction of the input, until the input has been shrunk down to a constant size and can be solved directly.
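Binary search is the canonical example of that "throw away a constant fraction" pattern; a minimal sketch (my own code):

```python
def binary_search(sorted_xs, target):
    # Each probe is O(1) work and discards half the remaining range,
    # so only O(log n) of the n elements are ever examined.
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        if sorted_xs[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1              # not present

print(binary_search([2, 3, 5, 8, 13, 21], 8))  # 3
```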
 +
    * **Insights and Questions:**
I am really interested to see how some of these problems are solved in less time. Also, I am intrigued by the problems that can't be solved efficiently; in particular, what makes the difference between a problem that can and a problem that can't be solved efficiently? Crazy stuff! From our HW, though, how do you get some of those crazy running times that are in the first problem?!?
 +
    * **Readability:**
I thought this section was great. I liked the breakups into different asymptotic running times and the connections between the running times (ex. that the "formula" for the time it takes to find k-element subsets is basically the same as the formula for finding pairs)!
  
===== Section 5: Priority Queues =====
  
    * **Summary:**
Good algorithms are important for efficient solutions, but sometimes you need more complex data structures in order to implement the algorithms efficiently (or to make improvements to the algorithm). Some data structures are designed for specific problems, but some are more broadly applicable - like the priority queue, which is a data structure that "must perform some nontrivial processing each time it is invoked." Priority queues are useful when you need to maintain a changing set (like the free men in our stable matching problem) and be able to add, delete, and select certain elements. Elements in a priority queue have a priority value (key), and the structure lets you add to the set, delete from the set, and select the element with the smallest key. This is useful for scheduling real-time events (ex. processes on a computer).

Each operation (add, delete, select min) should take O(log n) time. The fact that a priority queue with O(log n) operations can sort n elements in O(n log n) time shows/"proves" that O(log n) per operation is the right thing to aim for. We use a heap to implement a priority queue because using lists or arrays doesn't work (some operation ends up taking O(n) time instead of O(log n) time). A heap is like a balanced binary tree with every parent smaller than either of its children (but it's actually usually maintained in an array - the explanation of how that works is in our class notes or on p. 60). The operations heapify-up and heapify-down each work in O(log n) time (explanations are on p. 61-63 and in our notes). Insert uses the heap's heapify-up; the minimum element (highest priority) is always at the root of the heap; delete uses heapify-down. (Running times and a few more operations are on p. 64 and in our notes.)
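A compact sketch of the array-backed heap described above (my own code, not the book's; the class and method names are my choices): children of index i live at 2i+1 and 2i+2, insert uses heapify-up, and extract-min uses heapify-down, each doing O(log n) swaps along one root-to-leaf path.

```python
class MinHeap:
    """Array-backed binary min-heap: every parent <= its children."""

    def __init__(self):
        self.a = []

    def insert(self, key):  # O(log n)
        self.a.append(key)
        i = len(self.a) - 1
        # Heapify-up: swap with the parent while we beat it.
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def extract_min(self):  # O(log n); assumes the heap is non-empty
        a = self.a
        a[0], a[-1] = a[-1], a[0]   # move the last leaf to the root
        smallest = a.pop()          # the old root is the minimum
        i = 0
        # Heapify-down: swap with the smaller child while out of order.
        while True:
            left, right, best = 2 * i + 1, 2 * i + 2, i
            if left < len(a) and a[left] < a[best]:
                best = left
            if right < len(a) and a[right] < a[best]:
                best = right
            if best == i:
                return smallest
            a[i], a[best] = a[best], a[i]
            i = best

h = MinHeap()
for key in [5, 1, 4, 2]:
    h.insert(key)
print([h.extract_min() for _ in range(4)])  # [1, 2, 4, 5]
```

Extracting every element in key order is exactly the priority-queue sorting argument from the summary: n inserts plus n extract-mins at O(log n) each gives an O(n log n) sort.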
  
    * **Insights and Questions:**
Interesting distinction between arrays/lists and "complex" data structures like priority queues/heaps/etc. I never really thought of these as being considerably more complex. I am really curious about other complex data structures. When is it ever useful to make up a data structure just for one type of problem (like the book suggests happens occasionally)? I feel like I need an example of how that could happen.
  
    * **Readability:**
This section was a little harder to read than others. It flowed well, but the logic of jumping from arrays to priority queues to heaps wasn't quite as clear or seamless as in other sections, or as clear as it was in class. I think actually seeing someone do heapify-up/down is really important, and you just can't get that from diagrams that don't move.
courses/cs211/winter2011/journals/camille/chapter2.1295964729.txt.gz · Last modified: 2011/01/25 14:12 by cobbc
CC Attribution-Noncommercial-Share Alike 4.0 International