===== 2.1: Computational Tractability =====
  * We define the efficiency of an algorithm by its worst-case running time, measured against the time a brute-force search would need, and we generally aim for polynomial running time over other sorts of runtimes. While the difference between an n^2 algorithm and an n^5 algorithm may seem big, we'd still much prefer n^5 over 2^n running time (see the comparison below).
  * I found this section readable, albeit a bit dry and definitely more interesting in class than in a book. 8/10.
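
To make the polynomial-versus-exponential gap concrete, here is a quick back-of-the-envelope comparison at n = 100 (the 10^9 operations-per-second machine is an assumed speed for illustration, not a figure from the book):

<code latex>
% Polynomial vs. exponential at n = 100:
n^5 \big|_{n=100} = 10^{10},
\qquad
2^n \big|_{n=100} = 2^{100} \approx 1.27 \times 10^{30}
% On a machine doing 10^9 operations per second (an assumed speed),
% 10^{10} steps take about ten seconds, while 2^{100} steps would take
% roughly 4 \times 10^{13} years, far longer than the age of the universe.
</code>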
===== 2.2: Asymptotic Order of Growth =====
  * We find the asymptotically tight bound as a bound that lies in both the set of all upper bounds and the set of all lower bounds. This is helpful because, yes, a lot of algorithms are upper bounded by 10000*e^1000, but such a loose bound tells us little; the tight bound describes how the running time actually grows.
  * Growth rates are transitive, and when we add two functions, the sum grows at the rate of the larger of the two (see the worked argument after this list). This will be helpful in proving the runtime of algorithms that depend on multiple subroutines whose bounds we know, I would imagine.
  * I enjoyed this section a lot, actually. 9/10.
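
To pin down the "sum grows like the max" claim above, here is the standard one-line argument (the constants c, d, n_0 are the usual ones from the definition of O-notation, not from these notes):

<code latex>
% Sum rule: if f = O(h) and g = O(h), then for all n \ge n_0,
f(n) + g(n) \;\le\; c\,h(n) + d\,h(n) \;=\; (c + d)\,h(n)
% and (c + d) is still a constant, so f + g = O(h).
% In particular f + g = \Theta(\max(f, g)), e.g.
% n^3 + 20n^2 + 5 = \Theta(n^3).
</code>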
===== 2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays =====
  * This section spends a lot of time explaining the differences between arrays and lists (pp. 43-45), the gist of which is as follows:
    * Arrays:
      * Fixed size
      * Constant-time access to the element at position //i//
      * O(n) time to determine whether an element is in an unsorted array
      * O(log(n)) time to determine whether an element is in a sorted array
    * Linked lists:
      * Variable size
      * O(i)-time access to the element at position //i//
      * O(n) time to determine whether an element is in the list
      * Constant-time insertion and deletion of elements (given a pointer to the spot)
  * Therefore, in order to minimize our runtime, we use the following data structures for the Gale-Shapley algorithm:
    * **Integers** to represent the men & women
    * An **array** of arrays to represent the preference lists
    * A **list** to represent the unmatched men
    * A **map** from each man's integer ID to an array of integers, tracking whom he has proposed to
    * Two **arrays** to represent the engagements
  * Thus we can implement the G-S algorithm in O(n^2) time (see the sketch after this list).
  * I mostly skimmed this section because it wasn't that interesting, and it was significantly more captivating as a class discussion. 5/10.
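
A minimal Python sketch of that plan; the names (gale_shapley, men_prefs, rank, and so on) are my own, not the book's, but the data structures match the list above:

<code python>
def gale_shapley(men_prefs, women_prefs):
    """Stable matching in O(n^2) time.

    men_prefs[m] and women_prefs[w] are preference lists of integer IDs,
    most preferred first; men and women are both numbered 0..n-1.
    """
    n = len(men_prefs)
    # rank[w][m] = position of man m on woman w's list, so a woman can
    # compare two suitors in constant time
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            rank[w][m] = pos
    next_choice = [0] * n   # next_choice[m]: index of m's next proposal
    current = [None] * n    # current[w]: man woman w is engaged to, or None
    free_men = list(range(n))   # list used as a stack of unmatched men
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]   # best woman m hasn't asked yet
        next_choice[m] += 1
        if current[w] is None:
            current[w] = m                 # w was free, so they get engaged
        elif rank[w][m] < rank[w][current[w]]:
            free_men.append(current[w])    # w prefers m; old partner is freed
            current[w] = m
        else:
            free_men.append(m)             # w rejects m
    return current                         # current[w] is w's final partner

# A tiny 2x2 instance:
# gale_shapley([[0, 1], [1, 0]], [[0, 1], [1, 0]])  # -> [0, 1]
</code>

Each man proposes to each woman at most once, so the main loop runs at most n^2 times, which is where the O(n^2) bound comes from.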
===== 2.4: A Survey of Common Running Times =====

  * This section surveys the kinds of algorithms that tend to exhibit each of the common running times:
    * Linear - O(n)
      * Finding the maximum of a list or array of numbers
      * Merging two sorted lists (see the sketch after this list)
    * Linearithmic - O(n log(n))
      * Sorting, e.g. merge sort
    * Quadratic - O(n^2)
      * Anything with two nested loops over the input
    * Cubic - O(n^3)
      * Determining whether any pair among n sets is disjoint
    * Polynomial - O(n^k)
      * Brute-force searching over all subsets of size k
    * Exponential - O(2^n)
      * Searching over all subsets of an n-element set
    * Factorial - O(n!)
      * Travelling salesman-like problems, which consider all orderings
    * Sublinear - O(log(n))
      * Binary search (see the sketch after this list)
  * This section was informative and didn't try too hard. 7/10.
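
Two of those running times are easy to make concrete. Here is a minimal Python sketch (function names are my own) of linear-time merging and O(log(n)) binary search:

<code python>
def merge(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])    # at most one of these two tails is non-empty
    out.extend(b[j:])
    return out

def binary_search(a, target):
    """Return an index of target in sorted list a, or -1, in O(log n) time."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # halve the search range each iteration
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# merge([1, 4, 9], [2, 3, 10])    -> [1, 2, 3, 4, 9, 10]
# binary_search([2, 5, 7, 11], 7) -> 2
</code>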
===== 2.5: Priority Queues =====

  * This section discusses priority queues: sets in which each element has a key indicating how important it is; the lower the key, the higher the priority.
  * We implement priority queues using a heap, a balanced binary tree in which each node's key is no larger than its children's (see the sketch below).
  * The implementation details span pp. 60-64.
  * This section was not interesting to read. 4/10.
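
Since those pages are just mechanics, here is a minimal array-backed min-heap sketch in Python following the Heapify-up / Heapify-down idea; the class and method names are my own, not the book's:

<code python>
class MinHeap:
    """Array-backed binary min-heap: children of index i sit at 2i+1 and 2i+2."""

    def __init__(self):
        self.a = []

    def insert(self, key):
        """Add a key in O(log n) time via Heapify-up."""
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def extract_min(self):
        """Remove and return the smallest key in O(log n) time via Heapify-down."""
        root = self.a[0]              # raises IndexError on an empty heap
        last = self.a.pop()
        if self.a:
            self.a[0] = last          # move the last leaf to the root...
            i = 0
            while True:               # ...and sift it back down
                smallest, l, r = i, 2 * i + 1, 2 * i + 2
                if l < len(self.a) and self.a[l] < self.a[smallest]:
                    smallest = l
                if r < len(self.a) and self.a[r] < self.a[smallest]:
                    smallest = r
                if smallest == i:
                    break
                self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
                i = smallest
        return root

# h = MinHeap()
# for k in [5, 1, 4, 2]:
#     h.insert(k)
# h.extract_min()  # -> 1
</code>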