courses:cs211:winter2018:journals:donohuem:chapter2 (created 2018/01/16 19:42 by donohuem; current revision 2018/01/30 by donohuem)
====== Chapter 2 ======

===== Section 2.1 Computational Tractability =====
The authors begin by stating the importance of efficient algorithms for discrete problems. The authors then attempt to define the term //efficiency//, settling on a concrete definition: an algorithm is efficient if its worst-case running time is polynomial in the input size, since polynomial bounds scale well as inputs grow, while brute-force search over all possibilities typically does not.

===== Section 2.2 Asymptotic Order of Growth =====
The authors note that discussing an algorithm's running time at a finer level of detail than its asymptotic order of growth is usually neither practical nor informative, so we characterize running times using bounds:
**Asymptotic Upper Bounds**: T(n), the worst-case running time, is O(f(n)) if there exist constants c > 0 and n<sub>0</sub> >= 0 so that for all n >= n<sub>0</sub>, we have T(n) <= c * f(n). The authors explain that in this case, T is asymptotically upper bounded by f.
**Asymptotic Lower Bounds**: T(n) is Ω(f(n)) if there exist constants ε > 0 and n<sub>0</sub> >= 0 so that for all n >= n<sub>0</sub>, we have T(n) >= ε * f(n). In this case, T is asymptotically lower bounded by f.
**Asymptotically Tight Bounds**: If a running time is both O(f(n)) and Ω(f(n)), then we have found the best possible bound: T(n) grows like f(n) to within a constant factor. In this case we say that T(n) is Θ(f(n)).
Some important properties of growth rates are that they are transitive (e.g. if f = O(g) and g = O(h), then f = O(h)) and additive (e.g. if f = O(h) and g = O(h), then f + g = O(h)). This section was definitely mathematically dense and I will need some practice. Overall the readability is 5/10.

===== Section 2.3 Implementing the Stable Matching Problem Using Lists and Arrays =====
Whenever we implement an algorithm, we almost always use specific data structures. The Gale-Shapley algorithm can be implemented using only lists and arrays, two of the simplest data structures. The authors then discuss arrays and linked lists. Some key pros and cons for each are:
  * **Arrays**: access to the ith element in O(1) time, but a fixed length, so elements cannot be inserted or deleted efficiently.
  * **Linked Lists**: insertion and deletion in O(1) time given a pointer to the position, which suits dynamically changing sets, but reaching the ith element requires traversing the list, which takes O(i) time.
**Question**: We tend to measure efficiency in the worst case; however, why do we measure list indexing as O(i)? If we are trying to find an element in a list, and we are traversing across nodes, wouldn't the worst case be the last element, making it O(n)?

Back to the algorithm: the men's and women's preference lists are implemented as arrays, one set for the men and one for the women. To identify a free man, we use a linked list, because once a man is engaged he must be removed from this list (the set is dynamic). To identify the highest-ranked woman on a man's preference list that he has not yet proposed to, we use a separate array called Next. Current engagements are stored in an array called Current, and any new matches are recorded by updating Current. Overall, the running time of the G-S algorithm is O(n^2).

===== Section 2.4 A Survey of Common Running Times =====
This section can be neatly divided by running time.

**Linear Time:** Two example algorithms that run in O(n) time are //Computing the Maximum// and //Merging Two Sorted Lists//. The CtM algorithm runs in linear time because we must process the entire input (e.g. a list), performing a constant amount of work per element (checking whether the ith value is the new maximum). The MTSL algorithm has a less intuitive linear run time. We keep a pointer into each list, compare the two elements currently pointed to, append the smaller one to the output list, and advance the pointer in the list from which the smaller element was taken. Since each iteration appends exactly one of the 2n elements, there can be at most 2n iterations, so the run time is O(n).
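A sketch of the merge step as described (the pointer and variable names are my own):

```python
# Merge two sorted lists with one pointer into each list; every loop iteration
# appends exactly one element, so there are at most len(a) + len(b) iterations.

def merge(a, b):
    i, j, out = 0, 0, []
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:          # take the smaller front element...
            out.append(a[i])
            i += 1                # ...and advance only that list's pointer
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])             # one list may run out first; append the rest
    out.extend(b[j:])
    return out

assert merge([1, 4, 6], [2, 3, 7]) == [1, 2, 3, 4, 6, 7]
```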
| + | |||
| + | **Linearithmic Time:** No explicit algorithm analysis is given, but we are likely to encounter linearithmic run times when sorting or divide and conquer. An example algorithm is Mergesort. | ||
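A minimal, self-contained Mergesort sketch (my own illustration): there are O(log n) levels of recursion, and the merging at each level does O(n) total work, giving O(n log n):

```python
# Mergesort: split in half, sort each half recursively, merge in linear time.
# O(log n) recursion levels * O(n) merge work per level => O(n log n).

def mergesort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # linear-time merge
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

assert mergesort([5, 2, 8, 1, 9, 3]) == [1, 2, 3, 5, 8, 9]
```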
| + | |||
| + | **Quadratic Time:** Two examples of quadratic time are finding a pair of points that are closest together in a plane and nested loops. In the Closest-Pair Problem, we iterate through each input point (x, y), then iterate through each other input point (a, b), then perform a constant action. Thus, the resulting run time is O(n^2). | ||
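A sketch of the brute-force version (function and variable names are my own; squared distances avoid an unnecessary square root):

```python
# Brute-force closest pair: two nested loops over the n points, constant work
# (one distance computation) per pair => O(n^2) pairs examined.

def closest_pair(points):
    best, best_pair = float("inf"), None
    for i, (x, y) in enumerate(points):
        for (a, b) in points[i + 1:]:        # every other point, each pair once
            d = (x - a) ** 2 + (y - b) ** 2  # squared Euclidean distance
            if d < best:
                best, best_pair = d, ((x, y), (a, b))
    return best_pair

pts = [(0, 0), (5, 5), (1, 1), (9, 0)]
assert closest_pair(pts) == ((0, 0), (1, 1))
```

The book later revisits this problem in Chapter 5, where divide-and-conquer brings the run time down to O(n log n).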
| + | |||
| + | **Cubic Time:** Cubic time algorithms are often more elaborate nested loops. For example, three nested loops. | ||
| + | |||
| + | **O(n^k) Time:** The example O(n^k) algorithm given is the problem of finding independent sets in a graph, a computationally hard problem. The run time is O(k^2n^2), which reduces to O(n^k). | ||
| + | |||
| + | **Beyond Polynomial Time:** This includes runtimes like 2^n and n!. The example algorithm given is the problem of finding an independent set of maximum size in a graph, which is O(n^2 2^n) reduced to O(2^n). | ||
| + | |||
| + | **Sublinear Time:** An example is Binary Search, which utilizes three conditionals (if q == p, q < p, q > p) and recursion to make a run time of O(logn). | ||
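A sketch with the three conditionals made explicit (my own implementation; q is the query and p the probed middle element):

```python
# Binary search on a sorted array: compare the query q to the middle element p
# (q == p, q < p, or q > p) and recurse on half the array => O(log n) steps.

def binary_search(arr, q, lo=0, hi=None):
    if hi is None:
        hi = len(arr)
    if lo >= hi:
        return -1                                   # not found
    mid = (lo + hi) // 2
    p = arr[mid]
    if q == p:
        return mid
    elif q < p:
        return binary_search(arr, q, lo, mid)       # search left half
    else:
        return binary_search(arr, q, mid + 1, hi)   # search right half

arr = [2, 3, 5, 7, 11, 13]
assert binary_search(arr, 7) == 3
assert binary_search(arr, 4) == -1
```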
| + | |||
| + | Overall, the readability of this section was 7/10. | ||
| + | |||
===== Section 2.5 A More Complex Data Structure: Priority Queues =====
The motivation for a priority queue is managing real-time events when processes do not arrive in the order of the urgency with which they should be handled. We use a //Heap// to implement a priority queue. A heap is a balanced binary tree where each node's key is greater than or equal to its parent's key. We represent a heap using an array, where for a parent at index i, the left child is at index 2i and the right child is at index 2i + 1. Two critical operations on heaps are Heapify-Up and Heapify-Down. Heapify-Up is used after adding an element at the end of the heap whose key may be too small for that position: we check whether the new node satisfies the heap order; if it doesn't, we swap it with its parent and repeat, moving up the tree until the heap order is restored. Heapify-Down handles the symmetric case of a key that is too large for its position, swapping it downward with its smaller child. Both operations take O(log n) time, so the priority queue supports insertion and extraction of the minimum in O(log n) time.
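A minimal Heapify-Up sketch, assuming the 1-indexed array layout described above (index 0 unused so children sit at 2i and 2i + 1); the function names are my own:

```python
# Min-heap stored in a 1-indexed array (index 0 is a placeholder), matching
# the text: parent of index i is i // 2, children are 2i and 2i + 1.

def heapify_up(h, i):
    """Swap h[i] upward while it is smaller than its parent: O(log n) swaps."""
    while i > 1 and h[i] < h[i // 2]:
        h[i], h[i // 2] = h[i // 2], h[i]
        i //= 2

def heap_insert(h, key):
    h.append(key)              # add at the end of the heap...
    heapify_up(h, len(h) - 1)  # ...then restore heap order upward

heap = [None]                  # index 0 unused
for key in [5, 3, 8, 1]:
    heap_insert(heap, key)

assert heap[1] == 1            # smallest key sits at the root
```

Heapify-Down would mirror this, repeatedly swapping a too-large key with its smaller child; both walks follow one root-to-leaf path, which is why each operation is O(log n) in a balanced tree.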
