====== Chapter 2: Algorithm Analysis ======

===== 2.1 Computational Tractability =====
"[Many of the problems we study] will involve an implicit search over a large set of combinatorial possibilities, ..."
===== 2.2 Asymptotic Order of Growth =====
As input size grows, the worst-case running time grows at a rate proportional to some function f(n). This function becomes a BOUND on the running time of the algorithm.
Polynomials of degree d are O(n^d) (for example, O(n^2)), logarithms are O(log n), and exponentials are O(r^n) for some constant r > 1.
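A quick numeric sanity check (my own sketch, not from the book) of how differently these three classes grow:

```python
import math

# Evaluate a logarithm, a degree-2 polynomial, and an exponential
# (r = 2) at a few input sizes; the ordering log n < n^2 < 2^n
# holds for every n shown, and the gaps widen rapidly.
for n in [10, 20, 40]:
    log_n = math.log2(n)
    poly = n ** 2
    expo = 2 ** n
    print(f"n={n}: log2(n)={log_n:.1f}, n^2={poly}, 2^n={expo}")
```

Doubling n adds 1 to log2(n), multiplies n^2 by 4, but squares 2^n.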
===== 2.3 Implementing the Stable Matching Algorithm Using Lists and Arrays =====

The Gale-Shapley Stable Matching algorithm terminates in at most n^2 iterations (we learned this earlier). It would be nice to construct an algorithm that is O(n^2). For this to work, every iteration has to be implemented in constant time. It is important for programmers to consider the data structures they will be using; they are chosen for efficiency and for ease of implementation. The G-S algorithm can be implemented using arrays and linked lists. A section is dedicated to an overview of these data structures (including their advantages and disadvantages), but we know about them by now.
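Here is a sketch of how the constant-time-per-iteration implementation can look in Python (the variable names `ranking`, `next_proposal`, and `current` are mine, not the book's; the book uses arrays and a linked list of free men, and a Python list stands in for both here):

```python
def stable_matching(man_pref, woman_pref):
    """Gale-Shapley in O(n^2): at most n^2 proposals, each O(1).

    man_pref[m]   -- list of women in m's order of preference
    woman_pref[w] -- list of men in w's order of preference
    Returns current, where current[w] is the man matched to woman w.
    """
    n = len(man_pref)
    # ranking[w][m] = position of man m in woman w's list, so that
    # "does w prefer m to her current partner?" is an O(1) lookup.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(woman_pref[w]):
            ranking[w][m] = pos
    next_proposal = [0] * n    # next woman on each man's list to try
    current = [None] * n       # current[w]: man w is engaged to
    free_men = list(range(n))  # stack of free men
    while free_men:
        m = free_men.pop()
        w = man_pref[m][next_proposal[m]]
        next_proposal[m] += 1
        if current[w] is None:
            current[w] = m                 # w was free: engage
        elif ranking[w][m] < ranking[w][current[w]]:
            free_men.append(current[w])    # w trades up; old partner freed
            current[w] = m
        else:
            free_men.append(m)             # w rejects m; he stays free
    return current
```

Every step inside the loop is an array index or a list push/pop, which is what makes the overall O(n^2) bound work out.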
===== 2.4 A Survey of Common Running Times =====

Introduces the notion that "a very small number of distinct reasons" account for the running-time bounds that come up most often:

  * Linear time
  * O(n log n) time
  * Quadratic time
  * Cubic time
  * O(n^k) time
  * Beyond polynomial time (ex: 2^n and n!)
  * Sublinear time
I am not great at determining the better bounds when you get into factorials and n^k's (confusing!). I was wondering: say one algorithm (A) is super effective for n < 100 but awful for n >= 100. Another algorithm (B) is average: it is worse than A for n < 100, but it's better than A for n >= 100. Can you check the input size at the beginning, run algorithm A for n < 100, and run algorithm B for n >= 100? Is this ever done? (My guess would be that sometimes you can't know the input size in advance, so assuming it will be large is appropriate.)
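For what it's worth, that hybrid idea is real: the input size n is known the moment the algorithm starts, so you can branch on it. Library sorts do exactly this, switching to insertion sort (quadratic, but with tiny constants) below a small cutoff. A toy sketch, where the cutoff of 100 is just the number from the question rather than a tuned constant:

```python
def insertion_sort(items):
    """O(n^2) overall, but very low overhead: fast for small n."""
    items = list(items)
    for i in range(1, len(items)):
        key, j = items[i], i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

def merge_sort(items):
    """O(n log n): better asymptotics, higher per-step overhead."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

CUTOFF = 100  # illustrative threshold, not a tuned value

def hybrid_sort(items):
    """Check n once up front and dispatch, as the question suggests."""
    if len(items) < CUTOFF:
        return insertion_sort(items)
    return merge_sort(items)
```

When n really is unknowable in advance (say, a stream), the guess in the question is right: you plan for large n.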
+ | |||
+ | ===== 2.5 A More Complex Data Structure: Priority Queues ===== | ||
+ | |||
+ | Priority queues |
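As a small illustration (using Python's standard `heapq` module, which implements the binary-heap idea with O(log n) insert and extract-min; the tasks are made-up examples):

```python
import heapq

# A priority queue stores (priority, element) pairs and always hands
# back the element with the smallest priority value first.
pq = []
heapq.heappush(pq, (2, "write journal"))
heapq.heappush(pq, (1, "read chapter 2"))
heapq.heappush(pq, (3, "sleep"))

priority, task = heapq.heappop(pq)  # smallest priority comes out first
```

With n insertions and n extractions at O(log n) each, a priority queue sorts n numbers in O(n log n) total, which is why it shows up right after the survey of running times.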