Chapter 2: Algorithm Analysis
2.1 Computational Tractability
“[Many of the problems we study] will involve an implicit search over a large set of combinatorial possibilities, and the goal will be to efficiently find a solution that satisfies certain clearly delineated conditions.” This section reads like a Socratic dialogue: it proposes definitions of efficiency, refutes them, and (unlike a Socratic dialogue) finally settles on one. An algorithm is considered “efficient” if it has a polynomial running time.
2.2 Asymptotic Order of Growth
As input size grows, the worst-case running time grows at a rate proportional to some function f(n). This function becomes a BOUND on the running time of the algorithm. There are three types of asymptotic bounds: upper bounds (big-O), lower bounds (Omega), and tight bounds (Theta). Asymptotic growth rates have the properties of additivity and transitivity. Some propositions/proofs are given about common types of functions (polynomials, logarithms, and exponentials): a polynomial of degree d is O(n^d), logarithms grow more slowly than any polynomial, and exponentials r^n (for r > 1) grow faster than any polynomial.
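As a concrete illustration (our own, not from the text), the definition of an upper bound — f(n) is O(g(n)) if f(n) <= c * g(n) for some constant c and all n >= n0 — can be sanity-checked empirically. Here f(n) = 3n^2 + 2n is O(n^2) with c = 5 and n0 = 1; the helper name is ours.

```python
def f(n):
    # Example function: 3n^2 + 2n, which is O(n^2).
    return 3 * n**2 + 2 * n

def is_upper_bounded(f, g, c, n0, n_max=10_000):
    """Empirically check f(n) <= c * g(n) for all n in [n0, n_max).

    Not a proof -- just a finite-range sanity check of the constants.
    """
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

# 3n^2 + 2n <= 5n^2 whenever n >= 1, so c = 5, n0 = 1 works.
print(is_upper_bounded(f, lambda n: n**2, c=5, n0=1))  # True
```

The check only verifies a finite range, of course; the actual proposition is proved algebraically (2n <= 2n^2 for all n >= 1).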
2.3 Implementing the Stable Matching Algorithm Using Lists and Arrays
The Gale-Shapley Stable Matching algorithm terminates in at most n^2 iterations (we learned this earlier). It would be nice to construct an implementation that runs in O(n^2) time. For this to work, every iteration has to be implemented in constant time. It is important for programmers to consider the data structures they will be using; they are chosen for efficiency and for ease of implementation. The G-S algorithm can be implemented using arrays and linked lists. A section is dedicated to an overview of these data structures (including their advantages and disadvantages), but we know about them by now.
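A minimal Python sketch of the O(n^2) implementation described above (variable names are ours, and a deque stands in for the linked list of free men). The key to constant-time iterations is precomputing each woman's ranking array so that "does w prefer m to her current partner?" is an O(1) lookup.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """O(n^2) Gale-Shapley: men propose in preference order.

    men_prefs[m] lists women in m's order of preference (best first);
    women_prefs[w] lists men likewise. Returns {woman: man}.
    """
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's preference list,
    # precomputed in O(n^2) so each comparison below is O(1).
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for i, m in enumerate(women_prefs[w]):
            rank[w][m] = i
    next_proposal = [0] * n   # index of the next woman m will propose to
    current = [None] * n      # current[w] = man w is engaged to (or None)
    free = deque(range(n))    # list of free men (the "linked list")
    while free:
        m = free.popleft()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if current[w] is None:
            current[w] = m                    # w was free: engage
        elif rank[w][m] < rank[w][current[w]]:
            free.append(current[w])           # w trades up; old partner freed
            current[w] = m
        else:
            free.append(m)                    # w rejects m; m stays free
    return {w: current[w] for w in range(n)}

# Women 0's list is [1, 0], so man 1 displaces man 0's first proposal:
print(gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]]))  # {0: 1, 1: 0}
```

Each man proposes to each woman at most once, and each proposal is handled in O(1) time, giving the O(n^2) bound.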
2.4 A Survey of Common Running Times
This section says that there are “a very small number of distinct reasons” why familiar runtime bounds – like O(n) and O(n log n) – recur so frequently in algorithm analysis. The authors hope to make the reader aware of these reasons. Supposedly it is quite an achievement to be able to recognize “these common styles of analysis.”
- Linear time
- O(n log n) time
- Quadratic time
- Cubic time
- O(n^k) time
- Beyond polynomial time (ex: 2^n and n!)
- Sublinear time
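A few tiny examples of these running times (our own illustrations, in the spirit of the chapter's examples): a single pass for linear time, a brute-force all-pairs loop for quadratic time, and binary search for sublinear time.

```python
import bisect

def my_max(xs):
    """Linear time: one pass, constant work per element."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def closest_gap_brute(xs):
    """Quadratic time: examine all n(n-1)/2 pairs of numbers."""
    n = len(xs)
    return min(abs(xs[i] - xs[j]) for i in range(n) for j in range(i + 1, n))

def contains(sorted_xs, target):
    """Sublinear time: O(log n) binary search on a sorted list."""
    i = bisect.bisect_left(sorted_xs, target)
    return i < len(sorted_xs) and sorted_xs[i] == target

print(my_max([3, 1, 4, 1, 5]))          # 5
print(closest_gap_brute([1, 5, 9, 11])) # 2
print(contains([1, 3, 5], 3))           # True
```

O(n log n) typically signals divide-and-conquer (e.g. mergesort: split, recurse, merge in linear time), and the "beyond polynomial" bounds like 2^n and n! come from enumerating all subsets or all orderings.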
2.5 A More Complex Data Structure: Priority Queues
A priority queue is a data structure designed to store elements with priority values. It can add, delete, and select elements. When a priority queue is asked to “select” an element, it returns the element with the highest priority. This characteristic of priority queues is applicable to problems such as the scheduling of processes on a computer. (Weighted interval scheduling comes to mind; see 1.2)
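A small sketch of the interface described above, built on Python's heapq (class and method names are ours). heapq is a min-heap, so priorities are negated to make "select" return the highest-priority element; with a heap, add and select each take O(log n) time.

```python
import heapq

class PriorityQueue:
    """Heap-backed priority queue: add, delete-via-select, select-max."""

    def __init__(self):
        self._heap = []

    def add(self, item, priority):
        # Negate the priority so the min-heap surfaces the max first.
        heapq.heappush(self._heap, (-priority, item))

    def select(self):
        """Remove and return the element with the highest priority."""
        _, item = heapq.heappop(self._heap)
        return item

# Example: scheduling processes by priority.
pq = PriorityQueue()
pq.add("background-job", 1)
pq.add("interrupt-handler", 10)
pq.add("user-process", 5)
print(pq.select())  # interrupt-handler
print(pq.select())  # user-process
```

Deleting an arbitrary element (rather than the max) takes more bookkeeping — a position index into the heap — which is exactly the refinement this section's heap implementation develops.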