Chapter 2
Section 2.1
The goal of section 2.1 is to pin down what exactly makes an algorithm efficient and how to measure that efficiency. The section starts with a very broad, subjective definition of efficiency and goes on to explain that while this first definition makes intuitive sense, its wording is too loose to measure quantitatively. The second definition makes progress by calling an algorithm efficient if it performs better than a brute-force search. A brute-force search will always find an answer (if one exists), but because it checks every possibility, its running time is typically close to the worst case, and the book dismisses it as an “intellectual cop-out”. The third and final definition grounds efficiency in the mathematical principle of polynomial time.
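To make the brute-force idea concrete, here is a tiny sketch of my own (the subset-sum problem is just a stand-in example, not one from the section). The search literally tries every candidate, and there are 2^n of them:

```python
from itertools import combinations

def brute_force_subset_sum(nums, target):
    """Try every subset of nums. Guaranteed to find an answer if
    one exists, but there are 2**len(nums) subsets to examine."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

print(brute_force_subset_sum([3, 9, 8, 4], 12))  # (3, 9)
```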
Polynomial time is a way to express how an algorithm scales as its input size increases. An algorithm runs in polynomial time if its running time is bounded by c * N^d for some constants c and d, where N is the input size. So, for example, when an algorithm with running time c * N^d has its input doubled (N becomes 2N), the new running time is c * (2N)^d = c * 2^d * N^d, which is a slowdown of only the constant factor 2^d.
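To double-check that 2^d arithmetic, here is a quick numeric sketch (the constants c = 3, d = 2, and N = 1000 are arbitrary picks of mine):

```python
def modeled_runtime(c, d, n):
    """Model of a polynomial running time: T(n) = c * n**d."""
    return c * n**d

c, d, N = 3, 2, 1000
slowdown = modeled_runtime(c, d, 2 * N) / modeled_runtime(c, d, N)
print(slowdown, 2**d)  # 4.0 4 -- doubling N slows things by exactly 2**d
```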
While the book admits there are potentially better ways to define the efficiency of algorithms, years of trial and error have shown that defining efficiency in terms of polynomial time works pretty well. Using this definition, it becomes possible to show that some particular algorithm cannot run in polynomial time and is therefore inefficient.
This section was extremely logical and did not include too much of the mathematical syntax that always ends up confusing me. I rate this section an 8 for interestingness and a 9 for readability.
Section 2.2
Section 2.2 introduces O, Ω, and Θ notation and how they can be used to describe upper, lower, and tight bounds on the running time of algorithms as their inputs scale. Saying an algorithm is O(f(n)) means its running time is at most a constant multiple of f(n) once n is sufficiently large (an asymptotic upper bound). Ω(f(n)) means the running time is at least a constant multiple of f(n) for sufficiently large n (an asymptotic lower bound). While I understood both O and Ω notation, I am not completely sure I have Θ notation down pat.
As I understood the section, if some algorithm is both O(f(n)) and Ω(f(n)), then it is also Θ(f(n)). This means the running time is squeezed between two constant multiples of f(n), so its growth rate is known exactly (up to constant factors), even if the exact running time is not.
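Here is a small numeric sanity check I wrote to convince myself (the running time T(n) = 3n^2 + 5n and the constants below are my own made-up example):

```python
def T(n):
    """A made-up running time: T(n) = 3*n**2 + 5*n."""
    return 3 * n**2 + 5 * n

def f(n):
    return n**2

# Upper bound (O): T(n) <= 4 * f(n) once n >= 5, since 5n <= n**2 there.
assert all(T(n) <= 4 * f(n) for n in range(5, 10_000))
# Lower bound (Omega): T(n) >= 3 * f(n) for every n >= 1.
assert all(T(n) >= 3 * f(n) for n in range(1, 10_000))
# Both hold, so T(n) is Theta(n**2): squeezed between 3n^2 and 4n^2.
```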
The section goes on to prove some of the basic identities of the three notations: transitivity, bounds on sums of functions, and the asymptotic behavior of polynomials, logarithms, and exponentials. I won't describe each proof in detail, since they are already laid out in the book and I wouldn't want to bore anyone with the technical details.
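As one concrete example of the flavor of these proofs, here is the standard transitivity argument for O, reconstructed from the usual definition (not quoted from the book):

```latex
% Transitivity of O-notation: f = O(g) and g = O(h) imply f = O(h).
% Unpack the definitions: there exist constants with
%   f(n) \le c_1 g(n)  for all  n \ge n_1,
%   g(n) \le c_2 h(n)  for all  n \ge n_2.
% Chain the two inequalities for every n \ge \max(n_1, n_2):
f(n) \le c_1 g(n) \le c_1 c_2 \, h(n),
% so f = O(h), witnessed by the constant c = c_1 c_2.
```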
The beginning of the section was extremely interesting; I did not know that two other notations for the running time of algorithms even existed. The proofs, however, were relatively complex, and I would be lying if I said I understood every one of them after a single read-through. Still, I bet knowing the properties proved in this section will be invaluable later when we write our own proofs. I give this section a 7 for interest but only a 3 for readability.
