Chapter 2: Basics
Section 1: Computational Tractability
- Summary:
This section is about the importance of finding efficient algorithms and what it means for an algorithm to be efficient. The book focuses on time efficiency, but notes that efficient use of space (memory) matters too. We want a definition of efficiency that is platform-independent, instance-independent, and able to predict how the running time changes as the input size grows. When we analyze running time, we look at worst-case running times, because best-case and average-case don't really tell us much. An initial comparison to decide how “good” an algorithm's running time is can be made against the running time of a “brute-force” search for solutions (one that checks every possible solution to find the correct one). For example, in our stable matching problem there are N! possible perfect matchings (everyone paired up, but not necessarily stable) for N men and N women, but our algorithm should be able to run in a time proportional to N. According to definitions from the 1960s, a good algorithm is one whose running time grows by only a constant factor when the input size is increased by a constant multiplicative factor (say, doubled). Algorithms with polynomial running times achieve this. This mathematical definition is the best one we have, even though it has a few issues: really big constant multipliers can make a polynomial running time not so great after all, and in some cases exponential-time algorithms turn out to work well on almost all instances in practice.
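To convince myself about that 1960s-style definition, here's a tiny sketch of my own (not from the book) comparing how a polynomial step count and the N!-sized brute-force search space react when the input size doubles; the function names and sample sizes are just things I made up for illustration.

<code python>
# My own sketch (not the book's): doubling the input multiplies a polynomial
# step count by only a constant factor, but multiplies the N! brute-force
# search space by an ever-growing factor.
from math import factorial

def polynomial_steps(n):
    # an example polynomial bound, n^2 "steps"
    return n * n

def brute_force_steps(n):
    # number of perfect matchings a brute-force search would have to enumerate
    return factorial(n)

for n in [4, 8, 16]:
    poly_ratio = polynomial_steps(2 * n) / polynomial_steps(n)
    brute_ratio = brute_force_steps(2 * n) / brute_force_steps(n)
    print(f"n={n:2d}: doubling n multiplies n^2 by {poly_ratio:.0f}, "
          f"but multiplies N! by {brute_ratio:.3g}")
</code>

The polynomial ratio stays at 4 no matter how big n gets, while the N! ratio keeps exploding, which is exactly why brute force fails the definition.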
- Insights and Questions:
Wait, I thought our algorithm for the stable matching problem took O(N^2) time? [I don't really have any other questions or insights - do I need to come up with more even when I don't have them?] I found the discussion of why average-case running times aren't actually useful to be particularly interesting. It reminded me of a lot of what we've discussed regarding our research … that random testing doesn't actually get at what we want to know, but instead reflects how the random test cases were generated (kind of).
- Readability:
Again, this section was very readable. I thought they did a great job of explaining why we need the mathematical definition of efficiency and how we get to it. And I read this after our classes on this topic, and I think that was good for me, because I already knew what to expect.
Section 2: Asymptotic Order of Growth
- Summary:
Our mathematical definition of efficiency gives us a way to count the number of “steps” an algorithm takes in its worst case. So we might end up with something like 1.62n^2 + 3.5n + 8 “steps” (the book's example). But this isn't all that useful: the “steps” translate into different numbers of machine-level steps depending on exactly what's being done (even though we define a “step” to be something that takes constant time, i.e., a constant number of machine-level steps), and the expression is also too detailed/precise. What we really want is to be able to say that a function like this grows like n^2 and ignore all of the constants and lower-order terms. Big O, Big Omega, and Big Theta notation let us do this. The book formally defines them (as we did in class). Big O gives an algorithm/function its asymptotic upper bounds (for n above some cutoff, a constant multiple of the Big O function is always at least the original function). Big Omega gives its asymptotic lower bounds, and Big Theta is a tight bound. Transitivity holds: if f = O(g) and g = O(h), then f = O(h) (though that doesn't mean f = g), and the same works for Omega and Theta. Sums behave as you'd expect: if f = O(h) and g = O(h), then f + g = O(h). The book gives proofs for these, and we did brief proofs in class, so I won't go into them. A polynomial's Big Theta is determined by its highest-degree term (a degree-d polynomial is Theta(n^d)). You don't need to specify a base for logarithms, since changing the base only changes a constant factor. Logarithms grow more slowly than polynomials, and polynomials grow more slowly than exponentials.
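To make the Big O definition concrete for myself, here's a quick numerical sanity check I wrote (not from the book) on the example function above; the constants c = 2 and n0 = 20 are just values I picked that happen to work.

<code python>
# My own sanity check (not the book's): f(n) = 1.62n^2 + 3.5n + 8 is O(n^2),
# witnessed by the constants c = 2 and n0 = 20 (values I chose, not the book's).

def f(n):
    return 1.62 * n**2 + 3.5 * n + 8

c, n0 = 2, 20
assert all(f(n) <= c * n**2 for n in range(n0, 10000)), "bound violated"
print("f(n) <= 2*n^2 for every n checked from 20 to 9999, consistent with f = O(n^2)")
</code>

Of course a finite check isn't a proof, but it mirrors the formal definition: there exist constants c and n0 such that f(n) <= c·g(n) for all n >= n0.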
- Insights and Questions:
None, really. These were explained well in class, so this was a good review, and it's all pretty math-y, which makes it clear.
- Readability:
This section was pretty readable, but I just skimmed over the proofs (I might revisit them later), because I felt that I had a good understanding of them from what we've discussed in class. Obviously, that kind of math is harder to read in general, but I think they're presented in a really clear format.
Section 3: Implementing the Stable Matching Algorithm
- Summary:
- Insights and Questions:
- Readability:
Section 4: A Survey of Common Running Times
- Summary:
- Insights and Questions:
- Readability:
Section 5: Priority Queues
- Summary:
- Insights and Questions:
- Readability: