Wiki for Chapter 2
Section 2.1
- Summary: Section 2.1 is Computational Intractability. The title is a little misleading, though, because the section is really about efficiency. Either that, or I don't know the definition of computational intractability. The authors talk about time and space efficiency, and then they progress through increasingly precise and robust definitions of efficiency, dispelling misconceptions readers might have along the way. After a first attempt at defining efficiency, they establish that there needs to be a “mathematical view” to find a definition for efficiency that is “platform-independent, instance-independent, and of predictive value with respect to increasing input sizes” because the other definitions are too vague (p31). Then, the authors talk about worst-case running time and brute-force search. They conclude that worst-case analysis is best because it does “a reasonable job of capturing [an algorithm's] efficiency in practice” (p31), and that brute-force solutions are lazy because they do not account for the problem's structure. The section includes a helpful table that gives running times for different algorithms and input sizes. The section closes with the conclusion that having a polynomial running time is a good enough definition of when an algorithm is considered efficient.
- Motivations: There is no given problem in this section, but I think it's valid to say that the section was motivated by the necessity to understand efficiency before diving into algorithm implementation.
- About the Algorithm: There are no algorithms studied in this section.
- My Questions: This doesn't pertain to motivation/solution/proofs/analysis, but I'd ask the authors why they ended with a third “proposed” definition for efficiency and didn't include a final, concrete definition.
- Second Time Around: I like that this section references the Stable Matching Problem when addressing the concept of a “size parameter” (p31). It spelled out that N = 2n^2, where n is the number of men (or women), and how to arrive at that figure. I don't believe we addressed that explicitly in lecture, and it's always helpful when an author/instructor makes connections for you or uses previous concepts when introducing new ones.
- Note To Self: I want to remember the second proposed definition for efficiency: “An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.” I know it's not the final definition of efficiency, but it's a good way to remember that the brute-force solution is usually not the best. It's an “intellectual cop-out,” as the authors say (p32).
- Readability: I give this section a score of 9. It's short and fairly simple. I like that the authors started basic, in a “one might initially think…” scenario, and then progressed to a better, more complete picture.
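Since the N = 2n^2 size parameter for the Stable Matching Problem came up in my notes above, here is a tiny sketch (my own illustration, not from the book) of where that figure comes from: each of the n men and each of the n women carries a preference list of length n, so the input consists of 2·n·n ranking entries in total.

```python
def input_size(men_prefs, women_prefs):
    """Count the total number of ranking entries in a Stable Matching instance."""
    return sum(len(p) for p in men_prefs) + sum(len(p) for p in women_prefs)

n = 4
men_prefs = [list(range(n)) for _ in range(n)]    # n preference lists of length n
women_prefs = [list(range(n)) for _ in range(n)]  # n preference lists of length n

assert input_size(men_prefs, women_prefs) == 2 * n * n  # N = 2n^2 = 32
```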
Section 2.2
- Summary: Section 2.2 is Asymptotic Order of Growth. This section is where we first see big-O notation. In addition to big-O notation, which is used for asymptotic upper bounds, the authors address Ω-notation (asymptotic lower bounds) and Θ-notation (asymptotically tight bounds). Then, the authors give properties of upper, lower, and tight asymptotic bounds, with associated proofs. Finally, the authors address asymptotic bounds for common functions: polynomials (all three bounds associated with highest degree (p40)), logarithms (base isn't important), and exponentials (“every exponential grows faster than every polynomial” (p42)). When addressing bounds for polynomial functions, the authors go back to polynomial time to “formalize the more elusive concept of efficiency” (p41), since they left that open-ended in Section 2.1.
- Motivations: There is no given problem in this section, but I think it's valid to say that the section was motivated by the necessity to also understand orders of growth and how they can show whether an algorithm is feasible.
- About the Algorithm: There are no algorithms studied in this section.
- My Questions: This is more of a math question, but I'd ask for an explanation of why “it follows from the definition of a limit that there is some n_0 beyond which the ratio is always between (1/2)c and 2c” (p38). It's not immediately obvious to me where the (1/2) and 2 come from.
- Second Time Around: I don't believe asymptotically tight bounds were discussed in lecture, so that's a concept I now understand after reading this section.
- Note To Self: It's helpful to have in my notes that “one step will consist of assigning a value to a variable, looking up an entry in an array, following a pointer, or performing an arithmetic operation on a fixed-size integer” (p35). It's not a new concept, but it's nice to put it in writing. I also want to remember the trick the authors give for determining if an asymptotically tight bound exists: if the ratio of functions f(n) and g(n) converges to a positive constant as n goes to infinity, then f(n) = Θ(g(n)) (p38).
- Readability: I think this section should get a 9. There is some mathematical notation, but it's well-written and not confusing.
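On my question above about the constants (1/2)c and 2c: my best guess (a sketch, not necessarily the authors' exact reasoning) is that they come from applying the limit definition with the particular choice ε = c/2:

```latex
\lim_{n\to\infty} \frac{f(n)}{g(n)} = c > 0
\;\Longrightarrow\;
\exists\, n_0 \text{ such that } \left|\frac{f(n)}{g(n)} - c\right| < \frac{c}{2}
\quad \text{for all } n \ge n_0,
\qquad \text{i.e.} \qquad
\frac{c}{2} < \frac{f(n)}{g(n)} < \frac{3c}{2} < 2c
\quad \text{for all } n \ge n_0.
```

Any fixed fractions on either side of c would work just as well; (1/2)c and 2c simply give round constants for the Ω and O halves of the Θ bound.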
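The Θ trick from my note above can also be sanity-checked numerically. A toy example of my own (not the book's): for f(n) = 3n^2 + 5n + 7 and g(n) = n^2, the ratio f(n)/g(n) settles toward the positive constant 3, so f(n) = Θ(n^2).

```python
def f(n):
    return 3 * n * n + 5 * n + 7  # a sample quadratic

def g(n):
    return n * n

# The ratio f(n)/g(n) approaches the positive constant 3 as n grows,
# which by the limit trick means f(n) = Theta(n^2).
ratios = [f(n) / g(n) for n in (10, 1000, 1_000_000)]
print(ratios)  # settles toward 3 from above

assert abs(ratios[-1] - 3) < 1e-4
```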
Section 2.3
- Summary: Section 2.3 is Implementing the Stable Matching Algorithm Using Lists and Arrays. It is here where we finally talk about actually coding up the Gale-Shapley algorithm. Earlier in Chapter 2, we said that the algorithm terminates in at most n^2 iterations, but it is in this subsection that we see that it actually has a worst-case running time of O(n^2) when “counting actual computational steps rather than simply the total number of iterations” (p43). That's an important point to make. If something within an iteration ran in more than constant time, then the overall running time of the algorithm could exceed O(n^2), because there would be n^2 iterations each doing more than constant work. The section begins by pointing out the data that needs to be maintained during the execution of the algorithm: the men's and women's rankings, each individual's free/engaged status, and any matchings that are made. Then, the authors review the pros and cons of array vs. linked list representations. The authors proceed with how to implement the algorithm, specifying along the way whether to use an array or a linked list.
- My Questions: Why do the authors say finding the ith item in a linked list takes O(i) time? I see how that is true, but I haven't seen big-O runtimes expressed in terms of something other than n before. I would have expected it to be presented as: “finding an item in the linked list is O(n) in the worst case,” or something like that.
- Second Time Around: I like that the book explicitly lists what needs to happen in constant time in order for the algorithm to have O(n^2) runtime overall (p46). I think seeing the implementation with the array notation spelled out, and with specifics on how to add and remove men from the linked list of unmatched men, made it easier to understand. For example, “he'll propose to w = ManPref[m,Next[m]]” is a whole lot easier for me to comprehend than talking out loud about abstract ideas. One thing that did not make more sense the second time around is how to implement the women's rankings to determine whether a woman prefers m or m'. I think the figure from the class slides helped a lot for that topic.
- Note To Self: I hadn't thought before about pre-processing data in order to make the algorithm more efficient later. I'll definitely want to keep that trick in my back pocket.
- Readability: I give this section an 8, deducting points for not explaining the preprocessing of the women's preference lists thoroughly enough.
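Regarding my question above about O(i): a toy singly linked list (my own illustration, not the book's code) shows why fetching the ith item costs O(i) rather than O(n) — you follow exactly i next-pointers, regardless of how long the list is overall.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def build(values):
    """Build a singly linked list from a Python list."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def get(head, i):
    """Return the ith element (0-indexed) and the number of pointer hops made."""
    hops, node = 0, head
    while i > 0:
        node = node.next
        hops += 1
        i -= 1
    return node.value, hops

head = build(list(range(100)))   # n = 100
value, hops = get(head, 7)
assert (value, hops) == (7, 7)   # cost grows with i, not with n
```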
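To tie the section together, here is a sketch of the array/list implementation in Python. The names follow the book's ManPref, Next, Current, and Ranking arrays, but the details (0-indexing, a deque standing in for the linked list of free men) are my own assumptions, not the authors' code.

```python
from collections import deque

def gale_shapley(man_pref, woman_pref):
    """Sketch of Gale-Shapley with the data structures described in Section 2.3.

    man_pref[m]   : list of women in m's order of preference
    woman_pref[w] : list of men in w's order of preference
    Returns current, where current[w] is the man matched to woman w.
    """
    n = len(man_pref)

    # Preprocessing pass (the step my note above refers to): build Ranking
    # so that ranking[w][m] is w's rank of man m, making "does w prefer m
    # to m'?" a constant-time lookup instead of a scan of her list.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for rank, m in enumerate(woman_pref[w]):
            ranking[w][m] = rank

    nxt = [0] * n                 # Next[m]: index of the next woman m will propose to
    current = [None] * n          # Current[w]: man w is engaged to (None = free)
    free_men = deque(range(n))    # linked list of free men (deque stands in)

    while free_men:
        m = free_men[0]
        w = man_pref[m][nxt[m]]   # he proposes to w = ManPref[m, Next[m]]
        nxt[m] += 1
        if current[w] is None:
            current[w] = m                   # w was free: they become engaged
            free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:
            free_men.popleft()
            free_men.append(current[w])      # the jilted man becomes free again
            current[w] = m
        # otherwise w rejects m; he stays free and proposes again next iteration
    return current
```

Every operation inside the loop, including the "does w prefer m to her current partner?" check via ranking, runs in constant time, which is exactly what the section lists as necessary for the overall O(n^2) bound; the loop itself runs at most n^2 times because nxt[m] advances on every iteration.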
Section 2.4
- Summary: Section 2.4 is
- Motivations:
- About the Algorithm:
- My Questions:
- Second Time Around:
- Note To Self:
- Readability:
Section 2.5
- Summary: Section 2.5 is
- Motivations:
- About the Algorithm:
- My Questions:
- Second Time Around:
- Note To Self:
- Readability:
