Chapter 2: Basics of Algorithm Analysis
2.1: Computational Tractability
This section starts by noting that the problems we study are fundamentally discrete: algorithms for them "involve an implicit search over a large set of combinatorial possibilities," with the goal of efficiently finding a solution to the problem at hand. The rest of the section discusses the term "efficiency" and possible definitions for it. It settles on saying that an algorithm is efficient if it has a polynomial run time, with this definition being an "absolute notion": it gives us a strict criterion for deciding whether or not a problem has an efficient solution. One of my main takeaways was that the best way to compute and declare run time is the worst-case run time. That way there are no disagreements over what an "average" input means, and it is rare that you actually achieve the best-case run time, so you would not want to report that either (unless the bound is tight, in which case the best and worst cases coincide anyway).
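For reference, the book's notion of polynomial time can be written formally (a paraphrase on my part, with $T(n)$ standing for the worst-case run time on inputs of size $n$):

$$\exists\, c, d > 0 \text{ such that } T(n) \le c \cdot n^{d} \text{ for every input of size } n.$$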
2.2: Asymptotic Order of Growth
Here the different notations for bounding a run time are introduced. The upper bound is denoted by big-O notation: a run time of O(f(n)) means the algorithm does no worse than f(n), up to a constant factor. Ω(f(n)) represents the lower bound, meaning the run time is no better than f(n). The tight bound is represented by Θ. I had not previously known what the tight bound was, but I know now that if the O and Ω bounds are the same function, then the run time is Θ of that same function. Also, while the base does not matter when declaring logarithmic run times (changing the base only changes a constant factor), it matters greatly for exponential run times.
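Writing the bounds out formally (these are the standard definitions, with $T(n)$ as the run time):

$$T(n) \text{ is } O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0$$

$$T(n) \text{ is } \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \text{ such that } T(n) \ge \epsilon \cdot f(n) \text{ for all } n \ge n_0$$

$$T(n) \text{ is } \Theta(f(n)) \iff T(n) \text{ is both } O(f(n)) \text{ and } \Omega(f(n))$$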
2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays
In this section we analyzed the pros and cons of implementing the stable matching algorithm using lists and arrays. What we are looking for is the simplest possible implementation that still uses a structure efficient enough to support the operations we need.
The simplest structure we could use is the array, because it is very easy to implement in pretty much every language. The run times are desirable as well: we can access an element by index in constant time, search an unsorted array in linear time, and search a sorted array in logarithmic time with binary search. The drawback to using an array, though, is that it doesn't change size automatically. To do this, we would have to use insert and delete operations that would significantly cost us time. One question I was wondering is just how much time this would cost us; the answer is O(n) per insert or delete in the worst case, since every element after the affected position has to shift.
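To make those run times concrete, here is a minimal sketch (Python, with a list standing in for an array; the names are mine, not the book's). The comments note the worst-case costs:

```python
from bisect import bisect_left

A = [2, 3, 5, 7, 11, 13]

x = A[3]          # access by index: O(1)

def linear_search(arr, target):
    """O(n) scan of an unsorted array."""
    for i, v in enumerate(arr):
        if v == target:
            return i
    return -1

def binary_search(sorted_arr, target):
    """O(log n) search of a sorted array."""
    i = bisect_left(sorted_arr, target)
    if i < len(sorted_arr) and sorted_arr[i] == target:
        return i
    return -1

# The drawback: inserting at position i forces every later element to
# shift one slot right, so a single insert costs O(n) in the worst case.
A.insert(2, 4)    # A is now [2, 3, 4, 5, 7, 11, 13]
```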
The more efficient and preferable structure in our case is a (doubly) linked list, because its pointers let us insert directly at the front or back of the list in constant time. However, we cannot find the i-th element in constant time, but rather in O(i), because we have to traverse the list from the head until the desired element is found.
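A minimal sketch of the doubly linked list idea (illustrative Python, not the book's code), showing constant-time insertion at either end and O(i) access:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def push_front(self, value):
        """O(1): splice a new node in before the current head."""
        node = Node(value)
        node.next = self.head
        if self.head:
            self.head.prev = node
        else:
            self.tail = node
        self.head = node

    def push_back(self, value):
        """O(1): splice a new node in after the current tail."""
        node = Node(value)
        node.prev = self.tail
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def find_at(self, i):
        """O(i): walk forward from the head until position i."""
        node = self.head
        for _ in range(i):
            if node is None:
                return None
            node = node.next
        return node
```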
I thought it was very interesting that in order to implement the GS algorithm in quadratic time, each iteration of the algorithm must run in constant time. I also wonder why a stack would not be a good implementation for the preference lists, as I feel like being able to just pop off the next preferred woman (for the men) would be relatively quick and simple; a sketch of that idea follows below.
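As a sketch of that stack idea (hypothetical Python, with made-up names): whether we keep an index into each man's preference list, which I believe is closer to what the book does, or pop from a stack, getting a man's next proposal takes O(1) time, so the two seem interchangeable here:

```python
man_pref = {
    "m1": ["w1", "w2"],   # m1's preference list, most preferred first
    "m2": ["w2", "w1"],
}

# Option 1: keep an index per man; each call advances it in O(1).
next_index = {m: 0 for m in man_pref}

def next_proposal(m):
    w = man_pref[m][next_index[m]]
    next_index[m] += 1
    return w

# Option 2: a reversed copy of each list used as a stack; pop() is O(1).
pref_stack = {m: list(reversed(prefs)) for m, prefs in man_pref.items()}

def next_proposal_stack(m):
    return pref_stack[m].pop()
```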
The fact that we can make step 4 of the GS algorithm (finding out whether w prefers m or m') work in constant time is pretty cool to me. At face value I would have thought it impossible, but the n×n ranking array is a pretty useful structure, and this whole section cleared up a lot of what I was still not 100% sure about after the discussion in class. I would give it a 6 out of 10 for readability: it was interesting, although most of it was review, reinforcing the ideas previously discussed.
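A minimal sketch of that n×n ranking idea (illustrative Python, names mine): building the array takes O(n²) up front, after which the step-4 comparison is a single O(1) lookup:

```python
woman_pref = {
    "w1": ["m2", "m1"],   # w1's preference list, most preferred first
    "w2": ["m1", "m2"],
}

# Precompute rank[w][m] = position of m on w's list: O(n^2) total work.
rank = {w: {m: i for i, m in enumerate(prefs)}
        for w, prefs in woman_pref.items()}

def prefers(w, m, m_prime):
    """Return True if w prefers m to m_prime, in O(1)."""
    return rank[w][m] < rank[w][m_prime]
```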
