Chapter 2

2.1: Computational Tractability

The section works toward a definition of an efficient algorithm. It starts with the idea that an algorithm is efficient if it runs quickly on real-life inputs. After discussing brute-force search, it amends this: an algorithm is efficient if it performs qualitatively better than brute-force search. Finally, the text introduces the concept of polynomial time and settles on the definition that an algorithm is efficient if it runs in polynomial time. The section closes by surveying the common running-time functions and how they correspond to actual time in practice.
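
To make those growth rates concrete for myself, here is a small Python sketch of my own (not from the book) that tabulates a few common running-time functions as n doubles; the function name and the sample sizes are my own choices.

```python
import math

# My own illustration of how common running-time functions grow:
# n log n stays manageable, n^2 grows fast, and 2^n explodes.

def growth_row(n):
    """Return the values of n log n, n^2, and 2^n for a given n."""
    return (round(n * math.log2(n)), n * n, 2 ** n)

for n in (10, 20, 40, 80):
    nlogn, nsq, exp = growth_row(n)
    print(f"n={n:>3}: n log n = {nlogn:>4}, n^2 = {nsq:>5}, 2^n = {exp}")
```

Even at n = 80 the polynomial values stay tiny, while 2^n already has 25 digits, which is the book's point about why polynomial time is the dividing line.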

I give this section a 7/10 in terms of being interesting, as it did not simply describe an algorithm in depth.

2.2: Asymptotic Order of Growth Notation

This section goes into how to talk precisely about running times. For instance, it discusses disregarding constant factors and lower-order terms. In building up this description, the concepts of asymptotic upper and lower bounds are introduced to help fully categorize running times. O(*) expresses an upper bound, as opposed to an exact growth rate. Capital omega is used similarly to O(*) to express a lower bound. Lastly, asymptotically tight bounds are introduced and are written with a capital theta. The remainder of the section details the growth rates of specific functions and the associated proofs behind their bounds.
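
To check my understanding of what an upper bound means, here is a quick sketch of my own (the function f and the constants are made up, not from the text): f(n) = 3n^2 + 2n + 5 is O(n^2) because we can pick witness constants c and n0 so that f(n) <= c * n^2 for all n >= n0.

```python
# My own example of an asymptotic upper bound: once we ignore the
# constant factor 3 and the lower-order terms 2n + 5, f is O(n^2).

def f(n):
    return 3 * n**2 + 2 * n + 5

# Candidate witnesses: c = 4 and n0 = 4 (for n >= 4, n^2 >= 2n + 5,
# so 3n^2 + 2n + 5 <= 4n^2). Spot-check the definition over a range.
c, n0 = 4, 4
assert all(f(n) <= c * n * n for n in range(n0, 10_000))
```

The same style of witness-picking, with the inequality reversed, is how the lower-bound (omega) claims in the section are argued.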

This section, once again, did not merely detail a long-winded algorithm. As such, it gets a 7/10.

2.3: Implementing the Stable Matching Problem Using Lists and Arrays

The reading begins to bridge the gap from analysis on paper to actual efficient implementation of algorithms. In the section, the complexity of different array operations is discussed. As a result, the shortcomings of arrays, specifically in dynamically managing the data stored in them, come to light. To address this, another implementation, the linked list, is discussed. Linked lists allow items to be added and removed more easily. As far as I am aware, there is no perfect data type; since linked lists do not support constant-time indexing, there are times when they are inferior to arrays. The rest of the section details how these data types can be used to ensure that the algorithm runs in O(n^2), as the pen-and-paper analysis originally suggested.
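
Here is a rough sketch of my own (not the book's code) of how those data-structure choices make Gale-Shapley run in O(n^2): arrays give O(1) access to each ranking, a precomputed rank table makes "does w prefer m to m'?" an O(1) comparison, and a deque of free men supports O(1) insertion and removal, so each of the at most n^2 proposals takes constant time.

```python
from collections import deque

# My own sketch of Gale-Shapley with the data structures the section
# describes. People on both sides are numbered 0..n-1; prefs are
# lists of lists, most preferred first.

def stable_matching(men_prefs, women_prefs):
    n = len(men_prefs)
    # rank[w][m] = position of man m on woman w's list, for O(1) comparisons
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            rank[w][m] = pos
    next_proposal = [0] * n   # index of the next woman each man proposes to
    partner = [None] * n      # partner[w] = man currently engaged to woman w
    free = deque(range(n))    # free men; O(1) to add and remove
    while free:
        m = free.popleft()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if partner[w] is None:
            partner[w] = m                       # w was free: engage
        elif rank[w][m] < rank[w][partner[w]]:
            free.append(partner[w])              # w trades up; old partner freed
            partner[w] = m
        else:
            free.append(m)                       # w rejects m; he stays free
    return {partner[w]: w for w in range(n)}     # man -> woman
```

Every line inside the loop is O(1) thanks to the arrays and the deque, and the loop runs at most n^2 times, matching the O(n^2) bound from Chapter 1's analysis.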

One thought that I have in response to this section pertains to hashing. Through internships and discussions with upper-level students, I have heard a lot about how hash-based sorts and hash maps can be significantly more efficient than other mappings and sorts, but I have not really been exposed to them in my classes at W&L. So, I am curious what benefits and trade-offs structures of that nature have. Additionally, are there any other techniques which, while not often discussed, allow for efficient implementation?
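
Partially answering my own question, here is a tiny example (my own, with made-up names) of the basic appeal: membership lookup in a Python set or dict is O(1) on average because the key is hashed straight to its bucket, while the same lookup in an unsorted list scans element by element in O(n).

```python
# My own comparison of the two lookup strategies.
names_list = ["ada", "grace", "alan", "edsger"]
names_set = set(names_list)   # hash table built from the same names

# list membership: scans the elements one by one -> O(n) worst case
assert "alan" in names_list
# set membership: hashes "alan" and jumps to its bucket -> O(1) average
assert "alan" in names_set
assert "turing" not in names_set
```

The trade-offs I know of: hash tables give up ordering, cost extra memory for the table, and degrade to O(n) in the (rare) worst case of many collisions, which is why comparison-based structures still matter.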

I generally find data structures and implementation extremely important, so I give this section a 7/10.

courses/cs211/winter2018/journals/goldm/ch2.1.txt · Last modified: by goldm
CC Attribution-Noncommercial-Share Alike 4.0 International