Chapter 2

2.1: Computational Tractability

The section works toward a usable definition of an efficient algorithm. The first attempt says an algorithm is efficient if it runs quickly on real-life inputs. After discussing brute force, the text amends this: an algorithm is efficient if it achieves qualitatively better performance than brute-force search. It then introduces the concept of polynomial time and settles on the definition that an algorithm is efficient if it runs in polynomial time. The section closes by surveying different running-time classes and how they correspond to real wall-clock time as inputs grow.
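To make the "how they correspond to real time" part concrete, here is a quick sketch (my own, not from the book) that tabulates step counts for a few growth rates; the conversion to seconds assumes a hypothetical machine executing 10^9 steps per second:

<code python>
import math

def seconds(steps, steps_per_second=10**9):
    # convert an abstract step count into rough wall-clock time
    return steps / steps_per_second

for n in [10, 100, 1000, 10**6]:
    growth = {
        "n": n,
        "n log n": n * math.log2(n),
        "n^2": n ** 2,
        "2^n": 2 ** n if n <= 100 else float("inf"),  # hopeless beyond small n
    }
    print(f"n = {n}")
    for name, steps in growth.items():
        print(f"  {name:8}: {steps:.3g} steps  (~{seconds(steps):.3g} s)")
</code>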

I give this section a 7/10 for interest, since it did not simply describe an algorithm in depth.

2.2: Asymptotic Order of Growth Notation

This section develops the vocabulary for talking precisely about running times. For instance, it explains why we disregard constant factors and lower-order terms. Building on this, the concepts of asymptotic upper and lower bounds are introduced to fully categorize running times. O(·) expresses an upper bound rather than an exact growth rate. Ω(·) is used analogously to express a lower bound. Lastly, asymptotic tight bounds, written with Θ(·), hold when the upper and lower bounds match. The remainder of the section details how common growth rates fall under these bounds, along with the associated proofs.
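For my own reference, here are the three definitions written out formally (my rendering in LaTeX; the constants c, ε, and n₀ play the same roles as in the book):

<code latex>
% My rendering of the three asymptotic bounds from this section.
% T(n) is the algorithm's running time, f(n) the comparison function.
\[
T(n) \text{ is } O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0
\]
\[
T(n) \text{ is } \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \text{ such that } T(n) \ge \epsilon \cdot f(n) \text{ for all } n \ge n_0
\]
\[
T(n) \text{ is } \Theta(f(n)) \iff T(n) \text{ is both } O(f(n)) \text{ and } \Omega(f(n))
\]
</code>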

This section, once again, did not merely detail a long-winded algorithm, so it also gets a 7/10.

2.3: Implementing the Stable Matching Problem Using Lists and Arrays

The reading begins to bridge the gap from analysis on paper to actual efficient implementation of algorithms. The section discusses the complexity of basic array operations, which brings the shortcomings of arrays to light, specifically the difficulty of dynamically adding and removing data. This motivates another implementation option, the linked list, which allows items to be added and removed more easily. As far as I am aware, there is no perfect data structure: since linked lists do not support constant-time access by index, there are situations where they are inferior to arrays. The rest of the section details how these structures can be combined so that the stable matching algorithm runs in O(n^2) time, as the pencil-and-paper analysis originally suggested; a sketch of that implementation appears below.
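As a way of checking my understanding, here is a rough Python sketch of that O(n^2) implementation. The variable names (next_proposal, current, ranking) are my own, not the book's; the key idea I took away is the precomputed ranking array that lets a woman compare two proposals in constant time:

<code python>
def stable_matching(men_prefs, women_prefs):
    n = len(men_prefs)
    # next_proposal[m] = index into m's preference list of the next woman to ask
    next_proposal = [0] * n
    # current[w] = man currently engaged to w, or None if w is free
    current = [None] * n
    # ranking[w][m] = position of m on w's list (lower = preferred),
    # precomputed in O(n^2) so each comparison below takes O(1)
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            ranking[w][m] = pos

    free_men = list(range(n))          # plays the role of the list of free men
    while free_men:
        m = free_men.pop()             # some free man with proposals left
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if current[w] is None:
            current[w] = m             # w was free; (m, w) become engaged
        elif ranking[w][m] < ranking[w][current[w]]:
            free_men.append(current[w])  # w trades up; old partner is free again
            current[w] = m
        else:
            free_men.append(m)           # w rejects m; he stays free
    return {current[w]: w for w in range(n)}   # man -> woman

# Tiny usage example with indices 0..n-1 instead of names:
print(stable_matching([[0, 1], [0, 1]], [[0, 1], [1, 0]]))   # {0: 0, 1: 1}
</code>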

One thought I have in response to this section pertains to hashing. Through internships and discussions with upper-level students, I have heard that things like hash maps and hash-based lookups can often be significantly more efficient than other mappings and sorts, but I have not really been exposed to them in my classes at W&L. So I am curious what benefits and trade-offs structures of that nature have. Additionally, are there other structures that, while not often discussed, also allow for efficient implementation?
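From the little I have seen (this is not from the book), the appeal of a hash map is expected constant-time lookup; in Python that is just a dict:

<code python>
# Toy illustration: finding a value in an unsorted list is O(n),
# while a dict (a hash map under the hood) gives expected O(1) lookups.
names = ["alice", "bob", "carol"]
ids = {"alice": 0, "bob": 1, "carol": 2}

print("carol" in names)   # linear scan through the list
print("carol" in ids)     # hash lookup, expected constant time
</code>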

I generally find data structures and implementation extremely important, so I give this section a 7/10.

2.4: A Survey of Common Running Times

True to its name, this section discusses what it is about different algorithms that leads to their common running times, so one can reason about why other algorithms end up in these classes as well. For linear time, the most common cause is passing over each element of the input once and doing a constant amount of work per element. The section also shows how a clever algorithm can take a problem like merging two sorted lists, which at first glance looks quadratic, and solve it in O(n) time; a small sketch of that merge appears below. Algorithms running in O(n log n) often split the input in half, solve each half recursively, and merge the results back together in linear time.

Quadratic time typically comes from doing constant work on every pair of input elements, which usually shows up as a pair of nested loops. Cubic time tends to arise from more elaborate nested loops, and at this point many algorithms stop being practical for inputs of reasonable size. Greater powers of n, such as O(n^k), come from examining all subsets of size k. A running time of 2^n can come from iterating over every subset of a set, and n! from checking every possible ordering of the input elements. Lastly, sublinear running times usually occur when a constant amount of work throws out a constant fraction of the input, as in binary search.
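Here is a minimal sketch of the merge step mentioned above, in my own Python rather than the book's pseudocode; it runs in linear time because each comparison permanently consumes one element from one of the two lists:

<code python>
def merge(a, b):
    # merge two already-sorted lists by repeatedly taking the smaller front element
    merged = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    # one of the lists is exhausted; append whatever remains of the other
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged

print(merge([1, 4, 9], [2, 3, 10]))   # [1, 2, 3, 4, 9, 10]
</code>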

I found this section quite thorough, as it provided both examples and reasoning, so I do not really have any questions about it. It puts a face and a rationale on the names we typically hear for common running times.

This section did feel as if it dragged on a bit, so I give it a 5/10.

2.5: A More Complex Data Structure: Priority Queues

The section goes over what a priority queue is: a set in which each element has an associated key giving its priority, where a lower key means higher priority. An example of when this is useful is managing computer processes. It goes on to describe an implementation in which elements can be added, deleted, and the minimum-key element extracted, all in logarithmic time; with those running times, priority queue operations can be used to sort a set. To implement the priority queue, the book uses a heap data structure. Inserting an element uses Heapify-up, which restores heap order in O(log n) time. To remove the minimum element of the heap, we use ExtractMin; more generally, after removing an element we move another element into the vacated position, and if that replacement's key is too small we run Heapify-up, while if it is too large we run Heapify-down. Either way the heap is repaired in O(log n) time, so deletion takes O(log n). The section then explains how the priority queue operations map onto these heap operations. Finally, to add extra functionality, specifically the ability to access particular elements directly, we keep a separate array that tracks each element's current position in the heap. A rough sketch of the core heap operations is below.
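To make sure I follow the mechanics, here is my own Python sketch of Heapify-up, Heapify-down, Insert, and ExtractMin on a list-backed heap. The book presents these with 1-based indexing; this version is 0-based, and I am omitting the separate position-tracking array:

<code python>
def heapify_up(h, i):
    # while the element at i is smaller than its parent, swap upward
    while i > 0:
        parent = (i - 1) // 2
        if h[i] < h[parent]:
            h[i], h[parent] = h[parent], h[i]
            i = parent
        else:
            break

def heapify_down(h, i):
    # while the element at i is larger than a child, swap it with the smaller child
    n = len(h)
    while True:
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and h[left] < h[smallest]:
            smallest = left
        if right < n and h[right] < h[smallest]:
            smallest = right
        if smallest == i:
            break
        h[i], h[smallest] = h[smallest], h[i]
        i = smallest

def insert(h, key):                # O(log n)
    h.append(key)
    heapify_up(h, len(h) - 1)

def extract_min(h):                # O(log n)
    minimum = h[0]
    h[0] = h[-1]                   # move the last element to the root...
    h.pop()
    if h:
        heapify_down(h, 0)         # ...then restore heap order downward
    return minimum

h = []
for key in [5, 3, 8, 1]:
    insert(h, key)
print([extract_min(h) for _ in range(4)])   # [1, 3, 5, 8]
</code>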

This section is important in giving an example of how to use an appropriate data structure to efficiently implement a more complicated idea. I would like to see more examples like this throughout the text, as I consider it integral to developing algorithms that are useful in the real world, not just ones that look good on paper.

While the material is important, we have already covered all of it in class, so I found the reading a little slow. Overall, it earns a 4/10.
