======Chapter 2: Basics of Algorithm Analysis======

=====Section 2.1: Computational Tractability=====

This section tries to formulate a definition of efficiency. It points out that we cannot simply define efficiency as speed, because even bad algorithms run quickly on fast computers or when they are run on only small test cases. When looking at efficiency we look at the worst-case scenario, since best-case analysis is not particularly realistic and average-case analysis says more about how the input is generated than about how the algorithm itself behaves.

I would give this section a 7 for interest and readability, as it was once again straightforward.

=====Section 2.2: Asymptotic Order of Growth=====

Bounds on runtime functions describe the growth of an algorithm's worst-case running time. A bound can be an upper bound, denoted O(f(n)); a lower bound, denoted Ω(f(n)); or a tight bound, denoted Θ(f(n)). An upper bound holds when the algorithm's runtime function is bounded above by a constant multiple of f(n). A lower bound holds when the runtime function is at least a constant multiple of f(n). Finally, a tight bound holds when the same f(n) serves as both an upper and a lower bound. The section traced through some examples that solidified these definitions. Asymptotic bounds have several useful properties, including transitivity and additivity. Transitivity means that if f = O(g) and g = O(h), then f = O(h). Additivity gives us, among other things, that if f and g are two functions such that g = O(f), then f + g = Θ(f). Finally, the section discussed polynomial (n raised to a power), logarithmic, and exponential runtimes.
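
As a small worked example of these definitions (my own, not one traced in the book), we can pin f(n) = 3n^2 + 2n between an upper and a lower bound of the same order, which makes the bound tight:

<code latex>
\begin{align*}
f(n) &= 3n^2 + 2n \\
f(n) &\le 3n^2 + 2n^2 = 5n^2 \quad \text{for all } n \ge 1, \text{ so } f = O(n^2) \\
f(n) &\ge 3n^2 \quad \text{for all } n \ge 1, \text{ so } f = \Omega(n^2)
\end{align*}
% Since n^2 works as both an upper and a lower bound, f(n) = \Theta(n^2).
</code>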

The motivation for looking at asymptotic bounds is to have a concrete way of describing worst-case behavior. Bounds narrow our prediction of how quickly an algorithm will run to a close range rather than a bunch of possibilities, and they let us see how the runtime of a program is affected when different algorithms are combined.

This section was a lot of notation and it was a little confusing. I think going through some real-life examples will help to clarify it, as thinking about it abstractly is difficult.

I would give this section a 7 out of 10, as it was a little dry to read and was just a lot of notation.

=====Section 2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays=====

The runtime of an algorithm can be greatly affected by the data structures used within it, and the book strives to show how we can find data structures that make algorithms efficient to run and easy to implement. The section then goes in depth on the proper data structures to use to achieve a running time of O(n^2) for the stable matching algorithm.
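
To see how those choices fit together, here is a minimal Python sketch (my own, not the book's code) of Gale-Shapley using the structures the section describes: preference arrays, a precomputed ranking array so each "does w prefer m?" check takes O(1) time, and a list of free men:

<code python>
def stable_matching(man_pref, woman_pref):
    n = len(man_pref)
    # ranking[w][m] = position of man m on woman w's list, built in O(n^2)
    # so that later comparisons between two men take O(1) time.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(woman_pref[w]):
            ranking[w][m] = pos

    next_proposal = [0] * n    # next woman on each man's list to propose to
    current = [None] * n       # current[w] = man that woman w is engaged to
    free_men = list(range(n))  # list of currently free men

    while free_men:
        m = free_men.pop()
        w = man_pref[m][next_proposal[m]]  # best woman m hasn't tried yet
        next_proposal[m] += 1
        if current[w] is None:
            current[w] = m               # w was free: engage
        elif ranking[w][m] < ranking[w][current[w]]:
            free_men.append(current[w])  # w trades up; old partner freed
            current[w] = m
        else:
            free_men.append(m)           # w rejects m; he stays free
    return current

# Each man proposes at most n times, so the loop runs at most n^2 times,
# and every iteration does O(1) work: O(n^2) overall.
men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(stable_matching(men, women))  # [0, 1, 2]
</code>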

The motivation is to create an algorithm that is both easy to implement and efficient regardless of the input size.

This was pretty straightforward, although I found it more confusing to read about than it was when we discussed it in class.

I would give the section a 6 out of 10. Since we had already discussed this at length in class, the section was not super helpful and was a little confusing at times.

=====Section 2.4: A Survey of Common Running Times=====

The best way to approach a new problem is to think about both the bound on the running time you want to achieve and the bound on the size of the search space. It also helps to be able to recognize common runtimes and the algorithms they are generally associated with. Linear time, O(n), occurs when you process the input in one pass, so that the runtime can only be a constant factor of the size n of the input. O(n log n) occurs when an algorithm "splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time," as mergesort does.
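
To make the first two runtimes concrete, here is a short Python sketch (my own, in the spirit of the chapter's examples) of a one-pass maximum, which is linear time, and of the linear-time merge step that underlies O(n log n) algorithms like mergesort:

<code python>
def find_max(nums):
    # O(n): one pass over the input, constant work per element.
    best = nums[0]
    for x in nums[1:]:
        if x > best:
            best = x
    return best

def merge(a, b):
    # O(n): repeatedly take the smaller head of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])  # at most one of these
    out.extend(b[j:])  # is non-empty
    return out

print(find_max([3, 1, 4, 1, 5]))    # 5
print(merge([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
</code>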

The section discussed the Stable Matching Problem, binary search, Independent Set, and several other problems as examples of these running times.
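
Binary search is the standard example of a sublinear, O(log n) runtime, since each probe throws away half of the remaining search space; a minimal sketch (mine):

<code python>
def binary_search(sorted_nums, target):
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid        # found: return the index
        elif sorted_nums[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1                 # target is not present

print(binary_search([2, 4, 6, 8, 10], 8))  # 3
</code>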

Most of this discussion was a much-needed recap of what I had learned in 112, so I feel pretty good about most of the material discussed in this section.

I would give this section an 8/10 because it was incredibly helpful and created a place where I can easily reference needed runtimes.

=====Section 2.5: A More Complex Data Structure: Priority Queues=====

The priority queue is a data structure that organizes items so that each item has a priority value, and when we remove an item we take the one with the highest priority first. However, a priority queue implemented with a simple list or array gives a sorting algorithm that runs in O(n^2) time, and we would like O(n log n) time. So we implement the priority queue with a heap. The heap is a balanced binary tree, similar in shape to a binary search tree but kept in heap order: each node's key is at least as large as its parent's. The heap allows us to create a sorting algorithm that runs in O(n log n) time. It uses two operations, Heapify-Up and Heapify-Down, to restore heap order after an item is added or removed.
The algorithms used in this section are Heapify-Up (page 61) and Heapify-Down (page 63). Both run in O(log n) time, which is what allows us to build a sorting algorithm that runs in O(n log n) time.
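
A small Python sketch of these two operations (my own, using the standard array layout where the children of index i sit at 2i+1 and 2i+2, with smaller keys meaning higher priority):

<code python>
def heapify_up(h, i):
    # Bubble h[i] up while it is smaller than its parent: O(log n).
    while i > 0:
        parent = (i - 1) // 2
        if h[i] < h[parent]:
            h[i], h[parent] = h[parent], h[i]
            i = parent
        else:
            break

def heapify_down(h, i):
    # Push h[i] down while some child is smaller: O(log n).
    n = len(h)
    while True:
        smallest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and h[child] < h[smallest]:
                smallest = child
        if smallest == i:
            break
        h[i], h[smallest] = h[smallest], h[i]
        i = smallest

def insert(h, key):
    h.append(key)              # add at the end, then repair upward
    heapify_up(h, len(h) - 1)

def extract_min(h):
    h[0], h[-1] = h[-1], h[0]  # swap the minimum to the end
    top = h.pop()
    if h:
        heapify_down(h, 0)     # repair downward from the root
    return top

# n inserts plus n extractions, each O(log n): an O(n log n) sort.
heap = []
for x in [5, 2, 8, 1]:
    insert(heap, x)
print([extract_min(heap) for _ in range(4)])  # [1, 2, 5, 8]
</code>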

I am still a little confused about why we don't just call the heap a priority queue; the distinction between the heap as a data structure and a priority queue implemented with a heap is still a bit fuzzy to me.
I would give this section an 8/10. It was interesting and very easy to follow, and it helped reinforce the ideas that we covered in class.