courses:cs211:winter2014:journals:shannon:chapter2 [2014/01/20 22:35] (current) nollets
I would give the section a 6 out of 10. Since we had already discussed this material at length in class, the section was not super helpful and was a little confusing at times.

=====Section 2.4: A Survey of Common Running Times=====

The best way to approach a new problem is to think about both the bound on the running time you want to achieve and the bound on the size of the search space. It then helps to be able to recognize the common running times and the algorithms they are generally associated with. Linear time, O(n), occurs when you process the input in a single pass, so the running time can only be a constant factor times the size n of the input. O(n log n) occurs when an algorithm has to "split the input into two pieces, solves each piece recursively, and then combines the two solutions in linear time" (pg 50). Quadratic time, O(n^2), is generally associated with a pair of nested loops where each loop runs in O(n) time. Cubic time, O(n^3), similarly occurs when the loops are nested more deeply. O(n^k) time arises when an algorithm must search over all subsets of size k, where k is a constant. Finally, O(2^n) and O(n!) were discussed as two very fast-growing running times that are avoided whenever possible: O(2^n) arises from searching over all subsets of an n-element set, and O(n!) arises from search spaces that require the algorithm to match up n items with n other items in every possible way, or to arrange n items in every possible order. There are also sublinear running times, such as O(log n), which can occur when the search region shrinks on each pass of a loop, as in the binary search algorithm.
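Two of these running times can be sketched concretely. The following is a minimal sketch (my own code, not from the book): a merge sort, which matches the quoted O(n log n) pattern of splitting, recursing, and combining in linear time, and a binary search, whose search region halves on each pass, giving O(log n).

```python
def merge_sort(a):
    """O(n log n): split the input into two pieces, sort each
    recursively, then merge the two sorted halves in linear time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # linear-time combine
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def binary_search(a, target):
    """O(log n): the region [lo, hi] shrinks by half each pass.
    Returns the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

For example, `merge_sort([5, 2, 8, 1])` returns `[1, 2, 5, 8]`, and `binary_search` on that result finds any element in at most two probes.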

The section discussed the Stable Matching Problem, binary search, the Independent Set problem, and several other unnamed algorithms as examples of each running time.

Most of this discussion was a much-needed recap of what I learned in 112, so I feel pretty good about most of the material in the section.

I would give this section an 8/10 because it was incredibly helpful and gives me a place where I can easily look up the running times I need.

=====Section 2.5: A More Complex Data Structure: Priority Queues=====

The priority queue is a data structure that maintains a set of items where each item has a priority value. When we remove an item, we take the one with the highest priority first. However, implementing a priority queue with a simple list leads to sorting in O(n^2) time, and we would like O(n log n) time. So we implement the priority queue with a heap. The heap is a balanced binary tree kept in heap order, which means each child's key is greater than or equal to its parent's key, and it is what lets us build a sorting algorithm that runs in O(n log n) time. It uses two algorithms, Heapify-Up and Heapify-Down, to move values through the heap so that it stays in heap order after an insertion or removal.
The algorithms covered in this section are Heapify-Up (page 61) and Heapify-Down (page 63). Each runs in O(log n) time, which is what lets a heap-based sorting algorithm run in O(n log n) time overall: n insertions followed by n minimum-extractions.
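The two heap operations above can be sketched roughly as follows. This is a minimal min-heap stored in a list (my own sketch with my own function names, not the book's exact pseudocode): Heapify-Up repairs heap order after an insertion at the end, and Heapify-Down repairs it after the root is replaced during an extraction.

```python
def heapify_up(h, i):
    # Swap h[i] with its parent while it is smaller (violates heap order).
    while i > 0:
        parent = (i - 1) // 2
        if h[i] < h[parent]:
            h[i], h[parent] = h[parent], h[i]
            i = parent
        else:
            break

def heapify_down(h, i):
    # Swap h[i] with its smaller child until both children are >= it.
    n = len(h)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and h[left] < h[smallest]:
            smallest = left
        if right < n and h[right] < h[smallest]:
            smallest = right
        if smallest == i:
            break
        h[i], h[smallest] = h[smallest], h[i]
        i = smallest

def insert(h, x):
    # Append at the end, then restore heap order upward: O(log n).
    h.append(x)
    heapify_up(h, len(h) - 1)

def extract_min(h):
    # Move the last leaf to the root, then restore order downward: O(log n).
    h[0], h[-1] = h[-1], h[0]
    m = h.pop()
    heapify_down(h, 0)
    return m
```

Inserting n items and then extracting the minimum n times yields the items in sorted order, with each of the 2n operations costing O(log n), hence O(n log n) total.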
I am still a little confused about why we don't just call the heap a priority queue. The distinction between the heap as a data structure and a priority queue implemented with a heap is still fuzzy to me.
I would give this section an 8/10. It was interesting and very easy to follow, and it helped reinforce the ideas we covered in class.
courses/cs211/winter2014/journals/shannon/chapter2.1390246262.txt.gz · Last modified: 2014/01/20 19:31 by nollets
CC Attribution-Noncommercial-Share Alike 4.0 International