==== Section 2.4: A Survey of Common Running Times ====

This section discusses some common running times for algorithms and the reasons they are so prevalent. These running times include O(n), O(n log n), O(n^2), O(n^3), O(k^n), O(n!), and O(log n). O(n) arises when a program does one sweep through the input, performing a constant-time operation on each element. O(n^2) can arise either when a program uses two nested loops over the input or when it uses one loop and performs an O(n) operation on each value. For instance, selection sort has O(n^2) time complexity. O(n^3) often occurs when a program requires two nested loops inside one outer loop. An example is an algorithm that checks whether any pair of sets in a group of n sets (each of size at most n) is disjoint: it goes through each set, then through each other set, then through each element of the first set, giving three nested loops.
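A minimal sketch of that triple-loop disjointness check (the function name and input shape are assumptions, not from the text):

```python
def has_disjoint_pair(sets):
    """Return True if some pair of the input sets is disjoint.

    With n sets of up to n elements each, there are O(n^2) pairs and
    each pair is checked by scanning up to n elements: O(n^3) overall.
    """
    n = len(sets)
    for i in range(n):                  # outer loop over sets
        for j in range(i + 1, n):       # loop over each other set
            # inner loop over the elements of the first set
            if all(x not in sets[j] for x in sets[i]):
                return True
    return False
```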
O(n log n) generally occurs when an algorithm splits its input into two halves, solves each half recursively, and then combines the solutions in linear, O(n), time; one example of this complexity is merge sort. O(log n) time arises when an algorithm only needs to query (partially read) its input; it is often called sublinear time, and one common example is binary search. O(k^n) is exponential rather than polynomial. We get this for some brute-force algorithms; for instance, searching all subsets of a graph's nodes for an independent set takes exponential time. Finally, factorial time stems from counting arrangements: there are n x (n-1) x ... x 1 = n! ways to pair n items with n other items. For instance, the brute-force solution to the Stable Matching Problem is O(n!) because there are n! possible perfect matchings to check. I would rate the reading for this section at 9/10. It was very straightforward, and it was easy to intuitively understand the causes of these complexities.
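As a sketch, merge sort shows the split/recurse/merge pattern described above:

```python
def merge_sort(a):
    """O(n log n): split in half, sort each half recursively,
    then merge the two sorted halves in linear time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # merge step: repeatedly take the smaller front element
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The recursion depth is log n and each level does O(n) merging work, which is where the O(n log n) total comes from.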

==== Section 2.5: A More Complex Data Structure: Priority Queues ====

Sometimes we can improve running time by using a more complex data structure. The priority queue is one of these. In a priority queue, each element has a key, and the element with the smallest key has first priority. Priority queues may be applied, for example, to manage the order of processes in a computer. Operations such as adding an element, deleting an element, and extracting the element with the minimum key generally run in O(log n) time. Thus extracting all n elements in sorted order, i.e., using a priority queue to sort, takes O(n log n) time. To implement priority queues, we use the heap. Heaps work better than simpler approaches because every operation takes less than O(n) time. The best way to maintain a heap is in an array, where the children of the node at position i sit at positions 2i and 2i+1. The basic heap operations run in constant or O(log n) time: for instance, Heapify-up and Heapify-down have O(log n) complexity, and ChangeKey and Delete are also O(log n). I would rate the ease of reading for this section at 6/10. I feel its descriptions were overly complicated, and understanding these concepts was far easier with the material from class.
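Python's standard heapq module maintains exactly this kind of array-based binary heap; as a minimal sketch of using a priority queue to sort:

```python
import heapq

def heap_sort(values):
    """Priority-queue sort: n insertions followed by n extract-mins,
    each O(log n), for O(n log n) overall."""
    heap = []
    for v in values:
        heapq.heappush(heap, v)   # insert + Heapify-up: O(log n)
    # repeatedly extract the minimum (Heapify-down restores the heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```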