===== 2.4 A Survey of Common Running Times =====

Introduces the notion that "a very small number of distinct reasons" are responsible for the recurrence of familiar runtime bounds -- like O(n) and O(n log n) -- in algorithm analysis. According to the authors, it is a long-term goal to be able to recognize "these common styles of analysis." It may be helpful to contrast an algorithm's current runtime bounds with the bounds of some brute force algorithm. You are constantly trying to improve your algorithm; referring to the efficiency of a brute force approach might help you get your bearings and familiarize yourself with the "landscape" of the particular problem at hand. The familiar runtime bounds described in this section are as follows (a quick sketch of both appears after the list):

  * Linear time
  * Sublinear time
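Here is a sketch of my own (not from the book) illustrating those two bounds: finding the maximum of an unsorted array is linear, since every element must be examined, while binary search on a sorted array is sublinear, since each probe halves the remaining range.

<code java>
// Illustrative sketches of a linear-time and a sublinear-time routine.
public class RunningTimes {

    // O(n): every element of an unsorted array must be examined once
    // to find the maximum.
    static int max(int[] a) {
        int best = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] > best) best = a[i];
        }
        return best;
    }

    // O(log n), hence sublinear: each probe halves the remaining range,
    // but the array must already be sorted.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // target not present
    }
}
</code>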
I am not great at determining the better bounds when you get into factorials and n^k terms (confusing!). I was wondering: say one algorithm (A) is super effective for n < 100 but awful for n >= 100, and another algorithm (B) is average: worse than A for n < 100, but better than A for n >= 100. Can you check the input size at the beginning and run algorithm A for n < 100 and algorithm B for n >= 100? Is this ever done? (My guess would be that sometimes you can't know the input size, so assuming it will be large is appropriate.)
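For what it's worth, this kind of size-based dispatch does happen in practice: library sorts commonly fall back to insertion sort on small subarrays because its constant factors are tiny even though it is O(n^2). A toy sketch along those lines (the cutoff of 100 is illustrative, not tuned):

<code java>
import java.util.Arrays;

// Toy size-based dispatch: insertion sort is O(n^2) but has tiny
// constants, so it wins for small inputs; merge sort's O(n log n)
// growth wins once n is large. The cutoff is illustrative.
public class HybridSort {
    static final int CUTOFF = 100;

    static void sort(int[] a) {
        if (a.length < CUTOFF) insertionSort(a);
        else mergeSort(a);
    }

    // Fast in practice for small n despite the O(n^2) worst case.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    // O(n log n): split, sort each half, merge the sorted halves.
    static void mergeSort(int[] a) {
        if (a.length <= 1) return;
        int mid = a.length / 2;
        int[] left = Arrays.copyOfRange(a, 0, mid);
        int[] right = Arrays.copyOfRange(a, mid, a.length);
        mergeSort(left);
        mergeSort(right);
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) a[k++] = left[i++];
        while (j < right.length) a[k++] = right[j++];
    }
}
</code>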
===== 2.5 A More Complex Data Structure: Priority Queues =====

Priority queues store elements with priority values. Removing from a priority queue returns the element with the highest priority. Priority queues can be useful tools for scheduling problems, like scheduling the processes on a computer. Proposition 2.11 says a sequence of priority queue operations can be used to sort n numbers in O(n log n) time: it takes n insertions to add the numbers and n extract-min operations to remove them in sorted order, and each of those operations runs in O(log n) time. The section then explains the processes "heapify-up" and "heapify-down," which we received handouts for. The underlying heap structure of a priority queue is what allows insertion and deletion to occur in O(log n) time; a sketch of both operations follows.
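Here is a minimal sketch of those two operations, assuming an array-backed min-heap (the class and method names are my own, not from the handouts). Each operation walks a single root-to-leaf path of a balanced binary tree, which is why both are O(log n).

<code java>
import java.util.ArrayList;

// Minimal array-backed min-heap sketch: the element at index i has
// children at indices 2i+1 and 2i+2, and its parent at (i-1)/2.
public class MinHeap {
    private final ArrayList<Integer> heap = new ArrayList<>();

    // O(log n): append at the end of the array, then heapify-up.
    public void insert(int x) {
        heap.add(x);
        heapifyUp(heap.size() - 1);
    }

    // O(log n): move the last element to the root, then heapify-down.
    // Assumes the heap is non-empty.
    public int extractMin() {
        int min = heap.get(0);
        int last = heap.remove(heap.size() - 1);
        if (!heap.isEmpty()) {
            heap.set(0, last);
            heapifyDown(0);
        }
        return min;
    }

    // Walk up toward the root while the element is smaller than its parent.
    private void heapifyUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (heap.get(i) >= heap.get(parent)) break;
            swap(i, parent);
            i = parent;
        }
    }

    // Walk down toward the leaves, swapping with the smaller child
    // until the heap order is restored.
    private void heapifyDown(int i) {
        int n = heap.size();
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < n && heap.get(left) < heap.get(smallest)) smallest = left;
            if (right < n && heap.get(right) < heap.get(smallest)) smallest = right;
            if (smallest == i) break;
            swap(i, smallest);
            i = smallest;
        }
    }

    private void swap(int i, int j) {
        int tmp = heap.get(i);
        heap.set(i, heap.get(j));
        heap.set(j, tmp);
    }
}
</code>

Sorting with this structure is then n calls to insert followed by n calls to extractMin, each O(log n), which gives exactly the O(n log n) bound of Proposition 2.11.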