courses:cs211:winter2014:journals:emily:entrytwo (hardye, last modified 2014/02/24 20:09)

====== Entry Two ======
====== Chapter 2.3 ======
**Implementing the Stable Matching Algorithm Using Lists and Arrays**
In this section of chapter two, we explore the tradeoffs between arrays and lists to determine which data structure is appropriate for the algorithm. Some of the tradeoffs we read about are finding the element with a given index i, checking whether a given element is in an array or list, and whether the array is sorted or not. An important concept to take away from this section is that preprocessing allows us to convert between the array and list representations in O(n) time, so we are not limited in which data structure we choose and can freely jump to whichever one fits the algorithm best!
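
The O(n) conversion idea above can be sketched in a few lines. This is my own illustrative sketch, not code from the text: the `Node` class and function names are assumptions, but each conversion makes a single pass over the n elements.

```python
# Sketch of the O(n) preprocessing idea: converting a linked list to an
# array and back, so we can pick whichever representation suits the
# algorithm. Node, list_to_array, and array_to_list are illustrative names.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def list_to_array(head):
    """Walk the linked list once, copying each value: O(n) total."""
    arr = []
    node = head
    while node is not None:
        arr.append(node.value)
        node = node.next
    return arr

def array_to_list(arr):
    """Build a linked list by prepending from the back of the array: O(n) total."""
    head = None
    for value in reversed(arr):
        head = Node(value, head)
    return head
```
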
This section did a lot of comparing and contrasting that we went over in detail in class. It was very clear but a lot less interesting because it seemed so repetitive. Readability:
====== Chapter 2.4 ======
**A Survey of Common Running Times**
In this section we analyze common running times: O(n), O(n log n), O(n<sup>2</sup>), O(n<sup>3</sup>), O(n<sup>k</sup>), and beyond.
  * **Linear Time - O(n)**
    * the run time is at most the size of the input times some constant factor
    * example algorithms that are linear time:
      * computing the maximum
      * merging two sorted lists
  * **O(//n// log //n//) Time**
    * "the running time of any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time"
    * sorting is the best example of n log n time, specifically merge sort
    * n log n is frequently the run time of algorithms because many algorithms require sorting the input
  * **Quadratic Time** (p. 51)
    * a common running time that arises from performing a search over all pairs of input items and spending constant time per pair
    * example: given n points in the plane, each of the form (x,y), find the pair that is closest together
    * arises naturally from //nested loops//
  * **Cubic Time** (p. 52)
    * more elaborate nested loops
    * example: given n sets, determining whether some pair of them is disjoint (has no elements in common)
  * **O(n<sup>k</sup>) Time**
    * we obtain a running time of O(n<sup>k</sup>) when we search over all subsets of size k
    * example: finding k-node independent sets in a graph
  * **Beyond Polynomial Time** (p. 54)
    * //O(2<sup>n</sup>)//
      * like the brute-force algorithm for k-node independent sets, but now iterating over ALL subsets of the graph
      * the total number of subsets is 2<sup>n</sup>
      * example: given a graph, find an independent set of **maximum** size
    * //O(n!)//
      * grows even more rapidly!
      * n! is the number of ways to match up n items with n other items
        * example: the Stable Matching Problem, matching n women with n men
      * it also arises when the search space consists of all ways to arrange n items in order
        * example: the Traveling Salesman Problem, given the distances between pairs of //n// cities, what is the shortest way to visit all of the cities?
  * **Sublinear Time**
    * sometimes we encounter running times that are even asymptotically smaller than linear!
    * this happens in a model of computation where the input can be "queried" indirectly rather than read completely
    * example: the binary search algorithm
      * given a sorted array of n numbers, determine whether a given number p belongs in the array
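
To make the linear-time merge example above concrete, here is a minimal sketch of my own (not from the text): each comparison permanently consumes one element, so the total work is proportional to the combined input size.

```python
# Merging two sorted lists in O(n) time: repeatedly take the smaller
# front element; every element is handled a constant number of times.

def merge_sorted(a, b):
    merged = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # one of the two lists may have leftovers; they are already sorted
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged
```
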
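
The sublinear example can be sketched the same way. This is a standard binary search (my own sketch, assuming a plain sorted Python list): each probe halves the remaining range, so only O(log n) entries of the array are ever "queried."

```python
# Binary search: determine whether p belongs in a sorted array by
# probing O(log n) positions instead of reading the whole input.

def binary_search(arr, p):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == p:
            return True
        if arr[mid] < p:
            lo = mid + 1      # p can only be in the upper half
        else:
            hi = mid - 1      # p can only be in the lower half
    return False
```
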
+ | |||
+ | This section was a really large chunk of information and took a few times to read to understand. The most confusing part was the sub linear time. Even though I don't have any specific questions, just the general topic confused me. I thought this chapter was pretty interesting although not very easy to read. It was a very good review and important to cover moving on into the next sections. I would rate readability an 8/10. | ||
+ | |||
+ | ====== Chapter 2.5 ====== | ||
+ | **A More Complex Data Structure: Priority Queues ** | ||
+ | |||
+ | In this section we discuss a very broadly used data structure to help us achieve our goal of improving run time. We discover the different processes a PQ can perform that a list or array can't. | ||
+ | |||
+ | **The Problem: | ||
+ | For the Stable Matching algorithm we need to maintain a // | ||
+ | |||
+ | **The Motivation: | ||
+ | One motivation for applying PQ's is the problem of managing real-time events (ex: scheduling processes on a computer). We want to implement a priority queue that supports a O(n log n) time per operation such as adding and deleting elements and selecting the element with the minimum key. We will use the priority queue to sort a set of n numbers! | ||
+ | |||
+ | **The Data Structure: Heap** | ||
+ | A heap is a balanced binary tree (or can be represented as an array too) and is the most efficient way to implement a priority queue. The tree has a root and each node can have up to two children (L and R). The keys in the tree are in //heap order// where **every element v, at node i, the element w at i's parent satisfies key(w)< or = key(v)** | ||
+ | |||
+ | **Heap Operations** | ||
+ | * to identify the minimum element is constant time (O(1)) because it is the root node. | ||
+ | * to add or delete elements we have to maintain the balance and order of the tree so we have two processes to maintain | ||
+ | * Heapify-Up | ||
+ | * swap the positions if the child is smaller than the parent | ||
+ | * Heapify-Down | ||
+ | * swap the positions if the replacement node is too large for its position | ||
+ | * //With heapify-up/ | ||
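
The two operations can be sketched on the array representation mentioned above, where the parent of index i sits at (i - 1) // 2 and its children at 2i + 1 and 2i + 2. The function names follow the text; the code itself is my own sketch of a min-heap.

```python
# Min-heap stored in a Python list. Heapify-Up floats a too-small child
# toward the root; Heapify-Down sinks a too-large node toward the leaves.
# Both walk one root-to-leaf path, so each is O(log n).

def heapify_up(h, i):
    while i > 0:
        parent = (i - 1) // 2
        if h[i] < h[parent]:              # child smaller than parent: swap
            h[i], h[parent] = h[parent], h[i]
            i = parent
        else:
            break

def heapify_down(h, i):
    n = len(h)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and h[left] < h[smallest]:
            smallest = left
        if right < n and h[right] < h[smallest]:
            smallest = right
        if smallest == i:                 # heap order restored
            break
        h[i], h[smallest] = h[smallest], h[i]
        i = smallest

def insert(h, key):
    h.append(key)                         # add at the last position...
    heapify_up(h, len(h) - 1)             # ...then float it up: O(log n)

def extract_min(h):
    h[0], h[-1] = h[-1], h[0]             # move the last element to the root
    minimum = h.pop()
    heapify_down(h, 0)                    # sink the new root: O(log n)
    return minimum
```

Using insert followed by repeated extract_min on a batch of keys returns them in sorted order, which is exactly the priority-queue sorting idea from the motivation above.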
This section was very straight-forward after the class lecture on heaps. It was a lot less interesting to read the chapter than to be presented the information in class, but overall the text clearly outlined and identified the main points we need to remember about priority queues and heaps and why this data structure helps us achieve a better run time. I would rate the readability an 8/10.