====== Entry Two ======
====== Chapter 2.3 ======
**Implementing the Stable Matching Algorithm Using Lists and Arrays**
In this section of chapter two, we explore the tradeoffs between arrays and lists to determine which data structure is appropriate for the algorithm. Some of the tradeoffs we read about are finding the element at a given index i in an array or list, checking whether a given element is in an array or list, and whether the array is sorted or not. An important concept to take away from this section is that preprocessing allows us to convert between the array and list representations in O(n) time, so we are not limited in which data structure we choose and we can freely jump between whichever one fits the algorithm best!
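To make the tradeoff concrete, here is a minimal sketch of my own (not code from the text): looking up index i in a linked list costs O(i) because we must follow the links, while an O(n) preprocessing pass converts the list into an array that supports O(1) lookups afterward.

```python
class Node:
    """A singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def list_get(head, i):
    """Finding the element at index i in a linked list takes O(i) time:
    we must walk the links one by one."""
    node = head
    for _ in range(i):
        node = node.next
    return node.value

def to_array(head):
    """Preprocessing: convert the linked list to an array in O(n) time.
    After this one-time cost, any index lookup is O(1)."""
    arr = []
    node = head
    while node is not None:
        arr.append(node.value)
        node = node.next
    return arr

# Build the list 10 -> 20 -> 30
head = Node(10, Node(20, Node(30)))
print(list_get(head, 2))   # walks 2 links: O(i)
arr = to_array(head)
print(arr[2])              # direct O(1) access
```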
This section did a lot of comparing and contrasting that we went over in detail in class. It was very clear but a lot less interesting because it seemed so repetitive. Readability:
====== Chapter 2.4 ======
**A Survey of Common Running Times**
In this section we analyze common running times: O(n), O(n log n), O(n<sup>2</sup>), and others.
  * **Linear Time - O(n)** (p.48-50)
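A quick illustration of the linear-time case (my own sketch, not the book's code): one pass over n numbers with constant work per element is O(n).

```python
def find_max(numbers):
    """Compute the maximum of n numbers in O(n) time:
    a single pass, doing constant work per element."""
    best = numbers[0]
    for x in numbers[1:]:
        if x > best:
            best = x
    return best

print(find_max([3, 8, 1, 6]))  # 8
```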
This section was a really large chunk of information and took a few reads to understand. The most confusing part was sublinear time. Even though I don't have any specific questions, just the general topic confused me. I thought this chapter was pretty interesting although not very easy to read. It was a very good review and important to cover moving on into the next sections. I would rate readability an 8/10.
====== Chapter 2.5 ======
**A More Complex Data Structure: Priority Queues**
In this section we discuss a very broadly used data structure that helps us achieve our goal of improving run time. We discover the different operations a PQ can perform that a list or array can't.

**The Problem:**
For the Stable Matching algorithm we need to maintain a //dynamically changing set// (such as the set of free men).

**The Motivation:**
One motivation for applying PQs is the problem of managing real-time events (ex: scheduling processes on a computer). We want to implement a priority queue that supports O(log n) time per operation, such as adding and deleting elements and selecting the element with the minimum key. We will use the priority queue to sort a set of n numbers in O(n log n) time!

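The sorting idea above can be sketched with Python's standard heapq module (my choice of library, not the book's): n insertions and n extract-mins, each O(log n), give O(n log n) overall.

```python
import heapq

def pq_sort(numbers):
    """Sort n numbers with a priority queue: n inserts and n
    extract-mins, each O(log n), for O(n log n) total."""
    heap = []
    for x in numbers:
        heapq.heappush(heap, x)            # n insertions
    return [heapq.heappop(heap)            # n extract-mins,
            for _ in range(len(numbers))]  # each returns the current minimum

print(pq_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```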
**The Data Structure: Heap**
A heap is a balanced binary tree (it can also be represented as an array) and is the most efficient way to implement a priority queue. The tree has a root, and each node can have up to two children (left and right). The keys in the tree are in //heap order//: **for every element v at node i, the element w at i's parent satisfies key(w) <= key(v)**.

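The heap-order condition can be checked directly on the array representation, where the parent of index i sits at index (i - 1) // 2 (a standard indexing convention, assumed here since the text doesn't fix one):

```python
def is_heap_ordered(a):
    """Check heap order on the array representation: for every node i > 0,
    the parent at (i - 1) // 2 must have a key <= the child's key."""
    for i in range(1, len(a)):
        parent = (i - 1) // 2
        if a[parent] > a[i]:
            return False
    return True

print(is_heap_ordered([1, 3, 2, 7, 4]))  # True: every parent <= its children
print(is_heap_ordered([5, 3, 2]))        # False: root 5 > its child 3
```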
**Heap Operations**
  * Identifying the minimum element takes constant time (O(1)) because it is the root node.
  * To add or delete elements we have to maintain the balance and order of the tree, so we have two processes that restore heap order:
    * Heapify-Up
      * swap the positions if the child is smaller than its parent
    * Heapify-Down
      * swap the positions if the replacement node is too large for its position
  * //With heapify-up and heapify-down, we can insert and delete elements in O(log n) time.//
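The two restore operations above can be sketched on the array representation (children of index i at 2i + 1 and 2i + 2; this is my own minimal version, not the book's pseudocode):

```python
def heapify_up(a, i):
    """Bubble the element at index i up while it is smaller than its parent."""
    while i > 0:
        parent = (i - 1) // 2
        if a[i] < a[parent]:
            a[i], a[parent] = a[parent], a[i]  # swap child and parent
            i = parent
        else:
            break

def heapify_down(a, i):
    """Push the element at index i down while it is larger than a child."""
    n = len(a)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and a[left] < a[smallest]:
            smallest = left
        if right < n and a[right] < a[smallest]:
            smallest = right
        if smallest == i:          # heap order restored
            break
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

# Insert: append at the end, then heapify-up -- O(log n)
heap = [1, 3, 2, 7, 4]
heap.append(0)
heapify_up(heap, len(heap) - 1)
print(heap[0])  # 0, the new minimum

# Delete-min: move the last element to the root, then heapify-down -- O(log n)
heap[0] = heap.pop()
heapify_down(heap, 0)
print(heap[0])  # 1
```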
This section was very straightforward after the class lecture on heaps. It was a lot less interesting to read the chapter than to be presented the information in class, but overall the text clearly outlined and identified the main points we need to remember about priority queues and heaps and why this data structure helps us achieve a better run time. I would rate the readability an 8/10.