courses:cs211:winter2014:journals:emily:entryfive — hardye — revised 2014/02/26 01:58 to 2014/02/26 04:51 (current)
**(4.10) The schedule A produced by the greedy algorithm has optimal maximum lateness L.**
The exchange argument is still pretty confusing to me; I'd like to see a detailed outline of it. The book confused me even after the lecture we had in class.

Readability: 8/10
====== Chapter 4.4 ======
**Shortest Paths in a Graph**
**Proof**
By induction on the size of S, we prove that for each node u added to S, the path the algorithm chooses to u is a shortest s-u path.
The algorithm does not work if there are edges with //negative weights//.
The runtime of this algorithm is O(mn). The while loop is executed n-1 times, and computing the minimum-weight edge among the m edges takes O(m) time, so the overall runtime is O(mn).
If we keep the minimum values for each node in a priority queue we improve our runtime: we add the nodes into the PQ and extract the minimum to add to the set of shortest-path nodes. When we do this the runtime improves to O(m log n).
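To make the priority-queue version concrete for myself, here is a small Python sketch. The adjacency-list format (a dict of `(neighbor, weight)` lists) is my own choice, and instead of a decrease-key operation it pushes duplicate heap entries and skips the stale ones — not the book's exact pseudocode, just the same idea.

```python
import heapq

def dijkstra(graph, s):
    """Shortest-path distances from s using a binary heap as the PQ.

    graph: dict mapping node -> list of (neighbor, weight) pairs,
    with non-negative weights (the algorithm fails on negative ones).
    """
    dist = {s: 0}
    pq = [(0, s)]                                # (distance estimate, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                             # stale entry; u already finalized
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w                  # found a shorter path to v
                heapq.heappush(pq, (d + w, v))
    return dist
```

Each edge causes at most one push, so the heap holds O(m) entries and each push/pop costs O(log n), matching the O(m log n) bound from the section.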
+ | |||
+ | Readability 8/10 | ||
====== Chapter 4.5 ======
**The Minimum Spanning Tree Problem**
+ | |||
+ | An example of this problem would be if we wanted to connect a neighborhood' | ||
+ | |||
+ | **Design** | ||
+ | There are three different approaches to greedy algorithms that solve this problem | ||
  * Kruskal's Algorithm
    * builds a tree by inserting edges in order of increasing cost
    * don't add an edge if it creates a cycle
  * Prim's Algorithm
    * start with a root node and greedily grow a tree outward
    * add the node that is cheapest to attach to the tree
  * Reverse-Delete
    * backwards version of Kruskal's
    * start with the full graph and delete edges starting with the most expensive
    * only delete an edge if removing it doesn't disconnect the graph
+ | |||
+ | **Analysis** | ||
+ | |||
+ | The //cut property//: Assume that all edge costs are distinct. Let S be any subset of nodes that is neither empty nor equal to all of V, and let edge e = (v,w) be the minimum cost edge with one end in S and the other in V-S. Then every minimum spanning tree contains the edge e. | ||
+ | |||
+ | **Proof** | ||
+ | We use the exchange argument. If we replace the least cost edge in a tree with another edge that is more cost we result in a tree with less expense. | ||
+ | Both Kruskal' | ||
+ | |||
+ | Implementation of Primm' | ||
+ | * both prime' | ||
+ | * implementations of prime' | ||
+ | * if we use a priority queue we can implement in O(m log n) | ||
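A sketch of Prim's algorithm with a heap, to see the similarity to Dijkstra's: the only real change is that a node's key is the cost of its cheapest attaching edge rather than a path length. The graph format and function name are my own assumptions, not the book's.

```python
import heapq

def prim_mst_cost(graph, root):
    """Total cost of a minimum spanning tree grown from root, Prim-style.

    graph: dict node -> list of (neighbor, weight); assumed connected
    and undirected (each edge listed in both directions).
    """
    visited = set()
    pq = [(0, root)]               # (cost of cheapest attaching edge, node)
    total = 0
    while pq:
        w, u = heapq.heappop(pq)
        if u in visited:
            continue               # a cheaper attachment was already used
        visited.add(u)
        total += w
        for v, c in graph[u]:
            if v not in visited:
                heapq.heappush(pq, (c, v))
    return total
```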
+ | |||
+ | This section was difficult to get past all of the obscure variables which made it more difficult to follow than with concrete examples in class. I wish the book had more examples. Readability: | ||
====== Chapter 4.6 ======
**Implementing Kruskal's Algorithm: The Union-Find Data Structure**
+ | |||
+ | We want to store a representation of connected components so we can easily and quickly update the graph and search the graph. | ||
+ | The data structure we use lets us keep disjoint sets. The find operation will return the set that has that node in it. We use this to compare two nodes and see if they' | ||
+ | |||
+ | **The Data Structure** | ||
+ | |||
+ | We could use an array that contains the name of the set containing each element. This would make lookup be O(1) but the union function would take O(n) time. To improve the runtime we will explicitly maintain the list of elements in each set. We also choose the name for the union to be the name of one of the sets so we only have to update the values of the component in the other set. We bound the average running time of k Union operations to have O(k log k) time to update component values. | ||
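A toy version of the array-based structure to convince myself of the costs, assuming elements are numbered 0..n-1 (the class and method names are made up for illustration): Find is one array lookup, and Union renames only the smaller set, which is where the O(k log k) bound comes from.

```python
class ArrayUnionFind:
    """Array-based union-find: name[i] stores the set containing i,
    so Find is O(1); Union renames the smaller set's elements, so
    each element is renamed at most O(log k) times over k Unions."""

    def __init__(self, n):
        self.name = list(range(n))             # each element starts in its own set
        self.members = {i: [i] for i in range(n)}

    def find(self, x):
        return self.name[x]                    # O(1) array lookup

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return                             # already in the same set
        if len(self.members[a]) < len(self.members[b]):
            a, b = b, a                        # keep the larger set's name
        for x in self.members[b]:              # rename only the smaller set
            self.name[x] = a
        self.members[a].extend(self.members.pop(b))
```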
Another data structure we can use is based on pointers. Each node points toward the name of the set it belongs to, and we still use elements of the sets as the possible set names. The MakeUnionFind operation initializes a record for each element with a pointer that points to itself, to show it is in its own one-element set.
**This confuses me!!!! Why is it pointing to itself?** With this data structure the Union operation takes constant time. To keep Find from being linear, we keep the larger set's name for the union, which makes Find O(log n); MakeUnionFind takes O(n) time.
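A sketch of the pointer-based structure (my own names, no path compression). The self-pointer is just a convention: a node whose pointer is itself is the representative (the name) of its set, so initially every element names its own one-element set. Kruskal's algorithm then uses Find to detect cycles and Union to merge components.

```python
class PointerUnionFind:
    """Pointer-based union-find: a node pointing to itself is the
    representative of its set. Union is O(1) (one pointer update);
    Find follows pointers and is O(log n), since an element's
    representative changes only when its set at least doubles."""

    def __init__(self, elements):
        self.parent = {x: x for x in elements}   # self-pointer = set's own name
        self.size = {x: 1 for x in elements}

    def find(self, x):
        while self.parent[x] != x:               # walk up to the representative
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra                      # keep the larger set's name
        self.parent[rb] = ra                     # single pointer update: O(1)
        self.size[ra] += self.size[rb]

def kruskal(nodes, edges):
    """edges: list of (cost, u, v); returns the MST as a list of edges."""
    uf = PointerUnionFind(nodes)
    mst = []
    for c, u, v in sorted(edges):                # increasing cost
        if uf.find(u) != uf.find(v):             # skip edges that close a cycle
            uf.union(u, v)
            mst.append((c, u, v))
    return mst
```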
This section was a lot more confusing to me and I thought it was pretty difficult to read. It took me multiple read-throughs before I felt like I could take notes on it. Readability: 5-6/10.