===== 4.4 Shortest Paths in a Graph =====
  * This section refers to directed graphs, but the solution will also work for undirected graphs.
===== 4.5 The Minimum Spanning Tree Problem =====
There are n locations. Every edge has a cost. We want the subset of edges that connects all locations and costs as little as possible. Remember the cut property and the cycle property. Here are descriptions of the three greedy algorithms.
  * Kruskal: cut property; sort the edges by increasing cost and add each edge unless it would create a cycle with the edges already chosen; O(m log n).
  * Prim: O(m log n); cut property; grow a single tree from a root node, repeatedly attaching the outside node reachable by the cheapest edge, much like Dijkstra's algorithm (see the sketch after this list).
  * Reverse-Delete: cycle property; start with all of the edges and delete them in order of decreasing cost, skipping any deletion that would disconnect the graph.
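A minimal sketch of Prim's algorithm in Python, assuming nodes are labeled 0..n-1 and the graph is an adjacency list of (cost, neighbor) pairs; the names (''prim_mst'', ''adj'') are just illustrative, not from the book.

<code python>
import heapq

def prim_mst(n, adj):
    """Minimum spanning tree by Prim's algorithm.

    n   -- number of nodes, labeled 0 .. n-1
    adj -- adjacency list: adj[u] is a list of (cost, v) pairs
    Returns (total cost, list of tree edges).
    """
    visited = [False] * n
    tree_edges = []
    total = 0
    heap = [(0, 0, -1)]              # (edge cost, node to attach, node already in the tree)
    while heap:
        cost, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue                 # u was already attached by a cheaper edge
        visited[u] = True
        if parent != -1:
            total += cost
            tree_edges.append((parent, u))
        for edge_cost, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (edge_cost, v, u))
    return total, tree_edges
</code>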
===== 4.6 Implementing Kruskal's Algorithm: The Union-Find Data Structure =====
The title says it all. We want to know the connected components of a graph, but we don't want to recompute them every time that an edge is added to the graph. A solution to this problem is the Union-Find structure. It supports three operations:
  - MakeUnionFind(S): creates the structure, with each element of S starting in its own singleton component; O(n).
  - Find(u): returns the name of the component that u belongs to; O(log n).
  - Union(A,B): merges two components (A,B) into one component; O(log n).
Efficiency (4.25): //Kruskal's Algorithm can be implemented on a graph with n nodes and m edges to run in O(m log n) time.//
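A small Python sketch of those three operations (union by size, no path compression, which keeps Find within the O(log n) bound) and of Kruskal's algorithm built on top of them. The function names and the (cost, u, v) edge format are my own choices, not the book's.

<code python>
def make_union_find(s):
    """MakeUnionFind(S): every element of S starts as its own component."""
    return {u: u for u in s}, {u: 1 for u in s}     # parent pointers, component sizes

def find(parent, u):
    """Find(u): follow parent pointers up to the component's name."""
    while parent[u] != u:
        u = parent[u]
    return u

def union(parent, size, a, b):
    """Union(A, B): point the smaller component's name at the larger one."""
    if size[a] < size[b]:
        a, b = b, a
    parent[b] = a
    size[a] += size[b]

def kruskal_mst(nodes, edges):
    """edges is a list of (cost, u, v) triples; returns the MST edge list."""
    parent, size = make_union_find(nodes)
    tree = []
    for cost, u, v in sorted(edges):                # O(m log m) = O(m log n)
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                                # the edge joins two different components
            union(parent, size, ru, rv)
            tree.append((u, v))
    return tree
</code>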
===== 4.7 Clustering =====
Situation: Given a collection of objects, we want k clusters (but not just any clustering, we want the "clustering of maximum spacing": the one that maximizes the minimum distance between any two points lying in different clusters).
A variation of Kruskal's algorithm finds it: run Kruskal's algorithm but stop just before it adds the last k-1 edges (equivalently, build the minimum spanning tree and delete its k-1 most expensive edges). The resulting connected components form a k-clustering of maximum spacing.
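A rough Python sketch of that idea, assuming the objects are hashable and a distance function ''dist(p, q)'' is supplied; names like ''max_spacing_clusters'' are made up for the example.

<code python>
def max_spacing_clusters(points, k, dist):
    """k-clustering of maximum spacing: Kruskal's algorithm stopped early,
    once exactly k connected components remain.

    points -- list of hashable objects
    dist   -- dist(p, q) gives the distance between two points
    Returns a list of k sets of points.
    """
    parent = {p: p for p in points}
    size = {p: 1 for p in points}

    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u

    # All pairwise "edges", cheapest first.
    edges = [(dist(p, q), p, q)
             for i, p in enumerate(points) for q in points[i + 1:]]
    edges.sort(key=lambda e: e[0])

    components = len(points)
    for d, p, q in edges:
        if components == k:
            break                         # stop before merging down to k-1 clusters
        rp, rq = find(p), find(q)
        if rp != rq:
            if size[rp] < size[rq]:
                rp, rq = rq, rp
            parent[rq] = rp
            size[rp] += size[rq]
            components -= 1

    clusters = {}
    for p in points:
        clusters.setdefault(find(p), set()).add(p)
    return list(clusters.values())
</code>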
===== 4.8 Huffman Codes and Data Compression =====
Huge research area. Think languages. If there are n letters in an alphabet, a fixed-length encoding can represent each letter with about log2(n) bits. But we know that letters come up with different frequencies. Suppose we are given the individual frequencies of the letters; is it possible to encode the more frequent letters with fewer bits, so as to reduce the average number of bits per letter (ABL)?
Yeah, Huffman coding is one way: repeatedly merge the two lowest-frequency letters into a single meta-letter and recurse, so the most frequent letters sit near the top of the tree and get the shortest codewords. The resulting prefix code is optimal.
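A small Python sketch of that bottom-up construction using a heap; the function name and the letter/frequency dictionary format are my own, and the frequencies in the usage comment are made up.

<code python>
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code from {letter: frequency}.

    Repeatedly merges the two lowest-frequency subtrees, so the most
    frequent letters end up nearest the root with the shortest codewords.
    Returns {letter: bit string}.
    """
    tiebreak = count()                       # avoids comparing trees on frequency ties
    heap = [(f, next(tiebreak), letter) for letter, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # a one-letter alphabet still needs one bit
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    _, _, root = heap[0]

    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse into both children
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
        else:                                # leaf: a letter
            codes[node] = prefix
    assign(root, "")
    return codes

# Example call with made-up frequencies:
# huffman_code({"a": 0.32, "e": 0.25, "k": 0.20, "l": 0.18, "u": 0.05})
</code>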
===== 4.9 Minimum-Cost Arborescences: A Multi-Phase Greedy Algorithm =====
Under construction...