  * Reverse-Delete: cycle property;
===== 4.6 Implementing Kruskal's Algorithm: The Union-Find Data Structure =====
The title says it all. We want to know the connected components of a graph, but we don't want to recompute them every time an edge is added to the graph. The solution to this problem is the Union-Find structure. It supports three operations:
  - MakeUnionFind(S): returns a Union-Find data structure on the set S, where every element begins in its own component; O(n).
  - Find(u): returns the name of the component that u belongs to; O(log n).
  - Union(A,B): merges two components (A,B) into one component; O(log n).
  
Efficiency (4.25): //Kruskal's Algorithm can be implemented on a graph with n nodes and m edges to run in O(m log n) time.//
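The three operations above can be sketched in code. This is a minimal illustration (union by size, no path compression), not the book's implementation; the class and field names are my own.

```python
class UnionFind:
    def __init__(self, elements):
        # MakeUnionFind(S): every element starts in its own component; O(n)
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, u):
        # Find(u): walk up to the component's root name; O(log n) with union by size
        while self.parent[u] != u:
            u = self.parent[u]
        return u

    def union(self, a, b):
        # Union(A,B): merge two components, pointing the smaller tree at the larger
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def kruskal(nodes, edges):
    """edges: list of (cost, u, v) tuples; returns the MST edges. O(m log n) overall."""
    uf = UnionFind(nodes)
    mst = []
    for cost, u, v in sorted(edges):          # sort by cost: O(m log m)
        if uf.find(u) != uf.find(v):          # endpoints in different components?
            uf.union(u, v)
            mst.append((cost, u, v))
    return mst
```

For example, `kruskal(['a','b','c','d'], [(1,'a','b'),(3,'a','c'),(2,'b','c'),(4,'c','d')])` skips the cost-3 edge because a and c are already connected.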
===== 4.7 Clustering =====
Situation: Given a collection of objects, we want k clusters (but not just any clustering — we want the "clustering of maximum spacing"). A brute-force approach to finding the best clustering is terribly inefficient, since there are "exponentially many different k-clusterings of a set U." The answer is related to minimum spanning trees.
  
A variation of Kruskal's Algorithm solves it. (4.26): //The components C1, C2, ..., Ck formed by deleting the k - 1 most expensive edges of the minimum spanning tree T constitute a k-clustering of maximum spacing.//
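One way to see (4.26) in code: instead of building the whole MST and then deleting the k - 1 priciest edges, just run Kruskal's merging steps and stop once only k components remain — the two are equivalent. A sketch under that reading (names are illustrative):

```python
def k_clustering(nodes, edges, k):
    """edges: list of (cost, u, v); returns k clusters of maximum spacing."""
    parent = {x: x for x in nodes}

    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u

    components = len(nodes)
    for cost, u, v in sorted(edges):      # cheapest edges first, as in Kruskal
        if components == k:               # stop early: the k - 1 most expensive
            break                         # MST edges are exactly the ones skipped
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru
            components -= 1

    clusters = {}                         # group elements by component root
    for x in nodes:
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())
```

With points 1,2,3,4 where the only expensive edge is between 2 and 3, asking for k=2 splits exactly across that gap.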
===== 4.8 Huffman Codes and Data Compression =====
Huge research area. Think languages. If there are n letters in a language, a fixed-length code can encode each letter in about log n bits. But we know that letters come up with different frequencies. Suppose we are given the individual frequencies of each letter; is it possible to encode the more frequent letters with fewer bits so as to reduce the average bits per letter (ABL)?
  
Yeah, Huffman coding is one way.
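A sketch of the greedy idea, assuming the standard construction (repeatedly merge the two lowest-frequency subtrees with a min-heap); the sample frequencies and the tie-breaking counter are my own illustration:

```python
import heapq

def huffman_codes(freqs):
    """freqs: dict letter -> frequency; returns dict letter -> bitstring."""
    # heap entries are (frequency, tiebreaker, subtree); a subtree is either
    # a letter (leaf) or a (left, right) tuple (internal node)
    heap = [(f, i, letter) for i, (letter, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                              # degenerate one-letter alphabet
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)         # two least frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))  # ...merged into one
        count += 1

    codes = {}
    def walk(node, prefix):                     # read codes off the tree:
        if isinstance(node, tuple):             # left edge = 0, right edge = 1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

For frequencies a:0.5, b:0.25, c:0.15, d:0.10 this assigns a a 1-bit code and the rare letters 3-bit codes, giving ABL 1.75 instead of the 2 bits a fixed-length code would need — and the code is prefix-free, so decoding is unambiguous.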
===== 4.9 Minimum-Cost Arborescences: A Multi-Phase Greedy Algorithm =====
Under construction...
courses/cs211/winter2011/journals/charles/chapter4 · Last modified: 2011/02/19 07:11 by gouldc