  * This section refers to directed graphs, but the solution also works for undirected graphs.
===== 4.5 The Minimum Spanning Tree Problem =====
There are n locations. Every edge has a cost. We want the subset of edges that connects all locations and costs as little as possible. Remember the cut property and the cycle property. Here are descriptions of the three greedy algorithms:
  * Kruskal: cut property; consider edges in order of increasing cost, adding an edge unless it would create a cycle
  * Prim: O(m log n); cut property; grow the tree outward from a root, always adding the cheapest edge leaving the tree
  * Reverse-Delete: cycle property; consider edges in order of decreasing cost, deleting an edge unless it would disconnect the graph
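To make the cut-property idea concrete, here is a minimal sketch of Prim's algorithm in Python (my own naming and representation, not the book's pseudocode), using a heap of candidate edges crossing the cut:

```python
import heapq

def prim_mst(n, adj):
    """Prim's algorithm sketch: grow the tree from node 0, always adding
    the cheapest edge that leaves the current tree (the cut property).
    adj[u] is a list of (cost, v) pairs; nodes are 0..n-1.
    Returns the total cost and the list of tree edges."""
    visited = [False] * n
    tree_edges = []
    total = 0
    # Heap of candidate edges (cost, u, v) crossing the cut around the tree.
    heap = [(c, 0, v) for c, v in adj[0]]
    heapq.heapify(heap)
    visited[0] = True
    while heap and len(tree_edges) < n - 1:
        c, u, v = heapq.heappop(heap)
        if visited[v]:
            continue  # stale entry: v was already absorbed into the tree
        visited[v] = True
        tree_edges.append((u, v))
        total += c
        for c2, w in adj[v]:
            if not visited[w]:
                heapq.heappush(heap, (c2, v, w))
    return total, tree_edges
```

With a binary heap each edge is pushed and popped at most once, giving the O(m log n) bound mentioned above.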
===== 4.6 Implementing Kruskal's Algorithm: The Union-Find Data Structure =====
The title says it all. We want to know the connected components of a graph, but we don't want to recompute them every time an edge is added to the graph. A solution to this problem is the Union-Find structure. It supports three operations:
  - MakeUnionFind(S): returns a Union-Find data structure on the set S, where every element begins in its own component; O(n).
  - Find(u): returns the name of the component that u belongs to; O(log n).
  - Union(A,B): merges two components (A,B) into one component; O(log n).
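The three operations can be sketched in Python as follows (a minimal version with union by size and path compression; the class and method names are my own, mirroring the operations above):

```python
class UnionFind:
    """Sketch of the Union-Find structure: union by size plus
    path compression. Components are named by a representative element."""

    def __init__(self, elements):       # MakeUnionFind(S): O(n)
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, u):                  # Find(u)
        root = u
        while self.parent[root] != root:
            root = self.parent[root]
        # Path compression: point everything on the path straight at the root.
        while self.parent[u] != root:
            self.parent[u], u = root, self.parent[u]
        return root

    def union(self, a, b):              # Union(A,B)
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by size: hang the smaller component under the larger one.
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

For example, after `union("a","b")` and `union("c","d")`, `find("a") == find("b")` holds but `find("a") != find("c")` until the two components are merged.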
  
Efficiency (4.25): //Kruskal's Algorithm can be implemented on a graph with n nodes and m edges to run in O(m log n) time.//
===== 4.7 Clustering =====
Situation: given a collection of objects, we want k clusters (but not just any clustering; we want the "clustering of maximum spacing"). The brute-force approach to finding the best clustering is terribly inefficient, since there are "exponentially many different k-clusterings of a set U." The answer is related to minimum spanning trees.
  
A variation of Kruskal's Algorithm solves it. (4.26) //The components C1, C2, ..., Ck formed by deleting the k - 1 most expensive edges of the minimum spanning tree T constitute a k-clustering of maximum spacing.//
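Equivalently, instead of building the whole MST and deleting edges, we can just run Kruskal's merging step and stop once k components remain. A minimal Python sketch under that reading (function name and edge representation are my own):

```python
def k_clustering(n, edges, k):
    """Kruskal's algorithm stopped early: keep merging the closest pair of
    components until exactly k remain. Equivalent to building the MST and
    deleting its k-1 most expensive edges (statement 4.26).
    edges is a list of (distance, u, v) triples; nodes are 0..n-1.
    Returns the k clusters as lists of nodes."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    components = n
    for d, u, v in sorted(edges):          # edges in order of increasing distance
        if components == k:
            break                          # stop: exactly k clusters remain
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1

    # Group nodes by the representative of their component.
    clusters = {}
    for x in range(n):
        clusters.setdefault(find(x), []).append(x)
    return list(clusters.values())
```

The spacing of the resulting clustering is the length of the first edge that would have been merged next, i.e. the (k-1)-th most expensive MST edge.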
===== 4.8 Huffman Codes and Data Compression =====
Huge research area. Think languages. If there are n letters in a language, it's possible to encode each letter in about log2(n) bits. But we know that letters occur with different frequencies. Suppose we are given the individual frequencies of each letter; is it possible to encode the more frequent letters with fewer bits so as to reduce the average bits per letter (ABL)?
  
Yeah, Huffman coding is one way: build the code bottom-up by repeatedly merging the two lowest-frequency letters into a single node; the resulting tree gives an optimal prefix code.
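The merging step can be sketched with a heap in Python (my own compact representation: each heap entry carries the codes built so far, rather than an explicit tree):

```python
import heapq

def huffman_code(freqs):
    """Huffman coding sketch: repeatedly merge the two lowest-frequency
    entries; prepending 0 to one side's codes and 1 to the other's
    reproduces the prefix code the merge tree would give.
    freqs maps letter -> frequency. Returns {letter: bit string}."""
    # Heap entries: (frequency, tiebreaker, {letter: code-so-far}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # lowest frequency
        f2, _, c2 = heapq.heappop(heap)   # second lowest
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        count += 1                        # fresh tiebreaker for the new node
        heapq.heappush(heap, (f1 + f2, count, merged))
    return heap[0][2]
```

For frequencies a: 0.5, b: 0.3, c: 0.2 this yields a 1-bit code for a and 2-bit codes for b and c, so ABL = 0.5(1) + 0.3(2) + 0.2(2) = 1.5, beating the 2 bits per letter a fixed-length code would need.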
===== 4.9 Minimum-Cost Arborescences: A Multi-Phase Greedy Algorithm =====
Under construction...
courses/cs211/winter2011/journals/charles/chapter4 · Last modified: 2011/02/20 02:45 by gouldc
CC Attribution-Noncommercial-Share Alike 4.0 International