courses:cs211:winter2018:journals:beckg:ch4 [2018/03/12 20:48] (current) – beckg
Prim's Algorithm can be implemented in essentially the exact same manner as Dijkstra's Algorithm, using a heap-based priority queue, and so runs in //O(m log n)// time.
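As a sketch of that idea, here is what a heap-based Prim's implementation might look like in Python. The function name, graph representation, and choice of starting node are my own assumptions, not the book's pseudocode:

```python
import heapq

def prim_mst(n, adj):
    """Sketch of Prim's Algorithm with a heap-based priority queue,
    in the same manner as Dijkstra's.  Nodes are 0..n-1; adj[u] is a
    list of (weight, v) pairs for an undirected connected graph."""
    visited = [False] * n
    total = 0
    tree_edges = []
    heap = [(0, 0, -1)]  # (attachment cost, node, parent); -1 = no parent
    while heap:
        w, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue  # stale heap entry: u was already attached more cheaply
        visited[u] = True
        total += w
        if parent >= 0:
            tree_edges.append((parent, u))
        for weight, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (weight, v, u))
    return total, tree_edges
```

Each edge causes at most one heap push, so with //m// edges the running time is //O(m log n)//, matching Dijkstra's heap-based analysis.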
This section was very well explained; however, I personally found it clearer the way we discussed it in class (and the way I presented it here): with the Cut and Cycle Properties first presented together and then used in analyzing the algorithms together. 8/10.
===== 4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure =====
Regardless, we hit the bottleneck of sorting the //m// edges: //O(m log m)// time. More succinctly put, because //m// ≤ //n<sup>2</sup>//, this is //O(m log n)// time.
I thoroughly enjoyed this section, and the nuances between the different implementations (array, pointer, and collapse-optimized pointer) of the Union-Find structure were very well explained. 9/10.
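A possible Python sketch of the section's ideas: a Union-Find structure using union-by-size and the pointer-collapsing (path compression) optimization, driving Kruskal's Algorithm. The class layout and edge-list representation are my own choices, not the book's exact pseudocode:

```python
class UnionFind:
    """Pointer-based Union-Find with union-by-size and path
    compression (a sketch of the section's optimized structure)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:  # collapse the path to the root
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same component
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach the smaller tree under the larger
        self.size[ra] += self.size[rb]
        return True

def kruskal_mst(n, edges):
    """edges: list of (cost, u, v).  Sorting the m edges dominates,
    giving O(m log n) overall."""
    uf = UnionFind(n)
    mst = []
    for cost, u, v in sorted(edges):
        if uf.union(u, v):  # skip edges that would create a cycle
            mst.append((cost, u, v))
    return mst
```

The `union` calls are nearly constant-time once compression kicks in, which is why the sort is the bottleneck the section describes.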
===== 4.7: Clustering =====
Another application of MSTs is //clustering//: dividing a collection of objects into coherent groups.

With regard to formalism, we consider a set //U// of //n// objects labeled //p<sub>1</sub>, ..., p<sub>n</sub>//, with a distance function //d(p<sub>i</sub>, p<sub>j</sub>)// defined on pairs of objects. A //k-clustering// of //U// is a partition of //U// into //k// nonempty groups, and its //spacing// is the minimum distance between any pair of points lying in different groups. We seek the //k//-clustering of maximum spacing.

The solution arises by seeing the clusters as connected components. Sorting the edges in ascending order of distance (cost), we can then iteratively merge these connected components by adding edges (skipping any edge that would simply connect back to the same component). This is technically //Kruskal's Algorithm//, simply stopped once we are down to //k// connected components.

Or, we could think of it as taking the MST produced by the algorithm and deleting the //k-1// most expensive edges. Therefore, within each component we actually have a tree, and there are //k// components. __These components constitute a k-clustering of maximum spacing.__ This hinges on the fact that those most expensive edges are the ones that would have connected the //k// components. Therefore, the next edge that would have been added (but wasn't) gives the spacing of this clustering, and no other //k//-clustering can do better.
This was a succinct and yet informative section. They explained this extension (abridgement?) of Kruskal's Algorithm well.
| + | ===== 4.8: Huffman Codes and Data Compression ===== | ||
Encoding complex information into binary sequences is a crucial part of data compression. A particular concern is storing information in as little physical memory as possible, and that's where taking advantage of the nonuniform frequencies of letter appearances comes into play.

A //prefix code// for a set S of letters is a function g that maps each letter to some sequence of zeros and ones, in such a way that for distinct letters x and y in S, the sequence g(x) is not a prefix of the sequence g(y). This then allows the encoded message to be perfectly decoded by scanning the bit sequence left to right. //Optimal// prefix codes are those that minimize the average number of bits required per letter (ABL(g)). Optimality is relative to a given text to be encoded, since it depends on the relative frequencies of the letters' appearances in that text.
| + | |||
| + | Note that we can represent prefix codes as binary trees, by simply defining from the root node, taking the path to the left child is a 0 and the right a 1. Additionally, | ||
| + | |||
| + | Proven in the book, we have that there is an optimal prefix code and corresponding tree T in which the two lowest frequency letters are assigned to leaves that are siblings in T. So, given the relative frequencies of each letter, // | ||
| + | * If S has two letters: encode one using 0 and one using 1. | ||
| + | * Else: | ||
| + | * Let y and z be the two lowest freq. letters | ||
| + | * form a new alphabet S' by deleting y and z and replacing them with a new letter w of frequency // | ||
| + | * Recursively construct a prefix code for S' with tree T' | ||
| + | * Define a prefix code for S as follows: | ||
| + | * Start with T' | ||
| + | * Take leaf labeled w and add two children below it labeled y and z | ||
| + | |||
| + | The codes produced by this are called //Huffman codes//, and they achieve the minimum ABL of any prefix code. Implementing the algorithm using heap based PQs, we get a running time of //O(n logn)// for an alphabet of n letters. | ||
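The recursive steps above can be sketched with Python's heapq: each merge of the two lowest-frequency letters is a pop-pop-push, exactly the //O(n log n)// heap implementation mentioned. The counter tiebreaker and dictionary output are implementation choices of mine, not part of the algorithm:

```python
import heapq
from itertools import count

def huffman_code(freq):
    """Heap-based sketch of Huffman's Algorithm.  freq maps letters
    to frequencies; returns a dict mapping letters to bit strings."""
    tiebreak = count()  # keeps the heap from comparing tree nodes on ties
    heap = [(f, next(tiebreak), letter) for letter, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate one-letter alphabet
        (_, _, letter), = heap
        return {letter: "0"}
    while len(heap) > 1:
        fy, _, y = heapq.heappop(heap)  # two lowest-frequency "letters"
        fz, _, z = heapq.heappop(heap)
        # merge y and z into a new letter w of frequency f_y + f_z
        heapq.heappush(heap, (fy + fz, next(tiebreak), (y, z)))
    (_, _, root), = heap
    code = {}
    def walk(node, bits):  # left child = 0, right child = 1
        if isinstance(node, tuple):
            walk(node[0], bits + "0")
            walk(node[1], bits + "1")
        else:
            code[node] = bits
    walk(root, "")
    return code
```

Each of the n-1 merges costs //O(log n)// heap work, and the final tree walk assigns the codewords by the left-0/right-1 convention described earlier.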
| + | |||
| + | I very much liked this section. It was very good explaining everything, as well as the nuances involved in data compression. It certainly " | ||
| + | |||
