courses:cs211:winter2014:journals:haley:chapter4 (2014/03/05, archermcclellanh)
  * We prove this by an exchange argument: any optimal solution that isn't the greedy solution must have some inversions! We un-invert those, get rid of idle time, and then we have our greedy solution.
  * Not that interesting a section, kind of overly wordy. 5/10.
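The greedy schedule behind this exchange argument can be sketched briefly. A minimal Python sketch, assuming jobs are given as hypothetical (duration, deadline) pairs; the function name is my own:

```python
def min_lateness_schedule(jobs):
    """Greedy Earliest-Deadline-First scheduling to minimize maximum lateness.

    jobs: list of (duration, deadline) pairs, all starting at time 0.
    Returns (order, max_lateness).
    """
    order = sorted(jobs, key=lambda j: j[1])  # sort by earliest deadline first
    t, max_late = 0, 0
    for duration, deadline in order:
        t += duration                         # job finishes at time t
        max_late = max(max_late, t - deadline)
    return order, max_late

order, late = min_lateness_schedule([(3, 2), (1, 3)])
print(order, late)  # [(3, 2), (1, 3)] 1
```

Sorting dominates the running time, so the schedule is produced in O(n log n).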

===== 4.4: Shortest Paths =====

  * We want to find the shortest path in a graph, either from some node //s// to some other node //t// or from //s// to every other node.
  * We can use Dijkstra's algorithm to do this:
    * Explore nodes by selecting the next (unvisited) node //v// as the one with the lowest-cost distance from //s// by way of an already discovered node.
    * Add //v// to the discovered set.
  * Using a priority queue, Dijkstra's algorithm runs in O(m log n) time.
  * 6/10.
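The steps above can be sketched with Python's heapq as the priority queue. A minimal sketch, assuming the graph is an adjacency-list dict (my own representation, not from the text):

```python
import heapq

def dijkstra(graph, s):
    """Shortest-path distances from s via a binary-heap priority queue.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs,
    with nonnegative costs. Runs in O(m log n) time.
    """
    dist = {s: 0}
    pq = [(0, s)]  # (best known distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u already finalized with a shorter path
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {
    "s": [("a", 1), ("b", 4)],
    "a": [("b", 2), ("t", 6)],
    "b": [("t", 3)],
    "t": [],
}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 1, 'b': 3, 't': 6}
```

Instead of a decrease-key operation, this sketch pushes duplicate entries and skips stale ones on pop, which keeps the heap code simple at the same asymptotic cost.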

===== 4.5: Minimum Spanning Trees =====

  * A minimum spanning tree of a graph G is a subset of G's edges such that the resulting subgraph G' is connected as cheaply as possible.
  * Such a minimum-cost connected subgraph will necessarily be a tree.
  * Three algorithms to build a minimum spanning tree consist of doing the following:
    * Adding edges to the MST in increasing order of cost, provided the next cheapest edge does not form a cycle,
    * Beginning at vertex //s//, repeatedly adding the next cheapest edge leaving the tree built so far,
    * Beginning with the original G and, in decreasing order of edge cost, deleting edges provided the current edge to be deleted will not disconnect the graph.
  * These three algorithms are Kruskal's algorithm, Prim's algorithm, and the Reverse-Delete algorithm, respectively.
  * We can implement Prim's algorithm in O(m log n) time.
  * This section wasn't interesting. 4/10.
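Prim's algorithm, the second variant above, can be sketched with a heap. A minimal sketch, assuming a connected undirected graph stored as an adjacency-list dict (my own representation):

```python
import heapq

def prim(graph, s):
    """Prim's algorithm: grow the MST from s, always adding the cheapest
    edge with exactly one endpoint in the tree. O(m log n) with a heap.

    graph: dict node -> list of (neighbor, cost), undirected and connected.
    Returns a list of MST edges (u, v, cost).
    """
    in_tree = {s}
    mst = []
    pq = [(c, s, v) for v, c in graph[s]]  # candidate edges leaving the tree
    heapq.heapify(pq)
    while pq:
        c, u, v = heapq.heappop(pq)
        if v in in_tree:
            continue  # both endpoints already in the tree; skip
        in_tree.add(v)
        mst.append((u, v, c))
        for w, cw in graph[v]:
            if w not in in_tree:
                heapq.heappush(pq, (cw, v, w))
    return mst

g = {"a": [("b", 1), ("c", 3)], "b": [("a", 1), ("c", 2)], "c": [("a", 3), ("b", 2)]}
print(prim(g, "a"))  # [('a', 'b', 1), ('b', 'c', 2)]
```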

===== 4.6: Kruskal's Algorithm & the Union-Find Data Structure =====

  * We can implement Kruskal's algorithm using a pointer-based Union-Find structure to support creation, Find, and Union operations on sets of nodes.
  * With this structure we can run Kruskal's algorithm in O(m log n) time.
  * Not that interesting.
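A minimal sketch of the Union-Find structure and Kruskal's algorithm on top of it, assuming edges come as hypothetical (u, v, cost) triples:

```python
class UnionFind:
    """Disjoint sets with union by size and path compression,
    supporting the creation / Find / Union operations."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # already in the same component
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx  # hang the smaller tree under the larger
        self.size[rx] += self.size[ry]
        return True

def kruskal(nodes, edges):
    """Consider edges in increasing cost order; add an edge iff it joins
    two different components. Sorting dominates: O(m log n) overall."""
    uf = UnionFind(nodes)
    return [(u, v, c)
            for c, u, v in sorted((c, u, v) for u, v, c in edges)
            if uf.union(u, v)]

print(kruskal(["a", "b", "c"], [("a", "b", 1), ("b", "c", 2), ("a", "c", 3)]))
# [('a', 'b', 1), ('b', 'c', 2)]
```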

===== 4.7: Clustering =====

  * We can use clustering when we want to group objects based on their similarity.
  * To find a k-clustering with maximum spacing (objects in different groups are as far apart as possible), we run Kruskal's algorithm but stop before adding the last k-1 edges, where k is the desired number of clusters.
  * I had a phone interview with a clustering question and I had no idea what to do. This section is a 10/10 because Epic pays really well and asks about clustering in interviews.
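The stop-early variant of Kruskal's algorithm can be sketched directly. A self-contained sketch with an inline union-find, assuming edges are hypothetical (u, v, distance) triples:

```python
def k_clusters(nodes, edges, k):
    """Maximum-spacing k-clustering: run Kruskal's algorithm but stop
    once only k components remain, i.e. skip the last k-1 MST merges.

    Returns a list of k sets of nodes.
    """
    parent = {x: x for x in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    components = len(nodes)
    for c, u, v in sorted((c, u, v) for u, v, c in edges):
        if components == k:
            break  # stop before the remaining (k-1) merges
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru
            components -= 1

    clusters = {}
    for x in nodes:
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

print(k_clusters([1, 2, 3, 4], [(1, 2, 1), (3, 4, 1), (2, 3, 10)], 2))
```

The last skipped edge is the spacing of the clustering: the smallest distance between any two objects in different clusters.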

===== 4.8: Huffman Codes & Data Compression =====

  * Data encoding problems relate to how we can take a problem and solve it recursively.
  * We can apply data compression to encoding symbols in bits, finding prefix codes (mapping letters to bit strings so that no code is a prefix of another), etc.
  * We can represent prefix codes by using binary trees:
    * A binary tree whose number of leaves is the size of our alphabet, where we label the leaves with distinct letters of the alphabet.
    * For a letter s in our alphabet, we follow the path from the root to the leaf labeled s; writing a 0 for each left turn and a 1 for each right turn gives the encoding of s.
  * Huffman's algorithm builds an optimal prefix code greedily by repeatedly merging the two lowest-frequency letters.
  * This algorithm uses the minimum needed number of bits per letter for a prefix code, and can run in O(k^2) time. If we use priority queues, we can get O(k log k) time.
  * I know this should have been interesting.
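The priority-queue version of Huffman's algorithm can be sketched briefly. A minimal sketch, assuming frequencies come as a hypothetical letter-to-count dict; each heap entry carries the partial code table for its subtree:

```python
import heapq

def huffman_code(freq):
    """Huffman's greedy algorithm: repeatedly merge the two lowest-frequency
    subtrees. With a priority queue this runs in O(k log k) for k letters.

    freq: dict letter -> frequency (at least two letters).
    Returns dict letter -> bit string forming an optimal prefix code.
    """
    # entries are (frequency, tiebreak counter, {letter: partial code})
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}   # left subtree
        merged.update({ch: "1" + code for ch, code in c2.items()})  # right
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]

code = huffman_code({"a": 5, "b": 2, "c": 1, "d": 1})
print({ch: len(bits) for ch, bits in code.items()})  # {'a': 1, 'b': 2, 'c': 3, 'd': 3}
```

The counter in each heap entry breaks frequency ties so the dicts are never compared; which equal-frequency subtree merges first can vary, but every resulting code has the same optimal total length.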