     * We prove this by an exchange argument: any optimal schedule that isn't the greedy one must have some sort of inversions! We un-invert those, get rid of idle time, and then we have our greedy solution.
     * Not that interesting a section, kind of overly wordy. 5/10.

===== 4.4: Shortest Paths =====

    * We want to find the shortest path in a graph, either from some node //s// to some other node //t// or from //s// to every other node.
    * We can use Dijkstra's algorithm to do this for directed graphs with non-negative edge costs:
        * Explore nodes by selecting as the next node the unvisited node //v// with the lowest-cost path from //s// by way of an already-discovered node.
        * Add //v// to the discovered set.
    * Using a priority queue, Dijkstra's algorithm runs in O(m log n) time for m edges and n nodes.
    * 6/10.
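The two steps above can be sketched in Python with a heap as the priority queue (a minimal sketch — the adjacency-list format and the example graph are made up for illustration):

```python
import heapq

def dijkstra(graph, s):
    """Single-source shortest paths on a directed graph with
    non-negative edge costs. `graph` maps each node to a list of
    (neighbor, cost) pairs. Returns a dict of distances from s."""
    dist = {s: 0}
    pq = [(0, s)]  # (best distance found so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry; u was already settled cheaper
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {'s': [('a', 1), ('b', 4)],
     'a': [('b', 2), ('t', 6)],
     'b': [('t', 3)],
     't': []}
print(dijkstra(g, 's'))  # {'s': 0, 'a': 1, 'b': 3, 't': 6}
```

Instead of a ChangeKey operation, this version just pushes a new entry and skips stale ones on the way out, which keeps the same O(m log n) bound.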

===== 4.5: Minimum Spanning Trees =====

    * A minimum spanning tree of a graph G is a subset of G's edges that connects all of its vertices as cheaply as possible.
    * Such a minimum-cost subgraph is necessarily a tree.
    * Three algorithms to build a minimum spanning tree consist of doing the following:
        * Adding edges to the MST in increasing order of cost, provided the next-cheapest edge does not form a cycle,
        * Beginning at a vertex //s//, repeatedly adding the cheapest edge that connects the tree built so far to a new vertex,
        * Beginning with the original G and, in decreasing order of edge cost, deleting edges, provided deleting the current edge does not disconnect the graph.
    * These three algorithms are Kruskal's, Prim's, and the Reverse-Delete algorithm, respectively.
    * We can implement Prim's algorithm in O(m log n) time.
    * This section wasn't interesting. 4/10.
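Prim's rule of repeatedly grabbing the cheapest edge leaving the current tree can be sketched like this (a sketch only — the adjacency-list format and example graph are my own, not the book's):

```python
import heapq

def prim(graph, s):
    """Prim's algorithm: grow a tree from s, always adding the cheapest
    edge that reaches a new vertex. `graph` maps each node to a list of
    (neighbor, cost) pairs, with each undirected edge listed both ways.
    Returns the tree edges and their total cost."""
    in_tree = {s}
    edges, total = [], 0
    pq = [(w, s, v) for v, w in graph[s]]  # candidate edges leaving the tree
    heapq.heapify(pq)
    while pq:
        w, u, v = heapq.heappop(pq)
        if v in in_tree:
            continue  # both endpoints already in the tree; skip
        in_tree.add(v)
        edges.append((u, v))
        total += w
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(pq, (wx, v, x))
    return edges, total

g = {'a': [('b', 1), ('c', 4)],
     'b': [('a', 1), ('c', 2)],
     'c': [('a', 4), ('b', 2)]}
tree, cost = prim(g, 'a')
print(tree, cost)  # [('a', 'b'), ('b', 'c')] 3
```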

===== 4.6: Kruskal's Algorithm =====

    * We can implement Kruskal's algorithm using a pointer-based union-find structure that supports creating, finding, and merging sets.
    * With this structure we can run Kruskal's algorithm in O(m log n) time.
    * Not that interesting, 5/10.
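A sketch of Kruskal's algorithm with a simple pointer-based union-find (path compression and union by size are one common way to implement the Find and Union operations; the edge-list format here is invented for the example):

```python
def kruskal(n, edges):
    """Kruskal's algorithm with union-find. `edges` is a list of
    (cost, u, v) triples over vertices 0..n-1.
    Returns the MST edges and their total cost."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        root = x
        while parent[root] != root:  # follow pointers to the representative
            root = parent[root]
        while parent[x] != root:     # path compression
            parent[x], x = root, parent[x]
        return root

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False             # same set: this edge would close a cycle
        if size[ra] < size[rb]:      # union by size: attach smaller under larger
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]
        return True

    mst, total = [], 0
    for cost, u, v in sorted(edges):  # increasing order of cost
        if union(u, v):
            mst.append((u, v))
            total += cost
    return mst, total

mst, cost = kruskal(3, [(1, 0, 1), (2, 1, 2), (4, 0, 2)])
print(mst, cost)  # [(0, 1), (1, 2)] 3
```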

===== 4.7: Clustering =====

    * We can use clustering when we want to group objects based on their similarity.
    * To find a k-clustering with maximum spacing (objects in different groups are as far apart as possible), we run Kruskal's algorithm but stop before adding the last k-1 edges, where k is the desired number of clusters; this leaves exactly k connected components.
    * I had a phone interview with a clustering question and I had no idea what to do. This section is a 10/10 because Epic pays really well and asks about clustering in interviews.
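The stop-Kruskal-early idea can be sketched like this (the point numbering and the distance list are invented for the example):

```python
def k_clustering(n, edges, k):
    """Maximum-spacing k-clustering: run Kruskal's algorithm but stop
    once only k connected components remain. `edges` is a list of
    (distance, u, v) triples over points 0..n-1.
    Returns the clusters as a list of sets of points."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for dist, u, v in sorted(edges):   # closest pairs first
        if components == k:
            break                      # k clusters left: stop merging
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru
            components -= 1

    clusters = {}
    for p in range(n):
        clusters.setdefault(find(p), set()).add(p)
    return list(clusters.values())

# Points 0,1 are near each other, as are 2,3; ask for k=2 clusters.
clusters = k_clustering(4, [(1, 0, 1), (1, 2, 3), (10, 1, 2)], 2)
print(clusters)  # [{0, 1}, {2, 3}]
```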

===== 4.8: Huffman Codes & Data Compression =====

    * Data encoding problems all relate to how we can take a problem and solve it recursively.
    * We can apply data compression to encoding symbols in bits, finding prefix codes (mappings from letters to bit strings in which no codeword is a prefix of another), etc.
    * We can represent prefix codes by using binary trees:
        * A binary tree whose number of leaves is the size of our alphabet, where we label the leaves with distinct letters of the alphabet.
            * To encode a letter //s//, we follow the path from the root to the leaf labeled //s//, writing a 0 each time we go to a left child and a 1 each time we go to a right child.
    * Huffman's algorithm takes this a step farther: if the alphabet S has more than two letters, we form an alphabet S' by deleting the two lowest-frequency letters and replacing them with a single new letter whose frequency is the sum of theirs. We recursively construct a prefix code //c// for S', then define a prefix code for S starting from the tree used to build //c//.
    * This algorithm uses the minimum possible number of bits per letter for a prefix code and runs in O(k^2) time; if we use priority queues, we can get O(k log k) time.
    * I know this should have been interesting, but since I missed class I'm just basing my opinion on the book, which was not interesting. 4/10.
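The merge-the-two-rarest-letters recursion can be sketched iteratively with a priority queue (the frequency table is an example I made up):

```python
import heapq
from itertools import count

def huffman(freqs):
    """Huffman's algorithm: repeatedly merge the two lowest-frequency
    'letters' into one, then unwind the merges into a prefix code.
    `freqs` maps letters to frequencies; returns letter -> bit string."""
    tie = count()  # tiebreaker so the heap never compares trees directly
    heap = [(f, next(tie), letter) for letter, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2]: '0'}  # degenerate one-letter alphabet
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two lowest-frequency subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (t1, t2)))

    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):  # internal node: 0 = left, 1 = right
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            code[tree] = prefix
    walk(heap[0][2], '')
    return code

codes = huffman({'a': 0.45, 'b': 0.25, 'c': 0.15, 'd': 0.15})
print(codes)  # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Note how the rarest letters ('c' and 'd') end up deepest in the tree, so they get the longest codewords, while the most frequent letter gets a single bit.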
courses/cs211/winter2014/journals/haley/chapter4.1392191961.txt.gz · Last modified: 2014/02/12 07:59 by archermcclellanh