==4.4==
This section overviewed Dijkstra's algorithm. It was pretty easy to follow considering I was exposed to this in 112 and during lecture. There was an interesting analogy: the authors compared Dijkstra's algorithm to a system of pipes where the edge lengths are the weights, and a droplet of water falls and creates a wave that expands at constant speed. I prefer to think of it as a quickly growing but hungry slime monster gobbling up the nearest nodes, but hey. They talk about how it's sort of a "continuous version" of breadth-first search. With a priority queue it runs in O(m log n), though, not linear time. They also provided two proofs of correctness, one with all the exposition and another with just the inequalities.
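Just so I don't forget how the heap version actually goes, here's a rough sketch in Python. The graph format (dictionary of adjacency lists) and the tiny example are mine, not from the book:

<code python>
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source, assuming non-negative edge
    weights. graph maps each node to a list of (neighbor, weight) pairs.
    With a binary heap this runs in O(m log n)."""
    dist = {source: 0}
    heap = [(0, source)]               # (distance estimate, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                   # stale heap entry, node already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy example (made up): shortest route a -> c goes through b
g = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}
print(dijkstra(g, 'a'))   # {'a': 0, 'b': 2, 'c': 3}
</code>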

I give this an 8/10.

==4.5==
This section is about the minimum spanning tree problem. Lots of the material was also in the lecture, but it was a good review, and there was some stuff that wasn't in the lecture. For example, it talked about how if you allow edges with cost 0, there can be multiple minimum-cost solutions that aren't trees, but there always exists a tree that solves the problem. It covers all three of the MST algorithms: Kruskal's, Prim's, and the backwards Kruskal's (Reverse-Delete). I think Kruskal should have changed his name to "Proper," because then we'd have Prim's algorithm and Proper's algorithm, and it would be great. The book treated it as sort of a peculiarity for one problem to have three algorithms that solve it equally correctly and with the same O bound. It makes me wonder whether other problems, like shortest paths, have algorithms that aren't Dijkstra's but are equally efficient. I really liked the proof that eliminates the distinct edge cost assumption; making small little "perturbations" to the costs took me by surprise, I guess. I didn't even know what a "perturbation" was. Right before that they talk about how you can add the edges justified by the cut and cycle properties in any order whatsoever. They're real jerks for talking about something cool but then saying "while we will not explore this further here..." Who does that?
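Of the three algorithms, here's my own little Python sketch of just Prim's: grow the tree from a root, always grabbing the cheapest edge leaving it. The graph format and example are made up by me, and this is the simple heap-of-edges version, not the fancier priority-queue-with-ChangeKey version in the book:

<code python>
import heapq

def prim_mst(graph, root):
    """Prim's algorithm. graph maps each node to a list of (neighbor, weight)
    pairs and is assumed undirected (every edge listed both ways).
    Returns a list of (u, v, weight) tree edges."""
    in_tree = {root}
    heap = [(w, root, v) for v, w in graph[root]]   # edges leaving the tree
    heapq.heapify(heap)
    tree = []
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                   # this edge leads back into the tree
        in_tree.add(v)
        tree.append((u, v, w))
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return tree

# toy example (mine): the expensive a-c edge gets skipped
g = {'a': [('b', 1), ('c', 4)],
     'b': [('a', 1), ('c', 2)],
     'c': [('a', 4), ('b', 2)]}
print(prim_mst(g, 'a'))   # [('a', 'b', 1), ('b', 'c', 2)]
</code>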

I give it a 9/10.

==4.6==
This section talks about union-find, which you need to implement <del>Proper's</del> Kruskal's algorithm efficiently. Union-find is a lame name also. I have an idea: Kruskal can change his name to Proper so we can have Prim and Proper, and then we can just call the union-find data structure Kruskal's, because Kruskal is still a pretty cool name and it'd be a pity to eliminate it from existence. Anyway, things I drew from the text: union-find cannot handle deletions of edges, which would split components. Union-find is really a wacky data structure, because Find always returns the _name_ of a set instead of the set itself. The textbook always names a set using one of the elements it contains, which I guess makes sense because you can never split a set, so a set can never lose the element it gets its name from (if you could split sets, that naming scheme might be a bad idea). It goes through a lot of different implementations of union-find. First you can have Find be O(1) but Union take up to O(n), though any sequence of k Union operations takes at most O(k log k) time, which is wacky. But then you can also implement it with pointers so that Union is O(1) and Find is worst case O(log n), and with path compression there's this crazy result where a sequence of n Finds takes O(n α(n)), where α is an "inverse Ackermann function." Which just blows my mind.
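Here's a quick sketch of the pointer-based union-find with union by size plus path compression, just to convince myself I understand it. This is my own Python version with dictionaries standing in for the book's explicit pointers:

<code python>
class UnionFind:
    """Union-find with union by size and path compression.
    Find names a set by a representative element; Union merges two sets,
    and there is no operation to split a set back apart."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, x):
        # follow parent pointers to the representative...
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # ...then point everything on the path straight at it (path compression)
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra            # hang the smaller set under the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

# toy usage (mine)
uf = UnionFind(['a', 'b', 'c', 'd'])
uf.union('a', 'b')
uf.union('c', 'd')
print(uf.find('b') == uf.find('a'))   # True: same component
print(uf.find('a') == uf.find('c'))   # False: different components
</code>

The way this would plug into Kruskal's: sort the edges by cost, and for each edge check whether its endpoints already have the same Find name; if not, add the edge and Union the two sets.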
  
  
  
  