To test if a graph G is strongly connected, we pick any node s and run BFS in G starting from s. We then also run BFS starting from s in G_rev (where G_rev is the graph with all of its edges reversed in direction). Now, if either of these two searches fails to reach every node, then clearly G is not strongly connected. But suppose we find that s has a path to every node, and that every node has a path to s. Then s and v are mutually reachable for every v, and so it follows that every two nodes u and v are mutually reachable: s and u are mutually reachable, and s and v are mutually reachable, so by the above transitive property, we have that u and v are mutually reachable.
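
A sketch of this test in Python (the function names and the edge-list representation are my own choices, not from the textbook):

```python
from collections import deque

def bfs_reachable(adj, s):
    """Return the set of nodes reachable from s via BFS."""
    seen = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def is_strongly_connected(nodes, edges):
    """Run BFS from an arbitrary s in both G and G_rev; G is strongly
    connected iff both searches reach every node."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj, rev = {}, {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)   # G_rev: same edges, reversed
    s = next(iter(nodes))
    return bfs_reachable(adj, s) == nodes and bfs_reachable(rev, s) == nodes
```
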

==== 3.5.4 Comments ====

For me, this section was mostly a review of what I had known about graphs before. The concept of directed graphs wasn't novel to me, nor was the concept of strong connectivity. As such, I did not find this section particularly interesting--since I didn't learn much that was new. That said, the section was easy to understand, and did a good job explaining the different algorithms.

===== 3.6 Directed Acyclic Graphs and Topological Ordering =====

If a directed graph has no cycles, we call it a directed acyclic graph (DAG).

DAGs can be used to encode precedence relations or dependencies in a natural way. Suppose we have a set of tasks labeled {1, 2, ..., n} that need to be performed, and there are dependencies among them stipulating, for certain pairs i and j, that i must be performed before j. We can represent such an interdependent set of tasks by introducing a node for each task, and a directed edge (i, j) whenever i must be done before j.

For a directed graph G, we say that a topological ordering of G is an ordering of its nodes as v_1, v_2, ..., v_n so that for every edge (v_i, v_j), we have i < j. In other words, all edges point "forward" in the ordering. A topological ordering on tasks provides an order in which they can be safely performed.

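Checking whether a given ordering is topological is simple: record each node's position and verify that every edge points forward. (A small sketch of my own, not from the textbook.)

```python
def is_topological_ordering(order, edges):
    """True iff every edge (u, v) points 'forward' in the ordering,
    i.e. u appears before v."""
    pos = {node: i for i, node in enumerate(order)}
    return all(pos[u] < pos[v] for u, v in edges)
```
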
Theorem:

 If G has a topological ordering, then G is a DAG.

Proof: Suppose, by way of contradiction, that G has a topological ordering v_1, v_2, ..., v_n, and also has a cycle C. Let v_i be the lowest-indexed node on C, and let v_j be the node on C just before v_i--thus (v_j, v_i) is an edge. But by our choice of i, we have j > i, which contradicts the assumption that v_1, v_2, ..., v_n was a topological ordering.

==== 3.6.1 Designing and Analyzing the Algorithm ====

To come up with an efficient algorithm to compute the topological ordering of a graph, we must first establish and prove the following theorem:

 In every DAG G, there is a node v with no incoming edges.

Proof: Let G be a directed graph in which every node has at least one incoming edge. We show how to find a cycle in G; this will prove the claim. We pick any node v, and begin following edges backward from v: since v has at least one incoming edge (u, v), we can walk backward to u; then, since u has at least one incoming edge (x, u), we can walk backward to x; and so on. We can continue this process indefinitely, since every node we encounter has an incoming edge. But after n+1 steps, we will have visited some node w twice. If we let C denote the sequence of nodes encountered between successive visits to w, then clearly C forms a cycle.

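The backward walk in this proof is constructive, so it can be run as code. A sketch under my own representation (a map from each node to a list of its predecessors), assuming every node has at least one incoming edge:

```python
def find_cycle_walking_backward(incoming, start):
    """Walk backward from start, one incoming edge at a time, until some
    node repeats; the nodes between its two visits form a cycle.
    `incoming` maps each node to a nonempty list of predecessors."""
    path = [start]          # path[k] is the node reached after k backward steps
    seen = {start: 0}       # node -> index of its first visit on the path
    cur = start
    while True:
        cur = incoming[cur][0]          # step backward along any incoming edge
        if cur in seen:
            cycle = path[seen[cur]:]    # nodes between the two visits of cur
            cycle.reverse()             # reverse so the edges point forward
            return cycle
        seen[cur] = len(path)
        path.append(cur)
```
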
Theorem:

 If G is a DAG, then G has a topological ordering.

(The proof of this theorem can be found on page 102 of the textbook.)

We can use the following algorithm to compute a topological ordering of a graph G:

 Find a node v with no incoming edges and order it first      O(n)
 Delete v from G                                              O(n)
 Recursively compute a topological ordering of G-{v}          O(n)
     and append this order after v                            O(1)

This algorithm runs in O(n^2) time.

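The recursive algorithm above can be sketched iteratively (repeatedly pick a source, order it, delete it). This is my own rough version with an edge-set representation; as written, the per-iteration scan over the edges makes it a bit cruder than the O(n^2) bound suggests, but the structure matches the steps above:

```python
def topo_order_naive(nodes, edges):
    """Naive topological ordering: repeatedly find a node with no incoming
    edges, order it next, and delete it (and its outgoing edges) from G."""
    nodes = set(nodes)
    edges = set(edges)
    order = []
    while nodes:
        # a node with no incoming edges is one that is never an edge target
        targets = {v for (u, v) in edges}
        v = next(n for n in nodes if n not in targets)
        order.append(v)                              # order v next
        nodes.discard(v)                             # delete v from G
        edges = {(a, b) for (a, b) in edges if a != v}
    return order
```
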
We can improve this running time to O(m+n) by modifying the algorithm slightly, as follows:

We declare a node to be "active" if it has not yet been deleted by the algorithm, and we explicitly maintain two things:
(a) for each node w, the number of incoming edges that w has from active nodes; and
(b) the set S of all active nodes in G that have no incoming edges from other active nodes.
At the start, all nodes are active, so we can initialize (a) and (b) with a single pass through the nodes and edges. Then each iteration consists of selecting a node v from the set S and deleting it. After deleting v, we go through all nodes w to which v had an edge, and subtract one from the number of active incoming edges that we are maintaining for w. If this causes the number of active incoming edges to w to drop to zero, then we add w to the set S. Proceeding in this way, we keep track of nodes that are eligible for deletion at all times, while spending constant work per edge over the course of the whole algorithm.

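The improved algorithm can be sketched as follows (my own Python rendering of the bookkeeping described above; `indeg` plays the role of (a) and `S` the role of (b)):

```python
from collections import deque

def topo_order_linear(nodes, edges):
    """O(m+n) topological ordering: maintain, for each node, its count of
    incoming edges from active nodes, plus the set S of active nodes with
    no active incoming edges."""
    adj = {n: [] for n in nodes}
    indeg = {n: 0 for n in nodes}
    for u, v in edges:           # single pass to initialize (a) and (b)
        adj[u].append(v)
        indeg[v] += 1
    S = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while S:
        v = S.popleft()          # select some v from S and delete it
        order.append(v)
        for w in adj[v]:         # w loses one active incoming edge
            indeg[w] -= 1
            if indeg[w] == 0:    # w is now eligible for deletion
                S.append(w)
    if len(order) != len(nodes): # leftover nodes lie on a cycle
        raise ValueError("graph is not a DAG")
    return order
```
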
==== 3.6.2 Comments ====

I feel like this section was more challenging to understand compared to the rest of the chapter--probably because it was entirely new to me. The idea of topological ordering seems intuitive and straightforward, and the O(n^2) algorithm felt like the most natural way to compute the ordering. While I thought the optimization that brings the running time down to O(m+n) was genius, I struggled a fair amount to understand the optimized algorithm. However, a few reads later, I did get the idea (it took me longer than it should have, to be honest). I'd say this was the most interesting section from 3.2 to 3.6 (the rest was probably 60% review).