courses:cs211:winter2018:journals:melkersonr:chapter3 [2018/02/06 22:33] (current) melkersonr
===== Section 3.5 =====
  * **Summary:** Section 3.5 is Connectivity in Directed Graphs. We recall that directed graphs have edges with directionality. An edge (u,v) is said to go from node u to node v; there is not necessarily also an edge (v,u) from v to u. Even though our edges now have a direction, we can still talk about BFS and DFS. First, the matter of representing a directed graph: we still use an adjacency list, but now we have two lists for each node, one for the nodes //to// which it has edges and another for the nodes //from// which it has edges. BFS and DFS work almost exactly the same for directed graphs as for undirected graphs, except that we follow edges based on their directionality. For BFS, we follow the edges out of nodes and ignore the edges in. For DFS, "when DFS is at a node u, it recursively launches a depth-first search, in order, for each node to which u has an edge" (p 98). The last subsection is about strong connectivity, which means that for every two nodes in the graph there is a path between them in both directions (every pair of nodes is "mutually reachable"). We can determine in linear time whether a graph is strongly connected: run BFS on G and on G_{rev}, and if either of the searches "fails to reach every node, then clearly G is not strongly connected" (p 98). Finally, say the graph is not strongly connected. We can find the strong component containing a node s by taking the intersection of the nodes reached by BFS on G and on G_{rev}.
  * **My Questions:** Is there such a thing as bipartiteness for a directed graph?
  * **Second Time Around:** At first, I thought that you might end up adding nodes multiple times to a BFS tree of a directed graph, or have some weird structure other than a tree if a later node goes back to an earlier node. Then I recalled that the BFS algorithm prescribes adding a node to a layer only if it is not already Discovered.
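As a sketch of how the linear-time strong connectivity check from the summary could look (not from the book; Python, with graphs stored as adjacency dicts, and all function names my own):

```python
from collections import deque

def bfs_reachable(adj, s):
    """Return the set of nodes reachable from s, following edges out of each node."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:  # only add a node if it is not already Discovered
                seen.add(v)
                queue.append(v)
    return seen

def strong_component_of(adj, s):
    """Nodes mutually reachable with s: intersect BFS on G with BFS on G_rev."""
    # Build G_rev by flipping every edge (u, v) to (v, u).
    rev = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            rev.setdefault(v, []).append(u)
    return bfs_reachable(adj, s) & bfs_reachable(rev, s)

def is_strongly_connected(adj, nodes, s):
    """G is strongly connected iff both searches from s reach every node."""
    return strong_component_of(adj, s) == set(nodes)
```

For example, in the graph with edges a->b, b->c, c->a, c->d, the node d is reachable from a but cannot get back, so the graph is not strongly connected and the strong component containing a is {a, b, c}.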
===== Section 3.6 =====
  * **Summary & Motivations:** Section 3.6 is Directed Acyclic Graphs and Topological Ordering. A directed acyclic graph (DAG) is a directed graph with no cycles. A common example of a DAG is a dependency network - like for course prerequisites, or a list of tasks where some must be done before others or the output of some is the input of others. We represent such relationships by giving each task a node, and for each instance where task i must be done before task j we have the directed edge (i,j). Presented with a DAG that represents dependencies, it's natural to ask for a valid ordering of the tasks. Such an ordering is called a topological ordering. Theorem **3.18** says that "If G has a topological ordering, then G is a DAG" (p 101). We now ask whether every DAG has a topological ordering and, if it does, how we can find it.
  * **About the Algorithms:** Finding a topological ordering involves first selecting a node with no incoming edges, for we know that it doesn't depend on the completion of any other task. We know that there is always at least one node with no incoming edges, because otherwise there would be a cycle in the graph and it wouldn't be a DAG. Furthermore, every DAG does have a topological ordering, which the authors prove by induction. Once we find this node that has no incoming edges, we delete it from the graph. Doing so only removes edges, rather than adding them, so no cycle could be created and the graph is still a DAG. We continue the process of finding a node with no incoming edges and deleting it, adding each such node after the one before, until all nodes are deleted. We end up with a topological ordering for the DAG, but there could be multiple valid orderings. This algorithm is O(n^2) without any cleverness in finding a node with no incoming edges. If we're clever, though, we can get O(n+m) runtime. We facilitate this cleverness by maintaining knowledge about which nodes haven't been deleted yet ("active" nodes): for each node, maintain the number of incoming edges from active nodes, and keep a set of all active nodes that have no incoming edges from other active nodes. Then we simply select a node to delete from that set, go through the nodes to which it had an edge, and subtract one from each of their counts of active incoming edges. If a count drops to zero, we add that node to the set.
  * **My Questions:** Is there a reason to talk about directed cyclic graphs?
  * **Second Time Around:** We weren't able to cover the part where we improve the runtime for finding a topological ordering from O(n^2) to O(n+m) in class. It's a good example of first finding a brute-force solution (a solution that works), and then improving it.
  * **Note to Self:** If a graph depicting dependencies contained a cycle C, "there would be no way to do any of the tasks in C: since each task in C cannot begin until some other one completes, no task in C could ever be done, since none could be done first" (p 100). That makes sense, but I'm not sure I would have thought of it without being asked, "If there were a cycle, could any of the tasks in the cycle be completed?"
  * **Readability:** 9 - I thought this section was pretty straightforward.
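The O(n+m) version of the topological ordering algorithm described above could be sketched like this (my own Python, not from the book; graphs again as adjacency dicts):

```python
from collections import deque

def topological_order(nodes, adj):
    """Return a topological ordering of a DAG in O(n + m),
    or None if the graph contains a cycle."""
    # For each node, the number of incoming edges from active nodes.
    incoming = {u: 0 for u in nodes}
    for u in nodes:
        for v in adj.get(u, []):
            incoming[v] += 1
    # Set of active nodes with no incoming edges from other active nodes.
    ready = deque(u for u in nodes if incoming[u] == 0)
    order = []
    while ready:
        u = ready.popleft()       # "delete" u from the graph
        order.append(u)
        for v in adj.get(u, []):  # u's outgoing edges disappear with it
            incoming[v] -= 1
            if incoming[v] == 0:  # v now has no active predecessors
                ready.append(v)
    # If some nodes were never deleted, they sit on a cycle: not a DAG.
    return order if len(order) == len(nodes) else None
```

For the prerequisite-style DAG with edges 1->2, 1->3, 2->3, this yields [1, 2, 3]; for the two-node cycle 1->2, 2->1 it returns None, matching the observation that no task in a cycle could ever be done first.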