  * For any two nodes s and t in a graph, their connected components are either identical or disjoint
  
==== Personal Thoughts ====

Like the previous section, this section was pretty simple, as it reviewed concepts taught in multiple other classes. It provides good reference material for BFS and DFS. It contained nothing too confusing and was mostly review.
Readability: 8.5
Interesting: 7
===== 3.3 Implementing Graph Traversal Using Queues and Stacks=====
  
  * Connected graphs: have at least n-1 edges (m >= n-1)
  
//Advantages and disadvantages of Adjacency Matrix vs. List//

  - The adjacency matrix representation of a graph requires O(n²) space, while the adjacency list representation requires only O(m+n) space.
  - Matrix: allows O(1) time to check if a given edge exists in the graph, but the representation takes O(n²) space, and scanning all edges incident to a node takes O(n) time
  - List: works better for sparse graphs; there is an array containing, for each node v, a list of all nodes adjacent to v; requires O(m+n) space, since the lengths of all the lists sum to 2m -> O(m)

  * Degree of a node v is the number of incident edges it has

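As a quick illustration of this trade-off (the graph and node numbering here are my own toy example, not from the text):

```python
# Toy undirected graph: 4 nodes, 4 edges (my own example).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency matrix: O(n^2) space, O(1) to test a specific edge.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1  # undirected: symmetric entries

# Adjacency list: O(m+n) space; list lengths sum to 2m.
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

assert matrix[0][1] == 1                           # edge (0,1) exists, O(1) lookup
assert len(adj[2]) == 3                            # degree of node 2
assert sum(len(a) for a in adj) == 2 * len(edges)  # total list length is 2m
```
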
**Queues and Stacks**
  * Order of element selection is crucial in DFS and BFS
  * Queue: a set from which we extract elements in FIFO (first-in, first-out) order
  * Stack: a set from which we extract elements in LIFO (last-in, first-out) order
  * Use a doubly linked list as the representation
    * Queue: a new element is added to the end of the list as the last element
    * Stack: a new element is added to the first position
    * Insertions are O(1)

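A small sketch of the two extraction orders (my own example; Python's deque and list stand in for the doubly linked list mentioned above):

```python
from collections import deque

# Queue: FIFO -- append at the end, extract from the front.
q = deque()
q.append('a')
q.append('b')
assert q.popleft() == 'a'   # first in, first out

# Stack: LIFO -- append and extract at the same end.
s = []
s.append('a')
s.append('b')
assert s.pop() == 'b'       # last in, first out
```

Both `append` operations are O(1), matching the doubly-linked-list insertions described above.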
**Implementing Breadth-First Search**
  * Use an adjacency list
  * Maintain an array Discovered over the nodes, of length n
  * Can use either a queue or a stack in the algorithm below, since the order within each layer doesn't matter

{{:courses:cs211:winter2018:journals:patelk:bfsalgorithm.png?nolink&400|}}
{{:courses:cs211:winter2018:journals:patelk:bfsalgorithm2.png?nolink&400|}}

  * This algorithm runs in O(m+n) time when using the adjacency list representation
  * The algorithm can be implemented using a single list L maintained as a queue; nodes are processed in the order they are first discovered, so all nodes in a layer are processed as a contiguous sequence

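The single-queue version described above might look like this in Python (my own sketch, not the book's exact pseudocode):

```python
from collections import deque

def bfs(adj, s):
    """BFS from s; adj[u] is the list of neighbors of u.
    Returns parent pointers of the BFS tree."""
    discovered = [False] * len(adj)   # the Discovered array from the text
    parent = [None] * len(adj)
    discovered[s] = True
    queue = deque([s])                # single list L maintained as a queue
    while queue:
        u = queue.popleft()           # FIFO order processes layers contiguously
        for v in adj[u]:
            if not discovered[v]:     # each edge examined O(1) times -> O(m+n)
                discovered[v] = True
                parent[v] = u
                queue.append(v)
    return parent
```

On the 4-cycle `[[1,2],[0,3],[0,3],[1,2]]`, starting from node 0, nodes 1 and 2 land in the first layer and node 3 in the second.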
**Implementing Depth-First Search**

  * Nodes are processed in a stack, rather than in a queue
  * Explore a node u, scan the neighbors of u, shift attention to the first not-yet-explored node v, and explore v
  * Set the array entry Explored[v] to true when we scan v's incident edges
  * The algorithm is underspecified: the adjacency list of a node being explored can be processed in any order

{{:courses:cs211:winter2018:journals:patelk:dfsalgorithm2.png?nolink&400|}}

  * Maintain an array parent; set parent[v] = u when node v is added to the stack; when a node u (other than s) is marked Explored, the edge (u, parent[u]) can be added to the tree
  * Adding and deleting nodes from the stack is O(1)
  * Total number of nodes added to the stack S: O(m+n)
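
The stack-based version sketched above could be written as follows (my own sketch of the bookkeeping):

```python
def dfs(adj, s):
    """Iterative DFS from s with an explicit stack.
    Returns (set of reachable nodes, parent array)."""
    explored = [False] * len(adj)
    parent = [None] * len(adj)
    stack = [s]                       # LIFO: most recently discovered first
    while stack:
        u = stack.pop()
        if not explored[u]:
            explored[u] = True        # mark only when u's incident edges are scanned
            for v in adj[u]:
                if not explored[v]:
                    parent[v] = u     # may be overwritten until v is popped
                    stack.append(v)
    reachable = {v for v in range(len(adj)) if explored[v]}
    return reachable, parent
```

Since a node can sit on the stack multiple times, the total number of stack insertions is O(m+n), matching the bound above.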

**Finding the Set of All Connected Components**
  * O(m+n): although the algorithm may run BFS/DFS multiple times, only a constant amount of work is done on a given edge or node, in the iteration when the connected component it belongs to is under consideration

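A sketch of that repeated-BFS idea (my own code; the component labels are arbitrary):

```python
from collections import deque

def connected_components(adj):
    """Label each node with its connected component.
    O(m+n) overall: each node/edge is touched only in the
    iteration for its own component."""
    n = len(adj)
    component = [None] * n
    label = 0
    for s in range(n):
        if component[s] is None:      # s starts a new component
            component[s] = label
            queue = deque([s])
            while queue:              # BFS restricted to s's component
                u = queue.popleft()
                for v in adj[u]:
                    if component[v] is None:
                        component[v] = label
                        queue.append(v)
            label += 1
    return component
```
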
==== Personal Thoughts ====

This section was not immediately intuitive for me, but it made a lot of sense as I was reading. Already being familiar with BFS, DFS, queues, and stacks helped make this section easier to understand. The algorithms do take some time to understand conceptually, but they are not that difficult.
Readability:
Interesting: 6.5
===== 3.4 Testing Bipartiteness: An Application of Breadth-First Search=====

  * Bipartite Graph: the node set V can be partitioned into sets X and Y in such a way that every edge has one end in X and the other end in Y
  * Example: a cycle of odd length, or any graph that contains an odd cycle, is not bipartite

**Designing the Algorithm**
  * Assume the graph G is connected
  * Identical procedure to BFS: move outward from s, coloring the nodes
  * Implement this on top of BFS by taking the BFS implementation and adding an extra array Color over the nodes (colors are assigned based on even/odd layer)
  * Runtime: O(m+n), the same as BFS

**Analyzing the Algorithm**
  * Let G be a connected graph, and let L<sub>1</sub>, L<sub>2</sub>,... be the layers produced by BFS starting at node s. Then exactly one of the following two things must hold:
      - There is no edge of G joining two nodes of the same layer: G is bipartite
      - There is an edge of G joining two nodes of the same layer: G is not bipartite

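The BFS-plus-Color-array test described above can be sketched like this (my own code, assuming a connected graph as the text does):

```python
from collections import deque

def is_bipartite(adj, s=0):
    """2-color a connected graph by BFS layer parity; fail if any
    edge joins two nodes of the same color (same-layer edge)."""
    color = [None] * len(adj)         # the extra Color array over the nodes
    color[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if color[v] is None:
                color[v] = 1 - color[u]   # next layer gets the opposite color
                queue.append(v)
            elif color[v] == color[u]:    # edge within a layer -> odd cycle
                return False
    return True
```

A 4-cycle is bipartite; a triangle (odd cycle) is not.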
==== Personal Thoughts ====

I don't really remember bipartite graphs from other classes, but they are pretty simple, and this section made them very clear. We had already talked about them in class, which helped as well. There wasn't any information that was too confusing in this short section.
Readability: 10
Interesting:
===== 3.5 Connectivity in Directed Graphs=====

  * Directed Graph: an edge goes from u to v (an asymmetric relationship)

**Representing Directed Graphs**

  * Use an adjacency list representation where each node has two lists associated with it
        - One list consists of the nodes to which it has edges
        - One list consists of the nodes from which it has edges

**The Graph Search Algorithms**

  * BFS & DFS in directed graphs are very similar to their undirected versions
  * BFS: start at node s, define a first layer of nodes as all those to which s has an edge, define a second layer, etc.
  * Nodes in layer j are precisely those for which the shortest path from s has exactly j edges
  * Running time: O(m+n)
  * DFS: also linear; for each node to which u has an edge, it recursively launches a depth-first search

**Strong Connectivity**

  * A graph is strongly connected if for every two nodes u and v, there is a path from u to v and a path from v to u: the nodes are mutually reachable
  * There is a linear-time algorithm to test whether a directed graph is strongly connected
  * For any two nodes s and t in a directed graph, their strong components are either identical or disjoint
    * If two nodes s and t are mutually reachable, the strong components containing s and t are identical
    * For any node v, if s and v are mutually reachable, then t and v are also mutually reachable
    * Computing the strong components for all nodes: O(m+n)

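The linear-time test can be sketched by searching from one node in G and in G with all edges reversed (my own code; the helper name is mine):

```python
def _reachable(adj, s):
    """Set of nodes reachable from s (iterative DFS)."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_strongly_connected(adj):
    """G is strongly connected iff every node is reachable from node 0
    in G (0 reaches all) and in G reversed (all reach 0)."""
    n = len(adj)
    if n == 0:
        return True
    rev = [[] for _ in range(n)]      # build the reversed graph: O(m+n)
    for u in range(n):
        for v in adj[u]:
            rev[v].append(u)
    return len(_reachable(adj, 0)) == n and len(_reachable(rev, 0)) == n
```

A directed 3-cycle is strongly connected; a directed path is not.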
==== Personal Thoughts ====

Again, this section was not very difficult to understand, and most of it makes sense. I would like to talk a little more about strong connectivity in class. The concept makes sense, but I would like to learn more about why it is useful/important. It seems like O(m+n) is a very common running time for graph algorithms...
Readability:
Interesting: 7
  
===== 3.6 Directed Acyclic Graphs and Topological Ordering=====
  * Undirected graph with no cycles: each of its connected components is a tree
  * Directed graph with no cycles: a directed acyclic graph (DAG)
  
**The Problem**
  * DAGs can be used to encode precedence relations or dependencies (e.g., prerequisites among tasks)
  * A node for each task, and a directed edge (i, j) whenever i must be done before j
  * No cycles are allowed, as there would then be no valid order in which to do the tasks
  * Topological Ordering of G: an ordering of its nodes so that for every edge, the first node comes before the second node (all edges point "forward" in the ordering)
  * If G has a topological ordering, then G is a DAG
  
**Designing and Analyzing the Algorithm**
  * Every DAG has a topological ordering
  * The first node in the ordering cannot have any incoming edges (an incoming edge would violate the topological ordering)
    * Thus, in every DAG G, there is a node with no incoming edges
    * If this were not true, repeatedly following incoming edges backward would eventually revisit a node, producing a cycle, so G would not be a DAG
  
{{:courses:cs211:winter2018:journals:patelk:dag.png?nolink&400|}}
  
  * Identifying a node v with no incoming edges and deleting it from G: O(n)
  * The algorithm runs for n iterations, so the running time is O(n²)
  * Can achieve a running time of O(m+n) using the same algorithm, if we iteratively delete nodes with no incoming edges
    * Declare a node to be "active" if it has not yet been deleted by the algorithm, and maintain:
      * for each node w, the number of incoming edges that w has from active nodes
      * the set S of all active nodes in G that have no incoming edges from other active nodes

  * In the beginning, all nodes are active (initialize both of the above)
  * Each iteration selects a node v from the set S and deletes it
  * Go through all nodes w to which v had an edge, and subtract one from the number of active incoming edges for w
  * If that count reaches zero, add w to S
  * This leads to constant work per edge
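
The O(m+n) version described above (often called Kahn's algorithm) might look like this (my own sketch of the bookkeeping, not the book's exact pseudocode):

```python
from collections import deque

def topological_order(adj):
    """O(m+n) topological ordering of a DAG by iteratively deleting
    active nodes with no incoming active edges (the set S above)."""
    n = len(adj)
    indegree = [0] * n                # incoming edges from active nodes
    for u in range(n):
        for v in adj[u]:
            indegree[v] += 1
    S = deque(u for u in range(n) if indegree[u] == 0)
    order = []
    while S:
        u = S.popleft()               # "delete" u from the graph
        order.append(u)
        for v in adj[u]:              # constant work per edge overall
            indegree[v] -= 1
            if indegree[v] == 0:      # v now has no active incoming edges
                S.append(v)
    return order                      # fewer than n nodes => G had a cycle
```
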
  
==== Personal Thoughts ====

This section contained information that I had not yet been exposed to. However, most of the section was very intuitive. Cycles are not a difficult concept, so it makes sense why a DAG cannot have one. This is my first time, as far as I remember, being exposed to DAGs, so it will be interesting to see how difficult I find them as we do more complex things with them.
Readability: 9
Interesting: 8
courses/cs211/winter2018/journals/patelk/chapter3.1517778805.txt.gz · Last modified: by patelk
CC Attribution-Noncommercial-Share Alike 4.0 International