Chapter 3
Section 3.1
Section 3.1 covers the introduction to graphs, the syntax associated with them, and their applications in real-world settings. Graphs are used throughout computer science because of their natural prevalence in our everyday lives; some of the examples the book gave were airplanes and airports, computer networks, and social networks. The types of graphs covered were undirected and directed graphs, along with the special kind of graph called a tree.
This section definitely helped to reinforce what we talked about in class on Monday, January 29th, but I do wish the book had gone into more depth proving assertion (3.2), the claim that, for an undirected graph, any two of its three statements imply the third. That seemed very interesting. I understand that we didn't have time to cover it in class, but I wish the book had proved why it is true.
I give this section a 7 on the interesting scale and a 7 on the readability scale.
Section 3.2
Section 3.2 covers the basics of graph connectivity and two of the most basic ways to traverse graphs: breadth-first and depth-first search. The idea here is to cover the different ways in which we can systematically explore graphs.
A breadth-first traversal of the graph begins with some node s. Every node connected to s by a single edge is added to what is called the first "layer" of the breadth-first search. The process then repeats: every node that is connected to a node in the first layer by one edge, and has not already been placed in a layer, is added to the second layer. In this way, the algorithm moves outward from the base node s and places every reachable node into some layer, which also gives that node's shortest distance from s.
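The layer-by-layer idea above can be sketched in Python. The function name and the representation of the graph as an adjacency-list dictionary are my own assumptions, not the book's notation:

```python
from collections import deque

def bfs_layers(adj, s):
    """Return a dict mapping each node reachable from s to its layer,
    which is also its shortest distance (in edges) from s."""
    layer = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in layer:            # only nodes not yet in a layer
                layer[v] = layer[u] + 1   # one layer farther out
                queue.append(v)
    return layer
```

For example, on the graph {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}, starting from node 1, nodes 2 and 3 land in layer 1, node 4 in layer 2, and node 5 in layer 3.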
A depth-first search goes about exploring the graph in a much different way. Starting from some node s, the algorithm progresses from one node to another until it reaches a "dead end", where there are no more unknown nodes to explore. At this point, it backtracks to an already-explored node that is connected to an unexplored node. The algorithm continues descending into and backing out of the graph until all nodes connected to s have been explored.
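The descend-and-backtrack behavior maps naturally onto recursion. Here is a minimal sketch under the same adjacency-list assumption (the function name is mine):

```python
def dfs(adj, s, explored=None):
    """Explore as deep as possible from s; each recursive call descends
    one edge into the graph, and returning from it backtracks."""
    if explored is None:
        explored = set()
    explored.add(s)
    for v in adj[s]:
        if v not in explored:   # a dead end is when no neighbor passes this test
            dfs(adj, v, explored)
    return explored
```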
Having already done this in class, I rate this section an 8 for readability but only a 5 on the interesting scale.
Section 3.3
Section 3.3 goes over the actual implementation of breadth-first and depth-first searches. The two algorithms are very similar, except for how they discover and explore nodes and the data structures they use to implement their processes.
The breadth-first search uses a queue to store its nodes. Because a queue is First In, First Out, nodes are processed in the order they were added, which allows the layers of the breadth-first algorithm to form properly. In addition, as soon as BFS discovers a node, it explores it. This may seem obvious, but it is quite different from how DFS does things.
The depth-first search uses a stack to store and process its nodes. A stack gives the DFS algorithm a Last In, First Out policy, which lets the program proceed all the way down some path in the graph before making its way back up to a node with unexplored edges. An important difference from BFS is that when DFS discovers a node, the node is just added to the stack and not explored yet. By waiting to explore the node, DFS realizes the goal of a depth-first search: a node's unexplored edges are only checked once all other, more recent possibilities have been exhausted.
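The "discover now, explore later" behavior is exactly what an explicit stack provides. A minimal sketch, again assuming an adjacency-list dictionary and with names of my own choosing:

```python
def dfs_iterative(adj, s):
    """Depth-first exploration with an explicit stack.  A discovered
    node is merely pushed; it is explored (marked and expanded) only
    when it is popped, i.e. Last In, First Out."""
    explored = []
    seen = set()
    stack = [s]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            explored.append(u)
            for v in adj[u]:
                stack.append(v)   # discovered now, explored later
    return explored
```

Swapping the stack for a queue (popping from the front instead of the back) turns this same loop into BFS, which is why the book can call the two implementations so similar.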
Both algorithms run in O(m + n) time. This was an interesting chapter; I give it an 8 on the interesting scale and a 6 on the readability scale.
Section 3.4
Section 3.4 demonstrates the usefulness of BFS by showing how it can be used to test for bipartiteness in graphs. Using the same algorithm as before to generate a breadth-first layering of the graph, we add the constant-time action of coloring a node one color if it is in an odd layer and the other color if it is in an even layer. Adding this constant work does not increase the running time, so the algorithm still runs in O(n + m) time. In addition, if there are no odd-length cycles in the graph, we know for sure that we have a bipartite graph.
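The coloring test can be sketched by reusing the BFS layering and rejecting any edge whose endpoints share a layer, since such an edge closes an odd cycle. This sketch assumes a connected graph given as an adjacency-list dictionary, and the function name is my own:

```python
from collections import deque

def is_bipartite(adj, s):
    """BFS 2-coloring: odd layers get one color, even layers the other.
    An edge between two nodes in the same layer closes an odd cycle,
    so the graph is bipartite iff no such edge exists."""
    layer = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in layer:
                layer[v] = layer[u] + 1
                queue.append(v)
            elif layer[v] == layer[u]:   # same color on both endpoints
                return False
    return True
```

A 4-cycle passes the test while a triangle fails it, matching the odd-cycle characterization.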
This section helped give me a greater understanding of bipartite graphs and how we determine them. I give this section a 7 on readability and a 7 on the interesting scale.
Section 3.5
After discussing in detail the connectivity possibilities in undirected (or doubly-directed) graphs, section 3.5 turns our attention to directed graphs. Here each edge points in a direction, which has interesting implications for the connectivity of the graph. As it turns out, neither the BFS nor the DFS algorithm changes too much when directed graphs come into play. The algorithms can be run on the graph G, following the edges each node u points along, or on Grev, the reversed graph, whose edges record which nodes point at u. Running a BFS or DFS in both directions can be used to show that a graph is strongly connected; that is, for every pair of nodes u and v there is a path from u to v and a path from v to u. A strongly connected graph, while directed, can essentially be treated like an undirected graph.
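A sketch of the two-direction check (function names and the dictionary representation are my assumptions, and I use a simple stack-based search in place of a full BFS/DFS):

```python
def reachable(adj, s):
    """All nodes reachable from s following edge directions."""
    seen = {s}
    stack = [s]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_strongly_connected(adj, s):
    """Search G from s, then search Grev (every edge reversed) from s;
    the graph is strongly connected iff both searches reach all nodes."""
    rev = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)   # edge u -> v becomes v -> u
    nodes = set(adj)
    return reachable(adj, s) == nodes and reachable(rev, s) == nodes
```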
The representation of the new adjacency list, which keeps two linked lists for each node, one for incoming and one for outgoing edges, was interesting, as I did not totally understand it in class. I give this section an 8 on the interesting scale and a 9 for readability.
Section 3.6
Section 3.6 introduces the concept of a Directed Acyclic Graph (DAG), shows some of the basic logic behind them, and gives us an algorithm that can determine whether some graph is a DAG. The chapter first defines a DAG, then shows that if some graph G has a topological ordering, it is automatically a DAG, and vice versa. Then, the book introduces an algorithm for finding a topological ordering of a directed graph: find a node with no incoming edges, add it to the ordering and delete it from the graph, and then recursively call the algorithm on the rest of the graph. This algorithm, unfortunately, runs in O(n^2) time. However, the book then illustrates how a similar algorithm can come to the same conclusion in only O(n + m) time by more carefully keeping count of which nodes have incoming edges.
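The faster version maintains in-degree counts so that each edge is touched only once. A sketch of that bookkeeping, with names of my own choosing and the usual adjacency-list assumption:

```python
from collections import deque

def topological_order(adj):
    """Repeatedly remove a node with no incoming edges.  In-degree
    counts are updated as nodes are deleted, so every edge is handled
    exactly once: O(n + m).  Returns None if the graph has a cycle."""
    indegree = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indegree[v] += 1
    ready = deque(u for u in adj if indegree[u] == 0)  # no incoming edges
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in adj[u]:        # "delete" u by decrementing its targets
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    return order if len(order) == len(adj) else None
```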
I give this chapter a 9 on the interesting scale and a 7 for readability.
