====== Entry Four ======

====== Chapter 3.4 ======
**Testing Bipartiteness: A Breadth-First Search Application**
**The Proof**

(3.4) Let G be a connected graph, and let L<sub>1</sub>, L<sub>2</sub>, ... be the layers produced by BFS starting at node s. Then exactly one of the following holds: either no edge of G joins two nodes of the same layer and G is bipartite, or some edge of G joins two nodes of the same layer and G contains an odd-length cycle, so it is not bipartite.
Consider case 1, where there is no edge joining two nodes of the same layer. Then, by (3.3), every edge of G joins nodes in adjacent layers. Color the even-numbered layers red and the odd-numbered layers blue; every edge then has one red end and one blue end, so G is bipartite.

Consider case 2, where some edge joins two nodes of the same layer; we show G must contain an odd cycle. Suppose the two nodes in the same layer, x and y, are in layer L<sub>j</sub>, and let z be their lowest common ancestor in the BFS tree, say in layer L<sub>i</sub> with i < j. Then the cycle made of the tree path from z down to x, the edge (x, y), and the tree path from y back up to z has length (j-i) + 1 + (j-i) = 2(j-i)+1, which is odd, so G is not bipartite.

Readability:

====== Chapter 3.5 ======
**Connectivity in Directed Graphs**

In this section we see which of the ideas we applied to undirected graphs also apply to directed graphs.

A directed graph has directions on its edges. If an edge (u, v) has a direction that goes from u to v, the relationship is asymmetric: the edge lets us get from u to v but says nothing about getting from v back to u.

**How we represent Directed Graphs:**
  * We use the adjacency list representation, but modify it so that instead of each node having a single list of neighbors, each node has two lists: one with the nodes it points to and one with the nodes that point to it.

**Graph Search Algorithms**

BFS in a directed graph from a start node s computes the set of all nodes t such that s has a path to t (not the set of nodes t that have a path to s). It is very similar to BFS on an undirected graph and has the same O(m+n) runtime.
DFS also runs in linear time and computes the same set of nodes.

**Strong Connectivity**

A graph is strongly connected if for every pair of nodes u and v there is a path from u to v and a path from v to u. We then say u and v are //mutually reachable//.

**(3.16)**

If u and v are mutually reachable, and v and w are mutually reachable, then u and w are mutually reachable.

**Proof**
To go from u to w, first take a path from u to v and then a path from v to w. Since w and v are mutually reachable (and so are v and u), we can return along a path from w to v and then from v to u. So u and w are mutually reachable.

To test whether a graph G is strongly connected, we run a BFS starting from s in G and then another BFS starting from s in G<sup>rev</sup>, the graph obtained from G by reversing the direction of every edge. If either search fails to reach some node, the graph is not strongly connected; if both reach every node, it is.

**(3.17)**

For any two nodes s and t in a directed graph, their strong components are either identical or disjoint.

**Proof**
Suppose two nodes s and t are mutually reachable; we claim that the strong components containing s and t are identical. For any node v, if s and v are mutually reachable then t and v are mutually reachable (by 3.16), and vice versa. But if s and t are NOT mutually reachable, then there cannot be a node v in both strong components: if there were, s and t would be mutually reachable through v, a contradiction.

We can compute the strong components for all nodes in **O(m+n)** time.

I thought this section was pretty dry. It was short and easy to read but not that interesting to me. Readability:

====== Chapter 3.6 ======
**Directed Acyclic Graphs and Topological Ordering**

A directed acyclic graph (DAG) is a directed graph with no cycles.

We use DAGs to represent dependencies or precedence relations, for example a set of tasks where one is dependent on another or one needs to be performed before another can start. Example: course prerequisites for the CS major.
A DAG has a **topological ordering**, which is an ordering of its nodes as v<sub>1</sub>, v<sub>2</sub>, ..., v<sub>n</sub> so that for every edge (v<sub>i</sub>, v<sub>j</sub>) we have i < j; all edges point "forward" in the ordering.

**3.18**
If G has a topological ordering, then G is a DAG.

An algorithm for finding a topological ordering:

**3.19**
In every DAG, there is a node v with no incoming edges.

**Proof**
Suppose instead that G is a directed graph in which every node has at least one incoming edge; we show how to find a cycle. Pick any node v and follow an edge backward from v: since v has at least one incoming edge (u, v), we can move backward to u. Since u also has at least one incoming edge, we can move backward from u to some node x, and so on. After n+1 steps we will have visited some node twice, and the portion of the walk between those two visits is a cycle. So a DAG, which has no cycles, must have a node with no incoming edges.

Having a node with no incoming edges is all that is needed to produce a topological ordering.

**3.20** If G is a DAG, then G has a topological ordering.

**Algorithm**

  To compute a topological ordering of G:
  Find a node v with no incoming edges and order it first
  Delete v from G
  Recursively compute a topological ordering of G-{v} and append this order after v

The runtime of this algorithm is O(n<sup>2</sup>): there are n iterations, and finding a node with no incoming edges can take O(n) time in each iteration.
To achieve an O(m+n) algorithm we use the same idea, but maintain for each node a count of incoming edges from nodes still in the graph, and iteratively delete nodes whose count reaches zero.

This section was definitely more interesting to read than the last. I think DAGs are pretty cool. The readability was a 9 out of 10, even though some things felt a little vague the first time through.

====== Chapter 4.1 ======
**Interval Scheduling: The Greedy Algorithm Stays Ahead**

The idea for a greedy algorithm, in terms of the interval scheduling problem, is to use a simple rule to choose the first request, then reject all other requests that are not compatible with the chosen request. We then select the next request by the same rule and continue until we run out of requests.

**The Algorithm**

  Initially let R be the set of all requests, and let A be empty
  While R is not yet empty
    Choose a request i in R that has the smallest finishing time
    Add request i to A
    Delete all requests from R that are not compatible with request i
  EndWhile
  Return the set A as the set of accepted requests

We declare that the intervals in the set A returned by the algorithm are all compatible, i.e. A is a compatible set of requests.
We now want to show that the greedy solution A is optimal, so we show that |A| = |O| for an optimal set O. If i<sub>1</sub>, ..., i<sub>k</sub> and j<sub>1</sub>, ..., j<sub>m</sub> are the intervals of A and O in order, the greedy rule guarantees that f(i<sub>r</sub>) <= f(j<sub>r</sub>) for each index r: the greedy algorithm "stays ahead" of the optimal solution.

**4.3** The greedy algorithm returns an optimal set A.

We can make our algorithm run in O(n log n) time...
  * begin by sorting the n requests in order of finishing time
  * assume that f(i) <= f(j) when i < j
  * construct an array S[1...n] with the property that S[i] contains the value s(i)
  * select requests by processing intervals in order of increasing f(i)
  * the selection pass takes only O(n) time, so the sort dominates!!

**A Related Problem: Scheduling All Intervals**
In this problem we have many identical resources and have to schedule all of the requests using as few resources as possible. Example: scheduling classes in the smallest number of classrooms possible.
This is called the //interval partitioning problem//.
We define the depth of a set of intervals to be the maximum number of intervals that pass over any single point on the time-line.

**4.4**
In any instance of Interval Partitioning, the number of resources needed is at least the depth of the set of intervals.

**Algorithm**

  Sort the intervals by their start times, breaking ties arbitrarily
  Let I_1, I_2, ..., I_n denote the intervals in this order
  For j = 1, 2, 3, ..., n
    For each interval I_i that precedes I_j in sorted order and overlaps it
      Exclude the label of I_i from consideration for I_j
    Endfor
    If there is any label from {1, 2, ..., d} that has not been excluded then
      Assign a nonexcluded label to I_j
    Else
      Leave I_j unlabeled
    Endif
  Endfor

**4.5**
If we use the greedy algorithm above, every interval will be assigned a label, and no two overlapping intervals will receive the same label.

If you have d labels to use, then as you go through the intervals from left to right, assigning a label to each interval you encounter, you never reach a point where all of the labels are currently in use.

**4.6** The greedy algorithm above schedules every interval on a resource, using a number of resources equal to the depth of the set of intervals. This is the optimal number of resources needed.

This section was very interesting. I like scheduling a lot because I feel it is really applicable to problems that need to be solved every day, and it is something I could make use of. The readability of this section was a 9 out of 10, especially reading it after the class lecture.
