Consider case 2, where G contains an edge joining two nodes in the same BFS layer; we show G must then contain an odd cycle. Suppose the two nodes in the same layer, x and y, are in layer L<sub>j</sub>, and let z be their lowest common ancestor in the BFS tree, say in layer L<sub>i</sub> with i < j. The cycle formed by the z-x path down the tree, the edge (x, y), and the y-z path back up has length (j-i) + 1 + (j-i) = 2(j-i) + 1, which is odd, so G cannot be bipartite.
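This case analysis is exactly what a BFS-based bipartiteness test checks. Here is a small Python sketch of that idea (the function name and graph representation are my own choices):

```python
from collections import deque

def two_color(adj, n):
    """BFS-layer each component of an undirected graph; an edge joining two
    nodes in the same layer forces an odd cycle (case 2), so return None.
    Otherwise even/odd layers give a valid 2-coloring (case 1)."""
    layer = [None] * n
    for s in range(n):
        if layer[s] is not None:
            continue
        layer[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if layer[v] is None:
                    layer[v] = layer[u] + 1
                    q.append(v)
                elif layer[v] == layer[u]:
                    return None   # same-layer edge -> odd cycle -> not bipartite
    return [l % 2 for l in layer]
```

A triangle (odd cycle) returns None, while an even cycle gets a valid 2-coloring.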
+ | |||
+ | Readability: | ||
====== Chapter 3.5 ======

**Proof**
If two nodes s and t are mutually reachable, we claim that the strong components containing s and t are identical: for any node v, if s and v are mutually reachable, then t and v are mutually reachable as well (by 3.16). Conversely, if s and t are NOT mutually reachable, then there cannot be a node v in the strong component of each — if there were, then s and v would be mutually reachable and v and t would be mutually reachable, so s and t would be too, a contradiction.
+ | |||
+ | We can compute the strong components for all nodes in **O(m+n)** | ||
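One standard way to realize that O(m+n) bound is Kosaraju's two-pass DFS; this is a sketch of that method, not necessarily the exact procedure from the book:

```python
def strong_components(adj, n):
    """Kosaraju's algorithm: one DFS pass over G to get a reverse-postorder,
    then one pass over the reversed graph to sweep out each strong component.
    Both passes are linear, so the whole computation is O(m+n)."""
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)

    seen, order = [False] * n, []
    for s in range(n):                     # pass 1: finish-time order in G
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    break
            else:                          # u fully explored
                order.append(u)
                stack.pop()

    comp = [None] * n
    c = 0
    for s in reversed(order):              # pass 2: sweep the reverse graph
        if comp[s] is not None:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] is None:
                    comp[v] = c
                    stack.append(v)
        c += 1
    return comp
```

Nodes get the same component number exactly when they are mutually reachable.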
+ | |||
+ | I thought this section was pretty dry. It was short and easy to read but not that interesting to me. Readability: | ||
+ | |||
+ | ====== Chapter 3.6 ====== | ||
+ | **Directed Acyclic Graphs and Topological Ordering** | ||
+ | |||
+ | A directed acyclic graph (DAG) has no cycles. | ||
+ | |||
+ | We use DAGs to represent dependencies or precedence relations for example showing a set of tasks where one is dependent on the other or one needs to be performed before the other can. Example: Course prerequisite for CS major. | ||
A DAG has a **topological ordering**, which is an ordering of its nodes so that for every edge (v<sub>i</sub>, v<sub>j</sub>) we have i < j; in other words, all edges point "forward" in the ordering.
+ | |||
+ | **3.18** | ||
+ | If G has a topological ordering, then G is a DAG. | ||
+ | |||
+ | An algorithm for finding a topological ordering: | ||
+ | |||
+ | **3.19** | ||
+ | In every DAG, there is a node v with no incoming edges | ||
+ | |||
+ | **Proof** | ||
+ | There is a directed graph G where every node has at least one incoming edge. To show how to find a cycle, we pick a node v and follow the edges backward from v since it has at least one incoming edge we can move to node u. And from u backward to x etc. After n+1 steps we will have visited some node twice which shows there is a cycle | ||
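The backward-walk argument translates directly into code. In this sketch, `pred[v]` is a hypothetical representation supplying one incoming edge per node:

```python
def find_cycle(pred):
    """The proof of 3.19, run backward: pred[v] names one predecessor of v
    (we assume every node has an incoming edge, the contrapositive's setup).
    Walking backward must revisit a node within n+1 steps; the stretch of
    the walk between the two visits is a cycle."""
    walk, pos = [], {}
    v = 0
    while v not in pos:        # at most n+1 iterations by pigeonhole
        pos[v] = len(walk)
        walk.append(v)
        v = pred[v]
    return walk[pos[v]:]       # the nodes on the cycle, listed backward
```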
+ | |||
+ | Having a node with no incoming edges is all that is needed to produce a topological ordering. | ||
+ | **3.20** If G is a DAG, then G has a topological ordering. | ||
+ | |||
+ | **Algorithm** | ||
+ | |||
+ | To compute a topological ordering of G: | ||
+ | Find a node v with no incoming edges and order it first | ||
+ | | ||
+ | | ||
+ | |||
+ | The runtime of this algorithm is O(n< | ||
+ | To achieve an algorithm with O(m+n) we use the same one but iteratively delete nodes with no incoming edges. | ||
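The O(m+n) version is usually written as Kahn's algorithm: keep an in-degree count per node and a queue of nodes whose count has dropped to zero. A sketch:

```python
from collections import deque

def topological_order(adj, n):
    """Kahn's algorithm: track each node's count of incoming edges and keep
    a queue of nodes whose count is zero; 'deleting' a node just decrements
    its successors' counts. O(m+n). Returns None if G has a cycle."""
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    q = deque(u for u in range(n) if indeg[u] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == n else None
```

If some nodes never reach in-degree zero, they lie on or behind a cycle, so the early return doubles as a DAG check (3.18/3.20 in action).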
+ | |||
+ | This section was definitely more interesting to read than the last. I think DAG's are pretty cool. The readability was a 9 out of 10 even though some things felt a little vague the first time through. | ||
+ | |||
+ | ====== Chapter 4.1 ====== | ||
+ | **Interval Scheduling: The Greedy Algorithm Stays Ahead** | ||
+ | |||
+ | The idea for a greedy algorithm in the terms of the interval scheduling problem is create a rule to choose the first request then reject all other requests that are not compatible with the chosen request. Then select the next request and repeat. There are many ways we could choose a rule to pick the first request such as ordering by the earliest start time, the smallest interval time, or the job with fewest conflicts, or what we choose to use, finish time. | ||
+ | |||
+ | **The Algorithm** | ||
+ | |||
+ | | ||
+ | While R is not yet empty | ||
+ | | ||
+ | Add request i to A | ||
+ | | ||
+ | | ||
+ | | ||
+ | |||
+ | We declare that the intervals in the set returned by the algorithm are all comparable. The set is a compatible set of requests. | ||
+ | We now want to show that the greedy solution A, is optimal. So we show that A = O. The greedy rule guarantees that f(i< | ||
+ | |||
+ | **4.3** The greedy algorithm returns an optimal set A. | ||
+ | |||
+ | We can make our algorithm run in O(n log n)... | ||
+ | * begin by sorting the n requests in order of finishing time | ||
+ | * assume that f(i) =< f(j) when i<j | ||
+ | * construct an array S[1...n] with the property that S[i] contains the value s(i). | ||
+ | * select requests by processing intervals in order of increasing f(i). | ||
+ | * This takes O(n) time!! | ||
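Putting those steps together, here is a Python sketch of the O(n log n) algorithm (the (start, finish) representation and names are my own):

```python
def schedule(requests):
    """Greedy by earliest finish time over (start, finish) pairs.
    The sort is O(n log n); the selection sweep below is O(n)."""
    accepted = []
    last_finish = float("-inf")
    for s, f in sorted(requests, key=lambda r: r[1]):
        if s >= last_finish:           # compatible with all accepted so far
            accepted.append((s, f))
            last_finish = f
    return accepted
```

Only the finish time of the last accepted interval needs to be remembered, which is what makes the sweep linear.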
+ | |||
+ | **A Related Problem: Scheduling All Intervals** | ||
+ | In this problem we have identical resources and have to schedule all of the requests using as few resources as possible. Example: scheduling classes in the smallest number of classrooms possible. | ||
+ | This is called the //interval partitioning problem// | ||
+ | We can define the depth of a set of intervals to be the maximum number that pass over any single point on the time-line. | ||
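The depth itself can be computed in O(n log n) with an event sweep, which is handy for checking the bound in 4.4 below. A sketch, with names of my own choosing:

```python
def depth(intervals):
    """Depth = the maximum number of intervals covering any single point.
    Sweep the start/finish events in order, +1 at a start, -1 at a finish;
    a finish at time t sorts before a start at t, so intervals that merely
    touch at an endpoint do not count as overlapping."""
    events = []
    for s, f in intervals:
        events.append((s, 1))
        events.append((f, -1))
    events.sort()
    best = cur = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best
```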
+ | |||
+ | **4.4** | ||
+ | In any instance of Interval Partitioning, | ||
+ | |||
+ | **Algorithm** | ||
+ | |||
+ | Sort the intervals by their start times, breaking ties arbitrarily | ||
+ | Let I< | ||
+ | For j = 1, 2, 3, ..., n | ||
+ | For each interval I< | ||
+ | Exclude the label of I< | ||
+ | Endfor | ||
+ | If there is any label from {1, 2, ..., d} that has not been excluded then | ||
+ | | ||
+ | Else | ||
+ | Leave I< | ||
+ | Endif | ||
+ | | ||
+ | |||
+ | **4.5** | ||
+ | If we use the greedy algorithm above, every interval will be assigned a label, and no two overlapping intervals will receive the same label. | ||
+ | |||
+ | If you have d labels to use then as you go through the intervals from left to right you assign a label to each interval you encounter, you never reach a point where all of the labels are currently in use. | ||
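Rather than explicitly excluding labels, an equivalent way to implement the greedy labeling is to keep the active intervals in a min-heap keyed by finish time and reuse a label as soon as its interval ends. This heap variant is my own sketch, not the book's pseudocode:

```python
import heapq

def partition(intervals):
    """Greedy labeling in start-time order: if the earliest-finishing active
    interval has already ended, reuse its label; otherwise open a new one.
    Uses exactly depth-many labels, in O(n log n) overall.
    Returns (label per (start, finish) interval, number of labels used)."""
    labels = {}
    active = []                 # heap of (finish_time, label)
    used = 0
    for s, f in sorted(intervals):
        if active and active[0][0] <= s:      # earliest finisher is done
            _, lab = heapq.heappop(active)    # reuse its label
        else:
            lab = used                        # open a fresh label
            used += 1
        labels[(s, f)] = lab
        heapq.heappush(active, (f, lab))
    return labels, used
```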
+ | |||
+ | **4.6** The greedy algorithm above schedules every interval on a resources, using a number of resources equal to the depth of the set of intervals. This is the optimal number of resources needed. | ||
+ | |||
+ | This section was very interesting. I like scheduling a lot because I feel like it is really applicable to a problem that needs to be solved every day and something I could make use of. The readability of this section was a 9 out of 10 especially after reading it after the class lecture. | ||