Chapter 4
Greedy Algorithms
4.1 Interval Scheduling: The Greedy Algorithm Stays Ahead
Greedy rule that works
- accept the request that finishes first
- algorithm on page 118
Show that this algorithm is optimal
- need to show that this algorithm accepts a set of intervals of the same size as an optimal solution
- To prove this algorithm is optimal we need a lemma.
- Lemma 1: for all indices r ≤ k we have f(i_r) ≤ f(j_r). proof on p. 120
- now can prove the greedy algorithm is optimal. see page 121
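My attempt at the finish-first rule in Python (my own names, not the book's pseudocode on p. 118):

```python
def interval_scheduling(intervals):
    """Greedy interval scheduling: repeatedly accept the
    compatible request that finishes first."""
    accepted = []
    last_finish = float("-inf")
    # Sort requests by finish time, then scan once.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything accepted so far
            accepted.append((start, finish))
            last_finish = finish
    return accepted

# Example: three requests where the middle one overlaps both others.
chosen = interval_scheduling([(0, 3), (2, 5), (4, 7)])
```

Here greedy accepts (0, 3) and (4, 7) and skips (2, 5), which overlaps both.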
Scheduling all Intervals
- This is similar to the multi-resource classroom scheduling problem we did in class
- 4.4: in any instance of interval partitioning the number of resources needed is at least the depth of the set of intervals.
- algorithm on p. 124
- not sure how to code the “exclusion” idea in python.
- 4.5: if we use the greedy algorithm above, every interval will be assigned a label, and no two overlapping intervals will receive the same label.
- 4.6: the greedy algorithm above schedules every interval on a resource, using a number of resources equal to the depth of the set of intervals. This is the optimal number of resources needed.
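Trying to work out the "exclusion" idea from the algorithm on p. 124 in Python. This is a sketch with my own names; the set comprehension is the exclusion step:

```python
def interval_partitioning(intervals):
    """Greedy interval partitioning: sort by start time and give each
    interval the smallest label not "excluded" by an overlapping,
    already-labeled interval. Returns the labels and the depth used."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    label = {}
    depth = 0
    for idx in order:
        start, finish = intervals[idx]
        # Exclusion step: labels of earlier intervals that overlap this one.
        excluded = {label[j] for j in label
                    if intervals[j][1] > start and intervals[j][0] < finish}
        # Assign the smallest nonnegative label that was not excluded.
        lab = 0
        while lab in excluded:
            lab += 1
        label[idx] = lab
        depth = max(depth, lab + 1)
    return label, depth

# Three mutually overlapping intervals force depth 3; the last interval
# can reuse label 0 once the first interval has finished.
labels, depth = interval_partitioning([(0, 4), (1, 5), (2, 6), (5, 8)])
```

This matches 4.5: every interval gets a label, and overlapping intervals never share one.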
4.2 Scheduling to Minimize Lateness: An Exchange Argument
- We did a lot with this in class - talks about maximum lateness
- optimal greedy rule - Earliest Deadline First
- algorithm pgs. 127 - 128
- 4.7 there is an optimal schedule with no idle time
- 4.8 all schedules with no inversions and no idle time have the same maximum lateness
- 4.9 there is an optimal schedule that has no inversions and no idle time.
- swapping argument page 129 and page 130
- 4.10 the schedule A produced by the greedy algorithm has optimal maximum lateness L.
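Earliest Deadline First seems short enough to sketch in Python (my own names; jobs are (length, deadline) pairs and the schedule starts at time 0 with no idle time):

```python
def min_lateness(jobs):
    """Earliest Deadline First: run jobs in order of deadline with no
    idle time, and report the maximum lateness over all jobs.
    jobs: list of (processing_time, deadline) pairs."""
    t = 0
    max_late = 0
    schedule = []
    for length, deadline in sorted(jobs, key=lambda j: j[1]):
        t += length                      # job occupies [t - length, t]
        schedule.append((t - length, t))
        max_late = max(max_late, t - deadline)
    return schedule, max_late

# Two jobs: the second finishes at time 5 against a deadline of 4.
sched, L = min_lateness([(2, 2), (3, 4)])
```

By 4.10 this ordering achieves the optimal maximum lateness.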
4.3 Optimal Caching: A More Complex Exchange Argument
- A problem that focuses on a sequence of requests of a different form.
- I don't think we covered this in class and it's not listed on the schedule, so I'm not going to write on it. If you want me to, just let me know!
4.4 Shortest Paths in a Graph
Problem: find the shortest path from one point to another, or from one point to all the others.
- Dijkstra algorithm
- on page 138
- 4.14 consider the set S at any point in the algorithm's execution. For each u ∈ S, the path P_u is a shortest s-u path.
- proof on page 139
- used Dijkstra's for a project in high school before.
- 4.15 using a priority queue, Dijkstra's algorithm can be implemented on a graph with n nodes and m edges to run in O(m) time, plus the time for n ExtractMin and m ChangeKey operations.
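A sketch of Dijkstra's using Python's heapq as the priority queue. heapq has no ChangeKey, so this uses the common "lazy deletion" workaround instead: push duplicate entries and skip stale ones on pop. Graph format is my own assumption, an adjacency list of (neighbor, weight) pairs:

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra's algorithm with a binary heap.
    graph: {node: [(neighbor, weight), ...]}; returns shortest
    distances from s to every reachable node."""
    dist = {s: 0}
    done = set()                        # the explored set S from (4.14)
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)        # ExtractMin
        if u in done:
            continue                    # stale (superseded) entry; skip
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))   # stands in for ChangeKey
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
distances = dijkstra(g, "s")  # shortest s-b path goes through a
```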
4.5 The Minimum Spanning Tree Problem
- problem: find the cheapest set of edges that connects all the vertices.
- 4.16: Let T be a minimum-cost solution to the network design problem defined above. Then (V,T) is a tree.
- Three algorithms that work:
- Kruskal's: orders edges from low cost to high cost and keeps adding the min-cost edge that does not cause a cycle
- Prim's - pick a source node and add the node that can be added most cheaply to the tree. Continue to do this.
- Reverse-Delete algorithm - delete the highest-cost edges as long as doing so does not disconnect the graph.
- 4.17: assume that all edge costs are distinct. Let S be any subset of nodes that is neither empty nor equal to all of V, and let edge e = (v,w) be the minimum-cost edge with one end in S and the other in V - S. Then every minimum spanning tree contains the edge e.
- from pgs 146 - 148 the book discusses why the three algorithms above are optimal.
- we went over these in class. the book also discusses the cycle and cut properties we talked about
- pgs 149 - 150 talk about the implementation of Prim's algorithm
- important takeaway for Prim's: use a priority queue
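A Python sketch of Prim's with a priority queue (my own names; the graph format is an assumption, an undirected adjacency list with each edge listed in both directions):

```python
import heapq

def prim_mst(graph, s):
    """Prim's algorithm: starting from s, repeatedly attach the node
    that can be added to the tree most cheaply.
    graph: {node: [(neighbor, cost), ...]}; returns the tree edges."""
    in_tree = {s}
    tree_edges = []
    pq = [(cost, s, v) for v, cost in graph[s]]
    heapq.heapify(pq)
    while pq:
        cost, u, v = heapq.heappop(pq)  # cheapest edge leaving the tree
        if v in in_tree:
            continue                    # both ends already in the tree
        in_tree.add(v)
        tree_edges.append((u, v, cost))
        for w, c in graph[v]:
            if w not in in_tree:
                heapq.heappush(pq, (c, v, w))
    return tree_edges

g = {"a": [("b", 1), ("c", 3)],
     "b": [("a", 1), ("c", 2)],
     "c": [("a", 3), ("b", 2)]}
mst = prim_mst(g, "a")  # the cost-3 edge (a,c) is never used
```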
4.6 Implementing Kruskal's Algorithm: The Union-Find Data Structure
- Union-Find data structure:
- given a node u, the operation Find(u) will return the name of the set containing u.
- Can be used to test if two nodes are in the same set.
- Union(A, B) merges two sets. Use Union-Find to maintain the connected components of a graph G = (V,E)
- more info on p 152.
- 4.23 consider the array implementation of the Union-Find data structure for some set S of size n, where unions keep the name of the larger set. The Find operation takes O(1) time, MakeUnionFind(S) takes O(n) time, and any sequence of k Union operations takes at most O(k log k) time.
- pf on page 153
- another implementation of Union-Find is possible.
- 4.24 Consider the above pointer-based implementation of the Union-Find data structure for some set S of size n, where unions keep the name of the larger set. A Union operation takes O(1) time, MakeUnionFind(S) takes O(n) time, and a Find operation takes O(log n) time.
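My attempt at the pointer-based Union-Find in Python, with union by size and no path compression, to match the O(log n) Find bound in 4.24 (names are my own):

```python
class UnionFind:
    """Pointer-based Union-Find with union by size: Union does O(1)
    work, Find follows parent pointers up to the set's name."""
    def __init__(self, elements):       # MakeUnionFind(S): O(n)
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, u):
        while self.parent[u] != u:      # walk up to the root (set name)
            u = self.parent[u]
        return u

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return                      # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra             # keep the name of the larger set
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

uf = UnionFind("abcd")
uf.union("a", "b")
uf.union("c", "d")
same = uf.find("a") == uf.find("b")   # a and b are now one component
```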
- Definitely still have some questions about the Union-Find data structure, but I am going to go review the slides I missed.
- implementing Kruskal's Algorithm
- sort the edges by cost, then use union-find to maintain the connected components of (V,T) more details on p. 157
- 4.25 Kruskal's algorithm can be implemented on a graph with n nodes and m edges to run in O(m log n) time.
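Putting it together, a Kruskal sketch in Python (my own names; the Union-Find is inlined as plain arrays so the example stands alone):

```python
def kruskal(n, edges):
    """Kruskal's algorithm: sort edges by cost, use Union-Find to skip
    any edge whose endpoints are already connected (it would form a cycle).
    n: number of nodes 0..n-1; edges: list of (cost, u, v) triples."""
    parent = list(range(n))
    size = [1] * n

    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u

    tree = []
    for cost, u, v in sorted(edges):    # low cost to high cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: no cycle
            if size[ru] < size[rv]:
                ru, rv = rv, ru         # union by size
            parent[rv] = ru
            size[ru] += size[rv]
            tree.append((cost, u, v))
    return tree

# 4 nodes: the cost-3 edge (0,2) is skipped because it closes a cycle.
mst = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
```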