

Chapter 4: Greedy Algorithms

A greedy algorithm builds a solution by making the locally optimal choice at each step, in the hope that these local choices add up to a globally optimal solution. The easiest example of this is making change using the fewest possible coins. There are several ways to prove that a greedy solution is optimal; the two this chapter discusses are the "greedy stays ahead" argument and the exchange argument. The motivation behind greedy algorithms is that they solve non-trivial problems optimally and can be applied to many interesting problems.
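As a quick illustration of the coin-changing example, here is a minimal Python sketch (the function name greedy_change and the sample denominations are my own, chosen for illustration). Note that this greedy rule is optimal for canonical coin systems such as US coins, but it can fail for arbitrary denominations.

<code python>
def greedy_change(amount, denominations):
    """Make change greedily: always take the largest coin that still fits.

    Optimal for canonical coin systems such as US coins (1, 5, 10, 25),
    but not for arbitrary denominations.
    """
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# greedy_change(63, [1, 5, 10, 25]) -> [25, 25, 10, 1, 1, 1]
</code>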

Section 4.1: Interval Scheduling: The Greedy Algorithm Stays Ahead

In the Interval Scheduling problem we have n intervals, each with a specific start and finish time. We want to accept as many of these n intervals as possible, with the constraint that the accepted intervals cannot overlap, i.e. they must be compatible. The section traces through several ways we could order the intervals before running the greedy algorithm, providing counterexamples for the orderings that are not optimal. The optimal ordering turns out to be by finish time: we always choose the job that finishes first, which maximizes the time we have left to perform other jobs. If we order the intervals in this way, the algorithm (text page 118) returns an optimal set of intervals.

To prove that the greedy solution is optimal, we use the "greedy stays ahead" argument: comparing the greedy selection against any optimal set of intervals, the r-th interval the greedy algorithm chooses finishes no later than the r-th interval of the optimal set. From there we can see that the optimal solution cannot contain an interval that the greedy solution would not have had room to include, so the two sets are the same size. The greedy algorithm runs in O(n log n) time: sorting the intervals takes O(n log n), and the single pass through them takes O(n).

In addition to solving Interval Scheduling, the greedy approach also solves the Interval Partitioning problem. Here we have a set of intervals (jobs) that we want to fit into the smallest number of resources possible, e.g. a collection of lectures that we want to fit into the smallest number of classrooms. The depth of the set of intervals immediately gives a lower bound on the number of resources: if 3 intervals overlap at some point in time, we know we will need at least 3 resources. We can then design a greedy algorithm (text page 124) to place the intervals into the minimum number of resources; analyzing it shows that the optimal number of resources is exactly the depth of the set of intervals. A sketch of both algorithms appears below.
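The following Python sketch shows one possible implementation of each algorithm, not the textbook's exact pseudocode: schedule_intervals follows the earliest-finish-time rule from page 118, and partition_intervals realizes the page-124 idea with a min-heap of finish times (the function names, the (start, finish) tuple representation, and the heap are my assumptions).

<code python>
import heapq

def schedule_intervals(intervals):
    """Earliest-finish-time greedy for Interval Scheduling.

    `intervals` is a list of (start, finish) pairs.  Sorting by finish
    time and taking each compatible interval yields a maximum-size set
    of non-overlapping intervals in O(n log n) time.
    """
    chosen = []
    last_finish = float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with all chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

def partition_intervals(intervals):
    """Greedy Interval Partitioning using a min-heap of finish times.

    Intervals are processed in order of start time; each one reuses a
    resource that has already freed up if possible, and opens a new
    resource otherwise.  The number of resources used equals the depth
    of the interval set.
    """
    free_times = []                       # min-heap of (finish_time, label)
    num_resources = 0
    assignment = []                       # ((start, finish), resource label)
    for start, finish in sorted(intervals):
        if free_times and free_times[0][0] <= start:
            _, label = heapq.heappop(free_times)   # reuse an idle resource
        else:
            label = num_resources                  # open a new resource
            num_resources += 1
        heapq.heappush(free_times, (finish, label))
        assignment.append(((start, finish), label))
    return assignment
</code>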

This was a pretty straightforward section. I would give it a 9/10 because it provided solutions for some cool problems and explained everything in a clear manner.

Section 4.2: Scheduling to Minimize Lateness: An Exchange Argument

The problem discussed in this section is similar to scheduling all intervals, except that each request now has a flexible start time: each job has a specific deadline and a continuous length of time for which it must run. The goal is to minimize the maximum lateness, where a job's lateness is the amount of time by which it finishes after its deadline. When designing the algorithm, we sort the jobs in increasing order of their deadlines and then schedule them in this order, so the job with the earliest deadline runs first. This algorithm, outlined on page 127 of the text, always returns an optimal solution, even if it does not seem like it at first. The schedule it produces has no idle time, i.e. no gaps between jobs during which the machine is not processing anything. We prove that this greedy schedule is optimal by using an exchange argument: the greedy schedule has no inversions (places in the schedule where a job with a later deadline is scheduled before a job with an earlier deadline), and swapping an inverted adjacent pair in any optimal schedule never increases the maximum lateness, so an optimal schedule can be transformed into the greedy one without getting worse. The problem can also be extended to the case where each job has a deadline, a required run time, and an earliest possible start time. The motivation behind this problem could be scheduling jobs for a computer to process or documents for a printer to print. The section did not really discuss the running time of the algorithm (the sort dominates, so it is O(n log n)). This section was straightforward and helped to reinforce what we had discussed in class. I would give the section an 8/10.
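As a supplement, here is a minimal Python sketch of the earliest-deadline-first rule (the function name minimize_lateness and the (length, deadline) representation are my own; the version on page 127 of the text is written differently).

<code python>
def minimize_lateness(jobs):
    """Schedule jobs back to back in increasing order of deadline.

    `jobs` is a list of (length, deadline) pairs.  Running them with no
    idle time in deadline order minimizes the maximum lateness, where a
    job's lateness is max(0, finish_time - deadline).
    """
    t = 0                    # current time; no idle time is ever inserted
    schedule = []            # (start, finish) for each job, in run order
    max_lateness = 0
    for length, deadline in sorted(jobs, key=lambda job: job[1]):
        start, t = t, t + length
        schedule.append((start, t))
        max_lateness = max(max_lateness, t - deadline)
    return schedule, max_lateness
</code>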

Section 4.4: Shortest Paths in a Graph

The problem of finding shortest paths in a graph can also be solved with a greedy algorithm. We use the algorithm on directed graphs in which each edge has a specific nonnegative length. The algorithm, designed by Edsger Dijkstra, maintains an explored set S and at each step adds the node outside S that can be reached by the shortest path. The algorithm, on page 138 of the text, seems a little confusing at points, but does manage to produce an optimal solution every time. We just have to be careful not to grow the set S by the wrong node, which would give us an incorrect shortest path; the invariant is that at any point in the algorithm, for every node already in S, the path we have chosen to it is a shortest path. This algorithm does not work when edges can have negative lengths. Dijkstra's algorithm can be viewed as an extension of breadth-first search to weighted graphs.
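Here is a minimal Python sketch of Dijkstra's algorithm using a binary heap, which is one common implementation choice (the adjacency-dict graph representation and the function name are assumptions; the text on page 138 presents the algorithm differently).

<code python>
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` on a directed graph with
    nonnegative edge lengths.

    `graph` maps each node to a list of (neighbor, length) pairs.
    Nodes enter the explored set S in increasing order of distance,
    which is why negative edge lengths break the algorithm.
    """
    dist = {source: 0}
    explored = set()                  # the set S from the text
    frontier = [(0, source)]          # min-heap of (distance, node)
    while frontier:
        d, u = heapq.heappop(frontier)
        if u in explored:
            continue
        explored.add(u)               # d is now the final distance to u
        for v, length in graph.get(u, []):
            if v not in explored and d + length < dist.get(v, float('inf')):
                dist[v] = d + length
                heapq.heappush(frontier, (dist[v], v))
    return dist
</code>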

Section 4.5: The Minimum Spanning Tree Problem

Section 4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure
