courses:cs211:winter2018:journals:holmesr:section_6.1 · last modified 2018/03/27 03:34 by holmesr
====== Chapter 6 Front Matter and Section 6.1: Weighted Interval Scheduling, a Recursive Approach ======

The basic idea behind dynamic programming is similar to that of divide and conquer and opposite to that of greedy: dynamic programming divides the problem into subproblems and then combines the solutions to those subproblems into a solution to the larger problem. This might suggest that dynamic programming could take as long as a brute-force search, but in fact we will never need to examine every solution to the problem explicitly.

===== 6.1 A Recursive Approach to Weighted Interval Scheduling =====

First, some notation to assist the discussion. We will call p(j) the largest index i < j such that intervals i and j are disjoint. This is to say that i is the rightmost interval that ends before j begins. We can also consider an optimal schedule, the precise contents of which are unknown, that we will call O. If interval n is included in O, then no interval between p(n) and n can also be included in O, because each of those intervals overlaps n. Additionally, if O includes n, then O must also include an optimal solution for the intervals {1,...,p(n)}: if it did not, we could simply replace that part of O with an optimal solution for {1,...,p(n)} without fearing a loss of optimality or an overlap with n.

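As a small illustration of how p(j) can be computed, here is a sketch in Python; the interval data and all names are my own hypothetical example, not from the book. It assumes the intervals are already sorted by finish time and uses the convention that p(j) = 0 means no interval before j is disjoint from j:

```python
import bisect

# Hypothetical intervals (start, finish, value), sorted by finish time.
intervals = [(1, 4, 2), (3, 5, 4), (0, 6, 4), (5, 7, 7), (3, 8, 2), (6, 10, 1)]

def compute_p(intervals):
    """p[j] = largest i < j with finish(i) <= start(j); 0 if none.

    Intervals are 1-indexed to match the text, so p[0] is a dummy."""
    finishes = [f for (_, f, _) in intervals]
    p = [0] * (len(intervals) + 1)
    for j, (s, _, _) in enumerate(intervals, start=1):
        # Count of intervals finishing no later than s; since finishes are
        # sorted and finish(i) <= s < finish(j), every such i comes before j.
        p[j] = bisect.bisect_right(finishes, s)
    return p

print(compute_p(intervals))  # → [0, 0, 0, 0, 2, 0, 3]
```

Because the finish times are sorted, each lookup is a binary search, so all n values of p(j) come out in O(n log n) time.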
Using that principle, we can state that an interval j should be included in the solution only if its value plus the value of the optimal solution for {1,...,p(j)} is at least the value of the optimal solution for {1,...,j-1}; that is, OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)). This statement obviously lends itself to recursion, since one must calculate the optimal value for p(j) and for j-1. It is also easy to see that a direct recursive implementation makes many redundant recursive calls on the same subproblems, which blows the running time up to exponential.

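The recurrence can be sketched directly as a naive recursion; the 1-indexed values and p arrays below are hypothetical data of my own, not from the book:

```python
# Hypothetical 1-indexed data: index 0 is a dummy so index j matches the text.
values = [0, 2, 4, 4, 7, 2, 1]   # v_1 .. v_6
p = [0, 0, 0, 0, 2, 0, 3]        # p(1) .. p(6)

def opt(j):
    """Value of an optimal schedule using intervals 1..j (naive recursion)."""
    if j == 0:
        return 0
    # Either take interval j (plus the best on 1..p(j)) or skip it.
    return max(values[j] + opt(p[j]), opt(j - 1))

print(opt(6))  # → 11
```

The redundancy is visible here: each call branches into opt(p[j]) and opt(j - 1), and the same subproblem j can be reached along many different branches, so without caching the tree of calls can grow exponentially in n.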
The redundant recursive calling can be solved by memoization, which creates an array M that stores the result of the function's call on j, so that if it is called on j again the stored value is simply looked up. The memoization array helps the algorithm run in linear time instead of exponential (after the initial sort by finish time). It also lets the algorithm return the solution itself, in addition to its value, in linear time: rather than storing a full set of intervals at each entry of M, we fill M first and then trace back through it, including interval j whenever v_j + M[p(j)] >= M[j-1] (and jumping to p(j)), and otherwise moving on to j-1.
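The memoized recursion and the linear-time traceback can be sketched as follows, again with hypothetical 1-indexed data (`m_opt` and `find_solution` are my own names, not the book's):

```python
# Hypothetical 1-indexed data: index 0 is a dummy.
values = [0, 2, 4, 4, 7, 2, 1]   # v_1 .. v_6
p = [0, 0, 0, 0, 2, 0, 3]        # p(1) .. p(6)
n = 6

M = [None] * (n + 1)
M[0] = 0

def m_opt(j):
    """Memoized OPT: each entry of M is computed at most once."""
    if M[j] is None:
        M[j] = max(values[j] + m_opt(p[j]), m_opt(j - 1))
    return M[j]

def find_solution(j):
    """After M is filled, trace back to recover the chosen intervals."""
    chosen = []
    while j > 0:
        if values[j] + M[p[j]] >= M[j - 1]:   # taking j is at least as good
            chosen.append(j)
            j = p[j]
        else:
            j -= 1
    return chosen[::-1]

print(m_opt(n))          # → 11
print(find_solution(n))  # → [2, 4]
```

Each M[j] is computed once and the traceback visits each index at most once, so both passes are linear, which is exactly why we store only values in M rather than whole interval sets.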