=====Chapter 6: Dynamic Programming =====

Dynamic Programming contrasts with the greedy approach and is closely related to divide and conquer. It is a good tool for problems that have no known greedy solution and for which naive divide and conquer yields only exponential-time algorithms. Dynamic Programming decomposes a problem into a collection of sub-problems and then uses their solutions to solve progressively larger sub-problems. Overall, it is more powerful than the other strategies we have tried so far.

=== Section 6.1: Weighted Interval Scheduling: A Recursive Procedure ===

First, we apply DP to the weighted interval scheduling problem. This is like the old scheduling problem except that intervals now carry differing values, which renders the old greedy rule (choosing the earliest finish time) obsolete; no greedy algorithm is known to solve the weighted version. We take a recursive approach, although an iterative approach would also work. The book derives a recurrence, OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)), where p(j) is the last interval compatible with interval j. It simply means that for each interval we either take it, adding its value and recursing on the compatible prefix, or skip it and recurse on the previous interval. Unfortunately, the plain recursive algorithm is still exponential, since it branches like the naive Fibonacci computation. Luckily, this poor efficiency is caused by redundancy: the same sub-problems are solved over and over again. The solution is memoization: storing the result of each sub-problem the first time it is computed. We keep all of our memoized values in an array and look them up rather than recomputing. Amazingly, the memoized algorithm runs in only O(n) time, provided the intervals are sorted by finish time before the algorithm is run. We derive this running time from the fact that each recursive call either returns a stored value in constant time or fills in one of the at most n entries of the memoization array. This section was often difficult to read given how much math and abstraction were combined, so I will rate its ease of reading only 5/10.
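The memoized recursion above can be sketched as follows. This is my own minimal version, not the book's pseudocode: I assume intervals are given as (start, finish, value) tuples, and I compute p(j) by binary search over finish times.

```python
from bisect import bisect_right

def weighted_interval_max(intervals):
    """Memoized recursion for weighted interval scheduling.

    `intervals` is a list of (start, finish, value) tuples; the names
    and input layout are my own assumptions, not the book's pseudocode.
    """
    # The algorithm requires intervals sorted by finish time.
    intervals = sorted(intervals, key=lambda t: t[1])
    finishes = [f for _, f, _ in intervals]

    # p[j-1] = number of intervals that finish no later than interval j
    # starts, i.e. the index of the last interval compatible with j.
    p = [bisect_right(finishes, s) for s, _, _ in intervals]

    memo = {}

    def opt(j):
        # opt(j) = best total value using only the first j intervals.
        if j == 0:
            return 0
        if j not in memo:
            _, _, v = intervals[j - 1]
            # Either skip interval j, or take it plus the best value
            # over the compatible prefix p(j).
            memo[j] = max(opt(j - 1), v + opt(p[j - 1]))
        return memo[j]

    return opt(len(intervals))
```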

=== Section 6.2: Principles of Dynamic Programming: Memoization or Iteration over Sub-problems ===

Here, we focus on the iterative way of applying this algorithm. We originally used a recursive style, but both styles achieve the same results for this type of dynamic programming. The iterative algorithm iterates n times, each pass adding one value to our memoized array m. The entry m[i] holds the maximum total value achievable using only the first i intervals, for i from 0 up to n, where the intervals are numbered in order of finish time. We compute each entry by taking the better of the two choices in the recurrence: skip the interval, or take its value plus the optimum over the compatible prefix. I really liked the diagram in this section. It helped me grasp key concepts which I had not yet understood when reading Section 6.1. This gets an 8/10.
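A sketch of the iterative (bottom-up) version, filling the same array m the recursion would fill. As before, the (start, finish, value) tuple layout and all names are my own assumptions:

```python
from bisect import bisect_right

def weighted_interval_iterative(intervals):
    """Bottom-up weighted interval scheduling (illustrative sketch)."""
    intervals = sorted(intervals, key=lambda t: t[1])  # by finish time
    finishes = [f for _, f, _ in intervals]
    # p[j-1] = index of the last interval compatible with interval j.
    p = [bisect_right(finishes, s) for s, _, _ in intervals]

    n = len(intervals)
    m = [0] * (n + 1)  # m[i] = optimum over the first i intervals
    for j in range(1, n + 1):
        value = intervals[j - 1][2]
        # Skip interval j, or take it plus the best compatible prefix.
        m[j] = max(m[j - 1], value + m[p[j - 1]])
    return m[n]
```

Each of the n iterations does constant work, which is where the O(n) bound (after sorting) comes from.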


=== Section 6.3: Segmented Least Squares: Multi-way Choices ===

Here we focus on the line-of-best-fit problem. For a given line through a set of points in the plane, we define the error as the sum of the squared distances between the line and each point. The interesting thing here is that we allow more than one line, partitioning the points into contiguous segments and fitting a line to each. However, additional lines incur a penalty: the total cost is the sum of the errors of the segments plus a constant C multiplied by the number of segments, and we attempt to minimize this total cost. We can choose the constant C to encourage or discourage additional lines. The exact algorithm is very abstract. We first compute and memoize the least-squares error of every contiguous run of points, which takes O(n^3) time, using iteration somewhat similar to that of Section 6.2. A second recurrence then decides where the segments should break, which takes O(n^2) time. This section was hard for me to an extent similar to 6.1, because it gave complex mathematical formulas without explaining them adequately. Further, the algorithms were very abstract, forcing me to make assumptions and constantly decipher them. Thus, my score for this section is 5/10.
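The two stages can be sketched as below. This is my own illustrative version, assuming points sorted by x-coordinate; the single-line error uses the standard closed-form least-squares slope and intercept, and the outer loop is the recurrence OPT(j) = min over i of (e(i, j) + C + OPT(i-1)).

```python
def segmented_least_squares(points, C):
    """Minimum total cost (segment errors + C per segment) sketch.

    `points`: list of (x, y) tuples sorted by x; `C`: per-segment
    penalty. Names and layout are my own assumptions, not the book's.
    """
    def sq_error(i, j):
        # Squared error of the best single line through points i..j
        # (0-indexed, inclusive), via the closed-form least-squares fit.
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx
        if denom == 0:
            # Single point (or identical x's): error about the mean y.
            mean = sy / m
            return sum((y - mean) ** 2 for _, y in pts)
        a = (m * sxy - sx * sy) / denom   # slope
        b = (sy - a * sx) / m             # intercept
        return sum((y - (a * x + b)) ** 2 for x, y in pts)

    n = len(points)
    # opt[j] = minimum cost for the first j points; the last segment
    # covers points i..j-1 and costs its error plus the penalty C.
    opt = [0.0] * (n + 1)
    for j in range(1, n + 1):
        opt[j] = min(sq_error(i, j - 1) + C + opt[i] for i in range(j))
    return opt[n]
```

Recomputing sq_error inside the loop as written is what gives the O(n^3) flavor; precomputing all the errors first, as the book does, separates that cost from the O(n^2) recurrence.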