courses:cs211:winter2018:journals:cantrella:chapter_4 (cantrella, last edited 2018/03/12)
===== Section 4.1 =====
Section 4.1 covers the introduction to greedy algorithms using the basic "interval scheduling" problem. The first algorithm selects as many mutually compatible intervals as possible by always choosing the remaining interval with the earliest finish time.

The second algorithm schedules all intervals in an optimal fashion, using no more resources than the depth of the set of intervals. It works by going through each interval and setting aside any other intervals that conflict with it; then, after excluding all of the conflicts, it stitches the remaining intervals together so that the fewest resources have to be used.

I give this chapter a 6 for readability, since the proofs were sometimes hard to understand, and a 7 on the interesting scale.
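The resource-scheduling idea can be sketched in Python. This is the standard interval-partitioning greedy (sort by start time, reuse the resource that frees up earliest), which achieves the depth bound; the interval times below are made up for illustration.

```python
import heapq

def partition_intervals(intervals):
    """Assign each (start, finish) interval to a resource so that
    overlapping intervals never share one; uses no more resources
    than the depth of the interval set."""
    free_queue = []          # min-heap of (finish_time, resource_id)
    assignment = {}
    next_id = 0
    for start, finish in sorted(intervals):
        if free_queue and free_queue[0][0] <= start:
            # reuse the resource that frees up earliest
            _, rid = heapq.heappop(free_queue)
        else:
            rid = next_id    # every resource is busy: open a new one
            next_id += 1
        assignment[(start, finish)] = rid
        heapq.heappush(free_queue, (finish, rid))
    return assignment, next_id

# three mutually overlapping lectures need three rooms;
# the fourth lecture can reuse the room freed at time 3
demo = [(0, 3), (1, 4), (2, 5), (3, 6)]
assignment, rooms = partition_intervals(demo)
```

The number of resources opened equals the depth of the instance, matching the bound the section proves.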
===== Section 4.2 =====
Section 4.2 covers the design of an algorithm that minimizes maximum lateness in job scheduling. The algorithm itself is surprisingly simple: it arranges jobs in order of their deadlines, earliest first. Even though this algorithm completely ignores the other half of the data, the length of each job, the rest of the section shows through various proofs that the algorithm always produces an optimal solution.

This section was a much tougher read than many of the other sections, since so much of it was taken up by the proofs, which I personally find much harder to read. As such, I give this section a 4 on readability and a 6 on the interesting scale.
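The earliest-deadline-first rule can be sketched in a few lines; the job lengths and deadlines below are invented for the example.

```python
def schedule_min_lateness(jobs):
    """jobs: list of (length, deadline) pairs. Run jobs back to back
    in order of increasing deadline; return (order, max lateness)."""
    order = sorted(jobs, key=lambda job: job[1])  # earliest deadline first
    t, max_late = 0, 0
    for length, deadline in order:
        t += length                               # this job finishes at time t
        max_late = max(max_late, t - deadline)    # lateness is finish - deadline
    return order, max_late

# note the sort never looks at job lengths, only deadlines
jobs = [(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)]
order, max_late = schedule_min_lateness(jobs)
```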
===== Section 4.4 =====
This section covers Dijkstra's algorithm, a greedy algorithm for finding the shortest paths from a start node to every other node in a graph with nonnegative edge lengths.

I give this section a 7 for readability and a 7 on the interesting scale.
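A minimal sketch of Dijkstra's algorithm using a binary heap, assuming an adjacency-list graph; the graph and its weights are made up for illustration.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with nonnegative
    weights. Returns shortest-path distances from source."""
    dist = {source: 0}
    pq = [(0, source)]                    # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a shorter path to v
                heapq.heappush(pq, (nd, v))
    return dist

# small made-up graph: the cheapest s-to-t route goes s -> a -> b -> t
g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 3)]}
dist = dijkstra(g, "s")
```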
===== Section 4.5 =====
Section 4.5 covers three ways to create a minimum spanning tree from a connected graph. The three algorithms used are //Kruskal's Algorithm//, //Prim's Algorithm//, and the //Reverse-Delete Algorithm//.

Although this chapter was informative,
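One of the three, Prim's algorithm, can be sketched with a priority queue: grow the tree from an arbitrary node by repeatedly adding the cheapest edge leaving it. The weighted graph below is made up for illustration.

```python
import heapq

def prim_mst(graph):
    """graph: {node: [(neighbor, weight), ...]}, undirected and
    connected (each edge listed both ways). Returns the total
    weight of a minimum spanning tree."""
    start = next(iter(graph))
    in_tree = {start}
    pq = [(w, v) for v, w in graph[start]]   # candidate edges out of the tree
    heapq.heapify(pq)
    total = 0
    while pq and len(in_tree) < len(graph):
        w, v = heapq.heappop(pq)
        if v in in_tree:
            continue                         # edge leads back into the tree
        in_tree.add(v)
        total += w                           # cheapest edge leaving the tree
        for u, wu in graph[v]:
            if u not in in_tree:
                heapq.heappush(pq, (wu, u))
    return total

# made-up graph: the MST uses edges a-b (1), b-c (2), c-d (3)
g = {
    "a": [("b", 1), ("c", 4)],
    "b": [("a", 1), ("c", 2), ("d", 5)],
    "c": [("a", 4), ("b", 2), ("d", 3)],
    "d": [("b", 5), ("c", 3)],
}
```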
===== Section 4.6 =====
Section 4.6 delves into the implementation of Kruskal's algorithm using the Union-Find data structure, which keeps track of the connected components formed as edges are added.

The in-depth implementation of Kruskal's algorithm
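A sketch of the Union-Find structure with the two standard optimizations, union by size and path compression; the element names are arbitrary.

```python
class UnionFind:
    """Union-Find with union by size and path compression, the
    optimizations that make Kruskal's algorithm fast."""

    def __init__(self, elements):
        self.parent = {e: e for e in elements}
        self.size = {e: 1 for e in elements}

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:         # path compression pass:
            self.parent[x], x = root, self.parent[x]  # point nodes at the root
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                      # already in one component
        if self.size[ra] < self.size[rb]:     # attach smaller tree under larger
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

uf = UnionFind("abcd")
uf.union("a", "b")
uf.union("c", "d")
```

Kruskal's algorithm uses `find` to test whether an edge would create a cycle and `union` to merge components when an edge is accepted.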
===== Section 4.7 =====
Section 4.7 covers an implementation of //Kruskal's Algorithm// applied to clustering: to divide a set of objects into //k// groups of maximum spacing, the algorithm merges the closest pair of clusters until only //k// clusters remain.

I give this section a 6 on the readability scale and a 7 on the interesting scale.
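The maximum-spacing clustering idea can be sketched by stopping Kruskal's algorithm early; the points and distance function below are made up for the example.

```python
def max_spacing(points, dist, k):
    """Greedy single-link clustering: run Kruskal's algorithm but stop
    once k clusters remain. Returns the spacing, i.e. the smallest
    distance between two points in different clusters."""
    parent = {p: p for p in points}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    edges = sorted((dist(p, q), p, q)
                   for i, p in enumerate(points) for q in points[i + 1:])
    clusters = len(points)
    for d, p, q in edges:                    # cheapest edges first
        rp, rq = find(p), find(q)
        if rp == rq:
            continue                         # same cluster already
        if clusters == k:
            return d                         # next cross-cluster edge = spacing
        parent[rp] = rq                      # merge the two closest clusters
        clusters -= 1
    return None

# made-up 1-D points forming two obvious groups: {1, 2, 3} and {10, 11}
pts = [1, 2, 3, 10, 11]
spacing = max_spacing(pts, lambda a, b: abs(a - b), 2)
```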
===== Section 4.8 =====
Section 4.8 covered the theory and implementation of data compression using Huffman codes. The idea behind this is that when data is sent across a network, it should be in a compressed form so that the transmission is efficient. Huffman codes achieve this by giving each letter a code whose length depends on its frequency. The algorithm repeatedly finds the two lowest-frequency letters and merges them together into a new letter. Using a priority queue, the algorithm operates in O(//k// log //k//) time, where //k// is the number of letters in the alphabet.

I give this section a 7 on the interesting scale and a 6 for readability.
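The merging step can be sketched with a priority queue; the letter frequencies below are invented for the example.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code from {letter: frequency}. Repeatedly merge
    the two lowest-frequency subtrees; O(k log k) for k letters."""
    tie = count()   # unique tie-breaker so heap tuples always compare
    heap = [(f, next(tie), {ch: ""}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]                        # {letter: bit string}

# made-up frequencies: frequent letters get shorter codewords
code = huffman_code({"a": 0.45, "b": 0.25, "c": 0.20, "d": 0.10})
```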
