I give this chapter a 6 for readability, since the proofs were sometimes hard to understand, and a 7 on the interesting scale.
===== Section 4.2 =====
Section 4.2 covers the design of a greedy algorithm that minimizes maximum lateness in job scheduling. The algorithm itself is surprisingly simple: it arranges jobs in increasing order of their deadlines. Even though this algorithm completely ignores the other half of the data, the length of each job, the rest of the section shows through various proofs that it always produces an optimal schedule.

This section was a much tougher read than many of the other sections, since so much of it was taken up by the proofs, which I personally find much harder to follow. As such, I give this section a 4 on readability and a 6 on the interesting scale.
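The deadline-ordered greedy rule described above can be sketched in a few lines. This is a minimal illustration, not code from the book; the `(length, deadline)` job format and the function name are my own assumptions.

```python
# Sketch of the greedy rule for minimizing maximum lateness:
# sort jobs by deadline and run them back to back, ignoring lengths
# when choosing the order. Job format (length, deadline) is illustrative.

def schedule_min_lateness(jobs):
    """Order jobs by deadline; return (schedule, max lateness).

    Each job is a (length, deadline) tuple. The schedule lists
    (start, finish, deadline) triples in the order the jobs run.
    """
    ordered = sorted(jobs, key=lambda job: job[1])  # earliest deadline first
    t = 0
    schedule = []
    max_lateness = 0
    for length, deadline in ordered:
        start, finish = t, t + length
        schedule.append((start, finish, deadline))
        max_lateness = max(max_lateness, finish - deadline)
        t = finish
    return schedule, max_lateness
```

Even though the order is chosen without looking at job lengths, the exchange-style proofs in the section show no other ordering achieves a smaller maximum lateness.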
===== Section 4.4 =====
This section covers Dijkstra's shortest-path algorithm, which we discussed today in class (Monday, Feb. 26th). I was able to understand the algorithm much better in class, since it was easier to grasp with a real-time explanation, but the section definitely helped to supplement my knowledge of the algorithm. For example, the algorithm actually runs in O(//m// log //n//) time, not O(//n// log //n//) time. In class we just sort of said //n// log //n// since it is what we are used to, while //m// log //n// is the correct runtime with a heap-based priority queue.

I give this section a 7 for readability and a 7 on the interesting scale.
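The heap-based version of the algorithm, which gives the O(//m// log //n//) bound mentioned above, can be sketched as follows. The adjacency-dict graph format and function name are assumptions for illustration, not from the text.

```python
import heapq

# Sketch of Dijkstra's algorithm with a binary heap. Each edge is
# examined once and each heap operation costs O(log n), giving the
# O(m log n) bound. Graph format: {node: [(neighbor, weight), ...]}.

def dijkstra(graph, source):
    """Return shortest-path distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]                      # (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Leaving stale entries in the heap and skipping them on pop is a common substitute for a true decrease-key operation; it keeps the same asymptotic bound.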
===== Section 4.5 =====
Section 4.5 covers three ways to create a minimum spanning tree from a connected graph: //Prim's Algorithm//, //Kruskal's Algorithm//, and the //Reverse-Delete Algorithm//. All three are greedy by nature, and all three were covered in class beforehand. The section goes through various proofs asserting the correctness of the three algorithms, the most important of which establish the cut and cycle properties. The cut property identifies edges that must be part of the minimum spanning tree, while the cycle property shows that the most expensive edge on a cycle is never needed, which justifies the deletion of edges in the //Reverse-Delete Algorithm//.

Although this section was informative, the fact that we had covered it in class for multiple days made the reading a bit redundant, as the material had already been presented. I give this section a 4 on the interesting scale and a 6 on the readability scale.
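Of the three algorithms, Prim's is the most compact to sketch: grow the tree from one node, always adding the cheapest edge leaving the tree (the cut property) and skipping edges whose other end is already inside (the cycle property). The adjacency-dict format below is an assumption for illustration.

```python
import heapq

# Sketch of Prim's algorithm for a connected graph given as
# {node: [(neighbor, weight), ...]}. The heap always yields the
# cheapest edge crossing the cut between the tree and the rest.

def prim_mst(graph, start):
    """Return (total weight, list of MST edges (u, v, w))."""
    visited = {start}
    frontier = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(frontier)
    total, tree = 0, []
    while frontier and len(visited) < len(graph):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                  # edge would close a cycle; discard it
        visited.add(v)                # cut property: cheapest crossing edge is safe
        total += w
        tree.append((u, v, w))
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(frontier, (wx, v, x))
    return total, tree
```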
===== Section 4.6 =====
Section 4.6 delves into the implementation of Kruskal's algorithm. The new data type introduced in the section is the Union-Find data structure. Implemented using pointers, Union-Find performs three operations: MakeUnionFind(//S//), which creates a Union-Find structure out of some set of nodes //S//; Find(//u//), which returns the name of the set containing node //u//; and Union(//A//, //B//), which merges the two sets //A// and //B// into a single set. For our purposes, Find can be implemented in O(log //n//) and Union in O(1). The overall runtime for Kruskal's algorithm is O(//m// log //n//).

The in-depth implementation of Kruskal's algorithm was interesting; it helped cement the in-class lecture. I give this section a 7 on readability and a 7 on the interesting scale.
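A minimal sketch of the structure described above, using union by size so that Find follows at most O(log //n//) pointers, with Kruskal's algorithm on top. Class and function names are my own; the book's pointer-based version takes set names for Union, while this sketch looks them up internally.

```python
# Sketch of Union-Find (union by size) plus Kruskal's algorithm.
# find() follows parent pointers to the set's representative; union()
# points the smaller set's root at the larger, keeping trees shallow.

class UnionFind:
    def __init__(self, elements):          # MakeUnionFind(S)
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, u):                     # Find(u): O(log n) with union by size
        while self.parent[u] != u:
            u = self.parent[u]
        return u

    def union(self, a, b):                 # merge the sets containing a and b
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra               # smaller set re-points to larger
        self.size[ra] += self.size[rb]

def kruskal(nodes, edges):
    """edges: (weight, u, v) triples; returns (total weight, MST edges)."""
    uf = UnionFind(nodes)
    total, tree = 0, []
    for w, u, v in sorted(edges):          # sorting dominates: O(m log n)
        if uf.find(u) != uf.find(v):       # skip edges that would close a cycle
            uf.union(u, v)
            total += w
            tree.append((u, v, w))
    return total, tree
```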
===== Section 4.7 =====
Section 4.7 applies //Kruskal's Algorithm// to the problem of clustering. Clustering (specifically, single-link clustering) sets up a //k//-clustering of nodes that maximizes the spacing, that is, the minimum distance between points in different clusters, with the number of clusters //k// given at the start of the algorithm. I am unsure what the point of fixing the number of clusters in advance is; I thought it would make more sense to let the algorithm choose the number of clusters so as to maximize spacing.

I give this section a 6 on the readability scale and a 7 on the interesting scale.
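The connection to Kruskal's algorithm is direct: run it on the pairwise distances, but stop merging once exactly //k// components remain. This is a minimal sketch with an illustrative flat union-find; names and the input format are assumptions.

```python
# Sketch of single-link k-clustering via Kruskal's algorithm: merge the
# closest pair of clusters repeatedly, stopping at k clusters. The first
# distance NOT merged is the spacing being maximized.

def k_clustering(points, distances, k):
    """distances: (d, p, q) triples; returns the clusters as frozensets."""
    parent = {p: p for p in points}

    def find(u):                           # follow pointers to the representative
        while parent[u] != u:
            u = parent[u]
        return u

    components = len(points)
    for d, p, q in sorted(distances):
        if components == k:
            break                          # stop: exactly k clusters remain
        rp, rq = find(p), find(q)
        if rp != rq:                       # merge two different clusters
            parent[rq] = rp
            components -= 1
    groups = {}
    for p in points:
        groups.setdefault(find(p), set()).add(p)
    return [frozenset(g) for g in groups.values()]
```

Stopping //k//−1 merges short of a full spanning tree is exactly what makes the resulting spacing optimal, by the same cut-property reasoning used for MSTs.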
===== Section 4.8 =====
Section 4.8 covered the theory and implementation of data compression using Huffman codes. The idea behind this is that when data is sent across networks, it should be in its most compressed form so that the transmission is efficient. Huffman codes achieve this by encoding each letter with a code length based on its frequency: the algorithm repeatedly finds the two least frequent letters and merges them together into a new letter. With a priority queue, the algorithm runs in O(//k// log //k//) time, with //k// being the number of letters in the alphabet.

I give this section a 7 on the interesting scale and a 6 for readability.
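The merge-the-two-rarest loop can be sketched with a heap, which is where the O(//k// log //k//) bound comes from: //k//−1 merges, each costing O(log //k//). The function name and tuple-based tree are illustrative assumptions.

```python
import heapq
from itertools import count

# Sketch of Huffman coding: repeatedly pop the two least frequent
# "letters", merge them into a new letter whose frequency is the sum,
# then read each code off the resulting binary merge tree.

def huffman_codes(freq):
    """freq: {letter: frequency}; returns {letter: bitstring}, prefix-free."""
    tiebreak = count()                     # unique ints avoid comparing trees on ties
    heap = [(f, next(tiebreak), letter) for letter, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate one-letter alphabet
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)  # two least frequent letters
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    _, _, tree = heap[0]
    codes = {}

    def walk(node, prefix):                # leaves are letters; tuples are merges
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix

    walk(tree, "")
    return codes
```

Rarer letters end up merged earlier and thus deeper in the tree, so they get longer codes, while frequent letters stay near the root with short codes.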