Section 4.5 discusses the minimum spanning tree problem, which is focused on finding the cheapest possible subset of edges that keeps the graph connected (such that every pair of nodes is joined by a path). It presents three primary algorithms for this problem, each of which is guaranteed to produce an optimal solution:

- Kruskal's Algorithm: start with just the nodes, then add the edges in order of increasing cost, making sure none of them creates a cycle. Runtime: O(m log n) -- see bottom of page 149.
- Prim's Algorithm: start from a root node and grow outwards. At each step, we simply add the node that can be attached as cheaply as possible to the partial tree we already have (the node that minimizes the "attachment cost"). Runtime: O(m log n) -- see bottom of page 149.
- Reverse-Delete Algorithm: start with the full graph and delete edges in order of decreasing cost, skipping any deletion that would disconnect the graph; kind of like a "backwards" version of Kruskal's. Runtime: can be implemented on a graph with n nodes and m edges to run in O(m log n) time (see page 157).

The section then discusses the Cut Property (4.17, see pg 145), which allows us to easily prove the optimality of both Kruskal's and Prim's Algorithms, since both algorithms only include an edge when its inclusion is justified by that Cut Property. The section also discusses the Cycle Property (*4.20*, pg 147), which helps us prove that the Reverse-Delete Algorithm produces a minimum spanning tree, since it only deletes an edge when that deletion is justified by *4.20.* Note that most of the chapter operates under the assumption that all edge costs are distinct; page 149 shows this assumption costs us nothing, since we can add extremely small numbers to perturb identical edge costs purely for tie-breaking purposes. 4.6 discusses implementing Kruskal's Algorithm through the Union-Find data structure. The problem it discusses is maintaining the set of connected components of a graph that evolves through the addition of edges.
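As a quick self-check on 4.5 and 4.6 together, here is my own minimal Python sketch of Kruskal's Algorithm (not the book's pseudocode; the function names and the edge-list representation are my own assumptions), using a simple array-based Union-Find for the cycle check:

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal's Algorithm.

    n: number of nodes, labeled 0 .. n-1 (my own convention)
    edges: list of (cost, u, v) tuples
    Returns the list of MST edges.
    """
    # parent[u] points toward u's component representative.
    parent = list(range(n))

    def find(u):
        # Walk up to the representative, compressing the path as we go.
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    mst = []
    for cost, u, v in sorted(edges):  # edges in order of increasing cost
        ru, rv = find(u), find(v)
        if ru != rv:             # adding (u, v) creates no cycle
            parent[ru] = rv      # merge the two components
            mst.append((cost, u, v))
    return mst
```

Stopping the loop early, once only k components remain, is exactly the cluster-merging procedure from 4.7, which is the Kruskal connection the book's quote on page 159 describes.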
Our goal is to maintain the set of connected components of such a graph throughout this evolution process. The book details several operations, including Find(u), MakeUnionFind(S), and Union(A,B), to help us solve this problem. The section later details a better data structure for Union-Find, which uses pointers.

4.7 focuses on clustering, which groups the objects in a set based on a measure of how similar each pair of objects is. To me, the most powerful sentence of this section was in the middle of page 159: "What is the connection to minimum spanning trees? It's very simple: although our graph-growing procedure was motivated by this cluster-merging idea [in which each time we add an edge that spans two distinct components, it is as though we have merged the two corresponding clusters], our procedure is precisely Kruskal's Minimum Spanning Tree Algorithm." This sentence helped me understand this section more than anything else.

Overall, this chunk of the textbook started off easy and got progressively more difficult. 4.5 seemed intuitive, easy to follow, and even enjoyable. By page 160, however, the text had mutated into a jumbled mass of variables that got tougher and tougher to chip away at as I became more tired. I think I still have a better grasp of this chunk than I have of previous ones, and feel quite confident, but I still need more practice work. On that note, onto the problem set!