Garrett's Journal

My journal entries for the “Algorithm Design” textbook by Jon Kleinberg and Éva Tardos.

"Algorithm Design" Journal Entries

Week 3

Chapter 3: Graphs
3.1: Basic Definitions and Applications

An undirected graph is defined to be a set of nodes and a set of edges, where each edge is a set of exactly two nodes that represents a direct relationship between them. A directed graph is defined similarly, except that each edge is an ordered pair of nodes rather than an unordered set of two. Thus, given any edge in a directed graph, traversal (or “flow”) is only allowed from the start node to the end node.
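
These definitions can be written down almost literally in code. This is a minimal sketch of my own (the node labels and variable names are not from the book), using a `frozenset` for each unordered edge and a tuple for each ordered one:

```python
# An undirected graph: a set of nodes plus a set of unordered node pairs.
nodes = {1, 2, 3}
undirected_edges = {frozenset({1, 2}), frozenset({2, 3})}

# A directed graph on the same nodes: each edge is an ordered pair,
# so (1, 2) allows flow only from node 1 to node 2.
directed_edges = {(1, 2), (2, 3)}

# In the undirected graph, {1, 2} and {2, 1} are the same edge;
# in the directed graph, (2, 1) is a different (absent) edge.
assert frozenset({2, 1}) in undirected_edges
assert (2, 1) not in directed_edges
```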

3.2: Graph Connectivity and Graph Traversal

Breadth-first search of a graph involves traversing the graph level by level, in that every neighbor of the current node is visited before any of those neighbors' own neighbors are traversed. This is usually accomplished with a queue that keeps track of which nodes to traverse after all of the nodes in the current level of the graph have been traversed, and each node is either marked or not marked to indicate whether or not it has already been visited by the algorithm.
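
A minimal queue-based sketch of breadth-first search (the adjacency-list dictionary and function name here are my own illustration, not the book's code):

```python
from collections import deque

def bfs(adj, start):
    """Visit every node reachable from start, level by level."""
    visited = {start}          # mark nodes as soon as they are discovered
    order = [start]
    queue = deque([start])     # nodes whose neighbors still need exploring
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                order.append(neighbor)
                queue.append(neighbor)
    return order

adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(bfs(adj, 1))  # → [1, 2, 3, 4]
```

Node 4 is only visited after the entire first level (2 and 3) has been exhausted, which is exactly the level-by-level behavior the queue enforces.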

Depth-first search of a graph involves traversing the graph in such a way that when a node is visited, one of its unvisited neighbors is immediately visited, in a recursive fashion. When the search reaches a node with no unvisited neighbors, it backtracks up through that node's ancestors until it finds one with a neighbor that hasn't been visited yet. This is usually accomplished with a stack (or recursion) that keeps track of the path from the start node to the current node being searched, and each node is either marked or not marked to indicate whether or not it has already been visited by the algorithm.
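
A minimal sketch of depth-first search with an explicit stack (an equivalent alternative to recursion; again the adjacency-list layout is my own illustration):

```python
def dfs(adj, start):
    """Visit every node reachable from start, going deep before wide."""
    visited = set()
    order = []
    stack = [start]            # the stack plays the role of the call stack
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push unvisited neighbors; the most recently pushed one is
        # explored next, which is what makes the search go deep first.
        for neighbor in adj[node]:
            if neighbor not in visited:
                stack.append(neighbor)
    return order

adj = {1: [2, 3], 2: [4], 3: [], 4: []}
print(dfs(adj, 1))  # → [1, 3, 2, 4]
```

Popping from the stack implicitly performs the backtracking described above: when node 3 has no unvisited neighbors, the next pop resumes at 2, an earlier node on the path.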

3.3: Implementing Graphs

One way to represent a graph programmatically is through the use of an adjacency matrix. Given a graph of n nodes and m edges, its adjacency matrix is an n×n matrix in which each row and each column corresponds to a node: a non-zero value at position (i, j) represents an edge between node i and node j, while a zero at that position signifies that there is no connection between the two nodes. Note that this representation always takes n² space, even when the graph has far fewer than n² edges.
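
A sketch of building such a matrix for an undirected graph (the function name and 0-based node labels are my own choices for illustration):

```python
def adjacency_matrix(n, edges):
    """Build an n x n matrix; a 1 at [i][j] means an edge between i and j."""
    matrix = [[0] * n for _ in range(n)]
    for i, j in edges:
        matrix[i][j] = 1
        matrix[j][i] = 1   # mirror the entry, since the graph is undirected
    return matrix

# A path graph 0 - 1 - 2:
m = adjacency_matrix(3, [(0, 1), (1, 2)])
# Row 1 shows that node 1 is connected to nodes 0 and 2 but not itself:
assert m[1] == [1, 0, 1]
```

For a directed graph, the mirroring line would simply be dropped, so that row i records only the edges flowing out of node i.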

:!: Section 3.4 is a short section that describes a specific use of the breadth-first search algorithm: checking a graph to determine whether it is bipartite, meaning that its nodes can be partitioned into two distinct sets such that no two nodes within the same set are connected by an edge. I do not think it is necessary to summarize a section that is this short and, in my opinion, superfluous.

2012/02/01 03:33 · garrettheath4 · 0 Comments

Week 2

Chapter 2: Basics of Algorithm Analysis
Arrays and Lists

Algorithms tend to abstract away the method for storing the data that is needed to actually run them. In order to run an algorithm on a computer, we need a way to transform it into a form that a computer can recognize. Luckily, most of the steps in an algorithm are usually easily transferable, but the implementation of the data structures is not specified in the algorithm and so is left up to the programmer to determine. The problem with this is that the space efficiency of a data structure and the way the algorithm manipulates it can severely hurt the algorithm's running time. Thus, we must be careful in determining which data structures will suit the algorithm best.

The two most basic data structures are arrays and linked lists. The following table summarizes the pros and cons of both.

Operation        Arrays  Lists
Get Element      Fast    Slow
Add Element      Slow    Fast
Replace Element  Fast    Fast 1)
Remove Element   Slow    Fast 2)
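
The table can be sanity-checked in Python, with the caveat that this is only an approximation: a Python `list` is array-backed, and `collections.deque` is a block-linked structure that behaves like a linked list at its ends:

```python
from collections import deque

arr = list(range(5))       # array-backed: "Get Element" by index is O(1)
linked = deque(range(5))   # linked blocks: fast adds/removes at the ends

# "Get Element": arrays win, since indexing goes straight to the slot.
assert arr[3] == 3

# "Add Element" at the front: the linked structure wins; the array has
# to shift every existing element one slot to the right first.
linked.appendleft(-1)      # O(1)
arr.insert(0, -1)          # O(n) shift
assert list(linked) == arr == [-1, 0, 1, 2, 3, 4]
```

The footnoted "Fast" entries for lists carry the same caveat as in the table: replacing or removing a list element is only fast once you have already walked to it, and that walk is slow.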
A Survey of Common Running Times

Any algorithm can be characterized by its running time. Running-time complexities can be ordered from slowest-growing to fastest-growing: complexities at the low end of the ordering increase slowly, or hardly at all, as the problem set grows larger, while complexities at the high end increase extremely fast. All of these runtime complexities are categories that can be expressed in terms of Big-O notation, which is discussed in Section 2.2.

In a plot of these complexities (with the problem set size on the x axis and the amount of time needed to finish the computation on the y axis), linear-time problems are those whose time complexity increases at a constant rate as the problem set grows larger. Examples of linear-time problems include finding the maximum element (such as a number) of a set and merging two sorted lists.
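
Both of those examples do a constant amount of work per element, which is where the linear bound comes from. A sketch of the merge step (this function name is mine; the book uses it inside mergesort):

```python
def merge(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time."""
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    result.extend(a[i:])   # at most one of these two lists
    result.extend(b[j:])   # still has elements remaining
    return result

assert merge([1, 4, 9], [2, 3, 10]) == [1, 2, 3, 4, 9, 10]

# Finding the maximum is also linear: one pass, one comparison each.
assert max([7, 2, 9, 4]) == 9
```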

n log n time problems are problems whose time complexity increases at a slowly increasing rate as the problem set grows larger. Since this rate is so slow, it grows similarly to linear time for small problem sets but is noticeably larger than linear time as the problem set becomes larger. Examples of n log n time problems include sorting algorithms such as mergesort (and quicksort in its average case; quicksort's worst case is quadratic).
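
A compact mergesort sketch shows where the n log n bound comes from: there are about log n levels of halving, and each level does O(n) work merging (the merging loop here is inlined for self-containment):

```python
def mergesort(items):
    """O(n log n): log n levels of splitting, O(n) merging per level."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = mergesort(items[:mid])
    right = mergesort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

assert mergesort([5, 2, 8, 1, 9]) == [1, 2, 5, 8, 9]
```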

Quadratic time problems are problems whose time complexity is the square of the size of the problem set, specifically O(n²) where n is the size of the problem set.

Cubic time problems are problems whose time complexity can be described as O(n³) where n is the size of the problem set.
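
The usual source of these bounds is nesting: two nested loops over the problem set give quadratic time, and three give cubic. A hypothetical quadratic example of my own (not one of the book's), checking a list for duplicates by comparing every pair:

```python
def has_duplicate(items):
    """Quadratic pattern: examine all n*(n-1)/2 pairs of elements."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):   # second nested loop -> O(n^2)
            if items[i] == items[j]:
                return True
    return False

assert has_duplicate([3, 1, 4, 1]) is True
assert has_duplicate([3, 1, 4]) is False
```

Wrapping a third loop around a pattern like this (say, examining all triples of elements) is what produces the O(n³) cubic bound.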

Priority Queues

Priority queues are useful data structures for ensuring that the elements of a set are always considered in order from highest priority to lowest priority. A data structure that can be used to implement a priority queue is a heap. A heap is a balanced binary tree in which every element's children are no smaller than the element itself, so the smallest key (the highest-priority element, when smaller keys mean higher priority) is always at the root. If we use a heap to implement a priority queue, then we must take into account what should happen when an element is added to or removed from the heap. If an element is removed from the heap, the last element in the heap is moved into the deleted node's position and the Heapify-down algorithm is performed on that node. Similarly, if an element is added to the heap, it is added as the last element of the heap and the Heapify-up algorithm is performed on it.
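
Python's standard `heapq` module implements exactly this min-heap behavior, so it can illustrate the add/remove story without writing Heapify by hand (the task names here are my own made-up examples):

```python
import heapq

# heapq maintains the heap property described above: each node's
# children are no smaller than the node, so the smallest key (the
# highest-priority task here) is always at the root.
tasks = []
heapq.heappush(tasks, (2, "write journal"))  # push = add at the end,
heapq.heappush(tasks, (1, "read chapter"))   # then Heapify-up
heapq.heappush(tasks, (3, "sleep"))

# pop = remove the root, move the last element into its place,
# then Heapify-down to restore the heap property.
assert heapq.heappop(tasks) == (1, "read chapter")
assert heapq.heappop(tasks) == (2, "write journal")
```

Both operations cost O(log n), since Heapify-up and Heapify-down each walk at most one root-to-leaf path of the balanced tree.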

2012/01/25 03:21 · garrettheath4 · 0 Comments

1), 2) After finding the element, which is slow.
courses/cs211/winter2012/journals/garrett/home.txt · Last modified: 2012/03/07 03:47 by garrettheath4
CC Attribution-Noncommercial-Share Alike 4.0 International