

Kelly's Wiki

Preface: First Two Pages

This section discusses the general nature of algorithms, examples of their applications, and the process for understanding their efficiency. Algorithms are presented as a fundamental feature of computer science. This makes sense, as computers, at their core, solve problems using logical procedures. Efficient algorithms filter out the noise of a problem to focus on the core issue in as simple a manner as possible. Often, to derive an efficient algorithm, programmers must refactor their attempts many times. This refactoring process in turn helps us to better understand the abstractions behind more advanced algorithms. Curiously, these first two pages suggest that we emphasize algorithms in our view of computer science. I wonder why this is the case. Is the natural/intuitive way we have of viewing computers inherently flawed? Although brief, this material was very readable. I would give it a solid ~7 to 8/10, where 5 is average relative to other classes.

Section 1.1: Stable Matching

This section focuses on the problem of stable matching, a concept discussed thoroughly in class. Solving the problem is important because without a solution, we have no consistent way to prevent a simple pair-assigning system from swapping indefinitely. That is, if a matching is unstable, then there will always be two individuals who would leave their current partners to join each other. Fortunately, there is a simple algorithm that efficiently creates a stable matching for all pairs. It prioritizes one side of the pairing (A or B), and each member of that side proposes to its potential partners one at a time, starting from the most desirable and working downward, stopping after a successful pairing. Each member of B may reject its current partner if a better option comes along. The algorithm halts once every member of the chosen side (A or B) is paired, which is guaranteed to happen before anyone exhausts their list of potential partners. Here, we see this algorithm applied to relationships between men and women. The section proves that this algorithm produces a stable matching and that the chosen side is “better off” in terms of the desirability of its pairs than the other side. Further, the section also proves that the worst-case running time for this algorithm is O(n^2) and that the algorithm always produces the same matching from the same preference lists, regardless of the order in which proposals are made. Overall, this section was harder to understand than the preface. It made more sense in class than in the textbook. I would rate its ease of reading at 6/10.
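To check my understanding of the proposal process described above (the textbook's Gale-Shapley algorithm), here is a rough Python sketch I wrote; the dictionary-based preference format and the function name are my own choices, not the book's.

  # Rough sketch of the proposal-based stable matching algorithm (Gale-Shapley).
  # a_prefs[a] lists a's potential partners in B, most desirable first;
  # b_prefs[b] lists b's potential partners in A, most desirable first.
  def stable_matching(a_prefs, b_prefs):
      # rank[b][a] = position of a on b's list, so "is this a better option?" is an O(1) check
      rank = {b: {a: i for i, a in enumerate(prefs)} for b, prefs in b_prefs.items()}
      next_choice = {a: 0 for a in a_prefs}   # index of the next partner a will propose to
      partner_of = {}                         # partner_of[b] = the a currently paired with b
      free = list(a_prefs)                    # members of A not yet paired

      while free:                             # at most n^2 proposals in total
          a = free.pop()
          b = a_prefs[a][next_choice[a]]      # best partner a has not yet proposed to
          next_choice[a] += 1
          if b not in partner_of:             # b is free, so the pairing succeeds
              partner_of[b] = a
          elif rank[b][a] < rank[b][partner_of[b]]:  # b trades up to the better option
              free.append(partner_of[b])
              partner_of[b] = a
          else:                               # b rejects a, who stays free
              free.append(a)
      return {a: b for b, a in partner_of.items()}

  # Tiny example: each member of A gets its first choice here.
  a_prefs = {"a1": ["b1", "b2"], "a2": ["b2", "b1"]}
  b_prefs = {"b1": ["a1", "a2"], "b2": ["a2", "a1"]}
  print(stable_matching(a_prefs, b_prefs))    # pairs a1-b1 and a2-b2, which is stable

The rank table is what makes each "better option" check constant time, which is how the O(n^2) bound mentioned above is actually achieved.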

Chapter 2

Section 2.1: Computational Tractability

Many of the problems we will focus on have clearly specified conditions. Efficiency is an important component of algorithms. In addition to run-time efficiency, we must also consider concepts such as space efficiency. As we have touched on already, efficient algorithms generally lack unnecessary details. Efficiency is not simply how fast an algorithm runs. There are many components to efficiency, such as scale, location, and how the algorithm is run. Thus, in forming a definition of efficiency we must use mathematics rather than simply relying on base intuition. Our approach to the speed component of efficiency will be the running time of the worst case. Worst-case analysis is easier to handle (fewer complications, and often little uncertainty over what the worst case is), and it tells us key information about the algorithm itself rather than simply the way in which it is run. Further, something that I brought up in class is that many programmers refuse to tolerate things ever going wrong; since the worst case holds the highest risk of consequences, it is logical to focus on it. Brute-force search provides a useful baseline against which to compare algorithms. Brute-force search is simply iterating through every single possibility and stopping when we find one that works. Generally, this fails to give insight into problems and has a high running time (the stable matching problem has n! possible perfect matchings, so brute force is O(n!)). Thus, we conclude that “an algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.” A weakness of this definition is that it is overly vague. One desirable property of algorithms is that when we scale up the input size by a constant factor, the running time increases by only a constant factor as well. This holds for any algorithm that runs in polynomial time. Thus, a new and perhaps too simple definition of efficiency for an algorithm is “running in polynomial time.” There are benefits to such a concise definition, such as the ability to classify problems as having an efficient algorithm or not. Overall, this section had a great deal of material but was easy to read/understand. I would rate it 8/10.
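To make the O(n!) comparison concrete, here is a small brute-force sketch I put together (my own illustration, reusing the preference-list format from the earlier sketch, not code from the textbook). It checks every possible perfect matching for stability, which is hopeless beyond very small n and is exactly why the O(n^2) proposal algorithm counts as efficient under the definition above.

  from itertools import permutations

  # Brute-force stable matching: try all n! perfect matchings and return
  # the first one that contains no unstable pair.
  def brute_force_stable_matching(a_prefs, b_prefs):
      a_side, b_side = list(a_prefs), list(b_prefs)
      rank_a = {a: {b: i for i, b in enumerate(p)} for a, p in a_prefs.items()}
      rank_b = {b: {a: i for i, a in enumerate(p)} for b, p in b_prefs.items()}

      for perm in permutations(b_side):                 # n! candidate matchings
          match_a = dict(zip(a_side, perm))             # a -> b
          match_b = {b: a for a, b in match_a.items()}  # b -> a
          # (a, b) is an unstable pair if both prefer each other to their assigned partners
          unstable = any(
              rank_a[a][b] < rank_a[a][match_a[a]] and
              rank_b[b][a] < rank_b[b][match_b[b]]
              for a in a_side for b in b_side
          )
          if not unstable:
              return match_a
      return None   # never reached: a stable matching always exists

Even at n = 10 this loop examines 3,628,800 candidate matchings, while the proposal algorithm from Section 1.1 makes at most 100 proposals.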

Section 2.2: Asymptotic Order of Growth

We analyze run-time in terms of steps (an arithmetic operation, following a pointer, looking up an entry, or assigning a value to a variable). We will ignore the coefficients and all but the highest-order term in our big-O notation. This is mainly because we want to emphasize similarity between algorithms, it is easier to derive, and at large (significant) input sizes the largest term in the expression matters far more than the others. Asymptotic upper bound: T(n) = O(f(n)) if there is a constant c > 0 such that T(n) ≤ c*f(n) for all sufficiently large n. O(n) is a familiar concept, where O stands for order and the argument is the highest-order term with its coefficient removed. Asymptotic lower bounds show that a bound is the best possible instead of the worst possible (we will mostly be working with upper bounds). Asymptotically tight bounds are (basically) both upper and lower bounds at once. This section also reviews the mathematical properties of transitivity and sums and how they apply to bounds/asymptotic growth rates. As input size increases, logarithmic running times increase the slowest, then polynomial, then exponential (extremely fast, sometimes just described as “exponential running time”). Overall, this section was very hard for me to understand, in part because of the prevalence of dense equations. I would rate the ease of reading at between 4 and 5/10. I feel that my understanding of asymptotic lower bounds and asymptotically tight bounds is limited and I will need to continue studying the math for it to improve.
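Since the dense equations were the hard part for me, here are the three bound definitions written out in one place (these are the standard definitions, restated in my own notation rather than copied from the book):

  % Asymptotic upper bound (big-O): T grows no faster than f.
  T(n) = O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0

  % Asymptotic lower bound (Omega): T grows at least as fast as f.
  T(n) = \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \text{ such that } T(n) \ge \epsilon \cdot f(n) \text{ for all } n \ge n_0

  % Asymptotically tight bound (Theta): both of the above hold at once.
  T(n) = \Theta(f(n)) \iff T(n) = O(f(n)) \text{ and } T(n) = \Omega(f(n))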
