

Chapter 2: Basics of Algorithm Analysis

2.1: Computational Tractability

When considering computational algorithms, two crucial aspects are the algorithm's usage of physical memory space, and its run-time. In many situations we will find that a balance must be struck between the two.

This section focuses on the running-time side of things. Obviously, given any problem, it is critical to find as efficient an algorithm as possible for solving it. In working toward an objective, mathematically meaningful definition of efficiency, the primary focus is the worst-case running time over all possible inputs. This is because the best case is often trivial and illuminates nothing, while the “average case,” although it can yield insight in many situations, is a difficult thing to define in the abstract, and therefore over-complicates things.

Firstly, for a given algorithm to be useful, we would expect it to perform better than a brute force search, or the consideration of every possible situation (in terms of the G-S stable matching algorithm, this would entail checking every single perfect matching combination). So we land at the following definition:

  • An algorithm is efficient if it has a polynomial running time, meaning:
    • there exist constants c > 0 and d > 0 such that, for every input instance of size N, the running time is bounded by cN^d primitive computational steps.

Put simply, if the input size were to increase by a constant factor (e.g. double), then the running time should also only increase by a constant factor. Examples of polynomial running-time bounds are n, n log_2 n, n^2, and n^3. Fortunately, in practice it turns out that if an algorithm exists with an impractically high-order polynomial bound, such as n^100, this tends to indicate that a lower-order bound indeed also exists.
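
As a quick illustration of this doubling intuition, here is a minimal Python sketch (the function names and sample sizes are my own, not from the text): a quadratic running time grows by exactly a factor of 4 when the input doubles, while a brute-force exponential one grows by a factor that itself depends on n.

  # Compare how a quadratic and an exponential "running time" react when n doubles.
  def quadratic_steps(n):
      return n ** 2        # stand-in for an O(n^2) algorithm

  def exponential_steps(n):
      return 2 ** n        # stand-in for a brute-force, 2^n algorithm

  for n in [10, 20, 40]:
      print(n,
            quadratic_steps(2 * n) / quadratic_steps(n),      # always 4.0
            exponential_steps(2 * n) / exponential_steps(n))  # equals 2^n, explodes as n grows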

One of the primary benefits of this specific definition is its objectivity, and therefore its exact applicability from abstraction to implementation. Additionally, it is negatable: it allows us to express the claim that there is no efficient algorithm for a particular problem.

I would rank this section a 9, it was very easy to read and very much complemented the class discussion.

2.2: Asymptotic Order of Growth

To move further into the discussion of algorithm running times, a few clarifications and simplifications are in order. First, for the most part we will consider a step in our pseudo-code algorithms to be anything such as assigning a value to a variable, looking up an entry in an array, following a pointer, or performing an arithmetic operation on a fixed-size integer. Additionally, because constants and lower-order terms become inconsequential at larger input sizes, we have an impetus to abstract away to the highest-order term when expressing upper bounds.

Asymptotic Upper Bounds: O

Given a function T(n), we say that T(n) is O(f(n)) (read: T(n) is order f(n)) if, for sufficiently large n, the function T(n) is bounded above by a constant multiple of f(n). Mathematically put, that is if there exist constants c > 0 and n_0 > 0 such that for all n > n_0, we have T(n) <= c * f(n).

Note that a function has many asymptotic upper bounds, e.g. T(n) = n is both O(n^2) and O(n^3).
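
To make the definition concrete, here is a small sketch in Python (the helper name is_upper_bound and the example function are my own) that numerically spot-checks T(n) <= c * f(n) over a finite range of n; this is only a sanity check, not a proof.

  # Spot-check the big-O definition: T(n) <= c * f(n) for all n in [n0, n_max].
  def is_upper_bound(T, f, c, n0, n_max=10_000):
      return all(T(n) <= c * f(n) for n in range(n0, n_max + 1))

  T = lambda n: 3 * n * n + 5 * n + 2
  print(is_upper_bound(T, lambda n: n * n, c=10, n0=1))    # True: T(n) is O(n^2)
  print(is_upper_bound(T, lambda n: n ** 3, c=10, n0=1))   # True: T(n) is also O(n^3)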

Asymptotic Lower Bounds: Omega

The definition of an asymptotic lower bound mirrors that of the upper bound, except that we say T(n) is Omega(f(n)) if, for sufficiently large n, the function T(n) is bounded below by a constant multiple of f(n); that is, there exists a constant epsilon > 0 such that T(n) >= epsilon * f(n) for all sufficiently large n.

Asymptotically Tight Bounds: Theta

If a running time T(n) is both O(f(n)) and Omega(f(n)), then we say that it is Theta(f(n)) indicating that T(n) grows exactly like f(n) to within a constant factor.

For example, consider T(n) = pn^2 + qn + r, where p, q, and r are positive constants.

  • T(n) <= (p + q + r)n^2 for all n >= 1, so T(n) is O(n^2),
  • T(n) >= pn^2, so T(n) is Omega(n^2),
    • Therefore T(n) is Theta(n^2).
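
A quick numeric look at this example, with coefficients of my own choosing, shows the sandwich: for n >= 1 the ratio T(n)/n^2 stays between p and p + q + r, and it settles toward p as n grows.

  # Hypothetical coefficients; the ratio T(n) / n^2 settles toward p,
  # staying between p and p + q + r for n >= 1 (the Theta(n^2) sandwich).
  p, q, r = 2.0, 3.0, 5.0
  T = lambda n: p * n * n + q * n + r
  for n in [1, 10, 100, 1000]:
      print(n, T(n) / (n * n))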

Fun stuff about bounds

  1. Let f and g be two functions such that lim_{n -> infinity} f(n)/g(n) exists and is equal to some c > 0. Then f(n) = Theta(g(n)).
  2. Transitivity holds for O, Omega, and Theta. E.g. if f = O(g) and g = O(h), then f = O(h).
  3. Sums of functions: if f = O(h) and g = O(h), then f + g = O(h). Moreover, this extends to any finite sum of functions fi where each fi = O(h).
  4. If g = O(f), then f + g = Theta(f). I.e. f is an asymptotically tight bound for the combined function f + g.
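
Rule 4 above can be seen numerically with a small sketch (the choice of f(n) = n^2 and g(n) = n log n is mine, just for illustration): since g = O(f), the ratio (f + g)/f heads toward 1, so f + g = Theta(f).

  import math

  # Rule 4: if g = O(f), then f + g grows like f to within a constant factor.
  f = lambda n: n * n
  g = lambda n: n * math.log(n)
  for n in [10, 1000, 100_000]:
      print(n, (f(n) + g(n)) / f(n))   # ratio tends toward 1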

Asymptotic bounds for common functions

  • Polynomials: growth determined by high-order term.
    • Let f be a polynomial of degree d in which the coefficient a_d > 0. Then f = O(n^d). Moreover, f = Omega(n^d) as well, so it follows that f = Theta(n^d).
  • Logarithms: slowly growing–bounded above by polynomials.
    • For every b > 1 and every x > 0, we have log_b n = O(n^x).
    • Can translate between bases, but the base is often left out because log_a n = (1 / log_b a) log_b n, so log_a n = Theta(log_b n), meaning the choice of base is irrelevant when discussing bounds using asymptotic notation.
  • Exponentials: of the form f(n) = r^n for some constant base r.
    • For every r > 1 and every d > 1, we have n^d = O(r^n). That is, every exponential grows faster than every polynomial.
    • For the most part, this renders exponential-time algorithms useless in practice.
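
To see these relative growth rates side by side, here is a small sketch (the particular base, exponent, and sample sizes are my own choice, purely for illustration):

  import math

  # Logarithmic vs. polynomial vs. exponential growth at a few sample sizes.
  for n in [10, 100, 1000]:
      print(f"n={n}: log2(n)={math.log2(n):.1f}, n^3={n ** 3}, 2^n={float(2 ** n):.3e}")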

This was a nice section, also a 9 out of 10. Very readable, and I particularly like how they are getting down into the mathematical foundations a bit more now.
