This section of chapter two classifies running times at a coarser level of granularity than statements like "the algorithm runs for at most 1.62n^2 + 3.5n + 8 steps." We avoid that level of specificity so that similarities among different algorithms, and among different problems, show up more clearly.

Let T(n) be the worst-case running time of an algorithm on an input of size n. We express an asymptotic upper bound on T(n) with the notation O(f(n)): T(n) is O(f(n)) if there are constants c > 0 and n0 >= 0 such that T(n) <= c*f(n) for all n >= n0. Constant factors and lower-order terms stop mattering beyond a certain n, as this calculation shows:

T(n) = pn^2 + qn + r <= pn^2 + qn^2 + rn^2 = (p + q + r)n^2 for all n >= 1.

This shows T(n) = O(n^2), since T(n) <= cn^2 with c = p + q + r (and n0 = 1).

The asymptotic lower bound, written Ω(f(n)), is analogous: here T(n) = pn^2 + qn + r >= pn^2 for all n >= 1, so T(n) = Ω(n^2) whenever p > 0. If we can show that a running time T(n) is both O(f(n)) and Ω(f(n)), then in a natural sense we have found the "right" bound: f(n) is an asymptotically tight bound for T(n), written T(n) = Θ(f(n)). (A small sketch checking these bounding constants numerically appears after this section.)

There are a few more facts to remember. Let f and g be two functions such that the limit of f(n)/g(n) as n approaches infinity exists and equals some constant c > 0; then f(n) = Θ(g(n)). (A worked instance of this rule follows below.) Asymptotic bounds are transitive: if a function f is asymptotically upper-bounded by a function g, and g in turn is asymptotically upper-bounded by a function h, then f is asymptotically upper-bounded by h; in symbols, if f = O(g) and g = O(h), then f = O(h). Sums of functions behave similarly: if f = O(h) and g = O(h), then f + g = O(h).

Lastly, remember the ordering of growth rates: logarithmic functions grow more slowly than polynomials, and polynomials grow more slowly than exponentials. This is a noticeably important chapter. 8.5/10.
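A minimal sketch of the sandwich argument above, assuming the illustrative coefficients p = 1.62, q = 3.5, r = 8 from the opening example (any p > 0 and q, r >= 0 would work): for every n >= 1 the concrete running time sits between p*n^2 and (p + q + r)*n^2, which are exactly the witness constants for the Ω(n^2) and O(n^2) bounds.

```python
# Illustrative coefficients taken from the opening example; not from any real algorithm.
p, q, r = 1.62, 3.5, 8.0

def T(n):
    """Concrete worst-case running time T(n) = p*n^2 + q*n + r."""
    return p * n**2 + q * n + r

# Check the sandwich p*n^2 <= T(n) <= (p + q + r)*n^2 for a range of n >= 1.
for n in range(1, 1001):
    lower = p * n**2              # witness constant for the Omega(n^2) lower bound
    upper = (p + q + r) * n**2    # witness constant c = p + q + r for the O(n^2) upper bound
    assert lower <= T(n) <= upper, n

print("T(n) lies between p*n^2 and (p+q+r)*n^2 for n = 1..1000,")
print("so T(n) = O(n^2) and T(n) = Omega(n^2), hence T(n) = Theta(n^2).")
```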
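A worked instance of the limit rule, using the same polynomial; the particular choice of f and g here is only for illustration:

```latex
\[
f(n) = pn^{2} + qn + r, \qquad g(n) = n^{2}, \qquad
\lim_{n \to \infty} \frac{f(n)}{g(n)}
  = \lim_{n \to \infty}\left(p + \frac{q}{n} + \frac{r}{n^{2}}\right) = p.
\]
```

Since the limit exists and equals p > 0, the rule gives f(n) = Θ(n^2), matching the direct argument via the O(n^2) and Ω(n^2) bounds.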
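Finally, a tiny numerical illustration of the growth-rate ordering; the sample values of n are arbitrary:

```python
import math

# Compare a logarithm, a polynomial, and an exponential at a few sample sizes.
print(f"{'n':>6} {'log2(n)':>10} {'n^2':>12} {'2^n':>22}")
for n in (2, 8, 16, 32, 64):
    print(f"{n:>6} {math.log2(n):>10.1f} {n**2:>12} {2**n:>22}")
```

The gaps widen quickly: by n = 64 the logarithm is 6, the square is 4096, and the exponential already has twenty digits.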