Chapter 2.1: Computational Tractability

A simple definition of efficiency when it comes to algorithms is: an algorithm is efficient if, when implemented, it runs quickly on real input instances. However, this doesn't really say how well the algorithm scales or what exactly counts as a real input instance. So it's best to focus on the worst-case running time, which serves as a bound on the largest possible running time the algorithm can have over all inputs of a given size N. There are some cases where an algorithm performs well on most instances and only a few inputs make it very slow, but this is not common. Average-case analysis, meanwhile, usually relies on a "random" input model that doesn't provide much insight into real inputs. So, for the most part, the worst-case running time is the most illuminating measure.

Another definition of efficiency is: an algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search. This is a better definition, but what is "qualitatively better" performance? In the 1960s, the question of how to quantify the notion of a reasonable running time arose. The proposed answer was that if the input size N increases by a constant factor, an efficient algorithm should only slow down by a constant factor, not by a factor that itself grows with N. So, for a running time proportional to N^d, increasing the input size from N to, say, 2N slows the algorithm down by a factor of 2^d. Running times with this property are called polynomial time.

This leads to a third definition of efficiency: an algorithm is efficient if it has a polynomial running time. Some examples of polynomial running times are n, n log n, n^2, and n^3. With such a specific definition of efficiency, it becomes possible to prove that there is no efficient algorithm for a particular problem.
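
To make the 2^d slow-down mentioned above concrete, here is the calculation behind it (a quick sketch, assuming the running time is proportional to N^d for some constant d, which is what the polynomial-time definition implies):

\[
T(N) = cN^{d} \;\Longrightarrow\; T(2N) = c(2N)^{d} = 2^{d}\,cN^{d} = 2^{d}\,T(N).
\]

So doubling the input size multiplies the running time by the constant 2^d, independent of N. For an exponential running time such as T(N) = c \cdot 2^{N}, by contrast, T(2N) = c \cdot 2^{2N} = 2^{N}\,T(N), a slow-down factor that keeps growing with N.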

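As a rough illustration of why polynomial time is treated as the dividing line, the short Python sketch below compares how many "steps" an n^2 algorithm and a brute-force 2^n search would take as n grows. The input sizes and the two running-time functions are my own illustrative assumptions, not taken from the chapter.

# Illustration only: compare the growth of a polynomial running time (n^2)
# with a brute-force, exponential one (2^n) at a few arbitrary input sizes.

def polynomial_steps(n: int) -> int:
    return n ** 2          # e.g., an algorithm that examines every pair of items

def brute_force_steps(n: int) -> int:
    return 2 ** n          # e.g., a search that tries every subset of n items

for n in (10, 20, 40, 60):
    print(f"n = {n:>3}:  n^2 = {polynomial_steps(n):>6,}   2^n = {brute_force_steps(n):,}")

Even at n = 60 the polynomial count is only 3,600 steps, while the brute-force count is already over 10^18, which is the sense in which a polynomial-time algorithm is "qualitatively better" than brute-force search.
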
I think this section was pretty difficult and confusing, but I think I may have gotten the general idea. I understand why we use worst-case running times, and I think I see why polynomial time is the best (it's definitely better than many of the alternatives, e.g., exponential time). I would rate this at a 6.