

Chapter 2

Section 2.1 Computational Tractability

The authors begin by stressing the importance of efficient algorithms for discrete problems. They then work toward a definition of the term efficiency, concluding that a useful definition must be “platform-independent, instance-independent, and of predictive value with respect to increasing input sizes.” Next, the authors discuss worst-case running times, arguing that it is best to analyze the worst case of an algorithm because worst-case analysis does a reasonably effective job of capturing an algorithm's true efficiency. (Average-case analysis is too opaque, since there is rarely a quantifiable “average” input; best-case analysis is obviously too optimistic to be meaningful.)

The authors also explain Brute-Force Search, the process of trying every possible solution to a problem, which is almost always the slowest and least effective approach. It can therefore serve as a benchmark against which more sophisticated algorithms are measured.

Finally, the authors introduce the concept of polynomial time: an algorithm runs in polynomial time if its running time on inputs of size N is bounded by a constant multiple of N^d for some constant d. This is a deliberately vague definition, and I will want a firmer one later on. The authors conclude with their working definition: an algorithm is efficient if it has a polynomial running time. They note, however, that efficiency is ultimately relative. Couldn't a non-polynomial-time algorithm be the most “efficient,” or at least the most appropriate, choice for a given problem? The overall readability of this section is 6/10.
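To make the idea of brute-force search as a benchmark concrete, here is a minimal sketch of my own (not from the text), using the two-sum problem: the brute-force version tries all pairs, roughly N^2 of them, while sorting first permits a two-pointer scan in about N log N steps. Both are polynomial, but the contrast shows how a smarter algorithm beats the try-everything baseline.

```python
# Illustration (my own example, not from the book): brute force vs. a
# faster polynomial-time approach for the two-sum decision problem.
from itertools import combinations


def two_sum_brute_force(nums, target):
    """Try every pair of elements: the O(N^2) brute-force benchmark."""
    for a, b in combinations(nums, 2):
        if a + b == target:
            return True
    return False


def two_sum_sorted(nums, target):
    """Sort, then scan inward with two pointers: O(N log N) overall."""
    s = sorted(nums)
    lo, hi = 0, len(s) - 1
    while lo < hi:
        total = s[lo] + s[hi]
        if total == target:
            return True
        if total < target:
            lo += 1   # sum too small: advance the low end
        else:
            hi -= 1   # sum too large: retreat the high end
    return False
```

Both functions agree on every input; the point is that the benchmark tells us how much the sorted version actually buys us as N grows.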

courses/cs211/winter2018/journals/donohuem/chapter2.1516133649.txt.gz · Last modified: by donohuem