Analyzing algorithms involves thinking about how their resource requirements (the amount of time and space they use) scale with increasing input size.

Computational tractability: finding efficient algorithms. We shall concentrate on paradigmatic problems (models) and approaches, with a minimum of irrelevant detail, in order to design efficient algorithms. Most of the problems we are going to study are discrete, meaning they involve an implicit search over a large set of combinatorial possibilities. Our main concern with efficiency is how fast an algorithm can run.

What do we mean by efficiency? A first proposed definition:

Definition 1: An algorithm is efficient if, when implemented, it runs quickly on real input instances. Unfortunately, this definition does not specify on what platform or how well we implemented the algorithm, or how large the input instances are.

We focus on the worst-case running time of an algorithm, in contrast to its average-case running time, because average-case analysis often tells us more about how the random inputs were generated than about the algorithm itself. The worst-case running time gives a bound on the largest possible running time the algorithm could have over all inputs of a given size. We then compare an algorithm's worst-case running time against brute-force search over the space of possible solutions.

Definition 2: An algorithm is efficient if its worst-case performance is qualitatively better, at an analytical level, than brute-force search. We shall reconsider this definition because it is vague: what do we mean by qualitatively better performance? This suggests that we consider the actual running time of an algorithm more carefully, and try to pin down a "reasonable" running time.
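As a small illustration of "qualitatively better than brute-force search" (this example problem, pair-sum, is my own choice and not from the notes), a minimal sketch comparing a brute-force search over all pairs with a linear-time alternative:

```python
from itertools import combinations

def has_pair_with_sum_brute(nums, target):
    """Brute-force search: examine every pair -- roughly n^2/2 checks."""
    for a, b in combinations(nums, 2):
        if a + b == target:
            return True
    return False

def has_pair_with_sum_fast(nums, target):
    """Qualitatively better: a single pass with a hash set -- about n checks."""
    seen = set()
    for x in nums:
        if target - x in seen:  # have we already seen the complement?
            return True
        seen.add(x)
    return False
```

Both return the same answers, but the second scales linearly with input size, which is the kind of analytical-level improvement Definition 2 has in mind.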
Final definition: An algorithm is efficient if it has a polynomial running time; that is, for input size N the running time is at most cN^d for constants c > 0 and d > 0. If the input size doubles, the running time becomes c(2N)^d = c * 2^d * N^d, so it slows down by only a constant factor of 2^d, independent of N. This third definition lets us calculate how an algorithm performs at different input sizes, and to discuss these issues in concrete terms, unlike the first two definitions. It does not follow that an efficient algorithm exists for every particular problem, and on specific input ranges some running times outperform others. Overall, though, polynomial running times behave well across a wider range of input sizes than other running times.
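The "doubling slows it down by only 2^d" claim can be checked directly from the polynomial model T(N) = cN^d. A minimal sketch (the constants c = 1 and d = 2 are illustrative choices, not from the notes):

```python
def running_time(n, c=1.0, d=2):
    """Polynomial running-time model: T(N) = c * N**d."""
    return c * n ** d

# Doubling the input size multiplies the running time by exactly 2**d,
# a constant factor that does not depend on N.
ratio_small = running_time(200) / running_time(100)
ratio_large = running_time(20_000) / running_time(10_000)
```

For d = 2 both ratios equal 2**2 = 4, whatever the starting N, which is what makes polynomial running times predictable across input sizes.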
*We would trade an "easy to understand" algorithm for a more efficient one. Readability of 2.1: 9/10.