Section 2.1: Computational Tractability
This section works toward a definition of algorithmic efficiency. It points out that we cannot simply define efficiency as speed, because even bad algorithms run quickly on fast computers or on small test cases. When analyzing efficiency we focus on the worst case, since the best case is not particularly realistic and average-case analysis ends up saying more about how the input is generated than about how the algorithm actually behaves. If we tried to work through a problem using just brute force, we would end up with hopelessly poor run times. The section also discussed polynomial run time, which means that when the input size increases by a constant factor, the running time slows down by only a constant factor. Defining efficiency in terms of polynomial run time also gives us a definition that is negatable: we can meaningfully say that there is no efficient algorithm for a problem.
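To make that difference concrete to myself, here is a small sketch (my own illustration, not from the book) comparing a hypothetical brute-force algorithm that examines roughly 2^n possibilities with a polynomial-time algorithm bounded by c * n^d steps; the specific values of c and d are made up:

```python
# My own toy comparison, not the book's code: a brute-force search that looks
# at roughly 2**n possibilities versus a polynomial algorithm bounded by
# c * n**d steps (c and d chosen arbitrarily for illustration).

def brute_force_steps(n: int) -> int:
    # e.g., trying every subset of an n-element input
    return 2 ** n

def polynomial_steps(n: int, c: int = 3, d: int = 2) -> int:
    # the c * n**d bound from the definition of polynomial running time
    return c * n ** d

for n in [10, 20, 40, 80]:
    print(f"n={n:>3}  brute force ~{brute_force_steps(n):>26,}  "
          f"polynomial ~{polynomial_steps(n):>8,}")
```

Going from n = 10 to n = 20 multiplies the polynomial column by only 2^d = 4, a constant, while the brute-force column gets multiplied by 2^10 = 1024, which is exactly the "explosion" the section warns about.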
The motivation behind this section's discussion is the desire to create algorithms that are both correct and efficient. However, in order to determine what counts as efficient, we have to define efficiency in a way that can actually be proved or disproved. If we cannot prove efficiency, then we cannot prove that we have created a working, useful algorithm.
There were no specific algorithms discussed in this section.
I am still a little confused about the two constants, c and d: how they interact with each other and how they relate to the running time.
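Writing out some toy numbers helped a little (the specific values of c and d below are made up by me, not taken from the text):

```python
# My own toy illustration of the two constants in the c * n**d bound.
def bound(n: int, c: int, d: int) -> int:
    return c * n ** d

n = 100
print(bound(n, c=3, d=2))      # 30,000: the baseline bound
print(bound(n, c=6, d=2))      # 60,000: doubling c just doubles the bound
print(bound(n, c=3, d=3))      # 3,000,000: bumping d changes the growth rate itself
print(bound(2 * n, c=3, d=2))  # 120,000: doubling n multiplies the bound by 2**d = 4, independent of c
```

So, as far as I can tell, c only rescales the bound by a fixed amount, while d controls how quickly the bound grows as the input gets larger.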
After reading this, the relationship between input size and algorithm speed became clearer and more concrete. It also explained why we should care about run time and how run times can explode if we are not careful.
I would give this section a 7 for interest and readability as it was once again straightforward, but not as interesting as the section on the actual algorithms.