A first proposed definition: An algorithm is efficient if, when implemented, it runs quickly on real input instances.
At first glance this seems reasonable.
But they list a number of objections.
Where do you run it?
Different computers have different characteristics.
Different processor speeds over time
Different operating systems
Different loads
Different ...
Implementation problems
Can you implement a good algorithm poorly?
Can the language make a difference?
Are there other factors that make a difference?
On what input?
What happens when the size of the input changes?
Have you considered all input possibilities, edge cases?
What do you mean by "runs quickly"?
You live in a very artificial world right now.
Most homework runs in seconds.
But this is not the case in the real world.
I suspect you may have other objections as well if you think about it.
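To make these objections concrete, here is a minimal sketch (mine, not the text's) of judging efficiency by the clock: the same unchanged function produces different timings on different machines, under different loads, and for different input sizes, so the raw seconds tell us little by themselves.

```python
# A minimal sketch: timing the same function gives numbers that depend on
# the machine, its current load, and the size of the input.
import random
import time

def find_min(values):
    """A simple linear scan; the algorithm itself never changes."""
    best = values[0]
    for v in values[1:]:
        if v < best:
            best = v
    return best

for n in (1_000, 100_000, 10_000_000):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    find_min(data)
    elapsed = time.perf_counter() - start
    # These numbers will differ from machine to machine and from run to run.
    print(f"n = {n:>10,}: {elapsed:.6f} seconds")
```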
They state that we need a concrete definition of "efficient" that:
Is platform independent
Is instance independent (no particular implementation needed)
Is predictive of performance as the input scales up.
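A sketch of what such a measure might look like, assuming we count abstract steps (here, comparisons) instead of seconds: the count is the same on every platform, needs no tuned implementation, and grows predictably with n. The function name is mine, for illustration.

```python
# Counting abstract steps instead of seconds: platform independent and
# predictive of how the cost grows with n.

def find_min_counted(values):
    """Same linear scan as before, but return (result, comparisons made)."""
    comparisons = 0
    best = values[0]
    for v in values[1:]:
        comparisons += 1
        if v < best:
            best = v
    return best, comparisons

for n in (10, 100, 1000):
    _, steps = find_min_counted(list(range(n, 0, -1)))
    print(f"n = {n:>4}: {steps} comparisons")   # always n - 1, on any machine
```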
They briefly discuss the "magic" value n, the size of the input.
For example, in the stable matching problem there are two possible choices for n:
The size of each set
The 2n preference lists, each of size n, which gives us a value on the order of n²
At times we will need to be careful about how we select n,
And make sure that we state what n is.
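A small sketch (the helper is mine) of the two candidate choices of n for a stable matching instance: the number of people on each side versus the total number of preference-list entries actually written down.

```python
# Two candidate "n"s for a stable matching instance: with n people on each
# side there are 2n preference lists of length n, so 2n^2 entries in total.

def input_sizes(n):
    people_per_side = n               # one natural choice of "n"
    preference_entries = 2 * n * n    # the size of the written-down input
    return people_per_side, preference_entries

for n in (3, 10, 100):
    side, entries = input_sizes(n)
    print(f"{side} per side -> {entries} preference entries")
```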
In general we analyze the worst case performance.
This seems bad at first,
But it is usually more useful than the best case, although we will look at that as well.
And it is often MUCH easier to compute than the average case
Average case tends to be:
Again, hard to compute
And probably not predictive of real input, which is probably not random.
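A quick sketch with linear search (my example, not the text's) of why the worst case is easy to pin down while the average case rests on an assumption about how inputs are distributed.

```python
# Best and worst case are fixed by the algorithm; the "average" case depends
# on an assumed distribution over inputs.

def linear_search(values, target):
    """Return how many elements are examined to find target."""
    for i, v in enumerate(values):
        if v == target:
            return i + 1
    return len(values)

data = list(range(100))
print("best case: ", linear_search(data, 0))    # target is first: 1 step
print("worst case:", linear_search(data, 99))   # target is last: 100 steps
# An "average" of about 50 steps assumes the target is equally likely to be
# anywhere, which real inputs may not satisfy.
```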
From Architecture: We will discover that the best way to benchmark your program is as a real program, with real input on the real computer you will be using.
They discuss the search space for the stable matching problem: there are n! possible perfect matchings to check.
By the way, this would get worse (but not much) if we also permuted the numbers, but that would only produce duplicate cases.
So checking the entire search space would be bad!
They point out that, even with our limited understanding of O(f(n)), we can still see that our algorithm is much better than the worst possible case.
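To make "much better" concrete, here is a tiny sketch comparing the growth of the brute-force search space (the n! possible perfect matchings) with the at most n² proposals the Gale-Shapley algorithm makes.

```python
# Growth of the brute-force search space (n!) versus the roughly n^2 work
# of the proposal algorithm.
import math

for n in (5, 10, 20, 50):
    print(f"n = {n:>2}: n! = {float(math.factorial(n)):.3e}   n^2 = {n * n}")
```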
What would an exhaustive sorting algorithm do?
They point out that exhaustive search is usually a bad approach.
And that we have decided this analytically (no need to run the "algorithm").
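Answering the question above about exhaustive sorting: a sketch (mine, not the text's) of what such an "algorithm" would do, namely try each of the n! orderings until one happens to be sorted.

```python
# An exhaustive "sorting algorithm": generate every permutation and return
# the first one that is in order. It may inspect up to n! candidates, so it
# is hopeless beyond tiny inputs -- and we can see that without running it.
from itertools import permutations

def exhaustive_sort(values):
    for candidate in permutations(values):
        if all(candidate[i] <= candidate[i + 1] for i in range(len(candidate) - 1)):
            return list(candidate)

print(exhaustive_sort([3, 1, 4, 1, 5]))   # fine here, but only because n is tiny
```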
A second proposed definition: An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than a brute-force search.
Unfortunately, this suffers from a poor definition of "qualitatively better."