Can there be more than one n0 of a function in Asymptotic Notations?

Considering the case of Big-O here (as an example), you would know that,
 
f(n) is O(g(n)) if there exist real constants c (c > 0) and n0 such that f(n) <= c g(n) for every input size n (n > n0).


My question here is simple: is it possible that there could be more than one n0 for a function? (be it in the case of Big-O, Omega, etc.)
It depends on your intent. If this is a computing theory class, you only have one of each type (best case, worst case, average case, whatever else; each has one measure).

If it's at work and you want to know how the darn thing performs, there may be some functions where you have two or more values; consider a function that either does nothing at all because it does not need to, or does some hefty computation. It could be O(1) or O(n*n) depending on the inputs, and while in theory that is O(n*n), in reality you may need both measures, as well as some real-world stats on how often each case happens.

The point is that it depends on what you want to know: the question you are trying to answer determines what you study and what you write down as the answer. In theory it's always the same; in practice you may have a tailored answer to a tailored question.
> f(n) <= c g(n) for every input size n (n > n0)
From one point to the end, `a' is bigger than `b'; that's all it means.
I don't understand what you mean by several n0's.
Once you identify a value n0 for which f(n) <= c g(n) holds for every n greater than it, EVERY value greater than that n0 also satisfies the condition.

All the definition states is that you can identify a point at which ALL succeeding sizes cause f(n) <= c g(n). If it's true for sizes 1 to infinity, then it's also true for sizes 2 to infinity, or 10 to infinity, or 100000 to infinity. So, yes, there can be multiple n0's.

However, for a proof, one would generally calculate a single c and n0 at which f(n) == c g(n) and prove that all values > n0 satisfy the condition.

Actually, in practice you learn how to identify the common complexity levels and realize that O(n^2) > O(n log(n)) > O(n) > O(log(n)) > O(1). The proof that you are working through is to give you a mathematical basis for understanding why complexity levels are important. In the field you won't be doing these calculations.