Why does increasing the denominator allow you to find out whether a function is an element of some big-Oh set?

In my textbook, Data Structures and Algorithm Analysis in C++ (Fourth Edition) by Mark Allen Weiss, problem 2.1 in chapter 2, "Algorithm Analysis" (page 89), asks us to order a series of functions by growth rate from slowest growing to fastest. Here is the problem:

Order the following functions by growth rate: N, N^(1/2), N^(3/2), N^2, N log N, N log log N, N log^2 N, N log(N^2), 2/N, 2^N, 2^(N/2), 37, N^2 log N, N^3. Indicate which functions grow at the same rate.


Here is how I have ordered them so far (excluding the ones I haven't placed yet):

2/N, 37, N^(1/2), N, 2N, N^(1.5), N^2, N^3

Where I got stuck is when I tried to prove that N^3 is an element of O(2^(N/2)). In my class, I know that if you can find a constant C that makes the inequality hold, then you know the big-Oh relationship is true. According to my teacher, in order to solve for C you are allowed to increase the function on the left of the inequality sign. Here is the example my teacher gave:

We want to show f(n) = n^2 + 2n + 1 ∈ O(n^2). Start with k = 1, assuming n > 1:
f(n)/g(n) = (n^2 + 2n + 1)/n^2 < (n^2 + 2n^2 + n^2)/n^2 = 4
(Note that we used the inequalities 2n < 2n^2 and 1 < n^2.)


Why are we allowed to increase the value of the denominator expression here? I am confused as to why you are allowed to just change it. Can someone illuminate this?

Note: In case anyone is unfamiliar with big-Oh, here is the definition that you can find at https://www.cs.sfu.ca/~ggbaker/zju/math/growth.html:

For functions f(x) and g(x), we will say that “f(x) is O(g(x))” [pronounced “f(x) is big-oh of g(x)”] if there are positive constants C and k such that
|f(x)|≤C|g(x)| for all x>k.
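
To make the definition concrete, here is a minimal C++ check (my own sketch, not from the book) of the teacher's example above, using the constants it arrives at (C = 4, k = 1):

#include <iostream>

int main() {
    // f(n) = n^2 + 2n + 1, g(n) = n^2, with C = 4 and k = 1
    // (the constants from the worked example above).
    const double C = 4.0;
    bool holds = true;
    for (double n = 2; n <= 1e6; n *= 10) {
        double f = n * n + 2 * n + 1;
        double g = n * n;
        std::cout << "n=" << n << "  f(n)/g(n)=" << f / g << '\n';
        if (f > C * g) holds = false;
    }
    std::cout << (holds ? "|f(n)| <= 4|g(n)| held for every n tested\n"
                        : "bound violated somewhere\n");
}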

your def explains it :)
you are looking for the O() function. g is the O() function. You are adjusting it to find the answer; this is allowed because it's not an equality or even 'real' math, it's a (crude) approximation technique. You are finding g by tinkering with it. Maybe don't think of it as the denominator.

n^2 + 2n + 1 does not require a lot of work (showing it formally takes some energy, but that part is just classwork). It's just the dominant term, which is N^2 ....



"Crude" math is always what confuses me the most. Haha. I guess my question is, if you divide the entire inequality by, in this case, 2^N/2 why does increasing n^3/(2^N/2) allow you to find what C is? Why can't you, for example, decrease it?

Maybe my question is not why you are allowed to do this operation, but why it would help you get what you want. Why would increasing the value say anything about what C is? Wouldn't changing it mean that you are looking for a C for a different function entirely?

Is it because, even if you divide both sides by 2^(N/2), you are still looking for a C that would be multiplied by 2^(N/2)? So as long as you know the function is less than 2^(N/2), you are allowed to increase it? Sorry if I have a lot of questions. It is just really hard to grasp.
Why can't you, for example, decrease it?
you can decrease it. Have a go, work out what it tells you, and see for yourself.

let's make it simple and divide by 1:
C comes out to (n^2 + 2n + 1)
so that did not help you...
then divide by N, and C will end up being some sort of N-based polynomial.
you keep going until it's a constant.
If you keep decreasing, which at some point means making the denominator 1/N or 1/(N*N) etc., you will see C growing in terms of the powers of N...

right?
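
here is a rough C++ sketch of that idea (the test values are arbitrary, my own choice): same numerator, different denominators, so you can watch what C would have to be in each case.

#include <iostream>

int main() {
    // f(n) = n^2 + 2n + 1 divided by candidate g(n) denominators.
    // If the ratio settles near a constant, that g(n) works for the big-O.
    for (double n = 10; n <= 100000; n *= 10) {
        double f = n * n + 2 * n + 1;
        std::cout << "n=" << n
                  << "  f/1=" << f                // grows without bound
                  << "  f/n=" << f / n            // still grows, roughly linearly
                  << "  f/n^2=" << f / (n * n)    // settles toward 1: a constant works
                  << '\n';
    }
}

The first two columns keep growing, so no constant C can cap them; the last column settles, which is exactly the "keep going until it's a constant" part.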

--------
it's not a real function or a real division. I am trying to think of an example...
the closest thing I can think of is balancing a chemical equation:
making H2O from hydrogen and oxygen.
H2 + O2 -> H2O
does not balance, so you have to wrangle the values to make it make sense.
turns out that
2H2 + O2 -> 2H2O
you changed the equation, you injected numbers, it's totally different from what you started with; /meltdown if this were classical algebra. You can't DO that. But it's the way this type of equation is resolved, and it's just understood that it's not using classic algebra.
This isn't exactly what you are doing, but the point is that the technique isn't exactly classical equation algebra, and you can't look at it as if it were.

this may help you.
plot N*N
and plot N^2 + 2N + 1
where N is in the millions.
they never quite overlap, but look:
N = 1000: 1000*1000 = 1000000
1000*1000 + 2001 = 1002001
it's that simple here: the amount of work done is about N*N; the extras from the other terms are hardly doing anything at all. That is really what you need once you get out of school: a ballpark of how much CPU work is being done and how long it's going to take on a LOT of real data. If the answer is 'too long', then you realize you need a different approach, so you check google to see if someone ever did it before (they probably did), and you use their smart algorithm instead of your poor one. That's all it really comes down to. I know you need to pass a test on it and satisfy your professor, but unless you are going for a PhD in computational algorithm development, you won't need to overthink this stuff.
So you can increase or decrease all you like as long as you can get a constant for C? What if it was n^2 = C * O(n)? We know intuitively that this is false, but we want to prove it. If we divide n^2 by n, you obviously get n. But what if I decreased n^2 in order to make it so that I would get a constant? In this case, n/n = 1. Can't I do that if I am allowed to decrease?
Need for correction: it is math.

The purpose of Big-O notation is to simplify a function using its dominant term (and discarding the others).

That is, if an algorithm’s behavior can be described as the function (n log n + 3n), we can discard 3n (for n → ∞) and consider the behavior to be close enough to (n log n). If this is the upper limit of the algorithm’s behavior, we call it O(n log n).

Only if given two O(n log n) algorithms do other terms become significant. Otherwise you can simply consider them to be in the same class (behaviorally).


To figure out your assignment, just plug a number in for ‘n’, say, 10, and run it through your calculator. Order from smallest to largest.

Hope this helps.
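
If you'd rather let a program do the arithmetic, here is a rough C++ sketch (my own, not from the textbook). It compares log2 of each function instead of the raw values, since log2 is monotonic the ordering is unchanged and 2^N can't overflow a double; N = 1e6 is an arbitrary choice, because a small sample point like 10 can mislead for growth rates that are close together.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    const double N   = 1e6;               // arbitrary, large sample point
    const double lgN = std::log2(N);
    // Each entry is {log2 of the function's value at N, label}.
    std::vector<std::pair<double, std::string>> f = {
        {1 - lgN, "2/N"},
        {std::log2(37.0), "37"},
        {0.5 * lgN, "N^(1/2)"},
        {lgN, "N"},
        {lgN + std::log2(std::log2(lgN)), "N log log N"},
        {lgN + std::log2(lgN), "N log N"},
        {lgN + std::log2(2 * lgN), "N log(N^2)"},   // log(N^2) = 2 log N
        {lgN + 2 * std::log2(lgN), "N log^2 N"},
        {1.5 * lgN, "N^(3/2)"},
        {2 * lgN, "N^2"},
        {2 * lgN + std::log2(lgN), "N^2 log N"},
        {3 * lgN, "N^3"},
        {N / 2, "2^(N/2)"},
        {N, "2^N"},
    };
    std::sort(f.begin(), f.end());        // slowest-growing first at this N
    for (const auto& p : f)
        std::cout << p.second << "  (log2 of value = " << p.first << ")\n";
}

Note that N log N and N log(N^2) come out only a constant apart, because log(N^2) = 2 log N; that is the kind of "same growth rate" pair the problem wants you to spot.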
Need for correction: it is math.
True. The point was to not think of it like standard algebra/calculus equation manipulation. Once you are past that mental hangup, yes, it IS a branch of math. Apologies for any confusion; I was just trying to get you to look at it differently, and in so doing I was a little TOO careless.

n^2 = C * O(n)

sure.
it's obvious that N*N is nothing like O(N).
proof by that thingy I can't remember the word for, where you assume it's true and show it isn't (contradiction):
if it were true, then
1000*1000 would be "about" 1000*C where C is relatively small (say, < 10).
sure, that is fuzzy, but it's also obvious that, say, 5000 (5N) is nothing like 1000000.
It won't hold up in your proofs class, but if you want formality, I need to go away; my formal math days are so far behind me I don't think I can recall any of the symbols except the 'such that'.

now, if you could show that 1000N were the real O() of the function, and that N were less than 100, the N*N algorithm would be better than the 1000N algorithm. This happens in real life from time to time, by the way. It's rare, but it happens. That isn't really what big-O means, though, because as soon as N becomes a million, you are right back to where you were above. If you need to limit N, that can be a critical part of the analysis for a specific problem, but it's irrelevant for the algorithm itself, in other words. Does that make sense? You are not looking at it for a specific implementation/problem; you are looking at it for the 'algorithm itself', and in that case N approaches infinity, which is what lets a dominant term overwhelm a close but smaller one.
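
a quick sketch of that crossover, using the made-up cost models from above (1000*N vs N*N, just illustrative numbers):

#include <initializer_list>
#include <iostream>

int main() {
    // Hypothetical cost models: a linear algorithm with a big constant
    // (1000*N) vs. a plain quadratic one (N*N).
    for (long long n : {10LL, 100LL, 1000LL, 10000LL, 1000000LL}) {
        long long linear = 1000 * n;
        long long quad   = n * n;
        std::cout << "N=" << n << "  1000N=" << linear << "  N^2=" << quad
                  << (quad < linear ? "  -> N^2 is cheaper here\n"
                                    : "  -> 1000N is at least as cheap\n");
    }
}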
I think maybe I'm starting to get it. Since the relationship is an inequality, is that why we can modify it in ways you couldn't modify an equality statement?

Like in the case of trying to find out whether N^3 = C * O(2^(N/2)) for some C and k, would I be allowed to make the inequality N^10 / 2^(N/2)? I am not saying that would be useful, but am I allowed to do it as long as it doesn't exceed the original value 2^(N/2)?
I think maybe I'm starting to get it. Since the relationship is an inequality, is that why we can modify it in ways you couldn't modify an equality statement?

this, and also because it's an approximation. The rules for manipulating an inequality via algebra may be slightly stricter than what we are doing here as well. Inequality algebra is a little odd and I have forgotten some of it, but I think we break those rules here too, at least some of the time.

right, and you *can* set it to those values you said as well; it just gives you a useless result. The reason you don't exceed the original value is that the result is not useful; it's a red flag that you are out of bounds, not a 'you can't do it'. Again, nothing like seeing for yourself: go ahead and try to solve it with your 'bad' values and it will come back with an unhelpful result, and then you may see why there is no point in doing that.
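
and for the pair you were originally stuck on, N^3 vs 2^(N/2), you can just watch the ratio directly; a minimal sketch (sample points are my own choice):

#include <cmath>
#include <iostream>

int main() {
    // Ratio f(n)/g(n) for f(n) = n^3 and g(n) = 2^(n/2).
    // If it stays below some constant C past some k, then n^3 is O(2^(n/2)).
    for (double n = 1; n <= 100; n += 9) {
        double ratio = (n * n * n) / std::pow(2.0, n / 2);
        std::cout << "n=" << n << "  n^3 / 2^(n/2) = " << ratio << '\n';
    }
}

the ratio climbs to a peak a bit above 30 near n = 9 and then collapses toward zero, so a C above that peak (say C = 33, with k = 1) is what the formal proof needs to establish.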
Thank you for taking so much time to help me. I don't know why I can't get this. It's just so confusing to me but know that I have been spending a lot of time doing research and trying to wrap my head around this. Normally, I would just move on but not understanding this has caused me to confuse myself when solving the problems. (I understand if you need to move on. Haha. I might just not be able to grasp this right now).

So maybe my next question is: why are we allowed to do approximations here? Like, what is happening when we divide f(x) by g(x)? What does that represent? And what does C represent when it sits alone on the right side, greater than or equal to f(x)/g(x)? I don't know if these questions make sense.
I finally had a chance to read this a little more closely. @OP appears to still have questions about the initial inequality.

The rule:
For functions f(x) and g(x), we will say that "f(x) is O(g(x))" [pronounced "f(x) is big-oh of g(x)"] if there are positive constants C and k such that |f(x)| <= C|g(x)| for all x > k.

The problem:
We want to show f(n) = n^2 + 2n + 1 ∈ O(n^2 ). Start with k = 1, assuming n > 1:

In this problem, f(n) correlates to f(x), and n^2 correlates to g(x). So we need to demonstrate that
n^2 + 2n + 1 <= C(n^2).

First, get your head around some mathematical truths based on the fact that n > 1 (our 'k' value):
(Note that we used the inequalities 2n < 2n^2 and 1 < n^2.)

Substitute these two statements into the definition of f(n) to get f'(n) = n^2 + 2n^2 + n^2.
Because of the inequalities used when substituting, we know that:
n^2 + 2n + 1 < n^2 + 2n^2 + n^2

thus...
f(n) < f'(n)

Next is probably the step that causes you hangups. If we can show that f'(n) <= C*g(n), then because f(n) < f'(n) we know that f(n) < C*g(n). This is where the division comes in.

Test f'(n) compared to g(n)
n^2 + 2n^2 + n^2 <= C*n^2

Divide both sides by n^2
1 + 2 + 1 <= C

So, the inequality holds true for C >= 4.
So, there is a C for which f(n) < C*g(n).

Therefore, f(n) is O(n^2).
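
If seeing numbers helps, here is a tiny C++ check of that chain f(n) < f'(n) <= C*g(n) (the sample values of n are arbitrary):

#include <initializer_list>
#include <iostream>

int main() {
    // Check f(n) < f'(n) <= C*g(n) with C = 4, for a few n > k = 1.
    for (double n : {2.0, 10.0, 1000.0, 1e6}) {
        double f     = n * n + 2 * n + 1;          // original f(n)
        double fbig  = n * n + 2 * n * n + n * n;  // enlarged f'(n), equals 4n^2
        double bound = 4 * (n * n);                // C*g(n)
        std::cout << "n=" << n << "  f=" << f << "  f'=" << fbig
                  << "  4*g=" << bound << '\n';
    }
}

Notice that f' and 4*g print the same number: f'(n) is exactly 4n^2, which is why the division comes out to a clean constant.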
to be honest, this stuff is almost invariably poorly taught and over-emphasized. I don't mind trying to help, but I am coming at it from a practical side, not a PhD proof/theory side.

what is the question you want to answer?
- you want to know about how much work your algorithm is doing.
- why do you want to know this? Bubblesort 5 million items and time it. Now std::sort 5 million items and time it. Repeat this exercise until you understand why it is useful to know what happened.

Do you need to know that it takes exactly (3*N*N + 13*N + 15/87)/(e^N)! CPU instructions, or is it OK to say that the darn thing is approximately O(N) or whatever?

It turns out, you really do not need to know exactly for most practical cases.

You write some code, and you glance at it and see that you nested 4 loops deep, N^4, and you ask yourself: is this brute-force foolishness too slow, can I do better? You look it up and sure enough, some bright person has done it in O(lg(N)) and posted how they did it. You redo your code their way. Problem solved.

That is really all you are trying to do 99.99% of the time on a job.

What does it represent? Your function / your approximation, I think. I've never done it this way -- you may need to re-read the book to confirm what they stand for. You appear to be driving the approximation until you get the answer, which is the hard way to do it, really, and only useful for rather complex algorithms.

Normally, you can look at the loops and how they iterate (critical: you can nest loops 3 deep and still be O(N) if the loops are not all 0 to N), and remember that recursion IS a loop, and don't forget hidden loops (built-in algorithms often loop for you). Honestly, if you know what N is, and you know the 4 most common O()s that exist, you can just print a total for a big N, see what order it is empirically, and save a ton of trouble.

let's take a dumb example here.
int count = 0;
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j += n / 10)  // stray ';' removed so count++ is the loop body
        count++;                         // note: assumes n >= 10, or the step n/10 is 0

now, try to figure out what it is your book's way.
then print out count for a big value of N and see what it looks like; compare it to N*N and N*lg(N) and N*N*N and see what you have... or print out a few dozen counts for various Ns and do a curve fit (this is the real world, use a math tool) and see what it gives. See which approach is the least trouble to do.

at the end of the day, just by looking at it, you can see it's slightly more than N and far less than N*N. That is probably all you really needed to know. So you can stop there, most likely... that is what I usually do.
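
if you want the experiment pre-wired, here is a rough harness (the comparison columns are just the usual candidates, and the N values are my own picks):

#include <cmath>
#include <iostream>

int main() {
    // Count the work done by the dumb example for growing N, then compare
    // the count against the common growth-rate candidates.
    for (long long n = 1000; n <= 1000000; n *= 10) {
        long long count = 0;
        for (long long i = 0; i < n; i++)
            for (long long j = i; j < n; j += n / 10)
                count++;
        std::cout << "N=" << n
                  << "  count=" << count
                  << "  count/N=" << double(count) / n                      // settles near a constant
                  << "  count/(N lg N)=" << count / (n * std::log2(double(n)))
                  << "  count/N^2=" << double(count) / (double(n) * n)
                  << '\n';
    }
}

count/N settles near a small constant while the other columns shrink, which backs up the eyeball estimate: linear, with a modest constant factor.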
Thank you both!

@doug4 - I finally get it now. So if we can establish that C * g(x) is greater than some easier-to-work-with function that is itself greater than f(x), then we know it is greater than f(x), and we can divide through by g(x) to solve for C. Awesome. Thank you so much!