According to this book on discrete math, if the complexity is O(x^2) and the algorithm takes 3 seconds to run when x = 100, then it takes 9 seconds when x = 200. I'm not quite seeing how that calculation works out. I calculated 12 seconds, since doubling x multiplies x^2 by 2^2 = 4.

I guess they are saying that since 200 is 100*2, the factor of 2 becomes the exponent, so they take 3^2 to get 9 seconds. I just don't see how they get there from the complexity of O(x^2). I would have guessed that x is the number of elements or operations or something, that you substitute some value for x and calculate the answer, and that the ratio between the results for different values of x would be a factor you could multiply the time by to see how one compares to another.  What am I misunderstanding?

I haven't seen any threads that really talk about Big O in the context of profiling and comparing the actual time it takes to run the worst-case scenario. I've only ever used Big O generally to compare algorithms, but I'm interested a bit more in the math. I just want to fully understand how to determine the execution time for the same algorithm with different numbers of inputs. Actually, I just want to understand exactly what this book is trying to indicate and how it arrived at that conclusion of 9 seconds vs 3 seconds, which is currently escaping me.

You are correct. The book is in error.

3 s = 100^2 * T,   where T is the (assumed) time per unit of work

3 s = 10000 * T

T = 3/10000 s

time_for_200 = 200^2 * 3/10000 s

time_for_200 = 40000 * 3/10000 s

time_for_200 = 12 s

time_for_500 = 500^2 * 3/10000 s

time_for_500 = 250000 * 3/10000 s

time_for_500 = 75 s

closed account (*48T7M4Gy*)

O(N^2) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.

n = N: t

n = 2N: 4t

https://en.wikipedia.org/wiki/Big_O_notation - see graph

The question is bogus in two important ways.

First, Big-O is an upper bound. This means that the asymptotic growth of x^2 is greater than or equal to that of the function in question. For example, a function that always takes 3 seconds no matter the size of the input fits this criterion.

Second, even if we assume that the author meant Big-Theta(x^2), where the actual growth rate is bounded both above and below, there is still a problem: you also need to consider an additive constant.

3 = A * 100^2 + C

=>

y = A * 200^2 + C

In any case, with only one measurement there are infinitely many (A, C) pairs that satisfy the first equation, so y is not determined.
