According to this book on discrete math, if O(x^2) is the complexity and it takes 3 seconds to run the algorithm when x = 100, then it takes 9 seconds when x = 200. I'm not seeing how that calculation works out. I calculated 12 seconds, since (2x)^2 is 4 times bigger than x^2.
I guess they are saying that since 200 is 100*2, the factor of 2 becomes the exponent, so they take 3^2 to get 9 seconds. I just don't see how they get there from the complexity of O(x^2). I would have guessed that x is the number of elements or operations, that you substitute some value for x, and then calculate the result. Different values of x produce different results, and the ratio between those results would be a factor you could multiply by the time to compare one run against another. What am I misunderstanding?
I haven't seen any threads that really discuss Big-O in the context of profiling and comparing the actual time it takes to run the worst case. I've only ever used Big-O in a general way to compare algorithms, but I'm interested in the math here. I want to understand how to determine the execution time of the same algorithm with different numbers of inputs. Really, I just want to understand exactly what this book is claiming and how it arrived at 9 seconds from 3 seconds, which is currently escaping me.
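To make my reasoning concrete, here's the calculation I did (a sketch; it assumes running time is exactly A * x^2 for some constant A, which cancels out of the ratio):

```python
# Naive proportional scaling for an O(x^2) algorithm:
# if time(x) = A * x^2 for some constant A, then doubling x
# multiplies the time by (2x)^2 / x^2 = 4, regardless of A.
t_100 = 3.0                   # seconds, given in the book
ratio = (200 / 100) ** 2      # = 4.0
t_200 = t_100 * ratio
print(t_200)                  # 12.0, not the book's 9
```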
First, Big-O is an upper bound. This means that the asymptotic growth of x^2 is greater than or equal to that of the function in question. For example, a function that always takes 3 seconds no matter the size of the input meets this criterion.
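As a quick sketch of that claim (the witness constants c = 3 and x0 = 1 are one choice among many): a constant 3-second function satisfies the definition of O(x^2) because 3 <= c * x^2 for all x >= x0.

```python
# A constant-time function f(x) = 3 is O(x^2): pick c = 3 and x0 = 1,
# then f(x) <= c * x^2 holds for every x >= x0.
def f(x):
    return 3  # always 3 "seconds", independent of input size

c, x0 = 3, 1
print(all(f(x) <= c * x * x for x in range(x0, 10_000)))  # True
```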
Second, even if we assume that the author meant Big-Theta(x^2), where the actual growth rate is bounded both above and below by x^2, there is still a problem: you also need to consider an additive constant.
3 = A * 100^2 + C
y = A * 200^2 + C