Asymptotic notation

What is asymptotic notation?
Thanks. You did a Google search for me. I hope you usually do that for everyone.

Why do we use asymptotic notation to express the time complexity of an algorithm?
We want to know the scalability of an algorithm: how it behaves as the input size grows (because the amount of data tends to increase).

Exact measurements are possible for a concrete implementation of a concrete algorithm running on a concrete type of machine. They matter when the dataset size is not changing and the operations on it are performance-critical (like quaternion manipulations). In such cases even an asymptotically worse algorithm can perform better, because the constant factors and lower-order costs of an algorithm, which quickly lose relevance as the input size increases, still dominate at a small fixed size.
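
To make the constant-factor point concrete, here is a minimal C++ sketch (my own illustration, not code from this thread): for a tiny fixed-size array, an O(n) linear scan often beats an O(log n) binary search in practice, because each of its steps is cheaper.

```cpp
#include <algorithm>
#include <array>
#include <iostream>

int main()
{
    // Tiny, fixed-size, sorted array: asymptotics barely matter here.
    std::array<int, 8> a{1, 3, 5, 7, 9, 11, 13, 15};

    // O(n) scan: n is fixed at 8, and each step is very cheap.
    bool foundLinear = false;
    for (int x : a)
        if (x == 9) { foundLinear = true; break; }

    // O(log n) binary search: fewer comparisons, but more
    // bookkeeping per step; for n = 8 it is often no faster.
    bool foundBinary = std::binary_search(a.begin(), a.end(), 9);

    std::cout << foundLinear << ' ' << foundBinary << '\n';  // prints: 1 1
}
```
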
Thanks for your reply, Mii.

That means the time required by an algorithm depends on the number of inputs, and only for such algorithms can we calculate the time complexity using asymptotic notation.

Am I right???

That means the time required by an algorithm depends on the number of inputs
Yes
only for such algorithms can we calculate the time complexity using asymptotic notation.
We can calculate the time complexity of any algorithm. If the time required by an algorithm does not depend on the data size, we have constant complexity. For example, access to a hash table using a perfect hash is strictly O(1): no matter how many elements there are, access always takes roughly the same time.
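
A minimal sketch of that idea in C++, using std::unordered_map (the standard hash table, which gives average-case O(1) lookups rather than the strict O(1) of a perfect hash):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    // std::unordered_map is a hash table: average-case O(1) lookup.
    // (A perfect hash would make lookups strictly O(1); the standard
    // container only guarantees constant time on average.)
    std::unordered_map<std::string, int> ages{
        {"alice", 30}, {"bob", 25}, {"carol", 41}};

    // This lookup takes roughly the same time whether the table
    // holds 3 entries or 3 million.
    std::cout << ages["bob"] << '\n';   // prints: 25
}
```
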
@akash16 No. It depends on the data structure and the number of operations or iterations.
Simply enter "Big O notation" in Google.
Thanks Ericool.

But Big O is an asymptotic notation, right?
Yes.
Then what is your point?
Sorry, but I am not following you.
Whom are you talking to?
It was for you.
The same algorithm will take different amounts of time on different hardware, but if you plot the time vs the size of the data, the curves will start to look the same. To express this more precisely, we use Big-O notation. When we say that an algorithm is O(log(N)), we mean that on any hardware, for all sufficiently large N, the runtime is between K1*log(N) and K2*log(N) for some positive constants K1 and K2.

We use Big-O notation to determine how quickly an algorithm will slow down as the input increases.
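
One way to see this for yourself is to time the same O(n) operation at growing input sizes and watch how the measurements scale. Here is a rough sketch (the sizes are arbitrary, and real timings are noisy):

```cpp
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // Time an O(n) pass over the data at doubling sizes; the measured
    // time should roughly double each step, whatever hardware runs it.
    for (std::size_t n : {1'000'000u, 2'000'000u, 4'000'000u})
    {
        std::vector<int> v(n, 1);

        auto start = std::chrono::steady_clock::now();
        long long sum = std::accumulate(v.begin(), v.end(), 0LL);
        auto stop = std::chrono::steady_clock::now();

        auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
        std::cout << "n = " << n << ": " << us.count()
                  << " us (sum = " << sum << ")\n";
    }
}
```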

Now here is something very VERY important that your professor won't tell you: Big-O will often steer you in the wrong direction. You can often do better than Big-O. You're immediately thinking that I'm crazy, but here's the thing: you will learn about Big-O for GENERIC algorithms. For example, you will have it drilled into your head that sorting requires O(n*log(n)) time. That is true for comparison-based sorts only. In other words, if the only thing you can do is compare values, then sorting takes O(n*log(n)) time. If you can do anything else, then you might be able to do better.
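
Counting sort is the classic example: by assuming the inputs are small non-negative integers in a known range [0, k), it sorts in O(n + k) without ever comparing two elements. A minimal sketch of the idea (my own illustration, not code from this thread):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Counting sort: O(n + k) for n values known to lie in [0, k).
// It sidesteps the O(n log n) comparison-sort bound by exploiting
// extra knowledge about the input -- it never compares two elements.
std::vector<int> countingSort(const std::vector<int>& v, int k)
{
    std::vector<std::size_t> count(k, 0);
    for (int x : v)
        ++count[x];                      // tally each value

    std::vector<int> sorted;
    sorted.reserve(v.size());
    for (int value = 0; value < k; ++value)
        sorted.insert(sorted.end(), count[value], value);
    return sorted;
}

int main()
{
    for (int x : countingSort({3, 1, 4, 1, 5, 9, 2, 6}, 10))
        std::cout << x << ' ';           // prints: 1 1 2 3 4 5 6 9
    std::cout << '\n';
}
```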

When faced with an algorithmic problem where Big-O says that you can't do it in reasonable time, look at how the problem differs from the generic problem that Big-O addresses. Then exploit the hell out of the difference. You may find that the problem is solvable after all. I've seen this happen a few times.