Complexity of an algorithm

Hi All,
I have been reading about evaluating the complexity of algorithms, etc. Though I am very good at math and probability, I have never understood a bit of this.
I have read everything nearly 100 times... and NO USE.
How do we measure the complexity of an algorithm? Is it through the time it takes to execute a particular program?
I see the complexity of a program mentioned somewhere as n log n, 2^n, n^2, something like that... what does that mean?

I am trying a lot to understand, but in vain... Please help.
The complexity (expressed in big O notation) usually comes from the maximum number of iterations of a loop.

like:
for(int i = 0; i < n; ++i)
{
}
has complexity n: O(n)

while
for(int i = 0; i < n; ++i)
{
  for(int j = 0; j < n; ++j)
  {
  }
}
has complexity n^2: O(n^2)

Read:
http://en.wikipedia.org/wiki/Big_O_notation
The complexity relates to how many steps your algorithm must take (approximately) to work with some data.

The n represents how many elements the data contains.
It is used to show how the number of steps the algorithm takes grows with the amount of data.

Let's say we have an array named arr of length n.

What's the complexity of an algorithm which prints a random element from arr?
The complexity is O(1) because there is a fixed number of steps involved, which doesn't depend on n (even if that number of steps isn't actually one).
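A minimal sketch of the O(1) case (the helper name random_element and returning the value instead of printing it are my own choices, not from the thread):

```cpp
#include <cstdlib>

// Picking one element takes a fixed number of steps, no matter
// how large n is -- that's O(1).
int random_element(const int arr[], int n)
{
    int index = std::rand() % n; // choose a position: one step
    return arr[index];           // fetch that element: one step
}
```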

What's the complexity of an algorithm which prints all the elements from arr?
The complexity is O(n) because there are n steps involved; the number of steps depends on n.

What's the complexity of an algorithm which prints all elements from arr three times?
It is O(3n), but we say it's O(n): coefficients such as 3 may be disregarded because they don't depend on n.
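A sketch of the O(n) and O(3n) cases, counting element visits instead of printing so the step counts are easy to check (visit_all and visit_all_three_times are hypothetical helper names):

```cpp
#include <cstddef>
#include <vector>

// Visiting every element once takes n steps: O(n).
std::size_t visit_all(const std::vector<int>& arr)
{
    std::size_t steps = 0;
    for (std::size_t i = 0; i < arr.size(); ++i)
        ++steps; // one step per element
    return steps;
}

// Three full passes take 3n steps, but O(3n) is the same as O(n):
// constant coefficients are dropped.
std::size_t visit_all_three_times(const std::vector<int>& arr)
{
    std::size_t steps = 0;
    for (int pass = 0; pass < 3; ++pass)
        steps += visit_all(arr);
    return steps;
}
```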

What's the complexity of an algorithm searching for an element in arr?
That is: we are given an element and we must return its position in arr.
The complexity is O(n) because in the worst case we won't even find the element, and we will search all n elements of arr.
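A sketch of that linear search, assuming we return -1 when the element is absent (the name linear_search is mine):

```cpp
#include <cstddef>
#include <vector>

// Linear search: in the worst case (element absent) we examine
// all n elements before giving up -> O(n).
int linear_search(const std::vector<int>& arr, int target)
{
    for (std::size_t i = 0; i < arr.size(); ++i)
        if (arr[i] == target)
            return static_cast<int>(i); // found at position i
    return -1; // searched all n elements without finding it
}
```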

But what is the complexity of a binary search on array arr, after the array was sorted?
(Binary search requires the array to be sorted.)
The complexity is O(log2 n), usually written simply as O(log n).
This is because our binary search algorithm always cuts the remaining range in half, so there is half as much data left to search in.

arr = [1 1 3 4 5 5 6] search for 6
compare to middle element: 4 < 6 (take right half)
arr = [5 5 6] search for 6
compare to middle element: 5 < 6 (take right half)
arr = [6] search for 6, found


Of course the above is a simplification of the actual algorithm but that's close to how it works.

http://en.wikipedia.org/wiki/Binary_search
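A sketch of that halving idea in code, assuming a sorted std::vector (binary_search_index is a hypothetical name, chosen to avoid clashing with std::binary_search):

```cpp
#include <vector>

// Binary search on a sorted array: each comparison halves the
// remaining range, so at most about log2(n) comparisons -> O(log n).
int binary_search_index(const std::vector<int>& arr, int target)
{
    int lo = 0, hi = static_cast<int>(arr.size()) - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] == target) return mid; // found
        if (arr[mid] < target)  lo = mid + 1; // take right half
        else                    hi = mid - 1; // take left half
    }
    return -1; // not found
}
```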

What about an algorithm that creates a square matrix named mat out of array arr?
It copies the n elements of array arr into matrix mat n times.
Considering that each element copied is one step, we have n^2 steps in total.
So the complexity of the algorithm is O(n^2).
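That copy might be sketched like this with std::vector (make_square_matrix is a hypothetical name):

```cpp
#include <cstddef>
#include <vector>

// Copy arr into each of the n rows of a square matrix:
// n rows * n elements per row = n^2 copy steps -> O(n^2).
std::vector<std::vector<int>> make_square_matrix(const std::vector<int>& arr)
{
    std::vector<std::vector<int>> mat;
    for (std::size_t row = 0; row < arr.size(); ++row)
        mat.push_back(arr); // copying one row is n steps
    return mat;
}
```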

Read more here:
http://en.wikipedia.org/wiki/Time_complexity#Table_of_common_time_complexities
n log2 n - means that this algorithm performs n operations and each operation does log2 n operations. Or it could be the other way around: the algorithm does log2 n operations and each operation does n operations. An example of an algorithm that behaves like this is std::sort, found in the algorithm library:
http://en.cppreference.com/w/cpp/algorithm/sort
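A quick usage sketch of std::sort (sorted_copy is a hypothetical wrapper, not part of the library):

```cpp
#include <algorithm>
#include <vector>

// std::sort runs in O(n log n): roughly log2(n) levels of
// partitioning, each touching all n elements.
std::vector<int> sorted_copy(std::vector<int> v)
{
    std::sort(v.begin(), v.end());
    return v;
}
```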

log2 n - (usually written log n) means an algorithm that cuts the number of operations left to carry out in half at each iteration. An example is binary search, as Catfish666 pointed out.

2^n - This is the opposite of log2 n: rather than reducing the amount of work to be done by half, this algorithm doubles the amount of work to be done at each step. An example of such an algorithm is the classic recursive Fibonacci algorithm:
unsigned Fib(unsigned n)
{
    if (n <= 1)
        return n;
    else
        return Fib(n - 1) + Fib(n - 2); // doubling occurs here
}


n^2 - This is about the worst complexity I can think of for an everyday algorithm, apart from the occasional 2^n or n^n algorithms. It does n operations per iteration, thus making it very sloooowwwww. Another classic example of this is the bubble sort algorithm usually taught in schools (why??)
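A sketch of that bubble sort, to make the nested n-by-n comparison structure visible (bubble_sort is a hypothetical helper):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort: up to n-1 passes, each comparing adjacent pairs
// across the array, giving roughly n^2 comparisons -> O(n^2).
void bubble_sort(std::vector<int>& v)
{
    for (std::size_t pass = 0; pass + 1 < v.size(); ++pass)
        for (std::size_t i = 0; i + 1 < v.size() - pass; ++i)
            if (v[i] > v[i + 1])
                std::swap(v[i], v[i + 1]); // bubble larger value up
}
```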


Hope that answers your question. For the most part, algorithm complexity is calculated using the worst case scenario to provide the client with what to expect in the case the algorithm performs at that worst possible case.
It's confusing because most people don't know what big-O means and even those who do commonly use incorrect terminology.

Big O(F(n)) is the set of functions which grow no faster than F(n) for all sufficiently large n, ignoring multiplicative and additive constants.

i.e.

O(F(n)) = { f(n) : f(n) <= c * F(n) for some constant c > 0, for all sufficiently large n }

In other words, it's an upper bound ignoring constants.

So, n is in the set O(n^2), n^2 is in the set O(n^3), and so on.

It's confusing because an algorithm which always takes 1 iteration is in O(n^n^n...), because big-O is an upper bound.

Big Theta is the one where the function is bounded both above and below, i.e. it equals F(n) ignoring constant multiplication or addition. This is also a set. This is usually what people mean when they say big-O.

It's really a measure of the growth rate of the functions we are comparing. So you look at the graphs of the functions and see how they vary as n increases. Some functions grow very rapidly, and some less.

Usually the functions are representative of how much time or memory an algorithm takes as the size of the input increases. For example, brute-force cracking a password is easy with two digits, but with a still pretty small number of digits it will take the age of the universe to complete. That's a bad thing.

Just remember Big-O is an upper bound, big Omega is a lower bound, and big Theta is a tight bound ( bound above and below ). Each of these three are sets of functions.
The wiki article on time complexity explains it quite well:
Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform.
...
The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms.
...
For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3).
- http://en.wikipedia.org/wiki/Time_complexity



> The complexity is O(log2 n) usually written simply as O(log n).

The complexity is logarithmic; expressed in the big O notation it is O(log n)

For any base b, logb n == C . log2 n == O(log n) (excluding the coefficient C)

and C1 . log2 n == C2 . logb n for suitable constants C1, C2.

If we think that the 2 in log2 n has any significance, we haven't even begun to understand the big O notation.
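The change-of-base rule behind this can be checked numerically; a small sketch (log_base is a hypothetical helper):

```cpp
#include <cmath>

// Change of base: log_b(n) = log2(n) / log2(b). The divisor
// log2(b) is a constant, so logarithms of every base differ
// only by a constant factor -- which big O discards.
double log_base(double b, double n)
{
    return std::log2(n) / std::log2(b);
}
```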
LOL. FAQ to the rescue!
http://www.cplusplus.com/faq/sequences/sequencing/sort-algorithms/#Big-O

(Make sure to click on the links to perlmonks and stackoverflow for more easy-to-follow reading.)
Hi All,
Thanks a lot for the clear and concise clarifications... These posts are really helpful.
@coder777, @Catfish666, @Smac89, @htirwin, @JLBorges, @Duoas


Thanks
Prasad
