In this post I am going to discuss what ‘complexity’ of an algorithm is and a method by which it is determined. I am not going to bore you guys by posting too much about theoretical concepts instead of actual algorithms. From the next post onward I am going to talk only about new algorithms. But it is important that you understand this concept.

Algorithm complexity tells us about how fast or slow an algorithm performs. We use the Big O notation to classify algorithms on the basis of how they respond to input of different sizes. I am going to describe it briefly. The Big O notation basically tells us “what is the execution time of the algorithm directly proportional to”. If I say that the complexity of an algorithm is O(n^2), it means that the execution time of that algorithm is proportional to the square of the input size. Consider the following code snippet:

for(i=0;i<n;i++) {
    for(j=0;j<n;j++) {
        //some code here
    }
}

The code inside the two loops will execute n^2 times. Therefore, the complexity of the above code snippet is O(n^2), because the execution time is directly proportional to the square of ‘n’.

Another example, complexity of the following code is O(n^3):

for(i=0;i<n;i++) {
    for(j=0;j<n;j++) {
        for(k=0;k<n;k++) {
            //some code here
        }
    }
}

Similarly, the following algorithm for printing all elements of an array of size ‘n’ is an O(n) algorithm. In other words, it has linear complexity.

for(i=0;i<n;i++) {
    printf("%d ",array[i]);
}

If I have to write a program for adding two numbers, I’ll write it as follows:

int number1, number2;
scanf("%d %d",&number1,&number2);
int sum=number1+number2;
printf("%d",sum);

The execution time of this algorithm does not depend on the input (number1 and number2). In this case, we say that the complexity of this code is O(1), i.e. constant complexity.

One more example,

scanf("%d",&n);
for(i=1;i<n;i*=2) {
    //some code here
}

The complexity of the above algorithm is O(logn), as the code inside the loop executes [log(n)] times. ( [num] means that num is rounded up to the closest integer). The base for log is 2.

E.g.,

for n=8, the values of i for different iterations are i = 1, 2, 4 (the loop stops when i reaches 8, since 8 < 8 is false). Therefore, 3 iterations, and [log(8)] = 3.

For n=31, i = 1, 2, 4, 8, 16. Therefore, 5 iterations, and [log(31)] = 5.

For n=33, i = 1, 2, 4, 8, 16, 32. Therefore, 6 iterations, and [log(33)] = 6.

So, the number of times the for loop iterates is directly proportional to log(n).

If I nest code with O(logn) complexity inside code with O(n) complexity, I get code with complexity O(n*logn). For example,

scanf("%d",&n);
for(j=0;j<n;j++) {
    for(i=1;i<n;i*=2) {
        //some code here
    }
}

The execution time of the above code is proportional to n*log(n); therefore, its complexity is O(n*logn).

O(1) algorithms are the best (obviously), but a constant-time solution is not possible for most problems. Algorithms of higher order (n^3, n^4, …) are not preferred; they are the main culprits behind the TLE (Time Limit Exceeded) verdict.

Common algorithm complexities, from best to worst, are

O(1) < O(logn) < O(n) < O(n*logn) < O(n^2) < O((n^2)*logn) < O(n^3) < …

Counting sort, which we previously discussed, has complexity O(n) (more precisely O(n+k), where k is the range of the values; for a small fixed range this is effectively O(n)). In counting sort, an array of size ‘n’ is traversed 3 times. Therefore, the execution time is directly proportional to n.

Other basic sorting algorithms like Selection sort and Bubble sort have complexity O(n^2); Quick sort is also O(n^2) in the worst case, although it runs in O(n*logn) time on average. Efficient sorting algorithms like Merge sort and Heap sort have complexity O(n*logn), and are therefore preferred over the O(n^2) sorting algorithms.

The Big O notation is widely used to compare algorithms. While introducing new algorithms I too will mention their complexity in this manner.

I came across a page that contains some interesting information.