Let's say we have a problem of size n. At each step of our algorithm (which we need to write), the problem shrinks to half of its previous size (n/2).
Step | Problem size |
---|---|
1 | n/2 |
2 | n/4 |
3 | n/8 |
4 | n/16 |
This halving continues until the problem cannot be reduced any further, i.e. n becomes equal to 1 and we exit the check condition.
Let's say the algorithm stops at the kth step (after k operations):

problem-size = 1 … (1)

But we also know that at the kth step, the problem size is:

problem-size = n/2^{k} … (2)
From (1) and (2):

n/2^{k} = 1, or

n = 2^{k}
Taking the log on both sides:
log_{e} n = k log_{e}2
or
k = log_{e} n / log_{e} 2
Using the change-of-base formula log_{x} m / log_{x} n = log_{n} m:
k = log_{2} n
or simply k = log n (the base is conventionally dropped in complexity analysis, since it only changes the result by a constant factor).
Now we know that our algorithm runs for at most log n steps, hence the time complexity is
O( log n)
A very simple code example to support the text above:

```cpp
for (int i = 1; i <= n; i = i * 2)
{
    // perform some constant-time operation; i doubles each pass,
    // so the loop body runs about log2(n) times
}
```
Now, if someone asks you how many steps that loop (or any other algorithm that halves its problem size) will take for n = 256, you can calculate it very easily:
k = log_{2} 256

k = log_{2} 2^{8} (using log_{a} a = 1)

k = 8
Another very good example of this pattern is the binary search algorithm:
```cpp
int bSearch(int arr[], int size, int item) {
    int low = 0;
    int high = size - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow of (low + high) / 2
        if (arr[mid] == item)
            return mid;        // found: the search space halved each pass
        else if (arr[mid] < item)
            low = mid + 1;     // discard the lower half
        else
            high = mid - 1;    // discard the upper half
    }
    return -1; // unsuccessful search
}
```