Algorithm

Algorithm #12: Matrix Exponentiation

Please read the previous post on Binary Exponentiation before you start with this one.

Let's first understand what a recurrence relation is. You probably know about the Fibonacci Series. It is a sequence of numbers in which the first number is 0, the second number is 1 and every subsequent number is determined using the formula:
f(n) = f(n-1) + f(n-2)

An equation such as the one above, in which one term of a sequence is defined using the previous terms, is called a Recurrence relation. Therefore, relations like
f(n)=f(n-3) + f(n-2) + f(n-1) [ Tribonacci Series ]
or
f(n)=3*f(n-1) + 7*f(n-2) [ an arbitrary example ]
etc.
are recurrence relations.

If we are given the problem to find the nth Fibonacci number modulo a prime number M, the naive solution would be like this:

long long findFibonacci(long long n,long long M)
{
  if(n==1)
    return 0;
  if(n==2)
    return 1;
  long long i,prevterm=0,prevterm2=1,thisterm;
  for(i=3;i<=n;i++)
  {
    thisterm=(prevterm+prevterm2)%M;//f(i) = f(i-2) + f(i-1), taken modulo M at each step
    prevterm=prevterm2;//prevterm now holds f(i-1)
    prevterm2=thisterm;//prevterm2 now holds f(i)
  }
  return thisterm;
}

In fact, we can write code to find the nth element of any recurrence relation in a similar manner.
The problem with this code is that it has O(n), i.e. linear, complexity.

Matrix exponentiation is a faster method that can be used to find the nth element of a series defined by a recurrence relation.
We’ll take Fibonacci series as an example.

In matrix exponentiation, we first convert the addition in a recurrence relation to multiplication. The advantage of doing this will become clear as you read on.

So the question is: how can we convert the addition in a recurrence relation to multiplication? The answer is matrices!

The general recurrence relation for a series in which a term depends on the previous 2 terms is:
f(n) = a*f(n-1) + b*f(n-2)
( For Fibonacci, a=1 and b=1 )
The matrix form of this equation is:

| f(n)   | =  | p  q | X | f(n-1) |
| f(n-1) |    | r  s |   | f(n-2) |

For convenience let
| p  q | = Z
| r  s |

Therefore, we get
f(n) = p * f(n-1) + q * f(n-2)
and
f(n-1) = r * f(n-1) + s * f(n-2)

For each recurrence relation, the values of p,q,r and s will be different.
On solving the above equations for the Fibonacci sequence (compare the coefficients of the first equation with f(n) = f(n-1) + f(n-2), and of the second with the identity f(n-1) = f(n-1)), we get p=1, q=1, r=1 and s=0.

So, the Z matrix for Fibonacci sequence is

| 1  1 |
| 1  0 |

and the matrix form for f(n) = f(n-1) + f(n-2) is:

| f(n)   | = | 1  1 | X | f(n-1) |
| f(n-1) |   | 1  0 |   | f(n-2) |

Now let's get to the method for finding the nth element.
Initially we have the matrix,

| f(2) |
| f(1) |

Using the matrix form of Fibonacci series given above, if we have to find the next Fibonacci number, i.e. f(3), we will multiply Z matrix by the above matrix:

| 1  1 |  X | f(2) | = | f(3) |
| 1  0 |    | f(1) |   | f(2) |
If we again multiply Z with | f(3) | , we'll get | f(4) |
                            | f(2) |             | f(3) |

So, we have the following equation for the nth Fibonacci number.

| f(n)   | = Z^(n-2) X | f(2) |
| f(n-1) |             | f(1) |
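
As a quick sanity check, take n=4, so that Z^(n-2) = Z^2:

| 1  1 | X | 1  1 | = | 2  1 |
| 1  0 |   | 1  0 |   | 1  1 |

and

| 2  1 | X | f(2) | = | 2  1 | X | 1 | = | 2 | = | f(4) |
| 1  1 |   | f(1) |   | 1  1 |   | 0 |   | 1 |   | f(3) |

which matches f(4)=2 and f(3)=1.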

So, we have successfully changed the addition in the recurrence equation to multiplication.
But now what??
As I mentioned in my previous post, we have an algorithm called Binary Exponentiation that can compute base^power in O(log power) time.
Our job now is to find Z^(n-2), which we can do using Binary Exponentiation in O(log n) time.

Z^(n-2) will then be multiplied by | f(2) | and we'll get | f(n)   |
                                   | f(1) |               | f(n-1) |

Of course, there is the small matter that multiplying matrices adds some overhead. But that overhead is tiny compared to the speed-up we obtain by reducing O(n) to O(log n).

Here is the Matrix Exponentiation code for finding the nth Fibonacci number.
Compare it with the iterative version of Binary Exponentiation. You’ll observe that the only change is that we are now performing matrix multiplication instead of simple integer multiplication.
That’s why this algorithm is called Matrix Exponentiation.

void matmult(long long  a[][2],long long  b[][2],long long c[][2],long long  M)//multiply matrix a and b. put result in c
{
	int i,j,k;
	for(i=0;i<2;i++)
	{
		for(j=0;j<2;j++)
		{
			c[i][j]=0;
			for(k=0;k<2;k++)
			{
				c[i][j]+=(a[i][k]*b[k][j]);
				c[i][j]=c[i][j]%M;
			}
		}
	}

}
void matpow(long long Z[][2],long long n,long long ans[][2],long long M)
//find ( Z^n )% M and return result in ans
{

	long long temp[2][2];
	//assign ans= the identity matrix
	ans[0][0]=1;
	ans[1][0]=0;
	ans[0][1]=0;
	ans[1][1]=1;
	int i,j;
	while(n>0)
	{
		if(n&1)
		{
			matmult(ans,Z,temp,M);
			for(i=0;i<2;i++)
				for(j=0;j<2;j++)
					ans[i][j]=temp[i][j];
		}
		matmult(Z,Z,temp,M);
		for(i=0;i<2;i++)
			for(j=0;j<2;j++)
				Z[i][j]=temp[i][j];


		n=n/2;
	}
	return;
	
}
long long findFibonacci(long long n,long long M)
{
	
	long long fib;
	if(n>2)
	{
		long long int Z[2][2]={{1,1},{1,0}},result[2][2];//modify matrix Z[][] and the initial terms for other recurrence relations
		matpow(Z,n-2,result,M);
		fib=(result[0][0]*1 + result[0][1]*0)%M;	//final multiplication of Z^(n-2) with the initial terms f(2)=1 and f(1)=0
	}
	else
		fib=n-1;
		
	return fib;
}
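
As a quick sanity check of the code above (under the convention used here that f(1)=0 and f(2)=1, the 10th term is 34):

printf("%lld\n",findFibonacci(10,1000000007));//prints 34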

The challenging part of this algorithm is to find the Z matrix for a recurrence relation.
For the recurrence relation: f(n) = f(n-1) + 2*f(n-2) + 3*f(n-3), we have

| f(n)   |    | p  q  r |   | f(n-1) |
| f(n-1) | =  | s  t  u | X | f(n-2) |
| f(n-2) |    | v  w  x |   | f(n-3) |

Write out the equations for f(n), f(n-1) and f(n-2) from the above matrix equation and you’ll find that the Z matrix is:

| 1  2  3 |
| 1  0  0 |
| 0  1  0 |
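
For relations of higher order, only the matrix dimensions change. As a minimal sketch (the macro K and the function name matmultK are my own; matpow() generalizes in exactly the same way), the K x K version of matmult() looks like this:

#define K 3 //order of the recurrence

void matmultK(long long a[][K],long long b[][K],long long c[][K],long long M)//c = (a X b) % M
{
	int i,j,k;
	for(i=0;i<K;i++)
	{
		for(j=0;j<K;j++)
		{
			c[i][j]=0;
			for(k=0;k<K;k++)
				c[i][j]=(c[i][j]+a[i][k]*b[k][j])%M;
		}
	}
}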

Related Problems:
CLIMBING STAIRS (Codechef)


Algorithm #11: Binary Exponentiation

Some programming problems require us to find the value of base^power modulo some positive prime number M (power>=0).
If you had to find the value of ( base^power ) % M, how would you do it?
The easiest method that comes to mind is a simple loop:

long long findPower(long long base,long long power,long long M)
{
	long long ans=1;
	int i;
	for(i=1;i<=power;i++)
		ans=(ans*base)%M;
	return ans;
}

The above method is simple but not at all efficient. For values of power around 10^8 or more, this method will take a lot of time to run. If you increase the value of power to 10^16, you’ll pass out from college before the computation ends! Increase it to around 10^25, the sun will become a Red Giant and swallow Earth before the value is computed!!

But why would we need to find such large values :P. Yeah that’s a valid question; but it shouldn’t stop us from learning a better algorithm!

There is a much better algorithm than the linear one described above, which is the topic of this post.
I’ve already introduced Binary Exponentiation as a part of an earlier post. But, I need to formalize it for the next post.

Binary Exponentiation is based on the idea that,
to find base^power, all we need to do is find base^(power/2) and square it. And this method can be repeated in finding base^(power/2) also.

Suppose that we need to find 5^8.
5^8=5^4 * 5^4
5^4=5^2 * 5^2
5^2=5 * 5

The number of steps required to find 5^8 has been reduced from 8 to just 3.
As another example, consider 8^51
8^51 = 8^25 * 8^25 * 8
8^25 = 8^12 * 8^12 * 8
8^12 = 8^6 * 8^6
8^6 = 8^3 * 8^3
8^3 = 8 * 8 * 8

If we used the linear algorithm, we would have required 51 multiplications. But, using this awesome trick, we needed only about 8 (at most two per line above).
In general, we require on the order of O(log2 n) steps.

This algorithm can be implemented recursively as well as iteratively.

RECURSIVE IMPLEMENTATION:

long long fastPower(long long base,long long power,long long M)
{
    if(power==0)
        return 1;
    if(power==1)
        return base%M;
    long long halfn=fastPower(base,power/2,M);
    if(power%2==0)
        return ( halfn * halfn ) % M;
    else
        return ( ( ( halfn * halfn ) % M ) * base ) % M;
}

ITERATIVE IMPLEMENTATION:

long long int fastPower(long long base,long long power,long long M)
{
        long long result=1;
        while (power>0) 
        {
                if (power%2==1)         
                        result = (result*base)%M;
                base = (base*base)%M;
                power/=2;
        }
        return result;
}

The iterative implementation can be a little tricky to understand. Take a pen and paper and trace the values of the variables through the iterations. That's an effective way to understand it and convince yourself.
The iterative version also runs a little faster than the recursive version.
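
For example, here is a trace of fastPower(2,13,1000000007); 13 in binary is 1101:

power=13 (odd)  : result=1*2=2;        base=2*2=4;      power=6
power=6  (even) :                      base=4*4=16;     power=3
power=3  (odd)  : result=2*16=32;      base=16*16=256;  power=1
power=1  (odd)  : result=32*256=8192;  base=256*256;    power=0

The function returns 8192, which is indeed 2^13.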

Binary exponentiation will find the value of base^(10^25) in only about 85 steps. Now that's really cool…

The maximum number of multiplications required to compute base ^ power is 2 x floor( log2(power) ). Think why 😛

Algorithm #10: Disjoint Set Union

Disjoint Set Union (DSU) or Union-Find is a graph algorithm that is very useful in situations when you have to determine the connected components in a graph.

Suppose that we have N nodes numbered from 1 to N and M edges. The graph can be disconnected and may have multiple connected components. Our job is to find out how many connected components are there in the graph and the number of nodes in each of them.

The basic idea behind DSU is the following:
Initially, all nodes are isolated i.e. there are no edges in the graph. We add edges to the graph one by one.
While adding an edge, check if the two nodes that the edge connects are already present in the same connected component.
– if they are, then do nothing.
– if they aren’t, then make the smaller of the two connected components a part of the larger one.
(So, we are making union of disjoint connected components; therefore the name Disjoint Set Union)

To keep track of what connected component a node belongs to, we use an array named parent[ ].
parent[ i ] tells the ID of the connected component that the ith node belongs to. The ID of a connected component is one of the nodes in that connected component. This node is kind of the leader or parent of all other nodes in that connected component.
Initially, as all nodes are isolated, we have N connected components; each node being the leader of their connected component. Therefore, parent[ i ]=i for all 1<=i<=N.

To keep track of the size of a connected component, we use the size[ ] array.
size[ i ] = the number of nodes in the ith connected component.
Initially, size[ i ] =1, for all 1<=i<=N. This is because, initially, all connected components contain only one node.

When we encounter an edge that connects two nodes a and b that belong to different connected components, we first check which of the two connected components is bigger: the one that a belongs to or the one that b belongs to. The smaller connected component becomes part of the larger one. This is done to reduce the number of nodes whose parent has to be changed.

In the code below, notice that we use the function call findParent( i ) to find the parent of the ith node, instead of directly looking at parent[ i ]. The reason for this is:
The parent of a node is not changed as soon as its affiliation to a connected component is changed. We postpone this to when we actually need to find the parent of the node. Doing this avoids many useless operations. So, parent[ i ] may not contain the updated value of the connected component that i belongs to. That’s why it’s important that we use findParent( i ) instead of being lazy and taking the value directly from parent[ i ].

In the end, we need to consider those nodes i that have findParent( i )== i. This is because, these are the nodes that still belong to their initial connected component and were not assigned to a different one during execution. These represent the disjoint connected components we are looking for.

So the complete code for DSU is as follows:


#include<stdio.h>
int findParent(int i,int parent[])
//function to find the connected component that the ith node belongs to
{
	if(parent[parent[i]]!=parent[i])//path compression: make i point (nearly) directly to its leader
        parent[i]=findParent(parent[i],parent);
	return parent[i];
}
void unionNodes(int a,int b,int parent[],int size[])
//to merge the connected components of nodes a and b
{
	int parent_a=findParent(a,parent),parent_b=findParent(b,parent);
	if(parent_a==parent_b)
        return;
	if(size[parent_a]>=size[parent_b])//the larger connected component eats up the smaller one
	{
         size[parent_a]+=size[parent_b];
         parent[parent_b]=parent_a;
	}
	else
	{
         size[parent_b]+=size[parent_a];
         parent[parent_a]=parent_b;
	}
	return;
}

int main()
{

    int N,M,i,a,b;
    scanf(" %d %d",&N,&M);
    int parent[100001]={0},size[100001]={0};
    for(i=1;i<=N;i++)
    {
        parent[i]=i;
        size[i]=1;
    }

    for(i=1;i<=M;i++)
    {
        //scan each edge and merge the connected components of the two nodes
        scanf(" %d %d",&a,&b);
        unionNodes(a,b,parent,size);
    }

    for(i=1;i<=N;i++)
        printf("Node %d belongs to connected component %d\n",i,findParent(i,parent));
    int nos=0;
    for(i=1;i<=N;i++)
    {
        if(findParent(i,parent)==i)//this condition is true only for disjoint connected components
        {
            printf("%d leader %d size\n",i,size[i]);
            nos++;
        }

    }
    printf("Total connected components : %d",nos);

	return 0;
}

Comparison between DFS and DSU:
The task that DSU achieves in this code can be done using DFS as well. You should try to code the same using DFS too.

Some related problems are:
GALACTIK FOOTBALL (Codechef)
FIRE ESCAPE ROUTES (Codechef)

Keep in mind that DFS is not a replacement for DSU. DFS works well in cases when all edges are present in the graph from the beginning. But in problems where edges are added during execution, and we need to run connectivity queries in between such additions, DSU is the better option. An example of this type of situation is Kruskal's algorithm for finding the Minimum Spanning Tree (MST).
In Kruskal's algorithm, before adding an edge to the MST we need to check if the addition of the edge creates a cycle or not. We can use DSU here. If the parents of the two nodes that the edge connects are the same, then we know that adding the edge would create a cycle.
Try implementing Kruskal’s algorithm for MST using DSU by yourself. It’s quite simple once you know DSU.
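
If you want a starting point, here is a minimal sketch (it reuses findParent() and unionNodes() from the code above, and assumes parent[] and size[] are initialized exactly as in main(); the Edge struct, the comparator and the name kruskalMST are my own):

#include <stdlib.h>

typedef struct { int a, b, w; } Edge;//an edge between nodes a and b with weight w

int cmpEdge(const void *x,const void *y)//for qsort: ascending order of weight
{
    return ((const Edge*)x)->w - ((const Edge*)y)->w;
}

long long kruskalMST(Edge edges[],int M,int parent[],int size[])
{
    long long cost=0;
    int i;
    qsort(edges,M,sizeof(Edge),cmpEdge);
    for(i=0;i<M;i++)
    {
        //adding this edge creates a cycle iff both its endpoints
        //already have the same leader
        if(findParent(edges[i].a,parent)!=findParent(edges[i].b,parent))
        {
            cost+=edges[i].w;
            unionNodes(edges[i].a,edges[i].b,parent,size);
        }
    }
    return cost;//total weight of the MST
}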

Problems for MST:
COAL SCAM (Codechef)

Algorithm #9 : Depth- and Breadth- First Search

This post is about the graph traversal algorithms, Depth First Search (DFS) and Breadth First Search (BFS).
BFS and DFS are two fundamental algorithms for graph exploration. Graph exploration means discovering the nodes of a graph by following its edges. We start at one node and then follow edges to discover all nodes in the graph. The choice of the first node may be arbitrary or problem specific.

The difference between BFS and DFS is the order in which the nodes of a graph are explored.
If you are not already familiar with BFS and DFS in theory, I recommend that you read about them first, because I'm going to focus more on their implementation here.

In a nutshell, DFS continues on one path and explores it completely before going down another path.
But in BFS we progress equally in all possible paths.

The following gifs will give you a good general idea about the two.
Here, the nodes are numbered according to the order in which they are explored.

[Animation: Depth First Search]

[Animation: Breadth First Search]

[Both the images were taken from commons.wikimedia.org]

IMPLEMENTATION:
We can use any of the four graph representation methods that I introduced in my post Representation of Graphs. In this post we’ll use Adjacency list and assume that the input is the edges in form of pairs of positive integers (i.e. Type 2, if you refer to my post). The nodes are numbered from 1 to n.

For DFS:
DFS is implemented using a stack data structure. Since recursion uses the call stack internally, we can simply use recursion as follows:

int adjlist[101][101]={0};
int degree[101]={0};
int done[101]={0};//this array marks if a node has already been explored
void dfs(int at)
{
	if(done[at]==1)//if the node has already been explored, then return
		return;
	printf("At node %d\n",at);
	done[at]=1;
	int i=0;
	while(i<degree[at])//for each of the edges on this node
	{
		dfs(adjlist[at][i]);
		i++;
	}
	return;
}
int main()
{
    int n,m,i,a,b;
    scanf(" %d %d",&n,&m);
    i=0;
    while(i<m)
    {
        scanf(" %d %d",&a,&b);
        adjlist[a][degree[a]]=b;
        degree[a]++;
        adjlist[b][degree[b]]=a;
        degree[b]++;
        i++;
    }
    dfs(1);//start with any node. node 1 is the first node here
    return 0;
}

For BFS:
BFS needs a Queue data structure for its implementation. Here I use an array queue[] and integers front and rear to implement Queue.


int main()
{
    int n,m,i,a,b;
    int adjlist[101][101]={0};
    int degree[101]={0};
    scanf(" %d %d",&n,&m);
    i=0;
    while(i<m)
    {
        scanf(" %d %d",&a,&b);
        adjlist[a][degree[a]]=b;
        degree[a]++;
        adjlist[b][degree[b]]=a;
        degree[b]++;
        i++;
    }
    int queue[101],front=0,rear=0;
    int done[101]={0};//this array marks if a node has already been explored
    int at;
    queue[rear]=1;//start with any node. node 1 is the first node here
    rear++;
    done[1]=1;
    while(front!=rear)
    {
        at=queue[front];
        printf("At node %d\n",at);
        front++;
        for(i=0;i<degree[at];i++)
        {
            if(done[adjlist[at][i]]!=1)
            {
                queue[rear]=adjlist[at][i];
                rear++;
                done[adjlist[at][i]]=1;
            }
        }
    }
    return 0;
}

The array done[] is used to mark the nodes that have already been visited. This has to be done to stop the code from re-discovering already visited nodes and running forever.

Above is a bare-bones implementation of the two algorithms. It does nothing more than explore the graph. But beyond just exploring the graph, DFS and BFS can be used to compute other information too.
For example, if we have a tree as input, we can modify the above DFS code to compute the depth of each node in the tree, and also the size of the sub-tree rooted at each node.

int adjlist[101][101]={0};
int degree[101]={0};
int depth[101]={0};
int sizeofsubtree[101]={0};
int done[101]={0};//this array marks which node has already been explored
int dfs(int at,int currentdepth)
{
	if(done[at]==1)//if the node has already been explored, then return
		return 0;
	depth[at]=currentdepth;
	printf("Node %d at depth %d\n",at,depth[at]);
	done[at]=1;
	int i=0,size=1;//initialised to 1 as current node is also part of the sub-tree rooted at current node
	while(i<degree[at])//for each of the edges on this node
	{
		size+=dfs(adjlist[at][i],currentdepth+1);
		i++;
	}
	sizeofsubtree[at]=size;
	return sizeofsubtree[at];
}
int main()
{
    int n,m,i,a,b;
    scanf(" %d %d",&n,&m);
    i=0;
    while(i<m)
    {
        scanf(" %d %d",&a,&b);
        adjlist[a][degree[a]]=b;
        degree[a]++;
        adjlist[b][degree[b]]=a;
        degree[b]++;
        i++;
    }
    dfs(1,0);//start with root node. assuming that node 1 is the root node here
    i=1;
    printf("Depth of subtrees:\n");
    while(i<=n)
    {
        printf("Rooted at %d: %d\n",i,sizeofsubtree[i]);
        i++;
    }
    return 0;
}

A second variable currentdepth is passed to each dfs() instance; it represents the depth of the current node. Notice that the recursive call passes currentdepth+1 to dfs(), because a child of the current node has a depth one more than its parent.

Each recursive instance of dfs() returns the size of the sub-tree rooted at a node.
At every node, we sum up the values returned by dfs() for each child node ( this is the size+=dfs(...) line ). The size of the sub-tree rooted at a node is the sum of the sizes of the sub-trees rooted at its children, plus 1. This is how the sizes of all sub-trees are computed.

COMPARISON BETWEEN DFS AND BFS:
If all the nodes of a graph have to be discovered, then BFS and DFS both take equal amount of time. But, if we want to search for a specific node, both algorithms may differ in execution time.

DFS is more risky compared to BFS. If a node has more than one edge leading from it, the choice of which edge to follow first is arbitrary.
As we don't have any intelligent way of choosing which edge to follow first, the required node may be present down the first edge that we choose, or down the last edge that we choose from that node. In the former case, DFS will find the node very quickly; in the latter case, DFS will take a lot of time. If we take a wrong path at some node, DFS has to completely traverse that whole path before it can go down another one. That's why DFS is more risky than BFS.

In BFS, all paths are explored equally. So, in some cases the search may be a little slower than DFS, but the advantage of BFS is that it doesn't arbitrarily favor one path over another.

BFS is very useful in problems where you have to find the shortest path. This is because BFS explores closer nodes first. So, when we find the node for the first time, we can be sure that this is the shortest path to it (see the sketch below). Whereas in DFS, we'd have to find all possible paths and then select the shortest one.
BFS can also be used in checking if the graph is bipartite.
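
To make the shortest-path idea concrete, here is a minimal sketch (it reuses the global adjlist[]/degree[] arrays from the DFS code above; the function name bfsDistances and the dist[] array are my own):

void bfsDistances(int src,int n,int dist[])//dist[v] = least number of edges from src to v
{
    int queue[101],front=0,rear=0,i,at;
    for(i=1;i<=n;i++)
        dist[i]=-1;//-1 doubles as the "not yet visited" mark
    queue[rear]=src;
    rear++;
    dist[src]=0;
    while(front!=rear)
    {
        at=queue[front];
        front++;
        for(i=0;i<degree[at];i++)
        {
            if(dist[adjlist[at][i]]==-1)
            {
                dist[adjlist[at][i]]=dist[at]+1;//first discovery = shortest path
                queue[rear]=adjlist[at][i];
                rear++;
            }
        }
    }
}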

DFS is useful in problems where we have to check connectivity of graph and in topological sorting.

Suppose we have an infinite graph. If we use DFS to find a specific node, the search will never end if the node is not in the first path that the algorithm chooses. But, given sufficient time, BFS will be able to find it.

COMPLEXITY OF BFS AND DFS:
The complexity of DFS and BFS is O(N + E), where N is the number of nodes and E is the number of edges.
Of course, the choice of graph representation also matters. If an adjacency matrix is used, they take O(N^2) time (N^2 is the maximum number of edges that can be present). If an adjacency list is used, DFS/BFS take O(N + E) time.

Related Problems:

Codechef : Fire Escape Routes
Hackerearth : Quidditch Practice Problem
Hackerearth : Close call

Algorithm #8: Dynamic Programming for Subset Sum problem

Up to now I have posted about two methods that can be used to solve the subset sum problem: Bitmasking and Backtracking. Bitmasking was a brute force approach and backtracking was a somewhat improved brute force approach.

In some cases, we can solve the subset sum problem using Dynamic Programming. (Note that I said “in some cases”). This post is an introduction to Dynamic programming.

Okay, so first questions first… What is Dynamic Programming?
Dynamic programming is not an algorithm in itself; it is a programming strategy. The basic idea behind DP is that while computing the answer, we store the results of intermediate computations so that we don't need to recompute them later. We first compute the answers to smaller versions of our problem and store them. These intermediate answers help us compute the actual answer, and as they are already stored somewhere, we don't need to recalculate them every time we need them. This saves a lot of time. That's DP in a nutshell. There is much more to the definition of Dynamic Programming that you can read on Wikipedia. But for now, this simple definition will do.
This discussion on quora might help.

Let’s look at Subset Sum problem again:

We are given a set of N numbers. Our objective is to find out if it is possible to select some numbers from this set such that their sum exactly equals M. Eg. If we have the numbers {3,5,3,1} and M=8, then the answer is “yes” because of the subset {3,5}. If M=10, then the answer is “no” because no subset has sum=10.

The DP solution looks like this:

#include<stdio.h>
int main()
{
    int N,M;
    scanf(" %d %d",&N,&M);
    int nums[N+1],i,j,ispossible[N+1][M+1];
    for(i=1;i<=N;i++)
        scanf(" %d",&nums[i]);
    for(i=0;i<=N;i++)
        for(j=0;j<=M;j++)
            ispossible[i][j]=0;//initialising to 0
    for(i=0;i<=N;i++)
        ispossible[i][0]=1;
    for(i=1;i<=N;i++)
    {
        for(j=0;j<=M;j++)
        {
            if(ispossible[i-1][j]==1)
                ispossible[i][j]=1;
            if(j-nums[i]>=0 && ispossible[i-1][j-nums[i]]==1)
                ispossible[i][j]=1;
        }
    }
    if(ispossible[N][M]==1)
        printf("Yes");
    else
        printf("No");
    return 0;
}

I’ve taken the input in nums[1], nums[2], … , nums[N].
I’ve used a 2-D array named ispossible[ ][ ]. It has size (N+1) x (M+1).
ispossible[i][j] can be either 0 or 1. Its value is 1 if it is possible to select some numbers from the set {nums[1],nums[2],nums[3],…,nums[i]} so that their sum equals j; otherwise it is 0.

The logic is really simple:

Our final goal is to find the value of ispossible[N][M], which will be 1 if it is possible to obtain a subset of {nums[1],nums[2],nums[3],…,nums[N]} with sum equal to M.
To find the value of ispossible[N][M] we need to find the value of each element in the two dimensional array ispossible[ ][ ].

In general, ispossible[i][j] = 1 iff one of the following two conditions is true:

1. ispossible[i-1][j] is 1. If ispossible[i-1][j] is 1 it means that it is possible to obtain a sum of j by selecting some numbers from {nums[1],nums[2],nums[3],…,nums[i-1]}, so obviously the same is also possible with the set {nums[1],nums[2],nums[3],….,nums[i-1],nums[i]}.

2. ispossible[i-1][j-nums[i]] is 1. If ispossible[i-1][j-nums[i]] is 1, it means that it is possible to obtain a sum of (j-nums[i]) by selecting numbers from {nums[1], nums[2],…,nums[i-1]}. Now if we select nums[i] too, then we can obtain a sum of (j-nums[i])+nums[i] = j. Therefore ispossible[i][j] is 1.

The above two points serve as the DP equation for calculating all values in ispossible[ ][ ].

Note that it is important to first set ispossible[i][0] = 1 (for 0<=i<=N), because it is obviously possible to obtain a sum of 0 from any prefix of the set: select nothing.
This algorithm has time complexity O(N*M) and space complexity O(N*M).
The drawback of this algorithm, and the reason why I said “in some cases” before, is that if N*M is too large, then an array of the required size cannot be declared. So, this method works well only in those cases where N*M is at most around 10^8.
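
A standard refinement worth knowing (this is a common space optimization, not part of the code above): since row i only reads row i-1, the whole table can be collapsed into a single 1-D array of size M+1, as long as j is traversed from M downwards so that each number is used at most once. A minimal sketch:

#include<stdio.h>
int main()
{
    int N,M;
    scanf(" %d %d",&N,&M);
    int nums[N+1],i,j,possible[M+1];
    for(i=1;i<=N;i++)
        scanf(" %d",&nums[i]);
    for(j=0;j<=M;j++)
        possible[j]=0;
    possible[0]=1;//a sum of 0 is always achievable: select nothing
    for(i=1;i<=N;i++)
        for(j=M;j>=nums[i];j--)//descending j ensures nums[i] is used at most once
            if(possible[j-nums[i]]==1)
                possible[j]=1;
    if(possible[M]==1)
        printf("Yes");
    else
        printf("No");
    return 0;
}

This brings the space complexity down to O(M) while the time complexity stays O(N*M).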

As I said earlier, Dynamic Programming is not a single algorithm but it is a programming strategy. A programming strategy that can be mastered only by practice. Dynamic programming questions are always unique and new. Each question has its own equation. By practicing enough DP questions you’ll learn how to recognize DP questions and how to think of their equation and solution.

RELATED PROBLEM:
Paying up ( Codechef ): This is a really simple DP problem that is exactly as the problem described in this post.
Fence ( Codeforces )
More questions can be found on Codeforces here.

“Output the answer modulo 10^9 + 7”

You might have noticed that many programming problems ask you to output the answer “modulo 1000000007 (10^9 + 7)”. In this post I’m going to discuss what this means and the right way to deal with this type of questions. I should have covered this topic earlier because questions involving this are not uncommon. Anyways, here it is…

First of all, I’d like to go through some prerequisites.

The modulo operation is the same as ‘ the remainder of the division ’. If I say a modulo b is c, it means that the remainder when a is divided by b is c. The modulo operation is represented by the ‘%’ operator in most programming languages (including C/C++/Java/Python). So, 5 % 2 = 1, 17 % 5 = 2, 7 % 9 = 7 and so on.


WHY IS MODULO NEEDED..

The largest integer data type in C/C++ is the long long int; its size is 64 bits and it can store integers from (-2^63) to (+2^63 - 1). Integers as large as 9 x 10^18 can be stored in a long long int.

But in certain problems, for instance when calculating the number of permutations of a size n array, even this large range may prove insufficient. We know that the number of permutations of a size n array is n!. Even for a small value of n, the answer can be very large. Eg, for n=21, the answer is 21! which is about 5 x 10^19 and too large for a long long int variable to store. This makes calculating values of large factorials difficult.

So, instead of asking for the exact value of the answer, the problem setters ask for the answer modulo some number M, so that the answer still remains in a range that can be stored easily in a variable.

Some languages such as Java and Python offer data types that are capable of storing arbitrarily large numbers. But data type size is not the only problem: as the size of a number increases, the time required to perform mathematical operations on it also increases.

There are certain requirements on the choice of M:
1. It should be large, yet small enough to fit in an int data type.
2. It should be a prime number.
10^9 + 7 fits both criteria; which is why you nearly always find 10^9 + 7 in modulo type questions.
I’ve explained the logic behind the 2nd point in NOTES.

HOW TO HANDLE QUESTIONS INVOLVING MODULO:

Some basic knowledge of modulo arithmetic is required to understand this part.

A few distributive properties of modulo are as follows:
1. ( a + b ) % c = ( ( a % c ) + ( b % c ) ) % c
2. ( a * b ) % c = ( ( a % c ) * ( b % c ) ) % c
3. ( a - b ) % c = ( ( a % c ) - ( b % c ) ) % c ( see note )
4. ( a / b ) % c NOT EQUAL TO ( ( a % c ) / ( b % c ) ) % c
So, modulo is distributive over +, * and - but not over / .

One observation that I’d like to make here is that the result of ( a % b ) will always be less than b.

If I were to write the code to find factorial of a number n, it would look something like this:

long long factorial(int n,int M)
{
	long long ans=1;
	while(n>=1)
	{
		ans=(ans*n)%M;
		n--;
	}
	return ans;
}

Notice that I performed the modulo operation at EACH intermediate stage ( ans=(ans*n)%M ).
It doesn’t make any difference if we first multiply all numbers and then modulo it by M, or we modulo at each stage of multiplication.
( a * b * c ) % M is the same as ( ( ( a * b ) % M ) * c ) % M

But in computer programs, due to variable size limitations, we avoid the first approach and perform modulo M at each intermediate stage so that range overflow never occurs.
So the following approach is wrong:

long long factorial(int n,int M)//WRONG APPROACH!!!
{
	long long ans=1;
	while(n>=1)
	{
		ans=ans*n;
		n--;
	}
	ans=ans%M;
	return ans;
}

The same procedure can be followed for addition too.
( a + b + c ) % M is the same as ( ( ( a + b ) % M ) + c ) % M
Again we prefer the second way while writing programs. Perform % M every time a number is added so as to avoid overflow.

The rules are a little different for division, and that is the main part of this post.
As I mentioned earlier,
( a / b ) % c is NOT EQUAL TO ( ( a % c ) / ( b % c ) ) % c, which means that the modulo operation is not distributive over division.
The following concept is most important to find nCr (ways of selecting r objects from n objects) modulo M. (As an example, I’ve included the code to find nCr modulo M at the end of this post)
To perform division in modulo arithmetic we need to first understand the concept of modulo multiplicative inverse.

Let's go over some basics first.

The multiplicative inverse of a number y is z iff (z * y) == 1.

Dividing a number x by another number y is same as multiplying x with the multiplicative inverse of y.

x / y == x * y^(-1) == x * z (where z is multiplicative inverse of y)

In normal arithmetic, the multiplicative inverse of y is y^(-1); which will correspond to some float value. Eg. Multiplicative inverse of 5 is 0.2, of 3 is 0.333… etc.
But in modulo arithmetic the definition of multiplicative inverse of a number y is a little different. The modulo multiplicative inverse ( MMI ) of a number y is z iff (z * y) % M == 1.

Eg. if M= 7 the MMI of 4 is 2 as ( 4 * 2 ) %7 ==1,
if M=11, the MMI of 7 is 8 as ( 7 * 8 )%11 ==1,
if M=13, the MMI of 7 is 2 as ( 7 * 2 ) % 13==1.
Observe that the MMI of a number may be different for different M.
So, if we are performing modulo arithmetic in our program and we need the result of the operation x / y, we should NOT perform

z=(x/y)%M;

instead we should perform

y2=findMMI(y,M);
z=(x*y2)%M;

Now one question remains: how do we find the MMI of a number n?
The brute force approach would look something like this:

int findMMI_bruteforce(int n,int M)
{
	int i=1;
	while(i<M)// we need to go only up to M-1
	{
		if(( (long long)i * n ) % M ==1)
			return i;
		i++;
	}
	return -1;//MMI doesn't exist
}

The complexity of this approach is O(M) and as M is commonly equal to 10^9 + 7, this method is not efficient enough.

There exist two other algorithms to find the MMI of a number. The first is the Extended Euclidean algorithm and the second uses Fermat's Little Theorem.

If you are new to modulo arithmetic, you'll probably not find these topics easy to understand.
These algorithms require prerequisites. If you want to read about them, they are very well explained in many online sources, and some of you will study them in depth in your college's algorithms course too.
I'll keep this post simple for now and only give the code using Fermat's Little Theorem.

long long fast_pow(long long base, long long n,long long M) 
{
    if(n==0)
       return 1;
    if(n==1)
        return base%M;
    long long halfn=fast_pow(base,n/2,M);
    if(n%2==0)
        return ( halfn * halfn ) % M;
    else
        return ( ( ( halfn * halfn ) % M ) * base ) % M;
}
int findMMI_fermat(int n,int M)
{
	return fast_pow(n,M-2,M);
}

This code uses a function fast_pow() that calculates the value of base^n. Its complexity is O(log n). It is a very efficient method of computing the power of a number.
It is based on the fact that to find a^n, we just need to find a^(n/2) and the required answer will be a^(n/2) * a^(n/2) if n is even; and a^((n-1)/2) * a^((n-1)/2) * a if n is odd ( if n is odd n/2 == (n-1)/2 in most programming languages ).

NOTE:

1. For M to be a prime number is really important. Because if it is not a prime number then it is possible that the result of a modulo operation may become 0. Eg. if M=12 and we perform ( 8 * 3 ) % 12, we’ll get 0. But if M is prime then ( ( a % M ) * ( b % M ) ) % M can never be 0 (unless a or b == 0)
[EDIT: Remember that in programming contests, M is greater than all the other values provided. So a case like (14*3)%7 can never occur. ]

2. If M is prime, then we can find the MMI of any number n such that 1<=n<M.
3. ( a - b ) % c = ( ( a % c ) - ( b % c ) ) % c is fine mathematically. But, while programming, don't use

// if b>a, you'll get a wrong result this way 
a=(a%c);
b=(b%c);
ans = ( a - b ) % c; 

instead use

a=a%c;
b=b%c;
ans =  ( a - b + c ) % c;

In C/C++, the % operator can return a negative result when its first operand is negative, which is why the + c is needed.
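
A concrete example of the pitfall in C:

#include<stdio.h>
int main()
{
    int c=5;
    printf("%d\n",(3-4)%c);//prints -1, not the mathematical residue 4
    printf("%d\n",((3-4)%c+c)%c);//prints 4, as desired
    return 0;
}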

IMPORTANT:
1. If n1 and n2 are int type variables and M=10^9+7, then the result of ( n1 * n2 ) % M will surely be < M ( and capable of fitting in a simple int variable ). BUT the value of ( n1 * n2 ) itself can be greater than the capacity of an int variable. Internally, ( n1 * n2 ) is computed first. So, to avoid overflow, either declare n1 and/or n2 as long long int OR use explicit type casting: ( ( long long ) n1 * n2 ) % M.

As an example, here is the code to find nCr modulo 1000000007, (0<=r<=n<=100000)


long long fast_pow(long long base, long long n,long long M)
{
    if(n==0)
       return 1;
    if(n==1)
        return base%M;
    long long halfn=fast_pow(base,n/2,M);
    if(n%2==0)
        return ( halfn * halfn ) % M;
    else
        return ( ( ( halfn * halfn ) % M ) * base ) % M;
}
int findMMI_fermat(int n,int M)
{
    return fast_pow(n,M-2,M);
}
int main()
{
    long long fact[100001];
    fact[0]=1;
    int i=1;
    int MOD=1000000007;
    while(i<=100000)
    {
        fact[i]=(fact[i-1]*i)%MOD;
        i++;
    }
    while(1)
    {
        int n,r;
        printf("Enter n: ");
        scanf(" %d",&n);
        printf("Enter r: ");
        scanf(" %d",&r);
        long long numerator,denominator,mmi_denominator,ans;
        //I declared these variable as long long so that there is no need to use explicit typecasting
        numerator=fact[n];
        denominator=(fact[r]*fact[n-r])%MOD;
        mmi_denominator=findMMI_fermat(denominator,MOD);
        ans=(numerator*mmi_denominator)%MOD;
        printf("%lld\n",ans);
    }
    return 0;
}

Algorithm #6: Backtracking

I think we are ready to discuss Backtracking now. Last time I posted about recursion; I hope it’ll help you with this topic.
When I told you about Bit Masking, I said that it is a brute force approach to solve the Subset Sum problem. And better algorithms than Bit Masking exist. Backtracking is one of those methods.

In my post about Bitmasking I discussed the Subset Sum problem. Subset sum is an NP-complete problem. In a nutshell, NP-complete is a class of computational problems for which no efficient solution, one that gives a reasonably good run time for very large inputs, has yet been found. The complexity of Bitmasking is O(2^n), and it becomes useless at n>25 due to the dramatic increase in the number of subsets that need to be analysed.

In subset sum problem, we are given a set of positive numbers. We are asked if it is possible to find a subset of this set such that the sum of numbers of the selected subset is exactly m ( a positive number).
Backtracking can be viewed as an attempt to improve the Bitmasking algorithm. Just to remind you, in Bitmasking we analyse all the possible subsets of the given set to find a possible solution subset.
But in backtracking, we will intelligently reject the subsets that we know for sure will not lead to a solution.

For example, suppose we have n=5 and the set is {5,31,3,7,6}. We have to select a subset such that the sum of numbers of the selected subset is 11. Now it is obvious that no subset containing the number 31 can have a sum of 11. So, no subset involving the number 31 will lead to a solution. So, we should not consider those subsets that have 31 in them. We should not waste our time analyzing those subsets. This is the principle of Backtracking. As soon as we come to know that selecting 31 will not lead to a solution, we do not continue analysing subsets with 31 in them.

Below is the code to solve the subset sum problem using backtracking:
backtrack() is a function that takes the set of numbers ( nums[ ] ), the index of the number whose fate is currently being decided ( at ), the total number of elements ( n ), and the sum that we require ( sumrequired ).
It returns 1 if it is possible to obtain a sum of exactly sumrequired, and 0 if it is not.

int backtrack(int nums[],int at,int n,int sumrequired)
{
    if(sumrequired==0)
        return 1;
    if(sumrequired<0)
        return 0;
    if(at==n)
        return 0;
    if(backtrack(nums,at+1,n,sumrequired-nums[at])==1)
        return 1;
    if(backtrack(nums,at+1,n,sumrequired)==1)
        return 1;
    return 0;
}

I mentioned in my post about recursion that programs implemented using recursion can be incredibly short and incredibly difficult to understand. This is an example of that!

Following are the important points about this algorithm:

1. The at variable tells which number’s fate we are deciding. If in a recursive instance at is i, it means that we are currently deciding the fate of the ith number.
2. If the sumrequired variable is 0, it means that we do not need to select any more numbers, so we return the success code (which is 1).
3. A negative sum of numbers can never be achieved because all numbers can have only positive values; so if sumrequired is negative, we return the failure code (which is 0).
4. The first recursive call ( backtrack(nums,at+1,n,sumrequired-nums[at]) ) represents the case where we select the current number ( nums[at] ). When we select the current number, our problem reduces to selecting numbers from the set {nums[at+1], nums[at+2],…, nums[n-1]} such that their sum is ( sumrequired - nums[at] ).
5. The second recursive call ( backtrack(nums,at+1,n,sumrequired) ) represents the case where we don't select the current number ( nums[at] ). If we don't select the current number, our problem reduces to selecting numbers from the set {nums[at+1], nums[at+2],…, nums[n-1]} such that their sum is sumrequired itself.

I defined in my previous post that recursion is when we decompose a problem into a smaller problem.
See points 4 and 5 carefully. I am reducing my current problem of finding a solution subset from the set {nums[at],nums[at+1],nums[at+2],…,nums[n-1]} to finding a solution subset from the set {nums[at+1], nums[at+2],…, nums[n-1]}.
Whenever I find a solution (i.e. when sumrequired == 0), all recursive calls terminate one by one and 1 is returned at each level.

If at any time sumrequired becomes less than 0, it means that the numbers we have selected so far already have a sum greater than the initial sumrequired. So there's no point in going further. Therefore, we return 0, which denotes that the current choice of elements will not lead to any solution.

In the test case I discussed before (with n=5, the set of numbers as {5,31,3,7,6} and sumrequired=11),
if we select the number 31, the value of sumrequired will become < 0 in the next recursive call. That recursive instance will immediately return 0 ( see point 3 ); it will not go on deciding the fate of the rest of the numbers. In this way we simply discard the subsets that include the number 31.

The worst case complexity of this algorithm is same as Bitmasking i.e. O(2^n). But it offers a better execution time as it deliberately and intelligently skips some subsets.
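
Here is a minimal driver for the function, assuming the input format is n and the required sum followed by the n numbers (the input format is my assumption):

#include<stdio.h>

int backtrack(int nums[],int at,int n,int sumrequired);//defined above

int main()
{
    int n,m,i;
    scanf(" %d %d",&n,&m);
    int nums[n];
    for(i=0;i<n;i++)
        scanf(" %d",&nums[i]);
    if(backtrack(nums,0,n,m)==1)
        printf("Yes");
    else
        printf("No");
    return 0;
}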

PROBLEMS:
PAYING UP (on Codechef)

As it is with all recursive solutions, the code may seem overwhelming at first. I suggest that you look at the order of recursive calls I showed in the last post ( here ) and try to trace the way recursive calls are made in this case.