
Dynamic Programming

 
 
Dynamic Programming is a technique from Computer Science and Mathematics that optimizes solutions and performance by breaking a problem down into smaller subproblems.
In simple terms, Dynamic Programming means breaking a problem into smaller subproblems, solving each subproblem only once, storing the results in memory, and then reusing those results to solve larger subproblems or the original problem. The technique applies to problems with overlapping subproblems.
Dynamic Programming is usually implemented with memoization: storing the results of expensive function calls and returning the cached result when the same inputs occur again.
In C++, Dynamic Programming can be implemented using arrays, tables, or recursive functions. We take advantage of the fact that the subproblems share the same structure and solve each of them only once.
 
Memoization and Tabulation are the two methods used in Dynamic Programming to solve problems with overlapping subproblems efficiently. Both store the results of expensive function calls and return the cached result when the same inputs occur again.
 
Memoization :
Memoization is an approach where we store the results of expensive function calls and return the cached result when the same inputs occur again. It is implemented using recursion and caches results by mapping inputs to their outputs. The cache can be a hash map, an array, or any other data structure that allows fast access to stored results.
The algorithm first checks whether the function has already been called with the given input parameters. If not, the function computes the result as usual and stores it in memory. If it has, the function simply returns the stored result instead of recomputing the same operation.
Memoization is useful when the same computation is repeated many times with the same inputs: it saves time by not redoing work on input values that have already been seen. It is best suited for recursive problems that follow the top-down approach.
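
The pattern is easiest to see in isolation. Here is a minimal sketch in C++ (the function expensive_square and its input are made up purely to illustrate the check-compute-store cycle):

#include <unordered_map>
#include <iostream>
using namespace std;

unordered_map<int, long long> cache; // maps inputs to previously computed outputs

// Hypothetical expensive computation, used only to illustrate the caching pattern.
long long expensive_square(int x) {
    // 1. Return the stored answer if this input has been seen before.
    auto it = cache.find(x);
    if (it != cache.end()) return it->second;
    // 2. Otherwise compute the result as usual...
    long long result = (long long)x * x;
    // 3. ...store it, then return it.
    cache[x] = result;
    return result;
}

int main() {
    cout << expensive_square(12) << "\n"; // computed
    cout << expensive_square(12) << "\n"; // served from the cache
    return 0;
}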
 
Tabulation :
Tabulation, on the other hand, is an approach where we solve the problem by building a table from the bottom up. Tabulation is implemented using iteration: it computes the values of the smaller subproblems first and uses those results to solve the larger ones.
In tabulation, we start by solving the smallest subproblems and then use their results to solve larger subproblems. The result of each subproblem is stored in a table (often a 1D or 2D array), which is then used to solve the subsequent subproblems.
Tabulation is preferred when the problem follows a bottom-up approach and can be easily represented in a table. It is straightforward to implement and memory-efficient, since the table only needs to store the values of the subproblems rather than intermediate function parameters. As with memoization, each subproblem is solved only once.
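
To make the fill order concrete, here is a small illustrative sketch (the grid-paths problem is chosen only as an example of building a 2D table bottom-up; it is not discussed further in this article):

#include <iostream>
#include <vector>
using namespace std;

// Count paths from the top-left to the bottom-right of an r x c grid,
// moving only right or down, by filling a 2D table bottom-up.
long long count_paths(int r, int c) {
    // Cells in the first row or first column have exactly one path, so start the table at 1.
    vector<vector<long long>> table(r, vector<long long>(c, 1));
    for (int i = 1; i < r; i++) {
        for (int j = 1; j < c; j++) {
            // Each cell depends only on subproblems that have already been filled in.
            table[i][j] = table[i-1][j] + table[i][j-1];
        }
    }
    return table[r-1][c-1];
}

int main() {
    cout << count_paths(3, 3) << "\n"; // prints 6
    return 0;
}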
 
The main advantage of both Memoization and Tabulation is increased performance: costly computations are avoided by caching and storing results in memory. Both reduce computation time and increase the efficiency of the algorithm.
 
Example code :
 
Below are the plain recursive, memoized, and tabulated approaches to computing the Fibonacci sequence in C++, along with the time and space complexities of each approach.
 
1. Recursive Approach
#include <bits/stdc++.h>
using namespace std;

int fibonacci_recursive(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci_recursive(n-1) + fibonacci_recursive(n-2);
}

int main() {
    int n;
    cout << "Enter the number for Fibonacci series: ";
    cin >> n;
    cout << "Fibonacci of " << n << " is " << fibonacci_recursive(n);
    return 0;
}
 
The time complexity of the recursive approach is O(2^n): each call makes two more calls, leading to an exponential number of calls. The space complexity is O(n), since the depth of the call stack never exceeds n.
 
 
2. Memoized Approach
#include <bits/stdc++.h>
using namespace std;

unordered_map<int, int> memo; // To store the previous results

int fibonacci_memoized(int n) {
    if (n <= 1) {
        return n;
    }
    // Check if the result for this n is already in memo
    if (memo.find(n) != memo.end()) {
        return memo[n];
    }
    // Calculate and store the result of the new function call
    int result = fibonacci_memoized(n-1) + fibonacci_memoized(n-2);
    memo[n] = result;
    return result;
}

int main() {
    int n;
    cout << "Enter the number for Fibonacci series: ";
    cin >> n;
    cout << "Fibonacci of " << n << " is " << fibonacci_memoized(n);
    return 0;
}
 
In the memoized approach, we use an unordered_map (container) to store previous results and look up any value we need when solving for a subsequent number. The time complexity of this approach is O(n), as each number is calculated only once, and the space complexity is O(n), as we store the result for every number up to n.
 
 
3. Tabulated Approach
#include <bits/stdc++.h>
using namespace std;

int fibonacci_tabulated(int n) {
    if (n <= 1) {
        return n; // guard: the table below needs at least two entries
    }
    vector<int> table(n+1);
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++) {
        table[i] = table[i-1] + table[i-2];
    }
    return table[n];
}

int main() {
    int n;
    cout << "Enter the number for Fibonacci series: ";
    cin >> n;
    cout << "Fibonacci of " << n << " is " << fibonacci_tabulated(n);
    return 0;
}
 
In the tabulated approach, we create an array to hold the result for each number up to n, then fill it in by computing each value as the sum of the previous two. The time complexity is O(n), since we loop through the array once, and the space complexity is O(n), as we store the entire array of size n+1.
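
Because each entry of the table depends only on the two entries before it, this particular table can be collapsed to two variables. A possible O(1)-space refinement (not part of the three versions above) looks like this:

#include <iostream>
using namespace std;

// Space-optimized tabulation: keep only the last two Fibonacci values.
long long fibonacci_optimized(int n) {
    if (n <= 1) return n;
    long long prev = 0, curr = 1;        // F(0) and F(1)
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr;    // F(i) = F(i-1) + F(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}

int main() {
    cout << fibonacci_optimized(10) << "\n"; // prints 55
    return 0;
}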
 
 
 
0-1 Knapsack Problem :
 
The 0-1 Knapsack Problem is a classic problem in dynamic programming. The problem is as follows:
Given a set of items, each with a weight and a value, determine the items to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
This is a classic problem of optimization, as we need to maximize the value within a given weight limit.
Dynamic Programming Solution
This problem can be solved using the dynamic programming technique. The optimal solution is the maximum of the following two options:
  1. Include the n-th item in the knapsack and recur for the remaining n-1 items with the reduced weight capacity.
  2. Exclude the n-th item from the knapsack and recur for the remaining n-1 items with the same weight capacity.
The base condition for the recursion is reached when either all the items have been considered or there is no weight capacity left to add an item.
Thus, the optimal solution for n items and W weight can be given as:
optimal(n, W) = max (v[n-1] + optimal(n-1, W-w[n-1]), optimal(n-1, W))
where, n is the number of items, W is the maximum weight capacity of the knapsack, v[] is the value of each item, and w[] is the weight of each item.
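
Before turning to the table-based code below, a minimal top-down sketch of this recurrence with memoization may help (the sample weights and values in main are made up for demonstration; the bottom-up version that follows is the one analyzed in this article):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Illustrative top-down version of the recurrence above.
// memo[n][W] caches optimal(n, W); -1 marks "not computed yet".
vector<vector<int>> memo;

int optimal(int n, int W, const vector<int>& w, const vector<int>& v) {
    if (n == 0 || W == 0) return 0;            // base case: no items or no capacity left
    if (memo[n][W] != -1) return memo[n][W];   // reuse a previously computed result
    int best = optimal(n - 1, W, w, v);        // exclude the n-th item
    if (w[n - 1] <= W)                         // include the n-th item if it fits
        best = max(best, v[n - 1] + optimal(n - 1, W - w[n - 1], w, v));
    return memo[n][W] = best;
}

int main() {
    vector<int> w = {1, 3, 4, 5}, v = {1, 4, 5, 7}; // sample weights and values
    int n = (int)w.size(), W = 7;
    memo.assign(n + 1, vector<int>(W + 1, -1));
    cout << optimal(n, W, w, v) << "\n";            // prints 9 (take the items of weight 3 and 4)
    return 0;
}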
We can solve this using a 2D dynamic programming approach where we define a dp[n+1][W+1] array to store the maximum value that can be obtained by considering the first n items and having a maximum weight limit of W.
C++ Code
#include <bits/stdc++.h>
using namespace std;

int knapsack(const vector<int>& w, const vector<int>& v, int n, int W) {
    vector<vector<int>> dp(n+1, vector<int>(W+1));
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= W; j++) {
            if (i == 0 || j == 0) {
                dp[i][j] = 0; // base case: no item to consider or no capacity left
            } else if (w[i-1] <= j) {
                // the i-th item fits: take the better of including or excluding it
                dp[i][j] = max(v[i-1] + dp[i-1][j-w[i-1]], dp[i-1][j]);
            } else {
                dp[i][j] = dp[i-1][j]; // the i-th item is heavier than the current capacity
            }
        }
    }
    return dp[n][W];
}

int main() {
    int n, W;
    cin >> n >> W;
    vector<int> w(n), v(n);
    for (int i = 0; i < n; i++) {
        cin >> w[i] >> v[i];
    }
    cout << "Maximum value that can be obtained: " << knapsack(w, v, n, W);
    return 0;
}
The above code solves the 0-1 Knapsack Problem using dynamic programming. Its time and space complexity are O(n*W), which is pseudo-polynomial: efficient in practice as long as the weight capacity W is not too large.
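
Because row i of the dp table depends only on row i-1, the 2D array can optionally be reduced to a single row of W+1 entries, provided the capacity loop runs downwards so each item is counted at most once. A possible sketch of this refinement (not part of the code above):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Space-optimized 0-1 knapsack: one row of W+1 entries instead of an (n+1) x (W+1) table.
int knapsack_1d(const vector<int>& w, const vector<int>& v, int W) {
    vector<int> dp(W + 1, 0);
    for (size_t i = 0; i < w.size(); i++) {
        // Iterate capacities from high to low so each item is used at most once.
        for (int j = W; j >= w[i]; j--) {
            dp[j] = max(dp[j], v[i] + dp[j - w[i]]);
        }
    }
    return dp[W];
}

int main() {
    vector<int> w = {1, 3, 4, 5}, v = {1, 4, 5, 7}; // sample data for demonstration
    cout << "Maximum value that can be obtained: " << knapsack_1d(w, v, 7) << "\n"; // prints 9
    return 0;
}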
 
 
PRACTICE PROBLEMS :
  1. Edit Distance
  2. Best Time to Buy and Sell Stock II