Time and Space Complexity

 
What is Time Complexity and why is it important?
 
Time complexity is a measure of the amount of time taken by an algorithm to solve a problem, as a function of the size of the input. It is usually expressed using the "Big O" notation, which provides an upper bound on the number of operations an algorithm will perform as the size of the input grows to infinity.
Time complexity is important because it is a fundamental measure of the efficiency of an algorithm. In practical terms, it can help developers understand how long it will take for an algorithm to complete its task as the size of the input grows. This is particularly important for large-scale applications, where algorithms may need to process massive amounts of data.
Understanding the time complexity of an algorithm can also help developers optimize their code. By identifying algorithms with better time complexity, developers can choose more efficient approaches to solving problems. This can result in faster code execution times, reduced resource consumption, and overall better performance.
Additionally, time complexity can help developers compare the performance of different algorithms that solve the same problem. By comparing the time complexity of two algorithms, developers can identify which algorithm will be more efficient for a given problem size.
 
Following are the three types of time complexity commonly associated with algorithms:
  1. Best-case Time Complexity: The best-case time complexity is the minimum amount of time an algorithm can take to complete for a given input size. It is usually expressed using Ω (Omega) notation.
  1. Average-case Time Complexity: The average-case time complexity is the expected amount of time an algorithm takes to complete for a given input size, averaged over all possible inputs. It is usually expressed using Θ (Theta) notation.
  1. Worst-case Time Complexity: The worst-case time complexity is the maximum amount of time an algorithm can take to complete for a given input size. It is usually expressed using O (Big-O) notation.
It's important to note that the best-case, average-case, and worst-case time complexities may be different for the same algorithm. The worst-case time complexity is usually the most important metric to consider when analyzing the performance of an algorithm, as it gives an upper bound on the time taken for any input of a given size. However, the best-case and average-case time complexities can also be useful in certain situations, such as when designing algorithms for real-time systems or when analyzing the performance of algorithms in practice.
 
Examples:
 
  1. O(1) time complexity:
```cpp
#include <iostream>
using namespace std;

int main() {
    int x = 10;
    int y = 20;
    int sum = x + y;
    cout << "The sum is " << sum << endl;
    return 0;
}
```
This code has O(1) time complexity because it has a fixed and constant running time regardless of the input size. The code simply adds two variables and prints the result to the console.
 
  1. O(n) time complexity:
```cpp
#include <iostream>
using namespace std;

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

int main() {
    int arr[] = {1, 2, 3, 4, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    printArray(arr, n);
    return 0;
}
```
This code has O(n) time complexity because the running time of the program increases linearly with the size of the input array. The printArray() function prints each element of the array to the console, which takes n iterations, where n is the size of the array.
 
  1. O(n^2) time complexity:
```cpp
#include <iostream>
using namespace std;

int main() {
    int n = 5;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            cout << i * j << " ";
        }
        cout << endl;
    }
    return 0;
}
```
This code has O(n^2) time complexity because the nested loop iterates n * n times. It prints the multiplication table of numbers 1 to n. The outer loop iterates n times and the inner loop also iterates n times for each iteration of the outer loop. Therefore, the total number of iterations is n * n.
 
  1. Exponential Time Complexity: O(2^n)
```cpp
#include <iostream>
using namespace std;

int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    int n = 5;
    int result = fibonacci(n);
    cout << "The " << n << "th Fibonacci number is " << result << endl;
    return 0;
}
```
This code has exponential time complexity, O(2^n), because it computes the nth Fibonacci number recursively. The recursion bottoms out when n <= 1; every other call spawns two further calls, for the (n-1)th and (n-2)th Fibonacci numbers, so the number of calls roughly doubles as n grows. The program therefore takes exponentially more time to execute as the input n increases.
 
  1. Logarithmic Time Complexity: O(log n)
```cpp
#include <iostream>
using namespace std;

int binarySearch(int arr[], int low, int high, int target) {
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == target) {
            return mid;
        } else if (arr[mid] < target) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return -1;
}

int main() {
    int arr[] = {1, 3, 5, 7, 9, 11};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 7;
    int index = binarySearch(arr, 0, n - 1, target);
    if (index != -1) {
        cout << "The element " << target << " is found at index " << index << endl;
    } else {
        cout << "The element is not found." << endl;
    }
    return 0;
}
```
This code has logarithmic time complexity, O(log n), because it performs a binary search on a sorted array to find the target value. The algorithm halves the search space at each iteration, so at most log2 n iterations are needed to find the target. The running time therefore grows only very slowly as the input size increases.
  1. O(n) - Linear Time Complexity:
```cpp
// Returns the index of x in arr[0..n-1], or -1 if x is not present.
int linear_search(int arr[], int x, int n) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == x) {
            return i;
        }
    }
    return -1;
}
```
This function performs a linear search for an element x in an array arr of size n. It takes linear time to execute, since the time taken increases proportionally to the size of the input array arr.
  1. O(n log n) - Log-Linear Time Complexity:
```cpp
#include <vector>
using namespace std;

// Merge two sorted halves back into arr.
void merge(int arr[], int left[], int left_size, int right[], int right_size) {
    int i = 0, j = 0, k = 0;
    while (i < left_size && j < right_size) {
        if (left[i] <= right[j]) {
            arr[k++] = left[i++];
        } else {
            arr[k++] = right[j++];
        }
    }
    while (i < left_size) {
        arr[k++] = left[i++];
    }
    while (j < right_size) {
        arr[k++] = right[j++];
    }
}

void merge_sort(int arr[], int n) {
    if (n <= 1) {
        return;
    }
    int mid = n / 2;
    // Copy the two halves. (Variable-length arrays are not standard C++,
    // so std::vector is used for the temporary buffers.)
    vector<int> left(arr, arr + mid);
    vector<int> right(arr + mid, arr + n);
    merge_sort(left.data(), mid);
    merge_sort(right.data(), n - mid);
    merge(arr, left.data(), mid, right.data(), n - mid);
}
```
This function implements the merge sort algorithm, which runs in log-linear time. The merge_sort function recursively splits the input array into halves, sorts each half recursively using the same function, and then combines the sorted halves using the merge function. The time complexity of merge sort is O(n log n): the recursion has about log2 n levels, and each level does O(n) work merging.
 

 
What is Space Complexity and why is it important?
 
Space complexity is a measure of the amount of memory space required by an algorithm or program to solve a problem for a given input size. It is the amount of memory required to store the variables and data structures used by the algorithm during its execution. The space complexity of an algorithm is usually expressed in terms of the size of the input.
Space complexity is important because it is a critical factor in the design and implementation of algorithms and programs, especially for large-scale problems. When solving a problem, an algorithm may require large amounts of memory to store intermediate results or data structures. If the available memory is insufficient, the algorithm may fail or perform poorly. Additionally, in some applications such as embedded systems or mobile devices, memory resources may be limited, making it crucial to design algorithms that are efficient in terms of space usage.
Optimizing space complexity can also have indirect benefits for the performance of an algorithm. For example, if an algorithm can store intermediate results in memory instead of re-computing them repeatedly, it can reduce the overall number of operations required, leading to faster execution times.
Overall, considering the space complexity of an algorithm is an essential aspect of algorithm design and analysis, especially in situations where memory resources are limited or where large-scale problems are involved.
 
Examples:
 
  1. O(1) space complexity:
```cpp
#include <iostream>
using namespace std;

int main() {
    int x = 10;
    int y = 20;
    int sum = x + y;
    cout << "The sum is " << sum << endl;
    return 0;
}
```
This code has O(1) space complexity, because it uses a constant amount of space regardless of the input size. Only a few variables are initialized at the beginning, and no additional space is required during runtime.
 
  1. O(n) space complexity:
```cpp
#include <iostream>
using namespace std;

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

int main() {
    int arr[] = {1, 2, 3, 4, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    printArray(arr, n);
    return 0;
}
```
This code has O(n) space complexity because the program stores an array of n elements, so the memory required grows linearly with the input size. The printArray() function itself adds only a constant amount of extra space (its loop variable); the dominant cost is the array being processed.
 
  1. O(n^2) space complexity:
```cpp
#include <iostream>
using namespace std;

int main() {
    const int n = 5;
    int arr[n][n];
    // Fill the n x n multiplication table (valid indices are 0..n-1).
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            arr[i][j] = (i + 1) * (j + 1);
        }
    }
    return 0;
}
```
This code has O(n^2) space complexity because the space requirement grows quadratically with the input size: arr is an n x n array, which requires n * n integers. The loop variables add only O(1) extra space.
 
  1. Space complexity of recursive Fibonacci: O(n)
```cpp
#include <iostream>
using namespace std;

int fib(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}

int main() {
    int n = 10;
    int result = fib(n);
    cout << "The " << n << "th Fibonacci number is " << result << endl;
    return 0;
}
```

This program calculates the nth Fibonacci number using recursion. Its running time is exponential, O(2^n), because the number of calls roughly doubles with each increase in n. Its space complexity, however, is O(n), not O(2^n): the calls are evaluated depth-first, so at any moment the call stack holds at most one chain of about n frames (from fib(n) down to the base case), and the memory required grows linearly with n.
To avoid the exponential running time, we can use dynamic programming (storing previously calculated values and building the next value from them), which reduces the time to O(n); an iterative version that keeps only the last two values needs just O(1) extra space.
 
  1. O(log n) space complexity:
```cpp
#include <iostream>
using namespace std;

// Recursive binary search: each call halves the range [low, high].
int binarySearch(int arr[], int low, int high, int target) {
    if (low > high) {
        return -1;
    }
    int mid = low + (high - low) / 2;
    if (arr[mid] == target) {
        return mid;
    } else if (arr[mid] < target) {
        return binarySearch(arr, mid + 1, high, target);
    } else {
        return binarySearch(arr, low, mid - 1, target);
    }
}

int main() {
    int arr[] = {1, 3, 5, 7, 9, 11};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 7;
    int index = binarySearch(arr, 0, n - 1, target);
    if (index != -1) {
        cout << "The element " << target << " is found at index " << index << endl;
    } else {
        cout << "The element is not found." << endl;
    }
    return 0;
}
```
In this example, a recursive binary search is used to find the target number in a sorted array. Each recursive call adds one frame to the call stack, and because the search range is halved on every call, the recursion depth is at most log2 n. The memory used by the call stack therefore grows logarithmically with the input size, giving O(log n) space complexity. (The iterative version shown in the time-complexity section needs only O(1) space, since it reuses the same few variables instead of making recursive calls.)