Many sorting algorithms are used in the software industry, chosen according to the time and space complexity requirements of the application.

Let's take a closer look at some of the most common sorting algorithms and analyze their time and space complexity.

**Bubble Sort**

Bubble Sort is a simple comparison-based sorting algorithm that repeatedly compares adjacent elements and swaps them if they're in the wrong order. Here is an explanation of Bubble Sort's time and space complexity along with a C++ implementation:

Time Complexity

In the worst-case scenario, when the input array is sorted in reverse order, Bubble Sort has a time complexity of O(n^2). This is because the algorithm makes roughly n(n-1)/2 comparisons and swaps to sort the array. In the best-case scenario, when the input array is already sorted, an optimized version of the algorithm (one that stops after a pass with no swaps) takes O(n) time, because it only needs to iterate through the array once. In the average case, Bubble Sort has a time complexity of O(n^2).

Space Complexity

Bubble Sort has a space complexity of O(1) because it sorts in place, needing only a constant amount of extra space for loop variables.

C++ Implementation

```cpp
#include <iostream>
using namespace std;

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n-1; i++) {
        for (int j = 0; j < n-i-1; j++) {
            if (arr[j] > arr[j+1]) {
                swap(arr[j], arr[j+1]);
            }
        }
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation consists of two nested loops: the outer loop iterates through the array, and the inner loop iterates from the beginning of the array up to index n-i-1. If arr[j] is greater than arr[j+1], the swap function is called to swap them; otherwise, the loop continues to the next pair. After each pass of the inner loop, the largest remaining element has bubbled to the end, so the outer loop moves on to the next pass. Finally, the sorted array is printed.

Overall the code has an average time complexity of O(n^2) and takes up constant or O(1) space.

**Selection Sort**

Selection Sort is an in-place comparison sorting algorithm that divides an input array into two parts: the unsorted part and the sorted part. In each iteration, the algorithm selects the smallest element from the unsorted part and places it at the beginning of the sorted part. Here is an explanation of Selection Sort's time and space complexity along with a C++ implementation:

Time Complexity

In the worst-case scenario, Selection Sort has a time complexity of O(n^2) because the algorithm has to compare the elements of the unsorted part n(n-1)/2 times to sort the array. In the best-case scenario (when the input array is already sorted), the algorithm still makes the same number of comparisons, even though no swaps are actually needed, so the time complexity is O(n^2) as well. In the average case, Selection Sort likewise has a time complexity of O(n^2).
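The n(n-1)/2 figure follows from summing the comparisons of each pass: the first pass scans n-1 candidates for the minimum, the second n-2, and so on down to 1:

```latex
(n-1) + (n-2) + \cdots + 2 + 1 \;=\; \sum_{i=1}^{n-1} i \;=\; \frac{n(n-1)}{2} \;=\; O(n^2)
```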

Space Complexity

Selection Sort has a space complexity of O(1) because it sorts in place, needing only a constant amount of extra space for loop variables.

C++ Implementation

```cpp
#include <iostream>
using namespace std;

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n-1; i++) {
        int min_index = i;
        for (int j = i+1; j < n; j++) {
            if (arr[j] < arr[min_index]) {
                min_index = j;
            }
        }
        swap(arr[min_index], arr[i]);
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    selectionSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation selects the smallest element from the unsorted part of the array and places it at the beginning of the sorted part in each iteration. The code consists of two nested loops: the outer loop iterates through the array, and the inner loop scans the unsorted part starting from index i+1. Whenever arr[j] is smaller than arr[min_index], min_index is updated to the index of the smaller element. When the inner loop completes, the smallest element is swapped with the first element of the unsorted part. The algorithm then continues with the next unsorted element until the entire array is sorted.

Overall the code has an average time complexity of O(n^2) and takes up constant or O(1) space.

**Insertion Sort**

Insertion Sort is a simple comparison-based sorting algorithm that builds the final sorted array one item at a time. The algorithm iterates through unsorted values, shifting larger values to the right until the correct spot for the current item is found. Here is an explanation of Insertion Sort's time and space complexity along with a C++ implementation:

Time Complexity

The time complexity of Insertion Sort is O(n^2) in the worst and average cases, and O(n) in the best case, which occurs when the input array is already sorted. This is because in the worst case, the inner loop makes up to i comparisons for the element at index i, and summing these over all elements gives O(n^2).

Space Complexity

Insertion Sort has a space complexity of O(1) because it sorts in place, needing only a constant amount of extra space for loop variables.

C++ Implementation

```cpp
#include <iostream>
using namespace std;

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j+1] = arr[j];
            j--;
        }
        arr[j+1] = key;
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation starts with the second element of the array and iterates through every element in the array. During each iteration, the current element is compared to the previous elements in the array, and if it is smaller than the previous element, those elements are shifted one position to the right, using a while loop, until the correct spot is found for the current element. Finally, the algorithm places the current element in the correct spot.

Overall the code has an average time complexity of O(n^2) and takes up constant or O(1) space. In practice, however, Insertion Sort often outperforms the other quadratic-time algorithms, particularly when the input array is nearly sorted or the array is small.

**Radix Sort**

Radix Sort is an efficient, non-comparative sorting algorithm that works by sorting data with integer keys by its digits. The key is sorted multiple times, starting with the least significant or rightmost digit, and moving towards the most significant, or leftmost digit. Radix Sort can be implemented in two ways: least significant digit (LSD) and most significant digit (MSD) radix sort.

Time Complexity

The time complexity of Radix Sort depends on factors like the size of the array, the range of the keys, and the number of digit passes. In general, Radix Sort has an O(nk) time complexity, where n is the number of elements in the input array and k is the number of digits in the largest element. In other words, Radix Sort is linear in n when the number of digits k is a constant; when k grows with the input, the running time grows accordingly and the algorithm is no longer linear in n alone.

Space Complexity

Radix Sort has a space complexity of O(n + k) because, in addition to the input array, it uses an output array of size n and a count array (of size 10 for decimal digits) to keep track of the frequency of each digit.

Radix Sort C++ Implementation

```cpp
#include <iostream>
#include <algorithm>
using namespace std;

void countingSort(int arr[], int n, int exp) {
    int output[n];  // variable-length array (GCC/Clang extension)
    int count[10] = {0};
    // count the occurrence of each digit
    for (int i = 0; i < n; i++) {
        count[(arr[i] / exp) % 10]++;
    }
    // prefix sums: count[i] now holds the final position boundary for digit i
    for (int i = 1; i < 10; i++) {
        count[i] += count[i-1];
    }
    // build the output array (iterating backwards keeps the sort stable)
    for (int i = n-1; i >= 0; i--) {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }
    // copy the output array to the original array
    for (int i = 0; i < n; i++) {
        arr[i] = output[i];
    }
}

void radixSort(int arr[], int n) {
    // the variable must not be named max_element,
    // or it would shadow std::max_element and fail to compile
    int max_val = *max_element(arr, arr + n);
    for (int exp = 1; max_val / exp > 0; exp *= 10) {
        countingSort(arr, n, exp);
    }
}

int main() {
    int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
    int n = sizeof(arr) / sizeof(arr[0]);
    radixSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    return 0;
}
```

This implementation uses counting sort as the stable sort algorithm to sort the input array. After the counting sort for each digit is completed, the array is sorted based on the next digit of each element. The algorithm continues this process until all the digits are sorted.

Overall, the code has an O(n*k) time complexity and takes up O(n+k) space. The performance of Radix Sort is proportional to the number of digit passes required, so it works best when the keys have few digits relative to the size of the array.

**Merge Sort**

Merge Sort is a divide and conquer algorithm that divides the input array into two halves, sorts each half recursively, and then merges them into a single sorted array. Here is an explanation of Merge Sort's time and space complexity along with a C++ implementation:

Time Complexity

Merge Sort has a time complexity of O(nlogn), which is based on the number of comparisons made by the algorithm. In the best, worst, and average cases, Merge Sort divides the input array into two halves until the base case is reached (arrays of one element). These arrays are then merged and sorted. The amount of work done at each level of recursion is proportional to the size of the sub-array, n.

Space Complexity

Merge Sort has a space complexity of O(n) because it makes use of temporary arrays to store the merged sub-arrays.

C++ Implementation

```cpp
#include <iostream>
using namespace std;

void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1;
    int n2 = right - mid;
    int L[n1], R[n2];
    // copy the left and right parts of arr
    for (int i = 0; i < n1; i++) {
        L[i] = arr[left + i];
    }
    for (int j = 0; j < n2; j++) {
        R[j] = arr[mid + 1 + j];
    }
    // merge the left and right arrays back into arr
    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    // copy the remaining elements, if any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);
        mergeSort(arr, mid+1, right);
        merge(arr, left, mid, right);
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    mergeSort(arr, 0, n-1);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation divides the input array into two halves, sorts each half recursively, and then merges them into a single sorted array using the merge method. The code consists of three main parts: merging, dividing, and sorting. The merging part uses a temporary array to merge the left and right sub-arrays into a single sorted array. The dividing part divides the array into two halves, and the sorting part sorts the left and right sub-arrays recursively until the array is sorted.

Overall the code has an O(n*logn) time complexity and takes up O(n) space. Merge Sort is a good choice when stability is needed or when the input size is large because it has a reliable worst-case time complexity.

**Heap Sort**

Heap Sort is an in-place sorting algorithm that works by building a binary max-heap, in which each parent node is greater than or equal to its child nodes. The largest element (the root) is then exchanged with the last element of the heap, and the heap size is reduced by one. This process is repeated until the array is sorted. Here is an explanation of Heap Sort's time and space complexity along with a C++ implementation:

Time Complexity

Heap Sort has a time complexity of O(nlogn), which is based on the number of comparisons made by the algorithm. Building the heap from the input array takes O(n) time, and each of the n extraction steps (swapping the root with the last element of the heap and calling heapify) takes O(logn) time. So the overall time complexity of Heap Sort is O(nlogn).

Space Complexity

Heap Sort has a space complexity of O(1) because it sorts in place, needing only a constant amount of extra space for loop variables.

C++ Implementation

```cpp
#include <iostream>
using namespace std;

void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2*i + 1;
    int right = 2*i + 2;
    // find the largest element among the root, left, and right child
    if (left < n && arr[left] > arr[largest]) {
        largest = left;
    }
    if (right < n && arr[right] > arr[largest]) {
        largest = right;
    }
    // swap the root with the largest element and sink it further down
    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    // build the heap
    for (int i = n / 2 - 1; i >= 0; i--) {
        heapify(arr, n, i);
    }
    // extract the elements one by one
    for (int i = n-1; i >= 0; i--) {
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    heapSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation builds the heap of the input array and repeatedly extracts the maximum element until the tree is sorted. The heapify operation starts at the root of the heap and recursively sinks an element down to its correct position. The code consists of two main parts: heapifying part and sorting part. The heapifying part builds the heap of the input array, whereas the sorting part extracts the maximum element and reduces the size of the heap by 1 until the tree is sorted.

Overall the code has an O(nlogn) time complexity and takes up constant or O(1) space. Heap Sort is not stable, but it is a good choice when a guaranteed worst-case time complexity and constant extra space are needed, especially for large inputs.

**Quick Sort**

Quick Sort is a divide and conquer algorithm that partitions the input array into two halves based on a chosen pivot element. The pivot element is placed in its correct position, and the algorithm repeats this process on the sub-arrays until the entire array is sorted. Here is an explanation of Quick Sort's time and space complexity along with a C++ implementation:

Time Complexity

Quick Sort has an average-case time complexity of O(nlogn) and a worst-case time complexity of O(n^2). The performance of Quick Sort depends on the choice of the pivot element. In the average case, the pivot element divides the input array into roughly equal parts, and the algorithm runs in O(nlogn) time. However, in the worst case, the pivot element divides the input array into sub-arrays of size n-1 and 1 at every step, and the algorithm runs in O(n^2) time. To make this worst case unlikely, a variant called randomized Quick Sort can be used, in which the pivot element is chosen at random.

Space Complexity

Quick Sort has a space complexity of O(logn) on average because of the recursive call stack: each level of recursion stores a partition index, and when the pivots split the array roughly in half, the depth of the stack is logn. In the worst case, with consistently unbalanced partitions, the stack can grow to O(n).

C++ Implementation

```cpp
#include <iostream>
using namespace std;

int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] <= pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i+1], arr[high]);
    return i+1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi-1);
        quickSort(arr, pi+1, high);
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n-1);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation selects the last element of the input array as the pivot element, partitions the array into two sub-arrays, and places the pivot element in its correct position. The code consists of two parts, partitioning and sorting. The partitioning part takes the last element as pivot and then rearranges the elements such that all elements left of pivot are smaller than it, and all to the right are greater. The sorting part applies this recursively on partitions generated by the partition method.

Overall the code has an O(nlogn) time complexity in the average case and O(n^2) in the worst case, with O(logn) space on average for the recursion stack. Although it has a worse worst case than Merge Sort, Quick Sort is typically fast in practice thanks to low constant factors and good cache behavior, making it one of the most widely used sorting algorithms.

**Introsort**

Introsort is a hybrid sorting algorithm that combines Quicksort and Heapsort to provide the best of both worlds. Introsort begins by using Quicksort to partition the input sequence, and if the recursion level exceeds a certain depth, it switches to Heapsort to complete the sorting. Here is an explanation of Introsort's time and space complexity along with a C++ implementation:

Time Complexity

Introsort has an average-case time complexity of O(nlogn) and, unlike plain Quicksort, a worst-case time complexity of O(nlogn) as well. This guarantee comes from its ability to switch to Heapsort when the recursion depth exceeds a certain threshold: Heapsort's guaranteed O(nlogn) worst case prevents Quicksort's O(n^2) worst case from occurring.

Space Complexity

Introsort has a space complexity of O(logn) because it uses a recursive call stack to sort the input sequence. Unlike Quicksort, Introsort switches to Heapsort when the recursion depth exceeds a certain threshold, which limits the height of the recursion tree and the amount of space needed.

C++ Implementation

```cpp
#include <iostream>
#include <algorithm>
#include <cmath>
using namespace std;

// sort the small range arr[low..high] with insertion sort
void insertionSort(int arr[], int low, int high) {
    for (int i = low + 1; i <= high; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= low && arr[j] > key) {
            arr[j+1] = arr[j];
            j--;
        }
        arr[j+1] = key;
    }
}

// heap-sort the range arr[low..high] using the standard heap algorithms
void heapSortRange(int arr[], int low, int high) {
    make_heap(arr + low, arr + high + 1);
    sort_heap(arr + low, arr + high + 1);
}

int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] <= pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i+1], arr[high]);
    return i+1;
}

void introsortUtil(int arr[], int low, int high, int depthLimit) {
    int size = high - low + 1;
    if (size < 16) {
        insertionSort(arr, low, high);       // small ranges: insertion sort
    } else if (depthLimit == 0) {
        heapSortRange(arr, low, high);       // too deep: fall back to heapsort
    } else {
        int pivot = partition(arr, low, high);
        introsortUtil(arr, low, pivot-1, depthLimit-1);
        introsortUtil(arr, pivot+1, high, depthLimit-1);
    }
}

void introsort(int arr[], int n) {
    int depthLimit = 2 * (int)log2(n);
    introsortUtil(arr, 0, n-1, depthLimit);
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    introsort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
    return 0;
}
```

This implementation combines Insertion Sort, Quicksort, and Heapsort. Small ranges are handled directly by Insertion Sort, larger ranges are partitioned with the Quicksort scheme, and the recursion depth is tracked so that when it exceeds the depth limit, the remaining range is sorted with Heapsort instead.

Overall, Introsort has an O(nlogn) time complexity in both the average and worst cases: it keeps Quicksort's fast average-case behavior while the Heapsort fallback eliminates the O(n^2) worst case. The space complexity of the code is O(logn).

**PRACTICE PROBLEMS:**