Sorting
Bubble Sort
- It works by repeatedly swapping adjacent elements if they are in the wrong order.
- This process "bubbles" the largest (or smallest) element to its correct position in each pass through the list.
Implementation
- Outer Loop
- Inner Loop
- Optimization
Outer Loop
Traverse the array from the first to the second-to-last (size-1) element; in each pass, the largest unsorted element "bubbles up" to its correct position.
Inner Loop
Compare adjacent elements. If the current element is greater than the next element, swap them.
The sorted region grows from the right: after the i-th pass, the last i items are already in place, so we don't need to revisit them and the inner loop only runs size-i-1 times.
Optimization
The outer loop never touches the array's elements; the inner loop does the comparing and bubbling. The outer loop just counts how many times the array needs to bubble up (the number of times the inner loop should run).
If a full inner-loop pass performs no swaps, the array is already sorted, so the outer loop does not need to run again.
Characteristics
- In-Place Sorting: no extra memory required.
- Space Complexity: O(1)
- Time Complexity:
  - Worst Case: O(n²) - when the list is in reverse order and requires the maximum number of comparisons and swaps.
  - Best Case: O(n) - when the list is already sorted; with the early-exit optimization, one pass with no swaps is enough.
  - Average Case: O(n²)
- Number of Swaps: in the worst case, the number of swaps equals the maximum number of inversions in the list. An inversion is a pair of elements where the earlier element is greater than the later one. When the list is in reverse order, every pair is inverted and every element must be swapped into position, so n*(n-1)/2 swaps are needed.
- Number of Comparisons:
  - the first pass compares n-1 pairs
  - the second pass compares n-2 pairs
  - so the total number of comparisons is n*(n-1)/2
- Stability: it is stable; adjacent elements are swapped only when strictly out of order, so equal elements keep their relative order.
When to use
- small datasets
- memory-constrained environments
- educational purposes
Recursion Implementation
- The outer loop iterates through 0 to n-1, so make a recursive call that can iterate over the whole array.
- The outer loop just counts how many times the array should bubble up, so the recursive call simply replaces that counting.
Selection Sort
- It is a simple comparison-based sorting algorithm.
- It works by repeatedly selecting the smallest (or largest, depending on sorting order) element from the unsorted portion of the list and swapping it with the first element of the unsorted portion. This process is repeated until the entire list is sorted.
Implementation
- Outer Loop
- Inner Loop
- Swap
Outer Loop
Traverse the array from the first to the second-to-last (size-1) element; in each iteration, assume the first unsorted element is the smallest.
The array is divided into two parts: initially the sorted (left) part has no elements and the unsorted (right) part has all of them. In each iteration the i-th item becomes sorted, as it is the first element of the unsorted part.
Inner Loop
It is used to find the smallest element in the unsorted portion of the array.
Swap
The outer loop gives you the first item of the unsorted portion, tracked as min_index = i.
The inner loop gives you the index of the minimum item, so just swap the smallest item with that first element.
If the first and smallest elements are the same, no swap is required:
if (min_index != i) swap(arr[min_index], arr[i]);
Characteristics
- In-Place Sorting: no extra memory required.
- Space Complexity: O(1)
- Time Complexity:
  - Worst Case: O(n²) - due to the two nested loops.
  - Best Case: O(n²) - it still scans the unsorted portion even if the list is already sorted.
  - Average Case: O(n²)
- Number of Swaps: it performs at most n-1 swaps.
- Number of Comparisons:
  - the first pass compares n-1 items
  - the second pass compares n-2 items
  - so the total number of comparisons is n*(n-1)/2
- Stability: it is not stable; a long-distance swap can move an element past an equal one, changing their relative order.
When to use
- when the number of swaps must be minimized
- memory-constrained environments
- small datasets
Insertion Sort
It mimics how we might sort playing cards in our hands.
It works by dividing the array into a sorted and an unsorted portion. The algorithm progressively takes elements from the unsorted portion and inserts them into their correct position within the sorted portion.
Implementation
- Outer Loop
- Inner Loop
- Shifting
Outer Loop
The first item is considered sorted by default. The outer loop iterates through the unsorted portion, picking one element at a time and placing it into its correct position in the sorted portion.
Inner Loop
The inner loop finds where the insertion should happen. It iterates backwards from the last element of the sorted portion until it finds a value not greater than the current element; until then, each item in the sorted portion shifts one place to the right.
Shifting
It shifts elements instead of swapping them. A variable j tracks where to insert, i.e. the position of the gap left in the sorted portion after the shifting is done.
Characteristics
- In-Place Sorting: no extra memory required.
- Space Complexity: O(1)
- Time Complexity:
  - Worst Case: O(n²) - the array is sorted in reverse order.
  - Best Case: O(n) - no shifting is required; each element is compared once and placed immediately.
  - Average Case: O(n²)
- Number of Shifts: in the worst case, at most n*(n-1)/2 shifts are performed.
- Number of Comparisons:
  - the i-th pass compares up to i items
  - so in the worst case the total number of comparisons is n*(n-1)/2
- Stability: it is stable, meaning it keeps the relative order of equal elements.
When to use
- small datasets
- memory-constrained environments
- the array is nearly sorted
- minimum comparisons required
Merge Sort
Merge Sort is a divide-and-conquer sorting algorithm that splits an array into smaller subarrays, sorts those subarrays, and then merges them back together in a sorted manner.
Implementation
- Merge Sort
- Merge
Merge Sort
- Divide the array into two halves (int mid = left + (right - left) / 2;).
- Splitting continues recursively until each subarray contains a single element or is empty.
- The recursion descends the leftmost branch first; once the leftmost leaf is reached, it moves to the right sibling, and this continues until every element has been split out.
Merge
Combine the two sorted halves into a single sorted array by comparing elements from each half and inserting the smaller one into the final array.
- Compute the size of both halves.
- Copy data from the main array into temporary arrays.
- Compare elements from both temporary arrays, always merging the smaller one back first.
- Copy any remaining items of the left array.
- Copy any remaining items of the right array.
Characteristics
- Space Complexity: O(n) - due to the temporary arrays used during merging.
- Time Complexity:
  - Worst Case: O(n log n)
  - Best Case: O(n log n)
  - Average Case: O(n log n)
  - The log n factor comes from the division of the array, and the n factor comes from the merging process.
- Number of Comparisons: each level of merging does about n comparisons and there are log n levels, so the total is about n log n.
- Stability: it is stable, meaning it keeps the relative order of equal elements.
When to use
- large datasets
- suitable for linked lists because it avoids the overhead of random access.
Practice
Basic:
- Merge Sort Implementation - GeeksforGeeks
- Count Inversions in an Array - GeeksforGeeks
- Sorting Strings Using Merge Sort - GeeksforGeeks
Intermediate Problems:
- Kth Smallest Element in Sorted Matrix - LeetCode
- Merge K Sorted Arrays - GeeksforGeeks
- Median of Two Sorted Arrays - LeetCode
- Check If Arrays Are Identical - GeeksforGeeks
Advanced Problems:
- Merge Sort for Linked List - GeeksforGeeks
- Closest Pair of Points Using Divide and Conquer - GeeksforGeeks
- Reverse Pairs - LeetCode
- Maximum Sum of Non-Overlapping Subarrays - GeeksforGeeks
Quick Sort
Quick Sort is a divide-and-conquer sorting algorithm that works by selecting a "pivot" element and partitioning the array around the pivot.
The goal is to move smaller elements to the left of the pivot and larger elements to the right. The process is then recursively applied to the sub-arrays.
Implementation
Pivot Selection
The pivot is an element around which the array will be partitioned. Popular strategies include:
- First element - may lead to poor performance for sorted/reverse-sorted arrays
- Last element - similar limitations as the first element
- Random - helps avoid worst-case scenarios
- Median of first, last, and middle elements
Partitioning
- The array is rearranged such that:
- All elements smaller than the pivot are moved to its left.
- All elements larger than the pivot are moved to its right.
- This ensures the pivot is in its correct position in the sorted array.
Note that swap(arr[i], arr[j]); may swap an element with itself; this is harmless and keeps the code simple.
Recursion
Apply the same steps to the left and right sub-arrays.
Characteristics
- In-Place Sorting: no extra memory is required during partitioning; the recursive calls use the function stack to manage sub-problems, but that space is not counted as "external memory".
- Space Complexity: O(log n) - due to recursive calls on the function stack.
- Time Complexity:
  - Worst Case: O(n²) - occurs when the pivot consistently produces unbalanced partitions (smallest/largest element).
  - Best Case: O(n log n) - occurs when the partitions are balanced.
  - Average Case: O(n log n)
- Number of Comparisons: depends on the pivot selection and how the array is partitioned at each step. In the worst case, n*(n-1)/2 comparisons occur, e.g. when the pivot is always the largest/smallest element:
  - the first partition produces one subarray with n-1 items and another with 0
  - the second partition produces one subarray with n-2 items and another with 0
- Number of Swaps: a swap may follow each comparison, so at most n*(n-1)/2 swaps can occur.
- Stability: it is not stable, meaning it might change the relative order of equal elements if swapping places them differently.
When to use
- large datasets
- in-place sorting is required
- performs well on random inputs compared to already sorted or reverse-sorted arrays
Dutch National Flag
It is used for rearranging elements in an array based on multiple distinct categories. It is commonly used when elements need to be categorized into three groups.
Implementation
- Use a three-pointer approach with low, mid, and high.
- Initialize:
  - low = 0 - tracks the boundary of the first category.
  - mid = 0 - scans the array.
  - high = size - 1 - tracks the boundary of the last category.
- Traverse the array and swap elements into their correct positions:
  - arr[mid] == 0 - swap with low and increment both low and mid.
  - arr[mid] == 1 - increment mid.
  - arr[mid] == 2 - swap with high and decrement high.
Characteristics
- Time Complexity: O(n)
- Space Complexity: O(1)
- It rearranges elements in a single pass
- In-place rearrangement
Count Sort
It is a non-comparison-based sorting algorithm that works well for sorting integers or items with a limited range of discrete values. It is especially efficient when the range of input values is not significantly larger than the number of elements.
The algorithm works by counting the occurrences of each element and using this information to place elements in their correct position in the sorted array.
Implementation
Find the count of every distinct element
- The size of the count array is the maximum value in the array plus one (indices 0..max).
- Traverse the array and count the occurrences of every distinct element.
- Take the cumulative sum of the counts; this directly gives the final position of each item.
Characteristics
- Space Complexity: O(n + k) - n is the number of elements and k is the range of the input.
- Time Complexity: O(n + k) - n is the number of elements and k is the range of the input.
Comparison Table
| Algorithm | Best Case | Average Case | Worst Case | Space Complexity | Stable | In-Place | Notes |
|---|---|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | ✅ Yes | ✅ Yes | Very simple, very slow |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | ❌ No | ✅ Yes | Fewer swaps, still slow |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | ✅ Yes | ✅ Yes | Great for small or nearly sorted data |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | ✅ Yes | ❌ No | Consistent performance, extra memory |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | ❌ No | ✅ Yes | Very fast in practice |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | ❌ No | ✅ Yes | No worst-case slowdown |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) | ✅ Yes | ❌ No | Works only with small integer ranges |
| Radix Sort | O(nk) | O(nk) | O(nk) | O(n + k) | ✅ Yes | ❌ No | Uses digit-by-digit sorting |
| Bucket Sort | O(n + k) | O(n + k) | O(n²) | O(n + k) | Depends | ❌ No | Best for uniformly distributed data |
| Shell Sort | O(n log n) | O(n^1.5) | O(n²) | O(1) | ❌ No | ✅ Yes | Improved insertion sort |
- Stable: Keeps the order of equal elements
- In-Place: Uses constant extra memory
- k: Range of input values or number of digits