
Complexity

Space Complexity

Auxiliary Space

Auxiliary space is the extra space used by an algorithm, excluding the input data.

It includes:

  • Temporary variables
  • Data structures created inside the function
  • Function call stack (recursion stack)

It does NOT include the space taken by input.

bool isPalindrome(string s) {
    string rev = string(s.rbegin(), s.rend());
    return s == rev;
}
  • s → Input (NOT counted in auxiliary space)
  • rev → N characters → O(N)

If you solve the same problem with two pointers:

bool isPalindrome(string s) {        // s → input string
    int left = 0, right = s.size() - 1;
    while (left < right) {
        if (s[left] != s[right]) return false;
        left++;
        right--;
    }
    return true;
}
  • s → Input (NOT counted in auxiliary space)
  • No new string, only left and right. Its auxiliary space is O(1)

Total Space

Total space = Space required by:

  1. Input
  2. Auxiliary space

Formula:

Total Space = Input Space + Auxiliary Space

In the examples discussed earlier, both versions have a total space of O(N), since both take the input s, which occupies O(N) space.

O(log n) — Logarithmic Space

An algorithm uses O(log n) space when the extra memory it uses grows proportionally to the logarithm of the input size.

This usually happens when:

  • We repeatedly divide the problem in half
  • We store digits of a number (binary representation)
  • We use recursion that reduces input by half each time (like binary search)

Example: Converting a number to binary

string toBinary(int n) {
    string bits = "";
    while (n > 0) {
        bits += (n % 2) + '0'; // save remainder (0 or 1)
        n /= 2;                // divide by 2 each step
    }
    reverse(bits.begin(), bits.end());
    return bits;
}

Why is this O(log n) space?

Each loop divides n by 2.

If:

  • n = 16 → 10000 (5 bits)
  • n = 1024 → 10000000000 (11 bits)

The number of bits needed to represent n in binary is log₂(n).

So the string bits stores about log n characters, meaning:

  • Space Complexity = O(log n)

Hidden Space Cost of Recursion

Recursion may look memory-efficient, but it uses the call stack behind the scenes.

Each recursive call adds a new stack frame to memory.

Stack Frame

A stack frame stores:

  • Function parameters
  • Local variables
  • Return address
  • Temporary values

Each recursive call creates a new stack frame.

Example: Factorial (Recursive)

int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}

What happens when calling factorial(4)?

Call stack builds like this:

factorial(4)
factorial(3)
factorial(2)
factorial(1)

There are 4 stack frames in memory.

If n = 1000, there will be 1000 stack frames.

Space Complexity

  • Each call uses O(1) space
  • There are n calls

So:

Space Complexity = O(n)

⚠️ This is the hidden cost of recursion.

An iterative version would use only O(1) space.

The space complexity of a recursive call is proportional to the height of the recursion tree.

In-place Algorithm

An in-place algorithm modifies the input directly and uses only constant extra memory.

In-place: O(1) Extra Space

void reverseInPlace(vector<int>& arr) {
    int left = 0, right = arr.size() - 1;
    while (left < right) {
        swap(arr[left], arr[right]); // swap elements
        left++;
        right--;
    }
}

Why O(1)?

We only use:

  • Two integers (left, right)
  • No extra arrays

Extra space does not depend on n.

Not In-place: O(n) Extra Space

vector<int> reverseNew(vector<int>& arr) {
    vector<int> result(arr.rbegin(), arr.rend()); // completely new array
    return result;
}

This creates a new vector of size n.

So:

  • Extra Space = O(n)

Time-Space Tradeoff

Sometimes we use more memory to reduce time.

Version 1: Slow but Memory Efficient

// Time: O(n²), Space: O(1)
bool hasDuplicate(vector<int>& arr) {
    for (int i = 0; i < arr.size(); i++) {
        for (int j = i + 1; j < arr.size(); j++) {
            if (arr[i] == arr[j]) return true;
        }
    }
    return false;
}

Why?

  • Two nested loops → O(n²) time
  • No extra data structure → O(1) space

Version 2: Faster but Uses Extra Memory

// Time: O(n), Space: O(n)
bool hasDuplicate(vector<int>& arr) {
    unordered_map<int, int> freq;
    for (int x : arr) {
        freq[x]++;
        if (freq[x] == 2) return true;
    }
    return false;
}

Why?

  • One pass → O(n) time
  • Hash map stores up to n elements → O(n) space

Time Complexity

Time complexity measures how the running time of an algorithm increases with the input size N.

In competitive programming, a common rule of thumb is:

  • A C++ program can perform about 10⁸ operations per second on most judges.

So when designing an algorithm, you estimate:

| N | Acceptable Complexity |
| --- | --- |
| N ≤ 10⁸ | O(N) |
| N ≤ 10⁵ | O(N log N) |
| N ≤ 10³ | O(N²) |
| N ≤ 20 | O(2ⁿ) |
| N ≤ 10 | O(N!) |

Logarithmic Complexity

In Computer Science, logarithmic complexity refers to algorithms whose running time grows very slowly as the input size (n) increases.

It is written as:

O(log n)

This means that when the input size increases exponentially, the number of steps increases only linearly.

Logarithm Base Conversion

Logarithms can have different bases such as:

  • log₂ (base 2)
  • log₁₀ (base 10)
  • ln (base e)

Using the change of base formula:

log₂(n) = log₁₀(n) / log₁₀(2)

Since:

log₁₀(2) ≈ 0.301

So:

log₂(n) = log₁₀(n) / 0.301 ≈ 3.32 × log₁₀(n)

That is why we write:

log₂(n) ≈ log₁₀(n) × 3.32

Why Base Doesn't Matter in Big-O

In algorithm analysis, the base of logarithm is ignored:

O(log₂ n) = O(log₁₀ n) = O(ln n)

Because they differ only by a constant factor (≈ 3.32), and constants are ignored in Big-O notation.

Factorial Time Complexity

This occurs when we generate all permutations of N elements.

Example:

N = 3
Permutations = 3! = 6

As N grows:

| N | N! |
| --- | --- |
| 5 | 120 |
| 10 | 3,628,800 |

So factorial algorithms only work for very small N (≤10).

C++ Permutation Example

Using STL:

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> v = {1, 2, 3};

    do {
        for (int x : v)
            cout << x << " ";
        cout << endl;
    } while (next_permutation(v.begin(), v.end()));

    return 0;
}

Output:

1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1
  • next_permutation works in O(N) time by swapping elements.

Time complexity:

O(N!)

Exponential Time Complexity

Exponential complexity occurs when the number of operations grows exponentially with input size.

Example:

O(2ⁿ)

Common in:

  • recursion
  • subset generation
  • brute force search

A key property:

Overlapping Subproblems

Many recursive algorithms recompute the same subproblem repeatedly.

Example: Fibonacci

fib(n) = fib(n-1) + fib(n-2)

This leads to repeated calculations, causing O(2ⁿ) complexity.

Dynamic Programming reduces it to O(N) by storing each subproblem's result.

Computational Complexity

P Problems (Polynomial Time)

Class P contains problems that can be solved efficiently by a deterministic algorithm in polynomial time.

Polynomial time means the running time can be expressed as:

O(n), O(n²), O(n³), ..., O(nᵏ)

where n = input size and k = constant.

These are considered tractable (practically solvable) problems.

Key idea

If a problem is in P, we can find the answer quickly.

Examples

  1. Sorting numbers

Algorithms like Merge Sort or Quick Sort solve sorting in:

O(n log n)

Example Input: [5, 2, 9, 1]
Output: [1, 2, 5, 9]

This is efficient → P problem

  2. Shortest path in graphs

Dijkstra's Algorithm finds the shortest path in polynomial time.

Cities connected by roads with distances.
Find shortest route from A → D.

This can be solved efficiently → P

NP Problems (Nondeterministic Polynomial Time)

A problem is in NP if:

A proposed solution can be verified in polynomial time.

Important:

  • Solving may be hard
  • Checking the solution is easy

Key idea

Guessing is hard, verifying is easy.

Example

  1. Subset Sum Problem

Given numbers: [3, 7, 10, 12]
Target: 22

Question: Is there a subset that sums to 22?

Possible solution: 10 + 12 = 22
Verification: 10 + 12 = 22

Verification takes linear time.

But finding the subset may require checking many combinations.

So it is in NP.

NP-Complete Problems

A problem is NP-Complete if it satisfies two conditions:

  1. It is in NP
  2. Every NP problem can be reduced to it in polynomial time

Meaning:
If we can solve one NP-Complete problem efficiently, we can solve all NP problems efficiently.

This relates to the famous P versus NP Problem.

Key idea

NP-Complete problems are the hardest problems inside NP.

Classic Example

  1. Traveling Salesman Problem (Decision Version)

The famous Travelling Salesman Problem.

Problem:

A salesman must visit all cities exactly once and return home.

Question (decision version):

Is there a tour shorter than distance D?

Example

Cities: A, B, C, D

Distances between each city.

Ask:

Is there a route ≤ 20 km?

Checking a proposed route is easy:

A → C → B → D → A
distance = 18

Verification = polynomial.

But finding the optimal route grows factorially:

(n − 1)! / 2

So it is NP-Complete.

Other NP-Complete Problems

Some famous ones:

  • Boolean Satisfiability Problem (SAT)
  • 3-SAT Problem
  • Hamiltonian Path Problem
  • Vertex Cover Problem

SAT was the first problem proven NP-Complete, by Stephen Cook in the Cook–Levin theorem.

NP-Hard Problems

A problem is NP-Hard if:

It is at least as hard as the hardest NP problems.

Important:

NP-Hard problems do not have to be in NP.

Meaning:

  • Solution may not even be verifiable in polynomial time.

Relationship

┌───────────── NP-Hard ─────────────┐
│                                   │
│   ┌──────── NP-Complete ──────┐   │
│   └───────────────────────────┘   │
└───────────────────────────────────┘

┌──────────────── NP ───────────────┐
│                                   │
│   ┌── P ──┐   ┌── NP-Complete ──┐ │
│   └───────┘   └─────────────────┘ │
└───────────────────────────────────┘

So:

P ⊆ NP
NP-Complete ⊆ NP
NP-Hard ⊇ NP-Complete

Example NP-Hard Problem

  1. Optimization version of TSP

The optimization version of Travelling Salesman Problem:

Find the shortest possible tour visiting all cities.

Why NP-Hard?

Because solving it also solves the NP-Complete decision version.

Example Comparing All Four

Problem: Find shortest route between cities.

| Version | Class |
| --- | --- |
| Shortest path between two cities | P |
| Check if route ≤ 20 km | NP |
| Decision TSP | NP-Complete |
| Find optimal tour | NP-Hard |

Intuition Summary

| Class | Meaning |
| --- | --- |
| P | Easy to solve |
| NP | Easy to verify |
| NP-Complete | Hardest problems in NP |
| NP-Hard | At least as hard as NP |

Simple memory trick

P → Solve fast
NP → Check fast
NP-Complete → Hardest checkable problems
NP-Hard → Harder or equal to NP-Complete

The Big Question: P vs NP

The question is:

P = NP?

Meaning:

If a solution can be verified quickly, can it also be found quickly?

Two possibilities:

Case 1: P = NP

Then every NP problem has a fast algorithm.

Huge consequences:

  • cryptography would break
  • optimization problems become easy
  • many AI problems become trivial

Case 2: P ≠ NP

Then some problems are intrinsically hard to solve, even though solutions are easy to check.

Most computer scientists believe this is true.

When You Mention P vs NP

You mention P vs NP when discussing:

Hard search problems

Example:

  • Traveling Salesman Problem

Find the shortest route visiting cities.

Checking a route is easy.

Finding the best route is very hard.

NP-complete problems

Problems like:

  • Boolean Satisfiability Problem
  • Vertex Cover Problem

These are the hardest problems in NP.

If any one of them is solved in polynomial time → P = NP.

Optimization problems

Examples:

  • scheduling
  • routing
  • packing
  • cryptography

Example Explanation

Suppose the problem is:

Given a graph, does there exist a path visiting every node exactly once?

This is the Hamiltonian Path Problem.

If someone gives you a path:

A → B → D → C

You can verify quickly:

  • every node visited
  • no repeats

Verification: polynomial time

Finding the path: unknown if polynomial

So it belongs to NP.

Why It Is Famous

The P vs NP problem is one of the Millennium Prize Problems announced by the Clay Mathematics Institute.

Prize:

$1,000,000 for a proof.

It is still unsolved.

Simple summary

| Concept | Meaning |
| --- | --- |
| P | Problems solvable quickly |
| NP | Solutions verifiable quickly |
| P vs NP | Are solving and verifying equally easy? |

Most researchers believe:

P ≠ NP

but it has never been proven.

Question

What's the relation between a Fibonacci recursion tree and a perfect binary tree?

The relation between a Fibonacci recursion tree and a perfect binary tree comes from how the recursive calls expand.

Fibonacci Recursion Tree

The Fibonacci function is:

F(n) = F(n−1) + F(n−2)

If implemented recursively:

fib(n):
    if n <= 1 return n
    return fib(n-1) + fib(n-2)

Each call generates two recursive calls:

  • fib(n−1)
  • fib(n−2)

This creates a recursion tree.

Example for n = 5:

                fib(5)
               /      \
          fib(4)      fib(3)
         /      \     /     \
     fib(3)  fib(2) fib(2) fib(1)
     /    \
 fib(2)  fib(1)

This tree shows how the recursive calls branch.

Perfect Binary Tree

A Perfect Binary Tree is a binary tree where:

  • Every internal node has exactly 2 children
  • All leaves are at the same level

Example:

        A
      /   \
     B     C
    / \   / \
   D   E F   G

Properties:

  • Number of nodes = 2^(h+1) − 1
  • Leaves = 2^h

Key Difference

The Fibonacci recursion tree is NOT a perfect binary tree.

Reasons:

  1. Unequal depth: Some branches end earlier.

  2. Different subtree sizes

    • One branch calls F(n−1)
    • The other calls F(n−2)

Example:

fib(4)
├── fib(3)
│   ├── fib(2)
│   └── fib(1)
└── fib(2)

The right subtree is smaller.

What is the time complexity of the best-known algorithm for multiplying two n×n matrices?

The standard (naive) matrix multiplication algorithm uses three nested loops:

for i = 1 to n
    for j = 1 to n
        for k = 1 to n
            C[i][j] += A[i][k] * B[k][j]

This gives a time complexity of: O(n³)

Strassen’s Algorithm

Strassen improved matrix multiplication by reducing the number of recursive multiplications.

Time complexity: O(n^2.807)

More Advanced Algorithms

Further research improved the exponent even more using advanced algebraic techniques.

The best-known theoretical time complexity today is approximately: O(n^2.373)

Important Notes

  • The naive algorithm is O(n³)
  • Strassen reduces it to O(n^2.807)
  • Advanced research reduces it further to about O(n^2.373)
  • These advanced algorithms are mostly theoretical and not commonly used in practice

What is amortized time complexity?

Amortized time complexity analyzes the cost of a sequence of operations and spreads the total cost over all operations. It guarantees the average cost per operation even in the worst-case sequence.

  • It is not the same as worst-case complexity.
  • It is not the same as average-case complexity, because average-case analysis depends on probability, while amortized analysis does not.
  • It applies to many data structures (like dynamic arrays, stacks, etc.), not only sorting algorithms.

In a dynamic array (like ArrayList), resizing occasionally takes O(n) time, but most insertions take O(1) time. When we distribute the resizing cost over many insertions, the amortized time complexity of append becomes O(1).

What's the difference between Average Case and Amortized analysis?

In algorithm analysis, Average Case and Amortized Analysis both describe performance that is not the worst case, but they mean very different things.

Average Case Analysis

Average Case measures the expected running time over all possible inputs, assuming some probability distribution of those inputs.

  • We consider different inputs.
  • Each input has some probability.
  • We calculate the expected (average) cost.

Example: Searching in a list

  • Best case: element is first → O(1)
  • Worst case: element is last → O(n)
  • Average case: element somewhere in the middle → O(n) (about n/2 comparisons)

So the average case depends on how inputs are distributed.

Key idea:

Average case = expectation over different inputs.

Amortized Analysis

Amortized Analysis measures the average cost per operation over a sequence of operations, even if some operations are expensive.

  • We consider one sequence of operations
  • Some operations may be very costly
  • But most are cheap
  • The expensive ones are spread out

Example: Dynamic array resizing (like Python list or C++ vector)

Appending elements:

  • Normal append → O(1)
  • Occasionally resizing → O(n)

Example sequence:

append → O(1)
append → O(1)
append → O(1)
resize → O(n)
append → O(1)

Over many operations, the average per operation is still O(1).

Key idea:

Amortized = average cost across operations, not across inputs.

Key Differences

| Aspect | Average Case | Amortized |
| --- | --- | --- |
| What varies | Different inputs | Sequence of operations |
| Uses probability | Yes (input distribution) | No probability needed |
| Guarantees | Expected time | Guaranteed average per operation |
| Example | Hash table search | Dynamic array append |

Simple intuition

  • Average case → average over different problems
  • Amortized → average over many operations in one problem

One-line memory trick

Average case = average over inputs
Amortized = average over operations

Example with vector push_back()

  1. We define a vector without specifying its size (vector<int> vec;).
  2. Suppose the vector's current capacity is n = 4.
  3. You call push_back() 4 times.
  4. On the fifth push_back(), the vector resizes to twice the capacity (n = 8).
  5. Now you can push_back() again.
  6. Each push_back() has O(1) complexity, while a resize costs O(N).
  7. The occasional large cost is spread across many cheap operations, so the amortized cost per push_back() is O(1).
What is the space complexity of a recursive function that makes n recursive calls in total, with no additional data structures, where each call adds one frame to the call stack?

In a recursive function, each recursive call adds one new frame to the call stack.

  • If the function makes n recursive calls before reaching the base case
  • The maximum depth of the call stack becomes n.
  • Each stack frame takes O(1) space (since no extra data structures are used).

Therefore, total space used by the call stack is: O(1) × n = O(n)

What is the time complexity of the given code below?
for (int i = 1; i <= n; i = i * 3) {
    for (int j = 1; j <= n; j++) {
        sum++;
    }
}
  • The outer loop multiplies i by 3 each time: 1, 3, 9, 27, ... So it runs log₃(n) times.
  • The inner loop runs n times for each outer loop iteration.

Therefore, total operations: n × log₃(n)

In Big-O notation, logarithm base does not matter, so: O(n log n)

What is the time complexity of the given code below?
int j = 0;
for (int i = 0; i < n; i++) {
    while (j < n) {
        j++;
    }
}

Step-by-step reasoning

  1. Outer loop (for)

    • Runs n times (i = 0 to n-1).
  2. Inner loop (while (j < n))

    • j starts at 0.
    • Each iteration increments j by 1.
    • Once j reaches n, the condition j < n becomes false.
  3. Important observation

    • j is not reset inside the outer loop.
    • The while loop runs only until j reaches n once.

Actual execution

  • When i = 0:

    • while (j < n) runs n times (j goes from 0 to n).
  • When i = 1 to n-1:

    • j is already n.
    • while (j < n) runs 0 times.

Total operations

  • while loop total iterations = n
  • for loop iterations = n, but inner work after first iteration is constant.

Total work ≈ n + n → O(n)

Final Time Complexity: O(n)

Key insight

Even though it looks like nested loops, the pointer j only moves forward once, making the total number of inner loop executions linear.

This pattern often appears in two-pointer / sliding window algorithms where nested loops still result in O(n) time.

What is the time complexity of the given code below?
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j += i) {
        cout << i << " " << j << endl;
    }
}

Step 1: Outer loop

The outer loop runs:

i = 1 to n

So it executes n times.

Step 2: Inner loop behavior

The inner loop increments by i each time:

j = 1, 1+i, 1+2i, 1+3i, ...

It stops when j > n.

Number of iterations for a fixed i:

≈ n / i

Step 3: Total operations

Total work:

∑ᵢ₌₁ⁿ n/i

Factor out n:

n × (1/1 + 1/2 + 1/3 + ... + 1/n)

The sum

1 + 1/2 + 1/3 + ... + 1/n

is the harmonic series, which grows as:

Θ(log n)

Step 4: Final complexity

T(n) = n · log n

Time Complexity: O(n log n)

Intuition

The work per i decreases:

| i | inner iterations |
| --- | --- |
| 1 | n |
| 2 | n/2 |
| 3 | n/3 |
| 4 | n/4 |
| ... | ... |
| n | 1 |

Adding these gives roughly:

n + n/2 + n/3 + ... + 1 ≈ n log n

Tip: This pattern appears often in number theory algorithms like divisor enumeration and sieve-like loops.

What is the time and space complexity of the given code below?
void fun(int n) {
    if (n <= 0) return;
    cout << n << endl;
    fun(n - 1);
    fun(n - 1);
}

Time Complexity

Each call to fun(n) makes two recursive calls to fun(n-1).

So the recurrence relation is:

T(n) = 2T(n−1) + O(1)

Expanding:

  • T(n) = 2T(n−1) + c
  • = 2(2T(n−2) + c) + c
  • = 4T(n−2) + 3c
  • = 8T(n−3) + 7c

After k steps:

T(n) = 2ᵏ T(n−k) + (2ᵏ − 1)c

When k = n:

T(n) = 2ⁿ T(0) + (2ⁿ − 1)c

Therefore,

Time Complexity = O(2ⁿ)

This happens because the recursion forms a binary recursion tree where the number of calls doubles at each level.

Total number of calls ≈ 2ⁿ − 1.

Space Complexity

Space complexity depends on the maximum recursion depth, not the number of calls.

  • Each call decreases n by 1
  • The deepest chain is:
fun(n)
fun(n-1)
fun(n-2)
...
fun(1)
fun(0)

So the maximum stack depth = n.

Space Complexity = O(n)

Key Idea:

Even though there are 2ⁿ calls, they are not on the stack at the same time. Only one path of length n exists simultaneously in the recursion stack.

What is the space complexity of the given code below?
void fun(int n) {
    if (n <= 0) return;
    int arr[1000];
    fun(n - 1);
}

Recursion Depth

Each recursive call decreases n by 1:

fun(n)
fun(n-1)
fun(n-2)
...
fun(1)
fun(0)

So the maximum recursion depth = n.

Space Used Per Call

Inside every call:

int arr[1000];
  • This allocates 1000 integers on the stack.
  • If an int is 4 bytes, then per call:
1000 × 4 = 4000 bytes

But in asymptotic analysis, constants are ignored, so this is O(1) space per call.

Total Space Complexity

Since n recursive calls can exist simultaneously on the call stack, and each uses O(1) space:

Space = n × O(1) = O(n)

Important Insight

Even though arr[1000] is large, it is still constant size. If it were:

int arr[n];

then the space complexity would become O(n²) because each recursive call would allocate O(n) space.

What is the time and space complexity of the given code below?
void fun(int n) {
    if (n <= 1) return;
    fun(n / 2);
    fun(n / 2);
}

Time Complexity

Each call makes 2 recursive calls with input n/2.

So the recurrence relation is:

T(n) = 2T(n/2) + O(1)

Using the Master Theorem:

  • a = 2
  • b = 2
  • f(n) = O(1)

Compute:

n^(log_b a) = n^(log₂ 2) = n¹ = n

Since f(n) = O(1) is smaller than n,

T(n) = O(n)

Time Complexity = O(n)

Intuition (Recursion Tree)

| Level | Number of Calls | Work per Level |
| --- | --- | --- |
| 0 | 1 | 1 |
| 1 | 2 | 2 |
| 2 | 4 | 4 |
| 3 | 8 | 8 |
| ... | ... | ... |
| log n | 2^(log n) = n | n |

Total work:

1 + 2 + 4 + ... + n = 2n − 1 = O(n)

Space Complexity

Space depends on the maximum recursion depth, not total calls.

Each recursive call reduces n to n/2:

n → n/2 → n/4 → n/8 → ... → 1

The depth of this chain is:

log₂ n

Even though there are two recursive calls, they are not executed simultaneously on the stack; one finishes before the other begins.

Space Complexity = O(log n)

Final Answer

| Complexity | Value |
| --- | --- |
| Time Complexity | O(n) |
| Space Complexity | O(log n) |

Key Idea:

  • Work grows like a full binary tree with height log n, giving n total calls.
  • Stack depth only follows one path, which is log n.
How does C++ sort() work?

In C++, the sort() function from the C++ Standard Library is implemented using an algorithm called Introsort. Introsort is a hybrid sorting algorithm that combines multiple algorithms to guarantee both speed and good worst-case performance.

Introsort = Quick Sort + Heap Sort + Insertion Sort

It starts with Quicksort, but if the recursion gets too deep (which could cause worst-case O(n²)), it switches to Heapsort. For very small partitions, it uses Insertion Sort because it is faster on tiny arrays.

So internally:

start → Quicksort
deep recursion → Heapsort
small arrays → Insertion Sort

How It Works Step by Step

  1. Start with Quicksort

Quicksort divides the array around a pivot.

Example:

[7, 2, 9, 1, 5]

pivot = 5

left = [2,1]
right = [7,9]

Then it recursively sorts both sides.

Average complexity:

O(n log n)

  2. Detect Worst Case

If recursion depth exceeds:

2 × log(n)

the algorithm assumes Quicksort is degrading toward O(n²).

Then it switches to Heapsort, which guarantees:

O(n log n)

worst-case time.

  3. Use Insertion Sort for Small Parts

When subarray size becomes very small (around 16 elements), it switches to insertion sort because:

  • Less overhead
  • Better cache performance
  4. Time Complexity

| Case | Complexity |
| --- | --- |
| Best | O(n log n) |
| Average | O(n log n) |
| Worst | O(n log n) |

Because the heapsort fallback prevents O(n²).

  5. Space Complexity

std::sort is in-place.

Space complexity:

O(log n)

This comes from the recursion stack.

Why C++ Uses Introsort

It combines the strengths of multiple algorithms:

| Algorithm | Strength |
| --- | --- |
| Quicksort | Very fast on average |
| Heapsort | Guarantees worst-case O(n log n) |
| Insertion sort | Fast for tiny arrays |

So std::sort() is extremely optimized and usually faster than most handwritten sorts.

Complexities:

  • Time: O(n log n)
  • Space: O(log n)

If you need stable sorting, use stable_sort(), which uses merge sort internally.

What's the time complexity of the following code?
vector<int> divisors;

for (int i = 1; i * i <= n; i++) {
    if (n % i == 0) {
        divisors.push_back(i);

        if (i != n / i)
            divisors.push_back(n / i);
    }
}

sort(divisors.begin(), divisors.end());

for (int d : divisors)
    cout << d << " ";
  • The loop runs up to √n
  • In the worst case, we can collect at most about 2√n divisors

So the vector size can be at most:

k ≤ 2√n

Sorting requires:

O(k log k)

Substituting k ≤ 2√n:

O(2√n · log(2√n))

Ignoring constants, the overall time complexity becomes:

O(√n log √n)

Optimized Approach (No Sorting Needed)

vector<int> large;

for (int i = 1; i * i <= n; i++) {
    if (n % i == 0) {
        cout << i << " ";
        if (i != n / i)
            large.push_back(n / i);
    }
}

for (int i = large.size() - 1; i >= 0; i--) {
    cout << large[i] << " ";
}
  • First loop runs up to √n
  • Second loop prints stored divisors (at most √n)

So the total time complexity is:

O(√n)

This approach avoids sorting and prints the divisors in sorted order.

What is the space complexity?
Map<Integer, List<Integer>> map = new HashMap<>();

for (int i = 0; i < n; i++) {
    List<Integer> list = new ArrayList<>();

    for (int j = 0; j < n; j++) {
        list.add(j);
    }

    map.put(i, list);
}
  • The Map stores n keys.
  • For each key, a List of size n is created.
  • So, the total number of stored elements is n × n = n².
  • Additionally, there are n list objects and n map entries, but these are dominated by the elements.

Final Space Complexity: O(n²)

What is the time complexity?
void solve(int n)
{
    if (n <= 1)
        return;

    for (int i = 0; i < n; i++)
    {
        print(i);
    }

    solve(n / 3);
}

The recurrence relation is:

T(n) = T(n/3) + O(n)

Expanding:

n + n/3 + n/9 + n/27 + ...

This is a geometric series.

Sum ≈ n × (1 + 1/3 + 1/9 + ...)
= n × (3/2)
= O(n)

The first level dominates the total work, so the overall time complexity is: O(n)