Complexity
Space Complexity
Auxiliary Space
Auxiliary space is the extra space used by an algorithm, excluding the input data.
It includes:
- Temporary variables
- Data structures created inside the function
- Function call stack (recursion stack)
It does NOT include the space taken by input.
bool isPalindrome(string s) {
    string rev = string(s.rbegin(), s.rend());
    return s == rev;
}
- `s` → input (NOT counted in auxiliary space)
- `rev` → N characters → O(N) auxiliary space
If you solve the same problem with two pointers:
bool isPalindrome(string s) { // s → input string
    int left = 0, right = s.size() - 1;
    while (left < right) {
        if (s[left] != s[right]) return false;
        left++;
        right--;
    }
    return true;
}
- `s` → input (NOT counted in auxiliary space)
- No new string is created, only `left` and `right`, so the auxiliary space is O(1)
Total Space
Total space = Space required by:
- Input
- Auxiliary space
Formula:
Total Space = Input Space + Auxiliary Space
In the examples discussed earlier, both versions have a total space of O(N), since both take the input string s, which itself occupies O(N) space.
O(log n) — Logarithmic Space
An algorithm uses O(log n) space when the extra memory it uses grows proportionally to the logarithm of the input size.
This usually happens when:
- We repeatedly divide the problem in half
- We store digits of a number (binary representation)
- We use recursion that reduces input by half each time (like binary search)
Example: Converting a number to binary
string toBinary(int n) {
    if (n == 0) return "0";            // edge case: zero is "0"
    string bits = "";
    while (n > 0) {
        bits += (n % 2) + '0';         // save remainder (0 or 1)
        n /= 2;                        // divide by 2 each step
    }
    reverse(bits.begin(), bits.end()); // remainders come out in reverse order
    return bits;
}
Why is this O(log n) space?
Each loop divides n by 2.
If:
- n = 16 → 10000 (5 bits)
- n = 1024 → 10000000000 (11 bits)
The number of bits needed to represent n in binary is log₂(n).
So the string bits stores about log n characters, meaning:
- Space Complexity = O(log n)
Hidden Space Cost of Recursion
Recursion may look memory-efficient, but it uses the call stack behind the scenes.
Each recursive call adds a new stack frame to memory.
Stack Frame
A stack frame stores:
- Function parameters
- Local variables
- Return address
- Temporary values
Each recursive call creates a new stack frame.
Example: Factorial (Recursive)
int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}
What happens when calling factorial(4)?
Call stack builds like this:
factorial(4)
factorial(3)
factorial(2)
factorial(1)
There are 4 stack frames in memory.
If n = 1000, there will be 1000 stack frames.
Space Complexity
- Each call uses O(1) space
- There are n calls
So:
Space Complexity = O(n)
⚠️ This is the hidden cost of recursion.
An iterative version would use only O(1) space.
The space complexity of a recursive function is proportional to the height of its recursion tree, i.e., the maximum depth of the call stack.
In-place Algorithm
An in-place algorithm modifies the input directly and uses only constant extra memory.
In-place: O(1) Extra Space
void reverseInPlace(vector<int>& arr) {
    int left = 0, right = arr.size() - 1;
    while (left < right) {
        swap(arr[left], arr[right]); // swap elements
        left++;
        right--;
    }
}
Why O(1)?
We only use:
- Two integers (`left`, `right`)
- No extra arrays
Extra space does not depend on n.
Not In-place: O(n) Extra Space
vector<int> reverseNew(vector<int>& arr) {
    vector<int> result(arr.rbegin(), arr.rend()); // completely new array
    return result;
}
This creates a new vector of size n.
So:
- Extra Space = O(n)
Time-Space Tradeoff
Sometimes we use more memory to reduce time.
Version 1: Slow but Memory Efficient
// Time: O(n²), Space: O(1)
bool hasDuplicate(vector<int>& arr) {
    for (int i = 0; i < arr.size(); i++) {
        for (int j = i + 1; j < arr.size(); j++) {
            if (arr[i] == arr[j]) return true;
        }
    }
    return false;
}
Why?
- Two nested loops → O(n²) time
- No extra data structure → O(1) space
Version 2: Faster but Uses Extra Memory
// Time: O(n), Space: O(n)
bool hasDuplicate(vector<int>& arr) {
    unordered_map<int, int> freq;
    for (int x : arr) {
        freq[x]++;
        if (freq[x] == 2) return true;
    }
    return false;
}
Why?
- One pass → O(n) time
- Hash map stores up to `n` elements → O(n) space
Time Complexity
Time complexity measures how the running time of an algorithm increases with the input size N.
In competitive programming, a common rule of thumb is:
- A C++ program can perform about 10⁸ operations per second on most judges.
So when designing an algorithm, you estimate:
| N | Acceptable Complexity |
|---|---|
| N ≤ 10⁸ | O(N) |
| N ≤ 10⁵ | O(N log N) |
| N ≤ 10³ | O(N²) |
| N ≤ 20 | O(2ⁿ) |
| N ≤ 10 | O(N!) |
Logarithmic Complexity
In Computer Science, logarithmic complexity refers to algorithms whose running time grows very slowly as the input size (n) increases.
It is written as:

O(log n)
This means that when the input size increases exponentially, the number of steps increases only linearly.
Logarithm Base Conversion
Logarithms can have different bases, such as:

log₂(n), log₁₀(n), logₑ(n)

Using the change of base formula:

log_b(n) = log_c(n) / log_c(b)

Since:

log₁₀(2) ≈ 0.301, so 1 / log₁₀(2) ≈ 3.32

So:

log₂(n) = log₁₀(n) / log₁₀(2) ≈ 3.32 × log₁₀(n)

That is why we write:

O(log n) — without specifying the base.
Why Base Doesn't Matter in Big-O
In algorithm analysis, the base of the logarithm is ignored:

O(log₂ n) = O(log₁₀ n) = O(log n)

because the bases differ only by a constant factor (≈3.32), and constants are ignored in Big-O notation.
Factorial Time Complexity
This occurs when we generate all permutations of N elements.
Example:
N = 3
Permutations = 3! = 6
As N grows:
| N | N! |
|---|---|
| 5 | 120 |
| 10 | 3,628,800 |
So factorial algorithms only work for very small N (≤10).
C++ Permutation Example
Using STL:
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> v = {1, 2, 3};
    do {
        for (int x : v)
            cout << x << " ";
        cout << endl;
    } while (next_permutation(v.begin(), v.end()));
    return 0;
}
Output:
1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1
`next_permutation` works in O(N) worst-case time (amortized O(1) per call) by swapping elements.
Time complexity of generating and printing all permutations:

O(N! × N)
Exponential Time Complexity
Exponential complexity occurs when the number of operations grows exponentially with input size.
Example:

O(2ⁿ)
Common in:
- recursion
- subset generation
- brute force search
A key property: adding one more element doubles the total work, since 2ⁿ⁺¹ = 2 × 2ⁿ.
Overlapping Subproblems
Many recursive algorithms recompute the same subproblem repeatedly.
Example: Fibonacci
fib(n) = fib(n-1) + fib(n-2)
This leads to repeated calculations, causing O(2ⁿ) complexity.
Dynamic Programming reduces it to O(N).
Computational Complexity
P Problems (Polynomial Time)
Class P contains problems that can be solved efficiently by a deterministic algorithm in polynomial time.
Polynomial time means the running time can be expressed as:

O(nᵏ)
where n = input size and k = constant.
These are considered tractable (practically solvable) problems.
Key idea
If a problem is in P, we can find the answer quickly.
Examples
- Sorting numbers
Algorithms like Merge Sort or Quick Sort solve sorting in:

O(n log n)
Example
Input: [5, 2, 9, 1]
Output: [1, 2, 5, 9]
This is efficient → P problem
Shortest path in graphs
Dijkstra's Algorithm finds the shortest path in polynomial time.
Cities connected by roads with distances.
Find shortest route from A → D.
This can be solved efficiently → P
NP Problems (Nondeterministic Polynomial Time)
A problem is in NP if:
A proposed solution can be verified in polynomial time.
Important:
- Solving may be hard
- Checking the solution is easy
Key idea
Guessing is hard, verifying is easy.
Example
- Subset Sum Problem
Given numbers: [3, 7, 10, 12]
Target: 22
Question: Is there a subset that sums to 22?
Possible solution: 10 + 12 = 22
Verification: 10 + 12 = 22 ✔
Verification takes linear time.
But finding the subset may require checking many combinations.
So it is in NP.
NP-Complete Problems
A problem is NP-Complete if it satisfies two conditions:
- It is in NP
- Every NP problem can be reduced to it in polynomial time
Meaning:
If we can solve one NP-Complete problem efficiently, we can solve all NP problems efficiently.
This relates to the famous P versus NP Problem.
Key idea
NP-Complete problems are the hardest problems inside NP.
Classic Example
- Traveling Salesman Problem (Decision Version)
The famous Travelling Salesman Problem.
Problem:
A salesman must visit all cities exactly once and return home.
Question (decision version):
Is there a tour shorter than distance D?
Example
Cities: A, B, C, D
Distances between each city.
Ask:
Is there a route ≤ 20 km?
Checking a proposed route is easy:
A → C → B → D → A
distance = 18
Verification = polynomial.
But finding the optimal route grows factorially:

O(n!)
So it is NP-Complete.
Other NP-Complete Problems
Some famous ones:
- Boolean Satisfiability Problem (SAT)
- 3-SAT Problem
- Hamiltonian Path Problem
- Vertex Cover Problem
SAT was the first problem proven to be NP-Complete, by Stephen Cook (the Cook–Levin theorem).
NP-Hard Problems
A problem is NP-Hard if:
It is at least as hard as the hardest NP problems.
Important:
NP-Hard problems do not have to be in NP.
Meaning:
- Solution may not even be verifiable in polynomial time.
Relationship
NP-Hard
┌─────────────────────┐
│  (hard problems     │
│   beyond NP)        │
│  ┌───────────────┐  │
│  │  NP-Complete  │  │
└──┼───────────────┼──┘
   │      NP       │
   │    ┌─────┐    │
   │    │  P  │    │
   │    └─────┘    │
   └───────────────┘

NP-Complete is the overlap of NP and NP-Hard; NP-Hard extends beyond NP.
So:
P ⊆ NP
NP-Complete ⊆ NP
NP-Hard ⊇ NP-Complete
Example NP-Hard Problem
- Optimization version of TSP
The optimization version of Travelling Salesman Problem:
Find the shortest possible tour visiting all cities.
Why NP-Hard?
Because solving it also solves the NP-Complete decision version.
Example Comparing All Four
Problem: Find shortest route between cities.
| Version | Class |
|---|---|
| Shortest path between two cities | P |
| Check if route ≤ 20 km | NP |
| Decision TSP | NP-Complete |
| Find optimal tour | NP-Hard |
Intuition Summary
| Class | Meaning |
|---|---|
| P | Easy to solve |
| NP | Easy to verify |
| NP-Complete | Hardest problems in NP |
| NP-Hard | At least as hard as NP |
Simple memory trick
P → Solve fast
NP → Check fast
NP-Complete → Hardest checkable problems
NP-Hard → Harder or equal to NP-Complete
The Big Question: P vs NP
The question is:

Does P = NP?
Meaning:
If a solution can be verified quickly, can it also be found quickly?
Two possibilities:
Case 1: P = NP
Then every NP problem has a fast algorithm.
Huge consequences:
- cryptography would break
- optimization problems become easy
- many AI problems become trivial
Case 2: P ≠ NP
Then some problems are intrinsically hard to solve, even though solutions are easy to check.
Most computer scientists believe this is true.
When You Mention P vs NP
You mention P vs NP when discussing:
Hard search problems
Example:
- Traveling Salesman Problem
Find the shortest route visiting cities.
Checking a route is easy.
Finding the best route is very hard.
NP-complete problems
Problems like:
- Boolean Satisfiability Problem
- Vertex Cover Problem
These are the hardest problems in NP.
If any one of them is solved in polynomial time → P = NP.
Optimization problems
Examples:
- scheduling
- routing
- packing
- cryptography
Example Explanation
Suppose the problem is:
Given a graph, does there exist a path visiting every node exactly once?
This is the Hamiltonian Path Problem.
If someone gives you a path:
A → B → D → C
You can verify quickly:
- every node visited
- no repeats
Verification: polynomial time
Finding the path: unknown if polynomial
So it belongs to NP.
Why It Is Famous
The P vs NP problem is one of the Millennium Prize Problems announced by the Clay Mathematics Institute.
Prize:
$1,000,000 for a proof.
It is still unsolved.
Simple summary
| Concept | Meaning |
|---|---|
| P | Problems solvable quickly |
| NP | Solutions verifiable quickly |
| P vs NP | Are solving and verifying equally easy? |
Most researchers believe:

P ≠ NP

but it has never been proven.
Question
What's the relation between a Fibonacci recursion tree and a perfect binary tree?
The relation between a Fibonacci recursion tree and a perfect binary tree comes from how the recursive calls expand.
Fibonacci Recursion Tree
The Fibonacci function is:

fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2)
If implemented recursively:
fib(n):
if n <= 1 return n
return fib(n-1) + fib(n-2)
Each call generates two recursive calls: fib(n-1) and fib(n-2).
This creates a recursion tree.
Example for n = 5:
fib(5)
/ \
fib(4) fib(3)
/ \ / \
fib(3) fib(2) fib(2) fib(1)
/ \
fib(2) fib(1)
This tree shows how the recursive calls branch.
Perfect Binary Tree
A Perfect Binary Tree is a binary tree where:
- Every internal node has exactly 2 children
- All leaves are at the same level
Example:
A
/ \
B C
/ \ / \
D E F G
Properties:
- A perfect binary tree of height h has exactly 2^(h+1) − 1 nodes
- It has 2^h leaves, all at the same depth h
Key Difference
The Fibonacci recursion tree is NOT a perfect binary tree.
Reasons:
- Unequal depth: some branches end earlier.
- Different subtree sizes: the fib(n-1) subtree is always larger than the fib(n-2) subtree.
Example:
fib(4)
├── fib(3)
│ ├── fib(2)
│ └── fib(1)
└── fib(2)
The right subtree is smaller.
What is the time complexity of the best-known algorithm for multiplying two n×n matrices?
The standard (naive) matrix multiplication algorithm uses three nested loops:
for i = 1 to n
for j = 1 to n
for k = 1 to n
C[i][j] += A[i][k] * B[k][j]
This gives a time complexity of: O(n³)
Strassen’s Algorithm
Strassen improved matrix multiplication by reducing the number of recursive multiplications.
Time complexity: O(n^2.807)
More Advanced Algorithms
Further research improved the exponent even more using advanced algebraic techniques.
The best-known theoretical time complexity today is approximately: O(n^2.373)
Important Notes
- The naive algorithm is O(n³)
- Strassen reduces it to O(n^2.807)
- Advanced research reduces it further to about O(n^2.373)
- These advanced algorithms are mostly theoretical and not commonly used in practice
What is amortized time complexity?
Amortized time complexity analyzes the cost of a sequence of operations and spreads the total cost over all operations. It guarantees the average cost per operation even in the worst-case sequence.
- It is not the same as worst-case complexity.
- It is not the same as average-case complexity, because average-case analysis depends on probability, while amortized analysis does not.
- It applies to many data structures (like dynamic arrays, stacks, etc.), not only sorting algorithms.
In a dynamic array (like ArrayList), resizing occasionally takes O(n) time, but most insertions take O(1) time. When we distribute the resizing cost over many insertions, the amortized time complexity of append becomes O(1).
What's the difference between Average Case and Amortized analysis?
In algorithm analysis, Average Case and Amortized Analysis both describe performance that is not the worst case, but they mean very different things.
Average Case Analysis
Average Case measures the expected running time over all possible inputs, assuming some probability distribution of those inputs.
- We consider different inputs.
- Each input has some probability.
- We calculate the expected (average) cost.
Example: Searching in a list
- Best case: element is first → O(1)
- Worst case: element is last → O(n)
- Average case: element somewhere in the middle → O(n) (about n/2 comparisons)
So the average case depends on how inputs are distributed.
Key idea:
Average case = expectation over different inputs.
Amortized Analysis
Amortized Analysis measures the average cost per operation over a sequence of operations, even if some operations are expensive.
- We consider one sequence of operations
- Some operations may be very costly
- But most are cheap
- The expensive ones are spread out
Example: Dynamic array resizing (like Python list or C++ vector)
Appending elements:
- Normal append → O(1)
- Occasionally resizing → O(n)
Example sequence:
append → O(1)
append → O(1)
append → O(1)
resize → O(n)
append → O(1)
Over many operations, the average per operation is still O(1).
Key idea:
Amortized = average cost across operations, not across inputs.
Key Differences
| Aspect | Average Case | Amortized |
|---|---|---|
| What varies | Different inputs | Sequence of operations |
| Uses probability | Yes (input distribution) | No probability needed |
| Guarantees | Expected time | Guaranteed average per operation |
| Example | Hash table search | Dynamic array append |
Simple intuition
- Average case → average over different problems
- Amortized → average over many operations in one problem
One-line memory trick
Average case = average over inputs Amortized = average over operations
Example with vector push_back()
- We declare a vector without a size (`vector<int> vec;`).
- Assume its current capacity is n = 4.
- You `push_back()` 4 times.
- On the 5th `push_back()`, the vector resizes to twice the capacity (n = 8).
- Now you can `push_back()` again without resizing.
- Each `push_back()` has O(1) complexity, while a resize has O(N) complexity.
- The occasional large resize cost, spread over many cheap operations, amortizes to O(1) per `push_back()`.
What is the space complexity of a recursive function that makes n recursive calls in total, with no additional data structures, where each call adds one frame to the call stack?
In a recursive function, each recursive call adds one new frame to the call stack.
- If the function makes n recursive calls before reaching the base case
- The maximum depth of the call stack becomes n.
- Each stack frame takes O(1) space (since no extra data structures are used).
Therefore, total space used by the call stack is: O(1) × n = O(n)
What is the time complexity of the given code below?
for (int i = 1; i <= n; i = i * 3) {
    for (int j = 1; j <= n; j++) {
        sum++;
    }
}
- The outer loop multiplies `i` by 3 each time: 1, 3, 9, 27, ... so it runs log₃(n) times.
- The inner loop runs n times for each outer-loop iteration.
Therefore, total operations: n × log₃(n)
In Big-O notation, logarithm base does not matter, so: O(n log n)
What is the time complexity of the given code below?
int j = 0;
for (int i = 0; i < n; i++) {
    while (j < n) {
        j++;
    }
}
Step-by-step reasoning
- Outer loop (`for`)
  - Runs n times (`i = 0` to `n-1`).
- Inner loop (`while (j < n)`)
  - `j` starts at 0.
  - Each iteration increments `j` by 1.
  - Once `j` reaches n, the condition `j < n` becomes false.
- Important observation
  - `j` is not reset inside the outer loop.
  - The `while` loop runs only until `j` reaches n once.
Actual execution
- When `i = 0`: `while (j < n)` runs n times (`j` goes from 0 → n).
- When `i = 1` to `n-1`: `j` is already n, so `while (j < n)` runs 0 times.
Total operations
- `while` loop total iterations = n
- `for` loop iterations = n, but the inner work after the first iteration is constant.
Total work ≈ n + n → O(n)
Final Time Complexity: O(n)
Key insight
Even though it looks like nested loops, the pointer j only moves forward once, making the total number of inner loop executions linear.
This pattern often appears in two-pointer / sliding window algorithms where nested loops still result in O(n) time.
What is the time complexity of the given code below?
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j += i) {
        cout << i << " " << j << endl;
    }
}
Step 1: Outer loop
The outer loop runs:

i = 1, 2, 3, ..., n

So it executes n times.
Step 2: Inner loop behavior
The inner loop increments by i each time:

j = 1, 1+i, 1+2i, ...

It stops when j > n.
Number of iterations for a fixed i: about n / i
Step 3: Total operations
Total work:

n/1 + n/2 + n/3 + ... + n/n

Factor out n:

n × (1 + 1/2 + 1/3 + ... + 1/n)

The sum

1 + 1/2 + 1/3 + ... + 1/n

is the harmonic series, which grows as:

H(n) ≈ ln(n) = O(log n)
Step 4: Final complexity
Time Complexity: O(n log n)
Intuition
The work per i decreases:
| i | inner iterations |
|---|---|
| 1 | n |
| 2 | n/2 |
| 3 | n/3 |
| 4 | n/4 |
| ... | ... |
| n | 1 |
Adding these gives roughly:

n × (1 + 1/2 + 1/3 + ... + 1/n) ≈ n ln n = O(n log n)
Tip: This pattern appears often in number theory algorithms like divisor enumeration and sieve-like loops.
What is the time and space complexity of the given code below?
void fun(int n) {
    if (n <= 0) return;
    cout << n << endl;
    fun(n - 1);
    fun(n - 1);
}
Time Complexity
Each call to fun(n) makes two recursive calls to fun(n-1).
So the recurrence relation is:

T(n) = 2T(n-1) + c

Expanding:

- T(n) = 2T(n-1) + c
- = 2(2T(n-2) + c) + c = 4T(n-2) + 3c
- = 8T(n-3) + 7c

After k steps:

T(n) = 2ᵏ T(n−k) + (2ᵏ − 1)c

When k = n:

T(n) = 2ⁿ T(0) + (2ⁿ − 1)c

Therefore,
Time Complexity = O(2ⁿ)
This happens because the recursion forms a binary recursion tree where the number of calls doubles at each level.
Total number of calls ≈ 2ⁿ − 1.
Space complexity depends on the maximum recursion depth, not the number of calls.
- Each call decreases
nby 1 - The deepest chain is:
fun(n)
fun(n-1)
fun(n-2)
...
fun(1)
fun(0)
So the maximum stack depth = n.
Space Complexity = O(n)
Key Idea:
Even though there are ~2ⁿ calls, they are not on the stack at the same time. Only one path of length n exists in the recursion stack at any moment.
What is the space complexity of the given code below?
void fun(int n) {
    if (n <= 0) return;
    int arr[1000];
    fun(n - 1);
}
Recursion Depth
Each recursive call decreases n by 1:
fun(n)
fun(n-1)
fun(n-2)
...
fun(1)
fun(0)
So the maximum recursion depth = n.
Space Used Per Call
Inside every call:
int arr[1000];
- This allocates 1000 integers on the stack.
- If an `int` is 4 bytes, then each call uses 1000 × 4 = 4000 bytes.
But in asymptotic analysis, constants are ignored, so this is O(1) space per call.
Total Space Complexity
Since n recursive calls can exist simultaneously on the call stack, and each uses O(1) space:

Space Complexity = O(1) × n = O(n)
Important Insight
Even though arr[1000] is large, it is still constant size. If it were:
int arr[n];
then the space complexity would become O(n²) because each recursive call would allocate O(n) space.
What is the time and space complexity of the given code below?
void fun(int n) {
    if (n <= 1) return;
    fun(n / 2);
    fun(n / 2);
}
Time Complexity
Each call makes 2 recursive calls with input n/2.
So the recurrence relation is:

T(n) = 2T(n/2) + O(1)

Using the Master Theorem:

- a = 2
- b = 2
- f(n) = O(1)

Compute:

n^(log_b a) = n^(log₂ 2) = n

Since f(n) = O(1) is polynomially smaller than n,

Time Complexity = O(n)
Intuition (Recursion Tree)
| Level | Number of Calls | Work per Level |
|---|---|---|
| 0 | 1 | 1 |
| 1 | 2 | 2 |
| 2 | 4 | 4 |
| 3 | 8 | 8 |
| ... | ... | ... |
| log n | 2^(log n) = n | n |

Total work:

1 + 2 + 4 + ... + n = 2n − 1 = O(n)
Space Complexity
Space depends on the maximum recursion depth, not total calls.
Each recursive call reduces n to n/2:
n → n/2 → n/4 → n/8 → ... → 1
The depth of this chain is:

log₂(n)
Even though there are two recursive calls, they are not executed simultaneously on the stack—one finishes before the other begins.
Space Complexity = O(log n)
Final Answer
| Complexity | Value |
|---|---|
| Time Complexity | O(n) |
| Space Complexity | O(log n) |
Key Idea:
- Work grows like a full binary tree with height log n, giving O(n) total calls.
- Stack depth only follows one path, which is log n.
How C++ sort() works?
In C++, the sort() function from the C++ Standard Library is implemented using an algorithm called Introsort. Introsort is a hybrid sorting algorithm that combines multiple algorithms to guarantee both speed and good worst-case performance.
Introsort = Quick Sort + Heap Sort + Insertion Sort
It starts with Quicksort, but if the recursion gets too deep (which could cause worst-case O(n²)), it switches to Heapsort. For very small partitions, it uses Insertion Sort because it is faster on tiny arrays.
So internally:
start → Quicksort
deep recursion → Heapsort
small arrays → Insertion Sort
How It Works Step by Step
- Start with Quicksort
Quicksort divides the array around a pivot.
Example:
[7, 2, 9, 1, 5]
pivot = 5
left = [2,1]
right = [7,9]
Then it recursively sorts both sides.
Average complexity:

O(n log n)
- Detect Worst Case
If recursion depth exceeds:

2 × log₂(n)

the algorithm assumes Quicksort is degrading toward O(n²).
Then it switches to Heapsort, which guarantees:

O(n log n)
worst-case time.
- Use Insertion Sort for Small Parts
When subarray size becomes very small (around 16 elements), it switches to insertion sort because:
- Less overhead
- Better cache performance
- Time Complexity
| Case | Complexity |
|---|---|
| Best | O(n log n) |
| Average | O(n log n) |
| Worst | O(n log n) |
Because the heapsort fallback prevents O(n²).
- Space Complexity
std::sort is in-place.
Space complexity:

O(log n)
This comes from the recursion stack.
Why C++ Uses Introsort
It combines the strengths of multiple algorithms:
| Algorithm | Strength |
|---|---|
| Quicksort | Very fast on average |
| Heapsort | Guarantees worst-case O(n log n) |
| Insertion sort | Fast for tiny arrays |
So std::sort() is extremely optimized and usually faster than most handwritten sorts.
Complexities:
- Time: O(n log n)
- Space: O(log n)
If you need stable sorting, use stable_sort(), which uses merge sort internally.
What's the time complexity of the following code?
vector<int> divisors;
for (int i = 1; i * i <= n; i++) {
    if (n % i == 0) {
        divisors.push_back(i);
        if (i != n / i)
            divisors.push_back(n / i);
    }
}
sort(divisors.begin(), divisors.end());
for (int d : divisors)
    cout << d << " ";
- The loop runs up to √n times.
- In the worst case, we collect at most about 2√n divisors.

So the vector size can be at most:

O(√n)

Sorting requires:

O(√n × log(√n))

Substituting log(√n) = ½ log(n):

O(√n × log n)

Ignoring constants, the overall time complexity becomes:

O(√n log n)
Optimized Approach (No Sorting Needed)
vector<int> large;
for (int i = 1; i * i <= n; i++) {
    if (n % i == 0) {
        cout << i << " ";
        if (i != n / i)
            large.push_back(n / i);
    }
}
for (int i = large.size() - 1; i >= 0; i--) {
    cout << large[i] << " ";
}
- The first loop runs up to √n times.
- The second loop prints the stored divisors (at most √n of them).

So the total time complexity is:

O(√n)
This approach avoids sorting and prints the divisors in sorted order.
What is the space complexity?
Map<Integer, List<Integer>> map = new HashMap<>();
for (int i = 0; i < n; i++) {
    List<Integer> list = new ArrayList<>();
    for (int j = 0; j < n; j++) {
        list.add(j);
    }
    map.put(i, list);
}
- The `Map` stores n keys.
- For each key, a `List` of size n is created.
- So, the total number of stored elements is n × n = n².
- Additionally, there are n list objects and n map entries, but these are dominated by the n² elements.
Final Space Complexity:

O(n²)
What is the time complexity?
void solve(int n) {
    if (n <= 1)
        return;
    for (int i = 0; i < n; i++) {
        print(i);
    }
    solve(n / 3);
}
The recurrence relation is:
T(n) = T(n/3) + O(n)
Expanding:
n + n/3 + n/9 + n/27 + ...
This is a geometric series.
Sum ≈ n × (1 + 1/3 + 1/9 + ...)
= n × (3/2)
= O(n)
The first level dominates the total work, so the overall time complexity is: O(n)