What Is Big O Notation? Understanding Algorithm Complexity With Real Examples
You write a function that works perfectly with 10 elements. Tests pass, code review approved, shipped to production. Three months later, when the data reaches 10 million, the system stops responding.
This scenario is real. And it happens repeatedly to developers who don't understand Big O.
What Is Big O Notation?
Big O Notation is the mathematical notation used to describe how an algorithm behaves, in time and memory, as the input size grows. It gives an upper bound on the resources an algorithm will consume in the worst case.
What matters is not the absolute time but the growth rate. Whether a function runs in 10ms or 20ms depends on the environment. But whether the time doubles, quadruples, or barely changes when the data doubles: that is universal, and that is what Big O captures.
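The growth-rate idea can be made concrete by counting operations instead of measuring time. This is an illustrative sketch (linearSteps and halvingSteps are made-up helper names, not from any library):

```javascript
// Count operations instead of measuring time: operation counts are
// environment-independent, which is exactly what Big O describes.
function linearSteps(n) {
  let steps = 0;
  for (let i = 0; i < n; i++) steps++; // one step per element
  return steps;
}

function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2); // the problem halves each step
    steps++;
  }
  return steps;
}

console.log(linearSteps(1000), linearSteps(2000));   // 1000 2000 (doubling the input doubles the work)
console.log(halvingSteps(1000), halvingSteps(2000)); // 9 10 (doubling the input adds a single step)
```

Doubling the input doubles the linear count but adds exactly one step to the halving count. That contrast is the whole story of the sections below.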
O(1) — Constant Time
No matter how large the input, the operation completes in the same amount of time.
```javascript
// Accessing the first element of an array - always O(1)
function getFirst(arr) {
  return arr[0];
}

// Reading a value from an object - always O(1)
function getUser(users, id) {
  return users[id]; // hash map lookup
}

getFirst([1]);                      // 1ms
getFirst([1, 2, 3, ..., 10000000]); // 1ms (identical)
```
Array index access, hash map reads, stack push/pop — these are O(1). No matter how much data grows, the time doesn't change.
O(log n) — Logarithmic Time
The problem is halved at every step. When data doubles, only one step is added.
```javascript
// Binary Search - finding an element in a sorted array
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
```
At most 10 steps for an array of 1,000 elements. At most 20 steps for 1,000,000. Just 30 steps for 1,000,000,000. The data grew a million times over, yet only 20 extra steps were needed.
Binary search, balanced binary tree lookup, heap operations — all O(log n).
O(n) — Linear Time
Each element is processed once. When data doubles, time doubles.
```javascript
// Searching an array - in the worst case, every element is checked
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

// Sum calculation
function sum(arr) {
  return arr.reduce((total, num) => total + num, 0);
}
```
1,000 steps for 1,000 elements. 1,000,000 steps for 1,000,000. Linear growth.
O(n log n) — Linearithmic Time
The most efficient general-purpose sorting algorithms live here. Worse than O(n), but far better than O(n²).
```javascript
// Merge Sort - O(n log n)
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
}

function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  return [...result, ...left.slice(i), ...right.slice(j)];
}
```
JavaScript's Array.sort(), Python's sorted() — both O(n log n).
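One usage caveat when relying on Array.sort(): by default it compares elements as strings, so sorting numbers needs an explicit comparator. A quick illustration (this is a correctness note about the API, separate from its complexity):

```javascript
const nums = [10, 2, 33, 4];

// Default sort compares as strings: "10" < "2" < "33" < "4",
// so this particular array is already "sorted" lexicographically
console.log([...nums].sort());                // [10, 2, 33, 4]

// A numeric comparator gives the expected order, still O(n log n)
console.log([...nums].sort((a, b) => a - b)); // [2, 4, 10, 33]
```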
O(n²) — Quadratic Time
Two nested loops. When data doubles, time quadruples.
```javascript
// Bubble Sort - O(n²)
function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}

// Classic mistake: finding duplicates with nested loops
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true; // O(n²)
    }
  }
  return false;
}

// The right approach: using Set for O(n)
function hasDuplicatesFast(arr) {
  return new Set(arr).size !== arr.length; // O(n)
}
```
1,000,000 operations for 1,000 elements. 100,000,000 for 10,000. This is why production systems crash.
O(2ⁿ) — Exponential Time
Each additional input element roughly doubles the number of operations. The algorithm becomes unusable even for modest inputs.
```javascript
// Fibonacci - naive recursive (O(2^n))
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2); // each call creates two branches
}

fibonacci(10); // 177 calls
fibonacci(20); // 21,891 calls
fibonacci(40); // 331,160,281 calls
fibonacci(50); // tens of billions of calls → system freezes
```
Reduced to O(n) with memoization:
```javascript
// Fibonacci - O(n) with memoization
function fibonacci(n, memo = {}) {
  if (n in memo) return memo[n];
  if (n <= 1) return n;
  memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
  return memo[n];
}
```
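An iterative version goes one step further. This sketch (fibonacciIter is an added name, not from the original) keeps the O(n) time while dropping both the memo object and the recursive call stack, giving O(1) space:

```javascript
// Fibonacci - iterative, O(n) time and O(1) space
function fibonacciIter(n) {
  if (n <= 1) return n;
  let prev = 0, curr = 1;
  for (let i = 2; i <= n; i++) {
    [prev, curr] = [curr, prev + curr]; // keep only the last two values
  }
  return curr;
}

fibonacciIter(50); // 12586269025, instantly
```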
The Comparison: Seeing It With Numbers
How dramatic the difference becomes as n grows:
n = 10:
O(1) → 1 operation
O(log n) → 3 operations
O(n) → 10 operations
O(n log n) → 33 operations
O(n²) → 100 operations
n = 1,000:
O(1) → 1 operation
O(log n) → 10 operations
O(n) → 1,000 operations
O(n log n) → 10,000 operations
O(n²) → 1,000,000 operations
n = 1,000,000:
O(1) → 1 operation
O(log n) → 20 operations
O(n) → 1,000,000 operations
O(n log n) → 20,000,000 operations
O(n²) → 1,000,000,000,000 operations ← minutes/hours
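The counts above can be sanity-checked with a few lines of arithmetic. A sketch (opCounts is an illustrative helper; note the article rounds its numbers, e.g. 10,000 for n = 1,000 where the exact value is 9,966):

```javascript
// Approximate operation counts per complexity class for a given n
function opCounts(n) {
  const log = Math.round(Math.log2(n)); // binary logarithm, rounded
  return {
    "O(1)": 1,
    "O(log n)": log,
    "O(n)": n,
    "O(n log n)": Math.round(n * Math.log2(n)),
    "O(n^2)": n * n,
  };
}

console.log(opCounts(1000));
// { 'O(1)': 1, 'O(log n)': 10, 'O(n)': 1000, 'O(n log n)': 9966, 'O(n^2)': 1000000 }
```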
Space Complexity: Memory, Not Time
Big O applies to memory usage as well as time.
```javascript
// O(1) space - constant memory
function sum(arr) {
  let total = 0; // a single variable
  for (const num of arr) total += num;
  return total;
}

// O(n) space - memory proportional to input
function double(arr) {
  return arr.map(x => x * 2); // creates a new array
}

// Recursive functions occupy space on the call stack
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1); // O(n) space — stack n levels deep
}
```
Practical Rule: When Does It Matter?
You don't need to calculate the Big O of every piece of code. It matters for: functions that operate on large data sets, functions called inside loops, database queries (the N+1 problem), and API endpoint response times.
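The N+1 problem is Big O in disguise: one query per record means O(n) round trips to the database. A sketch with hypothetical helpers (fetchUser, fetchUsersByIds, and the Map standing in for a real database are all made up for illustration):

```javascript
// A Map stands in for a database table; queryCount counts round trips.
const db = new Map([[1, "Ada"], [2, "Linus"], [3, "Grace"]]);
let queryCount = 0;

function fetchUser(id) {          // one query per call
  queryCount++;
  return db.get(id);
}

function fetchUsersByIds(ids) {   // one query for the whole batch
  queryCount++;
  return ids.map((id) => db.get(id));
}

const ids = [1, 2, 3];

// N+1 pattern: O(n) round trips
queryCount = 0;
const oneByOne = ids.map((id) => fetchUser(id));
console.log(queryCount); // 3

// Batched: O(1) round trips
queryCount = 0;
const batched = fetchUsersByIds(ids);
console.log(queryCount); // 1
```

With 3 users the difference is invisible; with 10,000 it is the difference between one query and ten thousand.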
O(n²) is acceptable for small, fixed-size data. For growing data, aim for O(n log n) or better.
When writing code, ask: "How does this behave with data 1000 times larger?" Knowing the answer is your best insurance against production surprises.