Understanding Time Complexity and Big O Notation - A Beginner's Guide with Examples
May 10, 2015

When you write code, it’s not just about making it work — it’s about making it work efficiently. This is where Time Complexity and Big O Notation come into play. These concepts help us understand how fast or slow our algorithms are as the input size grows.
In this article, we’ll break down these concepts in simple terms with plenty of examples and code.
What is Time Complexity?
Time Complexity is a way to describe how the running time of an algorithm increases as the size of the input increases.
Think of it this way:
- If you have a list of 10 items, your algorithm might take 1 second
- If you have a list of 100 items, will it take 10 seconds? 100 seconds? Or still 1 second?
Time complexity helps us answer this question without actually running the code with different input sizes.
What is Big O Notation?
Big O Notation is the mathematical notation we use to express time complexity. It describes an upper bound on the growth rate, and is most commonly used to characterize the worst-case scenario: the maximum time an algorithm could take.
The “O” stands for “Order of” and represents the upper bound of growth rate.
Common Big O notations (from fastest to slowest):
| Notation | Name | Example |
|---|---|---|
| O(1) | Constant | Array access by index |
| O(log n) | Logarithmic | Binary search |
| O(n) | Linear | Simple loop through array |
| O(n log n) | Linearithmic | Merge sort, Quick sort (average case) |
| O(n²) | Quadratic | Nested loops |
| O(2ⁿ) | Exponential | Recursive Fibonacci |
| O(n!) | Factorial | Generating all permutations |
O(1) - Constant Time
An algorithm has O(1) time complexity when its execution time is constant, regardless of input size.
Example: Accessing an Array Element
```c
#include <stdio.h>

int getFirstElement(int arr[], int size) {
    return arr[0]; // Always takes the same time
}

int getElementAtIndex(int arr[], int index) {
    return arr[index]; // Also O(1) - direct memory access
}

int main() {
    int arr[] = {10, 20, 30, 40, 50};
    printf("First element: %d\n", getFirstElement(arr, 5));
    printf("Element at index 3: %d\n", getElementAtIndex(arr, 3));
    return 0;
}
```
Output:
```
First element: 10
Element at index 3: 40
```
Whether the array has 5 elements or 5 million elements, accessing a single element by index takes the same time. This is O(1).
More O(1) Examples:
- Pushing/popping from a stack
- Inserting/removing from the front of a linked list
- Checking if a number is even or odd
- Getting the length of an array (if stored)
O(n) - Linear Time
An algorithm has O(n) time complexity when its execution time grows linearly with the input size.
Example: Finding Maximum Element
```c
#include <stdio.h>

int findMax(int arr[], int n) {
    int max = arr[0];
    for (int i = 1; i < n; i++) { // Loops n-1 times
        if (arr[i] > max) {
            max = arr[i];
        }
    }
    return max;
}

int main() {
    int arr[] = {34, 12, 89, 45, 67, 23};
    int n = 6;
    printf("Maximum element: %d\n", findMax(arr, n));
    return 0;
}
```
Output:
```
Maximum element: 89
```
If the array has 10 elements, we loop 10 times. If it has 1000 elements, we loop 1000 times. The time grows linearly with n.
JavaScript Version:
```javascript
function findMax(arr) {
    let max = arr[0];
    for (let i = 1; i < arr.length; i++) {
        if (arr[i] > max) {
            max = arr[i];
        }
    }
    return max;
}

const numbers = [34, 12, 89, 45, 67, 23];
console.log("Maximum element:", findMax(numbers));
```
More O(n) Examples:
- Linear search
- Printing all elements of an array
- Counting occurrences of an element
- Summing all elements
O(n²) - Quadratic Time
An algorithm has O(n²) time complexity when its running time grows with the square of the input size, typically because it contains nested loops that each run up to n times.
Example: Bubble Sort
```c
#include <stdio.h>

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {         // Outer loop: n times
        for (int j = 0; j < n - i - 1; j++) { // Inner loop: up to n times
            if (arr[j] > arr[j + 1]) {
                // Swap elements
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = 7;
    printf("Original array: ");
    printArray(arr, n);
    bubbleSort(arr, n);
    printf("Sorted array: ");
    printArray(arr, n);
    return 0;
}
```
Output:
```
Original array: 64 34 25 12 22 11 90
Sorted array: 11 12 22 25 34 64 90
```
With nested loops, if n = 10, we do roughly 100 operations. If n = 100, we do roughly 10,000 operations. The time grows with the square of n.
JavaScript Version:
```javascript
function bubbleSort(arr) {
    const n = arr.length;
    for (let i = 0; i < n - 1; i++) {
        for (let j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                // Swap using destructuring
                [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
            }
        }
    }
    return arr;
}

const numbers = [64, 34, 25, 12, 22, 11, 90];
console.log("Original:", [...numbers]);
console.log("Sorted:", bubbleSort(numbers));
```
More O(n²) Examples:
- Selection sort
- Insertion sort
- Checking for duplicates with nested loops
- Printing all pairs
O(log n) - Logarithmic Time
An algorithm has O(log n) time complexity when it divides the problem in half with each step.
Example: Binary Search
Binary search is the classic O(log n) algorithm. It requires a sorted array.
```c
#include <stdio.h>

int binarySearch(int arr[], int n, int target) {
    int left = 0;
    int right = n - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2; // Avoids overflow of (left + right)
        if (arr[mid] == target) {
            return mid;         // Found!
        } else if (arr[mid] < target) {
            left = mid + 1;     // Search right half
        } else {
            right = mid - 1;    // Search left half
        }
    }
    return -1; // Not found
}

int main() {
    int arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    int n = 10;
    int target = 23;
    int result = binarySearch(arr, n, target);
    if (result != -1) {
        printf("Element %d found at index %d\n", target, result);
    } else {
        printf("Element not found\n");
    }
    return 0;
}
```
Output:
```
Element 23 found at index 5
```
Why is it O(log n)?
Let’s trace through the search for 23 in the array:
| Step | Left | Right | Mid | arr[mid] | Action |
|---|---|---|---|---|---|
| 1 | 0 | 9 | 4 | 16 | 23 > 16, search right |
| 2 | 5 | 9 | 7 | 56 | 23 < 56, search left |
| 3 | 5 | 6 | 5 | 23 | Found! |
With 10 elements, we needed only 3 comparisons. With 1,000,000 elements, we’d need only about 20 comparisons!
The formula is: log₂(n)
| n | log₂(n) |
|---|---|
| 10 | ~3 |
| 100 | ~7 |
| 1,000 | ~10 |
| 1,000,000 | ~20 |
| 1,000,000,000 | ~30 |
JavaScript Version:
```javascript
function binarySearch(arr, target) {
    let left = 0;
    let right = arr.length - 1;
    while (left <= right) {
        const mid = Math.floor((left + right) / 2);
        if (arr[mid] === target) {
            return mid;
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return -1;
}

const sortedArray = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91];
console.log("Index of 23:", binarySearch(sortedArray, 23));
console.log("Index of 100:", binarySearch(sortedArray, 100));
```
O(n log n) - Linearithmic Time
Many efficient sorting algorithms like Merge Sort and Quick Sort have O(n log n) time complexity.
Example: Merge Sort
```c
#include <stdio.h>

void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1;
    int n2 = right - mid;
    int L[n1], R[n2]; // Temporary arrays (C99 variable-length arrays)

    for (int i = 0; i < n1; i++)
        L[i] = arr[left + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[mid + 1 + j];

    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);      // Sort first half
        mergeSort(arr, mid + 1, right); // Sort second half
        merge(arr, left, mid, right);   // Merge the halves
    }
}

int main() {
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    int n = 7;
    printf("Original: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
    mergeSort(arr, 0, n - 1);
    printf("Sorted: ");
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
    return 0;
}
```
Output:
```
Original: 38 27 43 3 9 82 10
Sorted: 3 9 10 27 38 43 82
```
Why O(n log n)?
- We divide the array in half (log n levels)
- At each level, we process all n elements
- Total: n × log n operations
Visual Comparison of Time Complexities
Here’s how different time complexities compare as input size grows:
| n | O(1) | O(log n) | O(n) | O(n log n) | O(n²) |
|---|---|---|---|---|---|
| 10 | 1 | 3 | 10 | 33 | 100 |
| 100 | 1 | 7 | 100 | 664 | 10,000 |
| 1,000 | 1 | 10 | 1,000 | 9,966 | 1,000,000 |
| 10,000 | 1 | 13 | 10,000 | 132,877 | 100,000,000 |
As you can see, O(n²) algorithms become extremely slow with large inputs, while O(log n) stays manageable.
Rules for Calculating Big O
Here are some simple rules to determine the time complexity of your code:
Rule 1: Drop Constants
O(2n) becomes O(n)
O(100) becomes O(1)
Constants don’t matter when n is very large.
Rule 2: Drop Lower Order Terms
O(n² + n) becomes O(n²)
O(n + log n) becomes O(n)
When n is very large, the highest order term dominates.
Rule 3: Different Inputs = Different Variables
```c
void printBoth(int arr1[], int n1, int arr2[], int n2) {
    for (int i = 0; i < n1; i++) {
        printf("%d ", arr1[i]);
    }
    for (int j = 0; j < n2; j++) {
        printf("%d ", arr2[j]);
    }
}
```
This is O(n + m), not O(n). Use different variables for different inputs.
Rule 4: Nested Loops Multiply
```c
for (int i = 0; i < n; i++) {     // O(n)
    for (int j = 0; j < n; j++) { // O(n)
        // O(1) operation
    }
}
// Total: O(n) × O(n) = O(n²)
```
Rule 5: Sequential Loops Add
```c
for (int i = 0; i < n; i++) { // O(n)
    // operation
}
for (int j = 0; j < n; j++) { // O(n)
    // operation
}
// Total: O(n) + O(n) = O(2n) = O(n)
```
Practice: Analyze These Code Snippets
Snippet 1:
```c
int sum = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        sum += i + j;
    }
}
```
Answer: O(n²) - nested loops
Snippet 2:
```c
int sum = 0;
for (int i = 1; i < n; i = i * 2) {
    sum += i;
}
```
Answer: O(log n) - loop variable doubles each time
Snippet 3:
```c
int sum = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < 10; j++) {
        sum += i + j;
    }
}
```
Answer: O(n) - the inner loop runs a constant 10 times
Snippet 4:
```c
int sum = 0;
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        sum++;
    }
}
```
Answer: O(n²) - still quadratic (runs n + (n-1) + (n-2) + … + 1 = n(n+1)/2 times)
Space Complexity
While we’ve focused on Time Complexity, there’s also Space Complexity — how much memory an algorithm uses.
| Space Complexity | Example |
|---|---|
| O(1) | Using a fixed number of variables |
| O(n) | Creating an array of size n |
| O(n²) | Creating a 2D array of size n×n |
```c
// O(1) space - only using a fixed number of variables
int sum = 0;
for (int i = 0; i < n; i++) {
    sum += arr[i];
}

// O(n) space - creating a new array
int* copy = malloc(n * sizeof(int));
for (int i = 0; i < n; i++) {
    copy[i] = arr[i];
}
```
Summary
Understanding Time Complexity and Big O Notation is essential for writing efficient code:
| Complexity | Growth Rate | Example Algorithms |
|---|---|---|
| O(1) | Constant | Array access, hash table lookup |
| O(log n) | Logarithmic | Binary search |
| O(n) | Linear | Linear search, simple loops |
| O(n log n) | Linearithmic | Merge sort, Quick sort (average case) |
| O(n²) | Quadratic | Bubble sort, nested loops |
Key Takeaways:
- Big O describes the worst-case growth rate
- We drop constants and lower-order terms
- Nested loops multiply, sequential loops add
- Always aim for the most efficient algorithm for your use case
Understanding these concepts will help you:
- Write faster code
- Make better algorithm choices
- Ace technical interviews
Happy coding! 🚀