
Big O Notation


Big O notation is a way to describe how the running time or space requirements of an algorithm grow as the input size increases. It helps us compare the efficiency of different algorithms, especially for large inputs.

  • It helps us predict how fast or slow an algorithm will be as the problem gets bigger.
  • It allows us to compare different solutions and pick the most efficient one.
  • It gives us a common language to talk about performance.

The most common time complexities, from fastest to slowest, are (each is illustrated in the code sketch after this list):

  1. O(1): Constant time (fastest)
  2. O(log n): Logarithmic time (e.g., binary search)
  3. O(n): Linear time (e.g., simple loops)
  4. O(n²): Quadratic time (e.g., nested loops)
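To make these growth rates concrete, here is a minimal Java sketch with one method per complexity class. The class name ComplexityExamples and the method names are illustrative, not from any particular library.

// Minimal, self-contained illustrations of the four growth rates above.
class ComplexityExamples {

    // O(1): constant time - a single array access, regardless of length
    static int firstElement(int[] data) {
        return data[0];
    }

    // O(log n): logarithmic time - binary search halves the search range each step
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;   // found
            if (sorted[mid] < target) lo = mid + 1;  // search right half
            else hi = mid - 1;                       // search left half
        }
        return -1; // not present
    }

    // O(n): linear time - every element is visited exactly once
    static long sum(int[] data) {
        long total = 0;
        for (int x : data) {
            total += x;
        }
        return total;
    }

    // O(n²): quadratic time - nested loops examine every pair of elements
    static int countEqualPairs(int[] data) {
        int count = 0;
        for (int i = 0; i < data.length; i++) {
            for (int j = i + 1; j < data.length; j++) {
                if (data[i] == data[j]) count++;
            }
        }
        return count;
    }
}

Doubling the length of data adds roughly one more step to binarySearch, doubles the work of sum, and quadruples the work of countEqualPairs.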


Suppose you have a list of numbers:

// O(n) example: print each of the n numbers once
int n = numbers.length;
for (int i = 0; i < n; i++) {
    System.out.println(numbers[i]);
}

Here, the running time grows in direct proportion to the length of the list: twice as many numbers means twice as many print statements, so the loop is O(n).

The table below compares the best, average, and worst-case time complexities of some common searching and sorting algorithms:

Algorithm        Best Case   Average Case   Worst Case
Linear Search    O(1)        O(n)           O(n)
Binary Search    O(1)        O(log n)       O(log n)
Bubble Sort      O(n)        O(n²)          O(n²)
Selection Sort   O(n²)       O(n²)          O(n²)
Insertion Sort   O(n)        O(n²)          O(n²)
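The best and worst cases differ because the input's initial order matters. For example, Bubble Sort reaches its O(n) best case when the common early-exit optimization is used: if a full pass makes no swaps, the array is already sorted and the algorithm can stop. A minimal sketch, assuming an int array and the illustrative method name bubbleSort:

// Bubble Sort with an early-exit flag (illustrative sketch).
// Best case O(n): an already-sorted array needs just one pass with no swaps.
// Worst case O(n²): a reverse-sorted array needs about n passes of n comparisons.
static void bubbleSort(int[] a) {
    for (int pass = 0; pass < a.length - 1; pass++) {
        boolean swapped = false;
        for (int i = 0; i < a.length - 1 - pass; i++) {
            if (a[i] > a[i + 1]) {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break; // no swaps means already sorted: stop early
    }
}

On a sorted array the outer loop exits after one pass, while on a reverse-sorted array every pass performs swaps, which is where the O(n²) worst case in the table comes from.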