# What is Big-O Notation?

## Example

Big-O notation is a way of describing the long-term growth rate of a function. It's often used in the analysis of algorithms to describe an algorithm's runtime or related quantities like space complexity.

In common usage, big-O notation describes how an algorithm's runtime scales with the size of the input. For example, we'd say that selection sort has a runtime of O(n²) because the runtime grows quadratically as a function of the size of the array to sort. That is, if you double the size of the input, the runtime of selection sort should roughly quadruple. When using big-O notation, the convention is to drop coefficients and to ignore lower-order terms. For example, while technically it is not wrong to say that binary search runs in time O(2 log₂ n + 17), it's considered poor style and it would be better to write that binary search runs in time O(log n).
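To see where selection sort's quadratic growth comes from, here is a minimal sketch of the algorithm (one common formulation; function and variable names are illustrative):

```python
def selection_sort(arr):
    """Sort a list of comparable items, returning a new sorted list."""
    a = list(arr)
    n = len(a)
    for i in range(n):                 # outer loop runs n times
        min_idx = i
        for j in range(i + 1, n):      # inner loop: n - i - 1 comparisons
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # move the smallest remaining item into place
    return a
```

The inner loop performs (n - 1) + (n - 2) + … + 1 = n(n - 1)/2 comparisons in total, which is why the runtime is O(n²): doubling n roughly quadruples the comparison count.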

Formally, big-O notation is used to quantify the long-term behavior of a function. We say that f(n) = O(g(n)) (denoted f(n) ∈ O(g(n)) in some sources) if there are fixed constants c and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀. This formal definition accounts for why we don't care about lower-order terms (they can be subsumed by making c larger and increasing n₀) and constant factors (the c term absorbs them). This formal definition is often used in the rigorous analysis of algorithms but is rarely used colloquially.
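The definition can be checked numerically for the binary-search bound mentioned earlier. This is a sketch assuming one particular choice of witnesses, c = 3 and n₀ = 2¹⁷ (many other pairs also work):

```python
import math

def f(n):
    return 2 * math.log2(n) + 17   # the "poor style" bound from the text

def g(n):
    return math.log2(n)            # the clean bound

# One valid pair of witnesses (an assumed choice, not the only one):
# for n >= 2**17 we have log2(n) >= 17, hence 2*log2(n) + 17 <= 3*log2(n).
c, n0 = 3, 2 ** 17

# Spot-check the inequality f(n) <= c * g(n) at several n >= n0.
for n in (n0, n0 + 1, 10 * n0, 1000 * n0):
    assert f(n) <= c * g(n)
```

At n = n₀ = 2¹⁷ the two sides are exactly equal (51 = 51), and for larger n the slack only grows, which is exactly what the definition requires.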