Big O
October 13, 2025
Big O notation is a way of describing the performance of a function without using time. Rather than timing a function from start to finish, big O describes how the time grows as the input size increases. It is used to help understand how programs will perform across a range of inputs.

In this post I’m going to cover 4 frequently-used categories of big O notation: constant, logarithmic, linear, and quadratic. Don’t worry if these words mean nothing to you right now. I’m going to talk about them in detail, as well as visualise them, throughout this post.
Source: Big O
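The four categories named in the excerpt can be sketched with small functions that count operations instead of measuring time. This is my own illustrative sketch, not code from the linked post; the function names and the step-counting approach are assumptions made for the example.

```python
def constant(items):
    """O(1): one step regardless of input size."""
    return items[0]  # a single index lookup


def logarithmic(items, target):
    """O(log n): binary search halves the search range each step."""
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps


def linear(items, target):
    """O(n): worst case checks every element once."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps


def quadratic(items):
    """O(n^2): compares every element with every other element."""
    steps = 0
    for _ in items:
        for _ in items:
            steps += 1
    return steps


if __name__ == "__main__":
    # Doubling the input size shows each category's growth pattern.
    for n in (8, 16, 32):
        data = list(range(n))
        print(n, logarithmic(data, n - 1), linear(data, n - 1), quadratic(data))
```

Running it shows the point of the notation: as the input doubles, the logarithmic count barely moves, the linear count doubles, and the quadratic count quadruples.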
I remember first being introduced to Big O notation, most likely in my first year of computer science, decades ago now.
As someone who’d fiddled around with programming in languages like BASIC, it had likely never occurred to me that programming was a kind of mathematical science, something we could reason about in the same way we reason about mathematics.
Big O notation is a way of thinking about the performance of an algorithm in relative terms.
This is a great, dynamic, and engaging exploration of the topic.