Demystifying Big-O

Published on Friday, August 11, 2023

The mysterious Big-O notation is what we’ll be talking about here. It is used to describe the time complexity of algorithms.

The computational time complexity describes how the runtime changes as an algorithm’s input size grows or shrinks. In other words, it is useful for telling how slow an algorithm might be at processing any given data-set size.

The most common time complexities, in ascending order, are O(1), O(log n), O(n), O(n log n) and O(n²). We will follow this order as well.

Constant Time

O(1), or order 1.

The runtime of these algorithms is constant, i.e., no matter how large the data-set is, the algorithm performs the same… always.

An example of O(1) could be logging the second element of an array. A minimal sketch of such a function (the name and the guard are illustrative):
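```ts
// Constant time: the work done here never depends on how many
// elements the array holds.
function logSecondElement(items: number[]): void {
  // Fail fast if there is no second element to log.
  if (items.length < 2) {
    throw new Error("Expected at least two elements");
  }
  console.log(items[1]);
}
```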

N.B.

If this array has fewer than two elements, this will obviously fail.

But it fails fast, one might say. It is still O(1), after all.

Granted, this is a very simplified example, but no matter how large the array is, it will always take the exact same time to log the second element of an array.

Logarithmic Time

O(log n), or order log n.

A way to explain the logarithmic time complexity is that for every doubling of elements, the time unit increases by one. E.g. it might take 2 milliseconds (ms) to process 8 elements, and 3 ms to process 16. Following this logic, it should not increase by another millisecond until the number of elements has doubled again - e.g. 32 elements.

This can be useful if you have some larger data-sets.
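The classic illustration is binary search on a sorted array, where every comparison halves the remaining search space. A minimal sketch:

```ts
// Binary search: each iteration halves the search space, so a
// sorted array of n elements needs at most ~log2(n) comparisons.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid; // found it
    if (sorted[mid] < target) low = mid + 1; // discard the lower half
    else high = mid - 1; // discard the upper half
  }
  return -1; // not present
}
```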

Linear Time

O(n), or order n.

If you have an algorithm with a time complexity of O(n), and you provide it with a data-set containing n elements, the algorithm will finish after a certain amount of time. Now, if you give it a data-set with 2n elements, it should take twice as long to finish. With 3n elements, this jumps to thrice the original time, and so on and so forth.
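A typical linear-time algorithm makes a single pass over the data. Summing an array is a minimal sketch of this (the function is purely illustrative):

```ts
// Touches every element exactly once: doubling the number of
// elements roughly doubles the runtime.
function sum(items: number[]): number {
  let total = 0;
  for (const item of items) {
    total += item;
  }
  return total;
}
```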

Remember, we are never talking about the absolute time it takes to finish (Big-O is an indicator of time, Big Ben tells the time), but rather how the required time scales with the size of the data-set.

Linearithmic Time

O(n log n), or order n log n.

Like the ”logarithmic time” from above, O(n log n) algorithms also scale well to larger data-sets.
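A minimal merge-sort sketch shows where the two factors come from: the data-set is halved recursively (the log n part), and each level of recursion merges all n elements back together (the n part):

```ts
// Merge sort: the array is split in half ~log2(n) times, and each
// level of recursion merges all n elements back together.
function mergeSort(items: number[]): number[] {
  if (items.length <= 1) return items;
  const mid = Math.floor(items.length / 2);
  const left = mergeSort(items.slice(0, mid));
  const right = mergeSort(items.slice(mid));
  // Merge the two sorted halves into one sorted array.
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```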

This is in fact the merge sorting algorithm; you can read how it compares to the bucket-sort here.

Quadratic Time

O(n²), or order n squared.

If an algorithm performs with the time complexity n², the time it takes to finish will quadruple when we double the number of elements in the data-set provided. As you can imagine, this is not the best option for a system that has to stay fast on a few elements as well as on a lot of them.
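As a concrete sketch, here is a minimal bubble sort, i.e. two nested passes over the same data-set:

```ts
// Bubble sort: the nested loops compare roughly every pair of
// elements, so n elements mean on the order of n * n comparisons.
function bubbleSort(items: number[]): number[] {
  const result = [...items]; // leave the input untouched
  for (let i = 0; i < result.length; i++) {
    for (let j = 0; j < result.length - i - 1; j++) {
      if (result[j] > result[j + 1]) {
        // Swap neighbours that are out of order.
        [result[j], result[j + 1]] = [result[j + 1], result[j]];
      }
    }
  }
  return result;
}
```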

Quick note: as you can tell from the code-example above, this is actually how the bubble sorting algorithm performs. If you wish to read more about this, tested against the merge sorting algorithm (mentioned above), you can read about it here.

That being said, these sorts of algorithms are not a waste to use; they might just need more time to finish up.