Published on Friday, August 11, 2023
The mysterious Big-O notation is what we’ll be talking about here. It is used to describe the time complexity of algorithms.
Computational time complexity describes how an algorithm's runtime changes as its input size grows or shrinks. In other words, it is useful for telling how slow an algorithm might be at processing a data-set of any given size.
The most common time complexities, in ascending order, are O(1), O(log n), O(n), O(n log n), and O(n²). We will follow this order as well.
O(1), or order 1.
The runtime of these algorithms is constant, i.e., no matter how large the data-set is, the algorithm performs the same…always.
An example of O(1) could be:
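A minimal sketch in TypeScript (the function name and error message are illustrative, not prescribed):

```typescript
// Constant time: exactly one index is touched,
// no matter how many elements the array holds.
function logSecondElement(items: number[]): number {
  if (items.length < 2) {
    throw new Error("Array needs at least two elements");
  }
  console.log(items[1]);
  return items[1];
}

logSecondElement([4, 8, 15, 16, 23, 42]); // logs 8
```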
If the array has fewer than two elements, this will obviously fail.
But it fails fast, one might say. It is still O(1), after all.
Granted, this is a very simplified example, but no matter how large the array is, it will always take the exact same amount of time to log its second element.
O(log n), or order log n.
A way to explain the logarithmic time complexity is that for every doubling of elements, the time unit increases by one. E.g. it might take 2 milliseconds (ms) to process 8 elements, and 3 ms to process 16. Following this logic, it should not increase by another millisecond until the number of elements has doubled again - e.g. 32 elements.
This can be useful if you have larger data-sets.
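Binary search over a sorted array is a classic example of logarithmic growth; as a sketch:

```typescript
// Binary search: each comparison halves the remaining range,
// so doubling the input adds only one extra step.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // target not present
}
```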
O(n), or order n.
If you have an algorithm with a time complexity of O(n) and you give it a data-set containing n elements, the algorithm will finish after a certain amount of time. Now, if you give it a data-set with 2n elements, it should take twice as long to finish. With 3n elements, this jumps to thrice the original time, and so on and so forth.
Remember, we are never talking about the absolute time it takes to finish (Big-O indicates growth, Big Ben tells time), but rather about how the required time scales with the size of the data-set.
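As a sketch, any algorithm that must visit every element once is O(n); finding the largest value in an unsorted array, for example:

```typescript
// Linear time: one pass over the array, so twice the
// elements means roughly twice the work.
function findMax(items: number[]): number {
  let max = -Infinity;
  for (const item of items) {
    if (item > max) max = item;
  }
  return max;
}
```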
O(n log n), or order n log n.
Like the logarithmic time complexity above, O(n log n) algorithms also excel when you have a larger data-set.
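As an illustrative sketch of an O(n log n) algorithm, here is a simple (non-in-place) merge sort:

```typescript
// Merge sort: the array is halved about log n times,
// and each level merges all n elements back together.
function mergeSort(items: number[]): number[] {
  if (items.length <= 1) return items;
  const mid = Math.floor(items.length / 2);
  return merge(mergeSort(items.slice(0, mid)), mergeSort(items.slice(mid)));
}

function merge(left: number[], right: number[]): number[] {
  const result: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}
```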
This is, in fact, the merge sort algorithm; you can read how it compares to bucket sort here.
O(n²), or order n squared.
If an algorithm performs with the time complexity O(n²), the time it takes to finish will quadruple when we double the number of elements in the data-set. As you can imagine, this is not the best option for a system that has to stay fast on a few elements as well as on many.
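A sketch of such an O(n²) algorithm: bubble sort compares pairs in two nested loops, so doubling the input roughly quadruples the comparisons.

```typescript
// Bubble sort: two nested loops over the array give
// roughly n * n comparisons in the worst case.
function bubbleSort(items: number[]): number[] {
  const arr = [...items]; // copy so the input stays untouched
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}
```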
A quick note: as you can tell from the code example above, this is actually how the bubble sort algorithm performs. If you wish to read more about it, tested against the merge sort algorithm (mentioned above), you can read about it here.
That being said, these sorts of algorithms are not a waste to use; they might just need more time to finish.