Logarithm


It’s a mathematical concept that is used very often in Computer Science in the context of algorithm complexity; it even sounds similar 😀

The equation:

log_x(n) = y iff x^y = n

log_2(n) = y iff 2^y = n


By default, in CS we assume that the base of the logarithm is 2, so we don’t even write it:

log_2(n) = y → log(n) = y iff 2^y = n


So, in other words, what does this imply? The log of N is the power to which you need to raise 2 to get N.

For example:

log(1) = 0 because 2^0 = 1

log(4) = 2 because 2^2 = 4

log(256) = 8 because 2^8 = 256
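
A quick way to sanity-check these values is Python’s standard math module (a minimal sketch; math.log2 returns a float, which is exact here only because the inputs are powers of two):

```python
import math

# log2(n) is the power you raise 2 to in order to get n
for n in (1, 4, 256):
    y = int(math.log2(n))  # exact for powers of two
    print(f"log({n}) = {y} because 2^{y} = {2 ** y}")

# log(1) = 0 because 2^0 = 1
# log(4) = 2 because 2^2 = 4
# log(256) = 8 because 2^8 = 256
```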


But notice one thing: as N doubles, the log of N increases just by one.

log(16) = 4 (2^3 * 2 = 2^4)

log(32) = 5 (2^4 * 2 = 2^5)
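
To watch that pattern directly, here’s a small sketch that keeps doubling n and prints the log each time:

```python
import math

n = 1
for _ in range(6):
    print(f"n = {n:2d}  ->  log(n) = {math.log2(n):g}")
    n *= 2  # doubling n adds exactly 1 to log(n)

# n =  1  ->  log(n) = 0
# n =  2  ->  log(n) = 1
# n =  4  ->  log(n) = 2
# n =  8  ->  log(n) = 3
# n = 16  ->  log(n) = 4
# n = 32  ->  log(n) = 5
```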


What does that mean for us? If we get an algorithm with log(n) time complexity, that is incredibly good! Tying this back to complexity analysis: as the input doubles in size, the number of elementary operations we perform in the algorithm increases only by one.

Imagine a linear time complexity algorithm with an input of size 1000: it might take roughly 1000 operations to complete, whereas a logarithmic time complexity algorithm with the same input would take roughly 10 operations, since 2^10 = 1024 ≈ 1000.
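
As a rough illustration, here’s a sketch with two toy counters (linear_steps and log_steps are hypothetical helpers that count loop iterations, not real running time; 1024 is used so the halving comes out exact):

```python
def linear_steps(n):
    # Touch every element once: about n operations.
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def log_steps(n):
    # Halve the problem until one element remains: about log2(n) operations.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(linear_steps(1024))  # 1024
print(log_steps(1024))     # 10, since 2^10 = 1024
```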

A classic example of an algorithm with O(log(n)) complexity is binary search.
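
Here’s a minimal binary search sketch: each comparison discards half of the remaining range, which is exactly where the O(log(n)) comes from.

```python
def binary_search(items, target):
    # Return the index of target in a sorted list, or -1 if absent.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1

print(binary_search([1, 3, 5, 8, 13, 21], 8))  # 3
```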
