# Logarithm

It’s a mathematical concept that is very often used in Computer Science in the context of algorithm complexity (it even sounds similar 😀).

The equation:

log_{x}(n) = y iff x^{y} = n

log_{2}(n) = y iff 2^{y} = n

By default, in CS we assume that the base of the logarithm is 2, so we don’t even write it.

log_{2}(n) = y → log(n) = y iff 2^{y} = n

So, in other words, what does this imply?
The log of N is the power to which you need to raise 2 to get N.

For example:

log(1) = 0 because 2^{0} = 1

log(4) = 2 because 2^{2} = 4

log(256) = 8 because 2^{8} = 256
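The same examples can be checked with Python’s standard `math.log2`:

```python
import math

# log(n) is the power to which you need to raise 2 to get n
print(math.log2(1))    # 0.0
print(math.log2(4))    # 2.0
print(math.log2(256))  # 8.0
```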

But notice one thing: as N doubles, the log of N increases just by one.

log(16) = 4 (2^{3} * 2 = 2^{4})

log(32) = 5 (2^{4} * 2 = 2^{5})

What does that mean for us? If we get an algorithm with complexity log(n), that’s incredibly good! As the input doubles, the number of elementary operations the algorithm performs increases only by one.
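A quick sketch to see the doubling pattern: each time n doubles, log2(n) grows by exactly one.

```python
import math

# Doubling n bumps log2(n) by exactly one
for n in [16, 32, 64, 128]:
    print(n, math.log2(n))
```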

Imagine a linear time complexity algorithm with an input of size 1000: it might take roughly 1000 operations to complete, whereas a logarithmic time complexity algorithm with the same input would take roughly 10 operations to complete, since 2^{10} ≈ 1000.
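The 1000-vs-10 comparison above can be reproduced directly (using the ceiling of log2 as a rough operation count):

```python
import math

n = 1000
linear_ops = n                     # a linear algorithm does ~n operations
log_ops = math.ceil(math.log2(n))  # a logarithmic one does ~log2(n) operations
print(linear_ops, log_ops)  # 1000 10
```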

A classic example of an algorithm with O(log(n)) complexity is binary search.
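A minimal binary search sketch in Python: it halves the search range on every step, which is exactly why it takes at most about log2(n) iterations.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # halve the search range each step
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```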

## Comments