Why are algorithms with logarithmic time complexity O(log n) considered incredibly efficient, especially for large input sizes?

Interview Question - Problem Solving

Over the past few weeks, I've had engaging conversations with numerous IT professionals right here on LinkedIn and via email, all as part of my interview preparation. I have been asking them tons of questions about tech interviews, company culture, and how to prepare. The most common piece of feedback I have received is: "Remember, it's not just about knowing the answers, it's about how you articulate them." I have decided to share the Q&As here on LinkedIn, hoping they might be useful to others as well.

Question: Why are algorithms with logarithmic time complexity O(log n) considered incredibly efficient, especially for large input sizes?

The answer may seem straightforward: the running time of these algorithms grows very slowly as the input size grows. But let's work through it in a more comprehensive and presentable way.

Consider an algorithm with a time complexity of O(log n), using a base-2 logarithm.

Let's say we have an input size of 16 (n = 16). Per the O(log n) time complexity, the number of operations required is proportional to log₂ n.

For n = 16:
log₂16 = 4

So, for an input size of 16, the number of operations is approximately 4.

Now, let's double the input size (n = 32). The number of operations becomes:
log₂32 = 5

For an input size of 32, the number of operations is approximately 5.
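
You can see the same pattern with a quick Python sketch (it uses only the standard library's math module; the sample input sizes are just for illustration). Keep doubling n, and the operation count log₂ n creeps up by one each time:

```python
import math

# Double the input size repeatedly and watch how slowly the
# approximate operation count, log2(n), grows alongside it.
for n in [16, 32, 64, 1024, 1_048_576, 1_073_741_824]:
    print(f"n = {n:>13,}  ->  ~{math.log2(n):.0f} operations")
```

Even at n = 1,073,741,824 (over a billion), the count is only about 30.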

The key observation is that even though we doubled the size of the input, the number of operations required by the algorithm increased by just 1 (from 4 to 5). This illustrates why logarithmic time complexity is so efficient: as the input size grows, the number of additional operations grows very slowly relative to the input, which makes O(log n) algorithms excellent choices for large inputs.
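
To ground this in a real algorithm, here is a minimal sketch of binary search, the textbook O(log n) example (the function name and sample data below are just illustrative). Each comparison discards half of the remaining search range, which is exactly why doubling the input adds only one more step:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Each comparison halves the search range, so at most about
    log2(n) comparisons are needed."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid        # found it
        elif sorted_items[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1                 # target is not present

# 16 sorted elements -> at most ~4 halvings to locate any target.
print(binary_search(list(range(0, 32, 2)), 22))  # prints 11
```

This is the same doubling story in reverse: a sorted array of a billion elements needs only about 30 comparisons to find any element.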