
Fundamentals

How to calculate time complexity?

July 7, 2022

As taught previously, time complexity is the amount of time taken by an algorithm to run, as a function of the length of the input. It measures the time taken to execute each statement of code in an algorithm. You can learn more about the basics of time complexity here.


But why is time complexity even important?

Say you are working at YouTube and you have been given a problem to solve. A problem generally has more than one solution. How do you pick the best solution out of them?


Questions like these ask us to consider the difficulty of a problem, or the relative efficiency of two or more approaches to solving a problem. When a program is to be run repeatedly, its efficiency and that of its underlying algorithm become important.

Generally, we associate efficiency with the time it takes a program to run, although there are other resources that a program sometimes must conserve, such as:

1. The amount of storage space taken by its variables.

2. The amount of traffic it generates on a network of computers.

3. The amount of data that must be moved to and from disks.


Typically, you will analyze the time required by an algorithm and the space required by a data structure to decide the best solution for a problem. This is why time complexity is important.


The next question that comes to mind is: but HOW do you calculate time complexity? Let’s learn more about it now!


As I mentioned in my previous post on time complexity, we usually express the running time of a program using the “big-oh notation” for various reasons. We also learnt about best, average, and worst case scenarios. All these are important before you learn how to calculate time complexity.

I recommend you read that first and come back once you have clarity on these concepts.



Rules for calculating time complexity

The time complexity of an algorithm is denoted O(...), where the three dots represent some function. Usually, the variable n denotes the input size.

For example, if the input is an array of numbers, n will be the size of the array, and if the input is a string, n will be the length of the string.


Time complexity of sequential statements

Sequential statements are statements with basic operations like comparisons, assignments, and reading a variable.

Conventionally, sequential statements in the program take constant time. Thus, we can say their time complexity is O(1).


A typical block of sequential statements looks like this 👇
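
A minimal C++ sketch with three constant-time statements (the variable names are just for illustration):

```cpp
void sequentialExample() {
    int x = 5;               // statement1: assignment, O(1)
    int y = x + 10;          // statement2: arithmetic + assignment, O(1)
    bool same = (x == y);    // statement3: comparison, O(1)
    (void)same;              // silence the unused-variable warning
}
```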

Total time complexity of this block will be total = time(statement1) + time(statement2) + ... + time(statementN), which is 3 * O(1) here; since constant factors are dropped, that is simply O(1).



Time complexity of conditional statements

Conditional statements are statements that are represented in the form of “if…then” (or “if…else”).

A typical conditional statement looks like this 👇
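
A minimal C++ sketch, with two statements in the ‘if’ branch and one in the ‘else’ branch (the names are illustrative):

```cpp
#include <iostream>

void conditionalExample(int x) {
    if (x > 0) {
        int doubled = x * 2;                  // statement1, O(1)
        std::cout << doubled << '\n';         // statement2, O(1)
    } else {
        std::cout << "non-positive" << '\n';  // statement3, O(1)
    }
}
```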

Since we care about the worst-case with Big O, we take the maximum possible runtime for these statements.


Total time complexity of this block will be,

total = max(t(statement1) + t(statement2), t(statement3))

which translates to the maximum time complexity between the statements inside ‘if’ and the statements inside ‘else’.



Time complexity of loop statements

Loop statements are statements that are repeated multiple times inside either a “for” or a “while” loop.

A typical loop statement looks like this 👇
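
A minimal C++ sketch of a loop that runs n times, with two constant-time statements in its body:

```cpp
#include <iostream>

void loopExample(int n) {
    for (int i = 0; i < n; i++) {      // the loop body runs n times
        int square = i * i;            // statement1, O(1)
        std::cout << square << '\n';   // statement2, O(1)
    }
}
```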

For any loop, we find out the runtime of the block inside it and multiply that by the number of times the program will repeat the loop.


Total time complexity of this block will be total = n * [t(statement1) + t(statement2)], where n is the number of times the loop runs.



Time complexity of nested loop statements

Nested loop statements are statements that have loops inside other loops.

A typical nested loop statement looks like this 👇
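
A minimal C++ sketch with an outer loop running n times and an inner loop running m times (names are illustrative):

```cpp
#include <iostream>

void nestedLoopExample(int n, int m) {
    for (int i = 0; i < n; i++) {          // outer loop: n iterations
        int rowStart = i * m;              // statement1, O(1)
        for (int j = 0; j < m; j++) {      // inner loop: m iterations per outer iteration
            int cell = rowStart + j;       // statement2, O(1)
            std::cout << cell << ' ';      // statement3, O(1)
        }
    }
}
```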

Total time complexity of this block will be total = n * [t(statement1) + m * (t(statement2) + t(statement3))], where the outer loop runs n times and the inner loop runs m times.


If there are k nested loops, each running up to n times, the time complexity is O(n^k).


For example, the time complexity of the following code is O(n):
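
Here, sumUpTo is an illustrative function that sums the integers from 1 to n with a single loop:

```cpp
// Sums the integers from 1 to n with a single loop → O(n).
long long sumUpTo(int n) {
    long long total = 0;
    for (int i = 1; i <= n; i++) {   // runs n times
        total += i;                  // O(1) work per iteration
    }
    return total;
}
```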


And the time complexity of the following code is O(n^2):
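
Here, countPairs is an illustrative function with two nested loops, each running n times:

```cpp
// Counts ordered pairs (i, j) with two nested loops → O(n^2).
long long countPairs(int n) {
    long long pairs = 0;
    for (int i = 0; i < n; i++) {        // outer loop: n iterations
        for (int j = 0; j < n; j++) {    // inner loop: n iterations each time
            pairs++;                     // O(1) work, executed n * n times
        }
    }
    return pairs;
}
```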



Time complexity of function call statements

Since a function is a combination of the statements mentioned previously, its time complexity depends on the statements used.
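
For example, suppose findMax is a helper that scans a vector in O(n); calling it inside another loop that also runs n times makes the caller O(n^2). The function names here are just illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Scans the whole vector once → O(n) on its own (assumes a is non-empty).
int findMax(const std::vector<int>& a) {
    int best = a[0];
    for (int x : a) {
        best = std::max(best, x);   // O(1) per element
    }
    return best;
}

// Calls an O(n) function inside a loop that runs n times → O(n^2) overall.
void printMaxRepeatedly(const std::vector<int>& a) {
    for (std::size_t i = 0; i < a.size(); i++) {
        std::cout << findMax(a) << '\n';
    }
}
```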



Logarithmic complexity

Sometimes, we divide the input by a constant factor after each iteration, instead of adding to or subtracting from it. In that case, we have to calculate the number of iterations with the help of the log function.

For example, take this simple code to calculate the number of digits in a positive integer n:
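
A simple C++ version (countDigits is an illustrative name) might be:

```cpp
// Counts the digits of a positive integer n.
int countDigits(long long n) {
    int digits = 0;
    while (n > 0) {
        n /= 10;      // the input shrinks by a factor of 10 each iteration
        digits++;
    }
    return digits;    // the loop runs about log10(n) times
}
```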

How many times does the loop run? The answer is about log10(n) times, once per digit of n. So the complexity of this algorithm would be O(log10(n)), which we usually write simply as O(log n), since logarithms of different bases differ only by a constant factor.



Order of magnitude

Time complexity does not tell us the exact number of times the code inside a loop is executed; it only shows the order of magnitude. In the following examples, the loop body is executed the number of times indicated:


Example 1
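
A loop with this behaviour might look like the following sketch (names are illustrative):

```cpp
void example1(int n) {
    int count = 0;
    for (int i = 1; i <= 3 * n; i++) {
        count++;            // executes 3*n times
    }
}
```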

Number of times loop runs = 3*n


Example 2
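
A sketch of such a loop (illustrative):

```cpp
void example2(int n) {
    int count = 0;
    for (int i = 1; i <= n + 5; i++) {
        count++;            // executes n+5 times
    }
}
```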

Number of times loop runs = n+5


Example 3
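
A sketch of such a loop, stepping by 2 each time (illustrative):

```cpp
void example3(int n) {
    int count = 0;
    for (int i = 1; i <= n; i += 2) {
        count++;            // executes roughly n/2 times
    }
}
```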

Number of times loop runs = n/2


However, the time complexity of all these programs is still O(n). We do not take into consideration constants that are multiplied by or added to n.



This article should give you enough clarity on how to calculate time complexity and help you find the most efficient solution to a problem in terms of time taken. Hope this helps ✨

Want to learn more?

Our first course, Introduction to Data Structures in C++ (worth ₹499/month), is available at a price of your choice for the first month of access.

Offer available for a limited time

Subscribe

Learn more

301/302, 3rd Floor, Saket Callipolis, Sarjapur Main Rd, Doddakannelli, Bengaluru, Karnataka 560035

© CodePark Pvt. Ltd. 2022
