
Big O Analysis is a technique for analyzing and ranking the efficiency of algorithms.

This enables you to pick the most efficient and scalable algorithms. This article is a Big O Cheat Sheet explaining everything you need to know about Big O Notation.


Big O Analysis is a technique for analyzing how well algorithms scale. Specifically, we ask how efficient an algorithm is as the input size increases.

Efficiency is how well system resources are used while producing an output. The resources we are primarily concerned with are time and memory.

Therefore, when performing Big O Analysis, the questions we are asking are:

- How does an algorithm’s usage of memory change as the input size grows?
- How does the time an algorithm takes to produce an output change as the input size grows?

The answer to the first question is the algorithm’s space complexity, while the answer to the second is its time complexity. We use a special notation called Big O Notation to express answers to both questions. This will be covered next in the Big O Cheat Sheet.

Before moving forward, I must say that to make the most of this Big O Cheat Sheet, you need to understand a little Algebra. In addition, because I will be giving Python examples, it is also useful to understand a bit of Python. An in-depth understanding is unnecessary since you won’t be writing any code.

In this section, we will cover how to perform Big O Analysis.

When performing Big O Complexity Analysis, it is important to remember that algorithm performance depends on how the input data is structured.

For example, some sorting algorithms, such as insertion sort, run fastest when the list is already sorted in the correct order. That is the best-case scenario for the algorithm’s performance. The same algorithms are slowest when the data is sorted in reverse order. That is the worst-case scenario.

When performing Big O Analysis, we only consider the worst-case scenario.

Let us begin this Big O Cheat Sheet by covering how to perform space complexity analysis. We want to consider how the additional memory an algorithm uses scales as the input becomes larger and larger.

For example, the function below uses recursion to loop from n to zero. It has a space complexity that is directly proportional to n. This is because as n grows, so does the number of function calls on the call stack. So, it has a space complexity of O(n).

```python
def loop_recursively(n):
    if n == -1:
        return
    else:
        print(n)
        loop_recursively(n - 1)
```

However, a better implementation would look like this:

```python
def loop_normally(n):
    count = n
    while count >= 0:
        print(count)
        count -= 1
```

In the algorithm above, we only create one additional variable and use it to loop. If n grew larger and larger, we would still only use one additional variable. Therefore, the algorithm has constant space complexity, denoted by the “O(1)” symbol.

By comparing the space complexity of the two algorithms above, we concluded that the while loop was more efficient than recursion. That is the main objective of Big O Analysis: analyzing how algorithms change as we run them with larger inputs.
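To make the space difference concrete, here is a hedged sketch (not from the original article) showing why O(n) stack space matters in practice: CPython limits recursion depth (commonly around 1000 frames by default), so the recursive version fails for inputs the O(1) loop handles easily. The `print` call is omitted here to keep the output readable.

```python
import sys

def loop_recursively(n):
    # One stack frame per call: O(n) space (print omitted for brevity).
    if n == -1:
        return
    loop_recursively(n - 1)

print(sys.getrecursionlimit())  # commonly 1000 in CPython

try:
    loop_recursively(50_000)
except RecursionError:
    print("RecursionError: the call stack grew with n")
```

The iterative `loop_normally` has no such limit, because its memory usage does not grow with n.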

When performing time complexity analysis, we are not concerned with the growth of the total time taken by the algorithm. Rather, we are concerned with the growth in the number of computational steps taken. This is because the actual time depends on many systemic and random factors that are hard to account for. So, we only track the growth of computational steps and assume that each step takes the same time.

To help demonstrate time complexity analysis, consider the following example:

Suppose we have a list of users where each user has an ID and name. Our task is to implement a function that returns the user’s name when given an ID. Here’s how we might do that:

```python
users = [
    {'id': 0, 'name': 'Alice'},
    {'id': 1, 'name': 'Bob'},
    {'id': 2, 'name': 'Charlie'},
]

def get_username(id, users):
    for user in users:
        if user['id'] == id:
            return user['name']
    return 'User not found'

get_username(1, users)
```

Given a list of users, our algorithm loops through the entire users list to find the user with the correct ID. When we have 3 users, the algorithm performs 3 iterations. When we have 10, it performs 10.

Therefore, the number of steps is linearly and directly proportional to the number of users. So, our algorithm has linear time complexity. However, we can improve on our algorithm.

Suppose instead of storing users in a list, we stored them in a dictionary. Then, our algorithm for looking up a user would look like this:

```python
users = {
    0: 'Alice',
    1: 'Bob',
    2: 'Charlie'
}

def get_username(id, users):
    if id in users:
        return users[id]
    else:
        return 'User not found'

get_username(1, users)
```

With this new algorithm, suppose we had 3 users in our dictionary; we would perform a fixed number of steps to get a username. And suppose we had more users, say ten; we would perform the same number of steps as before. As the number of users grows, the number of steps to get a username remains constant.

Therefore, this new algorithm has constant complexity. It does not matter how many users we have; the number of computational steps taken is the same.
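To make the difference measurable, here is a small sketch (the step-counting helper and sample data are illustrative, not from the original) that counts comparisons instead of measuring wall-clock time, mirroring the earlier list-based `get_username`:

```python
def get_username_with_steps(id, users):
    # Same linear scan as before, but counting comparisons as "steps".
    steps = 0
    for user in users:
        steps += 1
        if user['id'] == id:
            return user['name'], steps
    return 'User not found', steps

users_3 = [{'id': i, 'name': f'user{i}'} for i in range(3)]
users_10 = [{'id': i, 'name': f'user{i}'} for i in range(10)]

# Looking up the last user: the step count grows with the list size.
print(get_username_with_steps(2, users_3))   # ('user2', 3)
print(get_username_with_steps(9, users_10))  # ('user9', 10)
```

A dictionary lookup, by contrast, takes roughly the same number of steps no matter how many users are stored.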

In the previous section, we discussed how to calculate the Big O space and time complexity for different algorithms. We used words such as linear and constant to describe complexities. Another way to describe complexities is to use Big O Notation.

Big O Notation is a way to represent an algorithm’s space or time complexities. The notation is relatively simple; it is an O followed by parentheses. Inside the parentheses, we write a function of n to represent the particular complexity.

Linear complexity is represented by n, so we would write it as O(n) (*read as “O of n”*). Constant complexity is represented by 1, so we would write it as O(1).

There are more complexities, which we will discuss in the next section. But generally, to write an algorithm’s complexity, follow these steps:

- Try to develop a mathematical function of n, f(n), where f(n) is the amount of space used or computational steps followed by the algorithm and n is the input size.
- Take the most dominant term in that function. The order of dominance of different terms from most dominant to least dominant is as follows: Factorial, Exponential, Polynomial, Quadratic, Linearithmic, Linear, Logarithmic, and Constant.
- Eliminate any coefficients from the term.

The result of that becomes the term we use inside our parentheses.

**Example**:

Consider the following Python function:

```python
users = [
    'Alice',
    'Bob',
    'Charlie'
]

def print_users(users):
    number_of_users = len(users)
    print("Total number of users:", number_of_users)
    for i in range(number_of_users):
        print(i, end=': ')
        print(users[i])
```

Now, we will calculate the algorithm’s Big O Time complexity.

We first write a mathematical function of n, f(n), to represent the number of computational steps the algorithm takes. Recall that n represents the input size.

From our code, the function performs two steps: one to calculate the number of users and the other to print the number of users. Then, for each user, it performs two steps: one to print the index and one to print the user.

Therefore, the function that best represents the number of computational steps taken can be written as f(n) = 2 + 2n, where n is the number of users.

Moving on to step two, we pick the most dominant term. 2n is a linear term, and 2 is a constant term. Linear is more dominant than constant, so we pick 2n, the linear term.

So, our function is now f(n) = 2n.

The last step is to eliminate coefficients. In our function, we have 2 as our coefficient. So we eliminate it. And the function becomes f(n) = n. That is the term we use between our parentheses.

Therefore, the time complexity of our algorithm is O(n) or linear complexity.

The last section in our Big O Cheat Sheet will show you the different complexities you will encounter and how each one grows.

Constant complexity means that the algorithm uses a constant amount of space (when performing space complexity analysis) or a constant number of steps (when performing time complexity analysis). This is the most optimal complexity, as the algorithm does not need additional space or time as the input grows. It is, therefore, very scalable.

Constant complexity is represented as O(1). However, it is not always possible to write algorithms that run in constant complexity.
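A minimal sketch of an O(1) operation (this helper is illustrative, not from the original): accessing a list element by index takes the same single step whether the list has one element or a million.

```python
def get_first(items):
    # Indexing is a single step regardless of the list's length: O(1).
    return items[0]

print(get_first([5]))                     # 5
print(get_first(list(range(1_000_000))))  # 0
```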

Logarithmic complexity is represented by the term O(log n). It is important to note that in Computer Science, if the logarithm base is not specified, we assume it is 2.

Therefore, log n is log₂ n. Logarithmic functions grow quickly at first and then slow down. This means they scale well and work efficiently with increasingly large values of n.
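The classic O(log n) example is binary search on a sorted list. A hedged sketch (this particular implementation is illustrative):

```python
def binary_search(sorted_items, target):
    # Each comparison halves the remaining search range, so at most
    # about log2(n) iterations are needed: O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present

print(binary_search(list(range(100)), 37))  # 37
```

Doubling the list size adds only one extra comparison, which is exactly the "grows quickly at first, then slows down" behavior described above.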

For linear functions, if the independent variable scales by a factor of p, the dependent variable scales by the same factor of p.

Therefore, a function with a linear complexity grows by the same factor as the input size. If the input size doubles, so will the number of computational steps or memory usage. Linear complexity is represented by the symbol O(n).
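A typical O(n) operation is scanning a list once, for example to find the largest value (this helper is hypothetical, for illustration):

```python
def find_max(items):
    # One comparison per element: the step count grows linearly
    # with len(items), so this is O(n).
    largest = items[0]
    for value in items[1:]:
        if value > largest:
            largest = value
    return largest

print(find_max([3, 1, 4, 1, 5]))  # 5
```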

O(n log n) represents linearithmic functions. A linearithmic function is a linear function multiplied by the logarithm function. Therefore, a linearithmic function yields results larger than a linear function whenever log n is greater than 1, because multiplying n by a number greater than 1 increases it.

Log n is greater than 1 for all values of n greater than 2 (remember, log n is log₂ n), and n is greater than 2 in most cases. So, linearithmic functions are generally less scalable than linear functions, though far more scalable than quadratic ones.
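Efficient comparison sorts such as merge sort are the classic O(n log n) example. A hedged sketch (this implementation is illustrative):

```python
def merge_sort(items):
    # About log2(n) levels of splitting, with O(n) merging work
    # per level: O(n log n) overall.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```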

Quadratic complexity is represented by O(n²). This means that if your input size increases by 10 times, the number of steps taken or space used increases by 10², or 100, times. This is not very scalable; the complexity blows up very quickly.

Quadratic complexity arises in algorithms where you loop n times, and for each iteration, you loop n times again, for example, in Bubble Sort. While it is generally not ideal, at times, you have no option but to implement algorithms with quadratic complexity.
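Since the text mentions Bubble Sort, here is a minimal sketch of it; the nested loops over the same list are what produce the O(n²) step count:

```python
def bubble_sort(items):
    # Outer loop runs n times; for each pass, the inner loop compares
    # adjacent pairs: roughly n * n comparisons, so O(n^2).
    items = list(items)  # sort a copy, leaving the input unchanged
    n = len(items)
    for i in range(n):
        for j in range(n - i - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([4, 2, 7, 1]))  # [1, 2, 4, 7]
```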

An algorithm with polynomial complexity is represented by O(nᵖ), where p is some constant integer. Here, p represents the power to which n is raised.

This complexity arises when you have p nested loops. The difference between polynomial complexity and quadratic complexity is that quadratic is of order 2, while polynomial covers any order greater than 2.
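For example, with p = 3, three nested loops over the same input give O(n³). This illustrative function (not from the original) counts the ordered triples of elements that sum to zero:

```python
def count_triples(items):
    # Three nested loops over the same input: n * n * n steps, O(n^3).
    count = 0
    for a in items:
        for b in items:
            for c in items:
                if a + b + c == 0:
                    count += 1
    return count

print(count_triples([1, -1, 0]))  # 7
```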

Exponential complexity grows even faster than polynomial complexity. In Math, the default base for the exponential function is the constant e (Euler’s number). In Computer Science, however, the default base for the exponential function is 2.

Exponential complexity is represented by the symbol O(2ⁿ). When n is 0, 2ⁿ is 1. But when n is increased to 5, 2ⁿ blows up to 32. Each increase in n by one doubles the previous number. Therefore, exponential functions are not very scalable.
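A textbook example of roughly O(2ⁿ) growth is the naive recursive Fibonacci, where each call spawns two more calls:

```python
def fib(n):
    # Each call branches into two more calls, so the call tree roughly
    # doubles with every increase in n: approximately O(2^n) steps.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Even modest inputs like n = 40 take billions of steps with this approach, which is why memoized or iterative versions are preferred in practice.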

Factorial complexity is represented by the symbol O(n!). This function also grows very quickly and is, therefore, not scalable.
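Generating every ordering of n items is the standard O(n!) example, since there are n! permutations to produce (this recursive generator is illustrative, not from the original):

```python
def permutations(items):
    # n choices for the first position, n - 1 for the next, and so on:
    # n! permutations in total, so just generating them all is O(n!).
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, item in enumerate(items):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([item] + perm)
    return result

print(len(permutations([1, 2, 3])))  # 6
```

Brute-force solutions to problems like the traveling salesman, which try every ordering, hit this complexity.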

This article covered Big O Analysis and how to calculate it. We also covered the different complexities and their scalability.

#datastructures #algorithms #dsa
