In computer programming, as in other aspects of life, there are different ways to solve a problem. These different ways may cost different amounts of time, computational power, or any other metric you choose, so we need to compare the efficiency of different approaches to pick the right one.

Now, as you may know, computers are able to solve problems based on algorithms.

Algorithms are procedures, or sets of instructions, that tell a computer what to do and how to do it.

Nowadays, they have evolved so much that they may be considerably different even when accomplishing the same task. In the most extreme case (which is quite common, by the way), different algorithms programmed in different programming languages may tell different computers, with different hardware and operating systems, to perform the same task in completely different ways. That’s crazy, isn’t it?

The thing is that while one algorithm may take seconds to finish, another may take minutes even on small data sets. How can we compare different performances and pick the best algorithm to solve a particular problem?

Fortunately, there are ways of doing this, and we don’t need to wait and see the algorithm at work to know if it can get the job done quickly or if it’s going to collapse under the weight of its input. When we consider the complexity of an algorithm, we shouldn’t really care about the exact number of operations that are performed; instead, we should care about how the number of operations relates to the problem size. Think about it: if the problem size doubles, does the number of operations stay the same? Do they double? Do they increase in some other way? To answer these questions, we need to measure the time complexity of algorithms.

Time complexity represents the number of times the statements of an algorithm are executed. The time complexity of an algorithm is NOT the actual time required to execute a particular piece of code, since that depends on other factors like the programming language, the operating system, processing power, etc. The idea behind time complexity is to measure the cost of an algorithm in a way that depends only on the algorithm itself and its input.

To express the time complexity of an algorithm, we use something called “Big O notation”. **Big O notation is the language we use to describe the time complexity of an algorithm.** It’s how we compare the efficiency of different approaches to a problem, and it helps us make decisions.

Big O notation expresses the run time of an algorithm in terms of how quickly it grows relative to the input (this input is called “n”). This way, if we say for example that the run time of an algorithm grows “on the order of the size of the input”, we would state that as “O(n)”. If we say that the run time of an algorithm grows “on the order of the square of the size of the input”, we would express it as “O(n²)”. But what does that mean exactly?
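To make that concrete, here is a minimal Python sketch (the function names are my own, chosen for illustration). The first function does a single pass over the input, so doubling n doubles the work: O(n). The second nests one pass inside another, so doubling n quadruples the work: O(n²).

```python
def sum_list(numbers):
    """O(n): one pass over the input."""
    total = 0
    for x in numbers:       # runs once per element
        total += x
    return total

def count_pairs(numbers):
    """O(n²): a pass nested inside another pass."""
    pairs = 0
    for a in numbers:       # n iterations...
        for b in numbers:   # ...each doing n more
            pairs += 1
    return pairs
```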

The key to understanding time complexity is understanding the rates at which things can grow. The rate in question here is time taken per input size. There are different types of time complexities, so let’s check the most basic ones.

Constant Time Complexity: O(1)

When time complexity is constant (notated as “O(1)”), the size of the input (n) doesn’t matter. Algorithms with Constant Time Complexity take a constant amount of time to run, independently of the size of n. They don’t change their run-time in response to the input data, which makes them the fastest algorithms out there.

For example, you’d use an algorithm with constant time complexity if you wanted to know whether a number is odd or even. No matter if the number is 1 or 9 billion (the input “n”), the algorithm would perform the same operation only once and give you the result.

Also, if you wanted to print a phrase like the classic “Hello World” just once, you’d do that in constant time too, since the number of operations (in this case, 1) will remain the same for this or any other phrase, no matter which operating system or machine configuration you are using.
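Both examples could look something like this in Python (a minimal sketch; the function names are my own):

```python
def is_even(n):
    # One arithmetic check, no matter how large n is: O(1)
    return n % 2 == 0

def greet():
    # One print statement, no matter what: O(1)
    print("Hello World")

print(is_even(9_000_000_000))  # True; same single operation as is_even(1)
greet()
```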

To remain constant, these algorithms shouldn’t contain loops, recursion, or calls to any other non-constant-time function. For constant-time algorithms, the run time doesn’t increase: the order of magnitude is always 1.

Linear Time Complexity: O(n)

When time complexity grows in direct proportion to the size of the input, you are facing Linear Time Complexity, or O(n). Algorithms with this time complexity process an input of size n in roughly n operations. This means that as the input grows, the algorithm takes proportionally longer to complete.

These are the types of situations where you have to look at every item in a list to accomplish a task (e.g. find the maximum or minimum value). You can also think about everyday tasks like reading a book or finding a CD (remember them?) in a CD stack: if all the data has to be examined, the larger the input, the higher the number of operations.

Linear running-time algorithms are very common, and they arise whenever the algorithm has to visit every element of the input.
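Here is a minimal sketch of the find-the-maximum example mentioned above (the function name is my own, not from any library):

```python
def find_max(items):
    # Visits every element exactly once, so the number of
    # operations grows in direct proportion to len(items): O(n)
    maximum = items[0]
    for item in items[1:]:
        if item > maximum:
            maximum = item
    return maximum

print(find_max([3, 41, 7, 19]))  # 41
```

Doubling the length of the list doubles the number of comparisons, which is exactly what O(n) predicts.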
