Have you heard people say that async Python code is faster than “normal” (or sync) Python code? How can that be? In this article I’m going to try to explain what async is and how it differs from normal Python code.

What Do “Sync” and “Async” Mean?

Web applications often have to deal with many requests, all arriving from different clients within a short period of time. To avoid processing delays, they must be able to handle several requests at the same time, something commonly known as concurrency. I will continue to use web applications as an example throughout this article, but keep in mind that other types of applications also benefit from performing multiple tasks concurrently, so this discussion is not specific to the web.

The terms “sync” and “async” refer to two ways of writing applications that use concurrency. So-called “sync” servers rely on the operating system's support for threads and processes to implement this concurrency. Here is a diagram of how a sync deployment might look:

Sync Server

In this situation we have five clients, all sending requests to the application. The public access point for this application is a web server that acts as a load balancer by distributing the requests among a pool of server workers, which might be implemented as processes, threads or a combination of both. The workers execute requests as they are assigned to them by the load balancer. The application logic, which you may write using a web application framework such as Flask or Django, lives in these workers.
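The worker-pool arrangement described above can be sketched with the standard library alone. This is a simplified illustration, not a real web server: the thread pool plays the role of the server workers, the executor plays the role of the load balancer, and `handle_request` is a hypothetical stand-in for the application logic a Flask or Django view would run.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Hypothetical placeholder for the application logic that a
    # framework view function would execute for one request.
    return f"response to request {request_id}"

# A pool of 4 worker threads stands in for the server workers; the
# executor assigns each incoming request to whichever worker is free,
# much like the load balancer in the diagram.
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, range(5)))

print(responses)
```

In a real deployment the workers would be long-lived processes or threads managed by the web server rather than a short-lived pool, but the division of labor is the same.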

This type of solution works great on servers that have multiple CPUs, because you can configure the number of workers to be a multiple of the number of CPUs and achieve an even utilization of all your cores, something that a single Python process cannot do due to the limitations imposed by the Global Interpreter Lock (GIL).
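As a concrete example of sizing the pool, here is one way to derive a worker count from the CPU count. The `(2 × cores) + 1` formula is the rule of thumb suggested in Gunicorn's documentation, not a hard requirement; the extra workers keep cores busy while some workers are blocked on I/O.

```python
import os

# os.cpu_count() can return None on exotic platforms, so fall back to 1.
cpu_count = os.cpu_count() or 1

# Gunicorn's suggested starting point: (2 x number_of_cores) + 1.
# Tune this for your workload; it is a heuristic, not a law.
workers = cpu_count * 2 + 1
print(f"{cpu_count} cores -> {workers} workers")
```

With Gunicorn you would pass this value via the `-w`/`--workers` option when starting the server.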


Sync vs. Async Python: What is the Difference?