To build scalable applications, it’s important to understand how your downstream dependencies scale and what limits you can hit.

The majority of Azure services expose functionality over HTTP REST APIs. The Azure SDKs, in turn, wrap the HTTP communication into an easy-to-use set of client and model types.

Every time you call a method on a Client class, an HTTP request is sent to the service. Sending an HTTP request requires a socket connection to be established between the client and the server. Establishing a connection is an expensive operation that can take longer than the processing of the request itself. To combat this, .NET maintains a pool of HTTP connections that can be reused instead of opening a new one for each request.
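To illustrate connection reuse, here is a minimal sketch (not from the original post): a single shared HttpClient sends several requests to the same endpoint, so after the first request the pooled connection can be reused instead of paying the connection setup cost again. The endpoint URL is a placeholder.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PoolingSketch
{
    // One shared HttpClient; its handler keeps a pool of connections per endpoint.
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        for (int i = 0; i < 3; i++)
        {
            // All three requests target the same host+port, so after the first one
            // a pooled connection can be reused instead of opening a new socket.
            HttpResponseMessage response = await Http.GetAsync("https://example.blob.core.windows.net/");
            Console.WriteLine(response.StatusCode);
            response.Dispose();
        }
    }
}
```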

This post details how HTTP connection pooling works on each .NET runtime you might be using and how to tune it so that connection limits don’t negatively affect your application’s performance.

NOTE: most of this does not apply to applications using .NET Core. See the .NET Core section for details.

.NET Framework

Connection pooling in the .NET Framework is controlled by the ServicePointManager class. The most important fact to remember is that the pool, [by default](https://docs.microsoft.com/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit), is limited to **2** connections to a particular endpoint (host+port pair) in non-web applications, and to **unlimited** connections per endpoint in ASP.NET applications that have autoConfig enabled (without autoConfig the limit is 10). After the maximum number of connections is reached, HTTP requests are queued until one of the existing connections becomes available again.
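If you do hit the default limit, the cap can be raised. A minimal sketch, assuming the change runs before any requests are issued; the value 100 is an arbitrary example, not a recommendation from the post:

```csharp
using System.Net;

class Startup
{
    static void Configure()
    {
        // Raise the per-endpoint connection limit for all ServicePoints created from now on.
        // Must run before the first request, because existing ServicePoints keep their old limit.
        ServicePointManager.DefaultConnectionLimit = 100;
    }
}
```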

Imagine writing a console application that uploads files to Azure Blob Storage. To speed up the process, you decide to upload using 20 parallel threads. The default connection pool limit means that even though you have 20 BlockBlobClient.UploadAsync calls running in parallel, only 2 of them would actually be uploading data and the rest would be stuck in the queue.
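A minimal sketch of that scenario, assuming the Azure.Storage.Blobs package is referenced; the connection string, container name, and local folder are placeholders:

```csharp
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

class UploadSketch
{
    static async Task Main()
    {
        var container = new BlobContainerClient("<connection-string>", "uploads");
        var files = Directory.GetFiles(@"C:\data").Take(20);

        // 20 uploads started in parallel. With the default limit of 2 connections per endpoint,
        // only 2 requests are on the wire at any moment; the rest wait for a free connection.
        var uploads = files.Select(path =>
        {
            BlockBlobClient blob = container.GetBlockBlobClient(Path.GetFileName(path));
            return blob.UploadAsync(File.OpenRead(path));
        });

        await Task.WhenAll(uploads);
    }
}
```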

NOTE: The connection pool is centrally managed on .NET Framework. Every ServicePoint has one or more connection groups, and the limit is applied to the connections in a connection group. HttpClient creates a connection group per client, so every HttpClient instance gets its own limit, while instances of HttpWebRequest reuse the default connection group and all share the same limit (unless ConnectionGroupName is set). All Azure SDK clients by default use a shared instance of HttpClient and as such share the same pool of connections across all of them.
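If a particular client really needs its own pool, Azure.Core lets you supply your own HttpClient through a custom transport. A minimal sketch, assuming Azure.Core’s HttpClientTransport and the Blob Storage client; the connection string is a placeholder:

```csharp
using System.Net.Http;
using Azure.Core.Pipeline;
using Azure.Storage.Blobs;

class TransportSketch
{
    static void Main()
    {
        // A dedicated HttpClient gives this client its own connection group (and its own limit).
        var dedicatedHttpClient = new HttpClient();

        var options = new BlobClientOptions
        {
            Transport = new HttpClientTransport(dedicatedHttpClient)
        };

        // This client no longer shares the default HttpClient with other Azure SDK clients in the process.
        var client = new BlobServiceClient("<connection-string>", options);
    }
}
```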

