1638788400
We’re thrilled to share the latest set of Azure SDK modules for Go that are now in beta. These modules follow the Azure SDK Guidelines to provide an improved developer experience. They’re grouped into management and client modules. The management modules allow you to manage resources in your Azure subscription by creating and managing instances of Azure services.
1637286724
👉 In this article, we learn about the Azure SDK November 2021 release.
1636696800
👉 Beginning in September 2021, the Azure SDK client BOM has been released monthly. You can depend on the latest features of the Azure SDK client libraries with the BOM.
1636448400
In this post we’ll demonstrate how to use the NVIDIA® Jetson Nano™ device running AI on IoT edge, combined with the power of the Azure platform, to create an end-to-end AI-on-edge solution. We are going to use a custom AI model developed for the NVIDIA® Jetson Nano™ device, but you can use any AI model that fits your needs. We will see how we can leverage the new Azure SDKs to create a complete Azure solution.
1623912452
Azure Monitor helps you maximize the availability and performance of your apps. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
All data collected by Azure Monitor fits into one of two fundamental types: metrics and logs.
To better support analyzing these data sources, we’re pleased to announce the 1.0 Beta 1 release of the Azure Monitor Query client library. The library:
This blog post will highlight new features of the library.
#azure sdk #.net #azure #azure-monitor #azuresdk #java #javascript #logs #metrics
1623317100
For those looking to build applications with event-based architectures, Azure Event Grid is a cloud-based service that provides reliable event delivery at a massive scale, insulating users from infrastructure concerns. The service fully manages all routing of events from any source, to any destination, for any application. Azure service events and custom events can be published directly to the service, where the events can then be filtered and sent to various recipients, such as built-in handlers or custom webhooks. These features help vastly improve serverless, ops automation, and integration work (for more information, read What can I do with Event Grid?). Included below is an image that demonstrates how Event Grid connects to various event sources and handlers.
We are excited to announce the release of the new Azure Event Grid client library’s first beta for .NET, Java, JavaScript, and Python! Check out the links to learn more about how to install the package for each language.
The new client libraries simplify the process of authenticating and publishing messages. You can use the new Azure Event Grid client library to publish events to the Event Grid service in various event schemas (EventGrid, CloudEvents v1.0, or a custom schema) and to consume events that have been delivered to event handlers. This post will explore both event publishing and event consuming in more detail. Another notable difference from the previous Event Grid client library is the ability to specify a custom ObjectSerializer to use when serializing event data to/from JSON (by default, using JsonObjectSerializer).
Read on to learn more about the CloudEvents v1.0 schema, and how you can now use Event Grid to work with CloudEvents!
CloudEvents is a collaborative effort between numerous companies and individuals in the industry (Microsoft included) to produce a vendor-neutral specification for “describing event data in a common way”. Simply stated, the CloudEvents spec defines a set of common metadata attributes that describe the event being transferred (such as a unique identifier or the time of the event occurrence). The self-described goal of CloudEvents is “to define interoperability of event systems that allow services to produce or consume events, where the producer and consumer can be developed and deployed independently”. The common event format thus allows for easy integration of work across platforms and services and can help standardize how events are consumed.
The new Event Grid client libraries support the JSON format for CloudEvents, meaning that JSON-serialized events of the CloudEvents schema that are sent and received by the service must adhere to the given specification.
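To make the format concrete, here is a minimal example of a single event serialized in the CloudEvents 1.0 JSON format; the `type`, `source`, and `data` values are made up for illustration, while the attribute names come from the CloudEvents specification:

```json
{
  "specversion": "1.0",
  "type": "Contoso.Items.ItemReceived",
  "source": "/contoso/items",
  "id": "9aeb0fdf-c01e-0131-0922-9eb54906e209",
  "time": "2021-06-10T10:20:30Z",
  "datacontenttype": "application/json",
  "data": {
    "itemSku": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
  }
}
```

The required attributes are `specversion`, `type`, `source`, and `id`; the rest are optional metadata, and `data` carries the event payload itself.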
#azure sdk #azuresdk #cloudevents #eventgrid #sdk
1602676020
To build scalable applications it’s important to understand how your downstream dependencies scale and what limitations you can hit.
The majority of Azure services expose functionality over HTTP REST APIs. The Azure SDKs, in turn, wrap the HTTP communication into an easy-to-use set of client and model types.
Every time you call a method on a Client class, an HTTP request is sent to the service. Sending an HTTP request requires a socket connection to be established between the client and the server. Establishing a connection is an expensive operation that can take longer than the processing of the request itself. To combat this, .NET maintains a pool of HTTP connections that can be reused instead of opening a new one for each request.
This post details the specifics of HTTP connection pooling based on the .NET runtime you are using and ways to tune it to make sure connection limits don’t negatively affect your application performance.
NOTE: Most of this is not applicable to applications using .NET Core. See the .NET Core section for details.
Connection pooling in the .NET Framework is controlled by the ServicePointManager class. The most important fact to remember is that the pool, [by default](https://docs.microsoft.com/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit), is limited to **2** connections to a particular endpoint (host+port pair) in non-web applications, and to **unlimited** connections per endpoint in ASP.NET applications that have autoConfig enabled (without autoConfig, the limit is set to 10). After the maximum number of connections is reached, HTTP requests are queued until one of the existing connections becomes available again.
Imagine writing a console application that uploads files to Azure Blob Storage. To speed up the process, you decide to upload using 20 parallel threads. The default connection pool limit means that even though you have 20 BlockBlobClient.UploadAsync calls running in parallel, only 2 of them would actually be uploading data while the rest are stuck in the queue.
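One common mitigation on the .NET Framework is to raise the default limit early in application startup, before any requests are issued. A minimal sketch (the value 50 is arbitrary; size it to your expected parallelism):

```csharp
// At application startup, before any HTTP requests are made.
// Raises the per-endpoint connection limit for the whole process.
System.Net.ServicePointManager.DefaultConnectionLimit = 50;
```

With the limit raised above the degree of parallelism, the 20 parallel uploads in the scenario above can each get their own connection instead of queuing behind the default 2.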
NOTE: The connection pool is centrally managed on .NET Framework. Every ServicePoint has one or more connection groups, and the limit is applied to the connections in a connection group. HttpClient creates a connection group per client, so every HttpClient instance gets its own limit, while instances of HttpWebRequest reuse the default connection group and all share the same limit (unless ConnectionGroupName is set). All Azure SDK clients by default use a shared instance of HttpClient and as such share the same pool of connections across all of them.
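If you want that sharing to be explicit, or want several clients to reuse one HttpClient you already manage, Azure.Core lets you pass a transport through the client options. A sketch, assuming the Key Vault Secrets client and a placeholder endpoint:

```csharp
using System;
using System.Net.Http;
using Azure.Core.Pipeline;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// One HttpClient means one connection pool shared by every
// Azure SDK client constructed with this transport.
var httpClient = new HttpClient();

var options = new SecretClientOptions
{
    Transport = new HttpClientTransport(httpClient)
};

var client = new SecretClient(
    new Uri("<secrets_endpoint>"), new DefaultAzureCredential(), options);
```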
#azure sdk #.net #azuresdk #clientlibraries #connectionpool #httpclient #sdk #servicepointmanager
1597806000
Since we shipped the first Azure Identity library preview in June 2019, it has been a vital part of building Azure cloud solutions. We have received great feedback from our development community, added new features, and fixed many bugs. However, most of these changes have been in preview for the past few months. Today, we are proud to share the stable release in .NET, Java, Python, and JavaScript/TypeScript with you. This blog will give you a brief introduction to what we are bringing in this release.
In this release, we have added support for more environments and developer platforms, without compromising the simplicity of the DefaultAzureCredential class. It’s now easier than ever to authenticate your cloud application on your local workstation with your choice of IDE or developer tool. When the application is deployed to Azure, you are given more control of, and insight into, how your application is authenticated.
Use the links below to find the August release of each language:
In the Azure Identity November 2019 release, DefaultAzureCredential supported reading credentials from environment variables, Managed Identity, the Windows shared token cache, and interactively in the browser (for .NET & Python), in that order. In this new release, DefaultAzureCredential is much more powerful, supporting a set of new environments in the following order (a merged list of all languages):

1. Environment variables: DefaultAzureCredential will read account information specified via environment variables and use it to authenticate.
2. Managed Identity: if the application is deployed to an Azure host with Managed Identity enabled, DefaultAzureCredential will authenticate with that account.
3. Windows shared token cache: if the developer has signed in to a Microsoft application on Windows, DefaultAzureCredential will authenticate with that account.
4. Visual Studio: if the developer has signed in to Azure via Visual Studio, DefaultAzureCredential will authenticate with that account.
5. Visual Studio Code: if the developer has signed in via the Visual Studio Code Azure Account extension, DefaultAzureCredential will authenticate with that account.
6. IntelliJ: if the developer has signed in via the Azure Toolkit for IntelliJ, DefaultAzureCredential will authenticate with that account.
7. Azure CLI: if the developer has signed in to an account via the az login command, DefaultAzureCredential will authenticate with that account.
8. Interactive browser: as a last resort, DefaultAzureCredential will interactively authenticate the developer via the current system’s default browser.

Using the DefaultAzureCredential remains the same as in the previous releases:
// .NET
var client = new SecretClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
// Java
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
SecretClient secretClient = new SecretClientBuilder()
.vaultUrl(keyVaultUrl)
.credential(credential)
.buildClient();
// JavaScript
const client = new SecretClient(keyVaultUrl, new DefaultAzureCredential());
# Python
client = SecretClient(vault_url, DefaultAzureCredential())
Not only is DefaultAzureCredential updated to support these environments; you can also pick the specific credential to use. Here is the list of credentials, grouped by usage type:
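For example, a .NET sketch of picking specific credentials instead of the full chain: an application that should only ever use Managed Identity in production, falling back to the Azure CLI login during local development, can chain exactly those two credentials (the Key Vault client and endpoint are placeholders for illustration):

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Try Managed Identity first, then fall back to the Azure CLI login.
var credential = new ChainedTokenCredential(
    new ManagedIdentityCredential(),
    new AzureCliCredential());

var client = new SecretClient(new Uri("<secrets_endpoint>"), credential);
```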
#azure sdk #azure #azuresdk #identity #java #sdk
1596650400
When using Azure SDK .NET client libraries in high throughput applications, it’s important to know how to maximize performance and avoid extra allocations while preventing bugs that could be introduced by accessing data from multiple threads. This article covers the best practices for using clients and models efficiently.
The main rule of Azure SDK client lifetime management is: treat clients as singletons.
There is no need to keep more than one instance of a client for a given set of constructor parameters or client options. This can be implemented in many ways: creating an instance once and passing it around as a parameter, storing an instance in a field, or registering it as a singleton in a dependency injection container of your choice.
❌ Bad (extra allocations and initialization):
foreach (var secretName in secretNames)
{
var client = new SecretClient(new Uri("<secrets_endpoint>"), new DefaultAzureCredential());
KeyVaultSecret secret = client.GetSecret(secretName);
Console.WriteLine(secret.Value);
}
✔️ Good:
var client = new SecretClient(new Uri("<secrets_endpoint>"), new DefaultAzureCredential());
foreach (var secretName in secretNames)
{
KeyVaultSecret secret = client.GetSecret(secretName);
Console.WriteLine(secret.Value);
}
✔️ Also good:
public class Program
{
internal static SecretClient Client;
public static void Main()
{
Client = new SecretClient(new Uri("<secrets_endpoint>"), new DefaultAzureCredential());
}
}
public class OtherClass
{
    public string DoWork(string settingName)
    {
        KeyVaultSecret secret = Program.Client.GetSecret(settingName);
        return secret.Value;
    }
}
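The dependency-injection option mentioned above can look like the following sketch, assuming Microsoft.Extensions.DependencyInjection and a placeholder endpoint:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Register a single shared SecretClient for the whole application.
// Every consumer that takes SecretClient in its constructor
// receives the same instance.
services.AddSingleton(new SecretClient(
    new Uri("<secrets_endpoint>"), new DefaultAzureCredential()));
```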
We guarantee that all client instance methods are thread-safe and independent of each other (guideline). This ensures that the recommendation of reusing client instances is always safe, even across threads.
✔️ Good:
var client = new SecretClient(new Uri("<secrets_endpoint>"), new DefaultAzureCredential());
foreach (var secretName in secretNames)
{
// Using clients from parallel threads
Task.Run(() => Console.WriteLine(client.GetSecret(secretName).Value));
}
Because most model use cases involve a single thread, and to avoid incurring an extra synchronization cost, the input and output models of client methods are not thread-safe and can only be accessed by one thread at a time. The following sample illustrates a bug where accessing a model from multiple threads can cause undefined behavior.
❌ Bad:
KeyVaultSecret newSecret = client.SetSecret("secret", "value");
foreach (var tag in tags)
{
// Don't use model type from parallel threads
Task.Run(() => newSecret.Properties.Tags[tag] = CalculateTagValue(tag));
}
client.UpdateSecretProperties(newSecret.Properties);
If you need to access the model from different threads, use a synchronization primitive.
✔️ Good:
KeyVaultSecret newSecret = client.SetSecret("secret", "value");
foreach (var tag in tags)
{
Task.Run(() =>
{
lock (newSecret)
{
newSecret.Properties.Tags[tag] = CalculateTagValue(tag);
}
});
}
client.UpdateSecretProperties(newSecret.Properties);
Clients are immutable after they are created, which also makes them safe to share and reuse (guideline). This means that after a client is constructed, you cannot change the endpoint it connects to, the credential, or other values passed via the client options.
❌ Bad (configuration changes are ignored):
var secretClientOptions = new SecretClientOptions()
{
Retry =
{
Delay = TimeSpan.FromSeconds(5)
}
};
var mySecretClient = new SecretClient(new Uri("<...>"), new DefaultAzureCredential(), secretClientOptions);
// This has no effect on the mySecretClient instance
secretClientOptions.Retry.Delay = TimeSpan.FromSeconds(100);
NOTE: An important exception to this rule is credential types, which are required to support rolling the key after the client has been created (guideline). Examples of such types include AzureKeyCredential and StorageSharedKeyCredential. This feature enables long-running applications to use limited-time keys that are rolled periodically, without requiring an application restart or client re-creation.
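For instance, AzureKeyCredential exposes an Update method, so a key can be rolled on a live client; a sketch, with placeholder key values:

```csharp
using Azure;

// Create the credential with the initial key and pass it to a client.
var credential = new AzureKeyCredential("<initial_key>");

// Later, when the key is rotated, update the credential in place.
// Clients holding this credential instance pick up the new key on
// subsequent requests -- no client re-creation needed.
credential.Update("<rotated_key>");
```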
#azure sdk #.net #azuresdk #clientlibraries #clientlifetime #clients #sdk #threadsafety
1592394655
Note: SDK Track 2 is still a preview and subject to API changes.
Table of contents
The future of Azure Service Bus .NET SDK
ServiceBusClient
ServiceBusSender
ServiceBusReceiver
Safe Batching …
#azureservicebus #azuresdk #.net #azure service bus sdk #busreceivedmessage #programming