1596983100
Welcome to the February release of the Azure SDK. We have updated the following libraries:
These are ready to use in your production applications. You can find details of all released libraries on our releases page.
New preview releases:
We believe these are ready for your use, but not yet ready for production. Between now and the GA release, these libraries may undergo API changes. We’d love your feedback! If you use these libraries and like what you see, or you want to see changes, let us know in the GitHub issues for the appropriate language.
Use the links below to get started with your language of choice. You will notice that all the preview libraries are tagged with “preview”.
If you want to dive deep into the content, the release notes linked above and the change logs they point to give more details on what has changed.
One of the main advantages of the new Azure SDK is that common developer concerns are treated the same, irrespective of which client library you are using. In this issue, we will tackle logging and show you how to enable logging on each platform.
All .NET client libraries emit events to ETW (Event Tracing for Windows) via the EventSource class. This system has been a part of the .NET Framework for a long time. Event sources allow you to use structured logging in your application code with minimal performance overhead.
Although you can use out-of-process tools (such as PerfView or dotnet trace), a core tenet of our libraries is the ability to easily send logs to the console. When developing your app, you should be able to view the logs in real time without much overhead. You can accomplish this with a one-liner at the top of your application:
using AzureEventSourceListener listener = AzureEventSourceListener.CreateConsoleLogger();
The Java client libraries all use SLF4J under the covers for logging. There are many mechanisms for configuring SLF4J to get just the right logging for your application. By default, there is no SLF4J implementation in the client library. Today, we will show the easiest implementation: the simple logger. Download slf4j-simple 1.7.28.jar and add it to your classpath (ensuring you do not have another SLF4J binding in your app). Then set the AZURE_LOG_LEVEL environment variable to “verbose”. For example, in bash:
export AZURE_LOG_LEVEL="verbose"
Or, in PowerShell:
$env:AZURE_LOG_LEVEL="verbose"
Then run your application as normal. The logs will be emitted on the console. For more information on the Java logging system, refer to our wiki.
There are two easy ways to enable logging for your Node applications. You can set the AZURE_LOG_LEVEL environment variable to “verbose”. In this case, the logs will be output to stderr (for Node applications) or the console (for browser applications). For example, in bash:
export AZURE_LOG_LEVEL="verbose"
Or, in PowerShell:
$env:AZURE_LOG_LEVEL="verbose"
If you want to do the same thing in code, you can use the @azure/logger module:
import { setLogLevel } from "@azure/logger";
setLogLevel("verbose");
This allows you to be more dynamic in your logging. For example, you might want to enable logging on a single client library, or replace the logger with your own implementation. For more information, check out the logger library.
Python uses the standard logging module. This makes it really easy to configure just like you would any other Python library:
import logging
logging.basicConfig()
# Enable DEBUG logging for all azure libraries
azure_root_logger = logging.getLogger('azure')
azure_root_logger.setLevel(logging.DEBUG)
You can see more configuration examples in the logging cookbook.
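Building on the snippet above, and using only the standard library, you can attach your own handler and tune individual sub-loggers independently. This is a minimal sketch; the handler format and the sub-logger name are illustrative choices, not SDK requirements:

```python
import logging

# Route all azure.* log records through a console handler with timestamps.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))

azure_root_logger = logging.getLogger("azure")
azure_root_logger.setLevel(logging.DEBUG)
azure_root_logger.addHandler(handler)

# Child loggers inherit the level from the "azure" root unless overridden,
# so you can quiet one chatty sub-logger while keeping the rest at DEBUG.
# (The name below is a hypothetical example of the dotted hierarchy.)
logging.getLogger("azure.example.http_logging").setLevel(logging.WARNING)
```

Because loggers form a dotted hierarchy, one handler on the "azure" root covers every client library at once.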
As you can see, the same logging features are provided in each language, but how each language accomplishes them is idiomatic to the language. How you work with these features should feel very natural in your language of choice.
If it doesn’t, let us know!
So far, the community has filed hundreds of issues against these new SDKs with feedback ranging from documentation issues to API surface area change requests to pointing out failure cases. Please keep that coming. We work in the open on GitHub and you can submit issues here:
#azure sdk #releases #code #dev #azure
1624342320
Welcome to the June release of the Azure SDK. We have updated the following libraries:
JsonWebKey
#azure sdk #azure #azure-sdk #javascript #python #release #sdk
1596975780
Welcome to the March release of the Azure SDK. We have updated the following libraries:
These are ready to use in your production applications. You can find details of all released libraries on our releases page.
New preview releases:
In addition, we have released a new preview for the Java distributed tracing client, and a new GA release for the Python distributed tracing client. The distributed tracing client allows you to trace a request from the SDK entry-point through to the service using Azure Monitor.
We believe these are ready for you to use and experiment with, but not yet ready for production. Between now and the GA release, these libraries may undergo API changes. We’d love your feedback! If you use these libraries and like what you see, or you want to see changes, let us know in GitHub issues.
Use the links below to get started with your language of choice. You will notice that all the preview libraries are tagged with “preview”.
If you want to dive deep into the content, the release notes linked above and the change logs they point to give more details on what has changed.
This month, we are introducing a preview of the Azure Cognitive Search client. Azure Cognitive Search is search-as-a-service, allowing developers to add a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. You’ve probably seen this type of search experience in action when you use the product search capability of an e-commerce site. Let’s take a look at how you can implement search in your own client applications. For this demonstration, I’m going to use JavaScript and the React framework.
We recommend that most applications use an intermediary web API service to protect the API key. You can write your web API using Azure Functions and Node.js. The same JavaScript API is used to access Azure Cognitive Search.
Start by creating a singleton service client:
import { SearchIndexClient, SearchApiKeyCredential } from '@azure/search';
import searchClientConfiguration from './searchConfig.json';
const searchClient = new SearchIndexClient(
  searchClientConfiguration.endpoint,
  searchClientConfiguration.indexName,
  new SearchApiKeyCredential(searchClientConfiguration.apiKey)
);
export default searchClient;
Now, let’s imagine a search page with a search box at the top and a list of results:
import React, { useState } from 'react';
import searchClient from './searchClient';

const SearchPage = () => {
  const [results, setResults] = useState([]);
  const [searchString, setSearchString] = useState("");

  const searchProducts = async (value) => {
    setSearchString(value);
    const searchResponse = await searchClient.search({
      searchText: value,
      orderBy: [ "price desc" ],
      select: [ "productName", "price" ],
      top: 20,
      skip: 0
    });
    const searchResults = [];
    for await (const result of searchResponse.results) {
      searchResults.push(result);
    }
    setResults(searchResults);
  };

  return (
    <>
      <SearchInputBox onSearch={(value) => searchProducts(value)} />
      <FacetDisplay search={searchString} />
      <div className="searchResult">
        <ul>
          {results.map(result => (
            <li key={result.document.productName}>
              {result.document.productName}: {result.document.price}
            </li>
          ))}
        </ul>
      </div>
    </>
  );
};
When the user enters something in the SearchInputBox component, the onSearch method is called. This is asynchronous, allowing your application to remain responsive to more user input. The search string is stored in state (re-rendering the component to update the FacetDisplay), then a search is executed against the Azure Cognitive Search service. In this case, we are taking a single page of 20 results (as specified by the top and skip values). If you wish to implement paging, you would increment the skip value to skip that number of entries. In this case, successive values of skip would be 20, 40, 60, and so on.
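The top/skip arithmetic can be captured in a tiny helper. It is sketched here in Python for brevity (the search SDK is also available for Python); the helper is illustrative, not part of the SDK:

```python
def page_to_options(page, page_size=20):
    """Map a 1-based page number to the top/skip options used in the search call."""
    return {"top": page_size, "skip": page_size * (page - 1)}
```

For page 1 this yields a skip of 0; pages 2, 3, and 4 yield 20, 40, and 60, matching the sequence above.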
Facets allow you to display buckets that assist the user to further refine their search. If you are searching for a chair, you might want to refine your search based on location, rating, or color. Here is an example component:
import React, { useEffect, useState } from 'react';
import searchClient from './searchClient';

const FacetDisplay = ({ search }) => {
  const [locations, setLocations] = useState([]);
  const [ratings, setRatings] = useState([]);

  useEffect(() => {
    const runFacetQuery = async () => {
      const response = await searchClient.search({
        searchText: search,
        facets: [
          "location,count:3,sort:count",
          "rating,count:5,sort:count",
          "color,count:3,sort:count"
        ]
      });
      setLocations(response.facets.location.map(v => v.value));
      setRatings(response.facets.rating.map(v => v.value));
    };
    runFacetQuery();
  }, [search]);

  return (
    <div className="facetDisplay">
      <h2>Location</h2>
      <ul>
        {locations.map(value => <li key={value}>{value}</li>)}
      </ul>
      <h2>Rating</h2>
      <ul>
        {ratings.map(value => <li key={value}>{value}</li>)}
      </ul>
    </div>
  );
};
Although the search page and the facet display components both do searches, you can combine the searches into one operation. Once the user clicks on a facet, you will want to refine the search using an OData search filter. For example, let’s say the user clicked on the Location=US facet. You can perform that search as follows:
import { odata } from '@azure/search';

// In this case facetLocation = 'US'
const response = await searchClient.search({
  searchText: searchString,
  filter: odata`location eq ${facetLocation}`,
  orderBy: [ "price desc" ],
  select: [ "productName", "price" ],
  top: 20,
  skip: 0,
  facets: [
    "location,count:3,sort:count",
    "rating,count:5,sort:count",
    "color,count:3,sort:count"
  ]
});
The odata formatter ensures that the variables you use are quoted properly.
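To illustrate what that quoting involves, here is a simplified sketch in Python (not the actual odata implementation): OData string literals are wrapped in single quotes, with embedded single quotes doubled, while numbers pass through unchanged.

```python
def quote_odata_value(value):
    # Numbers are emitted as-is; everything else becomes a single-quoted
    # string literal with embedded single quotes doubled.
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def odata_filter(template, **params):
    """Build a filter expression, quoting each parameter safely."""
    return template.format(**{name: quote_odata_value(v) for name, v in params.items()})
```

For example, odata_filter("location eq {loc}", loc="US") produces "location eq 'US'", and a value like "O'Brien" is escaped rather than breaking the expression.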
Let’s turn our attention to the search input box. One of the features of any good product search is an autocompleter. Azure Cognitive Search provides suggesters to allow the search service to suggest records you might be interested in. You can use this to implement an autocomplete feature:
const response = await searchClient.autocomplete({
  searchText: searchString,
  suggesterName: 'sg'
});
const suggestions = response.results || [];
You can then use the suggestions to populate the drop-down autocomplete box.
For more information on the Azure Cognitive Search SDK for JavaScript, check out the API documentation. The Azure Cognitive Search SDK is also available for Python, Java, and .NET.
So far, the community has filed hundreds of issues against these new SDKs with feedback ranging from documentation issues to API surface area change requests to pointing out failure cases. Please keep that coming. We work in the open on GitHub and you can submit issues here:
Finally, please keep up to date with all the news about the Azure developer experience programs and let us know how we are doing by following @azuresdk on Twitter.
#azure sdk #releases #azure
1596986700
This month, we have promoted three of the client libraries to general availability, and expanded our service support to include a preview SDK for our first Cognitive Service: the Azure Text Analytics service.
The new generally available libraries being released this month are:
These are ready to use in your production applications. You can find details of all released libraries on our releases page.
New preview releases:
We believe these are ready for your use, but not yet ready for production. Between now and the GA release, these libraries may undergo API changes. We’d love your feedback! If you use these libraries and like what you see, or you want to see changes, let us know in the GitHub issues for the appropriate language.
Use the links below to get started with your language of choice. You will notice that all the preview libraries are tagged with “preview”.
If you want to dive deep into the content, the release notes linked above and the change logs they point to give more details on what has changed.
The Text Analytics API is part of the Azure Cognitive Services suite of machine learning services that provides advanced natural language processing over raw text. It can be used for sentiment analysis, language detection, key phrase extraction, and entity recognition (including PII).
The new SDK supports all the features of the new v3.0 REST API for Text Analytics. For example, you can detect the language that the text was written in, identify PII (personally identifiable information), extract key phrases, categorize concepts like places and people within the text, link to external sources (like Wikipedia or Bing) for disambiguation, and perform sentiment analysis.
To use the Text Analytics SDK, first create a client. We’ll use C# for this month’s snippets, although the SDK is also available in Java, Python, and JavaScript / TypeScript. To create a client:
var endpoint = new Uri(myEndpoint);
var client = new TextAnalyticsClient(endpoint, new DefaultAzureCredential());
The DefaultAzureCredential object will use whatever credentials it can find. If you are running the app on a local development workstation, it will use the user credentials from local development tools such as Visual Studio. If you are running the app in the Azure cloud, it will use the connected service principal.
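That fallback behavior can be sketched language-agnostically. Here is a minimal Python illustration of the idea; the credential sources below are hypothetical stand-ins, and the real class lives in the Azure Identity libraries:

```python
class ChainedCredential:
    """Try each credential source in order and use the first one that succeeds."""
    def __init__(self, *sources):
        self.sources = sources

    def get_token(self):
        failures = []
        for source in self.sources:
            try:
                return source()  # a source returns a token or raises
            except Exception as exc:
                failures.append(str(exc))  # remember why it failed and try the next
        raise RuntimeError("No credential source succeeded: " + "; ".join(failures))

# Hypothetical sources: nothing in the environment, but a local dev tool has a token.
def environment_credential():
    raise RuntimeError("no environment variables set")

def dev_tool_credential():
    return "token-from-dev-tool"

credential = ChainedCredential(environment_credential, dev_tool_credential)
```

Because the chain is tried in order, the same application code works unchanged on a developer workstation and in the cloud.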
Let’s take a typical string and use the named entities API to obfuscate PII (Personally Identifiable Information) within a hypothetical logging method:
var input = "SSN 555-55-5555, phone: 555-555-5555, some other info";
RecognizePiiEntitiesResult result = client.RecognizePiiEntities(input);
IReadOnlyCollection<NamedEntity> entities = result.NamedEntities;
var output = new StringBuilder(input);
foreach (var entity in entities) {
    var newText = new string('*', entity.Length);
    output.Replace(entity.Text, newText);
}
Console.WriteLine(output);
The output should be:
SSN ***********, phone: ************, some other info
The PII has been replaced with something innocuous. The SDK has both synchronous and asynchronous methods in all libraries, allowing you the flexibility to build your app in the way that you prefer.
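The same masking works in any language that tracks string offsets. Here is a small Python sketch that masks by character position instead of text replacement, which avoids accidentally masking an unrelated occurrence of the same text; the entity spans below are hypothetical stand-ins for what the service would return:

```python
def mask_entities(text, entities):
    """Replace each detected entity span with asterisks.
    `entities` is a list of (offset, length) pairs; this sketch
    does not call any service."""
    chars = list(text)
    for offset, length in entities:
        chars[offset:offset + length] = "*" * length
    return "".join(chars)

masked = mask_entities(
    "SSN 555-55-5555, phone: 555-555-5555, some other info",
    [(4, 11), (24, 12)],  # hypothetical spans for the SSN and the phone number
)
```

This reproduces the obfuscated output shown above.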
Let’s look at another use case: sentiment analysis. Use sentiment analysis to find out what your customers think from the comments they write on social media or other channels. The API returns a score between 0 and 1 for each document. This time, we will look at a Python example. As before, you need a client reference:
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
api_key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
client = TextAnalyticsClient(endpoint=endpoint, credential=api_key)
With a reusable client, you can perform any of the text analytics operations:
docs = [
    "This speaker was awesome. The talk was very relevant to my work.",
    "How boring! The speaker was monotone and put me to sleep!"
]

api_result = client.analyze_sentiment(docs)
results = [doc for doc in api_result if not doc.is_error]
for idx, s in enumerate(results):
    print("Sentiment = {} for doc {}".format(s.sentiment, docs[idx]))
This gives you an idea of how easy sentiment analysis is to implement, but there is much more power there. For example, you can do per-sentence sentiment analysis.
Be sure to check out all the samples for Text Analytics and let us know what you think! You can find samples for .NET, Java, JavaScript / TypeScript, and Python.
So far, the community has filed hundreds of issues against these new SDKs with feedback ranging from documentation issues to API surface area change requests to pointing out failure cases. Please keep that coming. We work in the open on GitHub and you can submit issues here:
#azure sdk #releases #azure #python #code #dev
1594753020
Multiple vulnerabilities in the Citrix Application Delivery Controller (ADC) and Gateway would allow code injection, information disclosure and denial of service, the networking vendor announced Tuesday. Four of the bugs are exploitable by an unauthenticated, remote attacker.
The Citrix products (formerly known as NetScaler ADC and Gateway) are used for application-aware traffic management and secure remote access, respectively, and are installed in at least 80,000 companies in 158 countries, according to a December assessment from Positive Technologies.
Other flaws announced Tuesday also affect Citrix SD-WAN WANOP appliances, models 4000-WO, 4100-WO, 5000-WO and 5100-WO.
Attacks on the management interface of the products could result in system compromise by an unauthenticated user on the management network; or system compromise through cross-site scripting (XSS). Attackers could also create a download link for the device which, if downloaded and then executed by an unauthenticated user on the management network, could result in the compromise of a local computer.
“Customers who have configured their systems in accordance with Citrix recommendations [i.e., to have this interface separated from the network and protected by a firewall] have significantly reduced their risk from attacks to the management interface,” according to the vendor.
Threat actors could also mount attacks on Virtual IPs (VIPs). VIPs, among other things, are used to provide users with a unique IP address for communicating with network resources for applications that do not allow multiple connections or users from the same IP address.
The VIP attacks include denial of service against either the Gateway or Authentication virtual servers by an unauthenticated user; or remote port scanning of the internal network by an authenticated Citrix Gateway user.
“Attackers can only discern whether a TLS connection is possible with the port and cannot communicate further with the end devices,” according to the critical Citrix advisory. “Customers who have not enabled either the Gateway or Authentication virtual servers are not at risk from attacks that are applicable to those servers. Other virtual servers e.g. load balancing and content switching virtual servers are not affected by these issues.”
A final vulnerability has been found in Citrix Gateway Plug-in for Linux that would allow a local logged-on user of a Linux system with that plug-in installed to elevate their privileges to an administrator account on that computer, the company said.
#vulnerabilities #adc #citrix #code injection #critical advisory #cve-2020-8187 #cve-2020-8190 #cve-2020-8191 #cve-2020-8193 #cve-2020-8194 #cve-2020-8195 #cve-2020-8196 #cve-2020-8197 #cve-2020-8198 #cve-2020-8199 #denial of service #gateway #information disclosure #patches #security advisory #security bugs