Azure Monitor and Log Analytics are an essential part of the Azure infrastructure and, in my opinion, their adoption should start even before a company begins its migration to the cloud. Running these tools over on-premises servers generates a performance baseline that can be used when migrating the servers, making it possible to verify that the environment is actually improving.
However, the implementation needs to be careful: if you choose the easy way, you may drain a fair amount of money from your company's pocket.
The biggest example I noticed was the alert system. Alerts are an essential part not only of these tools but of the Azure infrastructure as a whole, yet you need to watch the expenses. When you create an alert you have several options to choose from and, among them, the signal type: Log or Metric.
The Log option means you build a Kusto query to retrieve the information from the Log Analytics storage, while the Metric option means you define a metric you are interested in and Log Analytics does the rest.
Let's analyse an example. Imagine you would like to build an alert to notify you every time a processor core stays over 80% for more than 15 minutes. It seems an easy case for a metric, right?
That’s what happens when you choose to use metrics:
Since the check is per core, the alert needs three dimensions: the computer, the metric (processor) and the instance (core). However, we also can't overreact: any core can easily spike over 80% at any moment, which is why we configure the 15-minute check window. The processor is only in trouble if it stays over 80% for the entire 15 minutes.
The configuration is like this:
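For reference, a metric alert with these dimensions can also be expressed as an ARM template resource instead of through the portal. This is a minimal sketch, not a definitive implementation: the rule name, the `<workspace-resource-id>` placeholder, and the exact metric name are assumptions you would adapt to your workspace.

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "cpu-core-over-80",
  "location": "global",
  "properties": {
    "severity": 2,
    "enabled": true,
    "scopes": [ "<workspace-resource-id>" ],
    "evaluationFrequency": "PT5M",
    "windowSize": "PT15M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "name": "CpuPerCore",
          "metricName": "Average_% Processor Time",
          "dimensions": [
            { "name": "Computer", "operator": "Include", "values": [ "*" ] },
            { "name": "InstanceName", "operator": "Include", "values": [ "*" ] }
          ],
          "operator": "GreaterThan",
          "threshold": 80,
          "timeAggregation": "Average"
        }
      ]
    }
  }
}
```

Note the three dimensions from the discussion above (computer, metric, core instance) and the 15-minute window expressed as `"windowSize": "PT15M"`.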
Log Analytics uses the Kusto Query Language, or KQL, to query the information in its storage. With it we can build a query capable of achieving the same result as the metric alert. Learning a new query language is not the easiest task when starting a migration, but the price difference may be worth it.
The KQL query we need is this one:
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName != "_Total"
| where TimeGenerated >= ago(15m)
| summarize MinProcessor = min(CounterValue) by Computer, InstanceName
| where MinProcessor >= 80
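To turn this query into an alert, it goes into a scheduled query rule. Below is a minimal ARM sketch, assuming the rule should fire whenever the query returns at least one row; the rule name, region, and `<workspace-resource-id>` are placeholders.

```json
{
  "type": "Microsoft.Insights/scheduledQueryRules",
  "apiVersion": "2021-08-01",
  "name": "cpu-core-over-80-kql",
  "location": "<region>",
  "properties": {
    "enabled": true,
    "evaluationFrequency": "PT5M",
    "windowSize": "PT15M",
    "scopes": [ "<workspace-resource-id>" ],
    "severity": 2,
    "criteria": {
      "allOf": [
        {
          "query": "Perf | where ObjectName == 'Processor' and CounterName == '% Processor Time' and InstanceName != '_Total' | where TimeGenerated >= ago(15m) | summarize MinProcessor = min(CounterValue) by Computer, InstanceName | where MinProcessor >= 80",
          "timeAggregation": "Count",
          "operator": "GreaterThan",
          "threshold": 0
        }
      ]
    }
  }
}
```

The design choice here is to let the query itself decide when something is wrong and use a simple "row count greater than zero" condition in the rule, which keeps the alert logic in one place: the KQL.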
The price difference is amazing:
Such a high price difference almost makes metrics useless for this purpose. Why would anyone use metric alerts if KQL is so much cheaper?
Even on the objects where we don't have the Log option to build conditions, we can still use KQL queries instead of metrics. We can configure the objects to send all their logs to a Log Analytics workspace in our Azure environment. By doing that, we become able to configure the alerts for all of them inside Log Analytics.
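As an illustration, once a resource's diagnostic logs land in the workspace, many of them arrive in the AzureDiagnostics table and the alert condition becomes just one more KQL query. This is a hedged sketch: the category name is a placeholder you would replace with one your resource actually emits.

```kusto
AzureDiagnostics
| where TimeGenerated >= ago(15m)
| where Category == "<log-category>"
| summarize EventCount = count() by Resource
| where EventCount > 0
```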
The objects' configuration is not in the same place, or exactly the same, everywhere. Let's analyse the existing variations.