1628821437
Azure Monitor is a service that collects telemetry data from your Azure and on-premises environments. You can use Azure Monitor to improve performance and support the stability of your applications by identifying issues quickly. Azure Monitor includes Azure Metrics and Azure Logs, which store telemetry data for both real-time tracking and historical analysis.
Azure Metrics track values collected at regular intervals and can be used to describe the health of your resources at a particular point in time. Azure Metrics are lightweight and useful for creating dashboard KPIs, alerting, and fast detection of issues.
In part three of this seven-part Azure SQL VM series, we will use Azure Metrics to assess the cached and uncached health of your virtual machine and disk resources. We will show how to find metrics at the VM scope, pin metrics to build a custom dashboard, manage alerts, and finally compare cached and uncached metrics at the VM and disk levels.
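If you prefer to pull the same numbers programmatically instead of through the portal, here is a minimal sketch using the Python azure-monitor-query package. The resource ID is a placeholder, and the metric names ("VM Cached IOPS Consumed Percentage" and "VM Uncached IOPS Consumed Percentage") are assumptions based on the cached and uncached metrics discussed in the episode; check the Metrics blade on your VM for the exact names available on your VM size.

```python
# Sketch only: pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID of the Azure SQL VM being monitored.
vm_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    vm_resource_id,
    # Assumed metric names; confirm them in the portal for your VM size.
    metric_names=["VM Cached IOPS Consumed Percentage", "VM Uncached IOPS Consumed Percentage"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average)
```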
0:00 Show begins
0:48 Tracking VM cached health
2:55 Demo
9:23 Getting started
✔️Resources:
Disk performance metrics: https://aka.ms/storagemetrics?WT.mc_id=dataexposed-c9-niner
Azure Monitor: https://aka.ms/azuremonitor?WT.mc_id=dataexposed-c9-niner
#AzureSQL #AzureSQLVM #AzureMonitor #AzureSQLVirtualMachine
1623912452
Azure Monitor helps you maximize the availability and performance of your apps. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
All data collected by Azure Monitor fits into one of two fundamental types: metrics and logs.
To better support analyzing these data sources, we’re pleased to announce the 1.0 Beta 1 release of the Azure Monitor Query client library.
This blog post will highlight new features of the library.
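As a quick illustration, here is a minimal sketch of a log query using the Python flavor of the library (the release also targets the other languages tagged below). The workspace ID and the KQL query are placeholders, and method names may differ slightly between the Beta 1 release described here and later versions.

```python
# Sketch only: pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Placeholder workspace ID and an example KQL query.
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query="AzureActivity | summarize count() by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```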
#azure sdk #.net #azure #azure-monitor #azuresdk #java #javascript #logs #metrics
1598959140
Many enterprises and SaaS companies depend on a variety of external API integrations in order to build an awesome customer experience. Some integrations outsource business functionality such as payments or search to companies like Stripe and Algolia. Others expand the functionality of your product offering. For example, if you want to add real-time alerts to an analytics tool, you might integrate the PagerDuty and Slack APIs into your application.
If you’re like most companies, though, you’ll soon realize you’re integrating hundreds of different vendors and partners into your app. Any one of them could have performance or functional issues that impact your customer experience. Worse yet, the reliability of an integration may be less visible than that of your own APIs and backend. If login is broken, you’ll have many customers complaining they cannot log into your website. However, if your Slack integration is broken, only the customers who added Slack to their account will be impacted. On top of that, since the integration is asynchronous, your customers may not realize it is broken until days later, when they notice they haven’t received any alerts for a while.
How do you ensure your API integrations are reliable and performant? After all, if you’re selling a feature like real-time alerting, your alerts had better actually be real-time, with at-least-once delivery guaranteed. Dropping alerts because your Slack or PagerDuty integration failed is unacceptable from a customer experience perspective.
Exceedingly high latency in a specific API integration can be a signal that the integration is about to fail. Maybe your pagination scheme is wrong, or the vendor has not indexed your data in a way that lets you query it efficiently.
Average latency only tells you half the story. An API that consistently takes one second to complete is usually better than an API with high variance. For example, if an API typically completes in 30 milliseconds, but 1 out of 10 calls takes up to five seconds, then you have high variance in your customer experience. This makes bugs much harder to track down and the behavior much harder to design around. This is why the 90th and 95th percentiles are important to look at.
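Here is a tiny, self-contained sketch of that effect using made-up latency numbers; the point is only how different the median and the tail percentiles can look for the same data.

```python
import statistics

# 1 out of 10 calls hits a five-second tail; the rest hover around 30 ms (made-up numbers).
latencies_ms = [30, 28, 31, 27, 29, 33, 30, 26, 32, 5000]

median = statistics.median(latencies_ms)
cuts = statistics.quantiles(latencies_ms, n=100)   # 99 cut points
p90, p95 = cuts[89], cuts[94]

print(f"median={median:.0f} ms  p90={p90:.0f} ms  p95={p95:.0f} ms")
# The median still reads about 30 ms, while p90/p95 expose the five-second tail.
```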
Reliability is a key metric to monitor, especially since you’re integrating APIs that you don’t control. What percentage of API calls are failing? In order to track reliability, you need a rigorous definition of what constitutes a failure.
While any API call with a response status code in the 4xx or 5xx family may be considered an error, you might have business cases where the API appears to complete successfully yet the call should still be counted as a failure. For example, a data API integration that consistently returns no matches or no content could be considered failing even though the status code is always 200 OK. Another API could be returning bogus or incomplete data. Data validation is critical for measuring whether the data returned is correct and up to date.
Not every API provider and integration partner follows the suggested status code mappings.
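As a concrete, hypothetical sketch of a failure definition that goes beyond status codes: the field names ("results", "updated_at") and the 30-day staleness threshold below are illustrative assumptions, not any particular vendor's contract.

```python
from datetime import datetime, timedelta, timezone

def is_failure(status_code: int, payload: dict) -> bool:
    """Classify an integration call as failed, even when the vendor returns 200 OK."""
    if status_code >= 400:                 # transport/protocol level errors
        return True
    if not payload.get("results"):         # 200 OK but no matches / no content
        return True
    # Data present but stale: treat anything older than 30 days as a failure.
    updated_at = datetime.fromisoformat(payload["updated_at"])
    return datetime.now(timezone.utc) - updated_at > timedelta(days=30)

print(is_failure(200, {"results": [], "updated_at": "2021-01-01T00:00:00+00:00"}))  # True
```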
While reliability is about errors and functional correctness, availability (or uptime) is a pure infrastructure metric that measures how often a service has an outage, even a temporary one. Availability is usually expressed as a percentage of uptime per year, or as a number of nines.
Availability % | Downtime per year | Downtime per month | Downtime per week | Downtime per day
90% (“one nine”) | 36.53 days | 73.05 hours | 16.80 hours | 2.40 hours
99% (“two nines”) | 3.65 days | 7.31 hours | 1.68 hours | 14.40 minutes
99.9% (“three nines”) | 8.77 hours | 43.83 minutes | 10.08 minutes | 1.44 minutes
99.99% (“four nines”) | 52.60 minutes | 4.38 minutes | 1.01 minutes | 8.64 seconds
99.999% (“five nines”) | 5.26 minutes | 26.30 seconds | 6.05 seconds | 864.00 milliseconds
99.9999% (“six nines”) | 31.56 seconds | 2.63 seconds | 604.80 milliseconds | 86.40 milliseconds
99.99999% (“seven nines”) | 3.16 seconds | 262.98 milliseconds | 60.48 milliseconds | 8.64 milliseconds
99.999999% (“eight nines”) | 315.58 milliseconds | 26.30 milliseconds | 6.05 milliseconds | 864.00 microseconds
99.9999999% (“nine nines”) | 31.56 milliseconds | 2.63 milliseconds | 604.80 microseconds | 86.40 microseconds
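The table values follow from simple arithmetic: downtime equals (1 - availability) multiplied by the period. A quick sketch, using a 365.25-day year to match the figures above:

```python
from datetime import timedelta

def downtime(availability_pct: float, period: timedelta) -> timedelta:
    """Downtime allowed by an availability target over a given period."""
    return (1 - availability_pct / 100) * period

year = timedelta(days=365.25)
for target in (90.0, 99.0, 99.9, 99.99, 99.999):
    print(f"{target}% -> {downtime(target, year)} per year")
```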
Many API providers price on API usage. Even if the API is free, they most likely have some form of rate limiting in place to ensure bad actors do not starve out good clients. This means tracking your API usage with each integration partner is critical so you know when your usage is approaching plan limits or rate limits.
It’s recommended to tie usage back to your end users, even if the API integration sits well downstream of your customer experience. This lets you measure the direct ROI of specific integrations and spot trends. For example, say your product is a CRM and you are paying Clearbit $199 a month to enrich up to 2,500 companies. That is a direct cost, and it is tied to your customers’ usage. If free-tier customers are consuming most of your Clearbit quota, you may want to reconsider your pricing strategy; perhaps Clearbit enrichment should be available only on paid tiers to reduce your own cost.
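A back-of-the-envelope sketch of that attribution, using the Clearbit numbers from the example above; the customer names and call counts are made up:

```python
from collections import Counter

plan_cost_usd, plan_quota = 199.0, 2500
cost_per_enrichment = plan_cost_usd / plan_quota        # roughly $0.08 per company enriched

# Hypothetical per-customer enrichment counts pulled from your own usage logs.
enrichments_by_customer = Counter(
    {"acme (free tier)": 1400, "globex (paid)": 600, "initech (paid)": 300}
)

for customer, calls in enrichments_by_customer.items():
    print(f"{customer}: {calls} enrichments, about ${calls * cost_per_enrichment:.2f} of the plan")

used = sum(enrichments_by_customer.values())
print(f"quota used this month: {used}/{plan_quota} ({used / plan_quota:.0%})")
```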
Monitoring API integrations seems like the obvious remedy to stay on top of these issues. However, traditional Application Performance Monitoring (APM) tools like New Relic and AppDynamics focus on monitoring the health of your own websites and infrastructure. This includes infrastructure metrics like memory usage and requests per minute, along with application-level health such as Apdex scores and latency. Of course, if you’re consuming an API that runs in someone else’s infrastructure, you can’t just ask your third-party providers to install an APM agent for you. This means you need to monitor third-party APIs indirectly, or via some other instrumentation methodology.
#monitoring #api integration #api monitoring #monitoring and alerting #monitoring strategies #monitoring tools #api integrations #monitoring microservices
1620723180
Do you have questions when considering Serverless360 as a monitoring solution for your Azure serverless applications?
In real-world scenarios, Azure serverless services are put together to build orchestrations that solve critical business needs, and those applications need a monitoring solution. What you will find in the Azure portal is Azure Monitor, a monitoring solution scoped to individual Azure entities. Serverless360 complements the Azure portal by providing the much-needed monitoring at the application level.
What is required is a consolidated report on the status of all the Azure entities that participate in the business application, and it is challenging to correlate per-entity reports from Azure Monitor. Serverless360 provides a consolidated report across all the Azure entities that participate in the business solution, which makes it easy to interpret.
A restriction of Azure Monitor is that every alert rule is tied to a single entity and only a couple of metrics can be configured per rule. Serverless360’s data monitoring enables monitoring multiple entities against an extensive list of metrics, so serverless applications can be watched from several perspectives, such as availability, reliability, and performance.
#azure #azure monitor #serverless360
1624713540
This article is a part of the series Learn NoSQL in Azure, where we explore Azure Cosmos DB as part of the non-relational database systems used widely for a variety of applications. Azure Cosmos DB is one of Microsoft’s serverless databases on Azure; it is highly scalable and can be distributed across any Azure region. It is offered as a platform as a service (PaaS), and you can build databases with very high throughput and very low latency. Using Azure Cosmos DB, customers can replicate their data across multiple regions around the globe, as well as across multiple locations within the same region. This makes Cosmos DB a highly available database service, offering up to 99.999% availability for reads and writes in multi-region configurations and 99.99% availability in single-region configurations.
In this article, we will focus more on how Azure Cosmos DB works behind the scenes and how you can get started with it using the Azure portal. We will also explore how Cosmos DB is priced and understand the pricing model in detail.
As already mentioned, Azure Cosmos DB is a multi-model NoSQL database service that is geographically distributed across multiple Azure locations. This lets customers deploy databases across multiple locations around the globe, which helps reduce read latency for users of the application.
As you can see in the figure above, Azure Cosmos DB is distributed across the globe. Suppose you have a web application that is hosted in India. In that case, the NoSQL database in India is treated as the write (master) database and all the other databases act as read replicas. Whenever new data is generated, it is written to the database in India first and then synchronized to the other databases.
When maintaining data across multiple regions, the most common challenge is the latency before data becomes available in the other regions. For example, when data is written to the database in India, users in India will see that data sooner than users in the US, due to the synchronization latency between the two regions. To manage this, customers can choose from a few modes that define how soon their data must be made available in the other regions. Azure Cosmos DB offers five consistency levels: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual.
Most common NoSQL databases offer only two levels, Strong and Eventual, with Strong being the most consistent and Eventual the least. As we move from Strong to Eventual, consistency decreases but availability and throughput increase. This is a trade-off that customers need to decide based on the criticality of their applications. If you want to read about the consistency levels in more detail, the official guide from Microsoft is the easiest to understand. You can refer to it here.
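For illustration, consistency can also be requested from application code when you connect. Below is a minimal sketch using the Python azure-cosmos SDK; the endpoint and key are placeholders, and the consistency_level argument (here set to Session) should be checked against the SDK version you are using.

```python
# Sketch only: pip install azure-cosmos
from azure.cosmos import CosmosClient, PartitionKey

endpoint = "https://<your-account>.documents.azure.com:443/"   # placeholder
key = "<your-account-key>"                                     # placeholder

# Assumed keyword argument; it requests Session consistency for this client.
client = CosmosClient(endpoint, credential=key, consistency_level="Session")

database = client.create_database_if_not_exists("appdb")
container = database.create_container_if_not_exists(
    id="items",
    partition_key=PartitionKey(path="/customerId"),
)
container.upsert_item({"id": "1", "customerId": "c-42", "note": "written to the primary region first"})
```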
Now that we have some idea about working with the NoSQL database Azure Cosmos DB, let us try to understand how the database is priced. To work with any cloud-based service, it is essential that you have a sound understanding of how the service is charged; otherwise, you might end up paying much more than you expect.
If you browse to the pricing page of Azure Cosmos DB, you can see that there are two modes in which the database services are billed.
Let’s learn about this in more detail.
#azure #azure cosmos db #nosql #nosql in azure
1620025440
Kubernetes is one of the most popular choices for container management and automation today. A highly efficient Kubernetes setup generates innumerable new metrics every day, making monitoring cluster health quite challenging. You might find yourself sifting through several different metrics without being entirely sure which ones are the most insightful and warrant utmost attention.
As daunting as this may seem, you can hit the ground running by knowing which of these metrics provide the right kind of insight into the health of your Kubernetes clusters. Although observability platforms can help you monitor the right metrics for your clusters, knowing exactly which ones to watch will help you stay on top of your monitoring needs. In this article, we take you through a few Kubernetes health metrics that top our list.
A crash loop is the last thing you want to go undetected. During a crash loop, your application breaks down as a pod starts, crashes, and restarts over and over. Multiple causes can lead to a crash loop, making it tricky to identify the root one. Being alerted when a crash loop occurs helps you quickly narrow down the list of causes and take emergency measures to keep your application available.
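As a hedged sketch of how you could surface crash loops yourself, the official Kubernetes Python client can list pods whose containers are waiting in CrashLoopBackOff; in practice you would feed this into your alerting pipeline rather than print it.

```python
# Sketch only: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for status in (pod.status.container_statuses or []):
        waiting = status.state.waiting
        if waiting and waiting.reason == "CrashLoopBackOff":
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container={status.name} restarts={status.restart_count}")
```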
#devops #kubernetes #monitoring #observability #kubernetes health monitoring #monitoring for kubernetes