In a previous article, I reviewed how to set up logs and troubleshoot Cosmos DB issues using Azure Log Analytics.

Context

You can read the full story by following the link above. In a nutshell, I pushed 31 million invoices into Cosmos DB, reaching a total size of 47 GB. The database had containers with different indexing and logical partition key configurations. The test’s purpose wasn’t cost analysis, but an unexpected spike in charges on my subscription resulted in this article.

So here we are now.

For reference, below is a code snippet along the lines of what I used for data ingestion (utilizing the parallelism support of the Cosmos DB .NET SDK).
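A minimal sketch of that ingestion pattern, assuming the v3 Microsoft.Azure.Cosmos SDK with bulk execution enabled; the Invoice type, the database and container names, and the partition key choice are all illustrative assumptions:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical invoice document; the real model from the test is not shown.
public record Invoice(string id, string InvoiceNumber, decimal Amount);

public static class Ingestion
{
    public static async Task IngestAsync(
        string connectionString, IReadOnlyList<Invoice> invoices)
    {
        // AllowBulkExecution lets the SDK group concurrent point operations
        // into batches behind the scenes.
        using CosmosClient client = new(
            connectionString,
            new CosmosClientOptions { AllowBulkExecution = true });

        // Database and container names are assumptions for this sketch.
        Container container = client.GetContainer("InvoicesDb", "Invoices");

        // Queue all inserts without awaiting them one by one;
        // the bulk pipeline fills batches from the pending tasks.
        List<Task> pending = new(invoices.Count);
        foreach (Invoice invoice in invoices)
        {
            pending.Add(container.CreateItemAsync(
                invoice, new PartitionKey(invoice.InvoiceNumber)));
        }

        await Task.WhenAll(pending);
    }
}
```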

Identifying the source

Low provisioned throughput for the account helped keep database charges in a reasonable range: the Cosmos DB data ingestion cost $35. Azure Log Analytics, at the same time, cost a surprising $50.

That is a good reason to investigate why the logging costs were so high and how to reduce them without compromising the depth of logging for the system.

First, we need to correlate the volume of logs stored with each gigabyte of Cosmos DB data. The Kusto Query Language is a perfect tool to get this information.
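A minimal sketch of such a query, assuming diagnostics are routed to the AzureDiagnostics table; _BilledSize is the billed size of each record in bytes, and the resource provider filter narrows the results to Cosmos DB:

```kusto
AzureDiagnostics
| where TimeGenerated > ago(30d)
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
// Convert billed bytes to gigabytes, broken down by log category
| summarize BilledGB = sum(_BilledSize) / pow(1024, 3) by Category
| order by BilledGB desc
```

Comparing the per-category gigabytes from a query like this against the 47 GB of Cosmos DB data gives the logs-per-gigabyte ratio we are after.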
