Azure Management Superpowers with Pulumi

Managing infrastructure as code is a vital skill in today’s cloud-first world. Learn how you can use C# or TypeScript to define and deploy Azure infrastructure and applications, including serverless functions, Kubernetes clusters, Cosmos DB, and much more.

#azure #programming #developer #csharp #typescript


TDE customer-managed keys in Azure SQL Database

Azure SQL Database is a Platform-as-a-Service (PaaS) offering that provides a managed database service. It includes features such as automatic database tuning, vulnerability assessment, automated patching, performance tuning, and alerts. It provides a 99.995% availability SLA for zone-redundant databases in the Business Critical service tier.

This article explores Transparent Data Encryption (TDE) using the customer-managed key in Azure SQL Database.

Introduction

In an on-premises SQL Server instance, database administrators can enable Transparent Data Encryption (TDE) to secure the data and log files of a database. It helps protect you from malicious threats by encrypting data at rest. You get real-time encryption of the database, transaction log files, and associated backup files without any configuration changes at the application end.

The high-level steps for implementing TDE are as follows.

  • Create a master key
  • Create a certificate protected by the master key
  • Create a database encryption key
  • Enable encryption
  • Back up the certificate
  • Restore the certificate
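The steps above map to the following T-SQL on an on-premises instance; the database name, certificate name, passwords, and file paths below are placeholders you would substitute for your own:

```sql
-- 1. Create a master key in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

-- 2. Create a certificate protected by the master key
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE Certificate';

-- 3. Create a database encryption key in the user database
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

-- 4. Enable encryption
ALTER DATABASE MyDatabase SET ENCRYPTION ON;

-- 5. Back up the certificate and its private key (required later to
--    restore encrypted backups on another server)
USE master;
BACKUP CERTIFICATE TDECert
    TO FILE = 'C:\Backup\TDECert.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\Backup\TDECert.pvk',
        ENCRYPTION BY PASSWORD = '<PrivateKeyPassword>'
    );
```

Keep the certificate backup and its password somewhere safe; without them, encrypted database backups cannot be restored on another instance.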

In the following image, we can visualize the TDE hierarchy. If you are new to TDE, you can refer to the following articles to get familiar with it.

Azure SQL DB TDE using Service Managed Key

If you migrate your on-premises databases to Azure SQL Database, TDE is enabled by default. You can connect to the Azure portal and verify the configuration. It uses an Azure service-managed key, and it is Azure's responsibility to manage the key without any user intervention. Microsoft automatically rotates these certificates according to its internal security policy and protects the certificate root key using its internal secret store.

As shown below, my [labazuresql] database is encrypted using transparent data encryption.
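Instead of the portal, you can also check the TDE state from T-SQL; a quick sketch against the example [labazuresql] database:

```sql
-- is_encrypted = 1 means TDE is on;
-- encryption_state 3 means encryption is complete
SELECT db.name,
       db.is_encrypted,
       dek.encryption_state
FROM sys.databases AS db
LEFT JOIN sys.dm_database_encryption_keys AS dek
    ON db.database_id = dek.database_id
WHERE db.name = 'labazuresql';
```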

#azure #sql azure #azure sql database #azure sql #customer-managed

Eric Bukenya

Learn NoSQL in Azure: Diving Deeper into Azure Cosmos DB

This article is a part of the series – Learn NoSQL in Azure – where we explore Azure Cosmos DB, a non-relational database system used widely for a variety of applications. Azure Cosmos DB is one of Microsoft's serverless databases on Azure; it is highly scalable and distributed across all Azure regions. It is offered as a platform as a service (PaaS), and you can develop databases that have very high throughput and very low latency. Using Azure Cosmos DB, customers can replicate their data across multiple regions around the globe, as well as across multiple locations within the same region. This makes Cosmos DB a highly available database service, with almost 99.999% availability for reads and writes in multi-region mode and almost 99.99% availability in single-region mode.

In this article, we will focus more on how Azure Cosmos DB works behind the scenes and how you can get started with it using the Azure Portal. We will also explore how Cosmos DB is priced and understand the pricing model in detail.

How Azure Cosmos DB works

As already mentioned, Azure Cosmos DB is a multi-model NoSQL database service that is geographically distributed across multiple Azure locations. This lets customers deploy their databases across multiple locations around the globe, which reduces read latency for the users of the application.

As you can see in the figure above, Azure Cosmos DB is distributed across the globe. Let's suppose you have a web application that is hosted in India. In that case, the database in India is considered the master database for writes, and all the other databases can be considered read replicas. Whenever new data is generated, it is written to the database in India first and then synchronized with the other databases.

Consistency Levels

While maintaining data over multiple regions, the most common challenge is the latency with which data becomes available in the other regions. For example, when data is written to the database in India, users in India will see that data sooner than users in the US, because of the synchronization delay between the two regions. To manage this trade-off, customers can choose from a few modes that define how soon their data is made available in the other regions. Azure Cosmos DB offers five levels of consistency, which are as follows:

  • Strong
  • Bounded staleness
  • Session
  • Consistent prefix
  • Eventual

In most common NoSQL databases, there are only two levels – Strong and Eventual. Strong is the most consistent level, while Eventual is the least. As we move from Strong to Eventual, consistency decreases but availability and throughput increase. This is a trade-off that customers need to decide on based on the criticality of their applications. If you want to read about the consistency levels in more detail, the official guide from Microsoft is the easiest to understand. You can refer to it here.

Azure Cosmos DB Pricing Model

Now that we have some idea of how to work with the NoSQL database Azure Cosmos DB on Azure, let us try to understand how the database is priced. To work with any cloud-based service, it is essential that you have a sound knowledge of how the service is charged; otherwise, you might end up paying much more than you expected.

If you browse to the pricing page of Azure Cosmos DB, you can see that there are two modes in which the database services are billed.

  • Database Operations – Whenever you execute or run queries against your NoSQL database, some resources are used. Azure measures this usage in Request Units (RU). The number of RUs consumed per second is aggregated and billed
  • Consumed Storage – As you start storing data in your database, it takes up space. This storage is billed at the standard SSD-backed storage rate, which is the same across all Azure locations globally

Let’s learn about this in more detail.
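As a rough illustration of how these two meters combine, here is a back-of-the-envelope estimate in Python. The unit prices are hypothetical placeholders, not current Azure rates; check the Cosmos DB pricing page for your region before relying on any numbers.

```python
# Rough monthly cost estimate for Azure Cosmos DB provisioned throughput.
# Unit prices below are HYPOTHETICAL placeholders for illustration only.

HOURS_PER_MONTH = 730  # common billing convention for one month

def estimate_monthly_cost(provisioned_ru_s: float,
                          storage_gb: float,
                          price_per_100_ru_s_hour: float = 0.008,
                          price_per_gb_month: float = 0.25) -> float:
    """Return an estimated monthly bill in USD.

    Throughput is billed per 100 RU/s provisioned, per hour;
    storage is billed per GB per month.
    """
    throughput_cost = (provisioned_ru_s / 100) * price_per_100_ru_s_hour * HOURS_PER_MONTH
    storage_cost = storage_gb * price_per_gb_month
    return round(throughput_cost + storage_cost, 2)

# Example: 400 RU/s provisioned and 10 GB of stored data
print(estimate_monthly_cost(400, 10))
```

Note that with provisioned throughput you pay for the RU/s you reserve, whether or not you consume them; the serverless mode instead bills only the RUs actually consumed.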

#azure #azure cosmos db #nosql #nosql in azure

Aisu Joesph

Managed Identities in Azure with Terraform

In this article, I’ll explain the concepts around managed identities in Azure, the different types of managed identities, and how to assign them to a VM. Then I’ll show how to authenticate Terraform to Azure using a managed identity. Lastly, we’ll configure an Application Gateway to use a managed identity in order to access secrets in an Azure Key Vault.

What is a managed identity?

Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication.

Crucially, the management of credentials is handled by the platform (hence the word managed), not by the application or the developer.

Using Managed Identities to Authenticate with Terraform

You can use a *system-assigned* managed identity to authenticate when using Terraform. The managed identity will need to be assigned RBAC permissions on the subscription, with a role of either Owner, or both Contributor and User Access Administrator.
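Once the VM running Terraform has a system-assigned identity with the right role, you can point the azurerm provider at it. A minimal sketch, with placeholder subscription and tenant IDs:

```hcl
provider "azurerm" {
  features {}

  # Authenticate using the VM's system-assigned managed identity
  use_msi         = true
  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "00000000-0000-0000-0000-000000000000"
}
```

With `use_msi = true`, Terraform obtains tokens from the VM's instance metadata endpoint, so no client secret ever needs to be stored in configuration or environment variables.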

Azure Application Gateway and Key Vault with Managed Identity in Terraform

Managed identities can also be created and managed using Terraform and then assigned a role. These can then be tied to a resource, like a VM or Application Gateway.
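A minimal sketch of creating a user-assigned identity and granting it read access to secrets in a Key Vault; the resource names (`appgw-identity`, the `example` resource group and vault) are hypothetical:

```hcl
resource "azurerm_user_assigned_identity" "appgw" {
  name                = "appgw-identity"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

# Allow the identity to read secrets (e.g. a TLS certificate password)
# from the Key Vault
resource "azurerm_key_vault_access_policy" "appgw" {
  key_vault_id = azurerm_key_vault.example.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = azurerm_user_assigned_identity.appgw.principal_id

  secret_permissions = ["Get"]
}
```

The identity's `principal_id` is what role assignments and access policies reference; the Application Gateway itself is then associated with the identity via its `identity` block.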

#azure-devops #azure-managed-identities #azure-active-directory #azure #terraform

Ruthie Bugala

How to set up Azure Data Sync between Azure SQL databases and on-premises SQL Server

In this article, you will learn how to set up the Azure Data Sync service. In addition, you will learn how to create and configure a data sync group between an Azure SQL database and an on-premises SQL Server.

In this article, you will see:

  • An overview of the Azure SQL Data Sync feature
  • A discussion of its key components
  • A comparison of Azure SQL Data Sync with the other Azure data options
  • Setting up Azure SQL Data Sync
  • More…

Azure Data Sync

Azure Data Sync is a synchronization service built on Azure SQL Database. This service synchronizes data across multiple SQL databases. You can set up bi-directional synchronization, in which data flows in and out between the databases; this can be between an Azure SQL database and an on-premises SQL Server, or between Azure SQL databases within the cloud. At this moment, the one limitation is that it does not support Azure SQL Managed Instance.

#azure #sql azure #azure sql #azure data sync #sql server

Advancing Azure business continuity management

How we define a “service” for our BCM program

If you ask three people what a service is, you may get three different answers. At Microsoft, we define a service (business process or technology) as a means of delivering value to customers (first- or third-party) by facilitating outcomes customers want to achieve.

To ensure the highest level of resiliency for each of our “services,” we include:

  • People: The people who are responsible for providing the service.
  • Process: The methodology used to provide the service.
  • Technology: The tools used to deliver the service or the technology itself delivered as the value.

Customers see our services as product offerings composed of various bundled services. Each individual service is mapped in our inventory and run through the BCM program to ensure that the people, processes, and technologies for those services are resilient to a variety of failures.

Our end-to-end program identifies, prioritizes, maps, and tests every service, providing more than “box-checking” compliance. Instead, we focus on a broad understanding of how to provide the best service to our customers, who demand reliable service offerings for their business.

How the BCM program is managed in practice

Through a sophisticated set of tooling, every service (both internal- and external-facing) is uniquely mapped and shared with a string of compliance tooling addressing privacy, security, BCM, and more. This ensures that every service contains shareable metadata for other tools, regardless of type or criticality.

In the context of this post, records are automatically ported to our BCM management tool. Once there, they are automatically scoped for disaster recovery (DR) requirements that meet certifiable standards and to deliver on our customer promises. These records contain the most familiar elements of a BCM program, including business impact analysis, dependencies, workforce, suppliers, recovery plans, and tests. In addition, we provide insight into potential customer impacts, detection capabilities, and willingness to failover.

Testing recoverability

No amount of tooling, policies, or documents can provide the same level of confidence in service recovery and sustainability as comprehensive testing. Azure services test at various levels ranging from individual unit tests, all the way to complete “region down” scenarios. Every service must show proof of testing and that their recovery meets their stated goals—both internally and what we guarantee to our end customers in the Service Level Agreements (SLAs). Tabletop testing, in which simulated emergencies are merely discussed, is not considered acceptable or compliant for our program.

Our most robust integrated testing takes place in our “Canary” environment that consists of two distinct production datacenter regions: one in Eastern United States and the other in Central United States.

On a regular basis, we test service recovery with a complete zone or region shutdown (simulating a major production outage or catastrophic loss), forcing all services to invoke their recovery plans. These tests not only verify service recoverability, but also test our incident response team’s processes for managing major incidents. For Availability Zones, we test and verify the seamless continuation of service availability in the face of an entire zone loss. These are end-to-end tests that include detection, response, coordination, and recovery.

All processes from detection to response and action are performed as if it were a real service-impacting event. Service responders are the normal on-call engineers. We also test synthetic customer-responsible functions, such as virtual machine (VM) failover to paired regions, ensuring customer workloads can operate in large-scale failure scenarios.

#management #azure #azure business #continuity management