Azure Traffic Manager | A Complete Guide On Azure Traffic Manager

An increase in traffic to your web applications often means more customers, and those customers expect consistently good web performance. Surges in internet traffic toward an application can create bottlenecks that make it difficult to give users a smooth experience. That is why traffic management is important for internet-facing applications. Azure provides a traffic management service, called Azure Traffic Manager, that helps distribute traffic across application endpoints. In this post, we will give a detailed understanding of Azure Traffic Manager and how it works. Let us get started.

Let us first understand what service Azure Traffic Manager provides. Basically, this service balances the traffic load across services hosted in Azure: the client defines a routing policy, and traffic to the Azure-hosted services is redirected according to that policy. Traffic Manager is a DNS-based service, so it improves both the availability and the performance of applications.
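
Because Traffic Manager operates at the DNS level, you can observe its routing decisions with an ordinary DNS lookup. Below is a minimal sketch using the Python dnspython package; the profile name myapp.trafficmanager.net is hypothetical.

```python
# pip install dnspython
import dns.resolver

# Hypothetical Traffic Manager profile name. Traffic Manager answers the
# DNS query with a CNAME pointing at whichever endpoint its routing
# policy selects for this client.
profile_fqdn = "myapp.trafficmanager.net"

answer = dns.resolver.resolve(profile_fqdn, "CNAME")
for record in answer:
    print(f"{profile_fqdn} currently resolves to {record.target}")
```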

Let’s see how to create and configure a Traffic Manager profile in Azure.

Create Traffic Manager

Step 1 − Log in to the Azure management portal and click ‘New’ in the bottom left corner.

Step 2 − Select Network Services → Traffic Manager → Quick Create.


Step 3 − Enter the DNS prefix and select the Load Balancing Method.

There are three options in this dropdown.

Performance − This option is ideal when you have endpoints in different geographic locations. When a DNS query is received, it is answered with the endpoint in the region closest to the user.

Round Robin − This option is ideal when you want to distribute traffic among multiple endpoints. Traffic is distributed in round-robin fashion, selecting only healthy endpoints.

Failover − In this option, a primary endpoint is set up; in case of failure, alternate endpoints are made available as backup.

Step 4 − Choose a load balancing method based on your needs. Let’s choose ‘Performance’ here.

Step 5 − Click ‘Create’.

You will see the traffic manager created and displayed in your management portal. Its status will be inactive until it is configured.

Traffic Manager Status
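
If you prefer automation over the portal, the same profile can be created programmatically. Below is a minimal sketch using the Azure SDK for Python, assuming the azure-mgmt-trafficmanager package; the subscription ID, resource group, and names are hypothetical.

```python
# pip install azure-identity azure-mgmt-trafficmanager
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

subscription_id = "<subscription-id>"  # hypothetical
client = TrafficManagerManagementClient(DefaultAzureCredential(), subscription_id)

# Create a profile with the Performance routing method, mirroring the
# portal steps above. Traffic Manager profiles are always 'global'.
profile = client.profiles.create_or_update(
    "my-resource-group",
    "my-traffic-manager",
    {
        "location": "global",
        "traffic_routing_method": "Performance",
        "dns_config": {"relative_name": "myapp", "ttl": 40},
        "monitor_config": {"protocol": "HTTP", "port": 80, "path": "/"},
    },
)
print(profile.profile_status)
```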

Create Endpoints to be Monitored via Traffic Manager

Step 1 − In the management portal, select the Traffic Manager profile you want to work on from the left panel.

Step 2 − Select ‘Endpoints’ from the top horizontal menu as shown in the following image. Then select ‘Add Endpoints’.

Endpoints Traffic Manager

Step 3 − The screen shown in the following image will appear. Choose the service type, and the items under that service will be listed.

Step 4 − Select the service endpoints and proceed.

Endpoints Traffic Manager

Step 5 − The service endpoints will be provisioned.

You can see that in this case, the service ‘tutorialsPointVM’ created in Azure will now be monitored by the traffic manager and its traffic will be redirected according to the specified policy.
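
The same endpoint registration can be scripted, continuing the earlier sketch with the same client object. The names are hypothetical; the target resource ID must point at the Azure service you want monitored.

```python
# Attach an existing Azure service as an endpoint of the profile.
# 'AzureEndpoints' is the endpoint type for Azure-hosted services.
endpoint = client.endpoints.create_or_update(
    "my-resource-group",
    "my-traffic-manager",
    "AzureEndpoints",
    "my-endpoint",
    {
        "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
        "target_resource_id": (
            "/subscriptions/<subscription-id>/resourceGroups/my-resource-group"
            "/providers/Microsoft.Web/sites/tutorialsPointVM"
        ),
        "endpoint_status": "Enabled",
    },
)
print(endpoint.endpoint_monitor_status)
```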

Configure the Policy

Step 1 − Click on ‘Configure’ in the top menu bar as shown in the following image.

Step 2 − Enter the DNS Time to Live (TTL). It is the amount of time for which a client/user will keep using a particular endpoint before querying DNS again. For example, if you enter 40 seconds, the Traffic Manager will be queried every 40 seconds for changes in the traffic management system.


Step 3 − You can change the load balancing method here by choosing a desired method from the dropdown. Here, let’s choose ‘Performance’ as chosen earlier.

Load Balancing Settings

Step 4 − If you scroll down, you will see the ‘Monitoring Settings’ heading. You can choose the protocol, and enter the port number and relative path of the service to be monitored.
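
These configuration settings can also be applied from code, using the same client as in the earlier sketches. The TTL, protocol, port, and probe path below are illustrative values.

```python
# Adjust the DNS TTL and the health-probe settings, mirroring the
# 'Configure' page in the portal.
client.profiles.update(
    "my-resource-group",
    "my-traffic-manager",
    {
        "dns_config": {"ttl": 40},  # clients re-query DNS every 40 seconds
        "monitor_config": {
            "protocol": "HTTPS",
            "port": 443,
            "path": "/health",  # hypothetical relative path to probe
        },
    },
)
```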

TDE customer-managed keys in Azure SQL Database

Azure SQL Database is a Platform-as-a-Service (PaaS) solution that offers a managed database service. Azure SQL Database provides many features such as automatic database tuning, vulnerability assessment, automated patching, performance tuning, and alerts. It provides a 99.995% availability SLA for zone-redundant databases in the Business Critical service tier.

This article explores Transparent Data Encryption (TDE) using the customer-managed key in Azure SQL Database.

Introduction

In an on-premises SQL Server instance, database administrators can enable Transparent Data Encryption (TDE) to secure the data and log files of a database. It helps protect you from malicious threats by encrypting data at rest. You get real-time encryption of the database, transaction log files, and associated backup files without any configuration changes at the application end.

The high-level steps for implementing TDE encryption are as below; a scripted sketch follows the list.

  • Create Master Key
  • Configure a Certificate protected by the master key
  • Create Database Encryption Key
  • Enable Encryption
  • Backup Certificate
  • Restoring a Certificate
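
To make the steps concrete, here is a minimal scripted sketch that runs the corresponding T-SQL over pyodbc against an on-premises SQL Server. The server, database, certificate name, password, and file paths are all hypothetical.

```python
# pip install pyodbc
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # ALTER DATABASE must run outside a transaction
)
cur = conn.cursor()

# 1. Master key and certificate live in the master database.
cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword!1>';")
cur.execute("CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';")

# 2. Database encryption key, then enable encryption on the user database.
cur.execute(
    "USE MyDatabase; "
    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY SERVER CERTIFICATE TDECert;"
)
cur.execute("ALTER DATABASE MyDatabase SET ENCRYPTION ON;")

# 3. Back up the certificate so the database can be restored elsewhere.
cur.execute(
    "USE master; "
    "BACKUP CERTIFICATE TDECert TO FILE = 'C:\\certs\\TDECert.cer' "
    "WITH PRIVATE KEY (FILE = 'C:\\certs\\TDECert.pvk', "
    "ENCRYPTION BY PASSWORD = '<StrongPassword!1>');"
)
```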

In the following image, we can visualize the TDE hierarchy. If you are new to TDE, you can refer to the following articles to get familiar with TDE.

Azure SQL DB TDE using Service Managed Key

If you migrate your on-premises databases to Azure SQL Database, TDE is enabled by default. You can connect to the Azure portal and verify the configuration. It uses an Azure service-managed key, and it is Azure's responsibility to manage the key without any user intervention. Microsoft automatically applies its internal security policy for rotating these certificates, and it protects the certificate root key using its internal secret store.

As shown below, my [labazuresql] database is encrypted using Transparent Data Encryption.
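
You can also verify the encryption state with a query rather than the portal. A small sketch with hypothetical connection details; an encryption_state of 3 means the database is encrypted.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=labazuresql;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)
# encryption_state: 2 = encryption in progress, 3 = encrypted
for row in conn.execute(
    "SELECT DB_NAME(database_id) AS db, encryption_state "
    "FROM sys.dm_database_encryption_keys;"
):
    print(row.db, row.encryption_state)
```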



Learn NoSQL in Azure: Diving Deeper into Azure Cosmos DB

This article is a part of the series – Learn NoSQL in Azure, in which we explore Azure Cosmos DB as a non-relational database system used widely for a variety of applications. Azure Cosmos DB is one of Microsoft’s serverless databases on Azure; it is highly scalable and can be distributed across any of the locations Azure runs in. It is offered as a platform as a service (PaaS), and you can use it to develop databases with very high throughput and very low latency. Using Azure Cosmos DB, customers can replicate their data across multiple locations around the globe, and also across multiple locations within the same region. This makes Cosmos DB a highly available database service, with 99.999% availability for reads and writes in multi-region mode and 99.99% availability in single-region mode.

In this article, we will focus more on how Azure Cosmos DB works behind the scenes and how can you get started with it using the Azure Portal. We will also explore how Cosmos DB is priced and understand the pricing model in detail.

How Azure Cosmos DB works

As already mentioned, Azure Cosmos DB is a multi-model NoSQL database service that can be geographically distributed across multiple Azure regions. This helps customers deploy databases in multiple locations around the globe, which helps reduce read latency when users use the application.

As you can see in the figure above, Azure Cosmos DB is distributed across the globe. Let’s suppose you have a web application that is hosted in India. In that case, the NoSQL database in India will be considered the master database for writes, and all the other databases act as read replicas. Whenever new data is generated, it is written to the database in India first and then synchronized with the other databases.
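
Getting started from code is straightforward with the azure-cosmos Python package. A minimal sketch with a hypothetical account URL and key; writes always go to the write region, and the SDK routes reads to a nearby replica.

```python
# pip install azure-cosmos
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",  # hypothetical account
    credential="<account-key>",
)
db = client.create_database_if_not_exists("appdb")
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
)
container.upsert_item({"id": "1", "customerId": "c42", "total": 99.5})
```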

Consistency Levels

While maintaining data over multiple regions, the most common challenge is the latency with which the data is made available to the other databases. For example, when data is written to the database in India, users from India will be able to see that data sooner than users from the US. This is due to the latency in synchronization between the two regions. To overcome this, customers can choose from a few modes that define how soon they want their data to be made available in the other regions. Azure Cosmos DB offers five levels of consistency, which are as follows:

  • Strong
  • Bounded staleness
  • Session
  • Consistent prefix
  • Eventual

In most common NoSQL databases, there are only two levels – Strong and Eventual. Strong is the most consistent level, while Eventual is the least. As we move from Strong to Eventual, consistency decreases but availability and throughput increase. This is a trade-off that customers need to decide on based on the criticality of their applications. If you want to read about the consistency levels in more detail, the official guide from Microsoft is the easiest to understand. You can refer to it here.
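
From the Python SDK, the consistency level can be requested when the client is created, as in the sketch below; a client may only relax, never strengthen, the account’s default level. The URL and key are again hypothetical.

```python
from azure.cosmos import CosmosClient

# Request Session consistency (one of the five levels listed above)
# for all operations made through this client.
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    credential="<account-key>",
    consistency_level="Session",
)
```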

Azure Cosmos DB Pricing Model

Now that we have some idea about working with the NoSQL database – Azure Cosmos DB on Azure, let us try to understand how the database is priced. In order to work with any cloud-based service, it is essential to have a sound knowledge of how the service is charged; otherwise, you might end up paying far more than you expected.

If you browse to the pricing page of Azure Cosmos DB, you can see that there are two modes in which the database services are billed.

  • Database Operations – Whenever you execute or run queries against your NoSQL database, some resources are used. Azure measures this usage in Request Units (RU). The number of RUs consumed per second is aggregated and billed
  • Consumed Storage – As you start storing data in your database, it takes up space. This storage is billed at the standard SSD-based storage rate across all Azure regions globally

Let’s learn about this in more detail.
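
One practical way to build intuition for Request Units is to inspect the x-ms-request-charge header that Cosmos DB returns on every response. A sketch reusing the container client from the earlier example:

```python
# Run a query, then read back how many Request Units it consumed.
items = list(
    container.query_items(
        query="SELECT * FROM c WHERE c.customerId = 'c42'",
        partition_key="c42",
    )
)
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"{len(items)} items returned, {charge} RU consumed")
```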



Managed Identities in Azure with Terraform

In this article, I’ll explain the concepts around Managed Identities in Azure, the different types of managed identities, and how to assign them to a VM. Then we will show how to authenticate Terraform to Azure using the managed identity. Lastly, we will configure an Application Gateway to use a managed identity in order to access secrets in an Azure Key Vault.

What is a managed identity?

Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication.

Crucially, the management of credentials is handled by the managed identity (hence the word managed), and not by the application or the developer.

Using Managed Identities to Authenticate with Terraform

You can use a system-assigned managed identity to authenticate when using Terraform. The managed identity will need to be assigned RBAC permissions on the subscription, with a role of either Owner, or both Contributor and User Access Administrator.
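
Under the hood, this is a token request against the VM’s local instance metadata endpoint, which Terraform performs for you when ARM_USE_MSI=true is set. The equivalent request is easy to see with the azure-identity Python package, as in this sketch run on a VM that has a system-assigned identity.

```python
# pip install azure-identity
from azure.identity import ManagedIdentityCredential

# Obtain an Azure Resource Manager token without any stored secret;
# the VM's system-assigned identity is used automatically.
credential = ManagedIdentityCredential()
token = credential.get_token("https://management.azure.com/.default")
print("token acquired, expires at", token.expires_on)
```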

Azure Application Gateway and Key Vault with Managed Identity in Terraform

Managed identities can also be created and managed using Terraform and then assigned a role. These can then be tied to a resource, like a VM or Application Gateway.
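
As an illustration of the Key Vault scenario, the sketch below reads a secret as a user-assigned managed identity using the azure-identity and azure-keyvault-secrets packages. The vault URL, secret name, and client ID are hypothetical, and the identity must already have been granted access to the vault.

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Authenticate as a specific user-assigned identity by client ID.
credential = ManagedIdentityCredential(client_id="<user-assigned-client-id>")
secrets = SecretClient("https://myvault.vault.azure.net/", credential)

print(secrets.get_secret("tls-cert-password").value)
```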



How to set up Azure Data Sync between Azure SQL databases and on-premises SQL Server

In this article, you learn how to set up Azure Data Sync services. In addition, you will also learn how to create and set up a data sync group between Azure SQL database and on-premises SQL Server.

In this article, you will see:

  • Overview of Azure SQL Data Sync feature
  • Discuss key components
  • Comparison of Azure SQL Data Sync with other Azure data options
  • Set up Azure SQL Data Sync
  • More…

Azure Data Sync

Azure Data Sync is a synchronization service built on Azure SQL Database. This service synchronizes data across multiple SQL databases. You can set up bi-directional data synchronization, where data ingest and egress happen between the SQL databases: between an Azure SQL database and on-premises SQL Server, and/or between Azure SQL databases within the cloud. At this moment, the only limitation is that it does not support Azure SQL Managed Instance.
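
For orientation, here is a rough sketch of defining a sync group on the hub database with the azure-mgmt-sql Python package. All names and IDs are hypothetical, and the exact operation names can vary between SDK versions, so treat this as a starting point rather than a recipe.

```python
# pip install azure-identity azure-mgmt-sql
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

sql = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a sync group on the hub database; the sync metadata database
# tracks state for the whole group.
sql.sync_groups.begin_create_or_update(
    "my-resource-group",
    "my-sql-server",
    "hubdb",
    "my-sync-group",
    {
        "interval": 300,                         # sync every 5 minutes
        "conflict_resolution_policy": "HubWin",  # hub wins on conflicts
        "sync_database_id": (
            "/subscriptions/<subscription-id>/resourceGroups/my-resource-group"
            "/providers/Microsoft.Sql/servers/my-sql-server/databases/syncmetadb"
        ),
    },
).result()
```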


Advancing Azure business continuity management

How we define a “service” for our BCM program

If you ask three people what a service is, you may get three different answers. At Microsoft, we define a service (business process or technology) as a means of delivering value to customers (first- or third-party) by facilitating outcomes customers want to achieve.

To ensure the highest level of resiliency for each of our “services” we include:

  • People: The people who are responsible for providing the service.
  • Process: The methodology used to provide the service.
  • Technology: The tools used to deliver the service or the technology itself delivered as the value.

Customers see our services as product offerings that are composed of various bundled services. Each individual service is mapped in our inventory and run through the BCM program to ensure that the people, processes, and technologies for those services are resilient to a variety of failures.

Our end-to-end program identifies, prioritizes, maps, and tests every service providing more than “box checking” compliance. Instead, we focus on a broad understanding of how to provide the best service to our customers who demand reliable service offerings for their business.

How the BCM program is managed in practice

Through a sophisticated set of tooling, every service (both internal and external facing) is uniquely mapped and shared with a string of compliance tooling addressing privacy, security, BCM, and more. This ensures that every service contains shareable metadata for other tools regardless of type or criticality.

In the context of this post, records are automatically ported to our BCM management tool. Once there, they are automatically scoped for disaster recovery (DR) requirements that meet certifiable standards and to deliver on our customer promises. These records contain the most familiar elements of a BCM program, including business impact analysis, dependencies, workforce, suppliers, recovery plans, and tests. In addition, we provide insight into potential customer impacts, detection capabilities, and willingness to failover.

Testing recoverability

No amount of tooling, policies, or documents can provide the same level of confidence in service recovery and sustainability as comprehensive testing. Azure services test at various levels ranging from individual unit tests, all the way to complete “region down” scenarios. Every service must show proof of testing and that their recovery meets their stated goals—both internally and what we guarantee to our end customers in the Service Level Agreements (SLAs). Tabletop testing, in which simulated emergencies are merely discussed, is not considered acceptable or compliant for our program.

Our most robust integrated testing takes place in our “Canary” environment that consists of two distinct production datacenter regions: one in Eastern United States and the other in Central United States.

On a regular basis, we test service recovery with a complete zone or region shutdown (simulating a major production outage or catastrophic loss), forcing all services to invoke their recovery plans. These tests not only verify service recoverability, but also test our incident response team’s processes for managing major incidents. For Availability Zones, we test and verify the seamless continuation of service availability in the face of an entire zone loss. These are end-to-end tests that include detection, response, coordination, and recovery.

All processes from detection to response and action are performed as if it were a real service-impacting event. Service responders are the normal on-call engineers. We also test synthetic customer-responsible functions, such as virtual machine (VM) failover to paired regions, ensuring customer workloads can operate in large-scale failure scenarios.
