Tamale Moses

Role-based access control with Azure AD now in preview

The public preview of role-based access control (RBAC) for the Azure Cosmos DB Core (SQL) API was announced today at Microsoft Ignite. With RBAC in Azure Cosmos DB, you can now:

  • Authenticate your data requests with an Azure Active Directory (AD) identity.
  • Authorize your data requests with a fine-grained, role-based permission model.
  • Audit your diagnostic logs to retrieve the Azure AD identity used when accessing your data.

What is RBAC?

The concepts exposed by the Azure Cosmos DB RBAC should look very familiar to anyone who has used Azure RBAC before.

  • Our new permission model exposes a set of actions that map to database operations (like writing a document or executing a query).
  • You can create role definitions by assembling a list of actions that a role should allow.
  • You associate your role definitions with Azure AD identities through role assignments. Roles can be assigned at the Azure Cosmos DB account, database or container levels.

Example of role definition and assignment

The granularity of the permission model lets you control very precisely what a client is allowed to do. Some examples of custom role definitions:

  • A read-only role that can only fetch documents by their ID, but not run queries or read from the change feed.
  • A role that can only insert new documents to an Azure Cosmos DB container, but not read, replace or delete documents.

Find the complete list of available actions.

Managing your roles

To create your role definitions and assignments, you can use new PowerShell cmdlets or Azure CLI commands. Here is a PowerShell example, showing how to create a read-only role and assign it to an Azure AD identity:

$resourceGroupName = ""   # placeholder: your resource group name
$accountName = ""         # placeholder: your Azure Cosmos DB account name

New-AzCosmosDBSqlRoleDefinition -AccountName $accountName `
    -ResourceGroupName $resourceGroupName `
    -Type CustomRole -RoleName MyReadOnlyRole `
    -DataAction @(
        'Microsoft.DocumentDB/databaseAccounts/readMetadata',
        'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read',
        'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery',
        'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed') `
    -AssignableScope "/"

$principalId = ""         # placeholder: object ID of the Azure AD identity
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName `
    -ResourceGroupName $resourceGroupName `
    -RoleDefinitionName MyReadOnlyRole `
    -Scope $accountName `
    -PrincipalId $principalId


No more primary keys!

Once your role definitions and assignments have been created, you can start using an Azure AD identity instead of your Azure Cosmos DB account’s primary key. When initializing the SDK, just replace the primary key with a TokenCredential instance that will resolve to the desired identity:

// Placeholders: Azure AD tenant ID, client (app) ID, and client secret.
var tokenCredential = new ClientSecretCredential("", "", "");

// Placeholder: your Azure Cosmos DB account endpoint.
var client = new CosmosClient("", tokenCredential);


This is currently supported in our .NET and Java SDKs, with broader support coming soon.
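Where the application runs on Azure compute with a managed identity, a DefaultAzureCredential from the Azure.Identity package can stand in for the client secret credential above. A minimal sketch, with a placeholder account endpoint:

using Azure.Identity;
using Microsoft.Azure.Cosmos;

// DefaultAzureCredential walks a chain of sources: environment variables,
// managed identity, Visual Studio and Azure CLI sign-ins, and more.
var credential = new DefaultAzureCredential();

// Placeholder endpoint; replace with your Azure Cosmos DB account URI.
var client = new CosmosClient("https://<your-account>.documents.azure.com:443/", credential);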

#announcements #security

Tyrique Littel

Using encrypted access tokens in Azure with Microsoft.Identity.Web

This post shows how to use encrypted access tokens with Azure AD app registrations using Microsoft.Identity.Web. With encrypted access tokens, only applications with access to the private key can decrypt the tokens. This prevents the token payload from being read by tools such as https://jwt.ms or https://jwt.io, keeping the payload claims private.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate

Posts in this series

Setup

Two applications were created to demonstrate AAD token encryption. The first is an ASP.NET Core API secured with Microsoft.Identity.Web; this API consumes the encrypted token. The second is a UI app that signs in to AAD and calls the API using the API's access_as_user scope. The certificate used for encryption and decryption was created in Azure Key Vault, and the public key (.cer file) was downloaded; this public key is used in the Azure App Registration for the token encryption.

Setting up the Azure App Registration

The Azure App registration for the Web API is set up to use token encryption. The certificate created in Azure Key Vault can be added to the **keyCredentials** array in the Azure App Registration manifest file. The **customKeyIdentifier** is the certificate thumbprint and the **usage** is set to Encrypt. The **value** property contains the base64-encoded .cer file (the public key) which was downloaded from your Key Vault.

"keyCredentials": [
  {
    "customKeyIdentifier": "E1454F331F3DBF52523AAF0913DB521849E05AD3",
    "endDate": "2021-10-20T12:19:52Z",
    "keyId": "53095330-1680-4a8d-bf0d-8d0d042fe88b",
    "startDate": "2020-10-20T12:09:52Z",
    "type": "AsymmetricX509Cert",
    "usage": "Encrypt",
    "value": "--your base 64 .cer , ie public key --",
    "displayName": "CN=myTokenEncyptionCert"
  }
],

The **tokenEncryptionKeyId** property in the Azure App Registration manifest is used to define the certificate which will be used for token encryption. This is set to the **keyId** of the certificate definition in the **keyCredentials** array.

"tokenEncryptionKeyId": "53095330-1680-4a8d-bf0d-8d0d042fe88b"

Note: If you upload the certificate to the Azure App registration using the portal, the usage will be set to **Verify** and it cannot be used for token encryption.

Configuration of the API application

The ASP.NET Core application uses AddMicrosoftIdentityWebApiAuthentication with the default AzureAD configuration. This authorizes the API requests.

public void ConfigureServices(IServiceCollection services)
{
    JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
    IdentityModelEventSource.ShowPII = true;

    services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

    services.AddControllers(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            // .RequireClaim("email") // disabled this to test with users that have no email (no license added)
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    });
}

The app settings define the TokenDecryptionCertificates entry to use the Key Vault certificate for token decryption. This is the same certificate that was used in the Azure App Registration, where the manifest definition used its public key.
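The excerpt ends before the JSON configuration itself, so as an illustration only, here is a hedged sketch of how the same decryption certificate could instead be declared in code, assuming Microsoft.Identity.Web's CertificateDescription.FromKeyVault helper; the Key Vault URL is a placeholder:

using System.Collections.Generic;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddMicrosoftIdentityWebApi(
            jwtOptions => Configuration.Bind("AzureAd", jwtOptions),
            identityOptions =>
            {
                Configuration.Bind("AzureAd", identityOptions);

                // Placeholder Key Vault URL; the certificate's private key
                // fetched from Key Vault decrypts the incoming access tokens.
                identityOptions.TokenDecryptionCertificates = new List<CertificateDescription>
                {
                    CertificateDescription.FromKeyVault(
                        "https://<your-key-vault>.vault.azure.net",
                        "myTokenEncyptionCert")
                };
            });
}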

#asp.net core #.net core #azure #azure key vault #access token #azure ad #encryption #microsoft.identity.web

Eric Bukenya

Learn NoSQL in Azure: Diving Deeper into Azure Cosmos DB

This article is a part of the series – Learn NoSQL in Azure, where we explore Azure Cosmos DB as a part of the non-relational database systems used widely for a variety of applications. Azure Cosmos DB is one of Microsoft's serverless databases on Azure; it is highly scalable and distributed across the Azure regions. It is offered as a platform as a service (PaaS), and you can develop databases that have very high throughput and very low latency. Using Azure Cosmos DB, customers can replicate their data across multiple locations around the globe and also across multiple locations within the same region. This makes Cosmos DB a highly available database service, with 99.999% availability for reads and writes in multi-region mode and 99.99% availability in single-region mode.

In this article, we will focus more on how Azure Cosmos DB works behind the scenes and how you can get started with it using the Azure Portal. We will also explore how Cosmos DB is priced and understand the pricing model in detail.

How Azure Cosmos DB works

As already mentioned, Azure Cosmos DB is a multi-model NoSQL database service that is geographically distributed across multiple Azure locations. This allows customers to deploy their databases in multiple locations around the globe, which reduces read latency for users of the application.

Azure Cosmos DB is distributed across the globe. Let's suppose you have a web application that is hosted in India. In that case, the NoSQL database in India will be considered the master database for writes, and all the other databases can be considered read replicas. Whenever new data is generated, it is written to the database in India first and then synchronized with the other databases.
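As an illustration with the .NET SDK (a minimal sketch; the endpoint and key are placeholders), a client can declare its preferred region so that reads are served from the nearest replica:

using Microsoft.Azure.Cosmos;

// The SDK routes reads to the nearest available region configured for
// the account.
var client = new CosmosClient(
    "https://<your-account>.documents.azure.com:443/",  // placeholder endpoint
    "<your-account-key>",                               // placeholder key
    new CosmosClientOptions
    {
        // Prefer the region closest to this application instance.
        ApplicationRegion = Regions.CentralIndia
    });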

Consistency Levels

While maintaining data over multiple regions, the most common challenge is the latency with which data becomes available in the other regions. For example, when data is written to the database in India, users in India will be able to see that data sooner than users in the US, due to the latency in synchronization between the two regions. To manage this, customers can choose from a few modes that define how soon they want their data to be made available in the other regions. Azure Cosmos DB offers five levels of consistency, which are as follows:

  • Strong
  • Bounded staleness
  • Session
  • Consistent prefix
  • Eventual

In most common NoSQL databases, there are only two levels – Strong and Eventual. Strong is the most consistent level, while Eventual is the least. However, as we move from Strong to Eventual, consistency decreases but availability and throughput increase. This is a trade-off that customers need to decide on based on the criticality of their applications. If you want to read about the consistency levels in more detail, the official guide from Microsoft is the easiest to understand. You can refer to it here.
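As a concrete example, the .NET SDK lets a client opt into a weaker consistency level than the account default; a minimal sketch with placeholder credentials:

using Microsoft.Azure.Cosmos;

// A client may relax, but never strengthen, the account's default
// consistency; endpoint and key below are placeholders.
var client = new CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    "<your-account-key>",
    new CosmosClientOptions
    {
        ConsistencyLevel = ConsistencyLevel.Eventual
    });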

Azure Cosmos DB Pricing Model

Now that we have some idea about working with the NoSQL database – Azure Cosmos DB on Azure, let us try to understand how the database is priced. In order to work with any cloud-based service, it is essential that you have a sound knowledge of how the service is charged; otherwise, you might end up paying much more than you expected.

If you browse to the pricing page of Azure Cosmos DB, you can see that there are two modes in which the database services are billed.

  • Database Operations – Whenever you execute or run queries against your NoSQL database, some resources are used. Azure measures this usage in terms of Request Units (RUs). The number of RUs consumed per second is aggregated and billed.
  • Consumed Storage – As you start storing data in your database, it takes up some space. This storage is billed per the standard SSD-based storage rates across all Azure locations globally.

Let’s learn about this in more detail.
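To make Request Units tangible, every response from the .NET SDK exposes the RU charge of the operation; a minimal sketch, assuming an existing Container instance named container:

using System;
using Microsoft.Azure.Cosmos;

// Read one item and inspect its RU charge; the id and partition key
// values are placeholders.
ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
    "item-id",
    new PartitionKey("partition-key-value"));

// RequestCharge reports the RUs consumed by this single operation.
Console.WriteLine($"Request charge: {response.RequestCharge} RUs");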

#azure #azure cosmos db #nosql #nosql in azure

Ruthie Bugala

How to set up Azure Data Sync between Azure SQL databases and on-premises SQL Server

In this article, you will learn how to set up the Azure Data Sync service. You will also learn how to create and set up a data sync group between an Azure SQL database and an on-premises SQL Server.

In this article, you will see:

  • Overview of the Azure SQL Data Sync feature
  • Discussion of its key components
  • Comparison between Azure SQL Data Sync and the other Azure data options
  • Setup of Azure SQL Data Sync
  • More…

Azure Data Sync

Azure Data Sync is a synchronization service set up on an Azure SQL Database. It synchronizes data across multiple SQL databases. You can set up bi-directional data synchronization, where data flows in both directions between the SQL databases: between an Azure SQL database and an on-premises SQL Server, and/or between Azure SQL databases in the cloud. At this moment, the one limitation is that it does not support Azure SQL Managed Instance.

#azure #sql azure #azure sql #azure data sync #sql server

Cody Osinski

Hybrid Identity Solution with Azure AD and Azure AD B2C

I am going to retire the current stack of technologies used in this blog in favor of more recent technologies, mainly because I currently author this blog using Windows Live Writer, which is outdated and has lost the love of the community. I am also taking this opportunity to create a new technology stack that is much more modular and allows me to focus only on writing. I am also learning cool new stuff which might be useful to all of us. I am super happy with a few components that I currently use, and I will be reusing the things that are working well. The entire source code of this blog is available in my GitHub repository, from where you can happily copy and paste stuff. You can also read about how I built the existing blog framework (v1) here. Of course, I will write about how I chose components for my new blogging platform and how you can set one up yourself, so stay tuned (even better, subscribe).

September 8, 2016: This activity is now complete and you are reading this post on my new blogging platform.

Azure Access Control Service is dead (well, almost). Azure AD B2C is out, up and running, and supports many of the common social accounts and even creating new credentials. Both Azure AD and Azure AD B2C use the OAuth 2.0 mechanism to authorize access to users' resources. At this point some of you may want to understand…

What is OAuth 2.0?

If you like reading loads of text, here is what Microsoft's documentation recommends that you read. For the rest of us, including me, we will use the OAuth 2.0 playground to understand what OAuth is. For this activity, you will need an account with Google and an interest in YouTube. We will use an OAuth-based flow to fetch the content that is displayed on your YouTube homepage.

There are four parties in the OAuth flow (a sketch of the token exchange follows the list), namely:

  1. Resource Owner: In our experiment, this is you. The Resource Owner (or user) grants permission to an application to access his\her content (YouTube feed data). The access of the application is limited to the scope of the authorization (e.g. read only, write only, read-write, etc.).
  2. Authorization Server: This server stores the identity information of the Resource Owner, which in our case is Google's identity server. It accepts user credentials and passes two tokens to the application: the Access Token, which the application can use to access the Resource Owner's content, and the Refresh Token, which the application can use to get a fresh Access Token before or when the Access Token expires. The Refresh Token may have a lifetime after which it becomes invalid; once that happens, the user would be required to authenticate himself\herself again.
  3. Client/Application: The Client is the application that wants to access the Resource Owner's data. Before it may do so, it must be authorized by the Resource Owner, and the authorization must be validated by the Authorization Server.
  4. Resource Server: This is the application that trusts the Authorization Server and will honor requests that are sent with Access Tokens by the application. This in our case is YouTube. The Resource Owner can limit the authorization granted to the Client by specifying the Scope. You must have seen the application of Scope in Facebook's ability for users to authorize a variety of different functions to the client ("access basic information", "post on wall", etc.).
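To make the flow concrete, below is a minimal, hedged sketch of the second leg of the authorization code flow: exchanging the authorization code for tokens at Google's token endpoint. The code, client ID, client secret, and redirect URI are placeholders you would obtain by registering an application with Google.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class OAuthTokenExchange
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Exchange the authorization code for an access token and a refresh
        // token. All values below are placeholders.
        var response = await http.PostAsync(
            "https://oauth2.googleapis.com/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["code"] = "<authorization-code-from-consent-step>",
                ["redirect_uri"] = "https://localhost/callback",
                ["client_id"] = "<your-client-id>",
                ["client_secret"] = "<your-client-secret>",
            }));

        // On success, the JSON body contains access_token, refresh_token,
        // and expires_in.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}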

#azure ad b2c #azure ad

Ruthie Bugala

How to access an Azure SQL database from Azure Data Lake Analytics

In this article, we will see how to access data in an Azure SQL database from Azure Data Lake Analytics.

Introduction

In the previous part of the Azure Data Lake Analytics article series, we learned how to process multiple file sets stored in the Azure Data Lake Storage account using U-SQL. Often, data is stored in structured as well as unstructured formats, and one needs to access data from structured stores in addition to data stored in repositories like the Azure Data Lake Storage account. We will go over the process to access data from an Azure SQL database using Azure Data Lake Analytics.

Initial Setup

We need a few prerequisites in place before we can start our exercise. In the previous part of this article series, we set up an Azure Data Lake Analytics account and created a database on it. We also need an Azure SQL database with some sample data in it; while creating an Azure SQL database, the portal offers the option to create it with sample data, and in this exercise we are going to use such a database. In this setup, azure-sql-server-001 is the name of the database as well as of the Azure SQL Server instance, and SalesLT.Address is the name of the table that we intend to access from Azure Data Lake Analytics. Once this setup is in place, we can proceed with the next steps.

#azure #sql azure #azure sql #azure data lake analytics