Deno in Azure Pipelines

As Deno is getting more and more popular every day, you might want to start using it in your Azure Pipelines. I have created and published the Deno Tool Azure DevOps extension, which lets you acquire Deno and run JavaScript and TypeScript scripts.

Download Deno
In order to get Deno onto your Azure DevOps agent machine, simply add the DenoDownload task to your pipeline - it downloads the platform-specific binary in the given version, caches it for future use and adds it to the PATH environment variable.

- task: DenoDownload@1
  inputs:
    version: 1.0.3

Once this is done you can run Deno scripts in a shell step or use the DenoRun task. With this dedicated run task you can execute scripts from a file - either from the file system or from a URL - or embed your script directly in the YAML file. As you might need some additional runtime customization, I have also added options to set Deno permissions, script arguments and the script cwd.
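
Since DenoDownload adds the binary to the PATH, a plain script step is enough for the simplest cases. A minimal sketch, using the standard welcome example:

- script: deno run https://deno.land/std/examples/welcome.ts
  displayName: 'Run a Deno script from a shell step'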

Run Deno script from remote location

- task: DenoRun@1
  inputs:
    filePath: 'https://deno.land/std/examples/welcome.ts'

Run Deno script from file system

- task: DenoRun@1
  inputs:
    filePath: $(System.DefaultWorkingDirectory)/welcome.ts
    cwd: '/tmp'

Run Deno script Inline

- task: DenoRun@1
  inputs:
    targetType: 'inline'
    script: |
      import { existsSync } from 'https://deno.land/std/fs/mod.ts'
      console.log(existsSync('/tmp'))
    permissions: |
      --unstable
      --allow-read
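
Script arguments can be supplied as well. The examples above don't show them, so the input name used in this sketch is an assumption - check the extension's documentation for the exact spelling:

- task: DenoRun@1
  inputs:
    filePath: $(System.DefaultWorkingDirectory)/welcome.ts
    permissions: |
      --allow-read
    # 'arguments' is assumed here; verify the input name against the extension docs
    arguments: '--verbose /tmp'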

I hope you will like the extension and that it helps you take your DevOps code to the next level.

#deno #azure #devops #javascript #typescript

Eric Bukenya

Learn NoSQL in Azure: Diving Deeper into Azure Cosmos DB

This article is a part of the series – Learn NoSQL in Azure – where we explore Azure Cosmos DB as part of the non-relational database systems used widely for a variety of applications. Azure Cosmos DB is one of Microsoft's serverless databases on Azure; it is highly scalable and distributed across all the locations in which Azure runs. It is offered as a platform as a service (PaaS), and you can develop databases with very high throughput and very low latency. Using Azure Cosmos DB, customers can replicate their data across multiple locations around the globe and also across multiple locations within the same region. This makes Cosmos DB a highly available database service, with almost 99.999% availability for reads and writes in multi-region mode and almost 99.99% availability in single-region mode.

In this article, we will focus more on how Azure Cosmos DB works behind the scenes and how you can get started with it using the Azure Portal. We will also explore how Cosmos DB is priced and understand the pricing model in detail.

How Azure Cosmos DB works

As already mentioned, Azure Cosmos DB is a multi-model NoSQL database service that is geographically distributed across multiple Azure locations. This allows customers to deploy their databases across multiple locations around the globe, which helps reduce read latency for the users of the application.

Azure Cosmos DB is distributed across the globe. Let's suppose you have a web application that is hosted in India. In that case, the NoSQL database in India will be considered the master database for writes, and all the other databases can be considered read replicas. Whenever new data is generated, it is written to the database in India first and then synchronized with the other databases.

Consistency Levels

While maintaining data over multiple regions, the most common challenge is the latency with which the data becomes available in the other regions. For example, when data is written to the database in India, users in India will see that data sooner than users in the US. This is due to the latency in synchronization between the two regions. To address this, there are a few modes that customers can choose from to define how soon they want their data to be made available in the other regions. Azure Cosmos DB offers five levels of consistency, which are as follows:

  • Strong
  • Bounded staleness
  • Session
  • Consistent prefix
  • Eventual

In most common NoSQL databases, there are only two levels – Strong and Eventual – with Strong being the most consistent level and Eventual the least. However, as we move from Strong to Eventual, consistency decreases while availability and throughput increase. This is a trade-off that customers need to weigh based on the criticality of their applications. If you want to read about the consistency levels in more detail, the official guide from Microsoft is the easiest to understand. You can refer to it here.
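
To make the multi-region setup and the consistency choice concrete, here is a hedged sketch of provisioning such an account from a pipeline with the Azure CLI task; the account, resource group and service connection names are placeholders:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Create a Cosmos DB account replicated to two regions with Session consistency
      az cosmosdb create \
        --name my-cosmos-account \
        --resource-group my-resource-group \
        --default-consistency-level Session \
        --locations regionName=centralindia failoverPriority=0 isZoneRedundant=False \
        --locations regionName=eastus failoverPriority=1 isZoneRedundant=False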

Azure Cosmos DB Pricing Model

Now that we have some idea of how to work with the NoSQL database – Azure Cosmos DB – on Azure, let us try to understand how the database is priced. In order to work with any cloud-based service, it is essential that you have a sound knowledge of how the service is charged; otherwise, you might end up paying much more than you expected.

If you browse to the pricing page of Azure Cosmos DB, you can see that there are two modes in which the database services are billed.

  • Database Operations – Whenever you execute or run queries against your NoSQL database, some resources are used. Azure measures this usage in Request Units (RU); for example, a point read of a 1 KB item costs roughly 1 RU. The amount of RU consumed per second is aggregated and billed
  • Consumed Storage – As you start storing data in your database, it takes up space. This storage is billed at the standard SSD-based storage rate in any Azure location globally

Let’s learn about this in more detail.

#azure #azure cosmos db #nosql #nosql in azure

Noah Rowe

Azure DevOps Pipelines: Multi-Stage Pipelines

The last couple of posts have been dealing with releases managed from the Releases area under Azure Pipelines. This week we are going to take what we were doing in that separate area of Azure DevOps and instead make it part of the YAML that currently builds our application. If you need some background on how the project got to this point, check out the following posts.

Getting Started with Azure DevOps

Pipeline Creation in Azure DevOps

Azure DevOps Publish Artifacts for ASP.NET Core

Azure DevOps Pipelines: Multiple Jobs in YAML

Azure DevOps Pipelines: Reusable YAML

Azure DevOps Pipelines: Use YAML Across Repos

Azure DevOps Pipelines: Conditionals in YAML

Azure DevOps Pipelines: Naming and Tagging

Azure DevOps Pipelines: Manual Tagging

Azure DevOps Pipelines: Depends On with Conditionals in YAML

Azure DevOps Pipelines: PowerShell Task

Azure DevOps Releases: Auto Create New Release After Pipeline Build

Azure DevOps Releases: Auto Create Release with Pull Requests

Recap

The current setup we have uses a YAML-based Azure Pipeline to build a couple of ASP.NET Core web applications. Then on the Release side, we have basically a dummy release that doesn't actually do anything, but it serves as a demo of how to configure a continuous-deployment-style release. The following is the current YAML for our pipeline, for reference.

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

resources:      
  repositories: 
  - repository: Shared
    name: Playground/Shared
    type: git 
    ref: master #branch name

trigger: none

variables:
  buildConfiguration: 'Release'

jobs:
- job: WebApp1
  displayName: 'Build WebApp1'
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'Get-ChildItem -Path Env:\'

  - template: buildCoreWebProject.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1

- job: WebApp2
  displayName: 'Build WebApp2'
  condition: and(succeeded(), eq(variables['BuildWebApp2'], 'true'))
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - template: build.yml
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp2.csproj
      artifactName: WebApp2

- job: DependentJob
  displayName: 'Build Dependent Job'
  pool:
    vmImage: 'ubuntu-latest'

  dependsOn:
  - WebApp1
  - WebApp2

  steps:
  - template: buildCoreWebProject.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1Again

- job: TagSources
  displayName: 'Tag Sources'
  pool:
    vmImage: 'ubuntu-latest'

  dependsOn:
  - WebApp1
  - WebApp2
  - DependentJob
  condition: |
    and
    (
      eq(dependencies.WebApp1.result, 'Succeeded'),
      in(dependencies.WebApp2.result, 'Succeeded', 'Skipped'),
      in(dependencies.DependentJob.result, 'Succeeded', 'Skipped')
    )

  steps:
  - checkout: self
    persistCredentials: true
    clean: true
    fetchDepth: 1

  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        $env:GIT_REDIRECT_STDERR = '2>&1'
        $tag = "manual_$(Build.BuildNumber)".replace(' ', '_')
        git tag $tag
        Write-Host "Successfully created tag $tag"

        git push --tags
        Write-Host "Successfully pushed tag $tag"

      failOnStderr: false
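
To fold the release side into this same file, the jobs above can be grouped into stages, with a deployment stage added after the build. A minimal, hypothetical sketch of that layout (stage, job and environment names are placeholders, not the finished pipeline from this series):

stages:
- stage: Build
  jobs:
  - job: WebApp1
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: buildCoreWebProject.yml@Shared
      parameters:
        buildConFiguration: $(buildConfiguration)
        project: WebApp1.csproj
        artifactName: WebApp1

- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: DeployWebApp1
    environment: 'Staging'
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deployment steps for WebApp1 go here"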

#azure-pipelines #azure #azure-devops #devops

Ruthie Bugala

How to set up Azure Data Sync between Azure SQL databases and on-premises SQL Server

In this article, you will learn how to set up the Azure Data Sync service. You will also learn how to create and configure a data sync group between an Azure SQL database and an on-premises SQL Server.

In this article, you will see:

  • Overview of the Azure SQL Data Sync feature
  • Discussion of key components
  • Comparison of Azure SQL Data Sync with other Azure data options
  • Setting up Azure SQL Data Sync
  • More…

Azure Data Sync

Azure Data Sync is a synchronization service built on top of Azure SQL Database. This service synchronizes data across multiple SQL databases. You can set up bi-directional data synchronization, where data ingest and egress happen between the SQL databases; this can be between an Azure SQL database and an on-premises SQL Server and/or between Azure SQL databases within the cloud. At this moment, the only limitation is that it does not support Azure SQL Managed Instance.

#azure #sql azure #azure sql #azure data sync #sql server

How to Debug a Pipeline in Azure Data Factory

In the previous article, How to schedule Azure Data Factory pipeline executions using Triggers, we discussed the three main types of Azure Data Factory triggers, how to configure them and then use them to schedule a pipeline.

In this article, we will see how to use the Azure Data Factory debug feature to test the pipeline activities during the development stage.

Why debug

When developing complex and multi-stage Azure Data Factory pipelines, it becomes harder to test the functionality and the performance of the pipeline as one block. Instead, it is highly recommended to test such pipelines when you develop each stage, so that you can make sure that this stage is working as expected, returning the correct result with the best performance, before publishing the changes to the data factory.

Take into consideration that debugging any pipeline activity will execute that activity and perform the action configured in it. For example, if this activity is a copy activity from an Azure Storage Account to an Azure SQL Database, the data will be copied, but the only difference is that the pipeline execution logs in the debug mode will be written to the pipeline output tab only and will not be shown under the pipeline runs in the Monitor page.

#azure #sql azure #azure data factory #pipeline

Automating deployments to on premise servers with Azure DevOps

As someone who has spent most of their (very short) career doing one-click cloud resource deployments, I was shocked when I jumped onto a legacy project and realised the complexity of the deployment process to staging and production. Using a traditional .NET Framework application stack, the deployment process consisted of the following steps:

  1. Set the configuration target in Visual Studio to release
  2. Build the project
  3. Copy the .dlls using a USB to a client laptop which was configured for VPN access
  4. Copy the .dlls via RDP to the target server
  5. Go into IIS Manager and point the file path to the new version of the application

As you can see and may have experienced, this is a long, slow and error-prone process which can often take over an hour, given the likelihood of one of those steps not working correctly. For me it was also a real pain point having to use the client laptop, as it had 3 different passwords to get in, none of which I set or could remember. It also meant that if we needed to do a deployment I had to be in the office to physically use the laptop — no working from home that day.

My first step was to automate the build process. If we could get Azure Pipelines to at least build the project, I could download the files and copy them over manually. There are plenty of guides online on how to set this up, and the end result was a .zip artifact containing all the files required for the project. This took away a common hotspot for errors, which was building locally on my machine. It also meant that, regardless of who wrote the code, the build process was always identical.
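
For reference, that kind of build can be described with a fairly standard YAML pipeline. This is a hedged sketch based on the common ASP.NET build tasks (solution name and paths are placeholders), ending with the packaged .zip published as a build artifact:

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'

# Build the solution and package the web app as a .zip in the artifact staging directory
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'
    platform: 'Any CPU'
    configuration: '$(buildConfiguration)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'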

The second step was to set up a release pipeline. Within Azure Pipelines, what we wanted to do was create a deployment group, and then register the server we want to deploy to as a target within that deployment group. This will allow us to deploy directly to an on-premises server. So, how do we do this?

Requirements:

  • PowerShell 3.0 or higher. On our Windows Server 2003 box, we needed to upgrade from PowerShell 2.0. This is a simple download, install and restart.
  • .NET Framework x64 4.5 or higher

Steps:

  1. Navigate to Deployment Groups under Pipelines in Azure DevOps:

Deployment groups menu item in Azure DevOps > Pipelines

2. Create a new deployment group. The idea is that you can have several servers in the same group and deploy the code to all of them simultaneously (for example, for load balancing reasons). In my case I only have one target in my deployment group, so the idea of a group is a bit redundant.
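
Deployment groups are configured in the classic Releases experience, but if you later want the deployment described in YAML as well, the rough equivalent is a deployment job targeting an environment with virtual machine resources. A hypothetical sketch (environment name and deployment steps are placeholders):

- deployment: DeployToOnPrem
  displayName: 'Deploy to on-premises server'
  environment:
    name: OnPremWeb              # environment containing the registered server
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - task: PowerShell@2
          inputs:
            targetType: 'inline'
            script: 'Write-Host "Copy files and repoint IIS here"'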

#azure #azure-pipelines #deployment-pipelines #windows-server #azure-devops #devops