Juanita Apio

Building VM images in Azure using Packer and Deploying with Terraform and an Azure DevOps pipeline

Deploying a VM in Azure is all well and good, but what about the base configuration of the VM itself? That’s where Packer comes in, enabling a custom image to be created and configured with server roles, applications, packages, etc. as required. That custom image can then be used to build your virtual machines. Creating it with Packer brings the benefits of infrastructure as code: the image definition can be managed and tracked in source control.

In this article, I’ll show how to create a Packer image that simply has the AD DS and DNS server roles added to a base Windows Server 2019 Datacenter image from the Azure Marketplace. This image is then used to deploy a VM with Terraform.

Creating the Packer Image

Create a new .json file called ad.json.

Add a variables block as follows to the top of the file. This defines which variables we will be passing in to build the Packer image. You can leave them blank as shown below, as the values for the variables will be stored in the Azure DevOps pipeline variables section.

{
  "variables": {
    "client_id": "",
    "client_secret": "",
    "tenant_id": "",
    "subscription_id": "",
    "managed_image_name": "",
    "managed_image_resource_group_name": "",
    "WorkingDirectory": "{{env `System_DefaultWorkingDirectory`}}",
    "publisher": "",
    "offer": "",
    "sku": "",
    "location": "",
    "vm_size": ""
  },

Next add a builders section to the template.

"builders": [{
  "type": "azure-arm",
  "client_id": "{{user `client_id`}}",
  "client_secret": "{{user `client_secret`}}",
  "subscription_id": "{{user `subscription_id`}}",
  "tenant_id": "{{user `tenant_id`}}",
  "managed_image_resource_group_name": "{{user    `managed_image_resource_group_name`}}",
  "managed_image_name": "{{user `managed_image_name`}}",
  "os_type": "Windows",
  "image_publisher": "{{user `publisher`}}",
  "image_offer": "{{user `offer`}}",
  "image_sku": "{{user `sku`}}",
  "communicator": "winrm",
  "winrm_use_ssl": "true",
  "winrm_insecure": "true",
  "winrm_timeout": "3m",
  "winrm_username": "packer",
  "location": "{{user `location`}}",
  "vm_size": "{{user `vm_size`}}",
  "async_resourcegroup_delete": true
}],

You can define the values for these variables directly in the builders section, or pull them in from the Azure DevOps pipeline variables as required. Anything written as “{{user `x`}}” expects a variable named x to be defined in the variables block.

The “async_resourcegroup_delete”: true option lets the pipeline finish more quickly. When the build completes, Packer deletes the temporary resource group it created while building the image; this deletion can take a long time, so setting the option means the pipeline is not held up waiting for it.

Now add the provisioners block to the config file. This defines the steps to run on top of the base image. Here I am installing the Windows features, restarting the machine, and then generalizing the image ready for use.

"provisioners": [
 {
   "type": "powershell",
   "inline": [
   "Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools",
   "Install-WindowsFeature -Name DNS -IncludeManagementTools"
   ]
 },
 {
   "type": "windows-restart",
   "restart_check_command": "powershell -command \"& {Write-Output 'Machine restarted.'}\""
 },
 {
   "type": "powershell",
   "inline": [
   "if( Test-Path $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml ){ rm $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml -Force}",
   "& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit /mode:vm",
   "while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; Write-Output $imageState.ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Start-Sleep -s 10 } else { break } }"
   ]
 }
 ]
}

Upload the complete file to your Azure DevOps repo ready for use!
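Once the template is in the repo, the pipeline that runs it can look roughly like the sketch below: a single job that calls packer build on ad.json and maps the Azure DevOps pipeline variables to the Packer variables defined earlier. This is an illustrative sketch rather than a prescribed layout; the publisher, offer, sku and VM size values are assumptions, and client_secret should be held as a secret pipeline variable.

# Minimal sketch - assumes pipeline variables with the same names as the Packer
# variables, and that Packer is available on the hosted agent image.
trigger: none

pool:
  vmImage: 'windows-latest'

steps:
- script: >
    packer build
    -var "client_id=$(client_id)"
    -var "client_secret=$(client_secret)"
    -var "tenant_id=$(tenant_id)"
    -var "subscription_id=$(subscription_id)"
    -var "managed_image_name=$(managed_image_name)"
    -var "managed_image_resource_group_name=$(managed_image_resource_group_name)"
    -var "publisher=MicrosoftWindowsServer"
    -var "offer=WindowsServer"
    -var "sku=2019-Datacenter"
    -var "location=$(location)"
    -var "vm_size=Standard_D2s_v3"
    ad.json
  displayName: 'Build Packer image'

Azure Pipelines also ships a PackerBuild task that can be used instead of a raw script step, if you prefer.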

#azure #terraform #azure-devops


Automating deployments to on premise servers with Azure DevOps

As someone who has spent most of their (very short) career doing one-click cloud resource deployments, I was shocked when I jumped onto a legacy project and realised the complexity of the deployment process to staging and production. For a traditional .NET Framework application stack, the deployment process consisted of the following steps:

  1. Set the configuration target in Visual Studio to release
  2. Build the project
  3. Copy the .dlls using a USB to a client laptop which was configured for VPN access
  4. Copy the .dlls via RDP to the target server
  5. Go into IIS Manager and point the file path to the new version of the application

As you can see and may have experienced, this is a long, slow and error-prone process which can often take over an hour, given the likelihood of one of those steps not working correctly. For me it was also a real pain point having to use the client laptop, as it had 3 different passwords to get in, none of which I set or could remember. It also meant that if we needed to do a deployment I had to be in the office to physically use the laptop, so no working from home that day.

My first step was to automate the build process. If we could get Azure Pipelines to at least build the project, I could download the files and copy them over manually. There are plenty of guides online on how to set this up, and the end result was a .zip artifact containing all the files required for the project. This removed a common source of errors, building locally on my machine, and meant that regardless of who wrote the code, the build process was always identical.
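For illustration, a build along those lines can be expressed as a YAML pipeline roughly as below. This is a hedged sketch rather than the project's actual pipeline; the solution pattern, MSBuild arguments and trigger branch are assumptions, but NuGet restore, VSBuild and publish-artifact tasks are the standard way to turn a .NET Framework web project into a zipped build artifact.

# Rough sketch - solution name, MSBuild arguments and trigger are assumed.
trigger:
- master

pool:
  vmImage: 'windows-latest'   # .NET Framework builds need a Windows agent

variables:
  buildConfiguration: 'Release'

steps:
- task: NuGetCommand@2
  displayName: 'Restore NuGet packages'
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'

- task: VSBuild@1
  displayName: 'Build solution'
  inputs:
    solution: '**/*.sln'
    configuration: '$(buildConfiguration)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'

- task: PublishBuildArtifacts@1
  displayName: 'Publish zipped artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'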

The second step was to set up a release pipeline. Within Azure Pipelines, what we wanted to do was create a deployment group, and then register the server we want to deploy to as a target within that deployment group. This allows us to deploy directly to an on-premises server. So, how do we do this?

Requirements:

  • PowerShell 3.0 or higher. On our Windows Server 2003 box, we needed to upgrade from PowerShell 2.0. This is a simple download, install and restart.
  • .NET Framework x64 4.5 or higher

Steps:

  1. Navigate to Deployment Groups under Pipelines in Azure DevOps:

Deployment groups menu item in Azure DevOps > Pipelines

2. Create a new deployment group. The idea is that you can have several servers in the same group and deploy the code to all of them simultaneously (for example, for load balancing). In my case I only have one target in my deployment group, so the grouping is a bit redundant.

#azure #azure-pipelines #deployment-pipelines #windows-server #azure-devops #devops

Eric Bukenya

Creating VM Images in Azure with Packer HCL, using Azure DevOps Pipelines.

In this blog post, I’ll show how to use a Packer file written in HCL to create an image in Azure. We’ll be using an Azure DevOps pipeline to build the image. That resulting image can then be used with Terraform to deploy VMs!
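As a rough, hedged sketch of what that pipeline can look like, the YAML below runs packer init to download any plugins declared in the template's required_plugins block and then builds the image. The file name ad.pkr.hcl and the variable names are assumptions for illustration.

# Hedged sketch - template file name and pipeline variable names are assumed.
trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: |
    packer init .
    packer build \
      -var "client_id=$(client_id)" \
      -var "client_secret=$(client_secret)" \
      -var "tenant_id=$(tenant_id)" \
      -var "subscription_id=$(subscription_id)" \
      ad.pkr.hcl
  displayName: 'packer init and build (HCL template)'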

Write Packer templates using HCL

As of v1.5, Packer supports HashiCorp Configuration Language (HCL), the language used by Terraform, which is much more human-readable to write and understand than the JSON used prior to v1.5. Packer is at version 1.7.3 at the time of writing.

It is recommended to migrate away from using .json files for your Packer templates. You can automatically upgrade your old Packer JSON files to HCL as of v1.6.2 using the hcl2_upgrade command.

packer hcl2_upgrade filename.json

I have another post describing how to use .json templates with the Azure DevOps pipeline here, if you don’t want to upgrade your templates to HCL first.

Packer HCL template

My Packer template below installs a plugin to perform Windows updates on the image and then generalises it, ready for VMs to use it as the source.

#azure-devops #terraform #ci-cd-pipeline #hashicorp-packer #azure

How to Build and Deploy C# Azure Functions using Multi-Stage Pipelines in Azure DevOps

As part of my personal development, I’ve created a personal health platform that uses various microservices (built using Azure Functions) to extract data from my Fitbit account and store it in an Azure Cosmos DB database. I have other microservices that pass messages between different services via Azure Service Bus.

For this project, I use Azure DevOps to build my artifacts, run my unit tests and deploy my microservices to Azure. The great thing about DevOps is that we can do all of this within the YAML pipeline.

Yes I said YAML. Honestly, I don’t know what the fuss is all about 😂

In a previous post, I talked about how we can deploy NuGet packages to a private feed in Azure Artifacts using YAML pipelines. If you haven’t read that post yet, you can check it out below!

https://dev.to/willvelida/publishing-nuget-packages-to-a-private-azure-artifacts-feed-with-yaml-build-files-3bnb

In this article, we will turn our attention to building and deploying C# Azure Functions using a single build file.

What we’ll cover

We’ve got quite a bit to cover, so I’ll break down my YAML file and talk about each stage in the following order (see the skeleton sketch after the list):

  • Triggering a Build 👷‍♂️👷‍♀️
  • Using User-Defined Variables in our pipelines 👨‍🔬👩‍🔬
  • Defining Stages 💻
  • Building our project 🔨
  • Running our tests 🧪
  • Getting code coverage 🧾
  • Producing a Build Artifact 🏠
  • Using Secrets from Key Vault 🔑
  • Deploying our Function to Azure ⚡
  • Running our build pipeline 🚀
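Before breaking these down, here is a heavily trimmed skeleton showing the overall shape of such a multi-stage pipeline. It is a hedged sketch, not the article's actual file: the stage and job names, the service connection my-service-connection and the app name my-function-app are assumptions, and the test, coverage and Key Vault steps listed above are omitted for brevity.

# Skeleton only - names are assumed; tests, coverage and Key Vault steps omitted.
trigger:
- main

variables:
  buildConfiguration: 'Release'

stages:
- stage: Build
  jobs:
  - job: BuildJob
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: DotNetCoreCLI@2
      displayName: 'Build Function project'
      inputs:
        command: 'build'
        projects: '**/*.csproj'
        arguments: '--configuration $(buildConfiguration)'
    - task: DotNetCoreCLI@2
      displayName: 'Publish Function project'
      inputs:
        command: 'publish'
        projects: '**/*.csproj'
        publishWebProjects: false
        arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
        zipAfterPublish: true
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: 'functionapp'

- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: DeployFunction
    pool:
      vmImage: 'ubuntu-latest'
    environment: 'production'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureFunctionApp@1
            displayName: 'Deploy to Azure Functions'
            inputs:
              azureSubscription: 'my-service-connection'   # assumed service connection
              appType: 'functionApp'
              appName: 'my-function-app'                   # assumed function app name
              package: '$(Pipeline.Workspace)/functionapp/**/*.zip'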

#azure #azure-devops #azure-functions #dotnet #devops #c#

Noah Rowe

Azure DevOps Pipelines: Multi-Stage Pipelines

The last couple of posts have dealt with releases managed from the Releases area under Azure Pipelines. This week we are going to take what we were doing in that separate area of Azure DevOps and instead make it part of the YAML that currently builds our application. If you need some background on how the project got to this point, check out the following posts.

Getting Started with Azure DevOps

Pipeline Creation in Azure DevOps

Azure DevOps Publish Artifacts for ASP.NET Core

Azure DevOps Pipelines: Multiple Jobs in YAML

Azure DevOps Pipelines: Reusable YAML

Azure DevOps Pipelines: Use YAML Across Repos

Azure DevOps Pipelines: Conditionals in YAML

Azure DevOps Pipelines: Naming and Tagging

Azure DevOps Pipelines: Manual Tagging

Azure DevOps Pipelines: Depends On with Conditionals in YAML

Azure DevOps Pipelines: PowerShell Task

Azure DevOps Releases: Auto Create New Release After Pipeline Build

Azure DevOps Releases: Auto Create Release with Pull Requests

Recap

The current setup uses a YAML-based Azure Pipeline to build a couple of ASP.NET Core web applications. On the Release side, we have basically a dummy release that doesn’t actually do anything but serves as a demo of how to configure a continuous-deployment-style release. The following is the current YAML for our pipeline, for reference.

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

resources:      
  repositories: 
  - repository: Shared
    name: Playground/Shared
    type: git 
    ref: master #branch name

trigger: none

variables:
  buildConfiguration: 'Release'

jobs:
- job: WebApp1
  displayName: 'Build WebApp1'
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'Get-ChildItem -Path Env:\'

  - template: buildCoreWebProject.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1

- job: WebApp2
  displayName: 'Build WebApp2'
  condition: and(succeeded(), eq(variables['BuildWebApp2'], 'true'))
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - template: build.yml
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp2.csproj
      artifactName: WebApp2

- job: DependentJob
  displayName: 'Build Dependent Job'
  pool:
    vmImage: 'ubuntu-latest'

  dependsOn:
  - WebApp1
  - WebApp2

  steps:
  - template: buildCoreWebProject.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1Again

- job: TagSources
  displayName: 'Tag Sources'
  pool:
    vmImage: 'ubuntu-latest'

  dependsOn:
  - WebApp1
  - WebApp2
  - DependentJob
  condition: |
    and
    (
      eq(dependencies.WebApp1.result, 'Succeeded'),
      in(dependencies.WebApp2.result, 'Succeeded', 'Skipped'),
      in(dependencies.DependentJob.result, 'Succeeded', 'Skipped')
    )

  steps:
  - checkout: self
    persistCredentials: true
    clean: true
    fetchDepth: 1

  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        $env:GIT_REDIRECT_STDERR = '2>&1'
        $tag = "manual_$(Build.BuildNumber)".replace(' ', '_')
        git tag $tag
        Write-Host "Successfully created tag $tag"

        git push --tags
        Write-Host "Successfully pushed tag $tag"

      failOnStderr: false

#azure-pipelines #azure #azure-devops #devops