1626888480
Get the source code: https://youtube.dotnetmicroservices.com/azurepipelines-tutorial-ci
How to create a Continuous Integration Azure Pipeline to automatically build and test all changes to your GitHub repository. You will learn:
#azuredevops #github #yaml #azure
1598702280
First, we will start with a very simple web application: an index.html file that displays the environment and version on the page.
Note: the web application is deliberately simple. You could just as well use the output of a more complex build (from a React, Angular, or Vue.js application, for example); that is not the focus of this article.
Here is the index.html file:
<!doctype html>
<html lang="en">
  <head>
    <title>Demo Pipeline</title>
  </head>
  <body>
    <h1>Variables ...</h1>
    <ul>
      <li>Environment : <span id="environment"></span></li>
      <li>Version : <span id="version"></span></li>
    </ul>
    <script src="assets/script.js"></script>
  </body>
</html>
Then the script.js file:
async function run() {
  // Load the generated configuration and inject each value into the page
  const response = await fetch('assets/config.json');
  const json = await response.json();

  Object.entries(json).forEach(([key, value]) => {
    const el = document.querySelector(`#${key}`);
    if (el) {
      el.textContent = value;
    } else {
      console.warn(`Element with id "${key}" not found`);
    }
  });
}

run().catch((err) => {
  console.error(err);
});
And the config.json file:
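The file's contents were not included in this excerpt; based on the keys the page displays and the ids the script looks up, it presumably looks something like the sketch below (the values shown here are placeholders, not the article's actual values):

```json
{
  "environment": "development",
  "version": "1.0.0"
}
```

Each top-level key maps to an element id on the page, so adding a new variable only requires a new key here and a matching element in the HTML.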
#continuous-delivery #ci-cd-pipeline #azure #continuous-integration #azure-devops #devops
1622096940
Once we started developing applications in MuleSoft and storing our code in source control platforms like GitHub, Bitbucket, GitLab, or Azure, just to mention the most common ones, we needed to look into automating the process to deploy our applications either to CloudHub or an on-premise server.
In this post, I will try to explain how a MuleSoft application can be automatically deployed into CloudHub or an on-premise server from Azure DevOps as our main CI platform and source control platform.
The first step is to set up our project in Azure DevOps. For this, you need a Microsoft account, which you can create here: https://dev.azure.com/.
Then we can create a new project, provide a name and a description, as well as set the privacy; by default, it comes set as “private.”
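As a rough illustration of where this is heading, a minimal azure-pipelines.yml for a Mule project might drive the deployment through the Mule Maven plugin. This is a sketch, not the article's actual pipeline: the branch name, pool image, and the `anypointUser`/`anypointPassword` secret variables are assumptions, and the CloudHub or on-premise target itself must be configured in the project's pom.xml:

```yaml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Package the Mule application and deploy it via the Mule Maven plugin.
  # -DmuleDeploy tells the plugin to deploy to the target defined in pom.xml.
  - task: Maven@3
    displayName: 'Package and deploy Mule application'
    inputs:
      mavenPomFile: 'pom.xml'
      goals: 'clean deploy -DmuleDeploy'
      options: '-Danypoint.username=$(anypointUser) -Danypoint.password=$(anypointPassword)'
```

Keeping the Anypoint credentials in secret pipeline variables (rather than in the pom.xml) means the same pipeline definition can be shared safely across repositories.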
#integration #azure #mule 4 #pipeline #devops #azure devops
1603177200
DevOps is supposed to help streamline the process of taking code changes and getting them to production for users to enjoy. But what exactly does it mean for the process to be “streamlined”? One way to answer this is to start measuring metrics.
Metrics give us a way to make sure our quality stays the same over time because we have numbers and key identifiers to compare against. Without any metrics being measured, you don’t have a way to measure improvements or regressions. You just have to react to them as they come up.
When you know the indicators that show what condition your system is in, it lets you catch issues faster than if you don’t have a steady-state to compare to. This also helps when you get ready for system upgrades. You’ll be able to give more accurate estimates of the number of resources your systems use.
After you’ve recorded some key metrics for a while, you’ll start noticing places you could improve your application or ways you can reallocate resources to where they are needed more. Knowing the normal operating state of your system’s pipeline is crucial, but it takes time to set up a monitoring tool and establish that baseline.
The main thing is that you decide to watch some metrics to get an idea of what’s going on when you start the deploy process. In the beginning, it might seem hard to figure out what the best metrics for a pipeline are.
You can conduct chaos engineering experiments to test different conditions and learn which metrics matter most to your system. You can look at things like time from build to deploy, the number of bugs caught in different phases of the pipeline, and build size.
Deciding what to measure can be one of the harder parts, and it largely determines how effective your chosen metrics will be. When you’re considering metrics, look at what the most important results of your pipeline are.
Do you need your app to get through the process as quickly as possible, regardless of errors? Can you figure out why that sporadic issue keeps stopping the deploy process? What’s blocking you from getting your changes to production with confidence?
That’s how you’re going to find those key metrics quickly. Running experiments and looking at common deploy problems will show you what’s important early on. This is one of the ways you can make sure that your metrics are relevant.
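As a small sketch of what tracking one of these metrics could look like in practice, the snippet below (illustrative Python with made-up timestamps, not tied to any particular CI tool) computes the build-to-deploy lead time across a few pipeline runs and averages it to form a baseline:

```python
from datetime import datetime, timedelta

# Hypothetical pipeline runs: (build finished, deploy finished)
runs = [
    (datetime(2020, 10, 19, 9, 0), datetime(2020, 10, 19, 9, 24)),
    (datetime(2020, 10, 19, 14, 0), datetime(2020, 10, 19, 14, 36)),
    (datetime(2020, 10, 20, 11, 0), datetime(2020, 10, 20, 11, 30)),
]

# Lead time for each run: elapsed time from finished build to finished deploy
lead_times = [deploy - build for build, deploy in runs]

# The average gives a steady-state baseline to compare future runs against
average = sum(lead_times, timedelta()) / len(lead_times)
print(average)  # 0:30:00
```

Once a baseline like this exists, a run whose lead time drifts well above it is a concrete signal to investigate, rather than something you only notice anecdotally.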
#devops #devops-principles #devops-tools #devops-challenges #devops-adoption-challenges #devops-adoption #continuous-deployment #continuous-integration
1595494080
The last couple of posts have been dealing with releases managed from the Releases area under Azure Pipelines. This week we are going to take what we were doing in that separate area of Azure DevOps and instead make it part of the YAML that currently builds our application. If you need some background on how the project got to this point, check out the following posts.
Getting Started with Azure DevOps
Pipeline Creation in Azure DevOps
Azure DevOps Publish Artifacts for ASP.NET Core
Azure DevOps Pipelines: Multiple Jobs in YAML
Azure DevOps Pipelines: Reusable YAML
Azure DevOps Pipelines: Use YAML Across Repos
Azure DevOps Pipelines: Conditionals in YAML
Azure DevOps Pipelines: Naming and Tagging
Azure DevOps Pipelines: Manual Tagging
Azure DevOps Pipelines: Depends On with Conditionals in YAML
Azure DevOps Pipelines: PowerShell Task
Azure DevOps Releases: Auto Create New Release After Pipeline Build
Azure DevOps Releases: Auto Create Release with Pull Requests
The current setup uses a YAML-based Azure Pipeline to build a couple of ASP.NET Core web applications. On the Release side, we have basically a dummy release that doesn’t actually do anything, but it serves as a demo of how to configure a continuous-deployment-style release. The following is the current YAML for our Pipeline, for reference.
name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

resources:
  repositories:
    - repository: Shared
      name: Playground/Shared
      type: git
      ref: master # branch name

trigger: none

variables:
  buildConfiguration: 'Release'

jobs:
  - job: WebApp1
    displayName: 'Build WebApp1'
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: 'Get-ChildItem -Path Env:\'
      - template: buildCoreWebProject.yml@Shared
        parameters:
          buildConFiguration: $(buildConfiguration)
          project: WebApp1.csproj
          artifactName: WebApp1

  - job: WebApp2
    displayName: 'Build WebApp2'
    condition: and(succeeded(), eq(variables['BuildWebApp2'], 'true'))
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - template: build.yml
        parameters:
          buildConFiguration: $(buildConfiguration)
          project: WebApp2.csproj
          artifactName: WebApp2

  - job: DependentJob
    displayName: 'Build Dependent Job'
    pool:
      vmImage: 'ubuntu-latest'
    dependsOn:
      - WebApp1
      - WebApp2
    steps:
      - template: buildCoreWebProject.yml@Shared
        parameters:
          buildConFiguration: $(buildConfiguration)
          project: WebApp1.csproj
          artifactName: WebApp1Again

  - job: TagSources
    displayName: 'Tag Sources'
    pool:
      vmImage: 'ubuntu-latest'
    dependsOn:
      - WebApp1
      - WebApp2
      - DependentJob
    condition: |
      and
      (
        eq(dependencies.WebApp1.result, 'Succeeded'),
        in(dependencies.WebApp2.result, 'Succeeded', 'Skipped'),
        in(dependencies.DependentJob.result, 'Succeeded', 'Skipped')
      )
    steps:
      - checkout: self
        persistCredentials: true
        clean: true
        fetchDepth: 1
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: |
            $env:GIT_REDIRECT_STDERR = '2>&1'
            $tag = "manual_$(Build.BuildNumber)".replace(' ', '_')
            git tag $tag
            Write-Host "Successfully created tag $tag"
            git push --tags
            Write-Host "Successfully pushed tag $tag"
          failOnStderr: false
#azure-pipelines #azure #azure-devops #devops
1596976020
As someone who has spent most of their (very short) career doing one-click cloud resource deployments, I was shocked when I jumped onto a legacy project and realised the complexity of the deployment process to staging and production. Using a traditional .NET Framework application stack, the deployment process consisted of the following steps:
As you can see, and may have experienced yourself, this is a long, slow, and error-prone process that can often take over an hour, given the likelihood of one of those steps not working correctly. For me it was also a real pain point having to use the client laptop, as it had 3 different passwords to get in, none of which I set or could remember. It also meant that if we needed to do a deployment I had to be in the office to physically use the laptop — no working from home that day.
My first step was to automate the build process. If we could get Azure Pipelines to at least build the project, I could download the files and copy them over manually. There are plenty of guides online on how to set this up, but the final result meant it gave me a .zip artifact of all the files required for the project. This also took away a common hotspot for errors, which was building locally on my machine. This also meant regardless of who wrote the code, the build process was always identical.
The second step was to **set up a release pipeline**. Within Azure Pipelines, what we wanted to do was create a deployment group, and then register the server we want to deploy to as a target within that deployment group. This will allow us to deploy directly to an on-premise server. So, how do we do this?
Requirements:
Steps:
1. Go to the Deployment groups menu item in Azure DevOps > Pipelines.
2. Create a new deployment group. The idea is you can have several servers that are in the same group and deploy the code to all of them simultaneously (for example for load balancing reasons). In my case I only have one target in my deployment group, so the idea of a group is a bit redundant.
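Registering the target is then done by running, on the server itself, the agent registration script that Azure DevOps generates for the deployment group. The sketch below only approximates what that generated PowerShell does; the organization, project, and group names are placeholders, the agent download URL and version are illustrative, and you should use the exact script copied from the portal (Pipelines > Deployment groups > your group):

```powershell
# Run in an elevated PowerShell session on the target server.
mkdir 'C:\azagent'; cd 'C:\azagent'

# Download and extract the pipelines agent (URL and version are illustrative)
Invoke-WebRequest 'https://vstsagentpackage.azureedge.net/agent/2.194.0/vsts-agent-win-x64-2.194.0.zip' -OutFile agent.zip
Expand-Archive agent.zip -DestinationPath .

# Register this machine as a target in the deployment group,
# authenticating with a Personal Access Token (PAT)
.\config.cmd --deploymentgroup `
    --deploymentgroupname 'MyDeploymentGroup' `
    --projectname 'MyProject' `
    --url 'https://dev.azure.com/my-organization' `
    --auth PAT --token $env:AZP_TOKEN `
    --runasservice
```

Running the agent as a service means the target stays registered across reboots, so subsequent releases can deploy to it without anyone touching the server.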
#azure #azure-pipelines #deployment-pipelines #windows-server #azure-devops #devops