Nigel Uys


Ansible-role-jenkins: Ansible Role - Jenkins CI

Ansible Role: Jenkins CI

Installs Jenkins CI on RHEL/CentOS and Debian/Ubuntu servers.


Requires curl to be installed on the server. Also, newer versions of Jenkins require Java 8+ (see the test playbooks inside the molecule/default directory for an example of how to use newer versions of Java for your OS).

Role Variables

Available variables are listed below, along with default values (see defaults/main.yml):

jenkins_package_state: present

The state of the jenkins package install. By default this role installs Jenkins but will not upgrade Jenkins (when using package-based installs). If you want to always update to the latest version, change this to latest.

jenkins_hostname: localhost

The system hostname; usually localhost works fine. This will be used during setup to communicate with the running Jenkins instance via HTTP requests.

jenkins_home: /var/lib/jenkins

The Jenkins home directory, which is used for storing artifacts, workspaces, and plugins, among other things. This variable allows you to override the default /var/lib/jenkins location.

jenkins_http_port: 8080

The HTTP port for Jenkins' web interface.

jenkins_admin_username: admin
jenkins_admin_password: admin

Default admin account credentials which will be created the first time Jenkins is installed.

jenkins_admin_password_file: ""

Default admin password file which will be created the first time Jenkins is installed as /var/lib/jenkins/secrets/initialAdminPassword

jenkins_jar_location: /opt/jenkins-cli.jar

The location at which the jenkins-cli.jar jarfile will be kept. This is used for communicating with Jenkins via the CLI.

jenkins_plugins:
  - blueocean
  - name: influxdb
    version: "1.12.1"

Jenkins plugins to be installed automatically during provisioning. Defaults to an empty list ([]). Items can be plain plugin names, or dictionaries with name and version keys to pin a plugin to a specific version (as in the example above).

jenkins_plugins_install_dependencies: true

Whether Jenkins plugins to be installed should also install any plugin dependencies.

jenkins_plugins_state: present

Use latest to ensure all plugins are running the most up-to-date version. For any plugin that has a specific version set in jenkins_plugins list, state present will be used instead of jenkins_plugins_state value.

jenkins_plugin_updates_expiration: 86400

Number of seconds after which a new copy of the update-center.json file is downloaded. Set it to 0 if no cache file should be used.

jenkins_updates_url: ""

The URL to use for Jenkins plugin updates and update-center information.

jenkins_plugin_timeout: 30

The server connection timeout, in seconds, when installing Jenkins plugins.

jenkins_version: "2.346"
jenkins_pkg_url: ""

(Optional) The Jenkins version can be pinned to any version available in the Jenkins package repository for your platform (Debian/Ubuntu or RHEL/CentOS). If the Jenkins version you need is not available in the default package URLs, you can override the URL with your own by setting jenkins_pkg_url (note: the role depends on the same naming convention that the official packages use).

jenkins_url_prefix: ""

Used for setting a URL prefix for your Jenkins installation. The option is added as --prefix={{ jenkins_url_prefix }} to the Jenkins initialization Java invocation, so you can access the installation at a path like {{ jenkins_url_prefix }} on your Jenkins host. Make sure you start the prefix with a / (e.g. /jenkins).

jenkins_connection_delay: 5
jenkins_connection_retries: 60

Amount of time and number of times to wait when connecting to Jenkins after initial startup, to verify that Jenkins is running. Total time to wait = delay * retries, so by default this role will wait up to 300 seconds before timing out.

jenkins_prefer_lts: false

By default, this role will install the latest version of Jenkins using the official repositories according to the platform. You can install the current LTS version instead by setting this to true.

The default repositories (listed below) can be overridden as well.

# For RedHat/CentOS:
jenkins_repo_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.repo
jenkins_repo_key_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key

# For Debian/Ubuntu:
jenkins_repo_url: deb https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }} binary/
jenkins_repo_key_url: https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key

It is also possible to prevent the repo file from being added by setting jenkins_repo_url: ''. This is useful if, for example, you sign your own packages or run internal package management (e.g. Spacewalk).

jenkins_options: ""

Extra options (e.g. setting the HTTP keep alive timeout) to pass to Jenkins on startup via JENKINS_OPTS in the systemd override.conf file can be configured using the var jenkins_options. By default, no options are specified.

jenkins_java_options: "-Djenkins.install.runSetupWizard=false"

Extra Java options for the Jenkins launch command configured via JENKINS_JAVA_OPTS in the systemd override.conf file can be set with the var jenkins_java_options. For example, if you want to configure the timezone Jenkins uses, add -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York. By default, the option to disable the Jenkins 2.0 setup wizard is added.
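For example, to keep the setup-wizard flag and add the timezone option at the same time, combine them in a single string (a sketch of a playbook var override):

```yaml
# Both flags in one string; the timezone value is just an example
jenkins_java_options: "-Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York"
```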

jenkins_init_changes:
  - option: "JENKINS_OPTS"
    value: "{{ jenkins_options }}"
  - option: "JAVA_OPTS"
    value: "{{ jenkins_java_options }}"
  - option: "JENKINS_HOME"
    value: "{{ jenkins_home }}"
  - option: "JENKINS_PREFIX"
    value: "{{ jenkins_url_prefix }}"
  - option: "JENKINS_PORT"
    value: "{{ jenkins_http_port }}"

Changes made to the Jenkins systemd override.conf file; the default set of changes set the configured URL prefix, Jenkins home directory, Jenkins port and adds the configured Jenkins and Java options for Jenkins' startup. You can add other option/value pairs if you need to set other options for the Jenkins systemd override.conf file.
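As a sketch, an extra option/value pair can be appended to the same list (the variable name jenkins_init_changes is an assumption based on recent versions of this role; check defaults/main.yml for yours, and the JAVA_HOME entry is purely hypothetical):

```yaml
jenkins_init_changes:
  - option: "JENKINS_OPTS"
    value: "{{ jenkins_options }}"
  - option: "JAVA_OPTS"
    value: "{{ jenkins_java_options }}"
  # hypothetical extra pair: point Jenkins at a specific JVM
  - option: "JAVA_HOME"
    value: "/usr/lib/jvm/java-11-openjdk"
```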

jenkins_proxy_host: ""
jenkins_proxy_port: ""
jenkins_proxy_noproxy:
  - ""
  - "localhost"

If you are running Jenkins behind a proxy server, configure these options appropriately. Otherwise Jenkins will be configured with a direct Internet connection.



Example Playbook

- hosts: jenkins
  become: true

  vars:
    java_packages:
      - openjdk-8-jdk

  roles:
    - role: geerlingguy.java
    - role: geerlingguy.jenkins

Note that java_packages may need different versions depending on your distro (e.g. openjdk-11-jdk for Debian 10, or java-1.8.0-openjdk for RHEL 7 or 8).
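For instance, the vars block for a Debian 10 host might look like this (java_packages is consumed by the geerlingguy.java role; the package names come from the note above):

```yaml
vars:
  java_packages:
    - openjdk-11-jdk   # Debian 10; RHEL 7/8 would use java-1.8.0-openjdk
```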

Download Details:

Author: Geerlingguy
Source Code: 
License: MIT license

#ansible #jenkins #ci #role 

Anissa Beier


CI/CD Best Practices You Need to Know

In this article, we will learn about CI/CD Best Practices You Need to Know. Continuous Integration and Delivery (CI/CD) take software development from code to a live product. CI/CD forms part of DevOps processes, with many commonly agreed-upon best practices you can follow to improve your deployment pipeline.

If you work in DevOps, you've probably used a build server like Jenkins and a deployment tool like Octopus Deploy to complete your deployment process. Octopus supports the Continuous Delivery side of CI/CD, providing a best-in-category product that makes complex deployments easier.

At Octopus, we believe in the power of 8. An octopus has 8 limbs, so here are 8 best practices to help your deployment journey.

You can also learn more in our DevOps engineer's handbook.

Adopt Agile methodologies

Agile methodologies are vital to CI/CD and DevOps. Agile is a project management approach involving continuous collaboration with stakeholders and continuous improvement at each stage of the deployment process.

The principle of Agile is having frequent feedback through small development iterations so developers can closely align the final product with the user's needs. Agile methodologies contrast traditional waterfall methods, where projects are scoped and delivered in a single phase.

We recommend software projects are managed according to Agile and Lean principles so the continuous feedback loop can improve the product. We've seen Agile implemented as a checklist for upper management to tick off. In the initial stages, software teams apply Agile to meet the checklist. As teams have permission to explore the Agile space, they start to see the real benefits.

Use version-controlled code, connected to the deployment process, committed frequently

If you work in software, you've almost certainly used Git. The wars on source-controlled code have been fought and won, and Git is now synonymous with source control.

Source-controlled code allows a complete history and rollback of code to previous versions. You can also resolve conflicts by using Git's merging methods.

Committing a code change should trigger a CI/CD pipeline build. This trigger allows developers to test and validate changes to the codebase earlier. After a code change is set up to trigger an automated build, developers should be encouraged to commit their code at least once a day. Daily commits trigger automated tests more frequently so developers notice any errors sooner.
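As a minimal sketch of such a trigger (GitLab CI syntax chosen for illustration; the job name and test command are assumptions):

```yaml
# A job with no rules runs on every push,
# so each commit triggers the automated tests
test:
  script:
    - make test
```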

Use Configuration as Code for your deployment process

Config as Code represents your deployment process in a Git-based system. Deployments inherit all the benefits of Git, such as branching, version control, and approvals as pull requests.

Config as Code lets you store your deployment process in Git. You can test changes to deployments in a branch and validate them through a pull request. Git-based deployments make it easier to transfer a deployment setup from one environment to another.

In 2022 Q1, we released Config as Code for Octopus Deploy, and believe we set an industry standard. Other Config as Code solutions sacrifice usability for functionality. In Octopus, you get all the features of Config as Code, whether you use the UI or the version-controlled implementation.

Choose a tool that lets you keep builds green

A green build in a CI/CD pipeline means that every test passed, and the release has progressed to the next stage. Software teams aim to keep builds green.

You should choose a deployment tool that surfaces information to help keep builds green. Many deployment processes only use a build server that pushes releases into production. In practice, only using a build server makes it harder to manage a release between different deployment stages. Using a dedicated deployment tool gives you a dedicated management layer to keep builds green.

A build server doesn't include the concept of deployment stages. Octopus Deploy, however, separates a release into Dev, Test, and Production environments, and a release can be at a different version in each environment. Our UI shows each release's deployment stage and lets you transition releases between stages. The Octopus UI also shows logs and error messages to help developers quickly identify failing builds.

Continuously automate your tests

Testing code changes is essential to producing reliable releases. The testing suite should cover all use cases for the product, from functional to non-functional tests. These tests should be automated so that a code change can trigger an automated test and build. Automated tests improve the agility of a software development project to get releases live faster.

A survey by mabl on the state of testing in DevOps indicates that automated testing (at least 4 to 5 different types of tests) is key to customer happiness. In the 2021 State of DevOps DORA Report, continuous testing is an indicator of success: elite performers who meet their reliability targets are 3.7 times more likely to leverage continuous testing.

Strengthen the feedback loop through monitoring

Developers use telemetry data (logs, metrics, and traces) to understand their system's internal state. Telemetry unlocks observability so developers can act on data to fix their system. When you have telemetry data, you can use observability tools to add monitoring capabilities to your system.

Monitoring key system metrics can help diagnose a system for vulnerabilities and identify improvements. In the DevOps community, DORA metrics are commonly accepted as crucial metrics for the success of a deployment pipeline.

Octopus lets you measure results, compare project statuses, and continuously improve with DevOps Insights focused on the DORA metrics.

Use technologies that are fit-for-purpose

Every year, there are new technologies that people claim will revolutionize the IT landscape. Whether it's containerization, machine learning, or blockchain, some technologies genuinely change the playing field, while others are too immature to make a real impact. When managing a CI/CD pipeline, it's essential to choose only technologies that are fit for purpose.

While being cloud-first makes sense for some parts, forcing everything onto the cloud might not be the right solution. Adoption of new technologies can bring significant improvements, but taking a measured approach avoids unnecessary pain when the costs of adoption outweigh the benefits.

Take security seriously

As software projects get larger, the security risks increase with more data handling, users, and dependencies. Your deployment process should have a security strategy.

Many cloud providers, like AWS, Azure, and Google, have built-in security features such as IAM, secrets, and role-based permissions. You can use these features to manage some security concerns.

Customers are increasingly concerned with security, and companies need to invest in certifications such as ISO 27001 and SOC II to certify their compliance with security regulations.

On May 12, 2021, the US government released Executive Order 14028, "Improving the Nation's Cybersecurity". The Order requires all vendors of government software projects to produce a Software Bill of Materials (SBOM). An SBOM details all software components so that governments can screen software for cybersecurity risks. If you want an example of how to produce an SBOM and attach it to your deployment process, we created a free tool called the Octopus Workflow Builder that can help.


CI/CD is part of the DevOps model and helps bring software projects from code to customers. If you work in DevOps and implement CI/CD, you should follow industry-standard best practices for your pipeline. To help, this post covered 8 best practices you can use to make the most of CI/CD.

Many tools can help you with CI/CD, from build servers and deployment tools to monitoring solutions. Octopus Deploy fits into CI/CD as a Continuous Deployment solution making complex deployments easier.

Original article sourced at:

#cd #ci 

Rupert Beatty


Complete Guide: Continuous Integration and Continuous Delivery

Introduction to Continuous Integration and Delivery

CI/CD (Continuous Integration and Continuous Delivery) incorporate values, a set of operating principles, and a collection of practices that enable application development teams to deliver changes more reliably and regularly; this is also known as CI/CD pipeline. But what do the individual terms mean?

What is Continuous Integration (CI)?

It is an approach in which developers merge their code into a shared repository several times a day. Automated builds and tests are then run to verify the integrated code.

What is Continuous Delivery (CD)?

Continuous Delivery is a strategy in which the development team ensures the software is reliable enough to release at any time. On each commit, the software passes through an automated testing process; if it passes, it is ready for release into production.

What is CI/CD Pipeline?

CI is the short form for Continuous Integration, and CD is the short form for Continuous Delivery. The CI/CD pipeline is a crucial part of the modern DevOps environment: it is the path the software follows to production under CI and Continuous Delivery practices. The pipeline spans the software development lifecycle and consists of several stages or phases through which the software passes.

Version Control Phase

In this phase of the CI/CD pipeline, the developers' code is kept in a version control system such as Git, Apache Subversion, or others. Version control tracks the commit history of the code so that it can be changed or rolled back when needed.

Build Phase

In this phase, developers' code is retrieved from the version control system and compiled. If the build succeeds, the software moves on to testing.

Unit Testing and Staging

When software reaches this stage, various tests are run against it. One of the main tests is the unit test, which tests the individual units of the software. After successful testing, the staging phase begins: having passed its tests, the software is ready to deploy to the staging environment/server. Here the code can be reviewed and finalized before the final tests are conducted.

Auto Testing Phase

After moving to the staging environment, another set of automated tests is run against the software. If the software passes these tests, it moves to the next stage: the deployment phase.

Deployment Phase

Once automated testing is over, the software is deployed to production. However, if any error occurs during the testing or deployment phase, the software goes back to the development team's version control process so the errors can be checked and fixed. The other stages are then repeated as necessary.
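The phases above can be sketched as the stages of a pipeline definition (GitLab CI syntax for illustration; the stage names are assumptions):

```yaml
# One stage per phase described above; jobs assigned to a stage
# run only after all jobs in the previous stage succeed
stages:
  - build
  - unit_test
  - staging_deploy
  - auto_test
  - production_deploy
```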

What are the best tools?

Automate the CI/CD process to get the best results. Various Continuous Integration and its tools help us automate the process precisely and with the least effort. These tools are mostly open-source and help with collaborative software development. Some of these tools are:


Jenkins

Jenkins is the most commonly used tool for the CI/CD process. It is one of the earliest and most powerful tools, with various interfaces and built-in tools that help automate the CI/CD process. It was first introduced as part of a project named Hudson, released in 2005, and was officially released as Jenkins in 2011. It has a vast plugin ecosystem, which helps in delivering the features that we need.

Circle CI

Circle CI is growing in popularity and is becoming one of the best build platforms. It is a modern tool for the CI/CD process. One of its newest features is CircleCI Orbs: shareable code packages that help set up the build pipeline easily and quickly.

GitLab CI

GitLab CI is built into GitLab, a web-based DevOps tool that provides a Git repository manager. It was integrated into GitLab after starting as a standalone project, with the integrated release arriving in September 2015. The process is defined within the code repository using YAML configuration syntax. Jobs are executed by tools named runners; while configuring runners, we can choose different executors such as Docker, VirtualBox, and many more.


Buddy

Buddy is one of the newest and smartest tools. It is aimed at developers and helps lower the entry threshold into DevOps. Its initial name was meat!, which was changed to Buddy in November 2015; initially it was a cloud-only service. It does not use YAML configuration by default, but it supports .yml files.


GoCD

GoCD is an open-source tool that helps development teams with Continuous Delivery and Continuous Integration. Initially released under the name Cruise in 2007 by ThoughtWorks, it was renamed GoCD in 2010.

What is the CI/CD Process? 

It is a process involving both practices: it starts with Continuous Integration, and Continuous Delivery picks up where CI ends. It goes hand in hand with the DevOps development approach.

What are the benefits of CI/CD?

What all are the benefits of incorporating CI/CD in your business framework? Know the details below:

  • Easy to Debug and Change: It is easier to debug and change code when small pieces are continuously integrated. We can test these pieces as they are integrated with the code repository.
  • Release and Delivery Speed Increases: With CI/CD, the speed of release and delivery increases along with the pace of development. Releases become more frequent and reliable.
  • Increased Code Quality: Code quality increases because the code can be tested every time it is integrated with the code repository. Development becomes more secure and reliable. Also, the CI/CD pipeline automates the integration and testing work, so more time can be spent improving code quality.
  • Reduced Cost: Automating the development and testing process reduces the effort of testing and integration. Automation reduces errors, saving developers' time and cost, which can go toward increasing code quality.
  • Increased Flexibility: With CI/CD, errors are found quickly, and the product can be released more frequently. The flexibility to add new features increases, and with automation, new changes can be adopted quickly and reliably.

What are the challenges of Continuous Integration and Delivery?

Listed below are some common pitfalls one may experience while working with CI/CD:

May Automate wrong Processes

To shift from traditional models to DevOps, existing organizations need to go through a transition process, which can be long and difficult. This process can take months, or even longer if you don't follow the right transition steps. When deciding what to automate while adopting CI/CD, consider:

    • The repetition frequency of the process.
    • The dependencies involved in the process and the delay they produce.
    • The length of the process.
    • The urgency of automating the process.
    • Whether the process is prone to errors when not automated.

These points can help in choosing which processes to automate, based on priority. They can also help with the CI/CD testing process when we are unsure whether to automate functional testing or UI testing.

Empower your enterprise with a CI/CD pipeline to minimize issues at deployment and achieve a faster production rate. Source: Infrastructure as Code Platform for Cloud-Native

Confusion between Continuous Deployment and Delivery

Many organizations fail to distinguish between the two, but they are very different concepts. In Continuous Delivery, code repository changes pass through the pipeline and, if successful, are ready to be deployed to production after manual approval. In Continuous Deployment, changes that pass the pipeline are deployed to the production environment automatically, without manual approval.

Inadequate Coordination between CI and CD

Continuous Delivery is the next step after Continuous Integration. They are two different practices, but implementing CI/CD takes collaboration between the two, and it is not possible to automate collaboration and communication.

Meaningful Dashboards and Metrics may be absent

In many cases, the scrum team may create a dashboard without properly assessing what to track. The team falls prey to the logical misconception that the chosen metrics must be important, and may end up following the wrong ones. Different members of a team also have different preferences: some prefer traffic-light indicators, while others may not like what colleagues have built. Creating meaningful and useful CI/CD dashboards can therefore be tricky, since it is difficult to listen to everyone and satisfy them all.

Requires New Skillset

The process is complicated for some developers and testers used to traditional in-house software development techniques. There are two solutions: re-train existing employees in the automation process, or hire new people who already know it. Both solutions are a cost to the organization.

Maintenance is not Easy

After the transition, maintenance is necessary to ensure that the pipeline works properly and the automated processes do not break. The bigger the organization, the more difficult it is to maintain the pipelines for its different services.

What are the best practices of Continuous Integration and Delivery?

There are some practices in the CI/CD process, which greatly enhance the performance of the process, and adhering to them can help avoid some common problems : 

Make the CI/CD pipeline the only way to deploy to production.

With CI/CD, failures are immediately visible, and production is stopped until the cause of the failure is found and corrected. This is an important mechanism that keeps further environments safe from untrusted code. Because the pipeline is built solely for software development, integration, and delivery work, it has an advantage over other procedures: it is automated and hence faster.

The fastest tests should be the earliest to run.

Some tests are comparatively faster than others, and we should run those first. Running the fastest tests first helps find errors sooner, and in software development it is essential to find errors as early as possible to prevent further problems.

Running the tests locally before committing to the CI/CD pipeline.

Developers should run the tests locally before committing to the CI/CD pipeline or shared repository. This step is beneficial as it helps troubleshoot software problems before the code is shared with others, and it is advantageous for the developer.

Keep the CI/CD pipelines fast

CI/CD pipelines are the core part of the CI/CD process; they are responsible for faster integration and faster delivery. We should find and apply methods to improve the speed and optimize the pipeline environment.
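One common speed-up is caching dependencies between pipeline runs; here is a minimal sketch in GitLab CI syntax (the cached path assumes a Node.js project and is purely illustrative):

```yaml
# Reuse downloaded dependencies across runs on the same branch
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
```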


CI/CD is among the best practices for DevOps teams to implement using a DevOps assembly line. Additionally, it is a methodology for the agile enterprise that helps the development team meet business requirements with high code quality and security, because the deployment steps are automated.

Original article source at:

#ci #cd 


How to Use infracost CI Template for Gitlab To Forecast Cost

Infracost is an open-source tool used to forecast and estimate your cloud costs on every pull request for Terraform projects.

Multiple scenarios and scripts can be created to forecast cloud costs. It supports the AWS, Azure, and GCP cloud platforms and over 230 Terraform resources.

It also works with Terraform Cloud and Terragrunt. Infracost can use the hosted Cloud Pricing API or a self-hosted one.

Infracost can be integrated with any CI/CD tool to break down the cost of new Terraform resources every time a pull request or merge request is created. In this blog, we will see how to use GitLab CI templates in the merge request pipeline to estimate and forecast cloud costs.


Prerequisites

  • Gitlab knowledge
  • Gitlab Repo
  • CICD variables

Directory Structure

The directory structure for your application or pipeline repo should look like this. The command to check the directory structure is tree -I .git -a

tree -I .git -a

The output will look like this:

├── .gitlab
│   └── plan-json.yml
├── .gitlab-ci.yml
└── terraform
    ├── .infracost
    │   └── terraform_modules

Steps to create job template

1. Create .gitlab-ci.yml file in the main directory. The main Gitlab pipeline is defined in .gitlab-ci.yml file.

This acts as parent job which triggers a downstream pipeline that is called the child pipeline.

stages:
  - all_stage

# Job name below is illustrative; `include` and `strategy` belong under `trigger:`
trigger-plan-json:
  stage: all_stage
  rules:
    - if: "$CI_COMMIT_TAG"
  trigger:
    include: ".gitlab/plan-json.yml"
    strategy: depend

2. Create GitLab merge request job template.

Create a file plan-json.yml & copy the below content in it. This will act as downstream pipeline triggered by the parent pipeline.

# Keys reconstructed where they were lost; job names are illustrative
workflow:
  rules:
    - if: "$CI_COMMIT_TAG"

variables:
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: terraform

stages:
  - plan
  - infracost

cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform

plan:
  stage: plan
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  script:
    - cd ${TF_ROOT}
    - terraform init
    - terraform plan -out tfplan.binary
    - terraform show -json tfplan.binary > tplan.json
  artifacts:
    paths:
      - ${TF_ROOT}/tplan.json

infracost:
  stage: infracost
  image:
    name: infracost/infracost:ci-0.10
    entrypoint: [""]
  dependencies:
    - plan
  script:
    - git clone $CI_REPOSITORY_URL --branch=$CI_MERGE_REQUEST_TARGET_BRANCH_NAME --single-branch /tmp/base
    - ls /tmp/base
    - infracost configure set api_key $INFRACOST_API_KEY
    - infracost breakdown --path=/tmp/base/${TF_ROOT} --format=json --out-file=infracost-base.json
    - INFRACOST_ENABLE_CLOUD=true infracost diff --path=${TF_ROOT} --compare-to=infracost-base.json --format=json --out-file=infracost.json
    - infracost comment gitlab --path=infracost.json --repo=$CI_PROJECT_PATH --merge-request=$CI_MERGE_REQUEST_IID --gitlab-server-url=$CI_SERVER_URL --gitlab-token=$GITLAB_TOKEN --behavior=update


Explanation: This downstream pipeline first creates a Terraform plan in JSON from the current branch. It then produces the same JSON plan from the target branch (main in this case), compares the two Terraform plans, breaks down the cost difference, and finally comments on the merge request.

3. Now, whenever you create an MR in GitLab, it will forecast the Terraform infrastructure cost for you:

  1. Checkout a branch from Master
  2. Make some changes in terraform file
  3. Push the newly created branch
  4. Create an MR on the base main branch
  5. The pipeline will automatically trigger as soon as MR is created.

When the pipeline succeeds, it will comment on your MR with the cost estimation and breakdown.


  • Use extends: keyword to extend this job template.
  • (Optionally) You can keep this stage as it is and apply rules to control the behavior of the job.
  • Put your terraform code into terraform/ dir.
  • Create a MR on your GitLab repo.
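As a sketch of the extends: approach from the first note (the job names here are assumptions; match them to the jobs actually defined in your plan-json.yml):

```yaml
# Hypothetical job reusing the template's infracost job via `extends:`
my-infracost:
  extends: infracost          # replace with the real job name from plan-json.yml
  rules:
    - if: "$CI_MERGE_REQUEST_IID"   # run only on merge request pipelines
```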


Infracost is used to forecast cloud infrastructure costs before any resources are created. You can integrate this tool with any CI/CD system to forecast costs whenever a pull or merge request is created.

Original article source at:

#gitlab #ci #template 

Elian Harber


Golangci-lint: Fast Linters Runner for Go


Fast linters runner for Go


golangci-lint is a fast linters runner for Go. It runs linters in parallel, uses caching, supports YAML configuration, has integrations with all major IDEs, and includes dozens of linters.

Install golangci-lint


  • Very fast: runs linters in parallel, reuses the Go build cache, and caches analysis results.
  • ⚙️ YAML-based configuration.
  • 🖥 Integrations with VS Code, Sublime Text, GoLand, GNU Emacs, Vim, Atom, GitHub Actions.
  • 🥇 A lot of linters included, no need to install them.
  • 📈 Minimum number of false positives thanks to tuned default settings.
  • 🔥 Nice output with colors, source code lines, and marked identifiers.


golangci-lint demo

A short 1.5-minute video demo of analyzing beego (asciicast).


Documentation is hosted at

Stargazers over time


Download Details:

Author: Golangci
Source Code: 
License: GPL-3.0 license

#go #golang #ci #linter 

Gordon Taylor


Get Details About The Current Continuous integration Environment


Get details about the current Continuous Integration environment.

Please open an issue if your CI server isn't properly detected :)  


npm install ci-info --save


var ci = require('ci-info')

if (ci.isCI) {
  console.log('The name of the CI server is:',
} else {
  console.log('This program is not running on a CI server')
}
Supported CI tools

Officially supported CI servers:

  • Azure Pipelines: ci.AZURE_PIPELINES
  • Bamboo by Atlassian: ci.BAMBOO (no PR detection 🚫)
  • Bitbucket Pipelines: ci.BITBUCKET
  • Cirrus CI: ci.CIRRUS
  • Expo Application Services: ci.EAS (no PR detection 🚫)
  • Jenkins CI: ci.JENKINS
  • Magnum CI: ci.MAGNUM (no PR detection 🚫)
  • Netlify CI: ci.NETLIFY
  • Sail CI: ci.SAIL
  • Solano CI: ci.SOLANO
  • Strider CD: ci.STRIDER (no PR detection 🚫)
  • TeamCity by JetBrains: ci.TEAMCITY (no PR detection 🚫)
  • Travis CI: ci.TRAVIS
  • Visual Studio App Center: ci.APPCENTER (no PR detection 🚫)


ci.name returns a string containing the name of the CI server the code is running on. If a CI server is not detected, it returns null.

Don't depend on the value of this string staying the same for a specific vendor. If you find yourself writing === 'Travis CI', you most likely want to use ci.TRAVIS instead.


ci.isCI returns a boolean: true if the code is running on a CI server, otherwise false.

Some CI servers not listed here might still cause ci.isCI to be true if they use certain vendor-neutral environment variables. In those cases ci.name will be null and no vendor-specific boolean will be set to true.
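Under the hood, this kind of detection is driven by environment variables. Below is a minimal sketch of the idea, not the ci-info implementation itself; the vendor table is illustrative (though TRAVIS and JENKINS_URL are real variables set by those servers):

```javascript
// Sketch: vendor-specific env vars identify the server; a vendor-neutral
// variable like CI=true only proves "some CI" is running, so the name stays null.
const vendors = [
  { name: 'Travis CI', envVar: 'TRAVIS' },
  { name: 'Jenkins CI', envVar: 'JENKINS_URL' }
]

function detect (env) {
  const match = vendors.find(v => v.envVar in env)
  return {
    isCI: Boolean(match) || env.CI === 'true',
    name: match ? match.name : null
  }
}

console.log(detect({ TRAVIS: 'true' })) // { isCI: true, name: 'Travis CI' }
console.log(detect({ CI: 'true' }))     // { isCI: true, name: null }
```

The real library ships a much longer vendor list and also matches specific env var values, but the shape of the logic is the same.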


ci.isPR returns a boolean when PR detection is supported for the current CI server: true if a PR is being tested, otherwise false. If PR detection is not supported for the current CI server, the value will be null.
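PR detection works the same way, from vendor-specific variables. A hedged sketch using Travis CI's convention (Travis sets TRAVIS_PULL_REQUEST to the PR number, or the string 'false' for non-PR builds); this is illustrative, not the ci-info source:

```javascript
// Sketch of per-vendor PR detection.
function isPR (env) {
  if (!('TRAVIS' in env)) return null // vendor unknown: PR detection unsupported
  return env.TRAVIS_PULL_REQUEST !== 'false'
}

console.log(isPR({ TRAVIS: 'true', TRAVIS_PULL_REQUEST: '42' }))    // true
console.log(isPR({ TRAVIS: 'true', TRAVIS_PULL_REQUEST: 'false' })) // false
console.log(isPR({}))                                               // null
```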


A vendor-specific boolean constant is exposed for each supported CI vendor. A constant will be true if the code is determined to be running on the given CI server, otherwise false.

Examples of vendor constants are ci.TRAVIS or ci.APPVEYOR. For a complete list, see the support table above.
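The difference between comparing the name string and using a vendor constant can be sketched with a stand-in object (in real code the object comes from require('ci-info')):

```javascript
// Stand-in for the object returned by require('ci-info').
const ci = { name: 'Travis CI', TRAVIS: true, APPVEYOR: false }

// Brittle: breaks if the vendor's display string ever changes.
const brittle = ci.name === 'Travis CI'

// Robust: the vendor constant is part of the API contract.
const robust = ci.TRAVIS

console.log(brittle, robust) // true true
```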

Deprecated vendor constants that will be removed in the next major release:

  • ci.TDDIUM (Solano CI). This has been renamed to ci.SOLANO.

Download Details:

Author: Watson
Source Code: 
License: MIT license

#javascript #ci #environment 

Get Details About The Current Continuous integration Environment
Gordon Taylor


is-ci: Detect If The Current Environment Is A CI Server


Returns true if the current environment is a Continuous Integration server.


npm install is-ci --save

Programmatic Usage

const isCI = require('is-ci')

if (isCI) {
  console.log('The code is running on a CI server')
}
CLI Usage

For CLI usage you need to have the is-ci executable in your PATH. There are a few ways to do that:

  • Either install the module globally using npm install is-ci -g
  • Or add the module as a dependency to your app, in which case it can be used inside your package.json scripts as-is
  • Or provide the full path to the executable, e.g. ./node_modules/.bin/is-ci

is-ci && echo "This is a CI server"

Supported CI tools

Refer to the ci-info docs for all supported CIs.

Please open an issue if your CI server isn't properly detected :)  

Download Details:

Author: Watson
Source Code: 
License: MIT license

#javascript #ci #environment 

is-ci: Detect If The Current Environment Is A CI Server
Rachel Cole


GitLab CI/CD Tutorial | Build Production-ready CI/CD Pipelines

GitLab CI/CD Full Course released - CI/CD with Docker | K8s | Microservices!

Learn how to build production-ready CI/CD pipelines in one comprehensive and practical course!

GitLab CI/CD is one of the most popular CI/CD platforms, and more and more companies are adopting it. So the need for developers and DevOps engineers who know how to build complete CI/CD pipelines on GitLab is increasing.

While many GitLab courses teach you only the basics, we will dive into more advanced demos, like implementing dynamic versioning, using caching to speed up pipeline execution, or deploying to a Kubernetes (K8s) cluster. By the end you'll have built several CI/CD pipelines with real-life examples and best practices!

As usual you can expect complex topics explained in a simple way, animations to help you understand the concepts better and lots of hands-on demos!

▬▬▬▬▬▬  🚀 By the end of this course, you'll be able to... 🚀  ▬▬▬▬▬▬ 
✅  Confidently use GitLab CI/CD at your work
✅  Set up self-managed GitLab Runners
✅  Build and deploy containers with Docker Compose
✅  Build a Multi-Stage Pipeline
✅  Configure a CI/CD Pipeline for Monorepo Microservices
✅  Configure a CI/CD Pipeline for Polyrepo Microservices
✅  Deploy to a managed Kubernetes cluster
✅  Set up a CI/CD pipeline following best practices

▬▬▬▬▬▬  📚 What you'll learn 📚 ▬▬▬▬▬▬ 
✅  Pipelines, Jobs, Stages
✅  Regular & Secret Variables
✅  Workflow Rules
✅  Speed up Pipeline using Cache
✅  Configure Job Artifacts (test reports, passing files and env vars)
✅  Conditionals
✅  GitLab Runners & Executors
✅  GitLab's built-in Docker registry
✅  GitLab Environments
✅  GitLab's Job Templates
✅  Reuse pipeline configuration by writing your own ci-templates job library
✅  needs, dependencies, extends, etc.

►  More info here:

#gitlab #gitlabcicd #docker #k8s #microservices #ci #cd 

GitLab CI/CD Tutorial | Build Production-ready CI/CD Pipelines
Veronica Roob


Awesome PHP: Libraries and Applications for Continuous Integration

Continuous Integration

Libraries and applications for continuous integration.

  • CircleCI - A continuous integration platform.
  • GitLab CI - Lets GitLab CI test, build and deploy your code. Similar to Travis CI.
  • Jenkins - A continuous integration platform with PHP support.
  • JoliCi - A continuous integration client written in PHP and powered by Docker.
  • PHPCI - An open source continuous integration platform for PHP.
  • SemaphoreCI - A continuous integration platform for open source and private projects.
  • Shippable - A Docker-based continuous integration platform for open source and private projects.
  • Travis CI - A continuous integration platform.
  • Setup PHP - A GitHub Action for PHP.

Author: ziadoz
Source Code:
License: WTFPL License

#php #ci 

Awesome PHP: Libraries and Applications for Continuous Integration
Veronica Roob


PHPCI: A Free and Open Source Continuous Integration Tool


PHPCI is a free and open source (BSD License) continuous integration tool specifically designed for PHP. We've built it with simplicity in mind, so whilst it doesn't do everything Jenkins can do, it is a breeze to set up and use.

What it does:

  • Clones your project from GitHub, Bitbucket or a local path.
  • Allows you to set up and tear down test databases.
  • Installs your project's Composer dependencies.
  • Runs through any combination of the supported plugins.
  • You can mark directories for the plugins to ignore.
  • You can mark certain plugins as being allowed to fail (but still run).

What it doesn't do (yet):

  • Virtualised testing.
  • Multiple PHP-version tests.
  • Install PEAR or PECL extensions.
  • Deployments - We strongly recommend using Deployer

Getting Started:

We've got documentation on our website on installing PHPCI and adding support for PHPCI to your projects.


Contributions from others would be very much appreciated! Please read our guide to contributing for more information on how to get involved.


Your best place to go is the mailing list. If you're already a member of the mailing list, you can simply email

Author: dancryer
Source Code:
License: BSD-2-Clause License

#php #ci 

PHPCI: A Free and Open Source Continuous Integration Tool
Cleora Roob


npm ci vs. npm install: Why You Should Use npm ci in Your Node.js DevOps Pipelines

This video gives a deep dive into npm ci vs. npm install and explains why you should use npm ci in your Node.js DevOps production pipelines: npm ci is both faster and more reproducible than npm install.

#npm #ci #devops #pipelines

npm ci vs. npm install: Why You Should Use npm ci in Your Node.js DevOps Pipelines
Archie Powell


How CI Tightens Enterprise Data Security

With data security becoming ever-more challenging, continuous intelligence can offer hope to the enterprise.

The importance of a strong data security strategy is pretty clear. Even little “Mom and Pop” businesses worry about hackers stealing personal information, planting ransomware, and launching denial of service attacks. For the enterprise, the job of the CISO and their team keeps getting tougher, particularly since the pandemic changed everything.

Take the case of the Texas Health and Human Services agency. Prior to the pandemic, that department saw 90 million attack attempts per year. Since COVID-19 hit, attacks have increased five times over, to 532 million in a year. Meanwhile, CISOs across industries are relying on outdated, report-based threat intelligence.

Security challenges have outpaced the ability of humans and yesterday's security tools to deal with them. The old idea of human staff chasing down every alert generated by security software doesn't scale in the face of a million attacks per day. Plus, experts estimate that up to half of all alerts are false positives. So blocking everything isn't feasible.

A scary new world

In addition, today’s reality for the evolving enterprise is that cloud computing and third-party apps are core concepts. That means massive amounts of data are stored or created outside of the legacy on-premise systems. Then factor in how a growing number of employees are working from home.

CI-enabled data security

That’s where continuous intelligence (CI) can play a key role in a cybersecurity strategy. Think of CI as the ability for security tools to constantly learn what is going on within enterprise systems and which threats require immediate action.

#artificial intelligence technologies #continuous intelligence #data #security #ci

How CI Tightens Enterprise Data Security
Zachariah Wiza


Tekton vs. Argo Workflows - Kubernetes-Native CI/CD Pipelines

Which self-managed, Kubernetes-native CI/CD pipeline is the best choice: Tekton or Argo Workflows? Which one should you pick?

#tekton #argo #argoworkflows #argopipelines

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands:
🎬 Tekton:
🎬 Argo Workflows and Pipelines:
🎬 Argo Events:
🎬 Automation of Everything:
🎬 Kustomize:
🎬 GitHub CLI:
🎬 Kaniko:

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Tekton vs. Argo Workflows and Pipelines
01:03 Comparison criteria
01:45 Templating
03:46 Pipelines
08:57 Web UI
11:23 Events and triggers
13:56 Catalogs and hubs
16:41 Documentation
17:12 Community
18:55 Final verdict

▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
📚 DevOps Catalog, Patterns, And Blueprints:
📚 Books and courses:
🎤 Podcast:
💬 Live streams:

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter:
➡ LinkedIn:

#kubernetes #cd #ci

Tekton vs. Argo Workflows - Kubernetes-Native CI/CD Pipelines
Zachariah Wiza


Tekton - Kubernetes Cloud-Native CI/CD Pipelines And Workflows

Tekton is a powerful and flexible open-source framework for creating CI/CD systems that aims to become the de facto standard for running pipelines and workflows in Kubernetes. It allows developers to build, test, and deploy across cloud providers and on-premise systems.

#tekton #kubernetes #ci #cd

▬▬▬▬▬▬ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Intro
00:59 What is Tekton?
02:15 Setup
02:39 Tekton tasks
03:59 Tekton pipelines
11:14 Running Tekton pipelines
15:59 Tekton Web UI
18:00 Handling events
19:47 Tekton Hub
20:53 Pros and cons of using Tekton

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands:
🔗 Tekton:
🎬 Argo Workflows and Pipelines:
🎬 Automation of Everything:
🎬 Kaniko:

▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
📚 DevOps Catalog, Patterns, And Blueprints:
📚 Books and courses:
🎤 Podcast:
💬 Live streams:

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter:
➡ LinkedIn:

#kubernetes #ci #cd

Tekton - Kubernetes Cloud-Native CI/CD Pipelines And Workflows
Zachariah Wiza


GitHub Actions Review and Tutorial

What is GitHub Actions? How do you use it, and how does it work? How can we leverage its marketplace, and what is the pricing? Is it a good solution for CI/CD pipelines?

Let’s answer those and other questions through a tutorial and review.

#github #githubactions #ci #cd

▬▬▬▬▬▬ Timecodes ⏱ ▬▬▬▬▬▬
00:00 What is GitHub Actions?
02:59 Setup
03:39 Exploring GitHub Actions syntax
10:44 Running GitHub Actions
12:46 Exploring other scenarios
18:39 Pricing
19:57 Pros and cons
25:53 Who should use GitHub Actions?

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands:
🔗 GitHub Actions:
🎬 Continuous integration, delivery, deployment, and testing explained:
🎬 GitHub CLI:
🎬 K3d:
🎬 Kustomize:

▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
📚 DevOps Catalog, Patterns, And Blueprints:
📚 Books and courses:
🎤 Podcast:
💬 Live streams:

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter:
➡ LinkedIn:

#github #ci #cd

GitHub Actions Review and Tutorial