Installs Jenkins CI on RHEL/CentOS and Debian/Ubuntu servers.
Requires curl to be installed on the server. Also, newer versions of Jenkins require Java 8+ (see the test playbooks inside the molecule/default directory for an example of how to use newer versions of Java for your OS).
Available variables are listed below, along with default values (see defaults/main.yml):
jenkins_package_state: present
The state of the jenkins package install. By default this role installs Jenkins but will not upgrade it (when using package-based installs). If you want to always update to the latest version, change this to latest.
jenkins_hostname: localhost
The system hostname; usually localhost works fine. This will be used during setup to communicate with the running Jenkins instance via HTTP requests.
jenkins_home: /var/lib/jenkins
The Jenkins home directory, which, among other things, is used for storing artifacts, workspaces, and plugins. This variable allows you to override the default /var/lib/jenkins location.
jenkins_http_port: 8080
The HTTP port for Jenkins' web interface.
jenkins_admin_username: admin
jenkins_admin_password: admin
Default admin account credentials which will be created the first time Jenkins is installed.
jenkins_admin_password_file: ""
Default admin password file which will be created the first time Jenkins is installed, as /var/lib/jenkins/secrets/initialAdminPassword.
jenkins_jar_location: /opt/jenkins-cli.jar
The location at which the jenkins-cli.jar jarfile will be kept. This is used for communicating with Jenkins via the CLI.
jenkins_plugins:
  - blueocean
  - name: influxdb
    version: "1.12.1"
Jenkins plugins to be installed automatically during provisioning. Defaults to an empty list ([]). Items can be a plain name, or a dictionary with name and version keys to pin a specific version of a plugin.
jenkins_plugins_install_dependencies: true
Whether Jenkins plugins to be installed should also install any plugin dependencies.
jenkins_plugins_state: present
Use latest to ensure all plugins are running the most up-to-date version. For any plugin that has a specific version set in the jenkins_plugins list, state present will be used instead of the jenkins_plugins_state value.
jenkins_plugin_updates_expiration: 86400
Number of seconds after which a new copy of the update-center.json file is downloaded. Set it to 0 if no cache file should be used.
jenkins_updates_url: "https://updates.jenkins.io"
The URL to use for Jenkins plugin updates and update-center information.
jenkins_plugin_timeout: 30
The server connection timeout, in seconds, when installing Jenkins plugins.
jenkins_version: "2.346"
jenkins_pkg_url: "http://www.example.com"
(Optional) The Jenkins version can be pinned to any version available on http://pkg.jenkins-ci.org/debian/ (Debian/Ubuntu) or http://pkg.jenkins-ci.org/redhat/ (RHEL/CentOS). If the Jenkins version you need is not available in the default package URLs, you can override the URL with your own by setting jenkins_pkg_url. (Note: the role depends on the same naming convention that http://pkg.jenkins-ci.org/ uses.)
jenkins_url_prefix: ""
Used for setting a URL prefix for your Jenkins installation. The option is added as --prefix={{ jenkins_url_prefix }} to the Jenkins initialization java invocation, so you can access the installation at a path like http://www.example.com{{ jenkins_url_prefix }}. Make sure you start the prefix with a / (e.g. /jenkins).
jenkins_connection_delay: 5
jenkins_connection_retries: 60
Amount of time and number of times to wait when connecting to Jenkins after initial startup, to verify that Jenkins is running. Total time to wait = delay * retries, so by default this role will wait up to 300 seconds before timing out.
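For slower hosts you can extend that window by overriding both variables; a sketch that waits up to 10 minutes (the values here are illustrative, not role defaults):

```yaml
# Wait up to 600 seconds (10s delay x 60 retries) for Jenkins to come up.
jenkins_connection_delay: 10
jenkins_connection_retries: 60
```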
jenkins_prefer_lts: false
By default, this role will install the latest version of Jenkins from the official repositories for your platform. You can install the current LTS version instead by setting this to true.
The default repositories (listed below) can be overridden as well.
# For RedHat/CentOS:
jenkins_repo_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.repo
jenkins_repo_key_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key
# For Debian/Ubuntu:
jenkins_repo_url: deb https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }} binary/
jenkins_repo_key_url: https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key
It is also possible to prevent the repo file from being added by setting jenkins_repo_url: ''. This is useful if, for example, you sign your own packages or run internal package management (e.g. Spacewalk).
jenkins_options: ""
Extra options (e.g. setting the HTTP keep-alive timeout) to pass to Jenkins on startup via JENKINS_OPTS in the systemd override.conf file. By default, no options are specified.
jenkins_java_options: "-Djenkins.install.runSetupWizard=false"
Extra Java options for the Jenkins launch command, configured via JENKINS_JAVA_OPTS in the systemd override.conf file. For example, if you want to configure the timezone Jenkins uses, add -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York. By default, the option to disable the Jenkins 2.0 setup wizard is added.
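For instance, the setup-wizard and timezone options can be combined in one value (the timezone here is illustrative):

```yaml
jenkins_java_options: "-Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York"
```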
jenkins_init_changes:
  - option: "JENKINS_OPTS"
    value: "{{ jenkins_options }}"
  - option: "JAVA_OPTS"
    value: "{{ jenkins_java_options }}"
  - option: "JENKINS_HOME"
    value: "{{ jenkins_home }}"
  - option: "JENKINS_PREFIX"
    value: "{{ jenkins_url_prefix }}"
  - option: "JENKINS_PORT"
    value: "{{ jenkins_http_port }}"
Changes made to the Jenkins systemd override.conf file; the default set of changes sets the configured URL prefix, Jenkins home directory, and Jenkins port, and adds the configured Jenkins and Java options for Jenkins' startup. You can add other option/value pairs if you need to set other options in the override.conf file.
jenkins_proxy_host: ""
jenkins_proxy_port: ""
jenkins_proxy_noproxy:
  - "127.0.0.1"
  - "localhost"
If you are running Jenkins behind a proxy server, configure these options appropriately. Otherwise Jenkins will be configured with a direct Internet connection.
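A sketch of the proxy variables above filled in (the host and port values are illustrative, not defaults):

```yaml
jenkins_proxy_host: "proxy.internal.example.com"   # hypothetical internal proxy
jenkins_proxy_port: "3128"
jenkins_proxy_noproxy:
  - "127.0.0.1"
  - "localhost"
```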
Dependencies: None.
- hosts: jenkins
  become: true
  vars:
    jenkins_hostname: jenkins.example.com
    java_packages:
      - openjdk-8-jdk
  roles:
    - role: geerlingguy.java
    - role: geerlingguy.jenkins
Note that java_packages may need different versions depending on your distro (e.g. openjdk-11-jdk for Debian 10, or java-1.8.0-openjdk for RHEL 7 or 8).
Author: Geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-jenkins
License: MIT license
In this article, we will learn about CI/CD Best Practices You Need to Know. Continuous Integration and Delivery (CI/CD) take software development from code to a live product. CI/CD forms part of DevOps processes, with many commonly agreed-upon best practices you can follow to improve your deployment pipeline.
If you work in DevOps, you've probably used a build server like Jenkins and a deployment tool like Octopus Deploy to complete your deployment process. Octopus supports the Continuous Delivery side of CI/CD, providing a best-in-category product that makes complex deployments easier.
At Octopus, we believe in the power of 8. An octopus has 8 limbs, so here are 8 best practices to help your deployment journey.
You can also learn more in our DevOps engineer's handbook.
Agile methodologies are vital to CI/CD and DevOps. Agile is a project management approach involving continuous collaboration with stakeholders and continuous improvement at each stage of the deployment process.
The principle of Agile is having frequent feedback through small development iterations so developers can closely align the final product with the user's needs. Agile methodologies contrast traditional waterfall methods, where projects are scoped and delivered in a single phase.
We recommend software projects are managed according to Agile and Lean principles so the continuous feedback loop can improve the product. We've seen Agile implemented as a checklist for upper management to tick off. In the initial stages, software teams apply Agile to meet the checklist. As teams have permission to explore the Agile space, they start to see the real benefits.
If you work in software, you've almost certainly used Git. The wars on source-controlled code have been fought and won, and Git is now synonymous with source control.
Source-controlled code allows a complete history and rollback of code to previous versions. You can also resolve conflicts by using Git's merging methods.
Committing a code change should trigger a CI/CD pipeline build. This trigger allows developers to test and validate changes to the codebase earlier. After a code change is set up to trigger an automated build, developers should be encouraged to commit their code at least once a day. Daily commits trigger automated tests more frequently so developers notice any errors sooner.
Config as Code represents your deployment process in a Git-based system. Deployments inherit all the benefits of Git, such as branching, version control, and approvals as pull requests.
Config as Code lets you store your deployment process in Git. You can test changes to deployments in a branch and validate them through a pull request. Git-based deployments make it easier to transfer a deployment set up from one environment to another.
In 2022 Q1, we released Config as Code for Octopus Deploy, and believe we set an industry standard. Other Config as Code solutions sacrifice usability for functionality. In Octopus, you get all the features of Config as Code, whether you use the UI or the version-controlled implementation.
A green build in a CI/CD pipeline means that every test passed, and the release has progressed to the next stage. Software teams aim to keep builds green.
You should choose a deployment tool that surfaces information to help keep builds green. Many deployment processes only use a build server that pushes releases into production. In practice, only using a build server makes it harder to manage a release between different deployment stages. Using a dedicated deployment tool gives you a dedicated management layer to keep builds green.
A build server doesn't include the concept of deployment stages. Octopus Deploy, however, separates a release into Test, Dev, and Production environments, and environments can exist at different release versions in each stage. Our UI shows each release's deployment stage and transitions releases between stages. The Octopus UI also shows logs and error messages to help developers quickly identify failing builds.
Testing code changes is essential to producing reliable releases. The testing suite should cover all use cases for the product, from functional to non-functional tests. These tests should be automated so that a code change can trigger an automated test and build. Automated tests improve the agility of a software development project to get releases live faster.
A survey by mabl on the state of testing in DevOps indicates that automated testing (at least 4 to 5 different types of tests) is key to customer happiness. In the 2021 State of DevOps DORA Report, continuous testing is an indicator of success: elite performers who meet their reliability targets are 3.7 times more likely to leverage continuous testing.
Developers use telemetry data (logs, metrics, and traces) to understand their system's internal state. Telemetry unlocks observability so developers can act on data to fix their system. When you have telemetry data, you can use observability tools to add monitoring capabilities to your system.
Monitoring key system metrics can help diagnose a system for vulnerabilities and identify improvements. In the DevOps community, DORA metrics are commonly accepted as crucial metrics for the success of a deployment pipeline.
Octopus lets you measure results, compare project statuses, and continuously improve with DevOps Insights focused on the DORA metrics.
Every year, new technologies appear that people claim will revolutionize the IT playing field. Whether it's containerization, machine learning, or blockchain, some technologies change the playing field, while others are too immature to make a real impact. When managing a CI/CD pipeline, it's essential to choose only technologies that are fit for purpose.
While being cloud-first makes sense for some parts, forcing everything onto the cloud might not be the right solution. Adoption of new technologies can bring significant improvements, but taking a measured approach avoids unnecessary pain when the costs of adoption outweigh the benefits.
As software projects get larger, the security risks increase with more data handling, users, and dependencies. Your deployment process should have a security strategy.
Many cloud providers, like AWS, Azure, and Google, have built-in security features such as IAM, secrets, and role-based permissions. You can use these features to manage some security concerns.
Customers are increasingly concerned with security, and companies need to invest in certifications such as ISO 27001 and SOC II to certify their compliance with security regulations.
On May 12, 2021, The US government released Executive Order 14028, "Improving the Nation's Cybersecurity". The Order requires all vendors of government software projects to produce a Software Bill of Materials (SBOM). The SBOMs detail all software components so that governments can screen software for cybersecurity. If you want an example of how to produce an SBOM and attach it to your deployment process, we created a free tool called the Octopus Workflow Builder that can help.
CI/CD is part of the DevOps model and helps bring software projects from code to customers. If you work in DevOps and implement CI/CD, you should follow industry-standard best practices for your pipeline. To help, this post covered 8 best practices you can use to make the most of CI/CD.
Many tools can help you with CI/CD, from build servers and deployment tools to monitoring solutions. Octopus Deploy fits into CI/CD as a Continuous Deployment solution making complex deployments easier.
Original article sourced at: https://octopus.com
CI/CD (Continuous Integration and Continuous Delivery) incorporate values, a set of operating principles, and a collection of practices that enable application development teams to deliver changes more reliably and regularly; this is also known as CI/CD pipeline. But what do the individual terms mean?
Continuous Integration is an approach in which developers merge their code into a shared repository several times a day. To verify the integrated code, automated tests and builds are run against it.
Continuous Delivery is a strategy in which the development teams ensure the software is reliable enough to release at any time. On each commit, the software passes through the automated testing process. If it successfully passes the testing, it is ready for release into production.
CI is the short form for Continuous Integration, and CD is the short form for Continuous Delivery. The CI/CD pipeline is a crucial part of the modern DevOps environment: it is the path the software follows to production using CI and Continuous Delivery practices. It is part of the software development lifecycle and has various stages or phases through which the software passes.
In this phase of the CI/CD pipeline, the developers' code is kept in version control software such as Git, Apache Subversion, and others. Version control tracks the commit history of the software code so that it can be changed if needed.
In the build phase, developers' code is pulled from the version control system and then compiled into a build.
When the software reaches this stage, various tests are run on it. One of the main tests is the unit test, which tests individual units of the software. After successful testing, the staging phase begins: having passed its tests, the software is ready to be deployed to the staging environment/server. The code can be reviewed and finalized here before the final tests are conducted on the software.
After reaching the staging environment, another set of automated tests is run on the software. If the software passes these tests, it moves on to the next phase: the deployment phase.
Once the automated testing is over, the software is released to production. However, if any error occurs during the testing or deployment phase, the software is passed back to the development team through version control so the errors can be checked and fixed. The other stages can be repeated if necessary.
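The stages above can be sketched as a declarative pipeline config. This is a hedged illustration in GitLab-style YAML; the job names and scripts are assumptions, not from the article:

```yaml
stages:          # the phases described above
  - build
  - test
  - staging
  - deploy

build:
  stage: build
  script: make build            # compile the committed code

unit-test:
  stage: test
  script: make test             # unit tests run on every build

deploy-staging:
  stage: staging
  script: ./deploy.sh staging   # hypothetical deploy script

deploy-production:
  stage: deploy
  script: ./deploy.sh production
  when: manual                  # gate the final release
```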
Automate the CI/CD process to get the best results. Various Continuous Integration tools help us automate the process precisely and with the least effort. These tools are mostly open source and help with collaborative software development. Some of these tools are:
Jenkins is the most commonly used tool for the CI/CD process. It is one of the earliest and most powerful tools. It has various interfaces and inbuilt tools, which help us in the CI/CD process's automation. At first, it was introduced as a part of a project named Hudson, released in 2005. Then it was officially released as Jenkins in 2011. It has a vast plugin ecosystem, which helps in delivering the features that we need.
Circle CI is getting popular these days. It is becoming one of the best build platforms. It is a modern tool for the CI/CD process. One of its newest features is Circle CI Orbs. It has sharable code packages that help in setting the build pipeline easily and quickly.
GitLab CI is built into GitLab, a web-based DevOps tool that provides a Git-repository manager for managing Git repositories. It was integrated into the GitLab software after being a standalone project, released in September 2015. The process is defined within a code repository, and tools named runners are used to complete the work. We can choose different executors when configuring runners, like Docker, VirtualBox, and many more. It uses YAML configuration syntax to define the process in the repository.
Buddy is one of the newest and smartest tools. It is aimed at developers and helps lower the entry threshold into DevOps. Initially its name was meat!, which was changed to Buddy in November 2015, and it started out as a cloud-only service. It does not require YAML configuration, but it supports .yml files.
GoCD is an open-source tool that helps development teams with the Continuous Delivery and Continuous Integration process. Initially released under the name Cruise in 2007 by ThoughtWorks, it was renamed GoCD in 2010.
CI/CD is a process involving both: it starts with continuous integration, and continuous delivery picks up where CI ends. It goes hand in hand with the development approach named DevOps.
What all are the benefits of incorporating CI/CD in your business framework? Know the details below:
Listed below are some common pitfalls one may experience while working with CI/CD:
To shift from traditional models to DevOps, existing organizations need to go through a transition process, which can be long and difficult. It can take months, or even longer if you don't follow the right transition steps. The steps to adopt CI/CD:
These points can help in choosing which processes to automate based on priority. They can also help with the CI/CD testing process, where we often get confused about whether to automate functional testing or UI testing.
Empower your enterprise with a CI/CD pipeline to minimize issues at deployment and achieve a faster production rate.
Many organizations fail to distinguish between the two, but they are very different concepts. In continuous deployment, changes to the code repository pass through the pipeline and, if successful, are deployed immediately to production; deployment to the production environment happens without manual approval.
Continuous Delivery is the next step after Continuous Integration. They are two different things, but implementing CI/CD takes the collaboration of both, and it is not possible to automate collaboration and communication.
In many cases, the scrum team may create a dashboard without proper progressive assessment. The team falls prey to the logical misconception that the given metrics must be important. The team may not know what to track and may follow the wrong metrics. Different members of a team may have different preferences. Some also prefer to use traffic indicators for the work. Some may not like the work that others have done. Creating meaningful and useful CI/CD dashboards may be tricky and extremely difficult, as some may not be satisfied with the work. Listening to everyone becomes difficult.
The process is complicated for some developers and testers used to traditional in-house software development techniques. There are two solutions: re-train existing employees for the automation process, or hire new people who already know it. Both solutions are a cost to the organization.
After the transition, maintenance is necessary to ensure the pipeline keeps working properly and the automation does not break down. The bigger the organization, the more difficult it is to maintain the pipelines for different services.
There are some practices in the CI/CD process that greatly enhance its performance, and adhering to them can help avoid some common problems:
In the case of CI/CD, failures are immediately visible, and production is stopped until the cause of the failure is found and corrected. This is an important mechanism that keeps further environments safe from untrusted code. Because the process is built solely for software development, integration, and delivery work, it has an advantage over other procedures: it is automated and hence faster.
Some tests are comparatively faster than others. We should run these tests early. Running the fastest tests first helps in finding the errors faster. It is essential to find errors in software development as soon as possible to prevent further problems.
Developers should run the tests locally before committing or sharing their code to the CI/CD pipeline or shared repository. This step is beneficial as it helps troubleshoot software problems before sharing with others and is advantageous for the developer.
CI/CD pipelines are the core part of the CI/CD process; they are responsible for faster integration and faster delivery. We should find and apply methods to improve the speed and optimize the pipeline environment.
CI/CD is among the best practices for the DevOps teams to implement using DevOps Assembly Line. Additionally, it's a unique methodology for the agile enterprise that facilitates the development team to achieve the business requirements, best code quality, and security because deployment steps are automated.
Original article source at: https://www.xenonstack.com/
Infracost is an open-source tool used to forecast and estimate your cloud costs on every Terraform pull request. Multiple scenarios and scripts can be created to forecast cloud costs. It supports the AWS, Azure, and GCP cloud platforms and over 230 Terraform resources. It also works with Terraform Cloud and Terragrunt. Infracost can use the hosted Cloud Pricing API or a self-hosted one.
Infracost can be integrated with any CI/CD tool, breaking down the cost of new Terraform resources every time a pull request or merge request is created. In this blog, we will see how we can use GitLab CI templates in the merge request pipeline to estimate and forecast cloud costs.
The directory structure for your application or pipeline repo should look like this. The command to check the directory structure is:

tree -I .git -a
The output will look like this:
.
├── .gitlab
│   └── plan-json.yml
├── .gitlab-ci.yml
├── README.md
└── terraform
    ├── .infracost
    │   └── terraform_modules
    ├── main.tf
    └── README.md
1. Create a .gitlab-ci.yml file in the main directory. The main GitLab pipeline is defined in the .gitlab-ci.yml file. This acts as the parent job, which triggers a downstream pipeline called the child pipeline.
stages:
  - all_stage

mr-gitlab-terraform:
  stage: all_stage
  rules:
    - if: "$CI_MERGE_REQUEST_IID"
    - if: "$CI_COMMIT_TAG"
    - if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"
  trigger:
    include: ".gitlab/plan-json.yml"
    strategy: depend
2. Create a GitLab merge request job template. Create the file .gitlab/plan-json.yml and copy the content below into it. This will act as the downstream pipeline triggered by the parent pipeline.
workflow:
  rules:
    - if: "$CI_MERGE_REQUEST_IID"
    - if: "$CI_COMMIT_TAG"
    - if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"

variables:
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: terraform

stages:
  - plan
  - infracost

cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform

plan:
  stage: plan
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  before_script:
    - cd ${TF_ROOT}
    - terraform init
  script:
    - terraform plan -out tfplan.binary
    - terraform show -json tfplan.binary > tplan.json
  artifacts:
    paths:
      - ${TF_ROOT}/tplan.json

infracost:
  stage: infracost
  image:
    name: infracost/infracost:ci-0.10
    entrypoint: [""]
  dependencies:
    - plan
  script:
    - git clone $CI_REPOSITORY_URL --branch=$CI_MERGE_REQUEST_TARGET_BRANCH_NAME --single-branch /tmp/base
    - ls /tmp/base
    - infracost configure set api_key $INFRACOST_API_KEY
    - infracost breakdown --path=/tmp/base/${TF_ROOT} --format=json --out-file=infracost-base.json
    - INFRACOST_ENABLE_CLOUD=true infracost diff --path=${TF_ROOT} --compare-to=infracost-base.json --format=json --out-file=infracost.json
    - infracost comment gitlab --path=infracost.json --repo=$CI_PROJECT_PATH --merge-request=$CI_MERGE_REQUEST_IID --gitlab-server-url=$CI_SERVER_URL --gitlab-token=$GITLAB_TOKEN --behavior=update
  variables:
    INFRACOST_API_KEY: $INFRACOST_API_KEY
    GITLAB_TOKEN: $GITLAB_TOKEN
Explanation: the downstream pipeline first creates a Terraform plan from the current branch. The infracost job then clones the target branch (main in this case), runs a cost breakdown against it, compares the two, breaks down the cost difference, and finally comments on the merge request.
3. Now, whenever you create an MR in GitLab, it will forecast the Terraform infrastructure cost for you. When the pipeline succeeds, it will comment on your MR with the cost estimate and breakdown.
You can use the extends: keyword to extend this job template for Terraform files kept in the terraform/ dir.

Infracost is used to forecast cloud infrastructure costs before any resources are created. You can integrate this tool with any CI/CD system to forecast cost whenever a pull or merge request is created.
Original article source at: https://blog.knoldus.com/
Fast linters runner for Go

golangci-lint is a fast Go linters runner. It runs linters in parallel, uses caching, supports YAML configuration, has integrations with all major IDEs, and has dozens of linters included.
Short 1.5 min video demo of analyzing beego.
Documentation is hosted at https://golangci-lint.run.
Author: Golangci
Source Code: https://github.com/golangci/golangci-lint
License: GPL-3.0 license
Get details about the current Continuous Integration environment.
Please open an issue if your CI server isn't properly detected :)
npm install ci-info --save
var ci = require('ci-info')
if (ci.isCI) {
console.log('The name of the CI server is:', ci.name)
} else {
console.log('This program is not running on a CI server')
}
Officially supported CI servers:
Name | Constant | isPR |
---|---|---|
AWS CodeBuild | ci.CODEBUILD | 🚫 |
AppVeyor | ci.APPVEYOR | ✅ |
Azure Pipelines | ci.AZURE_PIPELINES | ✅ |
Appcircle | ci.APPCIRCLE | 🚫 |
Bamboo by Atlassian | ci.BAMBOO | 🚫 |
Bitbucket Pipelines | ci.BITBUCKET | ✅ |
Bitrise | ci.BITRISE | ✅ |
Buddy | ci.BUDDY | ✅ |
Buildkite | ci.BUILDKITE | ✅ |
CircleCI | ci.CIRCLE | ✅ |
Cirrus CI | ci.CIRRUS | ✅ |
Codefresh | ci.CODEFRESH | ✅ |
Codeship | ci.CODESHIP | 🚫 |
Drone | ci.DRONE | ✅ |
dsari | ci.DSARI | 🚫 |
Expo Application Services | ci.EAS | 🚫 |
GitHub Actions | ci.GITHUB_ACTIONS | ✅ |
GitLab CI | ci.GITLAB | ✅ |
GoCD | ci.GOCD | 🚫 |
Hudson | ci.HUDSON | 🚫 |
Jenkins CI | ci.JENKINS | ✅ |
LayerCI | ci.LAYERCI | ✅ |
Magnum CI | ci.MAGNUM | 🚫 |
Netlify CI | ci.NETLIFY | ✅ |
Nevercode | ci.NEVERCODE | ✅ |
Render | ci.RENDER | ✅ |
Sail CI | ci.SAIL | ✅ |
Screwdriver | ci.SCREWDRIVER | ✅ |
Semaphore | ci.SEMAPHORE | ✅ |
Shippable | ci.SHIPPABLE | ✅ |
Solano CI | ci.SOLANO | ✅ |
Strider CD | ci.STRIDER | 🚫 |
TaskCluster | ci.TASKCLUSTER | 🚫 |
TeamCity by JetBrains | ci.TEAMCITY | 🚫 |
Travis CI | ci.TRAVIS | ✅ |
Vercel | ci.VERCEL | 🚫 |
Visual Studio App Center | ci.APPCENTER | 🚫 |
ci.name

Returns a string containing the name of the CI server the code is running on. If the CI server is not detected, it returns null.
Don't depend on the value of this string not to change for a specific vendor. If you find yourself writing ci.name === 'Travis CI', you most likely want to use ci.TRAVIS instead.
ci.isCI

Returns a boolean. Will be true if the code is running on a CI server, otherwise false.
Some CI servers not listed here might still trigger the ci.isCI boolean to be set to true if they use certain vendor-neutral environment variables. In those cases ci.name will be null and no vendor-specific boolean will be set to true.
ci.isPR
Returns a boolean if PR detection is supported for the current CI server. Will be true if a PR is being tested, otherwise false. If PR detection is not supported for the current CI server, the value will be null.
ci.<VENDOR-CONSTANT>
A vendor-specific boolean constant is exposed for each supported CI vendor. A constant will be true if the code is determined to run on the given CI server, otherwise false.
Examples of vendor constants are ci.TRAVIS or ci.APPVEYOR. For a complete list, see the support table above.
Deprecated vendor constants that will be removed in the next major release:
ci.TDDIUM (Solano CI): this has been renamed ci.SOLANO
Author: Watson
Source Code: https://github.com/watson/ci-info
License: MIT license
Returns true if the current environment is a Continuous Integration server.
npm install is-ci --save
const isCI = require('is-ci')
if (isCI) {
console.log('The code is running on a CI server')
}
For CLI usage you need to have the is-ci executable in your PATH. There are a few ways to do that:
npm install is-ci -g
./node_modules/.bin/is-ci
is-ci && echo "This is a CI server"
Refer to ci-info docs for all supported CI's
Author: Watson
Source Code: https://github.com/watson/is-ci
License: MIT license
Learn how to build production-ready CI/CD pipelines in one comprehensive and practical course!
GitLab CI/CD is one of the most popular CI/CD platforms! More and more companies are adopting it. So, the need for Developers or DevOps engineers, who know how to build complete CI/CD pipelines on GitLab is increasing.
While many GitLab courses teach you only the basics, we will dive into more advanced demos, like implementing dynamic versioning, using cache to speed up the pipeline execution or deploying to a K8s cluster. So, you'll have built several CI/CD pipelines with real life examples & best practices!
As usual you can expect complex topics explained in a simple way, animations to help you understand the concepts better and lots of hands-on demos!
▬▬▬▬▬▬ 🚀 By the end of this course, you'll be able to... 🚀 ▬▬▬▬▬▬
✅ Confidently use GitLab CI/CD at your work
✅ Set up self-managed GitLab Runners
✅ Build and deploy containers with Docker Compose
✅ Build a Multi-Stage Pipeline
✅ Configure a CI/CD Pipeline for Microservices in a Monorepo
✅ Configure a CI/CD Pipeline for Microservices in a Polyrepo
✅ Deploy to a managed Kubernetes cluster
✅ Set up a CI/CD pipeline with best practices
▬▬▬▬▬▬ 📚 What you'll learn 📚 ▬▬▬▬▬▬
✅ Pipelines, Jobs, Stages
✅ Regular & Secret Variables
✅ Workflow Rules
✅ Speed up Pipeline using Cache
✅ Configure Job Artifacts (test report, passing files and env vars)
✅ Conditionals
✅ GitLab Runners & Executors
✅ GitLab's built-in Docker registry
✅ GitLab Environments
✅ GitLab's Job Templates
✅ Reuse pipeline configuration by writing your own CI job template library
✅ needs, dependencies, extends etc.
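To give a flavor of several of the topics above (stages, cache, artifacts, workflow rules), here is a minimal, hypothetical .gitlab-ci.yml sketch. Job names, the image, and paths are illustrative examples, not material from the course:

```yaml
# Hypothetical minimal .gitlab-ci.yml illustrating stages, cache,
# artifacts, and workflow rules. Names and image are examples only.
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == "main"            # run on main
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stages:
  - test
  - build

test_job:
  stage: test
  image: node:18
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/          # cache dependencies to speed up later runs
  script:
    - npm ci
    - npm test

build_job:
  stage: build
  image: node:18
  script:
    - npm run build
  artifacts:
    paths:
      - dist/                  # pass the build output to later jobs
```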
► More info here: https://www.techworld-with-nana.com/gitlab-cicd-course
#gitlab #gitlabcicd #docker #k8s #microservices #ci #cd
1650218520
Libraries and applications for continuous integration.
Author: ziadoz
Source Code: https://github.com/ziadoz/awesome-php
License: WTFPL License
1650192420
PHPCI
PHPCI is a free and open source (BSD License) continuous integration tool specifically designed for PHP. We've built it with simplicity in mind, so whilst it doesn't do everything Jenkins can do, it is a breeze to set up and use.
We've got documentation on our website on installing PHPCI and adding support for PHPCI to your projects.
Contributions from others would be very much appreciated! Please read our guide to contributing for more information on how to get involved.
Your best place to go is the mailing list. If you're already a member of the mailing list, you can simply email php-ci@googlegroups.com.
Author: dancryer
Source Code: https://github.com/dancryer/phpci
License: BSD-2-Clause License
1626518760
npm ci vs npm install: Why You Should Use npm ci in Your Node.js DevOps Pipelines
This video gives a deep dive into npm ci vs npm install and why you should use npm ci in your Node.js DevOps production pipelines. It shows why npm ci is both faster and more reproducible than npm install.
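The difference the video covers can be summarized in a short sketch. The guard below is an illustrative pattern, not a prescribed pipeline step:

```shell
# Illustrative comparison (the npm commands are shown, not executed here):
#
#   npm install - resolves semver ranges in package.json and may rewrite
#                 package-lock.json; results can differ between runs
#   npm ci      - removes node_modules, installs the exact versions pinned
#                 in package-lock.json, and errors out if the lockfile is
#                 missing or out of sync with package.json
#
# A typical pipeline guard: prefer npm ci when a lockfile exists.
if [ -f package-lock.json ]; then
  install_cmd="npm ci"
else
  install_cmd="npm install"
fi
echo "$install_cmd"
```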
#npm #ci #devops #pipelines
1625945700
With data security becoming ever-more challenging, continuous intelligence can offer hope to the enterprise.
The importance of a strong data security strategy is pretty clear. Even little “Mom and Pop” businesses worry about hackers stealing personal information, planting ransomware, and launching denial of service attacks. For the enterprise, the job of the CISO and their team keeps getting tougher, particularly since the pandemic changed everything.
Take the case of the Texas Health and Human Services agency. Prior to the pandemic, that department saw 90 million attack attempts per year. Since Covid-19 hit, attacks have increased five times over, to 532 million in a year. Meanwhile, CISOs across industries are relying on outdated, report-based threat intelligence.
Security challenges have outpaced the ability of humans and yesterday's security tools to deal with them. The old idea of human staff chasing down every alert generated by security software doesn't scale in the face of a million attacks per day. Plus, experts estimate that up to half of all alerts are based on false positives. So, blocking everything isn't feasible.
In addition, today’s reality for the evolving enterprise is that cloud computing and third-party apps are core concepts. That means massive amounts of data are stored or created outside of the legacy on-premise systems. Then factor in how a growing number of employees are working from home.
That’s where continuous intelligence (CI) can play a key role in a cybersecurity strategy. Think of CI as the ability for security tools to constantly learn what is going on within enterprise systems and which threats require immediate action.
#artificial intelligence technologies #continuous intelligence #data #security #ci
1625367180
Which self-managed, Kubernetes-native CI/CD pipeline is the best choice? Is it Tekton or Argo Workflows? Which one should you pick?
#tekton #argo #argoworkflows #argopipelines
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/f11e690295311d8d7dd53b5128bd6d3e
🎬 Tekton: https://youtu.be/7mvrpxz_BfE
🎬 Argo Workflows and Pipelines: https://youtu.be/UMaivwrAyTA
🎬 Argo Events: https://youtu.be/sUPkGChvD54
🎬 Automation of Everything: https://youtu.be/XNXJtxkUKeY
🎬 Kustomize: https://youtu.be/Twtbg6LFnAg
🎬 GitHub CLI: https://youtu.be/BII6ZY2Rnlc
🎬 Kaniko: https://youtu.be/EgwVQN6GNJg
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Tekton vs. Argo Workflows and Pipelines
01:03 Comparison criteria
01:45 Templating
03:46 Pipelines
08:57 Web UI
11:23 Events and triggers
13:56 Catalogs and hubs
16:41 Documentation
17:12 Community
18:55 Final verdict
▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
📚 DevOps Catalog, Patterns, And Blueprints: https://www.devopstoolkitseries.com/posts/catalog/
📚 Books and courses: https://www.devopstoolkitseries.com
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter: https://twitter.com/vfarcic
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
#kubernetes #cd #ci
1625345040
Tekton is a powerful and flexible open-source framework for creating CI/CD systems that aims to become a de facto standard for running pipelines and workflows in Kubernetes. It allows developers to build, test, and deploy across cloud providers and on-premise systems.
#tekton #kubernetes #ci #cd
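To make the Task concept from the video concrete, here is a minimal, hypothetical Tekton Task and TaskRun sketch. The names and image are illustrative, not taken from the video's demo:

```yaml
# Hypothetical minimal Tekton Task: a single step running a shell script.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: say-hello
      image: alpine
      script: |
        #!/bin/sh
        echo "Hello from Tekton"
---
# A TaskRun executes the Task once:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-task-run
spec:
  taskRef:
    name: hello-task
```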
▬▬▬▬▬▬ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Intro
00:59 What is Tekton?
02:15 Setup
02:39 Tekton tasks
03:59 Tekton pipelines
11:14 Running Tekton pipelines
15:59 Tekton Web UI
18:00 Handling events
19:47 Tekton Hub
20:53 Pros and cons of using Tekton
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/633004d16587ba230fe4dbbcf97adf7e
🔗 Tekton: https://tekton.dev/
🎬 Argo Workflows and Pipelines: https://youtu.be/UMaivwrAyTA
🎬 Automation of Everything: https://youtu.be/XNXJtxkUKeY
🎬 Kaniko: https://youtu.be/EgwVQN6GNJg
▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
📚 DevOps Catalog, Patterns, And Blueprints: https://www.devopstoolkitseries.com/posts/catalog/
📚 Books and courses: https://www.devopstoolkitseries.com
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter: https://twitter.com/vfarcic
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
#kubernetes #ci #cd
1625337720
What is GitHub Actions? How do you use it, and how does it work? How can we leverage its marketplace, and what is the pricing? Is it a good solution for CI/CD pipelines?
Let's answer those and other questions through a tutorial and review.
#github #githubactions #ci #cd
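As a taste of the syntax the video explores, here is a minimal, hypothetical workflow file. The job and step layout is an illustrative example, not the video's demo:

```yaml
# Hypothetical minimal GitHub Actions workflow (.github/workflows/ci.yml).
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci       # clean, lockfile-exact install
      - run: npm test
```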
▬▬▬▬▬▬ Timecodes ⏱ ▬▬▬▬▬▬
00:00 What is GitHub Actions?
02:59 Setup
03:39 Exploring GitHub Actions syntax
10:44 Running GitHub Actions
12:46 Exploring other scenarios
18:39 Pricing
19:57 Pros and cons
25:53 Who should use GitHub Actions?
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/181614ae807a0cb961271b11bbd18d63
🔗 GitHub Actions: https://github.com/features/actions
🎬 Continuous integration, delivery, deployment, and testing explained: https://youtu.be/0ivcSjpUzl4
🎬 GitHub CLI: https://youtu.be/BII6ZY2Rnlc
🎬 K3d: https://youtu.be/mCesuGk-Fks
🎬 Kustomize: https://youtu.be/Twtbg6LFnAg
▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
📚 DevOps Catalog, Patterns, And Blueprints: https://www.devopstoolkitseries.com/posts/catalog/
📚 Books and courses: https://www.devopstoolkitseries.com
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter: https://twitter.com/vfarcic
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
#github #ci #cd