Maud Rosenbaum

Continuous Deployment Shouldn't Be Hard

Introduction

Over the past decade, continuous integration (CI) and continuous delivery (CD) have become staples of the software development lifecycle. CI automates the process of merging code and checking for basic regressions and code quality issues, relieving some of the code review burdens on your dev team. CD and automated deployments eliminate the overhead involved each time a new feature or a hotfix needs to get deployed.

Imagine if there were no more nights and weekends spent packaging builds and manually deploying across servers! A functional CI/CD setup makes it significantly easier to have a truly agile workflow, as you can deploy as frequently as you want to.

However, CD in particular can be difficult to set up, often requiring a whole new set of skills: Dockerfiles, YAML, and the idiosyncrasies of each app and environment. Especially for smaller teams, these complexities can make automated deployments feel like just a dream.

Continuous deployment doesn’t need to be this hard to set up.

As a full-stack developer and consultant who often helps dev teams increase the value they deliver each sprint, I knew it was time to take a closer look when Heroku Flow came onto my radar. Could this be the simple, straightforward solution I’d been looking for?

What Is Heroku Flow?

Heroku Flow is the umbrella for a few different Heroku products which work together to provide a full CI/CD suite of tools. For CI, there’s Heroku CI. For CD, there is Heroku Pipelines, which allows you to specify a group of environments within which to promote builds, and Heroku Review Apps, which give you on-demand builds of each pull request. Bringing it all together is the GitHub Integration, which allows the process to be automatically triggered simply by pushing to your default branch.

Let’s set up a sample application and see what it takes. Do note that Heroku Review Apps is currently only available with the GitHub Integration.
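To give a sense of how little ceremony is involved, a pipeline with a staging and a production stage can be wired up from the Heroku CLI in three commands. This is a minimal sketch; the pipeline and app names below are hypothetical:

```shell
# Create a pipeline and attach an existing app as its staging stage
heroku pipelines:create my-pipeline --app my-app-staging --stage staging

# Attach a second existing app as the production stage
heroku pipelines:add my-pipeline --app my-app-production --stage production

# Promote the slug currently on staging to production, without rebuilding it
heroku pipelines:promote --app my-app-staging
```

Connecting the pipeline to a GitHub repository, which enables automatic deploys and review apps, is then a few clicks in the Heroku dashboard.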

#devops #heroku #ci/cd #continuous deployment #ci/cd pipeline #web programming

Christa Stehr

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, Kubernetes’ dominance of the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml

8 Fallacies of Continuous Delivery

A quintessential piece for anyone working with distributed systems is the Fallacies of Distributed Computing by L Peter Deutsch. Even when working with modern platforms such as Kubernetes, the assertions made in the Fallacies of Distributed Computing hold very true around latency, bandwidth, and system administration.

Continuous Delivery practices and systems are increasing in popularity. When designing, implementing or maintaining Continuous Delivery systems, fallacies do exist. Similar to the eight Fallacies of Distributed Computing, there are eight Fallacies of Continuous Delivery.

1. You Will Always Deploy Successfully

A common pitfall in any system development is to build only for the happy path. Software development is innovative and iterative, so deployments will sometimes fail, and a failure-and-recovery path needs to be accounted for.

In lower environments, confidence-building steps such as automated tests will fail more often, and that is by design: the feedback loop allows corrections to land until the test coverage passes, building confidence in the deployment.

2. Your Administrators Will Stay

People never stay in the same position forever. Deep expertise in bespoke deployments is put at risk when those holding the tribal knowledge off-board. This also steepens the learning curve for new platform administrators and for teams onboarding their applications to the Continuous Delivery system.

3. Deployments Are Always Homogeneous

A deployment is a culmination of potentially multiple teams and their respective services. There are several approaches to deployment, but because of variations in the scope of changes, rarely are two changes exactly the same. Certain deployments require downtime, while others may require a rolling or canary release strategy.

4. Rollback Cost Is Zero

The time to decide or make a judgment call to roll back or roll forward certainly carries a cost. Depending on the criticality of the impacted system(s), the clock is ticking against the technical point of no return and the impact to the business. And once a rollback or roll-forward decision is made and executed, validation still needs to occur.
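To make the cost concrete: even on a platform like Kubernetes, where the rollback mechanics themselves are a one-liner, the decision, execution, and re-validation still consume wall-clock time. A minimal sketch, assuming a hypothetical deployment named checkout-service:

```shell
# Inspect the revision history of the deployment before deciding
kubectl rollout history deployment/checkout-service

# Execute the rollback to the previous revision
kubectl rollout undo deployment/checkout-service

# Wait for the rollback to converge -- validation of the system still remains
kubectl rollout status deployment/checkout-service
```

The commands are cheap; the judgment call before them and the validation after them are where the real cost lives.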

#continuous-integration #continuous-delivery #continuous-deployment #kubernetes #app-development #distributed-computing #devops #hackernoon-top-story

Madyson Reilly

The Role of Continuous Integration in Agile

By automating manual tasks, developers can focus on more enjoyable, value-adding work. And because the delivery lifecycle doesn’t have to wait for human intervention, bottlenecks are eliminated and time to delivery is faster.

Additionally, any errors are found easily and resolved quickly because small batches of code are released frequently.

Continuous integration has many benefits, including:

  • Rapid integration
  • Improved visibility
  • Increased coordination and communication
  • Improved quality
  • Reduced risk
  • Fast resolution of issues
  • Reallocation of resources to strategic objectives

#devops #continuous delivery #continuous deployment #agile methodology #devops and agile #continuous integration

Tutorial to Continuous Deployment Pipelines for ML-based Web Apps on Google Cloud

A comprehensive guide to ML App Deployment using Flask

Machine Learning (ML) models typically leverage the capability to learn patterns from previously seen (training) data and apply them to predict the outcome for new (test) data. Deploying ML models as web apps can help test the efficacy of trained models by giving multiple users and testing environments access to them, thereby gathering test performance metrics. However, deploying an ML web app in production can be a highly complex process, since it must ensure minimal downtime for users while the app is being updated. Cloud-based deployment solutions such as Google Cloud Platform (GCP) greatly simplify continuous integration and continuous deployment (CI/CD) through pipelines and triggers, which can run sanity checks and verify the integrity of the integrated code base before an updated application goes live.

In this tutorial, we will review a detailed example in which we deploy an ML model web app on GCP through a CD pipeline. To follow the steps from data analysis to final app deployment, fork the GitHub repository at [1] and follow the steps below.
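The heart of such a CD pipeline is a build trigger that redeploys the app on every push to the main branch. As a rough sketch of the idea, a GitHub-based trigger can be created from the gcloud CLI; the repository name, owner, and build config below are hypothetical placeholders, not taken from the tutorial:

```shell
# Create a Cloud Build trigger that runs cloudbuild.yaml on every push to main
gcloud builds triggers create github \
  --repo-name=ml-flask-app \
  --repo-owner=example-user \
  --branch-pattern="^main$" \
  --build-config=cloudbuild.yaml
```

With a trigger like this in place, merging a pull request into main is all it takes to ship a new version of the app.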

#ml-application #continuous-deployment #flask-restful #deployment #google-cloud-run

Robbie Barton

Continuous Delivery Platform for Cloud Native Applications

What is a Continuous Delivery Platform?

NexaStack is a Continuous Delivery Platform that automates, monitors, and analyses your cloud-native application delivery, accelerating data-driven application delivery with proper visibility and security. NexaStack began in 2014 as an internal project at XenonStack, developed as a Continuous Delivery Platform to serve the requirements of our own team.

#blogs #serverless #continuous delivery #continuous deployment #continuous integration