Gordon Taylor

GitOps Explained with Emoji

The history of software development could be written as a story of constant acceleration: from the steady waterfall model of releasing new versions once a year, to agile shipping small features weekly, to modern cloud architectures pushing out code changes as often as you or I might take a coffee break.

How have we achieved this acceleration? Through the evolution of tools that make code deployment safer, easier and more observable.

GitOps is not a revolution, but rather a system to support more frequent releases that drive modern software development. It’s a standardized workflow for deploying, configuring, monitoring, updating and managing infrastructure as code. Most importantly, GitOps enables you to move faster safely.

‘Move Fast and Break Things’ Has its Limits

In the heady days of agile’s first introduction, speed was everything. Release every week, and who cares if you introduce bugs? Users would find them, you’d fix them and the evolving product would look more and more like the ideal product for users every day.

These ideals work when building a digital marketing agency or a goofy storefront that sells pencil toppers. But “who cares about a few bugs” doesn’t fly when you’re running an online bank concerned with compliance. Further, it’s one thing to introduce bugs by changing the product in ways that users don’t love. It’s quite another to break production and take the site down.

As we’ve encouraged developers to embrace DevOps and empower small teams to release microservices, it’s easier than ever to experiment with operations and make changes that break things.

The answer might seem like “slow down and be more careful,” but the actual answer is even more speed.

#microservices #contributed #sponsored


GitOps on Google Cloud Platform - GitOps and Useful GitOps Tools

GitOps is a fairly new (2017) style of implementing DevOps practices that has quickly grown in popularity. This 3-part blog series will:

  • Explain the fundamentals of GitOps and the tools you will need in your repertoire to make the principles and practical approach a success in your enterprise.
  • Provide a walkthrough of the steps needed to set up and install all the necessary components for a successful GitOps automation process.
  • Outline the inner workings of the Caylent GitOps Accelerator and all its benefits.

Origin of GitOps

The term GitOps was coined in a blog post titled “GitOps—Operations by Pull Request,” published on August 7, 2017 by Alexis Richardson, the CEO of Weaveworks. The fundamental concepts that underpin the GitOps methodology, however, were largely devised by Google and codified in the now-famous Site Reliability Engineering book published in March 2016. SRE was at the time a very Google-specific methodology that was difficult to implement anywhere else. That has changed in the intervening years, but in the meantime GitOps has evolved to supply some of the tools and practices that make it possible for everyone to manage systems the way Google does internally.

#gitops #cloud build #gitops tools #google cloud platform #cloud

Explaining the Explainable AI: A 2-Stage Approach

As artificial intelligence (AI) models, especially those using deep learning, have gained prominence over the last eight or so years [8], they are now significantly impacting society, from loan decisions to self-driving cars. Inherently, though, a majority of these models are opaque, and hence following their recommendations blindly in human-critical applications can raise issues of fairness, safety, and reliability, among many others. This has led to the emergence of a subfield of AI called explainable AI (XAI) [7]. XAI is primarily concerned with understanding or interpreting the decisions made by these opaque or black-box models so that one can calibrate trust appropriately and, in some cases, even achieve better performance through human-machine collaboration [5].

While there are multiple views on what XAI is [12] and how explainability can be formalized [4, 6], it is still unclear what XAI truly is and why it is hard to formalize mathematically. The reason for this lack of clarity is that not only must the model and/or data be considered, but also the final consumer of the explanation. Given this intermingled view, most XAI methods [11, 9, 3] try to meet all these requirements at the same time. For example, many methods try to identify a sparse set of features that replicate the decision of the model, where sparsity is a proxy for the consumer’s mental model. An important question is whether we can disentangle the steps that XAI methods are trying to accomplish. This may help us better understand the truly challenging parts of XAI as well as the simpler parts, and it may motivate different types of methods.

Two Stages of XAI

We conjecture that the XAI process can be broadly disentangled into two parts, as depicted in Figure 1. The first part is uncovering what is truly happening in the model that we want to understand, while the second part is about conveying that information to the user in a consumable way. The first part is relatively easy to formalize as it mainly deals with analyzing how well a simple proxy model might generalize either locally or globally with respect to (w.r.t.) data that is generated using the black-box model. Rather than having generalization guarantees w.r.t. the underlying distribution, we now want them w.r.t. the (conditional) output distribution of the model. Once we have some way of figuring out what is truly important, a second step is to communicate this information. This second part is much less clear as we do not have an objective way of characterizing an individual’s mind. This part, we believe, is what makes explainability as a whole so challenging to formalize. A mainstay for a lot of XAI research over the last year or so has been to conduct user studies to evaluate new XAI methods.
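The first stage above can be sketched concretely. The toy below is purely illustrative (the black-box model, the candidate rule family, and all values are assumptions, not from the paper): we query an opaque model on sampled points and measure how faithfully a simple surrogate rule reproduces its decisions, i.e., its fidelity with respect to the model's output distribution rather than the underlying data distribution.

```python
import random

# Hypothetical opaque model: we may only query its output.
def black_box(income, debt):
    # Illustrative decision rule, unknown to the explainer.
    return 1.0 if debt > 0.6 * income else 0.0

# Candidate family of simple surrogates: "default if debt exceeds
# some fraction of income". Stage one searches this family for the
# member that best generalizes w.r.t. the model's outputs.
def rule(income, debt, threshold):
    return 1.0 if debt > threshold * income else 0.0

def fidelity(threshold, samples):
    """Fraction of samples on which the surrogate agrees with the model."""
    agree = sum(
        1 for inc, dbt in samples
        if rule(inc, dbt, threshold) == black_box(inc, dbt)
    )
    return agree / len(samples)

random.seed(0)
samples = [(random.uniform(20, 100), random.uniform(0, 80)) for _ in range(500)]

# Pick the most faithful surrogate over a grid of thresholds.
best = max((t / 100 for t in range(10, 100)), key=lambda t: fidelity(t, samples))
print(round(best, 2))
print(fidelity(best, samples))  # 1.0: the surrogate matches the model on every sample
```

Note that this only solves stage one: the surrogate is provably faithful on the sampled distribution, but whether "debt-to-income ratio" is a consumable explanation for a given person is the stage-two question, which has no comparable objective criterion.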

#overviews #ai #explainability #explainable ai #xai

GitOps Guide to the Galaxy (Ep 11): Working with Helm

If you’re using Helm for deployment, what happens when you also use GitOps? GitOps lends itself naturally to a Kubernetes environment when you have static YAML used declaratively. Helm is a software package manager that simplifies the deployment of applications and services to OpenShift Container Platform clusters. In this episode, we’ll walk through how to integrate Helm into your GitOps workflow and cover which steps and tools are needed.

What is GitOps Guide to the Galaxy?
Every other Thursday at 3am ET, hosts Christian Hernandez and Chris Short sit down to discuss everything in the GitOps universe, from end-to-end CI/CD pipelines to creating Git workflows.

#gitops #kubernetes

GitOps – DevOps for Infrastructure Automation

GitOps offers a way to automate and manage infrastructure by using proven DevOps best practices such as version control, code review, and CI/CD pipelines.

GitOps offers a way to automate and manage infrastructure. It does this by using the same DevOps best practices that many teams already use, such as version control, code review, and CI/CD pipelines.

Companies have been adopting DevOps because of its great potential to improve productivity and software quality. Along the way, we’ve found ways to automate the software development lifecycle. But when it comes to infrastructure setup and deployments, it’s still mostly a manual process.

With GitOps, teams can automate the infrastructure provisioning process. This is possible because you write your infrastructure as code (IaC) in declarative files, which you can store in a Git repository exactly as you store application development code.
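The core mechanism behind this is a reconcile loop: an agent continuously compares the desired state declared in Git with the live state and computes the changes needed to converge them. The following is a minimal, hypothetical sketch of that idea (the service names, fields, and values are invented for illustration; real GitOps agents such as Argo CD or Flux operate on Kubernetes manifests):

```python
# Desired state, as it would be parsed from declarative files in Git.
desired = {
    "web": {"replicas": 3, "image": "shop:v2"},
    "db":  {"replicas": 1, "image": "postgres:15"},
}

# Actual state, as observed in the running environment.
actual = {
    "web": {"replicas": 2, "image": "shop:v1"},
}

def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))   # declared but not running
        elif actual[name] != spec:
            actions.append(("update", name, spec))   # running but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))  # running but not declared
    return actions

for action in reconcile(desired, actual):
    print(action)
```

Because the loop is driven entirely by what is committed to Git, every infrastructure change goes through the same version control, code review, and audit trail as application code.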

#devops #infrastructure #gitops #gitops workflow

Agnes Sauer

Why Is Explainable AI Essential for Data Scientists?

Let’s understand why explainable AI is generating so much buzz nowadays. Consider an example: a person (consumer), Mr. X, goes to a bank for a personal loan, and the bank collects his demographic details, credit bureau details, and last six months of bank statements. After collecting all the documents, the bank runs them through the machine learning model deployed in production to check whether this person will default on the loan.


The complex ML model deployed in production says that this person has a 55% chance of defaulting on his loan, and the bank subsequently rejects Mr. X’s personal loan application.

Now Mr. X is angry and puzzled about the rejection, so he goes to the bank manager for an explanation. The manager reviews the application and is himself puzzled: it looks like a good candidate for a loan, so why did the model predict a default? This confusion plants doubt in the manager’s mind about every loan the model has previously rejected. Although the model’s accuracy is above 98%, it still fails to earn trust.

Every data scientist wants to deploy the most accurate model possible to production. The graph below shows the trade-off between interpretability and accuracy.

Interpretability vs. Accuracy of the Model

Notice that as the accuracy of a model increases, its interpretability decreases significantly, and that keeps complex models out of production.

This is where explainable AI rescues us. Explainable AI does not only predict the outcome; it also explains the process and the features the model used to reach its conclusion. Isn’t it great that the model explains itself?

ML and AI applications have reached almost every industry: banking and finance, healthcare, manufacturing, e-commerce, and more. Yet people are still afraid to use complex models in their field because they think complex machine learning models are black boxes whose output they will not be able to explain to businesses and stakeholders. By now, I hope you understand why explainable AI is required for the better and more effective use of machine learning and deep learning models.

Now, let’s understand what explainable AI is and how it works.

Explainable AI is a set of tools and methods in artificial intelligence (AI) for explaining how a model reached a particular output for given data points.

Consider the example above, where Mr. X’s loan was rejected and the bank manager could not figure out why. Here, an explainable AI method can report the features the model considered and their importance in reaching this output. Now that the manager has this report:

  1. He has more confidence in the model and its output.
  2. He can use more complex models, because he is able to explain their output to the business and stakeholders.
  3. Mr. X gets an explanation from the bank for his loan rejection, so he knows exactly what he needs to improve in order to get a loan.
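One simple way to produce such a feature-importance report is baseline occlusion: reset each feature to a “typical” value and measure how much the model’s risk score drops. The sketch below is purely hypothetical (the scoring model, feature names, baseline, and Mr. X’s values are all invented for illustration; real systems use methods such as SHAP or LIME):

```python
# Hypothetical opaque risk model: we may only query its score.
def default_risk(applicant):
    score = 0.0
    score += 0.3 if applicant["credit_score"] < 650 else 0.0
    score += 0.4 if applicant["debt_ratio"] > 0.5 else 0.0
    score += 0.2 if applicant["months_employed"] < 12 else 0.0
    return score

# "Typical good applicant" values used as the occlusion baseline (assumed).
BASELINE = {"credit_score": 750, "debt_ratio": 0.2, "months_employed": 60}

def explain(applicant):
    """Attribute the risk score to features: reset each feature to its
    baseline value and record how much the score drops."""
    base = default_risk(applicant)
    contributions = {}
    for feature in applicant:
        probe = dict(applicant, **{feature: BASELINE[feature]})
        contributions[feature] = base - default_risk(probe)
    return contributions

mr_x = {"credit_score": 600, "debt_ratio": 0.55, "months_employed": 36}
print(explain(mr_x))  # debt_ratio and credit_score carry the risk here
```

A report like this is exactly what the manager was missing: it shows that Mr. X’s debt ratio and credit score, not his employment history, drove the rejection, which both justifies the decision and tells Mr. X what to improve.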

#explainable-ai #explainability #artificial-intelligence #machine-learning-ai #machine-learning #deep learning