Implement etcd backup and restore — CKA Exam Preparation Series

This is one of the numerous posts by TechCommanders in a series for studying for the Certified Kubernetes Administrator (CKA) Exam.

Become a Certified Kubernetes Administrator (CKA)!

etcd is a vital component of a Kubernetes cluster. The etcd nodes exchange information through the Raft distributed consensus algorithm. In this tutorial we will use Rancher’s RKE clusters and learn how to back up etcd from one cluster and restore it to another. This helps in scenarios where your running cluster breaks for any reason and you need to move to a new or spare one.

So, before getting started with backup and restoration, let’s get a brief idea about Rancher’s RKE.

What is RKE?

RKE (Rancher Kubernetes Engine) is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It is maintained by a company named Rancher Labs. Their Rancher management platform for Kubernetes also gives us the ability to manage many clusters from the same interface, covering cluster provisioning, user access control, workload deployment, and much more.

RKE is also well known in the open-source community, and the binaries for installation on different operating systems are available here. Once installed, we can list all the available commands from a terminal (for example with rke --help).
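Since the whole point of this post is the etcd snapshot workflow, here is a minimal sketch of what it looks like. It assumes the rke binary is on your PATH, that each cluster is described by its own cluster.yml, and that the snapshot name cka-demo-snapshot is only an example; the etcdctl variant uses the default kubeadm certificate paths, which is what the CKA exam environment typically provides.

    # Back up etcd on the source RKE cluster (run where its cluster.yml lives).
    rke etcd snapshot-save --config cluster.yml --name cka-demo-snapshot

    # Copy the snapshot to the target cluster's etcd nodes, then restore it there.
    rke etcd snapshot-restore --config cluster.yml --name cka-demo-snapshot

    # On a kubeadm-based cluster (the usual CKA exam setup) the same idea uses etcdctl;
    # the certificate paths below are the kubeadm defaults and may differ on your cluster.
    ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key
    ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot.db --data-dir=/var/lib/etcd-restore

After an etcdctl restore you still have to point etcd at the new data directory (on kubeadm, by editing the etcd static pod manifest), whereas the rke commands handle that cluster-side work for you.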

#cka #kubernetes #cka-training #kubernetes-cluster #ckad

Learn how to Secure a Kubernetes Cluster — CKA Exam Preparation Series

This is one of the numerous posts by TechCommanders in a series for studying for the Certified Kubernetes Administrator (CKA) Exam.

Become a Certified Kubernetes Administrator (CKA)!

Before learning how to secure a Kubernetes cluster, let’s look at why it is important to do so. Around January of last year, the world’s biggest orchestration system experienced a major security vulnerability that hit the project’s ecosystem hard.

We won’t go deep into the vulnerability, but here is an overview: using it, attackers could compromise clusters through the API server, which allowed them to execute malicious code and plant malware.

The other case we came across was caused by a misconfigured Kubernetes cluster, which led to cryptocurrency-mining software being installed on Tesla’s resources.

An attacker took advantage of an unprotected Kubernetes dashboard, which allowed them to access the pods and make changes to a large part of Tesla’s environment on AWS.

So, organizations that are using this orchestration system, or moving to it, should be aware of security best practices in order to protect customer data. Follow the advice below to protect your infrastructure.
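As one illustration of the kind of least-privilege practice such advice usually begins with (a sketch of my own; the user alice and the namespace dev are placeholders), Kubernetes RBAC lets you scope a user’s access to exactly what they need:

    # Give a user read-only access to Pods in a single namespace instead of cluster-wide rights.
    kubectl create namespace dev
    kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=dev
    kubectl create rolebinding alice-pod-reader --role=pod-reader --user=alice --namespace=dev

    # Verify what the user can and cannot do.
    kubectl auth can-i list pods --namespace=dev --as=alice      # yes
    kubectl auth can-i delete pods --namespace=dev --as=alice    # no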

#cka-training #kubernetes-cluster #kubernetes #cka #ckad

Backup and Restore in Jenkins

If you have been using Jenkins for a while, then you must be aware of the importance of job-related data and what can happen when that data is lost. This post describes some ways to carry out backup and restore in Jenkins.

Data loss can be the result of hardware or software failure, data corruption, a human-caused event, or accidental deletion.

The purpose of the backup is to create a copy of data that can be restored in the event of a primary data failure. Backup copies allow data to be restored from an earlier point in time to help the business recover from an unplanned event.

Why does Jenkins need a backup and restore mechanism?

In Jenkins, all the settings, build logs, and archived artifacts are stored under the JENKINS_HOME directory, as Jenkins doesn’t use any database. Setting access rights, selecting the necessary plugins, and configuring jobs is quite a laborious process, so it’s a good idea to organize regular backups of all the necessary settings and parameters.

How to perform backup and restore in Jenkins

In this post we will explore two ways:

  • By creating a freestyle project
  • By using the ThinBackup plugin

How to take backups in Jenkins?

The simplest way is to keep a separate copy of the jobs folder as a backup and copy it back whenever it’s needed.

The build jobs created under this directory contain all the details of every individual job configured in the Jenkins installation, and these job files can be replicated to multiple locations.
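A minimal sketch of that manual copy, assuming JENKINS_HOME is set in your shell and /backup/jenkins-jobs is a destination of your own choosing:

    # One-off backup of the jobs folder (the destination path is a placeholder).
    rsync -a "$JENKINS_HOME/jobs/" /backup/jenkins-jobs/

    # To bring the jobs back, copy them in the other direction and then reload the
    # configuration (Manage Jenkins -> Reload Configuration from Disk) or restart Jenkins.
    rsync -a /backup/jenkins-jobs/ "$JENKINS_HOME/jobs/"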

But copying the files from one location to another by hand is still a manual task, so instead let’s leverage Jenkins itself and automate the backup-and-restore process.

Creating a freestyle project to take regular backups

Before proceeding, initialize a Git repository (in JENKINS_HOME) and connect it to your remote repository.

Now let’s configure a freestyle project that takes regular backups.

  • Create a new item in Jenkins and choose Freestyle project.
  • In the General section, provide a description if required and leave everything else unchanged.
  • Choose None in the Source Code Management section.
  • In the Build Triggers section, choose Build periodically and provide a cron expression to trigger your backup. For example:
  • 0 12 * * * // triggers every day at 12:00 pm
  • 45 12 * * * // triggers every day at 12:45 pm
  • Then choose Execute shell from the Build section and write a shell script to push the contents of your JENKINS_HOME to GitHub as a backup that can be pulled back whenever required (a sketch of such a script follows this list, or you can use the script provided in the extra section at the bottom of this post).
  • Finally, save the job; your backup process is now automated.
  • You can also make this job run after every other job by selecting the Build other projects option in the post-build section of those projects.
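What that Execute shell step could contain is sketched below. It assumes a Git repository has already been initialized in JENKINS_HOME (as described above), that a remote named origin points at your GitHub repository with a branch called master, and that you only want to version configuration files rather than workspaces or build artifacts.

    #!/bin/bash
    # Hypothetical backup step run by the freestyle job.
    set -e
    cd "$JENKINS_HOME"

    # Stage the global configuration and every job's config.xml; workspaces,
    # build records and caches are deliberately left out.
    git add -- *.xml
    git add -- jobs/*/config.xml 2>/dev/null || true

    # Commit only if something actually changed, then push to GitHub
    # (assumes the default branch is called master).
    git commit -m "Jenkins backup $(date +%Y-%m-%d_%H-%M)" || echo "No configuration changes to back up"
    git push origin master

The Build periodically trigger then runs this script on the schedule you chose, so the repository always holds a recent copy of the configuration.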

To restore Jenkins

  • Go to the JENKINS_HOME directory and initialize a new Git repository.
  • Clean the working tree by recursively removing files that are not under version control.
  • Add the GitHub backup repository as a remote and pull all the data from it.
  • Now all you have to do is restart Jenkins, and it is restored (a command-level sketch of these steps follows this list).
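In command form, and assuming the placeholder repository URL and branch name below, those steps look roughly like this:

    cd "$JENKINS_HOME"
    git init

    # Remove files that are not under version control so the pull starts from a clean tree.
    # Careful: on a freshly initialized repository this removes everything not yet tracked.
    git clean -fd

    # Point at the backup repository (placeholder URL) and pull the backed-up configuration.
    git remote add origin https://github.com/<your-user>/jenkins-backup.git
    git pull origin master

    # Restart Jenkins so it picks up the restored configuration (systemd example).
    sudo systemctl restart jenkins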

Using ThinBackup

Jenkins can be made enormously powerful by integrating plugins. Here we will use a plugin for backup management in Jenkins: the ThinBackup plugin.

This plugin backs up the job-specific configuration files.
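If you run Jenkins from the official container image, the plugin can typically be preinstalled with the bundled plugin manager CLI; the plugin ID below is the one listed on the Jenkins plugin site, so verify it for your setup.

    # Hypothetical install step using the plugin manager CLI shipped with the official image.
    jenkins-plugin-cli --plugins thinBackup

Alternatively, install it from the UI under Manage Jenkins -> Manage Plugins; once installed, its backup schedule and backup directory are configured from the Jenkins UI.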

#devops #jenkins #tech-blogs #backup #jenkins-plugins #restore #thinbackup

My Experience With The CKA Exam

Last Monday, 22 March, I took the CKA certification exam from the Linux Foundation, and I’m here to tell you about my experience: how I prepared for it and what to expect in the exam. I highly recommend that you also read the post my colleague Fernando wrote a few months ago about the CKAD exam; it goes into much more detail about these exams and how to pass them, and I got some essential tips from reading it!

Previous Experiences with Kubernetes

In all honesty, exactly a year ago I knew nothing about Kubernetes: it wasn’t just that I had never worked with it, I wasn’t even clear about the concept of a “container”. It’s true that this last year I’ve been working with the tool frequently (which undoubtedly helped when preparing for the exam), but I think a little previous experience is all you really need.

How I Prepared for The Exam

Despite the knowledge I already had of K8s resources, kubectl, and cluster management, we felt it was necessary for me to sign up for this course on Udemy: link. It begins by explaining the most basic concepts of both administration and Kubernetes resources, which can help people with less experience, and before long it moves on to more advanced content.

However, many questions in the exam are about the most basic content, so I personally recommend paying attention to those lessons as well. The course is made up of videos and practical labs, and it even has some mock exams at the end with questions very similar to the real exam.

#kubernetes #exams #cka

How to Configure Storage on a Kubernetes Cluster — CKA Exam Preparation Series

This is one of the numerous posts by TechCommanders in a series for studying for the Certified Kubernetes Administrator (CKA) Exam.

Become a Certified Kubernetes Administrator (CKA)!

Before starting with how to configure storage, I would like to explain why it is needed. Let’s take an example: imagine you have deployed a PHP application that generates a PDF file, updates the status in the database to “generated”, and then renders it.

Now, suppose I increase the number of instances of my PHP application. If I ask it to create a PDF file, one instance will update the status to “generated” while the other instance tries to find the file for rendering, but the file hasn’t been made available to it yet. Getting my point? It is similar to the general scenario we see with a deadlock: not a complete deadlock, but related to it.

This issue is hard to spot just by reading about it; to understand its gravity, you need to deploy the application and scale it up. To resolve these kinds of issues, Kubernetes came up with the Container Storage Interface (CSI).

To get to this point, we need to design applications so that the logic and the static files are decoupled. Assume there are some files that need to be saved in a shared space so that they are accessible to every replica of the application.

We have figured out the nodes we need to scale up the environment, but storage is still an open question: at the very least, we can see that we need shared storage for all of our replicas. In Kubernetes, this storage setup has two parts.

The first part is the storage service itself, which can run on its own or on the same servers the Kubernetes cluster runs on. The second is the provisioner, a piece of software that implements the Container Storage Interface and is deployed to Kubernetes.

The provisioner is important because it handles the creation and deletion of Persistent Volumes. Depending on the storage service, you can find a suitable provisioner; there are many available, and the choice depends mostly on two factors: where you are deploying and what you want to achieve.
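To make this concrete, here is a minimal sketch of requesting shared storage for the PHP example above. The claim name shared-files and the StorageClass shared-nfs are placeholders, and the class must be backed by a provisioner that supports the ReadWriteMany access mode (an NFS-based one, for instance).

    # Request a volume that every replica can mount read-write at the same time.
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-files
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: shared-nfs
      resources:
        requests:
          storage: 5Gi
    EOF

Every replica of the PHP Deployment can then mount the claim shared-files at the same path, so a PDF generated by one instance is immediately visible to the others.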

#kubernetes #devops #cka #kubernetes-cluster #google-cloud-platform