
Libvirt K8s Provisioner: Automate Your K8s installation | Kubernetes

libvirt-k8s-provisioner - Automate your cluster provisioning from 0 to k8s!

Welcome to the home of the project!

With this project, you can bring up a fully working k8s cluster (single master or HA) in minutes, with as many worker nodes as you want.

DISCLAIMER

It is a hobby project, so it is not supported for production usage, but feel free to open issues and/or contribute to it!

How does it work?

The Kubernetes version to install can be chosen from:

  • 1.25 - Latest 1.25 release (1.25.0)
  • 1.24 - Latest 1.24 release (1.24.4)
  • 1.23 - Latest 1.23 release (1.23.10)
  • 1.22 - Latest 1.22 release (1.22.13)
  • 1.21 - Latest 1.21 release (1.21.14)

Terraform will take care of the provisioning of:

  • Loadbalancer machine with haproxy installed and configured for HA clusters
  • k8s Master(s) VM(s)
  • k8s Worker(s) VM(s)

It also takes care of preparing the host machine with the needed packages and configuration.

You can customize the setup by choosing:

  • the container runtime to use (docker, cri-o, containerd)
  • whether masters are schedulable, if you want to run workloads on your master nodes instead of leaving the taint in place
  • the service CIDR to be used during installation
  • the pod CIDR to be used during installation
  • the network plugin to be used (Project Calico, Flannel, or Project Cilium), set up following its documentation
  • additional SANs to be added to the api-server certificate
  • nginx-ingress-controller, haproxy-ingress-controller, or Project Contour, if you want to enable ingress management
  • MetalLB to manage bare-metal LoadBalancer services (WIP: only the L2 configuration can be set up via playbook)
  • Rook-Ceph to manage persistent storage, also configurable with a single storage node

All VMs are identical, prepared with:

OS: the chosen cloud image (CentOS or Ubuntu)

cloud-init:

  • user: kube
  • pass: kuberocks
  • ssh-key: generated during VM provisioning and stored in the project folder

The user can also log in via SSH.

Quickstart

The playbook is meant to be run against a local host, or a remote host that has access to the subnets that will be created, defined under the vm_host group, depending on how many clusters you want to configure at once.
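
As a minimal sketch, an inventory for a purely local run might look like this (the vm_host group name comes from the project; the host entry and connection settings here are assumptions, not the project's actual file):

# Illustrative inventory sketch in YAML format: run everything on the
# local machine. The vm_host group is the one the playbook targets.
all:
  children:
    vm_host:
      hosts:
        localhost:
          ansible_connection: local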

First of all, you need to install required collections to get started:

ansible-galaxy collection install -r requirements.yml
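
For reference, requirements.yml uses the standard ansible-galaxy requirements format; the collections listed below are illustrative, not necessarily the project's exact set:

# Illustrative requirements.yml shape; the project's real collection
# list may differ.
collections:
  - community.general
  - community.libvirt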

Once the collections are installed, you can simply run the playbook:

ansible-playbook main.yml

You can quickly tailor the setup by configuring the needed vars, or go straight with the defaults!

You can also install your cluster using the Makefile.

To install collections:

make setup

To install the cluster:

make create

Quickstart with Execution Environment

The playbooks are compatible with the newly introduced Execution Environments (EE). To use them with an execution environment, you need ansible-builder and ansible-navigator installed.

Build EE image

To build the EE image, run the build against the definition in the execution-environment folder:

ansible-builder build -f execution-environment/execution-environment.yml -t k8s-ee
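
The execution-environment.yml file is a standard ansible-builder definition; a minimal one typically looks like the sketch below (an assumption for illustration, not the project's actual file, which may pin a base image and extra dependencies):

# Minimal ansible-builder definition sketch (version 1 schema); this is
# an assumption, not the project's actual file.
version: 1
dependencies:
  galaxy: requirements.yml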

Run playbooks

To run the playbooks, use ansible-navigator:

ansible-navigator run main.yml -m stdout

Recommended sizing

The recommended sizing is:

| Role   | vCPU | RAM  |
|--------|------|------|
| master | 2    | 2 GB |
| worker | 2    | 2 GB |

vars/k8s_cluster.yml

General configuration

k8s:
  cluster_name: k8s-test
  cluster_os: Ubuntu
  cluster_version: 1.24
  container_runtime: crio
  master_schedulable: false

# Nodes configuration

  control_plane:
    vcpu: 2
    mem: 2
    vms: 3
    disk: 30

  worker_nodes:
    vcpu: 2
    mem: 2
    vms: 1
    disk: 30

# Network configuration

  network:
    network_cidr: 192.168.200.0/24
    domain: k8s.test
    additional_san: ""
    pod_cidr: 10.20.0.0/16
    service_cidr: 10.110.0.0/16
    cni_plugin: cilium

rook_ceph:
  install_rook: false
  volume_size: 50
  rook_cluster_size: 1

# Ingress controller configuration [nginx/haproxy]

ingress_controller:
  install_ingress_controller: true
  type: haproxy
  node_port:
    http: 31080
    https: 31443

# Section for metalLB setup

metallb:
  install_metallb: false
  l2:
    iprange: 192.168.200.210-192.168.200.250

Sizes for disk and mem are in GB. disk provisions extra space in the cloud image for pods' ephemeral storage.

cluster_version can be 1.20, 1.21, 1.22, 1.23, 1.24, or 1.25 to install the latest patch release of the corresponding minor version.

VMs are created with these names by default (customizing them is a work in progress):

- **cluster_name**-loadbalancer.**domain**
- **cluster_name**-master-N.**domain**
- **cluster_name**-worker-N.**domain**

You can choose CentOS or Ubuntu as the OS for the Kubernetes hosts.

Multiple clusters - Thanks to @3rd-st-ninja for the input

Since the last release, it is possible to provision multiple clusters on the same host. Each cluster is self-contained and has its own folder under clusters in the playbook root (for example /home/user/k8ssetup/clusters).

clusters
└── k8s-provisioner
    ├── admin.kubeconfig
    ├── haproxy.cfg
    ├── id_rsa
    ├── id_rsa.pub
    ├── libvirt-resources
    │   ├── libvirt-resources.tf
    │   └── terraform.tfstate
    ├── loadbalancer
    │   ├── cloud_init.cfg
    │   ├── k8s-loadbalancer.tf
    │   └── terraform.tfstate
    ├── masters
    │   ├── cloud_init.cfg
    │   ├── k8s-master.tf
    │   └── terraform.tfstate
    ├── workers
    │   ├── cloud_init.cfg
    │   ├── k8s-workers.tf
    │   └── terraform.tfstate
    └── workers-rook
        ├── cloud_init.cfg
        └── k8s-workers.tf

A custom cleanup playbook is generated in the main folder for removing a single cluster without touching the others:

k8s-provisioner-cleanup-playbook.yml

A separate inventory is also generated for each cluster:

k8s-provisioner-inventory-k8s

To keep clusters separate, make sure each cluster uses different values for the k8s.cluster_name, k8s.network.domain, and k8s.network.network_cidr variables.
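
For example, a second cluster could override just those values (the ones below are illustrative):

# Illustrative overrides for a second cluster, chosen not to collide
# with the defaults shown earlier.
k8s:
  cluster_name: k8s-test-2
  network:
    domain: k8s2.test
    network_cidr: 192.168.201.0/24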

Rook

The Rook setup creates a dedicated kind of worker, with an additional volume attached to the VMs that require it. It is now possible to select the size of the Rook cluster using the rook_ceph.rook_cluster_size variable in the settings.
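
Once the cluster is up with Rook installed, workloads can claim persistent storage in the usual Kubernetes way. A minimal sketch of a claim, assuming the rook-ceph-block storage class name from Rook's upstream examples (not guaranteed by this project):

# Hypothetical PersistentVolumeClaim; the storageClassName is an
# assumption based on Rook's upstream examples, not this project's docs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 5Gi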

MetalLB

The basic setup is taken from the MetalLB documentation. At the moment, the l2 parameter lists the IPs that can be used as 'external' IPs for accessing applications (it defaults to a range in the same subnet as the hosts).
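
With MetalLB enabled, a Service of type LoadBalancer should receive an external IP from the configured iprange. A minimal sketch (the service name, selector, and ports are illustrative):

# Illustrative Service; MetalLB assigns it an IP from the l2 iprange
# (e.g. 192.168.200.210-192.168.200.250 with the defaults above).
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080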

Suggestions and improvements are highly welcome! Alex


Download Details:

Author: kubealex
Source Code: https://github.com/kubealex/libvirt-k8s-provisioner

License: MIT license
