A collection of Ansible playbooks for deploying, managing, and upgrading a Kubernetes cluster. They are fully automated commands that bring up a Kubernetes cluster on bare metal or VMs.
Feature list:
In this section you will deploy a cluster via vagrant.
Prerequisites:
- sshpass tool:
$ brew install http://git.io/sshpass.rb
The getting started guide will use Vagrant with VirtualBox to deploy a Kubernetes cluster onto virtual machines. You can deploy the cluster with a single command:
$ ./hack/setup-vms
Cluster Size: 1 master, 2 worker.
VM Size: 1 vCPU, 2048 MB
VM Info: ubuntu16, virtualbox
CNI binding iface: eth1
Start to deploy?(y):
- You can also use the following command to deploy the cluster onto KVM:
sudo ./hack/setup-vms -p libvirt -i eth1
If you want to access the API, you need to create an RBAC object that defines the role's permissions. For example, using the cluster-admin role:
$ kubectl create clusterrolebinding open-api --clusterrole=cluster-admin --user=system:anonymous
Log in to the addon's dashboard:
As of release 1.7, Dashboard no longer has full admin privileges granted by default, so you need to create a token to access the resources:
$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ kubectl -n kube-system get sa dashboard -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2017-11-27T17:06:41Z
name: dashboard
namespace: kube-system
resourceVersion: "69076"
selfLink: /api/v1/namespaces/kube-system/serviceaccounts/dashboard
uid: 56b880bf-d395-11e7-9528-448a5ba4bd34
secrets:
- name: dashboard-token-vg52j
$ kubectl -n kube-system describe secrets dashboard-token-vg52j
...
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdmc1MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTZiODgwYmYtZDM5NS0xMWU3LTk1MjgtNDQ4YTViYTRiZDM0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.bVRECfNS4NDmWAFWxGbAi1n9SfQ-TMNafPtF70pbp9Kun9RbC3BNR5NjTEuKjwt8nqZ6k3r09UKJ4dpo2lHtr2RTNAfEsoEGtoMlW8X9lg70ccPB0M1KJiz3c7-gpDUaQRIMNwz42db7Q1dN7HLieD6I4lFsHgk9NPUIVKqJ0p6PNTp99pBwvpvnKX72NIiIvgRwC2cnFr3R6WdUEsuVfuWGdF-jXyc6lS7_kOiXp2yh6Ym_YYIr3SsjYK7XUIPHrBqWjF-KXO_AL3J8J_UebtWSGomYvuXXbbAUefbOK4qopqQ6FzRXQs00KrKa8sfqrKMm_x71Kyqq6RbFECsHPA
Copy and paste the token into the Dashboard login page.
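The token itself is a JSON Web Token, so you can sanity-check what it asserts before pasting it in. As a sketch, here is how to decode the header segment of the example token above with standard shell tools (base64url segments may need their `=` padding restored first):

```shell
# A service-account token is a JWT: header.payload.signature, base64url-encoded.
# Decode the header segment of the example token shown above.
SEGMENT='eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9'
# Restore '=' padding so the length is a multiple of 4.
pad=$(( (4 - ${#SEGMENT} % 4) % 4 ))
while [ "$pad" -gt 0 ]; do SEGMENT="$SEGMENT="; pad=$((pad - 1)); done
# Map base64url characters back to plain base64, then decode.
printf '%s' "$SEGMENT" | tr '_-' '/+' | base64 -d
# → {"alg":"RS256","typ":"JWT"}
```

The payload segment decodes the same way and contains the service account's namespace and name.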
In this section you will manually deploy a cluster on your machines.
Prerequisites:
- A deploy node with Ansible installed.
Example machines:
IP Address | Role | CPU | Memory |
---|---|---|---|
172.16.35.9 | vip | - | - |
172.16.35.10 | k8s-m1 | 4 | 8G |
172.16.35.11 | k8s-n1 | 4 | 8G |
172.16.35.12 | k8s-n2 | 4 | 8G |
172.16.35.13 | k8s-n3 | 4 | 8G |
Add the machine info gathered above into a file called inventory/hosts.ini. For example:
[etcds]
k8s-m1
k8s-n[1:2]
[masters]
k8s-m1
k8s-n1
[nodes]
k8s-n[1:3]
[kube-cluster:children]
masters
nodes
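The bracketed patterns above are Ansible's inclusive numeric host ranges: k8s-n[1:3] matches k8s-n1 through k8s-n3. A quick shell sketch of the same expansion:

```shell
# What the inventory pattern k8s-n[1:3] expands to (inclusive on both ends).
for i in $(seq 1 3); do
  echo "k8s-n$i"
done
# → k8s-n1
# → k8s-n2
# → k8s-n3
```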
Set the variables in group_vars/all.yml to reflect the options you need. For example:
# override kubernetes version (default: 1.10.6)
kube_version: 1.11.2
# container runtime, supported: docker, nvidia-docker, containerd.
container_runtime: docker
# container network, supported: calico, flannel.
cni_enable: true
container_network: calico
cni_iface: ''
# highly available variables
vip_interface: ''
vip_address: 172.16.35.9
# etcd variables
etcd_iface: ''
# kubernetes extra addons variables
enable_dashboard: true
enable_logging: false
enable_monitoring: false
enable_ingress: false
enable_metric_server: true
# monitoring grafana user/password
monitoring_grafana_user: "admin"
monitoring_grafana_password: "p@ssw0rd"
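Before running the playbooks, it can help to double-check which addon flags are actually on. A small grep sketch (the example values above are inlined into a temp file here for illustration; point the grep at your real group_vars/all.yml instead):

```shell
# Write the example addon flags to a temp file for illustration.
cat > /tmp/all-example.yml <<'EOF'
enable_dashboard: true
enable_logging: false
enable_monitoring: false
enable_ingress: false
enable_metric_server: true
EOF
# List only the addons that are turned on.
grep '^enable_.*: true$' /tmp/all-example.yml
# → enable_dashboard: true
# → enable_metric_server: true
```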
If everything is ready, just run the cluster.yml playbook to deploy the cluster:
$ ansible-playbook -i inventory/hosts.ini cluster.yml
Then run the addons.yml playbook to create the addons:
$ ansible-playbook -i inventory/hosts.ini addons.yml
To verify that the cluster has been deployed, check it with the following command:
$ kubectl -n kube-system get po,svc
NAME READY STATUS RESTARTS AGE IP NODE
po/haproxy-master1 1/1 Running 0 2h 172.16.35.10 k8s-m1
...
Finally, if you want to clean up the cluster and redeploy, you can reset it with the reset-cluster.yml playbook:
$ ansible-playbook -i inventory/hosts.ini reset-cluster.yml
Pull requests are always welcome! I am always thrilled to receive them.
Author: kairen
Source Code: https://github.com/kairen/kube-ansible
License: Apache-2.0 license
Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey by StackRox, Kubernetes' dominance in the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
(State of Kubernetes and Container Security, 2020)
Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.
This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.
Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.
In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking down applications to update operating systems or applications. There were many chances for error.
Kubernetes can provide an alternative to the old method, enabling teams to deploy applications independent of the environment in containers. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.
In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.
The Compelling Attributes of Multi Cloud Kubernetes
Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.
Stability
In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.
Using Kubernetes to serve multiple tenants is not a trivial task. Kubernetes provides the necessary tools (RBAC, RoleBinding, NetworkPolicy, ResourceQuota, etc.) to provide isolation between tenants, but building and implementing an architecture is left solely to users. In this webinar, we introduce multiple approaches that can be taken to provide multi-tenancy in a Kubernetes cluster. We will also talk about what others in the community are doing to achieve multi-tenancy. We'll analyze the pros and cons of the different approaches and share specific use cases that fit each approach. Finally, we will look into the lessons we've learned and how we have applied them to our on-premise cloud environment.
Build a Kubernetes cluster using Ansible with kubeadm. The goal is to easily install a Kubernetes cluster on machines running:
System requirements:
Ansible 2.4.0+
Usage
Add the system information gathered above into a file called hosts.ini
. For example:
[master]
192.16.35.12
[node]
192.16.35.[10:11]
[kube-cluster:children]
master
node
If you're working with Ubuntu, add the property ansible_python_interpreter='python3' to each host:
[master]
192.16.35.12 ansible_python_interpreter='python3'
[node]
192.16.35.[10:11] ansible_python_interpreter='python3'
[kube-cluster:children]
master
node
Before continuing, edit group_vars/all.yml to match your desired configuration.
For example, I chose to run flannel instead of calico:
# Network implementation('flannel', 'calico')
network: flannel
Note: Depending on your setup, you may need to set cni_opts to an available network interface. By default, kubeadm-ansible uses eth1. Your default interface may be eth0.
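To find out which interface names exist on a node before setting cni_opts, you can list them on Linux (a minimal sketch; the names vary per machine):

```shell
# List the network interfaces the kernel knows about (Linux only).
# "lo" is the loopback; pick a physical interface such as eth0 or eth1 for cni_opts.
ls /sys/class/net
```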
After going through the setup, run the site.yaml playbook:
$ ansible-playbook site.yaml
...
==> master1: TASK [addon : Create Kubernetes dashboard deployment] **************************
==> master1: changed: [192.16.35.12 -> 192.16.35.12]
==> master1:
==> master1: PLAY RECAP *********************************************************************
==> master1: 192.16.35.10 : ok=18 changed=14 unreachable=0 failed=0
==> master1: 192.16.35.11 : ok=18 changed=14 unreachable=0 failed=0
==> master1: 192.16.35.12 : ok=34 changed=29 unreachable=0 failed=0
The playbook will download the /etc/kubernetes/admin.conf file to $HOME/admin.conf.
If that doesn't work, download admin.conf from the master node:
$ scp k8s@k8s-master:/etc/kubernetes/admin.conf .
Verify that the cluster is fully running using kubectl:
$ export KUBECONFIG=~/admin.conf
$ kubectl get node
NAME STATUS AGE VERSION
master1 Ready 22m v1.6.3
node1 Ready 20m v1.6.3
node2 Ready 20m v1.6.3
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-master1 1/1 Running 0 23m
...
Resetting the environment
Finally, reset all kubeadm-installed state using the reset-site.yaml playbook:
$ ansible-playbook reset-site.yaml
Additional features
These are features that you could want to install to make your life easier.
Enable/disable these features in group_vars/all.yml
(all disabled by default):
# Additional feature to install
additional_features:
helm: false
metallb: false
healthcheck: false
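Each feature is a plain boolean, so enabling one is a one-line edit. For example, switching helm on with sed (a sketch that operates on an inlined copy, not your real group_vars/all.yml):

```shell
# Write the example feature flags to a temp copy for illustration.
cat > /tmp/features-example.yml <<'EOF'
additional_features:
  helm: false
  metallb: false
  healthcheck: false
EOF
# Flip the helm flag on in place.
sed -i 's/helm: false/helm: true/' /tmp/features-example.yml
grep 'helm:' /tmp/features-example.yml
# the flag now reads "helm: true"
```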
This will install helm in your cluster (https://helm.sh/) so you can deploy charts.
This will install MetalLB (https://metallb.universe.tf/), very useful if you deploy the cluster locally and you need a load balancer to access the services.
This will install k8s-healthcheck (https://github.com/emrekenci/k8s-healthcheck), a small application to report cluster status.
Utils
Collection of scripts/utilities
This Vagrantfile is taken from https://github.com/ecomm-integration-ballerina/kubernetes-cluster and slightly modified to copy ssh keys inside the cluster (installing https://github.com/dotless-de/vagrant-vbguest is highly recommended).
Tips & Tricks
If you use vagrant or your remote user is root, add this to hosts.ini:
[master]
192.16.35.12 ansible_user='root'
[node]
192.16.35.[10:11] ansible_user='root'
As of release 1.7 Dashboard no longer has full admin privileges granted by default, so you need to create a token to access the resources:
$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ kubectl -n kube-system get sa dashboard -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2017-11-27T17:06:41Z
name: dashboard
namespace: kube-system
resourceVersion: "69076"
selfLink: /api/v1/namespaces/kube-system/serviceaccounts/dashboard
uid: 56b880bf-d395-11e7-9528-448a5ba4bd34
secrets:
- name: dashboard-token-vg52j
$ kubectl -n kube-system describe secrets dashboard-token-vg52j
...
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdmc1MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTZiODgwYmYtZDM5NS0xMWU3LTk1MjgtNDQ4YTViYTRiZDM0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.bVRECfNS4NDmWAFWxGbAi1n9SfQ-TMNafPtF70pbp9Kun9RbC3BNR5NjTEuKjwt8nqZ6k3r09UKJ4dpo2lHtr2RTNAfEsoEGtoMlW8X9lg70ccPB0M1KJiz3c7-gpDUaQRIMNwz42db7Q1dN7HLieD6I4lFsHgk9NPUIVKqJ0p6PNTp99pBwvpvnKX72NIiIvgRwC2cnFr3R6WdUEsuVfuWGdF-jXyc6lS7_kOiXp2yh6Ym_YYIr3SsjYK7XUIPHrBqWjF-KXO_AL3J8J_UebtWSGomYvuXXbbAUefbOK4qopqQ6FzRXQs00KrKa8sfqrKMm_x71Kyqq6RbFECsHPA
$ kubectl proxy
Copy and paste the token from above into the Dashboard login page to log in.
Author: kairen
Source Code: https://github.com/kairen/kubeadm-ansible
License: Apache-2.0 license