By default, pods of Kubernetes services are not accessible from the external network, but only by other pods within the Kubernetes cluster. Kubernetes has a built‑in configuration for HTTP load balancing, called Ingress, that defines rules for external connectivity to Kubernetes services. Users who need to provide external access to their Kubernetes services create an Ingress resource that defines rules, including the URI path, backing service name, and other information. The Ingress controller can then automatically program a frontend load balancer to enable Ingress configuration. The NGINX Ingress Controller for Kubernetes is what enables Kubernetes to configure NGINX and NGINX Plus for load balancing Kubernetes services.
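To make the idea of Ingress rules concrete, here is a minimal sketch of an Ingress resource. The host, path, and service name are hypothetical, and the API version matches the older Kubernetes releases used throughout this post:

```yaml
# Hypothetical minimal Ingress: routes example.com/web to a Service named "web-svc".
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /web
        backend:
          serviceName: web-svc
          servicePort: 80
```

The Ingress controller watches for resources like this and programs NGINX accordingly; we will create real versions of these rules later in the post.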
Ingress is the built‑in Kubernetes load‑balancing framework for HTTP traffic. With Ingress, you control the routing of external traffic. When running on public clouds like AWS or GKE, the load‑balancing feature is available out of the box, so you don’t need to deploy an Ingress controller yourself. In this post, I will focus on creating a Kubernetes NGINX Ingress controller running on Vagrant or any other non‑cloud solution, such as bare‑metal deployments. I deployed my test cluster on Vagrant, with kubeadm.
For this lab, let’s create two simple web apps based on the dockersamples/static-site Docker image. These are NGINX containers that display the application name, which helps us identify which app we are accessing. The result: both apps accessible through the load balancer.
Here is the app deployment resource: two identical web apps with different names and two replicas each:
⚡ cat > app-deployment.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: dockersamples/static-site
        env:
        - name: AUTHOR
          value: app1
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: dockersamples/static-site
        env:
        - name: AUTHOR
          value: app2
        ports:
        - containerPort: 80
EOF
And the same for the services:
⚡ cat > app-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: appsvc1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: appsvc2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app2
EOF
Next, we’ll create the above resources:
⚡ kubectl create -f app-deployment.yaml -f app-service.yaml
If you prefer Helm, installing the NGINX Ingress controller is easier. This article does it the hard way, but you will understand the process better.
All resources for the NGINX Ingress controller will live in a separate namespace, so let’s create it:
⚡ kubectl create namespace ingress
The first step is to create a default backend. The default backend handles all requests that are not matched by any Ingress rule:
⚡ cat > default-backend-deployment.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-backend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
EOF
And to create a default backend service:
⚡ cat > default-backend-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: default-backend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: default-backend
EOF
We will create these resources in the ingress namespace:
⚡ kubectl create -f default-backend-deployment.yaml -f default-backend-service.yaml -n=ingress
Then, we need to create an NGINX ConfigMap to enable the VTS status page on our load balancer:
⚡ cat > nginx-ingress-controller-config-map.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  labels:
    app: nginx-ingress-lb
data:
  enable-vts-status: 'true'
EOF

⚡ kubectl create -f nginx-ingress-controller-config-map.yaml -n=ingress
And here is the actual NGINX Ingress controller deployment:
⚡ cat > nginx-ingress-controller-deployment.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccount: nginx
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 5
        args:
        - /nginx-ingress-controller
        - --default-backend-service=\$(POD_NAMESPACE)/default-backend
        - --configmap=\$(POD_NAMESPACE)/nginx-ingress-controller-conf
        - --v=2
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 18080
EOF
Note the --v=2 argument, which sets the log level; it makes the controller show the NGINX config diff on start. Don’t create the NGINX controller yet.
Before we create the Ingress controller and move forward, you might need to create RBAC rules, because clusters deployed with kubeadm have RBAC enabled by default:
⚡ cat > nginx-ingress-controller-roles.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-role
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-role
subjects:
- kind: ServiceAccount
  name: nginx
  namespace: ingress
EOF

⚡ kubectl create -f nginx-ingress-controller-roles.yaml -n=ingress
Now you can create the Ingress controller as well:
⚡ kubectl create -f nginx-ingress-controller-deployment.yaml -n=ingress
If you check your pods with kubectl get pods -n=ingress, you should see the default-backend and nginx-ingress-controller pods running.
Everything is ready now. The next step is to define Ingress rules for the load balancer status page:
⚡ cat > nginx-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test.akomljen.com
    http:
      paths:
      - backend:
          serviceName: nginx-ingress
          servicePort: 18080
        path: /nginx_status
EOF
And Ingress rules for sample web apps:
⚡ cat > app-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - host: test.akomljen.com
    http:
      paths:
      - backend:
          serviceName: appsvc1
          servicePort: 80
        path: /app1
      - backend:
          serviceName: appsvc2
          servicePort: 80
        path: /app2
EOF
Note the nginx.ingress.kubernetes.io/rewrite-target: / annotation. We are using the /app1 and /app2 paths, but the apps don’t serve content at those paths, so this annotation rewrites requests to /. You can create both Ingress rules now:
⚡ kubectl create -f nginx-ingress.yaml -n=ingress
⚡ kubectl create -f app-ingress.yaml
The last step is to expose the nginx-ingress-lb deployment for external access. We will expose it with NodePort, but on a cloud provider we could also use a LoadBalancer service:
⚡ cat > nginx-ingress-controller-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
    name: http
  - port: 18080
    nodePort: 32000
    name: http-mgmt
  selector:
    app: nginx-ingress-lb
EOF

⚡ kubectl create -f nginx-ingress-controller-service.yaml -n=ingress
If you are running everything on VirtualBox, as I do, forward ports 30000 and 32000 from one Kubernetes worker node to your local machine:

⚡ VBoxManage modifyvm "worker_node_vm_name" --natpf1 "nodeport,tcp,127.0.0.1,30000,,30000"
⚡ VBoxManage modifyvm "worker_node_vm_name" --natpf1 "nodeport2,tcp,127.0.0.1,32000,,32000"
Then add the test.akomljen.com domain to your hosts file:
⚡ echo "127.0.0.1 test.akomljen.com" | sudo tee -a /etc/hosts
You can verify everything by accessing these endpoints:
http://test.akomljen.com:30000/app1
http://test.akomljen.com:30000/app2
http://test.akomljen.com:32000/nginx_status
NOTE: You can access the apps using the DNS name only, not the IP directly, because the Ingress controller routes requests based on the HTTP Host header!
Any other endpoint redirects the request to the default backend. The Ingress controller is functional now, and you can add more apps to it. For any problems during the setup, please leave a comment. Don’t forget to share this post if you find it useful.
Having an Ingress is the first step towards more automation on Kubernetes. Now you can add automatic SSL with Let’s Encrypt to increase security as well. If you don’t want to manage all those configuration files manually, I suggest you look into Helm. Installing the Ingress controller would then be only one command. Stay tuned for the next one.
Last year, we provided a list of Kubernetes tools that proved so popular that we have decided to curate another list of useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.
And more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
Technology is hard. As technologists, I think we like it that way. It’s built‑in job security, right? Well, unfortunately, the modern application world has become unproductively hard. We need to make it easier.
That’s why I like describing the current developer paradox as the need to run safely with scissors.
Running with scissors is a simple metaphor for what is the admittedly difficult ask we make of software engineers. Developers need to run. Time to market and feature velocity are critical to the success of digital businesses. As a result, we don’t want to encumber developers with processes or technology choices that slow them down. Instead we empower them to pick tools and stacks that let them deliver code to customers as quickly as possible.
But there’s a catch. In the world of fast releases, multiple daily (or hourly or minutely!) changes, and fail‑fast development, we risk introducing application downtime into digital experiences – that risk is the metaphorical scissors that make it dangerous to run fast. On some level we know it’s wrong to make developers run with scissors. But the speed upside trumps the downtime downside.
That frames the dilemma of our era: we need our developers to run with scissors, but we don’t want anybody to get hurt. Is there a solution?
At NGINX, the answer is “yes”. I’m excited to announce eight new or significantly enhanced solutions built to unleash developer speed without sacrificing the governance, visibility, and control infrastructure teams require.
As my colleague, Gus Robertson, eloquently points out in his recent blog The Essence of Sprint Is Speed, self‑service is an important part of developer empowerment. He talks about developers as the engines of digital transformation. And if they’re not presented with easy-to-use, capable tools, they take matters into their own hands. The result is shadow IT and significant infrastructure risk.
Self‑service turns this on its head. It provides infrastructure teams with a way to release the application delivery and security technologies that developers need for A/B, canary, blue‑green, and circuit‑breaker patterns. But it does so within guardrails that provide the consistency, reliability, and security needed to keep your apps running once in production.
As more and more enterprises run containerized apps in production, Kubernetes continues to solidify its position as the standard tool for container orchestration. At the same time, demand for cloud computing has been pulled forward by a couple of years because work-at-home initiatives prompted by the COVID‑19 pandemic have accelerated the growth of Internet traffic. Companies are working rapidly to upgrade their infrastructure because their customers are experiencing major network outages and overloads.
To achieve the required level of performance in cloud‑based microservices environments, you need rapid, fully dynamic software that harnesses the scalability and performance of the next‑generation hyperscale data centers. Many organizations that use Kubernetes to manage containers depend on an NGINX‑based Ingress controller to deliver their apps to users.
We are happy to announce release 1.8.0 of the NGINX Ingress Controller for Kubernetes. This release builds upon the development of our supported solution for Ingress load balancing on Kubernetes platforms, including Red Hat OpenShift, Amazon Elastic Container Service for Kubernetes (EKS), the Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), IBM Cloud Private, Diamanti, and others.
With release 1.8.0, we continue our commitment to providing a flexible, powerful, and easy-to-use Ingress Controller, which you can configure with both Kubernetes Ingress resources and NGINX Ingress resources.
Release 1.8.0 brings the following major enhancements and improvements:
The demo aims at running an application in Kubernetes behind a cloud-managed public load balancer, also known as an HTTP(S) load balancer, which corresponds to an **Ingress resource** in Kubernetes terminology. For this demo, I will be using Google Kubernetes Engine. Also, instead of using the default Ingress controller that GCP provides, I will be creating an NGINX Ingress controller which will be used by the Ingress resource. Using this NGINX Ingress controller, we will allow certain IP addresses and block others from accessing our application running in GKE. Before we start with the implementation, let us review some prerequisites.
In Kubernetes, an Ingress is an object or resource that allows access to your Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services. In GKE, when we specify kind: Ingress in a resource manifest, GKE creates an Ingress resource and makes the appropriate Google Cloud API calls to create an external HTTP(S) load balancer. The load balancer’s URL map contains host rules and path matchers that refer to one or more backend services, where each backend service corresponds to a GKE Service of type NodePort, as referenced in the Ingress.
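To make the Ingress-to-NodePort mapping described above concrete, here is a hedged sketch of the pairing: a Service of type NodePort and an Ingress whose backend references it. All names, hosts, and ports are hypothetical, and the API versions match the older releases used elsewhere in this post:

```yaml
# Hypothetical NodePort Service that an Ingress backend can reference.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
---
# Ingress whose backend refers to the NodePort Service above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc
          servicePort: 80
```

The load balancer's URL map then forwards requests for demo.example.com to the NodePort opened by demo-svc on the cluster's nodes.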
For the Ingress resource to work, the cluster must have an Ingress controller running. There are multiple Ingress controllers available that can be configured with the Ingress resource, e.g. NGINX Ingress Controller, HAProxy Ingress Controller, Traefik, Contour, etc.
We will be using the NGINX Ingress Controller for the demo.