Layne Fadel

Traefik Ingress on Azure Kubernetes Service

If you have an application deployed on a Kubernetes cluster that consists of multiple microservices, you may want to expose some of them so they are accessible through the internet. While that is obviously the case for your web app service, maybe you also have some additional APIs that you want to expose.

In the world of Kubernetes, any connection to one of your microservices is done using the Service resource. Using the type LoadBalancer of the Kubernetes Service resource leverages the underlying cloud provider to create a cloud provider-specific load balancer for exposing the microservice through an external IP. The problem with that approach is that each microservice would be exposed under a separate IP address.
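
To make this concrete, here is a minimal sketch of such a Service of type LoadBalancer (the name orders-api and the ports are placeholders, not from the article):

```yaml
# Sketch: exposing a single microservice through a cloud load balancer.
# On AKS, this makes Azure provision a load balancer and assign an
# external IP to this one Service.
apiVersion: v1
kind: Service
metadata:
  name: orders-api          # placeholder microservice name
spec:
  type: LoadBalancer
  selector:
    app: orders-api         # must match the labels of the target Pods
  ports:
    - port: 80              # port reachable on the external IP
      targetPort: 8080      # port the container listens on
```

Every microservice exposed this way gets its own external IP, which is exactly the drawback just described.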

It would be much more convenient to have them exposed under one and the same host while having different paths to reach the dedicated microservice, right?

This article shows how to do that with a Kubernetes Cluster on Azure and Traefik and is a follow-up to my article about achieving the same using the Azure Application Gateway. A lot of content will be based on that article.

Introduction

Microservices can be exposed inside and outside of Kubernetes using the Kubernetes Service resource. So far, so good. But as already mentioned, if we expose them outside the cluster using a Service of type LoadBalancer, we end up with a different IP for each microservice. That is not what we want; instead, we want them exposed under one and the same host, using different paths.

This is where the Kubernetes Ingress resource comes in handy. Think of an Ingress as a layer on top of Kubernetes Services: it is the single point of entry for traffic hitting our microservices, and it routes that traffic to different Kubernetes Services based on specified rules.
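
As a minimal sketch (host, paths, and Service names are placeholders), such path-based rules look like this:

```yaml
# Sketch: one host, two paths, two backend Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: myapp.example.com          # placeholder host
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-api       # ClusterIP Service of one microservice
                port:
                  number: 80
          - path: /customers
            pathType: Prefix
            backend:
              service:
                name: customers-api    # ClusterIP Service of another microservice
                port:
                  number: 80
```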

The Kubernetes Ingress resource is an abstraction. In order to make use of it, you have to install a specific ingress controller. There are plenty of different implementations of the Kubernetes Ingress abstraction out there; NGINX and Traefik are two very popular ones in the Kubernetes and open source community, just to name a couple.

And then, of course, there are the cloud providers, where you can use resources like load balancers and gateways as a Kubernetes Ingress. Anyway, in this article we will focus on the Traefik Ingress.
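
Traefik can consume the standard Ingress resource shown above (typically by setting ingressClassName: traefik), but it also ships its own CRDs. As a sketch, assuming the Traefik v2 CRDs are installed (for example via the official Helm chart) and reusing the placeholder names from above, an equivalent IngressRoute could look like this:

```yaml
# Sketch: Traefik v2 IngressRoute routing one path to a backend Service.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-ingressroute
spec:
  entryPoints:
    - web                                    # Traefik's default HTTP entry point
  routes:
    - match: Host(`myapp.example.com`) && PathPrefix(`/orders`)
      kind: Rule
      services:
        - name: orders-api                   # placeholder backend Service
          port: 80
```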

#microservices #azure-kubernetes-service #ingress #kubernetes #azure kubernetes service

Christa Stehr

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml


Private Azure Kubernetes Service Clusters with Azure Private Links?

What if I told you that you can make your AKS cluster private? No, not just setting the ingress controller's LoadBalancer IP to a private IP to prevent internet ingress to the pods and applications, but preventing external access to the Kube API Server completely. In other words, kubectl commands cannot run over the internet, and this adds an additional layer of security to your enterprise clusters!

#terraform #azure #kubernetes-security #kubernetes #azure-kubernetes-service

Moving from Azure App Services to Azure Kubernetes Service

My experience setting up an AKS cluster, and a comparison of running applications on Kubernetes vs App Services.

A word of warning — this post is long!

My main motivation for writing this is to serve as a record of my own journey in learning Azure Kubernetes, as well as the exact list of commands, in the order they were run, in the likely case I need a reminder of what I did to get my AKS cluster working.

If you’re getting started with AKS as well, hopefully you find this useful too!

The last 3 companies I worked at all used Azure App Services for hosting their web applications. It’s a great platform when starting out, for a number of reasons:

  • It’s extremely easy to set up
  • Built-in integration with Azure DevOps
  • Works perfectly with applications written in .NET/.NET Core
  • No management overhead (i.e. no sysadmin activity required)

So if you’ve got an application consisting of one front-end with a few APIs, this set up works quite well.

When the environment starts expanding however, it can be quite a hassle to manage:

  • More and more APIs start to appear (because microservices are all the rage now).
  • Each API needs its own Azure DevOps pipeline setup, its own ARM/Terraform template customizations, application settings, and secrets setup.

Multiply the above by 30 or more, and it ends up becoming quite a hassle to deal with.

As Kubernetes is hugely popular right now (and I’ve been mucking around with it in my spare time), I decided to do a comparison of how an application’s infrastructure and architecture would differ when run on Kubernetes vs App Services.

#azure-kubernetes-service #devops #kubernetes #azure

Manage Azure Event Hubs with Azure Service Operator on Kubernetes

Azure Service Operator is an open source project to help you provision and manage Azure services using Kubernetes. Developers can use it to provision Azure services from any environment, be it Azure, any other cloud provider or on-premises — Kubernetes is the only common denominator!

It can also be included as a part of CI/CD pipelines to create, use and tear down Azure resources on-demand. Behind the scenes, all the heavy lifting is taken care of by a combination of Custom Resource Definitions which define Azure resources and the corresponding Kubernetes Operator(s) which ensure that the state defined by the Custom Resource Definition is reflected in Azure as well.

(Image: Azure Service Operator)

Read more in the recent announcement here: https://cloudblogs.microsoft.com/opensource/2020/06/25/announcing-azure-service-operator-kubernetes/

In this blog post, you will:

  • Get a high-level overview of Azure Service Operator (sometimes referred to as ASO in this blog)
  • Learn how to set it up and use it to provision Azure Event Hubs
  • Deploy apps to Kubernetes that use the Azure Event Hubs cluster

The code is available in this GitHub repo: https://github.com/abhirockzz/eventhubs-using-aso-on-k8s

Getting started….

Azure Service Operator supports many Azure services, including databases (Azure Cosmos DB, PostgreSQL, MySQL, Azure SQL, etc.), core infrastructure components (Virtual Machines, VM Scale Sets, Virtual Networks, etc.), and others as well.

It also supports Azure Event Hubs, a fully managed data streaming platform and event ingestion service with support for Apache Kafka and other tools in the Kafka ecosystem. With Azure Service Operator, you can provision and manage Azure Event Hubs namespaces, Event Hubs, and Consumer Groups.
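
For illustration, here is a sketch of what such manifests can look like, modeled on the sample manifests in the Azure Service Operator (v1) repository; the exact API version and field names may differ between releases, and the resource names, region, and resource group below are placeholders:

```yaml
# Sketch: an Event Hubs namespace plus an Event Hub, declared as ASO
# custom resources (field names follow the ASO v1 samples and may
# vary by operator version).
apiVersion: azure.microsoft.com/v1alpha1
kind: EventhubNamespace
metadata:
  name: demo-eh-namespace            # placeholder
spec:
  location: westeurope               # placeholder Azure region
  resourceGroup: demo-rg             # placeholder, assumed to already exist
  sku:
    name: Standard
    tier: Standard
    capacity: 1
---
apiVersion: azure.microsoft.com/v1alpha1
kind: Eventhub
metadata:
  name: demo-eventhub                # placeholder
spec:
  location: westeurope
  resourceGroup: demo-rg
  namespace: demo-eh-namespace       # references the namespace above
  properties:
    messageRetentionInDays: 3
    partitionCount: 2
```

Once applied with kubectl, the operator reconciles these custom resources and creates the corresponding resources in Azure, as described above.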

So, let’s dive in without further ado! Before we do that, please note that you will need the following in order to try out this tutorial:

#kubernetes #azure #azure event hubs #postgresql #mysql #azure sq