We kicked off the first part of the series by setting up a single-node Kafka cluster that was accessible only to internal clients within the same Kubernetes cluster, had no encryption, authentication, or authorization, and used temporary (ephemeral) persistence. We will keep iterating and improving on this over the course of this blog series.

This part will cover these topics:

  • Expose Kafka cluster to external applications
  • Apply TLS encryption
  • Explore Kubernetes resources behind the scenes
  • Use Kafka CLI and Go client applications to test our cluster setup

The code is available on GitHub - https://github.com/abhirockzz/kafka-kubernetes-strimzi/

What Do I Need to Try This Out?

kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl/

I will be using Azure Kubernetes Service (AKS) to demonstrate the concepts, but by and large this is independent of the Kubernetes provider (feel free to use a local setup such as minikube instead). If you want to use AKS, all you need is a Microsoft Azure account, which you can get for free if you don't have one already.

I will not be repeating some of the common sections (such as installation/setup for Helm, Strimzi, and Azure Kubernetes Service, as well as the Strimzi overview) in this or subsequent parts of the series; please refer to part one for those details.

Let’s Create an Externally Accessible Kafka Cluster

To achieve this, we just need to tweak the Strimzi Kafka resource from part 1 a little bit. I am highlighting the key part below.

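A minimal sketch of what the tweaked resource might look like (this assumes the map-style listeners syntax used by Strimzi versions prior to 0.20; the cluster name, Kafka version, and single-replica settings are placeholders rather than the exact values from part 1) - the external listener is the key addition:

YAML

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka-cluster    # placeholder cluster name
spec:
  kafka:
    version: 2.4.0          # placeholder Kafka version
    replicas: 1             # single broker, as in part 1
    listeners:
      plain: {}             # internal listener on port 9092 (no TLS)
      external:             # the tweak: expose the cluster outside Kubernetes
        type: loadbalancer  # Strimzi provisions LoadBalancer Service(s) for the brokers
        tls: true           # encrypt traffic on the external listener
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
    storage:
      type: ephemeral       # temporary persistence, carried over from part 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}

Once applied, Strimzi provisions additional LoadBalancer-type Kubernetes Services, and the external IP of the bootstrap Service becomes the address that clients outside the Kubernetes cluster connect to; tls: true on that listener ensures the external traffic is TLS-encrypted, which covers the first two items on the list above.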
