Let's Think About a Kafka Cluster Without ZooKeeper, With KIP-500

Right now, Apache Kafka uses Apache ZooKeeper to store its metadata: information such as partition assignments, topic configurations, and access control lists all lives in a ZooKeeper cluster. Managing a ZooKeeper cluster creates an additional burden on the infrastructure and the admins. With KIP-500, we are going to see a Kafka cluster without ZooKeeper, where metadata management is handled by Kafka itself.
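
To make "metadata in ZooKeeper" concrete, the sketch below reads a few of Kafka's metadata znodes directly, using the plain ZooKeeper Java client. It is a minimal illustration, not production code: the connection string localhost:2181 is a placeholder assumption, while the znode paths /brokers/ids, /brokers/topics, and /controller are the paths Kafka actually uses today.

```java
import org.apache.zookeeper.ZooKeeper;
import java.util.List;

public class KafkaZnodeExplorer {
    public static void main(String[] args) throws Exception {
        // Placeholder ensemble address; 10s session timeout; watcher events ignored here.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> { });

        // Kafka registers each live broker as an ephemeral znode under /brokers/ids.
        List<String> brokerIds = zk.getChildren("/brokers/ids", false);
        System.out.println("Live brokers: " + brokerIds);

        // Topic metadata (partitions, replica assignments) lives under /brokers/topics.
        List<String> topics = zk.getChildren("/brokers/topics", false);
        System.out.println("Topics: " + topics);

        // The broker id of the current controller is stored in the /controller znode.
        byte[] controller = zk.getData("/controller", false, null);
        System.out.println("Controller znode: " + new String(controller));

        zk.close();
    }
}
```

Every broker registration, topic change, and controller election flows through znodes like these, which is why the ZooKeeper ensemble sits on the critical path of the whole cluster.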

Before KIP-500, our Kafka setup looks like the one depicted below: a 3-node ZooKeeper cluster and a 4-node Kafka cluster. This is roughly the minimum setup for sustaining one Kafka broker failure, since a 3-node ZooKeeper ensemble keeps its quorum with one node down, and a replication factor of 3 keeps every partition available if one of the brokers dies. The orange Kafka node is the controller node.

3-Node ZooKeeper Cluster and 4-Node Kafka Cluster
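
You can ask a running cluster which broker currently holds the controller role. The following is a minimal sketch using Kafka's Java AdminClient; the bootstrap address localhost:9092 is a placeholder assumption, while the AdminClient calls themselves are the standard public API.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import java.util.Properties;

public class WhoIsController {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; point this at any broker in the cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            // All live brokers (the 4 Kafka nodes in the diagram above).
            System.out.println("Brokers:    " + cluster.nodes().get());
            // The one broker currently acting as the controller (the orange node).
            System.out.println("Controller: " + cluster.controller().get());
        }
    }
}
```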

Let's see what issues the above setup has because of its dependence on ZooKeeper:

  • Making the ZooKeeper cluster highly available is a burden in itself, and without the ZooKeeper cluster the Kafka cluster is DEAD.
  • Availability of the Kafka cluster suffers if the controller dies. Electing another Kafka broker as the controller requires pulling all the metadata from ZooKeeper, which leaves the Kafka cluster unavailable in the meantime. The more topics, and the more partitions per topic, the longer this controller failover takes.
  • Kafka supports intra-cluster replication for higher availability and durability. A partition has multiple replicas, each stored on a different broker; one replica is designated the leader and the rest are followers. If a broker fails, the partitions whose leader was on that broker temporarily become inaccessible. To continue serving client requests, Kafka automatically transfers leadership of those partitions to other replicas. This is done by the Kafka broker acting as the controller, which has to fetch the metadata for each affected partition from ZooKeeper. Because this communication between the controller and ZooKeeper happens serially, partitions stay unavailable longer when a leader broker dies.
  • When we create or delete a topic, the Kafka cluster needs to talk to ZooKeeper to propagate the updated list of topics, so it takes time for a topic creation or deletion to become visible across the cluster (see the sketch after this list).
  • The common thread running through all of the above is SCALABILITY: the more brokers, topics, and partitions the cluster has, the worse each of these problems gets.
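
To make the topic creation and deletion path above concrete, here is a minimal AdminClient sketch. The bootstrap address localhost:9092 and the topic name "orders" are placeholder assumptions; the AdminClient calls themselves are the standard public API. Pre-KIP-500, the controller must persist the resulting metadata changes to ZooKeeper before they take effect across the cluster.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class TopicLifecycle {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address for the 4-broker cluster above.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: each partition gets a leader
            // and two follower replicas, spread across different brokers.
            NewTopic topic = new NewTopic("orders", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();

            // Deletion likewise propagates through the controller (and ZooKeeper).
            admin.deleteTopics(Collections.singleton("orders")).all().get();
        }
    }
}
```

Notice that nothing in this client code mentions ZooKeeper, which is exactly the point: the ZooKeeper round trips happen server-side, on the controller, and that is the cost KIP-500 removes.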
