Edureka Fan


What are Kafka Streams?

This Edureka video on “Apache Kafka Streams” will provide you with detailed knowledge about Kafka Streams and its implementation. Below are the topics discussed in this video:

  • What is Kafka?
  • What is a Stream?
  • What exactly is Kafka Stream?
  • Apache Kafka Stream API Architecture
  • Kafka Stream Features
  • Kafka Streams Example
  • Differences between Kafka and Kafka Streams
  • Use cases of Apache Kafka Streams API

#webdev #apache #kafka

akshay L


Kafka Spark Streaming | Kafka Tutorial

In this Kafka Spark streaming tutorial you will learn what Apache Kafka is, the architecture of Apache Kafka and how to set up a Kafka cluster, what Spark is and its features, the components of Spark, and a hands-on demo of integrating Spark Streaming with Apache Kafka and integrating Flume with Apache Kafka.
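As a taste of the Spark–Kafka integration demoed in the video, here is a minimal Structured Streaming sketch in Java. It assumes the spark-sql-kafka connector is on the classpath, a local broker at localhost:9092, and a topic named demo-topic (all illustrative, not from the tutorial):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KafkaSparkStreamingDemo {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-spark-streaming-demo")
                .master("local[*]")
                .getOrCreate();

        // Subscribe to a Kafka topic as an unbounded streaming DataFrame.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "demo-topic")
                .load();

        // Kafka keys/values arrive as binary; cast them to strings and
        // continuously print each micro-batch to the console.
        events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                .writeStream()
                .format("console")
                .start()
                .awaitTermination();
    }
}
```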

#Kafka Spark Streaming #Kafka Tutorial #Kafka Training #Kafka Course #Intellipaat

Stateful Joins with the Kafka Streams Processor API

My team, Expedia Group™ Commerce Data, needed to join events coming in on two (and, in the future, more) Kafka topics to provide a real-time stream view of our bookings. This is a pretty standard requirement, but our team was not very experienced with Kafka Streams, and we had a few wrinkles that made an “out of the box” Kafka Streams join less attractive than dropping down to the Processor API.

What we needed, in a nutshell, was to:

  • Join two or more events,
  • Repartition one event to extract the proper join key,
  • Report on unjoined events,
  • Possibly purge orphaned events to a dead letter topic,
  • Have a configurable, but not fixed, join window (for expiration of unjoined events),
  • Oh, and with Kafka Streams newbies at the helm.

Processor API vs DSL

There are two approaches to writing a Kafka Streams application: the high-level Streams DSL and the lower-level Processor API.

Developers prefer the DSL for most situations because it simplifies some common use cases and lets you accomplish a lot with very little code. But you sacrifice some control when using the DSL. There’s a certain amount of magic going on under the covers that’s hidden by the KStream and KTable abstractions. And the out-of-the-box joins available between these abstractions may not fit all use cases.

The most common way I see the DSL characterized is as “expressive,” which just means “hides lots of stuff from you.” Sometimes explicit is better. And for some (like me), the “raw” Processor API just seems to fit my brain better than the DSL abstractions.

Don’t fear the Processor API

Most documentation I found around Kafka Streams leans towards using the DSL (Confluent docs state “it is recommended for most users”), but the Processor API has a much simpler interface than the DSL in many respects. You still build a stream topology, but you only use Source nodes (to read from Kafka topics), Sink nodes (to write to Kafka topics), and Processor nodes (to do stuff to Kafka events flowing through your topology). Plus the DSL is built on top of the Processor API, so if it’s good enough for the DSL, it should be good enough for our humble project (in fact, as a Confluent engineer says, “the DSL compiles down to the Processor API”).

Processor nodes have to implement Processor, which has a process method you override that takes the key and the value of the event traversing your Kafka Streams topology. Processors also have access to a ProcessorContext object, which contains useful information about the current event being processed (like which topic and partition it was consumed from) and a forward method used to send the event to a downstream node in your topology.
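To make that concrete, here is a minimal sketch of a processor using the older Processor/ProcessorContext interface described above; the AuditProcessor name and the string key/value types are illustrative, not from the post:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Tags each event with the topic/partition it was consumed from,
// then forwards it to the downstream node(s) in the topology.
public class AuditProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context; // exposes topic(), partition(), forward(), etc.
    }

    @Override
    public void process(String key, String value) {
        String origin = context.topic() + "/" + context.partition();
        context.forward(key, value + " (from " + origin + ")");
    }

    @Override
    public void close() {
        // nothing to clean up in this sketch
    }
}
```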

To illustrate the difference, here’s a comparison of doing a repartition on a stream in the DSL and the Processor API.
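The original post's code listing did not survive here, so the following is a hedged reconstruction; the topic names (bookings, bookings-by-order-id) and the choice of the first CSV field as the join key are assumptions for illustration. First the DSL version:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class DslRepartition {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> bookings = builder.stream("bookings");
        // selectKey() marks the stream for repartitioning; repartition()
        // (Kafka Streams 2.6+) materializes it through an internal topic.
        bookings.selectKey((key, value) -> value.split(",")[0])
                .repartition()
                .to("bookings-by-order-id");
        return builder.build();
    }
}
```

And the equivalent wiring in the Processor API, where every node and intermediate topic is explicit:

```java
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class ProcessorApiRepartition {
    public static Topology build() {
        Topology topology = new Topology();
        topology.addSource("Source", "bookings");
        // Re-key each event, then sink it to an explicit repartition topic.
        topology.addProcessor("Rekey", () -> new Processor<String, String>() {
            private ProcessorContext context;
            @Override
            public void init(ProcessorContext context) { this.context = context; }
            @Override
            public void process(String key, String value) {
                context.forward(value.split(",")[0], value); // new join key
            }
            @Override
            public void close() {}
        }, "Source");
        topology.addSink("RekeySink", "bookings-by-order-id", "Rekey");
        // Downstream processing would add a second source that reads
        // "bookings-by-order-id", now partitioned by the join key.
        return topology;
    }
}
```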

#kafka-streams #kafka #streaming #data-science

Mireille Von


Kafka Streams using Spring Cloud Stream | Microservices Example | Tech Primers

This video covers how to leverage Kafka Streams using Spring Cloud Stream by creating multiple Spring Boot microservices.
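As a minimal sketch of the pattern the video demonstrates, here is a functional-style Spring Cloud Stream application using the Kafka Streams binder; the bean name, topology, and binding names are assumptions for illustration, not taken from the video:

```java
import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseApplication {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseApplication.class, args);
    }

    // With spring-cloud-stream-binder-kafka-streams on the classpath, this
    // function is bound to the topics configured for process-in-0/process-out-0.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.mapValues(value -> value.toUpperCase());
    }
}
```

The input and output topics would then be set in application.yml, e.g. under spring.cloud.stream.bindings.process-in-0.destination and process-out-0.destination.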

📌 Related Links

🔗 Kafka setup:
🔗 Public Domain API:

📌 Related Videos

🔗 Spring Boot with Spring Kafka Producer example -
🔗 Spring Boot with Spring Kafka Consumer example -

📌 Related Playlist

🔗Spring Boot Primer -
🔗Spring Cloud Primer -
🔗Spring Microservices Primer -
🔗Spring JPA Primer -
🔗Java 8 Streams -
🔗Spring Security Primer -


🎬 Video Editing: FCP

🔥 Disclaimer/Policy:
The content/views/opinions posted here are solely mine, and the code samples created by me are open sourced.
You are free to use the code samples on GitHub after forking, and you can modify them for your own use.
All the videos posted here are copyrighted. You cannot redistribute videos from this channel on other channels or platforms.
#KafkaStreams #SpringCloudStream #TechPrimers

#kafka streams #kafka #spring cloud stream #spring cloud

Salma Mateos


Miscellaneous ways of Installation of Kafka on Ubuntu 18.04

I have kept this blog as short as possible, covering two commonly used ways to run Kafka producer-consumer processes over a single-node, single-broker architecture, with as few required features as possible to keep it simple.

Prerequisites:
  1. Ubuntu 18.04 server and a non-root user with sudo privileges.
  2. At least 4GB of RAM is required on the server. Installation with less RAM may cause the Kafka service to fail, with the Java Virtual Machine (JVM) throwing an “Out Of Memory” exception during startup. Even when using Docker, one needs to make sure the host machine has more than 4GB of RAM (8GB is advisable), as Kafka will consume a large part of it.
  3. OpenJDK 8/11 should be installed on the server. Kafka is written in Java, so it requires a JVM; however, in order to use the Confluent Open Source binaries, one needs to use Java 8.

**Remember**: while using sudo, one must remember the root password.
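Once the broker is running, a quick smoke test is to publish a record from a minimal Java producer; this sketch assumes a single-node broker on localhost:9092 and a topic named test-topic that already exists:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // single-node broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer after flushing pending sends.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "hello", "kafka is up"));
            producer.flush();
        }
    }
}
```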

#kafka #kafka-installation #data-streaming #big-data-streaming #kafka-connect #ubuntu 18.04

Gerhard Brink


Stateful stream processing with Apache Flink (part 1): An introduction

Apache Flink, a 4th-generation Big Data processing framework, provides robust **stateful stream processing capabilities**. So, over a few parts of this blog series, we will learn what stateful stream processing is and how we can use Flink to write a stateful streaming application.

What is stateful stream processing?

In general, stateful stream processing is an application design pattern for processing an unbounded stream of events. Stateful stream processing means a **“state”** is shared between events (stream entities), and therefore past events can influence the way current events are processed.

Let’s try to understand it with a real-world scenario. Suppose we have a system responsible for generating a report comprising the total number of vehicles that pass a toll plaza per hour/day. To achieve this, we save the count of vehicles that passed the toll plaza within one hour. That count is then accumulated with each following hour’s count to find the total number of vehicles that passed the toll plaza within 24 hours. Here we are storing a count, and that count is nothing but the “state” of the application.
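As an illustration of how such a per-plaza count might look with Flink’s keyed state in Java (a sketch under assumptions: the input is a stream of plaza-id strings, and all names here are illustrative rather than from this post):

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Keeps one running vehicle count per toll plaza; that count is the "state".
public class VehicleCounter extends KeyedProcessFunction<String, String, Long> {
    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        // Flink scopes this state to the current key (the plaza id) and
        // restores it automatically after a failure.
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("vehicle-count", Long.class));
    }

    @Override
    public void processElement(String plazaId, Context ctx, Collector<Long> out)
            throws Exception {
        Long current = count.value();
        long updated = (current == null ? 0L : current) + 1;
        count.update(updated);
        out.collect(updated); // running total for this plaza
    }
}
```

Wired up with something like plazaEvents.keyBy(id -> id).process(new VehicleCounter()), each plaza gets its own fault-tolerant counter.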

It might seem very simple, but in a distributed system stateful stream processing is very hard to achieve, and it is much more difficult to scale up because different workers need to share the state. Flink provides ease of use, high efficiency, and high reliability for **state management** in a distributed environment.

#apache flink #big data and fast data #flink #streaming #streaming solutions #big data analytics #fast data analytics #flink streaming #stateful streaming #streaming analytics