Integrating Kafka With Spark Structured Streaming

Learn how to integrate Kafka with Spark to consume streaming data, and discover how to meet your streaming analytics needs.

Kafka is a messaging broker system that facilitates the passing of messages between producers and consumers. Spark Structured Streaming, on the other hand, consumes static and streaming data from various sources (such as Kafka, Flume, and Twitter) that can be processed and analyzed using high-level machine learning algorithms, and it pushes the results out to an external storage system. The main advantage of Structured Streaming is that the result is updated incrementally and continuously as streaming data keeps arriving.

Kafka has its own streams library (Kafka Streams), which is best suited for topic-to-topic transformations within Kafka, whereas Spark Streaming can be integrated with almost any type of system. For more detail, you can refer to this blog.

In this blog, I’ll cover an end-to-end integration of Kafka with Spark Structured Streaming, using Kafka as the source and writing the resulting stream out with Spark.

Let’s create a Maven project and add the following dependencies in pom.xml.

 <dependency>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-core_2.11</artifactId>
     <version>2.1.1</version>
 </dependency>
 <dependency>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-sql_2.11</artifactId>
     <version>2.1.1</version>
 </dependency>
 <dependency>
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka-clients</artifactId>
     <version>0.10.2.0</version>
 </dependency>
 <dependency>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
     <version>2.1.1</version>
 </dependency>

Now, we will be creating a Kafka producer that produces messages and pushes them to the topic. The consumer will be the Spark structured streaming DataFrame.

First, set the properties for the Kafka producer:

import java.util.Properties

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  • bootstrap.servers: The full list of brokers, with hostname and port, in the form host1:port,host2:port, and so on.

  • key.serializer: Serializer class for the key; it must implement the Serializer interface.

  • value.serializer: Serializer class for the value; it must implement the Serializer interface.
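To make the serializer settings concrete, here is a simplified, illustrative sketch of what Kafka’s StringSerializer does under the hood: it encodes the string as UTF-8 bytes (this ignores configurable encodings and other details of the real implementation).

```scala
import java.nio.charset.StandardCharsets

// Simplified model of Kafka's StringSerializer/StringDeserializer pair:
// keys and values travel over the wire as bytes, so a serializer is just
// a function from the typed value to Array[Byte] (and back).
object StringCodecSketch {
  def serialize(data: String): Array[Byte] =
    if (data == null) null else data.getBytes(StandardCharsets.UTF_8)

  def deserialize(bytes: Array[Byte]): String =
    if (bytes == null) null else new String(bytes, StandardCharsets.UTF_8)

  def main(args: Array[String]): Unit = {
    val roundTripped = deserialize(serialize("data from topic"))
    println(roundTripped)
  }
}
```

Any class following this shape (and implementing the Serializer interface) can be plugged in for keys or values.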

Creating a Kafka producer and sending messages to the topic over the stream:

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val topic = "mytopic"
val producer = new KafkaProducer[String, String](props)
for (count <- 0 to 10)
  producer.send(new ProducerRecord[String, String](topic, "title " + count.toString, "data from topic"))
println("Message sent successfully")
producer.close()

The send is asynchronous, and this method will return immediately once the record has been stored in the buffer of records waiting to be sent. This allows sending many records in parallel without blocking to wait for the response after each one. The result of the send is a RecordMetadata specifying the partition the record was sent to and the offset it was assigned. After sending the data, close the producer using the close method.
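The asynchronous behavior described above can be modeled with plain Scala Futures. The names below (FakeMetadata, fakeSend) are illustrative stand-ins, not the real Kafka API, but they show why many sends can be in flight at once while the metadata arrives later.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Illustrative model of an asynchronous send: the call returns a Future
// immediately, and the (partition, offset) metadata is resolved later,
// once the "broker" acknowledges the record.
final case class FakeMetadata(partition: Int, offset: Long)

object AsyncSendSketch {
  def fakeSend(value: String, offset: Long): Future[FakeMetadata] =
    Future(FakeMetadata(partition = 0, offset = offset))

  def main(args: Array[String]): Unit = {
    // Fire off eleven sends without blocking between them...
    val inFlight = (0L to 10L).map(i => fakeSend(s"title $i", i))
    // ...then wait for all of the acknowledgements at once.
    val metas = Await.result(Future.sequence(inFlight), 5.seconds)
    println(metas.last.offset)
  }
}
```

The real producer works the same way: `send` returns a `Future[RecordMetadata]`, which you can block on with `get()` or handle with a callback.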

Kafka as a Source 

Now, Spark will be a consumer of streams produced by Kafka. For this, we need to create a Spark session.

val spark = SparkSession
  .builder
  .appName("sparkConsumer")
  .config("spark.master", "local")
  .getOrCreate()

Spark gets the data from Kafka by subscribing to a particular topic, which is provided in an option. Following is the code to subscribe to a Kafka topic from a Spark stream and read it using readStream:

val dataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "mytopic")
  .load()
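Beyond `subscribe`, the Kafka source accepts a few other configuration options; the sketch below is illustrative (server and topic values are examples), with option names per the Spark Structured Streaming Kafka integration:

```scala
// Subscribe to all topics matching a regex, reading from the beginning:
val byPattern = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribePattern", "my.*")
  .option("startingOffsets", "earliest")
  .load()

// Subscribe to several topics at once with a comma-separated list:
val multipleTopics = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "mytopic,othertopic")
  .load()
```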

Printing the schema of the DataFrame:

 dataFrame.printSchema()

The output for the schema includes all the fields related to Kafka metadata.

root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

Create a Dataset from the DataFrame by casting the key and value from the topic as strings:

import org.apache.spark.sql.Dataset
import spark.implicits._

val dataSet: Dataset[(String, String)] = dataFrame
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]

Write the data in the Dataset to the console, and keep the program from exiting using the awaitTermination method:

import org.apache.spark.sql.streaming.StreamingQuery

val query: StreamingQuery = dataSet.writeStream
  .outputMode("append")
  .format("console")
  .start()
query.awaitTermination()
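As a variation, the console sink accepts a couple of display options, and awaitTermination can take a timeout so the job stops on its own; a hedged sketch (option names follow the Spark docs, values are illustrative):

```scala
val boundedQuery = dataSet.writeStream
  .outputMode("append")
  .format("console")
  .option("truncate", "false")  // show full column values
  .option("numRows", "50")      // rows printed per micro-batch
  .start()

// Block for at most two minutes; returns true if the query terminated.
val finished = boundedQuery.awaitTermination(120000L)
```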

The complete code is on my GitHub.

Originally published by Jatin Demla at https://dzone.com

