<strong>Apache Spark is one of the most popular open source tools for big data. Learn how to use it to ingest data from a remote MongoDB server.</strong>
$ wget http://apache.spinellicreations.com/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
$ tar -xf spark-2.4.0-bin-hadoop2.7.tgz
$ cd spark-2.4.0-bin-hadoop2.7
Create a spark-defaults.conf file by copying spark-defaults.conf.template in conf/, and add the MongoDB connector dependency line to it.
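The exact configuration line did not survive in this copy of the article. Assuming the same connector version passed to --packages later in this walkthrough, a likely form is:

```
spark.jars.packages   org.mongodb.spark:mongo-spark-connector_2.11:2.4.0
```

spark.jars.packages is Spark's standard property for declaring Maven dependencies in spark-defaults.conf; it is the configuration-file equivalent of the --packages command-line flag used below.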
We use the MongoDB Spark Connector. First, make sure the Mongo instance on the remote server has bindIp set to the appropriate value: it must listen on the server's local IP, not just localhost. The root username and password in the commands below stand for the credentials of your authenticated Mongo database.
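As a sketch, binding mongod to both the loopback interface and the private IP would look like this in /etc/mongod.conf (192.168.1.32 is the example server address used throughout this article; adjust it for your network):

```yaml
# /etc/mongod.conf (YAML format used by MongoDB 3.x and later)
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.1.32   # listen on localhost and the private IP
```

Restart mongod after changing this so the new bind address takes effect.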
192.168.1.32 is your remote server's private IP (i.e., the server where Mongo is running). We are reading the oplog.rs collection in the local database; change these to suit your deployment. Similarly, we are writing the outputs to the sparkoutput database.
spark-2.4.0-bin-hadoop2.7]$ ./bin/pyspark --conf "spark.mongodb.input.uri=mongodb://root:password@192.168.1.32:27017/local.oplog.rs?readPreference=primaryPreferred" --conf "spark.mongodb.output.uri=mongodb://root:password@192.168.1.32:27017/sparkoutput" --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.0
Python 2.7.5 (default, Oct 30 2018, 23:45:53)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Ivy Default Cache set to: /home/pkathi2/.ivy2/cache
The jars for the packages stored in: /home/pkathi2/.ivy2/jars
:: loading settings :: url = jar:file:/home/pkathi2/spark-2.4.0-bin-hadoop2.7/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.mongodb.spark#mongo-spark-connector_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-33a37e02-1a24-498d-9217-e7025eeebd10;1.0
found org.mongodb.spark#mongo-spark-connector_2.11;2.4.0 in central
found org.mongodb#mongo-java-driver;3.9.0 in central
:: resolution report :: resolve 256ms :: artifacts dl 5ms
:: modules in use:
org.mongodb#mongo-java-driver;3.9.0 from central in [default]
org.mongodb.spark#mongo-spark-connector_2.11;2.4.0 from central in [default]
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
| default | 2 | 0 | 0 | 0 || 2 | 0 |
:: retrieving :: org.apache.spark#spark-submit-parent-33a37e02-1a24-498d-9217-e7025eeebd10
0 artifacts copied, 2 already retrieved (0kB/6ms)
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/
Using Python version 2.7.5
SparkSession available as 'spark'.
>>> from pyspark.sql import SparkSession
>>> my_spark = SparkSession \
...     .builder \
...     .config("spark.mongodb.input.uri", "mongodb://root:password@192.168.1.32:27017/local.oplog.rs?authSource=admin") \
...     .config("spark.mongodb.output.uri", "mongodb://root:password@192.168.1.32:27017/sparkoutput?authSource=admin") \
...     .getOrCreate()
Make sure you are using the correct authentication source via the authSource parameter (i.e., the database against which you authenticate yourself in the Mongo server; admin in this example).
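One detail worth noting when building such URIs by hand: credentials containing special characters must be percent-encoded before being embedded in the connection string. A minimal plain-Python sketch (the mongo_uri helper and the pass@word password are made up for illustration, not part of the original article):

```python
from urllib.parse import quote_plus

def mongo_uri(user, password, host, port, db, auth_source="admin"):
    """Build a MongoDB connection URI, percent-encoding special
    characters in the credentials so they cannot break the URI."""
    return "mongodb://%s:%s@%s:%d/%s?authSource=%s" % (
        quote_plus(user), quote_plus(password), host, port, db, auth_source)

# The '@' in the password is escaped as %40, so it cannot be confused
# with the '@' that separates the credentials from the host.
print(mongo_uri("root", "pass@word", "192.168.1.32", 27017, "sparkoutput"))
```

The resulting string can be passed unchanged to the spark.mongodb.input.uri or spark.mongodb.output.uri settings shown above.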
Now you can perform queries on your remote Mongo collection through the Spark instance. For example, the query below reads the collection and prints its schema.
>>> df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
>>> df.printSchema()
root
 |-- h: long (nullable = true)
 |-- ns: string (nullable = true)
 |-- o: struct (nullable = true)
 |    |-- $set: struct (nullable = true)
 |    |    |-- lastUse: timestamp (nullable = true)
 |    |-- $v: integer (nullable = true)
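To interpret those fields: ns is the namespace (database.collection) the operation touched, and o holds the operation document, where $set lists the fields an update modified. A plain-Python sketch with a made-up entry shaped like that schema (the updated_fields helper and the sample values are illustrative, not taken from the article):

```python
# A made-up document shaped like the oplog schema printed above.
entry = {
    "h": 1234567890,                  # operation identifier
    "ns": "mydb.sessions",            # namespace: database.collection
    "o": {                            # the operation document
        "$set": {"lastUse": "2019-04-01T10:00:00Z"},
        "$v": 1,                      # oplog entry format version
    },
}

def updated_fields(op):
    """Return the names of fields modified by an update-style oplog entry."""
    return sorted(op.get("o", {}).get("$set", {}))

print(updated_fields(entry))  # the fields this update touched
```

The same field access translates directly to DataFrame column expressions (e.g., df.select("ns", "o.$set")) once the data is loaded through the connector.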
Originally published by Pradeeban Kathiravelu at https://dzone.com