Brad Hintz

Apache Spark’s Join Algorithms

One of the most frequently used transformations in Apache Spark is the Join operation. Joins in Apache Spark allow the developer to combine two or more data frames based on certain (sortable) keys. The syntax for writing a join is simple, but what goes on behind the curtain is sometimes lost. Internally, Apache Spark offers a handful of join algorithms and chooses one of them. Not knowing what these internal algorithms are, and which one Spark will choose, can make a simple join operation expensive.
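
To make this concrete, here is a minimal sketch of a join between two small data frames. The data, column names, and app name are purely illustrative and not from the original article.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("join-example").getOrCreate()
import spark.implicits._

// Two small, illustrative data frames that share the key column dept_id
val employees   = Seq((1, "Alice", 10), (2, "Bob", 20)).toDF("id", "name", "dept_id")
val departments = Seq((10, "Engineering"), (20, "Marketing")).toDF("dept_id", "dept_name")

// An inner equi-join on the sortable key dept_id
val joined = employees.join(departments, Seq("dept_id"), "inner")
joined.show()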

When choosing a join algorithm, Spark looks at the size of the data frames involved, the join type and condition specified, and any hint supplied, and then decides which algorithm to use. In most cases, Sort Merge Join and Shuffle Hash Join are the two workhorses that drive Spark SQL joins. But if Spark finds the size of one of the data frames to be below a certain threshold, it puts up Broadcast Join as its top contender.
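
A quick way to see which algorithm Spark picked is to print the physical plan with explain(). The snippet below is a sketch that reuses the illustrative employees and departments frames from above; broadcast() is the standard DataFrame hint for nudging Spark toward a Broadcast Hash Join.

import org.apache.spark.sql.functions.broadcast

// The physical plan names the chosen strategy, e.g. BroadcastHashJoin,
// SortMergeJoin, or ShuffledHashJoin
employees.join(departments, Seq("dept_id")).explain()

// The broadcast hint asks Spark to prefer a Broadcast Hash Join for the hinted
// side, even when its estimated size is above the automatic threshold
employees.join(broadcast(departments), Seq("dept_id")).explain()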

Broadcast Hash Join

Looking at the physical plan of a join operation, a Broadcast Hash Join in Spark looks like this:

[Figure: Joins in Apache Spark - Broadcast Join]

The plan above shows that the data frame from one of the branches is broadcast to every node that holds the other data frame. Each node then performs the final join locally. This is Spark’s per-node communication strategy.

Spark uses a Broadcast Hash Join when the size of one of the data frames is below the threshold set in spark.sql.autoBroadcastJoinThreshold. Its default value is 10 MB, but it can be changed using the following code:

spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 100 * 1024 * 1024)

This algorithm has the advantage that the other side of the join requires no shuffle. If that side is very large, skipping the shuffle brings a notable speed-up compared to the other algorithms, which would have to shuffle it.

Broadcasting large datasets can also lead to timeout errors. The configuration spark.sql.broadcastTimeout sets the maximum time a broadcast operation may take; past that, the operation fails. The default timeout is 5 minutes (300 seconds), but it can be changed as follows:

spark.conf.set("spark.sql.broadcastTimeout", time_in_sec)

Sort Merge Join

If neither of the data frames can be broadcast, Spark resorts to Sort Merge Join. This algorithm uses a node-to-node communication strategy, in which Spark shuffles the data across the cluster.

Sort Merge Join requires both sides of the join to be correctly partitioned and ordered. Generally, this is ensured by a **shuffle and sort** in both branches of the join; the sketch below shows how this surfaces in the physical plan.
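
As a sketch, disabling automatic broadcasting for the illustrative frames from above forces this path; the resulting plan should show Exchange (shuffle) and Sort steps feeding a SortMergeJoin node.

// With automatic broadcasting disabled, the join falls back to Sort Merge Join
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
employees.join(departments, Seq("dept_id")).explain()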

#apache spark #scala #tech blogs #broadcast join #join operations #join optimization #joins in spark #shuffled hash join #sort merge join

Mike Kozey

Test_cov_console: Flutter Console Coverage Test

Flutter Console Coverage Test

This small Dart tool generates a Flutter coverage test report in the console.

How to install

Add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dev_dependencies:
  test_cov_console: ^0.2.2

How to run

Run the following command to make sure all Flutter libraries are up to date:

flutter pub get
Running "flutter pub get" in coverage...                            0.5s

Run the following command to generate lcov.info in the coverage directory:

flutter test --coverage
00:02 +1: All tests passed!

Run the tool to generate the report from lcov.info:

flutter pub run test_cov_console
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
 print_cov_constants.dart                    |    0.00 |    0.00 |    0.00 |    no unit testing|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Optional parameters

If not given a FILE, "coverage/lcov.info" will be used.
-f, --file=<FILE>                      The target lcov.info file to be reported
-e, --exclude=<STRING1,STRING2,...>    A list of contains string for files without unit testing
                                       to be excluded from report
-l, --line                             It will print Lines & Uncovered Lines only
                                       Branch & Functions coverage percentage will not be printed
-i, --ignore                           It will not print any file without unit testing
-m, --multi                            Report from multiple lcov.info files
-c, --csv                              Output to CSV file
-o, --output=<CSV-FILE>                Full path of output CSV file
                                       If not given, "coverage/test_cov_console.csv" will be used
-t, --total                            Print only the total coverage
                                       Note: it will ignore all other option (if any), except -m
-p, --pass=<MINIMUM>                   Print only the whether total coverage is passed MINIMUM value or not
                                       If the value >= MINIMUM, it will print PASSED, otherwise FAILED
                                       Note: it will ignore all other option (if any), except -m
-h, --help                             Show this help

Example: run the tool with parameters

flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Report for multiple lcov.info files (-m, --multi)

It supports running on multiple lcov.info files with the following directory structures:
1. No root module
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
2. With root module
<root>/coverage/lcov.info
<root>/lib/src
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
You must run test_cov_console in the <root> directory; the report will be grouped by module. Here is the sample output for the 'with root module' directory structure:
flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock --multi
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_a -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_b -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Output to CSV file (-c, --csv, -o, --output)

flutter pub run test_cov_console -c --output=coverage/test_coverage.csv

Sample CSV output file:
File,% Branch,% Funcs,% Lines,Uncovered Line #s
lib/,,,,
test_cov_console.dart,0.00,0.00,0.00,no unit testing
lib/src/,,,,
parser.dart,100.00,100.00,97.22,"97"
parser_constants.dart,100.00,100.00,100.00,""
print_cov.dart,100.00,100.00,82.91,"29,49,51,52,171,174,177,180,183,184,185,186,187,188,279,324,325,387,388,389,390,391,392,393,394,395,398"
print_cov_constants.dart,0.00,0.00,0.00,no unit testing
All files with unit testing,100.00,100.00,86.07,""

Installing

Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate test_cov_console

Use it

The package has the following executables:

$ test_cov_console

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add test_cov_console

With Flutter:

 $ flutter pub add test_cov_console

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  test_cov_console: ^0.2.2

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:test_cov_console/test_cov_console.dart';

example/lib/main.dart

import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
        // This makes the visual density adapt to the platform that you run
        // the app on. For desktop platforms, the controls will be smaller and
        // closer together (more dense) than on mobile platforms.
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key? key, required this.title}) : super(key: key);

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.

  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    //
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          //
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          //
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headline4,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

Author: DigitalKatalis
Source Code: https://github.com/DigitalKatalis/test_cov_console 
License: BSD-3-Clause license

#flutter #dart #test 

Edureka Fan

What is Apache Spark? | Apache Spark Python | Spark Training

This Edureka “What is Apache Spark?” video will help you understand the architecture of Spark in depth. It includes an example to help you understand what Python and Apache Spark are.

#big-data #apache-spark #developer #apache #spark

Gunjan Khaitan

Apache Spark Tutorial For Beginners - Apache Spark Full Course

This full course video on Apache Spark will help you learn the basics of Big Data, what Apache Spark is, and the architecture of Apache Spark. Then, you will understand how to install Apache Spark on Windows and Ubuntu. You will look at the important components of Spark, such as Spark Streaming, Spark MLlib, and Spark SQL. Finally, you will get an idea of how to implement Spark with Python in the PySpark tutorial and look at some of the important Apache Spark interview questions. Now, let’s get started and learn Apache Spark in detail.

Below topics are explained in this Apache Spark Full Course:

  1. Animated Video 01:15
  2. History of Spark 06:48
  3. What is Spark 07:28
  4. Hadoop vs spark 08:32
  5. Components of Apache Spark 14:14
  6. Spark Architecture 33:26
  7. Applications of Spark 40:05
  8. Spark Use Case 42:08
  9. Running a Spark Application 44:08
  10. Apache Spark installation on Windows 01:01:03
  11. Apache Spark installation on Ubuntu 01:31:54
  12. What is Spark Streaming 01:49:31
  13. Spark Streaming data sources 01:50:39
  14. Features of Spark Streaming 01:52:19
  15. Working of Spark Streaming 01:52:53
  16. Discretized Streams 01:54:03
  17. caching/persistence 02:02:17
  18. checkpointing in spark streaming 02:04:34
  19. Demo on Spark Streaming 02:18:27
  20. What is Spark MLlib 02:47:29
  21. What is Machine Learning 02:49:14
  22. Machine Learning Algorithms 02:51:38
  23. Spark MLlib Tools 02:53:01
  24. Spark MLlib Data Types 02:56:42
  25. Machine Learning Pipelines 03:09:05
  26. Spark MLlib Demo 03:18:38
  27. What is Spark SQL 04:01:40
  28. Spark SQL Features 04:03:52
  29. Spark SQL Architecture 04:07:43
  30. Spark SQL Data Frame 04:09:59
  31. Spark SQL Data Source 04:11:55
  32. Spark SQL Demo 04:23:00
  33. What is PySpark 04:52:03
  34. PySpark Features 04:58:02
  35. PySpark with Python and Scala 04:58:54
  36. PySpark Contents 05:00:35
  37. PySpark Subpackages 05:40:10
  38. Companies using PySpark 05:41:16
  39. PySpark Demo 05:41:49
  40. Spark Interview Questions 05:50:43

#bigdata #apache #spark #apache-spark

Anil Sakhiya

Apache Spark For Beginners In 3 Hours | Apache Spark Training

In this Apache Spark for Beginners tutorial, we will have an overview of Spark in Big Data. We will start with an introduction to Apache Spark programming and then move on to the history of Spark. Moreover, we will learn why Spark is needed; the course covers everything an individual needs to master this skill. In this Apache Spark tutorial, you will not only learn Spark from the basics but also get to know the Spark architecture and its components, such as Spark Core, Spark Programming, Spark SQL, Spark Streaming, and much more.

This “Spark Tutorial” will help you comprehensively learn all the concepts of Apache Spark. Apache Spark has a bright future. Many companies have recognized the power of Spark and quickly started working with it. The primary importance of Apache Spark in the Big Data industry comes from its in-memory data processing. Spark can also handle many analytics challenges because of its low-latency in-memory data processing capability.

Spark’s shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries) or Python.

This Spark tutorial will comprise of the following topics:

  • 00:00:00 - Introduction
  • 00:00:52 - Spark Fundamentals
  • 00:23:11 - Spark Architecture
  • 01:01:08 - Spark Demo

#apache-spark #apache #spark #big-data #developer