Samuel Tucker

January 22, 2022

Arrow DataFusion: Apache Arrow DataFusion and Ballista Query Engines

DataFusion

DataFusion is an extensible query execution framework, written in Rust, that uses Apache Arrow as its in-memory format.

DataFusion supports both an SQL and a DataFrame API for building logical query plans, as well as a query optimizer and an execution engine capable of parallel execution against partitioned data sources (CSV and Parquet) using threads.

DataFusion also supports distributed query execution via the Ballista crate.
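
For example, here is a minimal sketch of running a distributed query against a Ballista cluster. It assumes a Ballista scheduler is reachable on localhost:50050, and the API names follow the Ballista 0.6-era release (exact signatures vary between versions):

use ballista::prelude::*;
use datafusion::prelude::CsvReadOptions;

#[tokio::main]
async fn main() -> Result<()> {
  // cluster-wide settings, such as the number of shuffle partitions
  let config = BallistaConfig::builder()
      .set("ballista.shuffle.partitions", "4")
      .build()?;

  // connect to the Ballista scheduler instead of executing locally
  // (host and port are assumptions for this sketch)
  let ctx = BallistaContext::remote("localhost", 50050, &config);

  ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new()).await?;

  // the query is planned locally and executed across the cluster's executors
  let df = ctx.sql("SELECT a, MIN(b) FROM example GROUP BY a").await?;
  df.show().await?;
  Ok(())
}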

Use Cases

DataFusion is used to create modern, fast, and efficient data pipelines, ETL processes, and database systems that need the performance of Rust and Apache Arrow and want to provide their users the convenience of an SQL interface or a DataFrame API.

Why DataFusion?

  • High Performance: Leveraging Rust and Arrow's memory model, DataFusion achieves very high performance.
  • Easy to Connect: Being part of the Apache Arrow ecosystem (Arrow, Parquet, and Flight), DataFusion works well with the rest of the big data ecosystem.
  • Easy to Embed: Allowing extension at almost any point in its design, DataFusion can be tailored for your specific use case.
  • High Quality: Extensively tested, both on its own and with the rest of the Arrow ecosystem, DataFusion can be used as the foundation for production systems.

Known Uses

Here are some of the projects known to use DataFusion, along with projects that adapt to or serve as plugins to it.

(If you know of another project, please submit a PR to add a link!)

Example Usage

Run a SQL query against data stored in a CSV:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  // register the table
  let mut ctx = ExecutionContext::new();
  ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new()).await?;

  // create a plan to run a SQL query
  let df = ctx.sql("SELECT a, MIN(b) FROM example GROUP BY a LIMIT 100").await?;

  // execute and print results
  df.show().await?;
  Ok(())
}

Use the DataFrame API to process data stored in a CSV:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  // create the dataframe
  let mut ctx = ExecutionContext::new();
  let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new()).await?;

  let df = df.filter(col("a").lt_eq(col("b")))?
          .aggregate(vec![col("a")], vec![min(col("b"))])?;

  // execute and print results
  df.show_limit(100).await?;
  Ok(())
}

Both examples produce the following output:

+---+--------+
| a | MIN(b) |
+---+--------+
| 1 | 2      |
+---+--------+

Using DataFusion as a library

DataFusion is published on crates.io, and is well documented on docs.rs.

To get started, add the following to your Cargo.toml file:

[dependencies]
datafusion = "6.0.0"

Using DataFusion as a binary

DataFusion also includes a simple interactive command-line SQL utility. See the CLI reference for more information.
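
For example, a short datafusion-cli session querying the CSV file from the examples above might look like this (a sketch; exact CLI syntax may differ between releases):

$ datafusion-cli
> CREATE EXTERNAL TABLE example STORED AS CSV WITH HEADER ROW LOCATION 'tests/example.csv';
> SELECT a, MIN(b) FROM example GROUP BY a;
+---+--------+
| a | MIN(b) |
+---+--------+
| 1 | 2      |
+---+--------+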

Roadmap

A quarterly roadmap will be published to give the DataFusion community visibility into the priorities of the project's contributors. This roadmap is not binding.

2022 Q1

DataFusion Core

  • Publish official Arrow2 branch
  • Implement a memory manager (e.g., to enable spilling to disk as needed)

Benchmarking

  • Inclusion in db-benchmark with all queries covered
  • All TPC-H queries covered

Performance Improvements

  • Predicate evaluation
  • Improve multi-column comparisons (that can't be vectorized at the moment)
  • Null constant support

New Features

  • Read JSON as a table
  • Simplify DDL with datafusion-cli
  • Add the Decimal128 data type and attendant features such as Arrow kernel and UDF support
  • Add a new experimental e-graph-based optimizer

Ballista

  • Begin work on design documents and plan / priorities for development

Extensions (datafusion-contrib)

  • Stable S3 support
  • Begin design discussions and prototyping of a stream provider

Beyond 2022 Q1

There is no clear timeline for the items below, but community members have expressed interest in working on these topics.

DataFusion Core

  • Custom SQL support
  • Split DataFusion into multiple crates
  • Push-based query execution and code generation

Ballista

  • Evolve architecture so that it can be deployed in a multi-tenant cloud native environment
  • Ensure Ballista is scalable, elastic, and stable for production usage
  • Develop distributed ML capabilities

Status

General

  •  SQL Parser
  •  SQL Query Planner
  •  Query Optimizer
  •  Constant folding
  •  Join reordering
  •  Limit pushdown
  •  Projection pushdown
  •  Predicate pushdown
  •  Type coercion
  •  Parallel query execution

SQL Support

  •  Projection
  •  Filter (WHERE)
  •  Filter post-aggregate (HAVING)
  •  Limit
  •  Aggregate
  •  Common math functions
  •  cast
  •  try_cast
  •  VALUES lists
  • Postgres-compatible string functions
    •  ascii
    •  bit_length
    •  btrim
    •  char_length
    •  character_length
    •  chr
    •  concat
    •  concat_ws
    •  initcap
    •  left
    •  length
    •  lpad
    •  ltrim
    •  octet_length
    •  regexp_replace
    •  repeat
    •  replace
    •  reverse
    •  right
    •  rpad
    •  rtrim
    •  split_part
    •  starts_with
    •  strpos
    •  substr
    •  to_hex
    •  translate
    •  trim
  • Miscellaneous/Boolean functions
    •  nullif
  • Approximation functions
    •  approx_distinct
  • Common date/time functions
  • Nested functions
    •  Array of columns
  •  Schema Queries
    •  SHOW TABLES
    •  SHOW COLUMNS
    •  information_schema.{tables, columns}
    •  information_schema other views
  •  Sorting
  •  Nested types
  •  Lists
  •  Subqueries
  •  Common table expressions
  •  Set Operations
    •  UNION ALL
    •  UNION
    •  INTERSECT
    •  INTERSECT ALL
    •  EXCEPT
    •  EXCEPT ALL
  •  Joins
    •  INNER JOIN
    •  LEFT JOIN
    •  RIGHT JOIN
    •  FULL JOIN
    •  CROSS JOIN
  •  Window (see the sketch after this list)
    •  Empty window
    •  Common window functions
    •  Window with PARTITION BY clause
    •  Window with ORDER BY clause
    •  Window with FILTER clause
    •  Window with custom WINDOW FRAME
    •  UDF and UDAF for window functions
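
As a sketch of the window support listed above, the following query ranks rows within each value of a, using the example table from earlier (the alias rn is an assumption):

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  let mut ctx = ExecutionContext::new();
  ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new()).await?;

  // ROW_NUMBER with both PARTITION BY and ORDER BY clauses
  let df = ctx
    .sql("SELECT a, b, ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) AS rn FROM example")
    .await?;
  df.show().await?;
  Ok(())
}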

Data Sources

  •  CSV
  •  Parquet primitive types
  •  Parquet nested types

Extensibility

DataFusion is designed to be extensible at all points. To that end, you can provide your own custom implementations of the following (a UDF sketch follows the list):

  •  User Defined Functions (UDFs)
  •  User Defined Aggregate Functions (UDAFs)
  •  User Defined Table Source (TableProvider) for tables
  •  User Defined Optimizer passes (plan rewrites)
  •  User Defined LogicalPlan nodes
  •  User Defined ExecutionPlan nodes
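
For instance, here is a minimal sketch of a scalar UDF that adds 1.0 to a Float64 column and registers it for use from SQL. The function name add_one is hypothetical, and the module paths follow the 6.0-era crate layout (they have moved in later releases):

use std::sync::Arc;

use datafusion::arrow::array::{ArrayRef, Float64Array};
use datafusion::arrow::datatypes::DataType;
use datafusion::physical_plan::functions::{make_scalar_function, Volatility};
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  let mut ctx = ExecutionContext::new();

  // the kernel operates directly on Arrow arrays
  let add_one = make_scalar_function(|args: &[ArrayRef]| {
    let input = args[0]
      .as_any()
      .downcast_ref::<Float64Array>()
      .expect("expected a Float64 column");
    let result: Float64Array = input.iter().map(|v| v.map(|x| x + 1.0)).collect();
    Ok(Arc::new(result) as ArrayRef)
  });

  // register the UDF so SQL and DataFrame queries can call add_one(...)
  ctx.register_udf(create_udf(
    "add_one",
    vec![DataType::Float64],
    Arc::new(DataType::Float64),
    Volatility::Immutable,
    add_one,
  ));

  ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new()).await?;
  let df = ctx.sql("SELECT add_one(CAST(b AS DOUBLE)) FROM example").await?;
  df.show().await?;
  Ok(())
}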

Rust Version Compatibility

This crate is tested with the latest stable version of Rust. We do not currently test against other, older versions of the Rust compiler.

Supported SQL

This library currently supports many SQL constructs, including the following (a combined example follows the list):

  • CREATE EXTERNAL TABLE X STORED AS PARQUET LOCATION '...'; to register a table's location
  • SELECT ... FROM ... together with any expression
  • ALIAS to name an expression
  • CAST to change types, including e.g. Timestamp(Nanosecond, None)
  • Many mathematical unary and binary expressions such as +, /, sqrt, tan, >=.
  • WHERE to filter
  • GROUP BY together with one of the following aggregations: MIN, MAX, COUNT, SUM, AVG, CORR, VAR, COVAR, STDDEV (sample and population)
  • ORDER BY together with an expression and optional ASC or DESC and also optional NULLS FIRST or NULLS LAST
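
A sketch combining several of these constructs; the table name, file path, and column names here are hypothetical:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  let mut ctx = ExecutionContext::new();

  // register a Parquet-backed table via DDL
  ctx.sql("CREATE EXTERNAL TABLE trips STORED AS PARQUET LOCATION 'data/trips.parquet'")
    .await?;

  // CAST, WHERE, GROUP BY with an aggregation, and ORDER BY ... DESC NULLS LAST
  let df = ctx
    .sql(
      "SELECT passenger_count, AVG(CAST(fare AS DOUBLE)) AS avg_fare \
       FROM trips \
       WHERE fare > 0 \
       GROUP BY passenger_count \
       ORDER BY avg_fare DESC NULLS LAST",
    )
    .await?;
  df.show().await?;
  Ok(())
}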

Supported Functions

DataFusion strives to implement a subset of the PostgreSQL SQL dialect where possible. We explicitly choose a single dialect to maximize interoperability with other tools and to allow reuse of PostgreSQL documentation and tutorials as much as possible.

Currently, only a subset of the PostgreSQL dialect is implemented, and we will document any deviations.

Schema Metadata / Information Schema Support

DataFusion supports showing metadata about the available tables. This information can be accessed using the views of the ISO SQL information_schema schema or the DataFusion-specific SHOW TABLES and SHOW COLUMNS commands.

More information can be found in the Postgres docs.
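
Note that in the Rust API these views are opt-in. A minimal sketch, assuming the 6.0-era ExecutionConfig API:

use datafusion::prelude::*;

fn context_with_information_schema() -> ExecutionContext {
  // enable the information_schema views on the context configuration
  let config = ExecutionConfig::new().with_information_schema(true);
  ExecutionContext::with_config(config)
}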

To show tables available for use in DataFusion, use the SHOW TABLES command or the information_schema.tables view:

> show tables;
+---------------+--------------------+------------+------------+
| table_catalog | table_schema       | table_name | table_type |
+---------------+--------------------+------------+------------+
| datafusion    | public             | t          | BASE TABLE |
| datafusion    | information_schema | tables     | VIEW       |
+---------------+--------------------+------------+------------+

> select * from information_schema.tables;

+---------------+--------------------+------------+--------------+
| table_catalog | table_schema       | table_name | table_type   |
+---------------+--------------------+------------+--------------+
| datafusion    | public             | t          | BASE TABLE   |
| datafusion    | information_schema | TABLES     | SYSTEM TABLE |
+---------------+--------------------+------------+--------------+

To show the schema of a table in DataFusion, use the SHOW COLUMNS command or the information_schema.columns view:

> show columns from t;
+---------------+--------------+------------+-------------+-----------+-------------+
| table_catalog | table_schema | table_name | column_name | data_type | is_nullable |
+---------------+--------------+------------+-------------+-----------+-------------+
| datafusion    | public       | t          | a           | Int32     | NO          |
| datafusion    | public       | t          | b           | Utf8      | NO          |
| datafusion    | public       | t          | c           | Float32   | NO          |
+---------------+--------------+------------+-------------+-----------+-------------+

> select table_name, column_name, ordinal_position, is_nullable, data_type from information_schema.columns;
+------------+-------------+------------------+-------------+-----------+
| table_name | column_name | ordinal_position | is_nullable | data_type |
+------------+-------------+------------------+-------------+-----------+
| t          | a           | 0                | NO          | Int32     |
| t          | b           | 1                | NO          | Utf8      |
| t          | c           | 2                | NO          | Float32   |
+------------+-------------+------------------+-------------+-----------+

Supported Data Types

DataFusion uses Arrow, and thus the Arrow type system, for query execution. The SQL types from sqlparser-rs are mapped to Arrow types according to the following table:

+---------------+---------------------------------+
| SQL Data Type | Arrow DataType                  |
+---------------+---------------------------------+
| CHAR          | Utf8                            |
| VARCHAR       | Utf8                            |
| UUID          | Not yet supported               |
| CLOB          | Not yet supported               |
| BINARY        | Not yet supported               |
| VARBINARY     | Not yet supported               |
| DECIMAL       | Float64                         |
| FLOAT         | Float32                         |
| SMALLINT      | Int16                           |
| INT           | Int32                           |
| BIGINT        | Int64                           |
| REAL          | Float32                         |
| DOUBLE        | Float64                         |
| BOOLEAN       | Boolean                         |
| DATE          | Date32                          |
| TIME          | Time64(TimeUnit::Millisecond)   |
| TIMESTAMP     | Timestamp(TimeUnit::Nanosecond) |
| INTERVAL      | Not yet supported               |
| REGCLASS      | Not yet supported               |
| TEXT          | Not yet supported               |
| BYTEA         | Not yet supported               |
| CUSTOM        | Not yet supported               |
| ARRAY         | Not yet supported               |
+---------------+---------------------------------+

Roadmap

Please see the Roadmap for information on where the project is headed.

Architecture Overview

There is no formal document describing DataFusion's architecture yet, but the following presentations offer a good overview of its components and how they interact.

  • (March 2021): The DataFusion architecture is described in Query Engine Design and the Rust-Based DataFusion in Apache Arrow: recording (DataFusion content starts ~15 minutes in) and slides
  • (February 2021): How DataFusion is used within the Ballista project is described in Ballista: Distributed Compute with Rust and Apache Arrow: recording

Developer's guide

Please see the Developers Guide for information about developing DataFusion.

Download Details: 
Author: apache
Source Code: https://github.com/apache/arrow-datafusion 
License: Apache-2.0
 

#python #rust #sql #bigdata #arrow #dataframe #datafusion #apache 

#insights #apache #apache arrow