Flo D'Amore

Introduction to Apache Arrow with Rust

This article explores what I have learned while working with Apache Arrow in Rust

For the past few months, I've been working on an application responsible for processing in-memory data. The project excites me for two reasons: it is written entirely in Rust, and it has given me the opportunity to learn about new topics and libraries. This article explores what I have learned while working with Apache Arrow.

When I first started working on the project, I was not aware of Apache Arrow. I just needed a way to aggregate data as efficiently as possible. I even built a proof-of-concept that provided most of the functionality I needed, including joins. It was a rough draft, but I could already tell that there were performance issues. One fundamental problem was the cost of running aggregation functions over a data structure that represented a data set in a row-based format: column-oriented operations like filtering and mathematical transformations were expensive. There were a few other concerns, so I did some more research, which eventually led me to Apache Arrow.

What is Apache Arrow?

Apache Arrow is a language-agnostic software development platform for building applications that process and transport large data sets. Not only does it provide a column-oriented data format, it also provides several helpful companion libraries and a developer ecosystem maintained by the Apache Software Foundation.

Memory Format

The core feature of Apache Arrow is its in-memory columnar data format, a specification for structuring tabular data sets in memory that comes with a well-defined type system. This makes the format an ideal building block for projects like database systems and data frame libraries. One major benefit of this memory format is that it excels at processing large chunks of data and enables vectorization using SIMD operations.
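
To make this concrete, here is a minimal sketch of building a columnar record batch with the Rust arrow crate. This is illustrative only: the column names and values are made up, and exact module paths vary between arrow versions.

use std::sync::Arc;
use arrow::array::{Int32Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

fn main() -> arrow::error::Result<()> {
    // Each column is a single contiguous Arrow array.
    let ids = Int32Array::from(vec![1, 2, 3]);
    let names = StringArray::from(vec!["a", "b", "c"]);

    // The schema expresses Arrow's well-defined type system.
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int32, false),
        Field::new("name", DataType::Utf8, false),
    ]));

    // A RecordBatch is a set of equal-length columns sharing a schema.
    let batch = RecordBatch::try_new(schema, vec![Arc::new(ids), Arc::new(names)])?;
    println!("{} rows x {} columns", batch.num_rows(), batch.num_columns());
    Ok(())
}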

Libraries

Other libraries are provided as companions to Apache Arrow, offering common functionality that you wouldn't want to implement yourself. Two Rust-specific libraries I found helpful were DataFusion and Arrow Flight. DataFusion is a query engine built on Apache Arrow that provides data frame and SQL query APIs. Arrow Flight is an RPC framework for transporting Arrow data across a network. I'll go into more detail about these libraries in later articles.

Developer Ecosystem

One important factor in deciding on Apache Arrow was the developer ecosystem. Apache Arrow is maintained by the Apache Software Foundation which provides a governing body and decision-making process. The foundation also works to maintain a community of open-source developers that is open to everyone.

Rust Arrow Implementation

There are multiple implementations of Apache Arrow, but I will focus on the Rust version. This section is broken into four parts: low-level arrays, high-level constructs, data readers, and compute kernels. I won't cover every aspect of the library, just enough to give a good idea of how it can be used.
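
To give a taste of the compute kernels before diving in, here is a small sketch of element-wise and aggregate kernels from the arrow crate. Kernel module paths have moved between arrow versions, so treat the exact paths as approximate.

use arrow::array::Int32Array;
use arrow::compute;

fn main() -> arrow::error::Result<()> {
    let a = Int32Array::from(vec![1, 2, 3, 4]);
    let b = Int32Array::from(vec![10, 20, 30, 40]);

    // Element-wise kernel: operates on whole columns at once,
    // which is what makes SIMD vectorization possible.
    let sums = compute::add(&a, &b)?;
    assert_eq!(sums.value(3), 44);

    // Aggregate kernel: returns None if all values are null.
    assert_eq!(compute::sum(&a), Some(10));
    Ok(())
}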

#rust #rustlang #programming


Serde Rust: Serialization Framework for Rust

Serde

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.


Serde in action

Add the following to your Cargo.toml (the example below can also be run in the Rust Playground):

[dependencies]

# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }

# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"

 

use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let point = Point { x: 1, y: 2 };

    // Convert the Point to a JSON string.
    let serialized = serde_json::to_string(&point).unwrap();

    // Prints serialized = {"x":1,"y":2}
    println!("serialized = {}", serialized);

    // Convert the JSON string back to a Point.
    let deserialized: Point = serde_json::from_str(&serialized).unwrap();

    // Prints deserialized = Point { x: 1, y: 2 }
    println!("deserialized = {:?}", deserialized);
}

Getting help

Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous help, consider the [rust] tag on StackOverflow, the /r/rust subreddit, which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

Download Details:
Author: serde-rs
Source Code: https://github.com/serde-rs/serde
License: Apache-2.0 OR MIT

#rust  #rustlang 

Samuel Tucker

Arrow Datafusion: Apache Arrow DataFusion and Ballista Query Engines

DataFusion

DataFusion is an extensible query execution framework, written in Rust, that uses Apache Arrow as its in-memory format.

DataFusion supports both an SQL and a DataFrame API for building logical query plans as well as a query optimizer and execution engine capable of parallel execution against partitioned data sources (CSV and Parquet) using threads.

DataFusion also supports distributed query execution via the Ballista crate.

Use Cases

DataFusion is used to create modern, fast and efficient data pipelines, ETL processes, and database systems, which need the performance of Rust and Apache Arrow and want to provide their users the convenience of an SQL interface or a DataFrame API.

Why DataFusion?

  • High Performance: Leveraging Rust and Arrow's memory model, DataFusion achieves very high performance
  • Easy to Connect: Being part of the Apache Arrow ecosystem (Arrow, Parquet and Flight), DataFusion works well with the rest of the big data ecosystem
  • Easy to Embed: Allowing extension at almost any point in its design, DataFusion can be tailored for your specific use case
  • High Quality: Extensively tested, both on its own and with the rest of the Arrow ecosystem, DataFusion can be used as the foundation for production systems

Known Uses

Several projects adapt to or serve as plugins to DataFusion, and others are known to use it; the project's README maintains the full lists. (If you know of another project, please submit a PR to add a link!)

Example Usage

Run a SQL query against data stored in a CSV:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  // register the table
  let mut ctx = ExecutionContext::new();
  ctx.register_csv("example", "tests/example.csv", CsvReadOptions::new()).await?;

  // create a plan to run a SQL query
  let df = ctx.sql("SELECT a, MIN(b) FROM example GROUP BY a LIMIT 100").await?;

  // execute and print results
  df.show().await?;
  Ok(())
}

Use the DataFrame API to process data stored in a CSV:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
  // create the dataframe
  let mut ctx = ExecutionContext::new();
  let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new()).await?;

  let df = df.filter(col("a").lt_eq(col("b")))?
          .aggregate(vec![col("a")], vec![min(col("b"))])?;

  // execute and print results
  df.show_limit(100).await?;
  Ok(())
}

Both of these examples will produce

+---+--------+
| a | MIN(b) |
+---+--------+
| 1 | 2      |
+---+--------+

Using DataFusion as a library

DataFusion is published on crates.io, and is well documented on docs.rs.

To get started, add the following to your Cargo.toml file:

[dependencies]
datafusion = "6.0.0"

Using DataFusion as a binary

DataFusion also includes a simple command-line interactive SQL utility. See the CLI reference for more information.

Roadmap

A quarterly roadmap will be published to give the DataFusion community visibility into the priorities of the project's contributors. This roadmap is not binding.

2022 Q1

DataFusion Core

  • Publish official Arrow2 branch
  • Implementation of memory manager (i.e. to enable spilling to disk as needed)

Benchmarking

  • Inclusion in db-benchmark with all queries covered
  • All TPCH queries covered

Performance Improvements

  • Predicate evaluation
  • Improve multi-column comparisons (that can't be vectorized at the moment)
  • Null constant support

New Features

  • Read JSON as table
  • Simplify DDL with datafusion-cli
  • Add Decimal128 data type and the attendant features such as Arrow Kernel and UDF support
  • Add new experimental e-graph based optimizer

Ballista

  • Begin work on design documents and plan / priorities for development

Extensions (datafusion-contrib)

  • Stable S3 support
  • Begin design discussions and prototyping of a stream provider

Beyond 2022 Q1

There is no clear timeline for the below, but community members have expressed interest in working on these topics.

DataFusion Core

  • Custom SQL support
  • Split DataFusion into multiple crates
  • Push based query execution and code generation

Ballista

  • Evolve architecture so that it can be deployed in a multi-tenant cloud native environment
  • Ensure Ballista is scalable, elastic, and stable for production usage
  • Develop distributed ML capabilities

Status

General

  •  SQL Parser
  •  SQL Query Planner
  •  Query Optimizer
  •  Constant folding
  •  Join Reordering
  •  Limit Pushdown
  •  Projection push down
  •  Predicate push down
  •  Type coercion
  •  Parallel query execution

SQL Support

  •  Projection
  •  Filter (WHERE)
  •  Filter post-aggregate (HAVING)
  •  Limit
  •  Aggregate
  •  Common math functions
  •  cast
  •  try_cast
  •  VALUES lists
  • Postgres compatible String functions
    •  ascii
    •  bit_length
    •  btrim
    •  char_length
    •  character_length
    •  chr
    •  concat
    •  concat_ws
    •  initcap
    •  left
    •  length
    •  lpad
    •  ltrim
    •  octet_length
    •  regexp_replace
    •  repeat
    •  replace
    •  reverse
    •  right
    •  rpad
    •  rtrim
    •  split_part
    •  starts_with
    •  strpos
    •  substr
    •  to_hex
    •  translate
    •  trim
  • Miscellaneous/Boolean functions
    •  nullif
  • Approximation functions
    •  approx_distinct
  • Common date/time functions
  • nested functions
    •  Array of columns
  •  Schema Queries
    •  SHOW TABLES
    •  SHOW COLUMNS
    •  information_schema.{tables, columns}
    •  information_schema other views
  •  Sorting
  •  Nested types
  •  Lists
  •  Subqueries
  •  Common table expressions
  •  Set Operations
    •  UNION ALL
    •  UNION
    •  INTERSECT
    •  INTERSECT ALL
    •  EXCEPT
    •  EXCEPT ALL
  •  Joins
    •  INNER JOIN
    •  LEFT JOIN
    •  RIGHT JOIN
    •  FULL JOIN
    •  CROSS JOIN
  •  Window
    •  Empty window
    •  Common window functions
    •  Window with PARTITION BY clause
    •  Window with ORDER BY clause
    •  Window with FILTER clause
    •  Window with custom WINDOW FRAME
    •  UDF and UDAF for window functions

Data Sources

  •  CSV
  •  Parquet primitive types
  •  Parquet nested types

Extensibility

DataFusion is designed to be extensible at all points. To that end, you can provide your own custom:

  •  User Defined Functions (UDFs)
  •  User Defined Aggregate Functions (UDAFs)
  •  User Defined Table Source (TableProvider) for tables
  •  User Defined Optimizer passes (plan rewrites)
  •  User Defined LogicalPlan nodes
  •  User Defined ExecutionPlan nodes
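
As an illustration of the first extension point, here is a sketch of registering a scalar UDF, written roughly against the DataFusion 6.x API. Module paths and signatures differ between DataFusion versions, so treat this as a sketch rather than a definitive recipe.

use std::sync::Arc;
use datafusion::arrow::array::{ArrayRef, Float64Array};
use datafusion::arrow::datatypes::DataType;
use datafusion::physical_plan::functions::{make_scalar_function, Volatility};
use datafusion::prelude::*;

fn main() -> datafusion::error::Result<()> {
    // The UDF body receives whole Arrow arrays, one batch at a time.
    let square = |args: &[ArrayRef]| {
        let v = args[0].as_any().downcast_ref::<Float64Array>().unwrap();
        let out: Float64Array = v.iter().map(|x| x.map(|x| x * x)).collect();
        Ok(Arc::new(out) as ArrayRef)
    };

    let udf = create_udf(
        "square",                    // name callable from SQL
        vec![DataType::Float64],     // argument types
        Arc::new(DataType::Float64), // return type
        Volatility::Immutable,
        make_scalar_function(square),
    );

    let mut ctx = ExecutionContext::new();
    ctx.register_udf(udf);
    // Now usable in queries, e.g.: SELECT square(a) FROM example
    Ok(())
}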

Rust Version Compatibility

This crate is tested with the latest stable version of Rust. We do not currently test against other, older versions of the Rust compiler.

Supported SQL

This library currently supports many SQL constructs, including

  • CREATE EXTERNAL TABLE X STORED AS PARQUET LOCATION '...'; to register a table's locations
  • SELECT ... FROM ... together with any expression
  • ALIAS to name an expression
  • CAST to change types, including e.g. Timestamp(Nanosecond, None)
  • Many mathematical unary and binary expressions such as +, /, sqrt, tan, >=.
  • WHERE to filter
  • GROUP BY together with one of the following aggregations: MIN, MAX, COUNT, SUM, AVG, CORR, VAR, COVAR, STDDEV (sample and population)
  • ORDER BY together with an expression and optional ASC or DESC and also optional NULLS FIRST or NULLS LAST

Supported Functions

DataFusion strives to implement a subset of the PostgreSQL SQL dialect where possible. We explicitly choose a single dialect to maximize interoperability with other tools and allow reuse of the PostgreSQL documents and tutorials as much as possible.

Currently, only a subset of the PostgreSQL dialect is implemented, and we will document any deviations.

Schema Metadata / Information Schema Support

DataFusion supports showing metadata about the tables available. This information can be accessed using the views of the ISO SQL information_schema schema or the DataFusion-specific SHOW TABLES and SHOW COLUMNS commands.

More information can be found in the Postgres docs.

To show tables available for use in DataFusion, use the SHOW TABLES command or the information_schema.tables view:

> show tables;
+---------------+--------------------+------------+------------+
| table_catalog | table_schema       | table_name | table_type |
+---------------+--------------------+------------+------------+
| datafusion    | public             | t          | BASE TABLE |
| datafusion    | information_schema | tables     | VIEW       |
+---------------+--------------------+------------+------------+

> select * from information_schema.tables;

+---------------+--------------------+------------+--------------+
| table_catalog | table_schema       | table_name | table_type   |
+---------------+--------------------+------------+--------------+
| datafusion    | public             | t          | BASE TABLE   |
| datafusion    | information_schema | TABLES     | SYSTEM TABLE |
+---------------+--------------------+------------+--------------+

To show the schema of a table in DataFusion, use the SHOW COLUMNS command or the information_schema.columns view:

> show columns from t;
+---------------+--------------+------------+-------------+-----------+-------------+
| table_catalog | table_schema | table_name | column_name | data_type | is_nullable |
+---------------+--------------+------------+-------------+-----------+-------------+
| datafusion    | public       | t          | a           | Int32     | NO          |
| datafusion    | public       | t          | b           | Utf8      | NO          |
| datafusion    | public       | t          | c           | Float32   | NO          |
+---------------+--------------+------------+-------------+-----------+-------------+

>   select table_name, column_name, ordinal_position, is_nullable, data_type from information_schema.columns;
+------------+-------------+------------------+-------------+-----------+
| table_name | column_name | ordinal_position | is_nullable | data_type |
+------------+-------------+------------------+-------------+-----------+
| t          | a           | 0                | NO          | Int32     |
| t          | b           | 1                | NO          | Utf8      |
| t          | c           | 2                | NO          | Float32   |
+------------+-------------+------------------+-------------+-----------+

Supported Data Types

DataFusion uses Arrow, and thus the Arrow type system, for query execution. The SQL types from sqlparser-rs are mapped to Arrow types according to the following table:

SQL Data Type | Arrow DataType
--------------|--------------------------------
CHAR          | Utf8
VARCHAR       | Utf8
UUID          | Not yet supported
CLOB          | Not yet supported
BINARY        | Not yet supported
VARBINARY     | Not yet supported
DECIMAL       | Float64
FLOAT         | Float32
SMALLINT      | Int16
INT           | Int32
BIGINT        | Int64
REAL          | Float32
DOUBLE        | Float64
BOOLEAN       | Boolean
DATE          | Date32
TIME          | Time64(TimeUnit::Millisecond)
TIMESTAMP     | Timestamp(TimeUnit::Nanosecond)
INTERVAL      | Not yet supported
REGCLASS      | Not yet supported
TEXT          | Not yet supported
BYTEA         | Not yet supported
CUSTOM        | Not yet supported
ARRAY         | Not yet supported


Architecture Overview

There is no formal document describing DataFusion's architecture yet, but the following presentations offer a good overview of its different components and how they interact together.

  • (March 2021): The DataFusion architecture is described in Query Engine Design and the Rust-Based DataFusion in Apache Arrow: recording (DataFusion content starts ~15 minutes in) and slides
  • (February 2021): How DataFusion is used within the Ballista project is described in Ballista: Distributed Compute with Rust and Apache Arrow: recording

Developer's guide

Please see Developers Guide for information about developing DataFusion.

Download Details: 
Author: apache
Source Code: https://github.com/apache/arrow-datafusion 
License: Apache-2.0
 

#python #rust #sql #bigdata #arrow #dataframe #datafusion #apache 

Awesome Rust

Serde JSON: JSON Support for Serde Framework

Serde JSON

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.

[dependencies]
serde_json = "1.0"


JSON is a ubiquitous open-standard format that uses human-readable text to transmit data objects consisting of key-value pairs.

{
    "name": "John Doe",
    "age": 43,
    "address": {
        "street": "10 Downing Street",
        "city": "London"
    },
    "phones": [
        "+44 1234567",
        "+44 2345678"
    ]
}

There are three common ways that you might find yourself needing to work with JSON data in Rust.

  • As text data. An unprocessed string of JSON data that you receive on an HTTP endpoint, read from a file, or prepare to send to a remote server.
  • As an untyped or loosely typed representation. Maybe you want to check that some JSON data is valid before passing it on, but without knowing the structure of what it contains. Or you want to do very basic manipulations like insert a key in a particular spot.
  • As a strongly typed Rust data structure. When you expect all or most of your data to conform to a particular structure and want to get real work done without JSON's loosey-goosey nature tripping you up.

Serde JSON provides efficient, flexible, safe ways of converting data between each of these representations.

Operating on untyped JSON values

Any valid JSON data can be manipulated in the following recursive enum representation. This data structure is serde_json::Value.

enum Value {
    Null,
    Bool(bool),
    Number(Number),
    String(String),
    Array(Vec<Value>),
    Object(Map<String, Value>),
}

A string of JSON data can be parsed into a serde_json::Value by the serde_json::from_str function. There is also from_slice for parsing from a byte slice &[u8] and from_reader for parsing from any io::Read like a File or a TCP stream.

use serde_json::{Result, Value};

fn untyped_example() -> Result<()> {
    // Some JSON input data as a &str. Maybe this comes from the user.
    let data = r#"
        {
            "name": "John Doe",
            "age": 43,
            "phones": [
                "+44 1234567",
                "+44 2345678"
            ]
        }"#;

    // Parse the string of data into serde_json::Value.
    let v: Value = serde_json::from_str(data)?;

    // Access parts of the data by indexing with square brackets.
    println!("Please call {} at the number {}", v["name"], v["phones"][0]);

    Ok(())
}
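
The from_reader variant works the same way; here is a minimal sketch, assuming a hypothetical data.json file on disk:

use std::fs::File;
use std::io::BufReader;
use serde_json::Value;

fn main() -> serde_json::Result<()> {
    // "data.json" is a placeholder path; from_reader accepts any io::Read.
    let file = File::open("data.json").expect("failed to open data.json");
    let v: Value = serde_json::from_reader(BufReader::new(file))?;
    println!("parsed {} top-level keys", v.as_object().map_or(0, |m| m.len()));
    Ok(())
}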

The result of square bracket indexing like v["name"] is a borrow of the data at that index, so the type is &Value. A JSON map can be indexed with string keys, while a JSON array can be indexed with integer keys. If the type of the data is not right for the type with which it is being indexed, or if a map does not contain the key being indexed, or if the index into a vector is out of bounds, the returned element is Value::Null.

When a Value is printed, it is printed as a JSON string. So in the code above, the output looks like Please call "John Doe" at the number "+44 1234567". The quotation marks appear because v["name"] is a &Value containing a JSON string and its JSON representation is "John Doe". Printing as a plain string without quotation marks involves converting from a JSON string to a Rust string with as_str() or avoiding the use of Value as described in the following section.
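
A minimal sketch of the difference:

use serde_json::json;

fn main() {
    let v = json!({ "name": "John Doe" });
    // Display renders the JSON representation, quotation marks included.
    assert_eq!(v["name"].to_string(), "\"John Doe\"");
    // as_str() yields Some(&str) only when the Value is a JSON string.
    assert_eq!(v["name"].as_str(), Some("John Doe"));
}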

The Value representation is sufficient for very basic tasks but can be tedious to work with for anything more significant. Error handling is verbose to implement correctly: for example, imagine trying to detect the presence of unrecognized fields in the input data. The compiler is powerless to help you when you make a mistake: for example, imagine mistyping v["name"] as v["nmae"] in one of the dozens of places it is used in your code.

Parsing JSON as strongly typed data structures

Serde provides a powerful way of mapping JSON data into Rust data structures largely automatically.

use serde::{Deserialize, Serialize};
use serde_json::Result;

#[derive(Serialize, Deserialize)]
struct Person {
    name: String,
    age: u8,
    phones: Vec<String>,
}

fn typed_example() -> Result<()> {
    // Some JSON input data as a &str. Maybe this comes from the user.
    let data = r#"
        {
            "name": "John Doe",
            "age": 43,
            "phones": [
                "+44 1234567",
                "+44 2345678"
            ]
        }"#;

    // Parse the string of data into a Person object. This is exactly the
    // same function as the one that produced serde_json::Value above, but
    // now we are asking it for a Person as output.
    let p: Person = serde_json::from_str(data)?;

    // Do things just like with any other Rust data structure.
    println!("Please call {} at the number {}", p.name, p.phones[0]);

    Ok(())
}

This is the same serde_json::from_str function as before, but this time we assign the return value to a variable of type Person so Serde will automatically interpret the input data as a Person and produce informative error messages if the layout does not conform to what a Person is expected to look like.

Any type that implements Serde's Deserialize trait can be deserialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Deserialize)].

Once we have p of type Person, our IDE and the Rust compiler can help us use it correctly like they do for any other Rust code. The IDE can autocomplete field names to prevent typos, which was impossible in the serde_json::Value representation. And the Rust compiler can check that when we write p.phones[0], then p.phones is guaranteed to be a Vec<String> so indexing into it makes sense and produces a String.

The necessary setup for using Serde's derive macros is explained on the Using derive page of the Serde site.

Constructing JSON values

Serde JSON provides a json! macro to build serde_json::Value objects with very natural JSON syntax.

use serde_json::json;

fn main() {
    // The type of `john` is `serde_json::Value`
    let john = json!({
        "name": "John Doe",
        "age": 43,
        "phones": [
            "+44 1234567",
            "+44 2345678"
        ]
    });

    println!("first phone number: {}", john["phones"][0]);

    // Convert to a string of JSON and print it out
    println!("{}", john.to_string());
}

The Value::to_string() function converts a serde_json::Value into a String of JSON text.

One neat thing about the json! macro is that variables and expressions can be interpolated directly into the JSON value as you are building it. Serde will check at compile time that the value you are interpolating is able to be represented as JSON.

let full_name = "John Doe";
let age_last_year = 42;

// The type of `john` is `serde_json::Value`
let john = json!({
    "name": full_name,
    "age": age_last_year + 1,
    "phones": [
        format!("+44 {}", random_phone())
    ]
});

This is amazingly convenient, but we have the problem we had before with Value: the IDE and Rust compiler cannot help us if we get it wrong. Serde JSON provides a better way of serializing strongly-typed data structures into JSON text.

Creating JSON by serializing data structures

A data structure can be converted to a JSON string by serde_json::to_string. There is also serde_json::to_vec which serializes to a Vec<u8> and serde_json::to_writer which serializes to any io::Write such as a File or a TCP stream.

use serde::{Deserialize, Serialize};
use serde_json::Result;

#[derive(Serialize, Deserialize)]
struct Address {
    street: String,
    city: String,
}

fn print_an_address() -> Result<()> {
    // Some data structure.
    let address = Address {
        street: "10 Downing Street".to_owned(),
        city: "London".to_owned(),
    };

    // Serialize it to a JSON string.
    let j = serde_json::to_string(&address)?;

    // Print, write to a file, or send to an HTTP server.
    println!("{}", j);

    Ok(())
}

Any type that implements Serde's Serialize trait can be serialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Serialize)].
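
The to_vec and to_writer variants mentioned above follow the same pattern; a minimal sketch:

use serde_json::json;

fn main() -> serde_json::Result<()> {
    let value = json!({ "city": "London" });

    // to_writer streams JSON into any io::Write; stdout here.
    serde_json::to_writer(std::io::stdout(), &value)?;

    // to_vec produces the same bytes in memory.
    let bytes = serde_json::to_vec(&value)?;
    assert_eq!(bytes, br#"{"city":"London"}"#);
    Ok(())
}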

Performance

It is fast. You should expect in the ballpark of 500 to 1000 megabytes per second deserialization and 600 to 900 megabytes per second serialization, depending on the characteristics of your data. This is competitive with the fastest C and C++ JSON libraries or even 30% faster for many use cases. Benchmarks live in the serde-rs/json-benchmark repo.

Getting help

Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous help, consider the [rust] tag on StackOverflow, the /r/rust subreddit, which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

No-std support

As long as there is a memory allocator, it is possible to use serde_json without the rest of the Rust standard library. This is supported on Rust 1.36+. Disable the default "std" feature and enable the "alloc" feature:

[dependencies]
serde_json = { version = "1.0", default-features = false, features = ["alloc"] }

For JSON support in Serde without a memory allocator, please see the serde-json-core crate.

Link: https://crates.io/crates/serde_json

#rust  #rustlang  #encode   #json 

Arrow Rs: Rust Implementation Of Apache Arrow

Native Rust implementation of Apache Arrow and Parquet

Welcome to the implementation of Arrow, the popular in-memory columnar format, in Rust.

This repo contains the following main components:

Crate        | Description
-------------|--------------------------------------------------------------------
arrow        | Core functionality (memory layout, arrays, low-level computations)
parquet      | Support for the Parquet columnar file format
arrow-flight | Support for the Arrow Flight IPC protocol

There are two related crates in a different repository:

Crate      | Description
-----------|------------------------------------------
DataFusion | In-memory query engine with SQL support
Ballista   | Distributed query execution

Collectively, these crates support a vast array of functionality for analytic computations in Rust.

For example, you can write a SQL query or a DataFrame (using the datafusion crate), run it against a Parquet file (using the parquet crate), evaluate it in memory using Arrow's columnar format (using the arrow crate), and send the results to another process (using the arrow-flight crate).
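
As a sketch of just the parquet-to-arrow slice of that pipeline (data.parquet is a placeholder path, and the reader-builder API shown here varies between parquet crate versions):

use std::fs::File;
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open a Parquet file and stream it out as Arrow RecordBatches.
    let file = File::open("data.parquet")?;
    let reader = ParquetRecordBatchReaderBuilder::try_new(file)?.build()?;
    for batch in reader {
        let batch = batch?;
        println!("read {} rows x {} columns", batch.num_rows(), batch.num_columns());
    }
    Ok(())
}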

Generally speaking, the arrow crate offers functionality for using Arrow arrays, and datafusion offers most operations typically found in SQL, including joins and window functions.

You can find more details about each crate in their respective READMEs.

Arrow Rust Community

The dev@arrow.apache.org mailing list serves as the core communication channel for the Arrow community. Instructions for signing up and links to the archives can be found at the Arrow Community page. All major announcements and communications happen there.

The Rust Arrow community also uses the official ASF Slack for informal discussions and coordination. This is a great place to meet other contributors and get guidance on where to contribute. Join us in the #arrow-rust channel.

Unlike other parts of the Arrow ecosystem, the Rust implementation uses GitHub issues as the system of record for new features and bug fixes, and this plays a critical role in the release process.

For design discussions we generally collaborate on Google documents and file a GitHub issue linking to the document.

There is more information in the contributing guide.

Download Details:
Author: apache
Source Code: https://github.com/apache/arrow-rs
License: Apache-2.0

#rust  #machinelearing 

Rust Lang Course for Beginners in 2021: Guessing Game

 What we learn in this chapter:
- Rust number types and their default
- First exposure to #Rust modules and the std::io module to read input from the terminal
- Rust Variable Shadowing
- Rust Loop keyword
- Rust if/else
- First exposure to #Rust match keyword

=== Content:
00:00 - Intro & Setup
02:11 - The Plan
03:04 - Variable Secret
04:03 - Number Types
05:45 - Mutability recap
06:22 - Ask the user
07:45 - First intro to module std::io
08:29 - Rust naming conventions
09:22 - Read user input io::stdin().read_line(&mut guess)
12:46 - Break & Understand
14:20 - Parse string to number
17:10 - Variable Shadowing
18:46 - If / Else - You Win, You Lose
19:28 - Loop
20:38 - Match
23:19 - Random with rand
26:35 - Run it all
27:09 - Conclusion and next episode

#rust