Awesome Rust

Mysql Async: Asynchronous Rust MySQL Driver Based on Tokio

mysql_async

Tokio-based asynchronous MySQL client library for the Rust programming language.

Installation

The library is hosted on crates.io.

[dependencies]
mysql_async = "<desired version>"

Example

use mysql_async::prelude::*;

#[derive(Debug, PartialEq, Eq, Clone)]
struct Payment {
    customer_id: i32,
    amount: i32,
    account_name: Option<String>,
}

#[tokio::main]
async fn main() -> mysql_async::Result<()> {
    let payments = vec![
        Payment { customer_id: 1, amount: 2, account_name: None },
        Payment { customer_id: 3, amount: 4, account_name: Some("foo".into()) },
        Payment { customer_id: 5, amount: 6, account_name: None },
        Payment { customer_id: 7, amount: 8, account_name: None },
        Payment { customer_id: 9, amount: 10, account_name: Some("bar".into()) },
    ];

    // Build the connection URL; the test-suite default from the Testing
    // section below is used here as a stand-in – adjust it for your server.
    let database_url = "mysql://root:password@127.0.0.1:3307/mysql";

    let pool = mysql_async::Pool::new(database_url);
    let mut conn = pool.get_conn().await?;

    // Create a temporary table
    r"CREATE TEMPORARY TABLE payment (
        customer_id int not null,
        amount int not null,
        account_name text
    )".ignore(&mut conn).await?;

    // Save payments
    r"INSERT INTO payment (customer_id, amount, account_name)
      VALUES (:customer_id, :amount, :account_name)"
        .with(payments.iter().map(|payment| params! {
            "customer_id" => payment.customer_id,
            "amount" => payment.amount,
            "account_name" => payment.account_name.as_ref(),
        }))
        .batch(&mut conn)
        .await?;

    // Load payments from the database. Type inference will work here.
    let loaded_payments = "SELECT customer_id, amount, account_name FROM payment"
        .with(())
        .map(&mut conn, |(customer_id, amount, account_name)| Payment { customer_id, amount, account_name })
        .await?;

    // Dropped connection will go to the pool
    drop(conn);

    // The Pool must be disconnected explicitly because
    // it's an asynchronous operation.
    pool.disconnect().await?;

    assert_eq!(loaded_payments, payments);

    // the async fn returns Result, so
    Ok(())
}

Pool

The [Pool] structure is an asynchronous connection pool.

Please note:

  • [Pool] is a smart pointer – each clone will point to the same pool instance.
  • [Pool] is Send + Sync + 'static – feel free to pass it around.
  • use [Pool::disconnect] to gracefully close the pool.
  • [Pool::new] is lazy and won't assert server availability.
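
A minimal sketch of sharing one pool across tasks (the URL is the test-suite default from the Testing section below, and the structure of the program is illustrative, not prescribed by the crate):

use mysql_async::prelude::*;

#[tokio::main]
async fn main() -> mysql_async::Result<()> {
    // `Pool::new` is lazy – nothing connects until `get_conn` is called.
    let pool = mysql_async::Pool::new("mysql://root:password@127.0.0.1:3307/mysql");

    // Clones are cheap smart-pointer copies of the same pool instance.
    let handles: Vec<_> = (0..4u32)
        .map(|i| {
            let pool = pool.clone();
            tokio::spawn(async move {
                let mut conn = pool.get_conn().await?;
                let val: Option<u32> = conn.query_first(format!("SELECT {i}")).await?;
                assert_eq!(val, Some(i));
                // The connection returns to the pool when dropped here.
                Ok::<_, mysql_async::Error>(())
            })
        })
        .collect();

    for handle in handles {
        handle.await.expect("task panicked")?;
    }

    // Graceful, explicit, asynchronous shutdown.
    pool.disconnect().await?;
    Ok(())
}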

Transaction

[Conn::start_transaction] is a wrapper that starts with START TRANSACTION and ends with COMMIT or ROLLBACK.

A dropped transaction will be implicitly rolled back if it wasn't explicitly committed or rolled back. Note that this behaviour is triggered by the pool (on connection drop) or by the next query, i.e. it may be delayed.

The API won't allow you to run nested transactions, because some statements cause an implicit commit (START TRANSACTION is one of them), so this behavior was chosen as the less error-prone option.
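
A minimal sketch (the helper name `pay` is illustrative; it assumes a mysql_async::Conn and the payment table from the example above):

use mysql_async::{prelude::*, TxOpts};

async fn pay(mut conn: mysql_async::Conn) -> mysql_async::Result<()> {
    let mut tx = conn.start_transaction(TxOpts::default()).await?;
    tx.exec_drop(
        "INSERT INTO payment (customer_id, amount) VALUES (?, ?)",
        (11, 12),
    )
    .await?;
    // Without this explicit `commit`, dropping `tx` would roll the
    // transaction back (possibly with a delay, as noted above).
    tx.commit().await?;
    Ok(())
}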

Value

This enumeration represents the raw value of a MySQL cell. The library offers conversion between Value and different Rust types via the FromValue trait, described below.

FromValue trait

This trait is re-exported from the mysql_common crate. Please refer to its crate docs for the list of supported conversions.

The trait offers conversion in two flavours:

from_value(Value) -> T - convenient, but panicking conversion.

Note that for any variant of Value there exists a type that fully covers its domain, i.e. for any variant of Value there exists a T: FromValue such that from_value will never panic. This means that if your database schema is known, it's possible to write your application using only from_value with no fear of a runtime panic.

Also note that some conversions may fail even though the type seems sufficient, e.g. in the case of invalid dates (see sql mode).

from_value_opt(Value) -> Result<T> - non-panicking, but less convenient conversion.

This function is useful for probing a conversion in cases where the source database schema is unknown.
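
A small sketch of both flavours (Value, from_value, and from_value_opt are re-exported at the crate root):

use mysql_async::{from_value, from_value_opt, Value};

fn main() {
    // A text-protocol cell arrives as raw bytes.
    let cell = Value::Bytes(b"42".to_vec());

    // Panicking conversion – fine when the schema is known to fit the type.
    let n: i64 = from_value(cell.clone());
    assert_eq!(n, 42);

    // Non-panicking conversion – useful for probing an unknown schema.
    assert!(from_value_opt::<i64>(cell).is_ok());
    assert!(from_value_opt::<i64>(Value::NULL).is_err());
}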

MySql query protocols

Text protocol

MySql text protocol is implemented in the set of Queryable::query* methods and in the [prelude::Query] trait if the query is [prelude::AsQuery]. It's useful when your query doesn't have parameters.

Note: All values of a text protocol result set will be encoded as strings by the server, so from_value conversion may lead to additional parsing costs.
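
A short text-protocol sketch (the helper name `totals` is illustrative; a mysql_async::Conn and the payment table from the example above are assumed):

use mysql_async::prelude::*;

async fn totals(conn: &mut mysql_async::Conn) -> mysql_async::Result<()> {
    // `query` goes through the text protocol: the server sends every value
    // as a string and `FromValue` parses it into the requested type.
    let amounts: Vec<i32> = conn.query("SELECT amount FROM payment").await?;
    println!("sum = {}", amounts.iter().sum::<i32>());
    Ok(())
}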

Binary protocol and prepared statements

MySql binary protocol is implemented in the set of exec* methods, defined on the [prelude::Queryable] trait, and in the [prelude::Query] trait if the query is [QueryWithParams]. Prepared statements are the only way to pass a Rust value to the MySql server. MySql uses the ? symbol as a parameter placeholder.

Note: it's only possible to use parameters where a single MySql value is expected, i.e. you can't execute something like SELECT ... WHERE id IN ? with a vector as a parameter. You'll need to build a query that looks like SELECT ... WHERE id IN (?, ?, ...) and pass each vector element as a separate parameter, as sketched below.
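
A hedged sketch of both points (the helper names are illustrative; a mysql_async::Conn and the payment table from the example above are assumed):

use mysql_async::prelude::*;

// Positional `?` placeholders: each parameter binds a single MySql value.
async fn load_names(conn: &mut mysql_async::Conn) -> mysql_async::Result<Vec<Option<String>>> {
    conn.exec(
        "SELECT account_name FROM payment WHERE customer_id = ? AND amount > ?",
        (1, 0),
    )
    .await
}

// Emulating `WHERE customer_id IN (...)`: a vector can't bind to one `?`,
// so generate one placeholder per element (assumes `ids` is non-empty).
async fn load_in(conn: &mut mysql_async::Conn, ids: &[i32]) -> mysql_async::Result<Vec<i32>> {
    let placeholders = vec!["?"; ids.len()].join(", ");
    let query = format!("SELECT customer_id FROM payment WHERE customer_id IN ({placeholders})");
    conn.exec(query, ids.to_vec()).await
}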

Named parameters

MySql itself doesn't support named parameters, so support is implemented on the client side. One should use :name as the placeholder syntax for a named parameter. Named parameters use the following naming convention:

  • parameter name must start with either _ or a..z
  • parameter name may continue with _, a..z and 0..9

Note: these rules mean that, say, the statement SELECT :fooBar will be translated to SELECT ?Bar, so please be careful.

Named parameters may be repeated within the statement, e.g. SELECT :foo, :foo requires a single named parameter foo that will be substituted at the corresponding positions during statement execution.

One should use the params! macro to build parameters for execution.

Note: Positional and named parameters can't be mixed within a single statement.
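
A small sketch of params! with a repeated named parameter (the helper name `find` is illustrative; a mysql_async::Conn and the payment table from the example above are assumed):

use mysql_async::{params, prelude::*};

async fn find(conn: &mut mysql_async::Conn) -> mysql_async::Result<Option<i32>> {
    // `:id` appears twice in the statement but is supplied once.
    conn.exec_first(
        "SELECT amount FROM payment WHERE customer_id = :id OR amount = :id",
        params! { "id" => 1 },
    )
    .await
}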

Statements

In MySql each prepared statement belongs to a particular connection and can't be executed on another connection. Trying to do so will lead to an error. The driver won't tie a statement to its connection in any way, but one can inspect the connection id contained in the [Statement] structure.
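
A sketch of preparing a statement explicitly and reusing it (the helper name `reuse` is illustrative):

use mysql_async::prelude::*;

async fn reuse(conn: &mut mysql_async::Conn) -> mysql_async::Result<()> {
    let stmt = conn.prep("SELECT ? + 1").await?;
    for i in 0..3_i64 {
        // `Statement` is cheaply cloneable, but it must be executed on the
        // same connection that prepared it.
        let next: Option<i64> = conn.exec_first(stmt.clone(), (i,)).await?;
        assert_eq!(next, Some(i + 1));
    }
    Ok(())
}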

LOCAL INFILE Handlers

Warning: You should be aware of Security Considerations for LOAD DATA LOCAL.

There are two flavors of LOCAL INFILE handlers – global and local.

In case of a LOCAL INFILE request from the server, the driver will try to find a handler for it:

  1. It'll try to use the local handler installed on the connection, if any;
  2. It'll try to use the global handler, specified via [OptsBuilder::local_infile_handler], if any;
  3. It will emit [LocalInfileError::NoHandler] if no handler is found.

The purpose of a handler (local or global) is to return [InfileData].

Global LOCAL INFILE handler

See [prelude::GlobalHandler].

Simply speaking, the global handler is an async function that takes a file name (as &[u8]) and returns Result<InfileData>.

You can set it up using [OptsBuilder::local_infile_handler]. It will be used if there is no local handler installed for the connection. This handler might be called multiple times.

Examples:

  1. [WhiteListFsHandler] is a global handler.
  2. Every T: Fn(&[u8]) -> BoxFuture<'static, Result<InfileData, LocalInfileError>> is a global handler.

Local LOCAL INFILE handler

Simply speaking, the local handler is a future that returns Result<InfileData>.

This is a one-time handler – it's consumed after use. You can set it up using [Conn::set_infile_handler]. This handler has priority over the global handler.

Worth noting:

  1. impl Drop for Conn will clear the local handler, i.e. the handler will be removed when the connection is returned to a Pool.
  2. [Conn::reset] will clear the local handler.

Example:

// `database_url` is assumed to be defined as in the example above.
use bytes::Bytes;
use futures_util::{stream, StreamExt};
use mysql_async::{prelude::*, Error};

let pool = mysql_async::Pool::new(database_url);

let mut conn = pool.get_conn().await?;
"CREATE TEMPORARY TABLE tmp (id INT, val TEXT)".ignore(&mut conn).await?;

// We are going to call `LOAD DATA LOCAL` so let's setup a one-time handler.
conn.set_infile_handler(async move {
    // We need to return a stream of `io::Result<Bytes>`
    Ok(stream::iter([Bytes::from("1,a\r\n"), Bytes::from("2,b\r\n3,c")]).map(Ok).boxed())
});

let result = r#"LOAD DATA LOCAL INFILE 'whatever'
    INTO TABLE `tmp`
    FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
    LINES TERMINATED BY '\r\n'"#.ignore(&mut conn).await;

match result {
    Ok(()) => (),
    Err(Error::Server(ref err)) if err.code == 1148 => {
        // The used command is not allowed with this MySQL version
        return Ok(());
    },
    Err(Error::Server(ref err)) if err.code == 3948 => {
        // Loading local data is disabled;
        // this must be enabled on both the client and the server
        return Ok(());
    }
    e @ Err(_) => e.unwrap(),
}

// Now let's verify the result
let result: Vec<(u32, String)> = conn.query("SELECT * FROM tmp ORDER BY id ASC").await?;
assert_eq!(
    result,
    vec![(1, "a".into()), (2, "b".into()), (3, "c".into())]
);

drop(conn);
pool.disconnect().await?;

Testing

Tests use the following environment variables:

  • DATABASE_URL – defaults to mysql://root:password@127.0.0.1:3307/mysql
  • COMPRESS – set to 1 or true to enable compression for tests
  • SSL – set to 1 or true to enable TLS for tests

You can run a test server using Docker. Please note that the parameters related to max allowed packet, local-infile, and binary logging are required to properly run the tests (please refer to azure-pipelines.yml):

docker run -d --name container \
    -v `pwd`:/root \
    -p 3307:3306 \
    -e MYSQL_ROOT_PASSWORD=password \
    mysql:8.0 \
    --max-allowed-packet=36700160 \
    --local-infile \
    --log-bin=mysql-bin \
    --log-slave-updates \
    --gtid_mode=ON \
    --enforce_gtid_consistency=ON \
    --server-id=1

Change log

Available here

Download Details:
Author: blackbeam
Source Code: https://github.com/blackbeam/mysql_async
License: Apache-2.0, MIT

#rust  #rustlang  #database #mysql 

Joe Hoppe

Best MySQL DigitalOcean Performance – ScaleGrid vs. DigitalOcean Managed Databases


MySQL is the all-time number one open source database in the world, and a staple in RDBMS space. DigitalOcean is quickly building its reputation as the developers cloud by providing an affordable, flexible and easy to use cloud platform for developers to work with. MySQL on DigitalOcean is a natural fit, but what’s the best way to deploy your cloud database? In this post, we are going to compare the top two providers, DigitalOcean Managed Databases for MySQL vs. ScaleGrid MySQL hosting on DigitalOcean.

At a glance – TLDR
Compare Throughput
ScaleGrid averages almost 40% higher throughput over DigitalOcean for MySQL, with up to 46% higher throughput in write-intensive workloads. Read now

Compare Latency
On average, ScaleGrid achieves almost 30% lower latency over DigitalOcean for the same deployment configurations. Read now

Compare Pricing
ScaleGrid provides 30% more storage on average vs. DigitalOcean for MySQL at the same affordable price. Read now

MySQL DigitalOcean Performance Benchmark
In this benchmark, we compare equivalent plan sizes between ScaleGrid MySQL on DigitalOcean and DigitalOcean Managed Databases for MySQL. We are going to use a common, popular plan size using the below configurations for this performance benchmark:

Comparison Overview
                  ScaleGrid          DigitalOcean
Instance Type     Medium: 4 vCPUs    Medium: 4 vCPUs
MySQL Version     8.0.20             8.0.20
RAM               8GB                8GB
SSD               140GB              115GB
Deployment Type   Standalone         Standalone
Region            SF03               SF03
Support           Included           Business-level support included with account sizes over $500/month
Monthly Price     $120               $120

As you can see above, ScaleGrid and DigitalOcean offer the same plan configurations across this plan size, apart from SSD where ScaleGrid provides over 20% more storage for the same price.

To ensure the most accurate results in our performance tests, we run the benchmark four times for each comparison to find the average performance across throughput and latency over read-intensive workloads, balanced workloads, and write-intensive workloads.

Throughput
In this benchmark, we measure MySQL throughput in terms of queries per second (QPS) to measure our query efficiency. To quickly summarize the results, we display read-intensive, write-intensive and balanced workload averages below for 150 threads for ScaleGrid vs. DigitalOcean MySQL:

ScaleGrid MySQL vs DigitalOcean Managed Databases - Throughput Performance Graph

For the common 150 thread comparison, ScaleGrid averages almost 40% higher throughput over DigitalOcean for MySQL, with up to 46% higher throughput in write-intensive workloads.

#cloud #database #developer #digital ocean #mysql #performance #scalegrid #95th percentile latency #balanced workloads #developers cloud #digitalocean droplet #digitalocean managed databases #digitalocean performance #digitalocean pricing #higher throughput #latency benchmark #lower latency #mysql benchmark setup #mysql client threads #mysql configuration #mysql digitalocean #mysql latency #mysql on digitalocean #mysql throughput #performance benchmark #queries per second #read-intensive #scalegrid mysql #scalegrid vs. digitalocean #throughput benchmark #write-intensive


Serde Rust: Serialization Framework for Rust

Serde

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.

You may be looking for:

Serde in action

Cargo.toml (you can run this code in the playground):

[dependencies]

# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }

# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"

 

use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let point = Point { x: 1, y: 2 };

    // Convert the Point to a JSON string.
    let serialized = serde_json::to_string(&point).unwrap();

    // Prints serialized = {"x":1,"y":2}
    println!("serialized = {}", serialized);

    // Convert the JSON string back to a Point.
    let deserialized: Point = serde_json::from_str(&serialized).unwrap();

    // Prints deserialized = Point { x: 1, y: 2 }
    println!("deserialized = {:?}", deserialized);
}

Getting help

Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

Download Details:
Author: serde-rs
Source Code: https://github.com/serde-rs/serde
License: View license

#rust  #rustlang 

Loma Baumbach

Exploring MySQL Binlog Server - Ripple

MySQL does not limit the number of slaves that you can connect to the master server in a replication topology. However, as the number of slaves increases, they take a toll on the master's resources, because the binary logs need to be served to different slaves working at different speeds. If the data churn on the master is high, the serving of binary logs alone could saturate the network interface of the master.

A classic solution for this problem is to deploy a binlog server – an intermediate proxy server that sits between the master and its slaves. The binlog server is set up as a slave to the master, and in turn, acts as a master to the original set of slaves. It receives binary log events from the master, does not apply these events, but serves them to all the other slaves. This way, the load on the master is tremendously reduced, and at the same time, the binlog server serves the binlogs more efficiently to slaves since it does not have to do any other database server processing.

MySQL Binlog Server Deployment Diagram - ScaleGrid Blog

Ripple is an open source binlog server developed by Pavel Ivanov. A blog post from Percona, titled MySQL Ripple: The First Impression of a MySQL Binlog Server, gives a very good introduction to deploying and using Ripple. I had an opportunity to explore Ripple in some more detail and wanted to share my observations through this post.

1. Support for GTID based replication

Ripple supports only GTID mode, and not file and position-based replication. If your master is running in non-GTID mode, you will get this error from Ripple:

Failed to read packet: Got error reading packet from server: The replication sender thread cannot start in AUTO_POSITION mode: this server has GTID_MODE = OFF instead of ON.

You can specify the server_id and UUID for the Ripple server using the command line options -ripple_server_id and -ripple_server_uuid.

Both are optional parameters; if not specified, Ripple will use the default server_id=112211, and the UUID will be auto-generated.

2. Connecting to the master using replication user and password

While connecting to the master, you can specify the replication user and password using the command line options:

-ripple_master_user and -ripple_master_password

3. Connection endpoint for the Ripple server

You can use the command line options -ripple_server_ports and -ripple_server_address to specify the connection endpoints for the Ripple server. Be sure to specify the network-accessible hostname or IP address of your Ripple server as the -ripple_server_address. Otherwise, by default, Ripple will bind to localhost, and you will not be able to connect to it remotely.

4. Setting up slaves to the Ripple server

You can use the CHANGE MASTER TO command to connect your slaves to replicate from the Ripple server.

To ensure that Ripple can authenticate the password that you use to connect to it, you need to start Ripple by specifying the option -ripple_server_password_hash.

For example, if you start the ripple server with the command:

rippled -ripple_datadir=./binlog_server -ripple_master_address= <master ip> -ripple_master_port=3306 -ripple_master_user=repl -ripple_master_password='password' -ripple_server_ports=15000 -ripple_server_address='172.31.23.201' -ripple_server_password_hash='EF8C75CB6E99A0732D2DE207DAEF65D555BDFB8E'

you can use the following CHANGE MASTER TO command to connect from the slave:

CHANGE MASTER TO master_host='172.31.23.201', master_port=15000, master_password='XpKWeZRNH5#satCI', master_user='rep'

Note that the password hash specified for the Ripple server corresponds to the text password used in the CHANGE MASTER TO command. Currently, Ripple does not authenticate based on the usernames and accepts any non-empty username as long as the password matches.


5. Ripple server management

It's possible to monitor and manage the Ripple server using the MySQL protocol from any standard MySQL client. There is a limited set of supported commands, which you can see directly in the source code on the mysql-ripple GitHub page.

Some of the useful commands are:

  • SELECT @@global.gtid_executed; – To see the GTID SET of the Ripple server based on its downloaded binary logs.
  • STOP SLAVE; – To disconnect the Ripple server from the master.
  • START SLAVE; – To connect the Ripple server to the master.

#cloud #database #developer #high availability #mysql #performance #binary logs #gtid replication #mysql binlog #mysql protocol #mysql ripple #mysql server #parallel threads #proxy server #replication topology #ripple server

Awesome Rust

Serde JSON: JSON Support for Serde Framework

Serde JSON

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.

[dependencies]
serde_json = "1.0"

You may be looking for:

JSON is a ubiquitous open-standard format that uses human-readable text to transmit data objects consisting of key-value pairs.

{
    "name": "John Doe",
    "age": 43,
    "address": {
        "street": "10 Downing Street",
        "city": "London"
    },
    "phones": [
        "+44 1234567",
        "+44 2345678"
    ]
}

There are three common ways that you might find yourself needing to work with JSON data in Rust.

  • As text data. An unprocessed string of JSON data that you receive on an HTTP endpoint, read from a file, or prepare to send to a remote server.
  • As an untyped or loosely typed representation. Maybe you want to check that some JSON data is valid before passing it on, but without knowing the structure of what it contains. Or you want to do very basic manipulations like insert a key in a particular spot.
  • As a strongly typed Rust data structure. When you expect all or most of your data to conform to a particular structure and want to get real work done without JSON's loosey-goosey nature tripping you up.

Serde JSON provides efficient, flexible, safe ways of converting data between each of these representations.

Operating on untyped JSON values

Any valid JSON data can be manipulated in the following recursive enum representation. This data structure is serde_json::Value.

enum Value {
    Null,
    Bool(bool),
    Number(Number),
    String(String),
    Array(Vec<Value>),
    Object(Map<String, Value>),
}

A string of JSON data can be parsed into a serde_json::Value by the serde_json::from_str function. There is also from_slice for parsing from a byte slice &[u8] and from_reader for parsing from any io::Read like a File or a TCP stream.

use serde_json::{Result, Value};

fn untyped_example() -> Result<()> {
    // Some JSON input data as a &str. Maybe this comes from the user.
    let data = r#"
        {
            "name": "John Doe",
            "age": 43,
            "phones": [
                "+44 1234567",
                "+44 2345678"
            ]
        }"#;

    // Parse the string of data into serde_json::Value.
    let v: Value = serde_json::from_str(data)?;

    // Access parts of the data by indexing with square brackets.
    println!("Please call {} at the number {}", v["name"], v["phones"][0]);

    Ok(())
}

The result of square bracket indexing like v["name"] is a borrow of the data at that index, so the type is &Value. A JSON map can be indexed with string keys, while a JSON array can be indexed with integer keys. If the type of the data is not right for the type with which it is being indexed, or if a map does not contain the key being indexed, or if the index into a vector is out of bounds, the returned element is Value::Null.
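
A tiny illustration of this behavior:

use serde_json::{json, Value};

fn main() {
    let v = json!({ "name": "John Doe", "phones": ["+44 1234567"] });

    // A missing key, an out-of-bounds index, or indexing a value of the
    // wrong type all produce Value::Null instead of panicking.
    assert_eq!(v["nope"], Value::Null);
    assert_eq!(v["phones"][9], Value::Null);
    assert_eq!(v["name"][0], Value::Null);
}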

When a Value is printed, it is printed as a JSON string. So in the code above, the output looks like Please call "John Doe" at the number "+44 1234567". The quotation marks appear because v["name"] is a &Value containing a JSON string and its JSON representation is "John Doe". Printing as a plain string without quotation marks involves converting from a JSON string to a Rust string with as_str() or avoiding the use of Value as described in the following section.

The Value representation is sufficient for very basic tasks but can be tedious to work with for anything more significant. Error handling is verbose to implement correctly, for example imagine trying to detect the presence of unrecognized fields in the input data. The compiler is powerless to help you when you make a mistake, for example imagine typoing v["name"] as v["nmae"] in one of the dozens of places it is used in your code.

Parsing JSON as strongly typed data structures

Serde provides a powerful way of mapping JSON data into Rust data structures largely automatically.

use serde::{Deserialize, Serialize};
use serde_json::Result;

#[derive(Serialize, Deserialize)]
struct Person {
    name: String,
    age: u8,
    phones: Vec<String>,
}

fn typed_example() -> Result<()> {
    // Some JSON input data as a &str. Maybe this comes from the user.
    let data = r#"
        {
            "name": "John Doe",
            "age": 43,
            "phones": [
                "+44 1234567",
                "+44 2345678"
            ]
        }"#;

    // Parse the string of data into a Person object. This is exactly the
    // same function as the one that produced serde_json::Value above, but
    // now we are asking it for a Person as output.
    let p: Person = serde_json::from_str(data)?;

    // Do things just like with any other Rust data structure.
    println!("Please call {} at the number {}", p.name, p.phones[0]);

    Ok(())
}

This is the same serde_json::from_str function as before, but this time we assign the return value to a variable of type Person so Serde will automatically interpret the input data as a Person and produce informative error messages if the layout does not conform to what a Person is expected to look like.

Any type that implements Serde's Deserialize trait can be deserialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Deserialize)].
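
For instance, a JSON object with uniform values can go straight into a standard-library map, with no derive at all:

use std::collections::HashMap;

fn main() -> serde_json::Result<()> {
    let ages: HashMap<String, u8> = serde_json::from_str(r#"{"alice": 30, "bob": 25}"#)?;
    assert_eq!(ages["alice"], 30);
    Ok(())
}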

Once we have p of type Person, our IDE and the Rust compiler can help us use it correctly like they do for any other Rust code. The IDE can autocomplete field names to prevent typos, which was impossible in the serde_json::Value representation. And the Rust compiler can check that when we write p.phones[0], then p.phones is guaranteed to be a Vec<String> so indexing into it makes sense and produces a String.

The necessary setup for using Serde's derive macros is explained on the Using derive page of the Serde site.

Constructing JSON values

Serde JSON provides a json! macro to build serde_json::Value objects with very natural JSON syntax.

use serde_json::json;

fn main() {
    // The type of `john` is `serde_json::Value`
    let john = json!({
        "name": "John Doe",
        "age": 43,
        "phones": [
            "+44 1234567",
            "+44 2345678"
        ]
    });

    println!("first phone number: {}", john["phones"][0]);

    // Convert to a string of JSON and print it out
    println!("{}", john.to_string());
}

The Value::to_string() function converts a serde_json::Value into a String of JSON text.

One neat thing about the json! macro is that variables and expressions can be interpolated directly into the JSON value as you are building it. Serde will check at compile time that the value you are interpolating is able to be represented as JSON.

let full_name = "John Doe";
let age_last_year = 42;

// The type of `john` is `serde_json::Value`
let john = json!({
    "name": full_name,
    "age": age_last_year + 1,
    "phones": [
        format!("+44 {}", random_phone())
    ]
});

This is amazingly convenient, but we have the problem we had before with Value: the IDE and Rust compiler cannot help us if we get it wrong. Serde JSON provides a better way of serializing strongly-typed data structures into JSON text.

Creating JSON by serializing data structures

A data structure can be converted to a JSON string by serde_json::to_string. There is also serde_json::to_vec which serializes to a Vec<u8> and serde_json::to_writer which serializes to any io::Write such as a File or a TCP stream.

use serde::{Deserialize, Serialize};
use serde_json::Result;

#[derive(Serialize, Deserialize)]
struct Address {
    street: String,
    city: String,
}

fn print_an_address() -> Result<()> {
    // Some data structure.
    let address = Address {
        street: "10 Downing Street".to_owned(),
        city: "London".to_owned(),
    };

    // Serialize it to a JSON string.
    let j = serde_json::to_string(&address)?;

    // Print, write to a file, or send to an HTTP server.
    println!("{}", j);

    Ok(())
}

Any type that implements Serde's Serialize trait can be serialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Serialize)].

Performance

It is fast. You should expect in the ballpark of 500 to 1000 megabytes per second deserialization and 600 to 900 megabytes per second serialization, depending on the characteristics of your data. This is competitive with the fastest C and C++ JSON libraries or even 30% faster for many use cases. Benchmarks live in the serde-rs/json-benchmark repo.

Getting help

Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

No-std support

As long as there is a memory allocator, it is possible to use serde_json without the rest of the Rust standard library. This is supported on Rust 1.36+. Disable the default "std" feature and enable the "alloc" feature:

[dependencies]
serde_json = { version = "1.0", default-features = false, features = ["alloc"] }

For JSON support in Serde without a memory allocator, please see the serde-json-core crate.

Link: https://crates.io/crates/serde_json

#rust  #rustlang  #encode   #json