October 19, 2021
This tutorial shows how to create a basic application demonstrating an approach to writing simple "function-based microservices". It explains how to create such microservices, as well as how to deploy them and set up autoscaling, metrics gathering, and tracing.
This approach can be quite powerful for large-scale event processing, while offering a good balance between ownership of the infrastructure and the complexity of deployments.
June 1, 2022
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown up to 5x the throughput of stock MySQL and 3x that of stock PostgreSQL on similar hardware. In terms of scalability, Aurora introduces enhancements and innovations in storage and compute, scaling both horizontally and vertically.
Aurora supports up to 128 TB of storage capacity and dynamically scales the storage layer in 10 GB increments. On the compute side, Aurora supports scalable configurations with multiple read replicas: each cluster can have up to 15 additional Aurora Replicas in a region. In addition, Aurora provides a multi-primary architecture that supports four read/write nodes. Its serverless architecture scales vertically with typical scaling latency under a second, while Global Database lets a single database cluster span multiple AWS Regions with low latency.
Aurora already scales well as user data volume grows. But can it handle still more data and support more concurrent access? You may consider using sharding across multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides a reference for choosing between Proxy and JDBC sharding.
AWS Aurora offers a single relational database whose primary/secondary, multi-primary, and global database hosting architectures can satisfy the scenarios described above. However, Aurora does not provide direct support for sharding, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, problems such as cross-node joins, correlated queries, distributed transactions, SQL sorting, paging, function computation, global primary keys, capacity planning, and secondary capacity expansion after sharding have to be solved.
It is generally accepted that query time is optimal when a MySQL table holds fewer than 10 million rows, because the height of its B-tree index then stays between 3 and 5. Data sharding reduces the amount of data in a single table while distributing read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
1. Advantages of vertical sharding
2. Disadvantages of vertical sharding
Join can only be implemented through interface aggregation, which increases the complexity of development.
3. Advantages of horizontal sharding
4. Disadvantages of horizontal sharding
Cross-shard join performance is poor.
Based on the analysis above and the available studies of popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to show how the two together support various forms of sharding and how to solve the problems that sharding introduces.
ShardingSphere is an open source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
The characteristics of Sharding-JDBC are:
Hybrid Structure Integrating Sharding-JDBC and Applications
Sharding-JDBC’s core concepts
Data node: The smallest unit of data sharding, consisting of a data source name and a table name, such as ds_0.product_order_0.
Actual table: The physical table that actually exists in the horizontally sharded database, such as the product order tables product_order_0, product_order_1, and product_order_2.
Logic table: The logical name for a group of horizontally sharded tables with the same schema. For instance, the logic table for product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: A primary table and its related table that share the same sharding rules. For example, product_order and product_order_item are both sharded by order_id, so they are binding tables of each other. Correlated queries across binding tables do not produce a Cartesian product, so query efficiency increases greatly.
Broadcast table: A table that exists in every sharded data source, with identical schema and data in each database. It suits small tables that need to be joined with large sharded tables, such as dictionary and configuration tables.
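To make these concepts concrete, here is a hedged configuration sketch. The data source and table names follow the product_order example above rather than the demo project used later, and t_dict is a hypothetical dictionary table:
# every combination such as ds_0.product_order_0 is one data node of the logic table product_order
spring.shardingsphere.sharding.tables.product_order.actual-data-nodes=ds_$->{0..1}.product_order_$->{0..2}
# product_order and product_order_item share the same sharding rules, so they are declared as binding tables
spring.shardingsphere.sharding.binding-tables[0]=product_order,product_order_item
# t_dict is kept identical on every data source as a broadcast table
spring.shardingsphere.sharding.broadcast-tables=t_dict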
Download the example project code locally. To ensure the stability of the test code, we use the shardingsphere-example-4.0.0 version.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties #database sharding only profile
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table split profile
application.properties #spring boot profile
Code logic description:
The entry class of the Spring Boot application is shown below; execute it to run the project. On startup, the demo inserts order data, prints it, deletes it, and prints again, the same sequence as the processSuccess() method shown later.
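As a hedged sketch (the class name and package are illustrative, not the actual ones in shardingsphere-example), the entry class is a standard Spring Boot launcher:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Illustrative launcher; the real class in the example project wires in a demo
// service that performs the insert -> print -> delete -> print sequence on startup.
@SpringBootApplication
public class ShardingJdbcDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(ShardingJdbcDemoApplication.class, args);
    }
}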
As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to serve writes and strongly consistent reads, and a read-only endpoint for reads that do not require strong consistency. Aurora's replica lag is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a large share of the read load can be directed to the read-only endpoint.
Through a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides the primary/secondary architecture in a fully managed form, but upper-layer applications still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of each SQL statement and certain routing policies.
ShardingSphere-JDBC provides read/write splitting, and because it is integrated into the application, the complex wiring between the application and the database cluster can be kept out of the application code. Developers manage the shards through configuration files and combine them with ORM frameworks such as Spring JPA and MyBatis, completely separating this duplicated logic from the code, which greatly improves maintainability and reduces the coupling between code and database.
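To illustrate the point, here is a hedged sketch of application code sitting on top of the ShardingSphere data source (the DAO name, SQL, and column list are assumptions, not code from the example project). The application only ever sees the single DataSource auto-configured by the sharding-jdbc Spring Boot starter; the read/write routing happens inside it, and the same transparency applies when the SQL is issued through Spring JPA or MyBatis.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.stereotype.Service;

@Service
public class OrderDao {

    // Single DataSource built by sharding-jdbc from application-master-slave.properties;
    // the individual master/slave data sources are invisible to the application.
    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void insertOrder(long userId, String status) throws SQLException {
        // Write SQL is routed to ds_master.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t_order (user_id, status) VALUES (?, ?)")) {
            ps.setLong(1, userId);
            ps.setString(2, status);
            ps.executeUpdate();
        }
    }

    public long countOrders() throws SQLException {
        // Read SQL outside a write transaction is balanced across ds_slave_0 and ds_slave_1
        // by the round_robin load-balance algorithm configured in the profile below.
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM t_order")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}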
Create an Aurora MySQL cluster for read/write splitting. The instance class is db.r5.2xlarge, and the cluster has one writer node and two reader nodes.
application.properties (Spring Boot main profile) description:
Replace the placeholder values below with your own environment configuration.
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties (sharding-jdbc profile) description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# enable the shardingsphere SQL log so you can see the conversion from logical SQL to actual SQL in the output
spring.shardingsphere.props.sql.show=true
As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.
As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in round-robin fashion.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project.
2. Perform a failover on Aurora's console.
3. Execute the Rest API request.
4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers successfully (a hedged probe sketch follows these steps).
5. The following figure shows the process of executing the code during failover. It takes about 37 seconds from the time the latest SQL write succeeds to the time the next SQL write succeeds. That is, the application recovers automatically from the Aurora failover, and the recovery time is about 37 seconds.
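One way to drive step 4 is a small write probe that posts once per second and timestamps each attempt; the gap between the last success before the failover and the first success after it approximates the recovery time. This is a hedged sketch: the endpoint comes from the steps above, while the class name and JSON payload are assumptions rather than code from the example project.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;

public class FailoverProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8088/save-user"))               // endpoint from step 4
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"test\"}")) // payload is an assumption
                .build();
        while (true) {
            try {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(Instant.now() + " HTTP " + response.statusCode());
            } catch (Exception e) {
                System.out.println(Instant.now() + " write failed: " + e.getMessage());
            }
            // one attempt per second; compare timestamps to estimate the recovery window
            Thread.sleep(1000L);
        }
    }
}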
application.properties (Spring Boot main profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties (sharding-jdbc profile) description
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table and there is only one instance here, a single t_address table is created. Two physical tables, t_order_0 and t_order_1, are created when t_order is created.
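For reference, here is a hedged sketch of the kind of JPA entity that drives this DDL. The columns follow the t_order fields visible in the SQL logs above; the actual entity lives in the example-spring-jpa module and may differ in details such as the ID generation strategy.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Mapped to the logic table name; Sharding-JDBC rewrites DDL and DML on t_order
// into the physical tables t_order_0 and t_order_1.
@Entity
@Table(name = "t_order")
public class OrderEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "order_id")
    private Long orderId;

    @Column(name = "user_id")
    private Integer userId;

    @Column(name = "address_id")
    private Long addressId;

    @Column(name = "status")
    private String status;

    // getters and setters omitted
}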
2. Write operation
As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 and t_order_1 according to the table-splitting rules.
Because t_order and t_order_item are binding tables, the order_item records associated with an order are placed on the same shard as the order itself.
3. Read operation
As shown in the figure below, a join query on order and order_item under the binding-table relationship precisely locates the physical shard based on that relationship.
A join query on order and order_item without the binding relationship traverses all shards.
Create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.
application.properties (Spring Boot main profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties (sharding-jdbc profile) description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's database sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are all created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
Writes to the sharded tables t_order and t_order_item are routed to the table on the corresponding instance according to the database sharding column (user_id) and the routing policy.
3. Read operation
Queries on order are routed to the corresponding Aurora instance according to the database sharding routing rules.
Queries on address: since address is a broadcast table, one of the nodes in use is selected at random and queried.
As shown in the figure below, a join query on order and order_item under the binding-table relationship precisely locates the physical shard based on that relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the broadcast table t_address will be created on both Aurora instances.
application.properties (Spring Boot main profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties (sharding-jdbc profile) description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=jdbc:mysql://<your ds_0 endpoint>:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are all created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
Writes to the sharded tables t_order and t_order_item are routed to the table on the corresponding instance according to the database sharding column and the routing policy.
3. Read operation
The read operation is similar to the database sharding verification described in Section 2.4.3.
The following figure shows the physical table of the created database instance.
application.properties (Spring Boot main profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties (sharding-jdbc profile) description
The URL, username, and password of each data source need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC's database sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are all created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
Writes to the sharded tables t_order and t_order_item are routed to the table on the corresponding instance according to the database sharding column and the routing policy.
3. Read operation
The join query on order and order_item under the binding-table relationship is shown below.
3. Conclusion
As an open source product focused on database enhancement, ShardingSphere is strong in terms of community activity, product maturity, and documentation richness.
Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer such as Proxy, so operational complexity is reduced, and latency is theoretically lower than Proxy's because there is no intermediate hop. In addition, ShardingSphere-JDBC can support a variety of SQL-standard relational databases such as MySQL, PostgreSQL, Oracle, and SQL Server.
However, because Sharding-JDBC is integrated into the application, it currently supports only Java and is strongly dependent on the application. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, so switching to other middleware later requires relatively small changes.
In conclusion, Sharding-JDBC is a good choice if you use a Java-based system, have to interconnect with different relational databases, and don't want to bother with introducing an intermediate layer.
Author
Sun Jinhua
A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related architecture design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for the e-commerce platforms of automotive companies. He also worked as a senior engineer at a leading global communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture, databases, middleware, IoT, etc.
October 17, 2020
Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey by StackRox, Kubernetes' dominance of the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
(State of Kubernetes and Container Security, 2020)
July 31, 2022
ActiveInteraction manages application-specific business logic. It's an implementation of service objects designed to blend seamlessly into Rails.
ActiveInteraction gives you a place to put your business logic. It also helps you write safer code by validating that your inputs conform to your expectations. If ActiveModel deals with your nouns, then ActiveInteraction handles your verbs.
Add it to your Gemfile:
gem 'active_interaction', '~> 5.1'
Or install it manually:
$ gem install active_interaction --version '~> 5.1'
This project uses Semantic Versioning. Check out GitHub releases for a detailed list of changes.
To define an interaction, create a subclass of ActiveInteraction::Base. Then you need to do two things:
Define your inputs. Use class filter methods to define what you expect your inputs to look like. For instance, if you need a boolean flag for pepperoni, use boolean :pepperoni. Check out the filters section for all the available options.
Define your business logic. Do this by implementing the #execute method. Each input you defined will be available as the type you specified. If any of the inputs are invalid, #execute won't be run. Filters are responsible for checking your inputs. Check out the validations section if you need more than that.
That covers the basics. Let's put it all together into a simple example that squares a number.
require 'active_interaction'
class Square < ActiveInteraction::Base
float :x
def execute
x**2
end
end
Call .run on your interaction to execute it. You must pass a single hash to .run. It will return an instance of your interaction. By convention, we call this an outcome. You can use the #valid? method to ask the outcome if it's valid. If it's invalid, take a look at its errors with #errors. In either case, the value returned from #execute will be stored in #result.
outcome = Square.run(x: 'two point one')
outcome.valid?
# => nil
outcome.errors.messages
# => {:x=>["is not a valid float"]}
outcome = Square.run(x: 2.1)
outcome.valid?
# => true
outcome.result
# => 4.41
You can also use .run! to execute interactions. It's like .run but more dangerous. It doesn't return an outcome. If the outcome would be invalid, it will instead raise an error. But if the outcome would be valid, it simply returns the result.
Square.run!(x: 'two point one')
# ActiveInteraction::InvalidInteractionError: X is not a valid float
Square.run!(x: 2.1)
# => 4.41
ActiveInteraction checks your inputs. Often you'll want more than that. For instance, you may want an input to be a string with at least one non-whitespace character. Instead of writing your own validation for that, you can use validations from ActiveModel.
These validations aren't provided by ActiveInteraction. They're from ActiveModel. You can also use any custom validations you wrote yourself in your interactions.
class SayHello < ActiveInteraction::Base
string :name
validates :name,
presence: true
def execute
"Hello, #{name}!"
end
end
When you run this interaction, two things will happen. First ActiveInteraction will check your inputs. Then ActiveModel will validate them. If both of those are happy, it will be executed.
SayHello.run!(name: nil)
# ActiveInteraction::InvalidInteractionError: Name is required
SayHello.run!(name: '')
# ActiveInteraction::InvalidInteractionError: Name can't be blank
SayHello.run!(name: 'Taylor')
# => "Hello, Taylor!"
You can define filters inside an interaction using the appropriate class method. Each method has the same signature:
Some symbolic names. These are the attributes to create.
An optional hash of options. Each filter supports at least these two options:
default is the fallback value to use if nil is given. To make a filter optional, set default: nil.
desc is a human-readable description of the input. This can be useful for generating documentation. For more information about this, read the descriptions section.
An optional block of sub-filters. Only array and hash filters support this. Other filters will ignore blocks when given to them.
Let's take a look at an example filter. It defines three inputs: x, y, and z. Those inputs are optional and they all share the same description ("an example filter").
array :x, :y, :z,
default: nil,
desc: 'an example filter' do
# Some filters support sub-filters here.
end
In general, filters accept values of the type they correspond to, plus a few alternatives that can be reasonably coerced. Typically the coercions come from Rails, so "1" can be interpreted as the boolean value true, the string "1", or the number 1.
In addition to accepting arrays, array inputs will convert ActiveRecord::Relations into arrays.
class ArrayInteraction < ActiveInteraction::Base
array :toppings
def execute
toppings.size
end
end
ArrayInteraction.run!(toppings: 'everything')
# ActiveInteraction::InvalidInteractionError: Toppings is not a valid array
ArrayInteraction.run!(toppings: [:cheese, 'pepperoni'])
# => 2
Use a block to constrain the types of elements an array can contain. Note that you can only have one filter inside an array block, and it must not have a name.
array :birthdays do
date
end
For interface, object, and record filters, the name of the array filter will be singularized and used to determine the type of value passed. In the example below, the objects passed would need to be of type Cow.
array :cows do
object
end
You can override this by passing the necessary information to the inner filter.
array :managers do
object class: People
end
Errors that occur will be indexed based on the Rails configuration setting index_nested_attribute_errors. You can also manually override this setting with the :index_errors option. In this state it is possible to get multiple errors from a single filter.
class ArrayInteraction < ActiveInteraction::Base
array :favorite_numbers, index_errors: true do
integer
end
def execute
favorite_numbers
end
end
ArrayInteraction.run(favorite_numbers: [8, 'bazillion']).errors.details
=> {:"favorite_numbers[1]"=>[{:error=>:invalid_type, :type=>"array"}]}
With :index_errors set to false, the error would have been:
{:favorite_numbers=>[{:error=>:invalid_type, :type=>"array"}]}
Boolean filters convert the strings "1", "true", and "on" (case-insensitive) into true. They also convert "0", "false", and "off" into false. Blank strings will be treated as nil.
class BooleanInteraction < ActiveInteraction::Base
boolean :kool_aid
def execute
'Oh yeah!' if kool_aid
end
end
BooleanInteraction.run!(kool_aid: 1)
# ActiveInteraction::InvalidInteractionError: Kool aid is not a valid boolean
BooleanInteraction.run!(kool_aid: true)
# => "Oh yeah!"
File filters also accept TempFiles and anything that responds to #rewind. That means that you can pass the params from uploading files via forms in Rails.
class FileInteraction < ActiveInteraction::Base
file :readme
def execute
readme.size
end
end
FileInteraction.run!(readme: 'README.md')
# ActiveInteraction::InvalidInteractionError: Readme is not a valid file
FileInteraction.run!(readme: File.open('README.md'))
# => 21563
Hash filters accept hashes. The expected value types are given by passing a block and nesting other filters. You can have any number of filters inside a hash, including other hashes.
class HashInteraction < ActiveInteraction::Base
hash :preferences do
boolean :newsletter
boolean :sweepstakes
end
def execute
puts 'Thanks for joining the newsletter!' if preferences[:newsletter]
puts 'Good luck in the sweepstakes!' if preferences[:sweepstakes]
end
end
HashInteraction.run!(preferences: 'yes, no')
# ActiveInteraction::InvalidInteractionError: Preferences is not a valid hash
HashInteraction.run!(preferences: { newsletter: true, 'sweepstakes' => false })
# Thanks for joining the newsletter!
# => nil
Setting default hash values can be tricky. The default value has to be either nil or {}. Use nil to make the hash optional. Use {} if you want to set some defaults for values inside the hash.
hash :optional,
default: nil
# => {:optional=>nil}
hash :with_defaults,
default: {} do
boolean :likes_cookies,
default: true
end
# => {:with_defaults=>{:likes_cookies=>true}}
By default, hashes remove any keys that aren't given as nested filters. To allow all hash keys, set strip: false. In general we don't recommend doing this, but it's sometimes necessary.
hash :stuff,
strip: false
String filters define inputs that only accept strings.
class StringInteraction < ActiveInteraction::Base
string :name
def execute
"Hello, #{name}!"
end
end
StringInteraction.run!(name: 0xDEADBEEF)
# ActiveInteraction::InvalidInteractionError: Name is not a valid string
StringInteraction.run!(name: 'Taylor')
# => "Hello, Taylor!"
String filters strip leading and trailing whitespace by default. To disable it, set the strip option to false.
string :comment,
strip: false
Symbol filters define inputs that accept symbols. Strings will be converted into symbols.
class SymbolInteraction < ActiveInteraction::Base
symbol :method
def execute
method.to_proc
end
end
SymbolInteraction.run!(method: -> {})
# ActiveInteraction::InvalidInteractionError: Method is not a valid symbol
SymbolInteraction.run!(method: :object_id)
# => #<Proc:0x007fdc9ba94118>
Filters that work with dates and times behave similarly. By default, they all convert strings into their expected data types using .parse. Blank strings will be treated as nil. If you give the format option, they will instead convert strings using .strptime. Note that formats won't work with DateTime and Time filters if a time zone is set.
Date
class DateInteraction < ActiveInteraction::Base
date :birthday
def execute
birthday + (18 * 365)
end
end
DateInteraction.run!(birthday: 'yesterday')
# ActiveInteraction::InvalidInteractionError: Birthday is not a valid date
DateInteraction.run!(birthday: Date.new(1989, 9, 1))
# => #<Date: 2007-08-28 ((2454341j,0s,0n),+0s,2299161j)>
date :birthday,
format: '%Y-%m-%d'
DateTime
class DateTimeInteraction < ActiveInteraction::Base
date_time :now
def execute
now.iso8601
end
end
DateTimeInteraction.run!(now: 'now')
# ActiveInteraction::InvalidInteractionError: Now is not a valid date time
DateTimeInteraction.run!(now: DateTime.now)
# => "2015-03-11T11:04:40-05:00"
date_time :start,
format: '%Y-%m-%dT%H:%M:%S'
Time
In addition to converting strings with .parse (or .strptime), time filters convert numbers with .at.
class TimeInteraction < ActiveInteraction::Base
time :epoch
def execute
Time.now - epoch
end
end
TimeInteraction.run!(epoch: 'a long, long time ago')
# ActiveInteraction::InvalidInteractionError: Epoch is not a valid time
TimeInteraction.run!(epoch: Time.new(1970))
# => 1426068362.5136619
time :start,
format: '%Y-%m-%dT%H:%M:%S'
All numeric filters accept numeric input. They will also convert strings using the appropriate method from Kernel (like .Float). Blank strings will be treated as nil.
Decimal
class DecimalInteraction < ActiveInteraction::Base
decimal :price
def execute
price * 1.0825
end
end
DecimalInteraction.run!(price: 'one ninety-nine')
# ActiveInteraction::InvalidInteractionError: Price is not a valid decimal
DecimalInteraction.run!(price: BigDecimal(1.99, 2))
# => #<BigDecimal:7fe792a42028,'0.2165E1',18(45)>
To specify the number of significant digits, use the digits option.
decimal :dollars,
digits: 2
Float
class FloatInteraction < ActiveInteraction::Base
float :x
def execute
x**2
end
end
FloatInteraction.run!(x: 'two point one')
# ActiveInteraction::InvalidInteractionError: X is not a valid float
FloatInteraction.run!(x: 2.1)
# => 4.41
Integer
class IntegerInteraction < ActiveInteraction::Base
integer :limit
def execute
limit.downto(0).to_a
end
end
IntegerInteraction.run!(limit: 'ten')
# ActiveInteraction::InvalidInteractionError: Limit is not a valid integer
IntegerInteraction.run!(limit: 10)
# => [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
When a String is passed into an integer input, the value will be coerced. A default base of 10 is used, though it may be overridden with the base option. If a base of 0 is provided, the coercion will respect radix indicators present in the string.
class IntegerInteraction < ActiveInteraction::Base
integer :limit1
integer :limit2, base: 8
integer :limit3, base: 0
def execute
[limit1, limit2, limit3]
end
end
IntegerInteraction.run!(limit1: 71, limit2: 71, limit3: 71)
# => [71, 71, 71]
IntegerInteraction.run!(limit1: "071", limit2: "071", limit3: "0x71")
# => [71, 57, 113]
IntegerInteraction.run!(limit1: "08", limit2: "08", limit3: "08")
ActiveInteraction::InvalidInteractionError: Limit2 is not a valid integer, Limit3 is not a valid integer
Interface filters allow you to specify an interface that the passed value must meet in order to pass. The name of the interface is used to look for a constant inside the ancestor listing for the passed value. This allows for a variety of checks depending on what's passed. Class instances are checked for an included module or an inherited ancestor class. Classes are checked for an extended module or an inherited ancestor class. Modules are checked for an extended module.
class InterfaceInteraction < ActiveInteraction::Base
interface :exception
def execute
exception
end
end
InterfaceInteraction.run!(exception: Exception)
# ActiveInteraction::InvalidInteractionError: Exception is not a valid interface
InterfaceInteraction.run!(exception: NameError) # a subclass of Exception
# => NameError
You can use :from to specify a class or module. This would be the equivalent of what's above.
class InterfaceInteraction < ActiveInteraction::Base
interface :error,
from: Exception
def execute
error
end
end
You can also create an anonymous interface on the fly by passing the methods option.
class InterfaceInteraction < ActiveInteraction::Base
interface :serializer,
methods: %i[dump load]
def execute
input = '{ "is_json" : true }'
object = serializer.load(input)
output = serializer.dump(object)
output
end
end
require 'json'
InterfaceInteraction.run!(serializer: Object.new)
# ActiveInteraction::InvalidInteractionError: Serializer is not a valid interface
InterfaceInteraction.run!(serializer: JSON)
# => "{\"is_json\":true}"
Object filters allow you to require an instance of a particular class or one of its subclasses.
class Cow
def moo
'Moo!'
end
end
class ObjectInteraction < ActiveInteraction::Base
object :cow
def execute
cow.moo
end
end
ObjectInteraction.run!(cow: Object.new)
# ActiveInteraction::InvalidInteractionError: Cow is not a valid object
ObjectInteraction.run!(cow: Cow.new)
# => "Moo!"
The class name is automatically determined by the filter name. If your filter name is different than your class name, use the class option. It can be either the class, a string, or a symbol.
object :dolly1,
class: Sheep
object :dolly2,
class: 'Sheep'
object :dolly3,
class: :Sheep
If you have value objects or you would like to build one object from another, you can use the converter option. It is only called if the value provided is not an instance of the class or one of its subclasses. The converter option accepts a symbol that specifies a class method on the object class or a proc. Both will be passed the value, and any errors thrown inside the converter will cause the value to be considered invalid. Any returned value that is not the correct class will also be treated as invalid. Any default that is not an instance of the class or subclass and is not nil will also be converted.
class ObjectInteraction < ActiveInteraction::Base
object :ip_address,
class: IPAddr,
converter: :new
def execute
ip_address
end
end
ObjectInteraction.run!(ip_address: '192.168.1.1')
# #<IPAddr: IPv4:192.168.1.1/255.255.255.255>
ObjectInteraction.run!(ip_address: 1)
# ActiveInteraction::InvalidInteractionError: Ip address is not a valid object
Record filters allow you to require an instance of a particular class (or one of its subclasses) or a value that can be used to locate an instance of the object. If the value does not match, it will call find on the class of the record. This is particularly useful when working with ActiveRecord objects. Like an object filter, the class is derived from the name passed but can be specified with the class option. Any default that is not an instance of the class or subclass and is not nil will also be found. Blank strings passed in will be treated as nil.
class RecordInteraction < ActiveInteraction::Base
record :encoding
def execute
encoding
end
end
> RecordInteraction.run!(encoding: Encoding::US_ASCII)
=> #<Encoding:US-ASCII>
> RecordInteraction.run!(encoding: 'ascii')
=> #<Encoding:US-ASCII>
A different method can be specified by providing a symbol to the finder option.
ActiveInteraction plays nicely with Rails. You can use interactions to handle your business logic instead of models or controllers. To see how it all works, let's take a look at a complete example of a controller with the typical resourceful actions.
We recommend putting your interactions in app/interactions. It's also very helpful to group them by model. That way you can look in app/interactions/accounts for all the ways you can interact with accounts.
- app/
- controllers/
- accounts_controller.rb
- interactions/
- accounts/
- create_account.rb
- destroy_account.rb
- find_account.rb
- list_accounts.rb
- update_account.rb
- models/
- account.rb
- views/
- account/
- edit.html.erb
- index.html.erb
- new.html.erb
- show.html.erb
# GET /accounts
def index
@accounts = ListAccounts.run!
end
Since we're not passing any inputs to ListAccounts, it makes sense to use .run! instead of .run. If it failed, that would mean we probably messed up writing the interaction.
class ListAccounts < ActiveInteraction::Base
def execute
Account.not_deleted.order(last_name: :asc, first_name: :asc)
end
end
Up next is the show action. For this one we'll define a helper method to handle raising the correct errors. We have to do this because calling .run! would raise an ActiveInteraction::InvalidInteractionError instead of an ActiveRecord::RecordNotFound. That means Rails would render a 500 instead of a 404.
# GET /accounts/:id
def show
@account = find_account!
end
private
def find_account!
outcome = FindAccount.run(params)
if outcome.valid?
outcome.result
else
fail ActiveRecord::RecordNotFound, outcome.errors.full_messages.to_sentence
end
end
This probably looks a little different than you're used to. Rails commonly handles this with a before_filter that sets the @account instance variable. Why is all this interaction code better? Two reasons: One, you can reuse the FindAccount interaction in other places, like your API controller or a Resque task. And two, if you want to change how accounts are found, you only have to change one place.
Inside the interaction, we could use #find instead of #find_by_id. That way we wouldn't need the #find_account! helper method in the controller because the error would bubble all the way up. However, you should try to avoid raising errors from interactions. If you do, you'll have to deal with raised exceptions as well as the validity of the outcome.
class FindAccount < ActiveInteraction::Base
integer :id
def execute
account = Account.not_deleted.find_by_id(id)
if account
account
else
errors.add(:id, 'does not exist')
end
end
end
Note that it's perfectly fine to add errors during execution. Not all errors have to come from checking or validation.
The new action will be a little different than the ones we've looked at so far. Instead of calling .run or .run!, it's going to initialize a new interaction. This is possible because interactions behave like ActiveModels.
# GET /accounts/new
def new
@account = CreateAccount.new
end
Since interactions behave like ActiveModels, we can use ActiveModel validations with them. We'll use validations here to make sure that the first and last names are not blank. The validations section goes into more detail about this.
class CreateAccount < ActiveInteraction::Base
string :first_name, :last_name
validates :first_name, :last_name,
presence: true
def to_model
Account.new
end
def execute
account = Account.new(inputs)
unless account.save
errors.merge!(account.errors)
end
account
end
end
We used a couple of advanced features here. The #to_model method helps determine the correct form to use in the view. Check out the section on forms for more about that. Inside #execute, we merge errors. This is a convenient way to move errors from one object to another. Read more about it in the errors section.
The create action has a lot in common with the new action. Both of them use the CreateAccount interaction. And if creating the account fails, this action falls back to rendering the new action.
# POST /accounts
def create
  outcome = CreateAccount.run(params.fetch(:account, {}))

  if outcome.valid?
    redirect_to(outcome.result)
  else
    @account = outcome
    render(:new)
  end
end
Note that we have to pass a hash to .run. Passing nil is an error.
Since we're using an interaction, we don't need strong parameters. The interaction will ignore any inputs that weren't defined by filters. So you can forget about params.require and params.permit because interactions handle that for you.
The destroy action will reuse the #find_account! helper method we wrote earlier.
# DELETE /accounts/:id
def destroy
  DestroyAccount.run!(account: find_account!)
  redirect_to(accounts_url)
end
In this simple example, the destroy interaction doesn't do much. It's not clear that you gain anything by putting it in an interaction. But in the future, when you need to do more than account.destroy, you'll only have to update one spot.
class DestroyAccount < ActiveInteraction::Base
  object :account

  def execute
    account.destroy
  end
end
Just like the destroy action, editing uses the #find_account! helper. Then it creates a new interaction instance to use as a form object.
# GET /accounts/:id/edit
def edit
  account = find_account!

  @account = UpdateAccount.new(
    account: account,
    first_name: account.first_name,
    last_name: account.last_name)
end
The interaction that updates accounts is more complicated than the others. It requires an account to update, but the other inputs are optional. If they're missing, it'll ignore those attributes. If they're present, it'll update them.
class UpdateAccount < ActiveInteraction::Base
  object :account

  string :first_name, :last_name,
    default: nil

  validates :first_name,
    presence: true,
    unless: -> { first_name.nil? }
  validates :last_name,
    presence: true,
    unless: -> { last_name.nil? }

  def execute
    account.first_name = first_name if first_name.present?
    account.last_name = last_name if last_name.present?

    unless account.save
      errors.merge!(account.errors)
    end

    account
  end
end
Hopefully you've gotten the hang of this by now. We'll use #find_account! to get the account, build up the inputs for UpdateAccount, then run the interaction and either redirect to the updated account or re-render the edit page.
# PUT /accounts/:id
def update
  inputs = { account: find_account! }.reverse_merge(params[:account])
  outcome = UpdateAccount.run(inputs)

  if outcome.valid?
    redirect_to(outcome.result)
  else
    @account = outcome
    render(:edit)
  end
end
ActiveSupport::Callbacks provides a powerful framework for defining callbacks. ActiveInteraction uses that framework to allow hooking into various parts of an interaction's lifecycle.
class Increment < ActiveInteraction::Base
  set_callback :filter, :before, -> { puts 'before filter' }

  integer :x

  set_callback :validate, :after, -> { puts 'after validate' }

  validates :x,
    numericality: { greater_than_or_equal_to: 0 }

  set_callback :execute, :around, lambda { |_interaction, block|
    puts '>>>'
    block.call
    puts '<<<'
  }

  def execute
    puts 'executing'
    x + 1
  end
end

Increment.run!(x: 1)
# before filter
# after validate
# >>>
# executing
# <<<
# => 2
In order, the available callbacks are filter, validate, and execute. You can set before, after, or around on any of them.
You can run interactions from within other interactions with #compose. If the interaction is successful, it'll return the result (just like if you had called it with .run!). If something went wrong, execution will halt immediately and the errors will be moved onto the caller.
class Add < ActiveInteraction::Base
  integer :x, :y

  def execute
    x + y
  end
end

class AddThree < ActiveInteraction::Base
  integer :x

  def execute
    compose(Add, x: x, y: 3)
  end
end

AddThree.run!(x: 5)
# => 8
To bring in filters from another interaction, use .import_filters. Combined with inputs, delegating to another interaction is a piece of cake.
class AddAndDouble < ActiveInteraction::Base
  import_filters Add

  def execute
    compose(Add, inputs) * 2
  end
end
Note that errors in composed interactions have a few tricky cases. See the errors section for more information about them.
The default value for an input can take on many different forms. Setting the default to nil makes the input optional. Setting it to some value makes that the default value for that input. Setting it to a lambda will lazily set the default value for that input. That means the value will be computed when the interaction is run, as opposed to when it is defined.
Lambda defaults are evaluated in the context of the interaction, so you can use the values of other inputs in them.
# This input is optional.
time :a, default: nil
# This input defaults to `Time.at(123)`.
time :b, default: Time.at(123)
# This input lazily defaults to `Time.now`.
time :c, default: -> { Time.now }
# This input defaults to the value of `c` plus 10 seconds.
time :d, default: -> { c + 10 }
Use the desc option to provide human-readable descriptions of filters. You should prefer these to comments because they can be used to generate documentation. The interaction class has a .filters method that returns a hash of filters. Each filter has a #desc method that returns the description.
class Descriptive < ActiveInteraction::Base
  string :first_name,
    desc: 'your first name'
  string :last_name,
    desc: 'your last name'
end

Descriptive.filters.each do |name, filter|
  puts "#{name}: #{filter.desc}"
end
# first_name: your first name
# last_name: your last name
ActiveInteraction provides detailed errors for easier introspection and testing of errors. Detailed errors improve on regular errors by adding a symbol that represents the type of error that has occurred. Let's look at an example where an item is purchased using a credit card.
class BuyItem < ActiveInteraction::Base
  object :credit_card, :item
  hash :options do
    boolean :gift_wrapped
  end

  def execute
    order = credit_card.purchase(item)
    notify(credit_card.account)
    order
  end

  private def notify(account)
    # ...
  end
end
Having missing or invalid inputs causes the interaction to fail and return errors.
outcome = BuyItem.run(item: 'Thing', options: { gift_wrapped: 'yes' })
outcome.errors.messages
# => {:credit_card=>["is required"], :item=>["is not a valid object"], :"options.gift_wrapped"=>["is not a valid boolean"]}
Determining the type of error based on the string is difficult if not impossible. Calling #details instead of #messages on errors gives you the same list of errors with a testable label representing the error.
outcome.errors.details
# => {:credit_card=>[{:error=>:missing}], :item=>[{:error=>:invalid_type, :type=>"object"}], :"options.gift_wrapped"=>[{:error=>:invalid_type, :type=>"boolean"}]}
Detailed errors can also be manually added during the execute call by passing a symbol to #add instead of a string.
def execute
  errors.add(:monster, :no_passage)
end
ActiveInteraction also supports merging errors. This is useful if you want to delegate validation to some other object. For example, if you have an interaction that updates a record, you might want that record to validate itself. By using the #merge! helper on errors, you can do exactly that.
class UpdateThing < ActiveInteraction::Base
  object :thing

  def execute
    unless thing.save
      errors.merge!(thing.errors)
    end

    thing
  end
end
When a composed interaction fails, its errors are merged onto the caller. This generally produces good error messages, but there are a few cases to look out for.
class Inner < ActiveInteraction::Base
  boolean :x, :y
end

class Outer < ActiveInteraction::Base
  string :x
  boolean :z, default: nil

  def execute
    compose(Inner, x: x, y: z)
  end
end

outcome = Outer.run(x: 'yes')
outcome.errors.details
# => { :x => [{ :error => :invalid_type, :type => "boolean" }],
#      :base => [{ :error => "Y is required" }] }
outcome.errors.full_messages.join(' and ')
# => "X is not a valid boolean and Y is required"
Since both interactions have an input called x, the inner error for that input is moved to the x error on the outer interaction. This results in a misleading error that claims the input x is not a valid boolean even though it's a string on the outer interaction.
Since only the inner interaction has an input called y, the inner error for that input is moved to the base error on the outer interaction. This results in a confusing error that claims the input y is required even though it's not present on the outer interaction.
The outcome returned by .run can be used in forms as though it were an ActiveModel object. You can also create a form object by calling .new on the interaction.
Given an application with an Account model, we'll create a new Account using the CreateAccount interaction.
# GET /accounts/new
def new
  @account = CreateAccount.new
end

# POST /accounts
def create
  outcome = CreateAccount.run(params.fetch(:account, {}))

  if outcome.valid?
    redirect_to(outcome.result)
  else
    @account = outcome
    render(:new)
  end
end
The form used to create a new Account has slightly more information on the form_for call than you might expect.
<%= form_for @account, as: :account, url: accounts_path do |f| %>
  <%= f.text_field :first_name %>
  <%= f.text_field :last_name %>
  <%= f.submit 'Create' %>
<% end %>
This is necessary because we want the form to act like it is creating a new Account. Defining to_model on the CreateAccount interaction tells the form to treat our interaction like an Account.
class CreateAccount < ActiveInteraction::Base
  # ...

  def to_model
    Account.new
  end
end
Now our form_for call knows how to generate the correct URL and param name (i.e. params[:account]).
# app/views/accounts/new.html.erb
<%= form_for @account do |f| %>
  <%# ... %>
<% end %>
If you have an interaction that updates an Account, you can define to_model to return the object you're updating.
class UpdateAccount < ActiveInteraction::Base
  # ...

  object :account

  def to_model
    account
  end
end
ActiveInteraction also supports formtastic and simple_form. The filters used to define the inputs on your interaction will relay type information to these gems. As a result, form fields will automatically use the appropriate input type.
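For example, with the simple_form gem installed, a form for the CreateAccount interaction might look like the following sketch (simple_form infers text inputs from the string filters; the exact markup depends on your simple_form configuration):
<%= simple_form_for @account, as: :account, url: accounts_path do |f| %>
  <%= f.input :first_name %>
  <%= f.input :last_name %>
  <%= f.button :submit, 'Create' %>
<% end %>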
It can be convenient to apply the same options to a bunch of inputs. One common use case is making many inputs optional. Instead of setting default: nil on each one of them, you can use with_options to reduce duplication.
with_options default: nil do
  date :birthday
  string :name
  boolean :wants_cake
end
Optional inputs can be defined by using the :default option as described in the filters section. Within the interaction, provided and default values are merged to create inputs. There are times when it is useful to know whether a value was passed to run or is the result of a filter default. In particular, it is useful when nil is an acceptable value. For example, you may optionally track your users' birthdays. You can use the inputs.given? predicate to see if an input was even passed to run. With inputs.given? you can also check the input of a hash or array filter by passing a series of keys or indexes to check; a short sketch of the nested form appears after the birthday example below.
class UpdateUser < ActiveInteraction::Base
  object :user

  date :birthday,
    default: nil

  def execute
    user.birthday = birthday if inputs.given?(:birthday)
    errors.merge!(user.errors) unless user.save
    user
  end
end
Now you have a few options. If you don't want to update their birthday, leave it out of the hash. If you want to remove their birthday, set birthday: nil. And if you want to update it, pass in the new value as usual.
user = User.find(...)
# Don't update their birthday.
UpdateUser.run!(user: user)
# Remove their birthday.
UpdateUser.run!(user: user, birthday: nil)
# Update their birthday.
UpdateUser.run!(user: user, birthday: Date.new(2000, 1, 2))
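For the nested case, here is a hedged sketch reusing the BuyItem interaction from the errors section, which declares a gift_wrapped boolean inside its options hash. Passing the path of keys to inputs.given? tells you whether that nested value was actually supplied (gift_wrap! below is a hypothetical method on the order):
def execute
  # true only if the caller explicitly passed options[:gift_wrapped]
  wrapped = inputs.given?(:options, :gift_wrapped)

  order = credit_card.purchase(item)
  order.gift_wrap! if wrapped
  order
end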
ActiveInteraction is i18n aware out of the box! All you have to do is add translations to your project. In Rails, these typically go into config/locales. For example, let's say that for some reason you want to print everything out backwards. Simply add translations for ActiveInteraction to your hsilgne locale.
# config/locales/hsilgne.yml
hsilgne:
  active_interaction:
    types:
      array: yarra
      boolean: naeloob
      date: etad
      date_time: emit etad
      decimal: lamiced
      file: elif
      float: taolf
      hash: hsah
      integer: regetni
      interface: ecafretni
      object: tcejbo
      string: gnirts
      symbol: lobmys
      time: emit
    errors:
      messages:
        invalid: dilavni si
        invalid_type: '%{type} dilav a ton si'
        missing: deriuqer si
Then set your locale and run interactions like normal.
class I18nInteraction < ActiveInteraction::Base
  string :name
end

I18nInteraction.run(name: false).errors.messages[:name]
# => ["is not a valid string"]

I18n.locale = :hsilgne
I18nInteraction.run(name: false).errors.messages[:name]
# => ["gnirts dilav a ton si"]
Everything else works like an activerecord entry. For example, to rename an attribute you can use attributes.
Here we'll rename the num attribute on an interaction named product:
en:
  active_interaction:
    attributes:
      product:
        num: 'Number'
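As a hedged sketch of the effect, an interaction named Product with a num filter would then report errors using the renamed attribute (the exact wording assumes the default error messages shown earlier):
class Product < ActiveInteraction::Base
  integer :num
end

Product.run(num: 'x').errors.full_messages
# => ["Number is not a valid integer"]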
ActiveInteraction is brought to you by Aaron Lasseigne. Along with Aaron, Taylor Fausak helped create and maintain ActiveInteraction but has since moved on.
If you want to contribute to ActiveInteraction, please read our contribution guidelines. A complete list of contributors is available on GitHub.
ActiveInteraction is licensed under the MIT License.
Author: AaronLasseigne
Source code: https://github.com/AaronLasseigne/active_interaction
License: MIT license
1630409837
In this article we will learn about “How to implement Fault Tolerance in Microservices using Resilience4j?”
https://javatechonline.com/how-to-implement-fault-tolerance-in-microservices-using-resilience4j/
When we develop an application, especially a microservices-based one, there is a high chance that we will experience some deviations while running it in real time: slow responses, network failures, failing REST calls, failures caused by a high number of requests, and much more. To tolerate these kinds of faults, we need to incorporate a fault tolerance mechanism into our application. To achieve this, we will make use of the Resilience4j library. Resilience4j is a lightweight, easy-to-use fault tolerance library inspired by Netflix Hystrix but designed for Java 8 and functional programming. The article covers it in detail with examples.