In the video below, we take a closer look at how to add a Servlet filter in Spring Boot (Spring Boot: Servlet Filter tutorial). Let’s get started!
#spring boot #servlet filter #add a servlet filter in spring boot #servlet
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown throughput increases of up to 5X over stock MySQL and 3X over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora introduces enhancements and innovations in storage and compute that allow it to scale both horizontally and vertically.
Aurora supports up to 128TB of storage capacity and dynamically scales the storage layer in 10GB increments. On the compute side, Aurora supports scalable configurations with multiple read replicas: a cluster can have up to 15 additional Aurora replicas. In addition, Aurora provides a multi-primary architecture that supports up to four read/write nodes. Its Serverless architecture scales vertically with typical scaling latency under a second, while Global Database enables a single database cluster to span multiple AWS Regions with low latency.
Aurora already provides great scalability as user data volume grows. But can it handle even more data and support more concurrent access? You may consider using sharding across multiple underlying Aurora clusters. To this end, this series of blogs, including this one, provides a reference for choosing between Proxy and JDBC for sharding.
AWS Aurora offers a single relational database, and its primary/secondary, multi-primary, global database, and other hosting architectures satisfy the architectural scenarios above. However, Aurora does not provide direct support for sharding, and sharding comes in several forms, such as vertical and horizontal sharding. If we want to further increase data capacity, several problems have to be solved, such as cross-node database joins, correlated queries, distributed transactions, SQL sorting, paging, function calculation, database-wide global primary keys, capacity planning, and secondary capacity expansion after sharding.
It is generally accepted that when a MySQL table holds fewer than 10 million rows, query time is optimal, because at that size the height of its BTREE index is between 3 and 5. Data sharding reduces the amount of data in a single table and, at the same time, distributes the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
1. Advantages of vertical sharding
2. Disadvantages of vertical sharding: cross-shard Join can only be implemented by aggregation at the interface layer, which increases the complexity of development.
3. Advantages of horizontal sharding
4. Disadvantages of horizontal sharding: cross-shard Join performance is poor.
Based on the analysis above, and the available studies on popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora to introduce how the combination of these two products meets various forms of sharding and how it solves the problems brought by sharding.
ShardingSphere is an open source ecosystem consisting of a set of distributed database middleware solutions, including three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
The characteristics of Sharding-JDBC are:
Hybrid Structure Integrating Sharding-JDBC and Applications
Sharding-JDBC’s core concepts
Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.
Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.
Logic table: The logical name of a group of horizontally sharded tables with the same schema. For instance, the logic table of the order tables product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: The primary table and its join table that share the same sharding rules. For example, the product_order table and product_order_item table are both sharded by order_id, so they are binding tables of each other. In a multi-table correlated query on binding tables, no Cartesian-product join appears, so query efficiency increases greatly.
Broadcast table: A table that exists in all sharded data sources, with identical schema and data in every database. It suits small tables that need to be joined with large tables, such as dictionary and configuration tables.
Download the example project code locally. To ensure the stability of the test code, we use the shardingsphere-example-4.0.0 version.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └── transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties #library split profile only
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table split profile
application.properties #spring boot profile
Code logic description:
The entry class of the Spring Boot application is shown below; executing it runs the project. The execution logic of the demo is as follows:
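As a minimal sketch of such an entry class (the class and bean names here are assumptions, not necessarily those used in shardingsphere-example), it might look like this; the demo flow mirrors the processSuccess() method shown later in this article (insert, print, delete, print):
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ShardingSpringBootJpaExampleApplication {

    public static void main(String[] args) {
        // Starting the Spring context makes Sharding-JDBC assemble its routed
        // DataSource from the active application-*.properties profile.
        SpringApplication.run(ShardingSpringBootJpaExampleApplication.class, args);
    }

    // Demo flow, mirroring processSuccess(): insert orders -> print -> delete -> print again.
    @Bean
    public CommandLineRunner demo(/* inject the example's order service here */) {
        return args -> System.out.println("-------------- Demo started --------------");
    }
}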
As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to serve writes and strongly consistent reads, and a read-only endpoint to serve reads that do not require strong consistency. Aurora's replication lag is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a large share of the read load can be directed to the read-only endpoint.
Through a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, which further improves the processing capability of the system. Read/write splitting improves the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but applications on the upper layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of each SQL statement and certain routing policies.
ShardingSphere-JDBC provides read/write splitting features and is integrated with the application, so the complex wiring between the application and the database cluster can be moved out of the application code. Developers can manage the sharding and read/write-splitting rules through configuration files and combine them with ORM frameworks such as Spring JPA and MyBatis to completely separate this duplicated logic from the code, which greatly improves maintainability and reduces the coupling between code and database.
Create an Aurora MySQL cluster for read/write splitting. The instance class is db.r5.2xlarge, and each cluster has one writer node and two reader nodes.
application.properties (Spring Boot master profile) description:
Replace the placeholder values with your own environment configuration.
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties (sharding-jdbc profile) description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source URL
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# enable the ShardingSphere SQL log; you can see the conversion from logical SQL to actual SQL in the output
spring.shardingsphere.props.sql.show=true
As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.
As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in a round-robin (polling) fashion.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
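For reference, here is a minimal sketch of how such non-transactional reads might be issued (the service name and the use of JdbcTemplate are assumptions; the example project itself goes through JPA). Because load-balance-algorithm-type is round_robin, two consecutive reads outside a transaction are routed to ds_slave_0 and ds_slave_1 in turn:
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderReadService {

    // Backed by the routed DataSource that Sharding-JDBC builds from the profile above.
    private final JdbcTemplate jdbcTemplate;

    public OrderReadService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // No surrounding transaction: each query is routed independently, so with
    // load-balance-algorithm-type=round_robin the first SELECT goes to ds_slave_0
    // and the second to ds_slave_1, matching the log output above.
    public void readTwice() {
        List<Map<String, Object>> first = jdbcTemplate.queryForList("SELECT * FROM t_order");
        List<Map<String, Object>> second = jdbcTemplate.queryForList("SELECT * FROM t_order");
        System.out.println(first.size() + " rows / " + second.size() + " rows");
    }
}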
Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project.
2. Perform a failover on Aurora’s console.
3. Execute the REST API request.
4. Repeatedly execute POST http://localhost:8088/save-user until calls to the API fail to write to Aurora and eventually recover successfully (a hypothetical sketch of this endpoint follows after this list).
5. The following figure shows the failover process as observed from the code. It takes about 37 seconds from the time the last SQL write succeeds before the failover to the time the next SQL write succeeds. That is, the application recovers automatically from an Aurora failover, and the recovery time is about 37 seconds.
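As an illustrative sketch only (the controller, the t_user table, and the use of JdbcTemplate are assumptions, not the exact code of the example project), the /save-user endpoint could look like this, where each POST performs a write that must reach the current Aurora writer:
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    // Writes go through Sharding-JDBC, which routes them to the current Aurora writer.
    private final JdbcTemplate jdbcTemplate;

    public UserController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Repeatedly calling POST /save-user during a failover shows how long writes
    // fail before the application reconnects to the newly promoted writer.
    // The t_user table here is only an assumed example schema.
    @PostMapping("/save-user")
    public String saveUser(@RequestBody String name) {
        jdbcTemplate.update("INSERT INTO t_user (name) VALUES (?)", name);
        return "saved";
    }
}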
application.properties (Spring Boot master profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
# Activate the sharding-tables configuration items
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties (sharding-jdbc profile) description
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
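To make the inline expression above concrete, here is a small standalone sketch (plain Java, not ShardingSphere code) of how the t_order_item_$->{order_id % 2} rule maps an order_id to a physical table:
public class TableRoutingDemo {

    // Mirrors the inline expression t_order_item_$->{order_id % 2}:
    // even order_ids land in t_order_item_0, odd ones in t_order_item_1.
    static String routeOrderItem(long orderId) {
        return "t_order_item_" + (orderId % 2);
    }

    public static void main(String[] args) {
        System.out.println(routeOrderItem(1024L)); // t_order_item_0
        System.out.println(routeOrderItem(1025L)); // t_order_item_1
    }
}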
1. DDL operation
JPA automatically creates tables for testing. When the Sharding-JDBC routing rules are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, a single t_address table is created, because there is only one database instance. When t_order is created, two physical tables, t_order_0 and t_order_1, are created.
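For reference, a minimal JPA entity for the logical t_order table might look like the sketch below (field names follow the columns visible in the log output earlier: order_id, user_id, address_id, status; the exact entity in shardingsphere-example may differ). The application only maps the logical table t_order; Sharding-JDBC creates and routes to the physical tables.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Mapped to the logical table name; Sharding-JDBC rewrites t_order into
// the physical tables t_order_0 / t_order_1 at execution time.
@Entity
@Table(name = "t_order")
public class OrderEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "order_id")
    private Long orderId;

    @Column(name = "user_id")
    private Integer userId;

    @Column(name = "address_id")
    private Long addressId;

    @Column(name = "status")
    private String status;

    // getters and setters omitted
}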
2. Write operation
As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 and t_order_1 according to the table-splitting rules.
Because t_order and t_order_item are bound, the records of an order_item and its order are placed in physical tables with the same suffix.
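As a sketch of why this matters (hypothetical JdbcTemplate usage and column lists, not the exact demo code): when an order and its item are inserted with the same order_id, the order_id % 2 rule sends both rows to same-suffix physical tables, so later joins stay on one shard.
import org.springframework.jdbc.core.JdbcTemplate;

public class BindingTableWriteDemo {

    // Both inserts share the same order_id, so the inline rule order_id % 2 routes
    // the t_order row and the t_order_item row to tables with the same suffix
    // (e.g. t_order_1 and t_order_item_1 for an odd order_id).
    static void insertOrderWithItem(JdbcTemplate jdbcTemplate, long orderId, int userId, long addressId) {
        jdbcTemplate.update(
                "INSERT INTO t_order (order_id, user_id, address_id, status) VALUES (?, ?, ?, ?)",
                orderId, userId, addressId, "INIT");
        jdbcTemplate.update(
                "INSERT INTO t_order_item (order_item_id, order_id, user_id, status) VALUES (?, ?, ?, ?)",
                orderId * 10, orderId, userId, "INIT");
    }
}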
3. Read operation
As shown in the figure below, when a join query is performed on order and order_item with the binding-table configuration, the physical shard is located precisely based on the binding relationship.
A join query on order and order_item without the binding configuration traverses all shards.
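As a sketch of such a query (plain SQL through the Sharding-JDBC routed connection, with an abbreviated column list), the join is written once against the logical tables; with the binding configured, it is rewritten only onto same-suffix pairs of physical tables rather than all combinations.
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;

public class BindingTableReadDemo {

    // One logical SQL statement against the logical tables; because t_order and
    // t_order_item are binding tables, the rewrite only pairs shards with the same
    // suffix (t_order_0 with t_order_item_0, t_order_1 with t_order_item_1) instead
    // of producing a Cartesian product of physical tables.
    static List<Map<String, Object>> findOrdersWithItems(JdbcTemplate jdbcTemplate) {
        return jdbcTemplate.queryForList(
                "SELECT o.order_id, o.status, i.order_item_id "
                        + "FROM t_order o JOIN t_order_item i ON o.order_id = i.order_id");
    }
}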
Create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address are created on both Aurora instances.
application.properties (Spring Boot master profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties (sharding-jdbc profile) description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s database-splitting and routing rules are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, every record written is also written to the t_address tables of both ds_0 and ds_1.
Writes to the tables t_order and t_order_item are routed to the corresponding instance according to the database-sharding column and routing policy.
3. Read operation
Queries on order are routed to the corresponding Aurora instance according to the database-sharding routing rules.
Queries on address: since address is a broadcast table, one of the nodes holding address is selected at random and queried.
As shown in the figure below, when a join query is performed on order and order_item with the binding-table configuration, the physical shard is located precisely based on the binding relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1, together with the broadcast table t_address, are created on both Aurora instances.
application.properties (Spring Boot master profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties (sharding-jdbc profile) description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
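Putting the two inline expressions together, here is a standalone sketch (plain Java, not ShardingSphere code) of how a row is located under this configuration: user_id decides the database and order_id decides the table.
public class DatabaseAndTableRoutingDemo {

    // ds_$->{user_id % 2} picks the Aurora cluster (database shard), and
    // t_order_$->{order_id % 2} picks the physical table inside that cluster.
    static String routeOrder(int userId, long orderId) {
        return "ds_" + (userId % 2) + ".t_order_" + (orderId % 2);
    }

    public static void main(String[] args) {
        System.out.println(routeOrder(7, 1001L)); // ds_1.t_order_1
        System.out.println(routeOrder(4, 2002L)); // ds_0.t_order_0
    }
}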
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s sharding and routing rules are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, every record written is also written to the t_address tables of both ds_0 and ds_1.
Writes to the sharded tables t_order and t_order_item are routed to the table on the corresponding instance according to the database-sharding column and routing policy.
3. Read operation
The read operation is similar to the database-splitting verification described in Section 2.4.3.
The following figure shows the physical table of the created database instance.
application.properties (Spring Boot master profile) description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties (sharding-jdbc profile) description
The URL, username, and password of the databases need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s database-splitting and routing rules are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, every record written is also written to the t_address tables of both ds_0 and ds_1.
Writes to the sharded tables t_order and t_order_item are routed to the table on the corresponding instance according to the database-sharding column and routing policy.
3. Read operation
The join query on order and order_item with the binding-table configuration is shown below.
3. Conclusion
As an open source product focused on database enhancement, ShardingSphere is pretty good in terms of its community activity, product maturity, and documentation richness.
Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer such as a Proxy, so operational and maintenance complexity is reduced, and its latency is theoretically lower than a Proxy's because there is no extra hop. In addition, ShardingSphere-JDBC can support a variety of SQL-standard relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server.
However, because Sharding-JDBC is integrated with the application, it currently supports only Java and is strongly tied to the application. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, which keeps the changes relatively small when switching to other middleware.
In conclusion, Sharding-JDBC is a good choice if you run a Java-based system that has to interconnect with different relational databases and you don't want the overhead of introducing an intermediate layer.
Author
Sun Jinhua
A senior solution architect at AWS, Sun is responsible for cloud architecture design and consulting, providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture of e-commerce platforms for automotive companies. He also worked as a senior engineer at a globally leading communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture design, databases, middleware, IoT, etc.
Whenever we work with data of any sort, we need a clear picture of the kind of data that we are dealing with. For most of the data out there, which may contain thousands or even millions of entries with a wide variety of information, it’s really impossible to make sense of that data without any tool to present the data in a short and readable format.
Most of the time we need to go through the data, manipulate it, and visualize it to get insights. Well, there is a great library called pandas which provides us with that capability. The most frequent data manipulation operation is data filtering. It is very similar to the WHERE clause in SQL, or to the filter you may have used in MS Excel to select specific rows based on some conditions.
pandas is a powerful, flexible and open source data analysis/manipulation tool which is essentially a python package that provides speed, flexibility and expressive data structures crafted to work with “relational” or “labelled” data in an intuitive and easy manner. It is one of the most popular libraries to perform real-world data analysis in Python.
pandas is built on top of the NumPy library which aims to integrate well with the scientific computing environment and numerous other 3rd party libraries. It has two primary data structures namely Series (1D) and Dataframes(2D), which in most real-world use cases is the type of data that is being dealt with in many sectors of finance, scientific computing, engineering and statistics.
Installing pandas
!pip install pandas
Importing the Pandas library, reading our sample data file and assigning it to “df” DataFrame
import pandas as pd
df = pd.read_csv(r"C:\Users\rajam\Desktop\sample_data.csv")
Let’s check out our dataframe:
print(df.head())
Sample_data
Now that we have our DataFrame, we will be applying various methods to filter it.
We have a column named “Total_Sales” in our DataFrame, and we want to filter out all the sales values that are greater than 300.
#Filter a DataFrame for a single column value with a given condition
greater_than = df[df['Total_Sales'] > 300]
print(greater_than.head())
Sales with Greater than 300
Here we are filtering all the values whose “Total_Sales” value is greater than 300 and also where the “Units” is greater than 20. We will have to use the python operator “&” which performs a bitwise AND operation in order to display the corresponding result.
#Filter a DataFrame with multiple conditions
filter_sales_units = df[(df['Total_Sales'] > 300) & (df["Units"] > 20)]
print(filter_sales_units.head())
Filter on Sales and Units
We can also filter our data frame based on a date value. For example, here we are trying to get all the results after a particular date, in our case the date ’03/10/21′.
#Filter a DataFrame based on specific date
date_filter = df[df['Date'] > '03/10/21']
print(date_filter.head())
Filter on Date
Here we are getting all the results for a date filter that evaluates multiple dates.
#Filter a DataFrame with multiple conditions our Date value
date_filter2 = df[(df['Date'] >= '3/25/2021') & (df['Date'] <'8/17/2021')]
print(date_filter2.head())
Filter on a date with multiple conditions
Here we are selecting a column called ‘Region’ and getting all the rows that are from the region ‘East’, thus filtering based on a specific string value.
#Filter a DataFrame to a specific string
east = df[df['Region'] == 'East']
print(east.head())
Filter based on a specific string
Here we are selecting a column called ‘Region’ and getting all the rows that have the letter ‘E’ as the first character, i.e. at index 0, in the specified column.
#Filter a DataFrame to show rows starting with a specific letter
starting_with_e = df[df['Region'].str[0]== 'E']
print(starting_with_e.head())
Filter based on a specific letter
Here we are filtering the rows in the ‘Region’ column that contain the value ‘West’ as well as ‘East’ and displaying the combined result. Two methods can be used to perform this filtering: using the pipe | operator with the desired set of values, as in the syntax below, or using the .isin() function on the given column, in our case ‘Region’, and passing the desired set of values to it as a list.
#Filter a DataFrame rows based on list of values
#Method 1:
east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')]
print(east_west)
#Method 2:
east_west_1 = df[df['Region'].isin(['West', 'East'])]
print(east_west_1.head())
Output of Method -2
Here we want all the values in the ‘Region’ column that end with ‘th’ in their string value, and to display them. In other words, we want our results to show the values ‘North’ and ‘South’ and ignore ‘East’ and ‘West’. The method .str.contains() with the specified value along with the $ regex pattern can be used to get the desired results.
For more information please check the Regex Documentation
#Filtering the DataFrame rows using regular expressions(REGEX)
regex_df = df[df['Region'].str.contains('th$')]
print(regex_df.head())
Filter based on REGEX
Here, we’ll check for null and not null values in all the columns with the help of isnull() function.
#Filtering to check for null and not null values in all columns
df_null = df[df.isnull().any(axis=1)]
print(df_null.head())
Filter based on NULL or NOT null values
#Filtering to check for null values if any in the 'Units' column
units_df = df[df['Units'].isnull()]
print(units_df.head())
Finding null values on specific columns
#Filtering to check for not null values in the 'Units' column
df_not_null = df[df['Units'].notnull()]
print(df_not_null.head())
Finding not-null values on specific columns
Using query() with a condition
#Using query function in pandas
df_query = df.query('Total_Sales > 300')
print(df_query.head())
Filtering values with the query() function
Using query() with multiple conditions
#Using query function with multiple conditions in pandas
df_query_1 = df.query('Total_Sales > 300 and Units <18')
print(df_query_1.head())
Filtering multiple columns with the query() function
Using the loc and iloc functions
#Creating a sample DataFrame for illustrations
import numpy as np
data = pd.DataFrame({"col1" : np.arange(1, 20 ,2)}, index=[19, 18 ,8, 6, 0, 1, 2, 3, 4, 5])
print(data)
sample_data
Explanation: iloc considers rows based on the position of the given index, so it takes only integers as values.
For more information please check out Pandas Documentation
#Filter with iloc
data.iloc[0 : 5]
Filter using iloc
Explanation: loc considers rows based on index labels.
#Filter with loc
data.loc[0 : 5]
Filter using loc
You might be wondering why the loc function returns 6 rows instead of 5. This is because loc does not produce output based on index position; it considers only the labels of the index, which can also be alphabetic, and it includes both the starting point and the endpoint.
So, these were some of the most common filtering methods used in pandas. There are many other filtering methods that could be used, but these are some of the most common.
Link: https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe
#pandas #python #dataframe