Add a Servlet Filter in Spring Boot [Video]

Get your Servlet Filter in your Spring Boot application up and running in just over 3 minutes, so you have more time for the rest of your app.

In the video below, we take a closer look at how to add a Servlet filter in a Spring Boot application. Let’s get started!
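If you only need the gist without watching, here is a minimal sketch of the usual approach: implement the Servlet Filter interface and register it as a Spring bean. The class name and the logging inside it are illustrative, not taken from the video.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.springframework.stereotype.Component;

// Registering the filter as a @Component is enough for Spring Boot to apply it to all URLs.
// Use a FilterRegistrationBean instead if you need to restrict the URL pattern or set an order.
@Component
public class RequestLoggingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Runs before the request reaches the controller
        System.out.println("Incoming request from " + request.getRemoteAddr());
        chain.doFilter(request, response);
        // Runs after the controller has produced the response
    }
}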

#spring boot #servlet filter #add a servlet filter in spring boot #servlet


Enhance Amazon Aurora Read/Write Capability with ShardingSphere-JDBC

1. Introduction

Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown up to 5x the throughput of stock MySQL and 3x that of stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora delivers enhancements and innovations in both storage and compute, and in both horizontal and vertical scaling.

Aurora supports up to 128 TB of storage and scales the storage layer dynamically in 10 GB increments. On the compute side, Aurora supports scalable configurations with multiple read replicas: each cluster can have up to 15 additional Aurora replicas. In addition, Aurora provides a multi-primary architecture that supports up to four read/write nodes. Its Serverless architecture allows vertical scaling with typical scaling latency under a second, while Global Database lets a single database cluster span multiple AWS Regions with low latency.

Aurora already provides great scalability as user data volume grows. Can it handle even more data and support more concurrent access? You may consider sharding across multiple underlying Aurora clusters. To that end, this series of blog posts, including this one, provides a reference for choosing between the Proxy and JDBC approaches to sharding.

1.1 Why sharding is needed

AWS Aurora offers a single relational database, and its hosting architectures (primary/secondary, multi-primary, and global database) cover the scenarios described above. However, Aurora does not directly support sharding, which itself comes in several forms, such as vertical and horizontal sharding. If we want to further increase data capacity, a number of problems have to be solved: cross-node joins and associated queries, distributed transactions, SQL sorting and paging, function computation, global primary keys, capacity planning, and secondary capacity expansion after sharding.

1.2 Sharding methods

It is generally accepted that query time for a MySQL table is optimal when the table holds fewer than 10 million rows, because the height of its B-tree index then stays between 3 and 5. Data sharding reduces the amount of data in a single table and at the same time distributes the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.

1. Advantages of vertical sharding

  • Reduces the coupling between business systems and makes business boundaries clearer.
  • Enables hierarchical management, maintenance, monitoring, and expansion of data for different businesses, similar to microservice governance.
  • In high-concurrency scenarios, vertical sharding relieves, to some extent, the single-machine bottleneck of I/O, database connections, and hardware resources.

2. Disadvantages of vertical sharding

  • After the database is split, joins can only be implemented through interface aggregation, which increases development complexity.
  • After the database is split, distributed transactions are complex to handle.
  • A single table may still hold a large amount of data, in which case horizontal sharding is required.

3. Advantages of horizontal sharding

  • It removes the performance bottleneck of a single database holding a large amount of data under high concurrency, and it increases system stability and load capacity.
  • Business modules do not need to be split, since only minor modifications are required on the application side.

4. Disadvantages of horizontal sharding

  • Transaction consistency across shards is hard to guarantee.
  • Cross-database joins and associated queries perform poorly.
  • Re-sharding the data repeatedly is difficult, and maintenance is a heavy workload.

Based on the analysis above and on available studies of popular sharding middleware, we selected ShardingSphere, an open-source product, and combined it with Amazon Aurora to show how the two together support various forms of sharding and address the problems that sharding introduces.

ShardingSphere is an open-source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.

2. ShardingSphere introduction

The characteristics of Sharding-JDBC are:

  1. The client connects directly to the database; Sharding-JDBC is provided as a jar and requires no extra deployment or dependencies.
  2. It can be considered an enhanced JDBC driver that is fully compatible with JDBC and all kinds of ORM frameworks (see the sketch after this list).
  3. It is applicable to any JDBC-based ORM framework, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct use of JDBC.
  4. It supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, or HikariCP.
  5. It supports any database that implements the JDBC standard: MySQL, Oracle, SQL Server, PostgreSQL, or any other database accessible through JDBC.
  6. Sharding-JDBC adopts a decentralized architecture and suits high-performance, lightweight OLTP applications developed in Java.
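To make point 2 concrete, the sketch below uses nothing but the standard JDBC API against the single DataSource that Sharding-JDBC exposes (assumed here to be auto-configured by the Spring Boot starter). The class name is illustrative; the logic table t_order matches the examples later in this post.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderWriter {

    // Sharding-JDBC exposes one routed DataSource; application code never sees the shards.
    @Autowired
    private DataSource dataSource;

    public void insertOrder(long userId, String status) throws Exception {
        // Plain JDBC against the *logic* table t_order; Sharding-JDBC rewrites and routes
        // the statement to the proper physical table and data source.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t_order (user_id, status) VALUES (?, ?)")) {
            ps.setLong(1, userId);
            ps.setString(2, status);
            ps.executeUpdate();
        }
    }
}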

Hybrid Structure Integrating Sharding-JDBC and Applications

Sharding-JDBC’s core concepts

Data node: The smallest unit of a data shard, consisting of a data source name and a table name, such as ds_0.product_order_0.

Actual table: A physical table that actually exists in the sharded database, such as the product order tables product_order_0, product_order_1, and product_order_2.

Logic table: The logical name for a group of horizontally sharded tables with the same schema. For instance, the logic table for the order tables product_order_0, product_order_1, and product_order_2 is product_order.

Binding table: A primary table and a join table that share the same sharding rules. For example, product_order and product_order_item are both sharded by order_id, so they are binding tables of each other. Correlated multi-table queries between binding tables avoid Cartesian-product joins, so query efficiency increases greatly.

Broadcast table: A table that exists in every sharded data source; its schema and data must be consistent across all databases. It suits small tables that need to be joined with large sharded tables, such as dictionary and configuration tables.
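Putting these concepts together, here is a minimal sketch in the same property format used later in this post; the data source and table names reuse the product_order example above and are purely illustrative.

# logic table product_order maps to the data nodes ds_0.product_order_0 .. ds_1.product_order_2
spring.shardingsphere.sharding.tables.product_order.actual-data-nodes=ds_$->{0..1}.product_order_$->{0..2}
spring.shardingsphere.sharding.tables.product_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.product_order.table-strategy.inline.algorithm-expression=product_order_$->{order_id % 3}
# product_order and product_order_item share the sharding key, so declare them as binding tables
spring.shardingsphere.sharding.binding-tables[0]=product_order,product_order_item
# a small dictionary table kept in full on every data source
spring.shardingsphere.sharding.broadcast-tables=t_dict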

3. Testing ShardingSphere-JDBC

3.1 Example project

Download the example project code locally. To ensure the stability of the test code, we use the shardingsphere-example 4.0.0 release.

git clone https://github.com/apache/shardingsphere-example.git

Project description:

shardingsphere-example
  ├── example-core
  │   ├── config-utility
  │   ├── example-api
  │   ├── example-raw-jdbc
  │   ├── example-spring-jpa #spring+jpa integration-based entity,repository
  │   └── example-spring-mybatis
  ├── sharding-jdbc-example
  │   ├── sharding-example
  │   │   ├── sharding-raw-jdbc-example
  │   │   ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
  │   │   ├── sharding-spring-boot-mybatis-example
  │   │   ├── sharding-spring-namespace-jpa-example
  │   │   └── sharding-spring-namespace-mybatis-example
  │   ├── orchestration-example
  │   │   ├── orchestration-raw-jdbc-example
  │   │   ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
  │   │   └── orchestration-spring-namespace-example
  │   ├── transaction-example
  │   │   ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
  │   │   └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
  │   ├── other-feature-example
  │   │   ├── hint-example
  │   │   └── encrypt-example
  ├── sharding-proxy-example
  │   └── sharding-proxy-boot-mybatis-example
  └── src/resources
        └── manual_schema.sql  

Configuration file description:

application-master-slave.properties              #read/write splitting profile
application-sharding-databases-tables.properties #database plus table sharding profile
application-sharding-databases.properties        #database sharding only profile
application-sharding-master-slave.properties     #sharding plus read/write splitting profile
application-sharding-tables.properties           #table sharding only profile
application.properties                           #spring boot main profile

Code logic description:

The project is started by executing the entry class of the Spring Boot application.
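The entry class itself is not reproduced in this post; it is an ordinary Spring Boot starter class, roughly like the sketch below (package and class names are assumptions, not copied from the repository).

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Ordinary Spring Boot entry point; the active profile set in application.properties
// decides which sharding-jdbc configuration (master-slave, sharding-tables, ...) is loaded.
@SpringBootApplication
public class ShardingExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(ShardingExampleApplication.class, args);
    }
}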

The execution logic of the demo is shown in the processSuccess method further below: it inserts sample order data, prints it, deletes it, and prints the data again.

3.2 Verifying read/write splitting

As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora provides a reader/writer endpoint for reads and writes that need strong consistency, and a read-only endpoint for reads that do not. Aurora's replication lag is within single-digit milliseconds, far lower than MySQL's binlog-based logical replication, so a large share of the read load can be directed to the read-only endpoint.

With a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, which further improves the processing capability of the system. Read/write splitting improves the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides the primary/secondary architecture in a fully managed form, but upper-layer applications still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on whether a statement reads or writes and on the routing policy.

ShardingSphere-JDBC provides read/write splitting and is integrated into the application, so the complex wiring between the application and the database cluster is kept out of application code. Developers manage the shards through configuration files and combine them with ORM frameworks such as Spring JPA or MyBatis, completely separating this repeated routing logic from the code, which greatly improves maintainability and reduces the coupling between code and database.

3.2.1 Setting up the database environment

Create an Aurora MySQL cluster for read/write splitting. The instance class is db.r5.2xlarge, and the cluster has one writer node and two reader nodes.

3.2.2 Configuring Sharding-JDBC

application.properties (Spring Boot main profile) description:

You need to replace the placeholder values with your own environment configuration.

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties sharding-jdbc profile description:

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your master DB data source url
spring.shardingsphere.datasource.ds_master.username=Your master DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc mode for storing configuration information
spring.shardingsphere.mode.type=Memory
# enable the shardingsphere SQL log; the output shows the conversion from logical SQL to actual SQL
spring.shardingsphere.props.sql.show=true

 

3.2.3 Test and verification process description

  • Test environment data initialization: Spring JPA initialization automatically creates tables for testing.

  • Write data to the master instance

As shown in the ShardingSphere-SQL log below, the write SQL is executed on the ds_master data source.

  • Data query operations are performed on the slave libraries.

As shown in the ShardingSphere-SQL log below, the read SQL is executed on the ds_slave data sources in a round-robin fashion.

[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, 
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0 
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id 
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, 
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1 

Note: As shown in the code below, if a transaction contains both reads and writes, Sharding-JDBC routes both the read and the write operations to the master library. If the read and write requests are not in the same transaction, the read requests are distributed to the read nodes according to the routing policy.

@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}
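For contrast, here is a sketch (not part of the example project) of the same flow without @Transactional; in that case Sharding-JDBC sends the SELECTs issued by printData() to ds_slave_0 and ds_slave_1 in round-robin, while insertData() and deleteData() still go to ds_master.

public void processWithoutTransaction() throws SQLException {
    System.out.println("-------- Process Without Transaction Begin --------");
    List<Long> orderIds = insertData();
    printData();   // reads fan out to the slave data sources
    deleteData(orderIds);
    printData();
    System.out.println("-------- Process Without Transaction Finish -------");
}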

3.2.4 Verifying Aurora failover scenario

The Aurora database environment uses the configuration described in Section 3.2.1.

3.2.4.1 Verification process description

1. Start the Spring Boot project.

2. Perform a failover on the Aurora console.

3. Execute the REST API request.

4. Repeatedly execute POST (http://localhost:8088/save-user) until the API call fails to write to Aurora and then eventually succeeds again (see the sketch after this list).

5. The following figure shows the failover process observed from the code. It takes about 37 seconds from the last successful SQL write before the failover to the next successful SQL write. In other words, the application recovers from an Aurora failover automatically, and the recovery time is about 37 seconds.
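A simple way to drive step 4 is to call the endpoint in a loop and log each result. The sketch below uses the plain Java 11 HttpClient; the endpoint comes from step 4, while the request body is only a guess at the expected payload.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FailoverProbe {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8088/save-user"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"test\"}")) // illustrative payload
                .build();

        while (true) {
            long now = System.currentTimeMillis();
            try {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(now + " status=" + response.statusCode());
            } catch (Exception e) {
                // During the failover window the write fails; keep polling until it recovers.
                System.out.println(now + " write failed: " + e.getMessage());
            }
            Thread.sleep(1000);
        }
    }
}

The gap between the last successful call before the failover and the first successful call after it gives the recovery time reported above.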

3.3 Testing table sharding-only function

3.3.1 Configuring Sharding-JDBC

application.properties (Spring Boot main profile) description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties sharding-jdbc profile description

# configure the t_order table sharding policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# configure the t_order_item table sharding policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.3.2 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With the Sharding-JDBC routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table sharding rules. Because t_address is a broadcast table and there is only one instance here, a single t_address table is created. Creating t_order produces two physical tables, t_order_0 and t_order_1.
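The same rewriting can be seen with hand-written DDL against the logic table. The sketch below is not part of the example project; the column list is simplified and dataSource is assumed to be the routed Sharding-JDBC DataSource.

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class SchemaSketch {

    // DDL is issued against the *logic* table t_order; with the inline rule
    // t_order_$->{order_id % 2}, Sharding-JDBC rewrites the statement and creates
    // both physical tables, t_order_0 and t_order_1.
    static void createOrderTable(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE t_order "
                    + "(order_id BIGINT PRIMARY KEY, user_id INT NOT NULL, status VARCHAR(50))");
        }
    }
}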

2. Write operation

As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 or t_order_1 according to the table sharding rules.

Because t_order and t_order_item are binding tables, an order record and its associated order_item records are routed to physical tables with the same shard suffix.

3. Read operation

As shown in the figure below, when a join query is performed on order and order_item as binding tables, the physical shards are located precisely based on the binding relationship.

If the tables are not bound, the same join query has to traverse all shard combinations.
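As a concrete illustration, the sketch below (reusing the assumed routed dataSource from the earlier snippets) joins only the logic tables; because t_order and t_order_item are declared as binding tables, Sharding-JDBC pairs t_order_0 with t_order_item_0 and t_order_1 with t_order_item_1 instead of producing the full Cartesian set of shard combinations.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class BindingJoinSketch {

    // The join is written against the logic tables only; routing to the matching
    // physical table pairs is handled by Sharding-JDBC.
    static void printOrderItems(DataSource dataSource) throws Exception {
        String sql = "SELECT o.order_id, i.order_item_id, o.status "
                + "FROM t_order o JOIN t_order_item i ON o.order_id = i.order_id";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getLong("order_id") + " -> " + rs.getLong("order_item_id"));
            }
        }
    }
}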

3.4 Testing database sharding-only function

3.4.1 Setting up the database environment

Create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.

3.4.2 Configuring Sharding-JDBC

application.properties (Spring Boot main profile) description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.4.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's database sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the sharding rules. Because t_address is a broadcast table, its physical table is created on both ds_0 and ds_1. In total, the three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to t_order and t_order_item are routed to the table on the corresponding instance according to the database sharding column (user_id) and routing policy.

3. Read operation

A query on order is routed to the corresponding Aurora instance according to the database sharding rules.

A query on address, since address is a broadcast table, is served by one randomly selected node.

As shown in the figure below, when a join query is performed on order and order_item as binding tables, the physical shard is located precisely based on the binding relationship.

3.5 Testing database sharding and table sharding function

3.5.1 Setting up the database environment

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the broadcast table t_address will be created on both Aurora instances.

3.5.2 Configuring Sharding-JDBC

application.properties (Spring Boot main profile) description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.5.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the sharding rules. Because t_address is a broadcast table, it is created on both ds_0 and ds_1, and the sharded physical tables for t_order and t_order_item are likewise created on both instances.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to t_order and t_order_item are routed to the proper physical table on the corresponding instance according to the database and table sharding columns and routing policy.

3. Read operation

The read operation is similar to the database sharding verification described in Section 3.4.3.

3.6 Testing database sharding, table sharding and read/write splitting function

3.6.1 Setting up the database environment

The following figure shows the physical tables on the created database instances.

3.6.2 Configuring Sharding-JDBC

application.properties (Spring Boot main profile) description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties sharding-jdbc profile description

The URL, username, and password of the databases need to be changed to your own database parameters.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0.username= 
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0_slave_0.username= 
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0_slave_1.username= 
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1.username= 
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.6.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the sharding rules. Because t_address is a broadcast table, it is created on both ds_0 and ds_1, and the sharded physical tables for t_order and t_order_item are likewise created on both data sources.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to t_order and t_order_item are routed to the proper physical table on the corresponding master instance according to the sharding columns and routing policy.

3. Read operation

The join query operations on order and order_item under the binding table are shown below.

4. Conclusion

As an open-source product focused on database enhancement, ShardingSphere does well in terms of community activity, product maturity, and richness of documentation.

Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer such as a proxy, so operational and maintenance complexity is reduced, and its latency is theoretically lower than a proxy's because no intermediate hop is involved. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server.

However, because Sharding-JDBC is integrated into the application, it currently supports only Java and is strongly tied to the application programs. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, so switching to other middleware requires relatively small changes.

In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want the burden of introducing an intermediate layer.

Author

Sun Jinhua

A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related architecture design and consulting services. Before joining AWS, he ran his own business specializing in building e-commerce platforms and designing the overall architecture for the e-commerce platforms of automotive companies. He also worked as a senior engineer at a leading global communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for highly concurrent, highly available systems, microservice architecture, databases, middleware, IoT, and more.


Top 14 Ways to Filter Pandas Dataframes Easily

Whenever we work with data of any sort, we need a clear picture of the kind of data that we are dealing with. For most of the data out there, which may contain thousands or even millions of entries with a wide variety of information, it’s really impossible to make sense of that data without any tool to present the data in a short and readable format.

Most of the time we need to go through the data, manipulate it, and visualize it to get insights. There is a great library called pandas that gives us that capability. The most frequent data manipulation operation is data filtering; it is very similar to the WHERE clause in SQL, or to the filters you may have used in MS Excel to select specific rows based on some conditions.

pandas is a powerful, flexible and open source data analysis/manipulation tool which is essentially a python package that provides speed, flexibility and expressive data structures crafted to work with “relational” or “labelled” data in an intuitive and easy manner. It is one of the most popular libraries to perform real-world data analysis in Python.

pandas is built on top of the NumPy library and aims to integrate well with the scientific computing environment and numerous other third-party libraries. It has two primary data structures, Series (1D) and DataFrame (2D), which cover the kind of data dealt with in many sectors of finance, scientific computing, engineering, and statistics.

Let’s Start Filtering Data With the Help of Pandas Dataframe

Installing pandas

!pip install pandas

Importing the Pandas library, reading our sample data file and assigning it to “df” DataFrame

import pandas as pd
df = pd.read_csv(r"C:\Users\rajam\Desktop\sample_data.csv")

Let’s check out our dataframe:

print(df.head())

Sample_data

Now that we have our DataFrame, we will be applying various methods to filter it.

Method – 1: Filtering DataFrame by column value

We have a column named "Total_Sales" in our DataFrame, and we want to filter for all rows whose sales value is greater than 300.

#Filter a DataFrame for a single column value with a given condition
 
greater_than = df[df['Total_Sales'] > 300]
print(greater_than.head())

Sales greater than 300

Method – 2: Filtering DataFrame based on multiple conditions

Here we filter all rows whose "Total_Sales" value is greater than 300 and whose "Units" value is greater than 20. We have to use the Python operator "&", which performs a bitwise AND, to combine the two conditions.

#Filter a DataFrame with multiple conditions
 
filter_sales_units = df[(df['Total_Sales'] > 300) & (df["Units"] > 20)]
print(filter_sales_units.head())

Filter on Sales and Units

Method – 3: Filtering DataFrame based on Date value

We can also filter our data frame based on a date value. Here we try to get all rows after a particular date, in our case the date '03/10/21'.

#Filter a DataFrame based on specific date
 
date_filter = df[df['Date'] > '03/10/21']
print(date_filter.head())

Filter on Date

Method – 4: Filtering DataFrame based on Date value with multiple conditions

Here we get all rows for which the date falls between two given dates.

#Filter a DataFrame with multiple conditions our Date value
 
date_filter2 = df[(df['Date'] >= '3/25/2021') & (df['Date'] <'8/17/2021')]
print(date_filter2.head())

Filter on a date with multiple conditions

Method – 5: Filtering DataFrame based on a specific string

Here we are selecting a column called ‘Region’ and getting all the rows that are from the region ‘East’, thus filtering based on a specific string value.

#Filter a DataFrame to a specific string
 
east = df[df['Region'] == 'East']
print(east.head())

Filter based on a specific string

Method – 6: Filtering DataFrame based on a specific index value in a string

Here we select the column 'Region' and get all rows whose value has the letter 'E' as the first character, i.e., at index 0 of the string in that column.

#Filter a DataFrame to show rows starting with a specfic letter
 
starting_with_e = df[df['Region'].str[0]== 'E']
print(starting_with_e.head())

Filter based on a specific letter

Method – 7: Filtering DataFrame based on a list of values

Here we filter for rows where the column 'Region' contains either 'West' or 'East' and display the combined result. Two methods can be used: a pipe (|) operator combining the desired conditions, as in the syntax below, or the .isin() function, which filters a given column (here 'Region') against a list of desired values.

#Filter a DataFrame rows based on list of values
 
#Method 1:
east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')]
print(east_west)
 
#Method 2:
east_west_1 = df[df['Region'].isin(['West', 'East'])]
print(east_west_1.head())

Output of Method 2

Method – 8: Filtering DataFrame rows based on specific values using RegEx

Here we want all rows whose 'Region' value ends with 'th'. In other words, the results should show 'North' and 'South' and ignore 'East' and 'West'. The method .str.contains() with the pattern anchored by the $ regex metacharacter gets the desired result.

For more information please check the Regex Documentation

#Filtering the DataFrame rows using regular expressions(REGEX)
 
regex_df = df[df['Region'].str.contains('th$')]
print(regex_df.head())

Filter based on REGEX

Method – 9: Filtering DataFrame to check for null

Here, we'll check for null and not-null values in all the columns with the help of the isnull() function.

#Filtering to check for null and not null values in all columns
 
df_null = df[df.isnull().any(axis=1)]
print(df_null.head())

Filter based on null or not-null values

Method – 10: Filtering DataFrame to check for null values in a specific column.

#Filtering to check for null values if any in the 'Units' column
 
units_df = df[df['Units'].isnull()]
print(units_df.head())

Finding null values in specific columns

Method – 11: Filtering DataFrame to check for not null values in specific columns

#Filtering to check for not null values in the 'Units' column
 
df_not_null = df[df['Units'].notnull()]
print(df_not_null.head())

Finding not-null values in specific columns

Method – 12: Filtering DataFrame using query() with a condition

#Using query function in pandas
 
df_query = df.query('Total_Sales > 300')
print(df_query.head())

Filtering values with the query function

Method – 13: Filtering DataFrame using query() with multiple conditions

#Using query function with multiple conditions in pandas
 
df_query_1 = df.query('Total_Sales > 300 and Units <18')
print(df_query_1.head())

Filtering on multiple conditions with the query function

Method – 14: Filtering our DataFrame using the loc and iloc functions.

#Creating a sample DataFrame for illustrations
 
import numpy as np
data = pd.DataFrame({"col1" : np.arange(1, 20 ,2)}, index=[19, 18 ,8, 6, 0, 1, 2, 3, 4, 5])
print(data)

sample_data

Explanation: iloc selects rows by the integer position of the index, so it accepts only integers as values.

For more information please check out Pandas Documentation

#Filter with iloc
 
data.iloc[0 : 5]

Filter using iloc

Explanation: loc considers rows based on index labels

#Filter with loc
 
data.loc[0 : 5]

Filter using loc

You might be wondering why loc returns 6 rows instead of 5. This is because loc does not select by index position; it selects by index label (which can also be alphabetic) and includes both the start and end labels of the slice.

Conclusion

So, these were some of the most common filtering methods used in pandas. There are many other filtering methods that could be used, but these are some of the most common.

Link: https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe

#pandas #python #dataframe


Ici, nous filtrons les lignes dans la colonne « Région » qui contient les valeurs « Ouest » ainsi que « Est » et affichons le résultat combiné. Deux méthodes peuvent être utilisées pour effectuer ce filtrage à savoir l'utilisation d'un tube | opérateur avec l'ensemble de valeurs souhaité correspondant avec la syntaxe ci-dessous OU nous pouvons utiliser la fonction .isin() pour filtrer les valeurs dans une colonne donnée, qui dans notre cas est la 'Région', et fournir la liste de l'ensemble souhaité de valeurs à l'intérieur sous forme de liste.

#Filter a DataFrame rows based on list of values
 
#Method 1:
east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')]
print(east_west)
 
#Method 2:
east_west_1 = df[df['Region'].isin(['West', 'East'])]
print(east_west_1.head())

Image 9

Sortie de la méthode -2

Méthode - 8: Filtrage des lignes DataFrame en fonction de valeurs spécifiques à l'aide de RegEx

Ici, nous voulons toutes les valeurs de la colonne 'Region' , qui se termine par 'th' dans leur valeur de chaîne et les afficher. En d'autres termes, nous voulons que nos résultats montrent les valeurs de « Nord » et « Sud » et ignorent « Est » et « Ouest » . La méthode .str.contains() avec les valeurs spécifiées avec le modèle $ RegEx peut être utilisée pour obtenir les résultats souhaités.

Pour plus d'informations, veuillez consulter la documentation Regex

#Filtering the DataFrame rows using regular expressions(REGEX)
 
regex_df = df[df['Region'].str.contains('th$')]
print(regex_df.head())

Image 10

Filtre basé sur REGEX

Méthode - 9: Filtrage de DataFrame pour vérifier null

Ici, nous allons vérifier les valeurs nulles et non nulles dans toutes les colonnes à l'aide de la fonction isnull() .

#Filtering to check for null and not null values in all columns
 
df_null = df[df.isnull().any(axis=1)]
print(df_null.head())

Image 12

Filtre basé sur les valeurs NULL ou NOT null

Méthode - 10 : Filtrage de DataFrame pour vérifier les valeurs nulles dans une colonne spécifique.

#Filtering to check for null values if any in the 'Units' column
 
units_df = df[df['Units'].isnull()]
print(units_df.head())

Image 13

Recherche de valeurs nulles sur des colonnes spécifiques

Méthode - 11 : Filtrage de DataFrame pour vérifier les valeurs non nulles dans des colonnes spécifiques

#Filtering to check for not null values in the 'Units' column
 
df_not_null = df[df['Units'].notnull()]
print(df_not_null.head())

Image 14

Recherche de valeurs non nulles sur des colonnes spécifiques

Méthode - 12: Filtrage de DataFrame à l'aide query()d'une condition

#Using query function in pandas
 
df_query = df.query('Total_Sales > 300')
print(df_query.head())

Image 17

Filtrer les valeurs avec Queryla fonction

Méthode - 13: Filtrage de DataFrame à l'aide query()de plusieurs conditions

#Using query function with multiple conditions in pandas
 
df_query_1 = df.query('Total_Sales > 300 and Units <18')
print(df_query_1.head())

Image 18

Filtrer plusieurs colonnes avec QueryFunction

Méthode – 14 : Filtrage de notre DataFrame à l'aide des fonctions locet iloc.

#Creating a sample DataFrame for illustrations
 
import numpy as np
data = pd.DataFrame({"col1" : np.arange(1, 20 ,2)}, index=[19, 18 ,8, 6, 0, 1, 2, 3, 4, 5])
print(data)

Image 19

sample_data

Explication : iloc considère les lignes en fonction de la position de l'index donné, de sorte qu'il ne prend que des entiers comme valeurs.

Pour plus d'informations, veuillez consulter la documentation de Pandas

#Filter with iloc
 
data.iloc[0 : 5]

Image 20

Filtrer en utilisantiloc

Explication : loc considère les lignes en fonction des étiquettes d'index

#Filter with loc
 
data.loc[0 : 5]

Image 21

Filtrer en utilisantloc

Vous vous demandez peut-être pourquoi la locfonction renvoie 6 lignes au lieu de 5 lignes. En effet , ne produit pas de sortie basée sur la position de l'index. Il ne prend en compte que les étiquettes d'index qui peuvent également être un alphabet et incluent à la fois le point de départ et le point final. loc 

Conclusion

Donc, ce sont quelques-unes des méthodes de filtrage les plus couramment utilisées dans les pandas. Il existe de nombreuses autres méthodes de filtrage qui pourraient être utilisées, mais celles-ci sont parmi les plus courantes.

Lien : https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe

#pandas #python #datafame
