GraphQL Server with Netflix DGS and Spring Boot

Here we start with the basics of Netflix DGS: how it can be used to write GraphQL components and to implement Query and Mutation, with an example that persists and reads data from an H2 database.

#spring-boot #netflix #graphql 



Enhance Amazon Aurora Read/Write Capability with ShardingSphere-JDBC

1. Introduction

Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown an increase in throughput of up to 5X over stock MySQL and 3X over stock PostgreSQL respectively on similar hardware. In terms of scalability, Aurora delivers enhancements and innovations in both storage and compute, and in both horizontal and vertical scaling.

Aurora supports up to 128 TB of storage capacity and dynamically scales the storage layer in 10 GB increments. On the compute side, Aurora supports scalable configurations with multiple read replicas: each region can have up to 15 additional Aurora replicas. In addition, Aurora provides a multi-primary architecture that supports up to four read/write nodes. Its Serverless option allows vertical scaling and reduces typical scaling latency to under a second, while Global Database enables a single database cluster to span multiple AWS Regions with low latency.

Aurora already provides great scalability as user data volume grows. But what if you need to handle even more data and support even more concurrent access? You may consider sharding across multiple underlying Aurora clusters. To this end, this series of blogs, including this one, provides a reference for choosing between Proxy and JDBC for sharding.

1.1 Why sharding is needed

AWS Aurora offers a single relational database. Its hosting architectures, such as primary/secondary, multi-primary, and global database, satisfy the architectural scenarios above. However, Aurora does not provide direct support for sharding, which comes in vertical and horizontal forms. If we want to further increase data capacity, a number of problems have to be solved, such as cross-node joins and correlated queries, distributed transactions, SQL sorting and pagination, function calculation, global primary keys, capacity planning, and secondary capacity expansion after sharding.

1.2 Sharding methods

It is generally accepted that when a MySQL table holds fewer than 10 million rows, query time is optimal, because at that point the height of its B-tree index is between 3 and 5. Data sharding reduces the amount of data in a single table and, at the same time, distributes the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.

1. Advantages of vertical sharding

  • Decouples business systems and makes business boundaries clearer.
  • Enables hierarchical management, maintenance, monitoring, and scaling of different businesses' data, similar to microservice governance.
  • In high-concurrency scenarios, vertical sharding removes, to some extent, the bottlenecks of I/O, database connections, and hardware resources on a single machine.

2. Disadvantages of vertical sharding

  • After the database is split, joins can only be implemented through interface aggregation, which increases development complexity.
  • After the database is split, it is complex to process distributed transactions.
  • A single table may still hold a large amount of data, so horizontal sharding is still required.

3. Advantages of horizontal sharding

  • It avoids the performance bottlenecks caused by a large amount of data and high concurrency on a single database, and increases system stability and load capacity.
  • Business modules do not need to be split, and only minor modifications are required on the application side.

4. Disadvantages of horizontal sharding

  • Transaction consistency across shards is hard to guarantee.
  • Cross-database joins and correlated queries perform poorly.
  • Scaling the data out multiple times is difficult, and maintenance is a heavy workload.

Based on the analysis above and the available studies on popular sharding middleware, we selected ShardingSphere, an open-source product, and combined it with Amazon Aurora to show how the two together support various forms of sharding and how to solve the problems that sharding brings.

ShardingSphere is an open-source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.

2. ShardingSphere introduction

The characteristics of Sharding-JDBC are:

  1. With the client connecting directly to the database, it provides its service as a jar and requires no extra deployment or dependencies.
  2. It can be considered an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.
  3. Applicable to any ORM framework based on JDBC, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct use of JDBC.
  4. Supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, or HikariCP.
  5. Supports any JDBC-standard database: MySQL, Oracle, SQL Server, PostgreSQL, and any database accessible via JDBC.
  6. Sharding-JDBC adopts a decentralized architecture, suitable for high-performance, lightweight OLTP applications developed in Java.

Hybrid Structure Integrating Sharding-JDBC and Applications

Sharding-JDBC’s core concepts

Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.

Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.

Logic table: The logical name for a group of horizontally sharded tables with the same schema. For instance, the logic table of the order tables product_order_0, product_order_1, and product_order_2 is product_order.

Binding table: The primary table and its join table that share the same sharding rules. For example, the product_order and product_order_item tables are both sharded by order_id, so they are binding tables of each other. A Cartesian product does not appear in multi-table correlated queries on binding tables, so query efficiency increases greatly.

Broadcast table: A table that exists in every sharded data source, with identical schema and data in each database. It is suitable for small tables that need to be joined with large sharded tables, such as dictionary and configuration tables.

3. Testing ShardingSphere-JDBC

3.1 Example project

Download the example project code locally. To ensure the stability of the test code, we use the shardingsphere-example 4.0.0 version.

git clone https://github.com/apache/shardingsphere-example.git

Project description:

shardingsphere-example
  ├── example-core
  │   ├── config-utility
  │   ├── example-api
  │   ├── example-raw-jdbc
  │   ├── example-spring-jpa #spring+jpa integration-based entity,repository
  │   └── example-spring-mybatis
  ├── sharding-jdbc-example
  │   ├── sharding-example
  │   │   ├── sharding-raw-jdbc-example
  │   │   ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
  │   │   ├── sharding-spring-boot-mybatis-example
  │   │   ├── sharding-spring-namespace-jpa-example
  │   │   └── sharding-spring-namespace-mybatis-example
  │   ├── orchestration-example
  │   │   ├── orchestration-raw-jdbc-example
  │   │   ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
  │   │   └── orchestration-spring-namespace-example
  │   ├── transaction-example
  │   │   ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
  │   │   └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
  │   ├── other-feature-example
  │   │   ├── hint-example
  │   │   └── encrypt-example
  ├── sharding-proxy-example
  │   └── sharding-proxy-boot-mybatis-example
  └── src/resources
        └── manual_schema.sql  

Configuration file description:

application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties       #library split profile only
application-sharding-master-slave.properties    #sharding and read/write splitting profile
application-sharding-tables.properties          #table split profile
application.properties                         #spring boot profile

Code logic description:

The following is the entry class of the Spring Boot application. Execute it to run the project.

The execution logic of the demo is as follows:
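In short, the demo inserts test orders, prints them, deletes them, and prints them again (the processSuccess() method shown later in this section). A minimal sketch of what the entry class might look like is given below; the package and class names are illustrative assumptions, not the exact code from shardingsphere-example:

package com.example.sharding;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Minimal sketch of the entry class (names are illustrative).
// Running main() boots Spring, which builds the ShardingSphere data source from the active
// profile (e.g. application-master-slave.properties), wires it into JPA, and then executes
// the demo service whose processSuccess() method is shown later in this article.
@SpringBootApplication
public class ShardingJdbcExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(ShardingJdbcExampleApplication.class, args);
    }
}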

3.2 Verifying read/write splitting

As the business grows, write and read requests can be split across different database nodes to effectively improve the processing capability of the entire database cluster. Aurora uses a writer endpoint for writes and strongly consistent reads, and a read-only endpoint for reads that do not require strong consistency. Aurora's replication latency is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a large portion of the read load can be directed to the read-only endpoint.

With a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides the primary/secondary architecture in a fully managed form, but applications on the upper layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of the SQL statement and certain routing policies.

ShardingSphere-JDBC provides read/write splitting and is integrated into the application, so the complex wiring between the application and the database cluster can be moved out of the application code. Developers manage sharding through configuration files and combine it with ORM frameworks such as Spring Data JPA and MyBatis, completely separating this duplicated logic from the code, which greatly improves maintainability and reduces the coupling between code and database.
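As a rough illustration (the class below is an assumed sketch, not code from the example project), the data-access code contains no routing logic at all; the DataSource injected into it is simply the ShardingSphere data source that Spring Boot builds from the profile configured below:

import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Hypothetical DAO: all routing decisions are made inside the ShardingSphere data source.
public class OrderDao {

    private final JdbcTemplate jdbcTemplate;

    public OrderDao(DataSource shardingDataSource) {
        this.jdbcTemplate = new JdbcTemplate(shardingDataSource);
    }

    // Writes are always routed to ds_master.
    public void addOrder(long orderId, int userId, String status) {
        jdbcTemplate.update(
                "INSERT INTO t_order (order_id, user_id, status) VALUES (?, ?, ?)",
                orderId, userId, status);
    }

    // Outside a transaction, reads are routed to ds_slave_0 / ds_slave_1 in round-robin order.
    public List<Map<String, Object>> listOrders() {
        return jdbcTemplate.queryForList("SELECT * FROM t_order");
    }
}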

3.2.1 Setting up the database environment

Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.

3.2.2 Configuring Sharding-JDBC

application.properties spring boot Master profile description:

You need to replace the placeholder values (the data source URLs, usernames, and passwords below) with your own environment configuration.

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties sharding-jdbc profile description:

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source-master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# enable the shardingsphere log, so you can see the conversion from logical SQL to actual SQL in the output
spring.shardingsphere.props.sql.show=true

 

3.2.3 Test and verification process description

  • Test environment data initialization: Spring JPA initialization automatically creates tables for testing.

  • Write data to the master instance

As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.

  • Data query operations are performed on the slave library.

As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in a round-robin fashion.

[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, 
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0 
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id 
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, 
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1 

Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.

@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}

3.2.4 Verifying Aurora failover scenario

The Aurora database environment adopts the configuration described in Section 3.2.1.

3.2.4.1 Verification process description

1. Start the Spring Boot project.

2. Perform a failover on Aurora's console.

3. Execute the REST API request.

4. Repeatedly execute POST http://localhost:8088/save-user until calls to the API first fail to write to Aurora and then eventually succeed again (see the sketch after this list).

5. The following figure shows the failover process as observed from the code. About 37 seconds elapse between the last successful SQL write before the failover and the next successful SQL write. That is, the application recovers from the Aurora failover automatically, and the recovery time is about 37 seconds.
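A rough sketch of this verification step is shown below. The endpoint comes from the description above; the probe itself is an assumed test harness, not part of the example project. It keeps POSTing until writes fail and then succeed again, and reports the gap, which corresponds to the roughly 37-second recovery observed here.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

// Hypothetical failover probe: repeatedly calls the save-user endpoint and reports how long
// writes kept failing, i.e. how long the application needed to recover from the Aurora failover.
public class FailoverProbe {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8088/save-user"))
                .POST(HttpRequest.BodyPublishers.noBody()) // the real request body, if any, is omitted here
                .timeout(Duration.ofSeconds(5))
                .build();

        Instant firstFailure = null;
        while (true) {
            boolean ok;
            try {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                ok = response.statusCode() == 200;
            } catch (Exception e) {
                ok = false; // connection refused or timeout while the writer is failing over
            }
            if (!ok && firstFailure == null) {
                firstFailure = Instant.now();
            } else if (ok && firstFailure != null) {
                System.out.println("Writes recovered after " + Duration.between(firstFailure, Instant.now()));
                return;
            }
            Thread.sleep(500);
        }
    }
}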

3.3 Testing table sharding-only function

3.3.1 Configuring Sharding-JDBC

application.properties spring boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties sharding-jdbc profile description

# configure the t_order table splitting rule (these three lines are assumed here, mirroring the
# t_order_item rule below, since the test in 3.3.2 expects t_order_0 and t_order_1)
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.3.2 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With the Sharding-JDBC routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table and there is only one instance here, a single t_address table is created. When t_order is created, two physical tables, t_order_0 and t_order_1, are created.
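For reference, here is a minimal sketch of what the entity behind the logical table t_order might look like; the column names follow the SQL logs shown earlier, while the class name and annotations are assumptions rather than the exact example code:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Illustrative entity mapped to the logical table t_order; at runtime ShardingSphere rewrites
// statements against it to the physical tables t_order_0 / t_order_1.
@Entity
@Table(name = "t_order")
public class OrderEntity {

    @Id
    @Column(name = "order_id")
    private Long orderId;      // generated by the SNOWFLAKE key generator configured above

    @Column(name = "user_id")
    private Integer userId;

    @Column(name = "address_id")
    private Long addressId;

    @Column(name = "status")
    private String status;

    // getters and setters omitted for brevity
}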

2. Write operation

As shown in the figure below, Logic SQL inserts a record into t_order. When Sharding-JDBC is executed, data will be distributed to t_order_0 and t_order_1 according to the table splitting rules.

When t_order and t_order_item are bound, the associated order and order_item records are placed on physical tables with the same suffix.
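A rough sketch of that write path (illustrative code, not the exact example project code): because t_order and t_order_item are bound tables sharded by order_id, writing an order and its item with the same order_id lands both rows in physical tables with the same suffix.

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Illustrative write path: both statements carry the same order_id, so the inline rule
// order_id % 2 routes the order row and the item row to tables with the same suffix,
// e.g. t_order_1 and t_order_item_1.
public class OrderWriter {

    private final JdbcTemplate jdbcTemplate;

    public OrderWriter(DataSource shardingDataSource) {
        this.jdbcTemplate = new JdbcTemplate(shardingDataSource);
    }

    public void saveOrderWithItem(long orderId, long orderItemId, int userId, String status) {
        jdbcTemplate.update(
                "INSERT INTO t_order (order_id, user_id, status) VALUES (?, ?, ?)",
                orderId, userId, status);
        jdbcTemplate.update(
                "INSERT INTO t_order_item (order_item_id, order_id, user_id, status) VALUES (?, ?, ?, ?)",
                orderItemId, orderId, userId, status);
    }
}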

3. Read operation

As shown in the figure below, a join query on order and order_item is performed under the binding table configuration, and the physical shards are precisely located based on the binding relationship.

Without the binding relationship, the same join query on order and order_item would traverse all shard combinations.
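A sketch of such a bound-table join is shown below (illustrative code; the table and column names follow the logs above). Because both tables share the sharding column order_id and are configured as binding tables, the one logical join is rewritten only into joins between shards with matching suffixes, never into a Cartesian product of all physical tables.

import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Illustrative read path: the logical join on t_order and t_order_item is rewritten to
// t_order_0 JOIN t_order_item_0 and t_order_1 JOIN t_order_item_1 only.
public class OrderReader {

    private final JdbcTemplate jdbcTemplate;

    public OrderReader(DataSource shardingDataSource) {
        this.jdbcTemplate = new JdbcTemplate(shardingDataSource);
    }

    public List<Map<String, Object>> listOrdersWithItems() {
        return jdbcTemplate.queryForList(
                "SELECT o.order_id, o.user_id, o.status, i.order_item_id "
                + "FROM t_order o JOIN t_order_item i ON o.order_id = i.order_id");
    }
}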

3.4 Testing database sharding-only function

3.4.1 Setting up the database environment

Create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.

3.4.2 Configuring Sharding-JDBC

application.properties springboot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.4.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's database splitting and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables. Since t_address is a broadcast table, it is created on both ds_0 and ds_1; in total, the three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to t_order and t_order_item are routed to the table on the corresponding instance according to the database sharding column (user_id) and routing policy.
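As a rough sketch (illustrative code, not from the example project): with the default database strategy ds_$->{user_id % 2}, the user_id value alone decides which Aurora instance receives the row.

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Illustrative write: user_id is the database sharding column, so a row with user_id = 7
// goes to ds_1 and a row with user_id = 8 goes to ds_0 (user_id % 2).
public class ShardedOrderWriter {

    private final JdbcTemplate jdbcTemplate;

    public ShardedOrderWriter(DataSource shardingDataSource) {
        this.jdbcTemplate = new JdbcTemplate(shardingDataSource);
    }

    public void addOrder(long orderId, int userId, String status) {
        jdbcTemplate.update(
                "INSERT INTO t_order (order_id, user_id, status) VALUES (?, ?, ?)",
                orderId, userId, status);
    }
}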

3. Read operation

Order queries are routed to the corresponding Aurora instance according to the database sharding rules.

Address queries: since t_address is a broadcast table, one of the nodes holding it is selected at random and queried.

As shown in the figure below, a join query on order and order_item is performed under the binding table configuration, and the physical shards are precisely located based on the binding relationship.

3.5 Testing database sharding and table sharding function

3.5.1 Setting up the database environment

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the broadcast table t_address will be created on both Aurora instances.

3.5.2 Configuring Sharding-JDBC

application.properties springboot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=jdbc:mysql://<your ds_0 endpoint>:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.5.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1. The t_order and t_order_item tables are created as the sharded physical tables t_order_0/t_order_1 and t_order_item_0/t_order_item_1 on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to t_order and t_order_item are routed to the physical table on the corresponding instance according to the sharding columns (user_id for the database, order_id for the table) and routing policy.

3. Read operation

The read operation is similar to the database sharding verification described in Section 3.4.3.

3.6 Testing database sharding, table sharding and read/write splitting function

3.6.1 Setting up the database environment

The following figure shows the physical table of the created database instance.

3.6.2 Configuring Sharding-JDBC

application.properties spring boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties sharding-jdbc profile description

The URL, username, and password of each data source need to be changed to your own database parameters.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1.username= 
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.6.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's database splitting and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1 (that is, on both master instances). The t_order and t_order_item tables are created as sharded physical tables on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to t_order and t_order_item are routed to the physical table on the corresponding master instance according to the sharding columns and routing policy.

3. Read operation

The join query operations on order and order_item under the binding table are shown below.

4. Conclusion

As an open-source product focused on database enhancement, ShardingSphere is pretty good in terms of community activity, product maturity, and documentation richness.

Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer such as a proxy, so operation and maintenance complexity is reduced, and its latency is theoretically lower than a proxy's because there is no intermediate layer. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases such as MySQL, PostgreSQL, Oracle, and SQL Server.

However, because Sharding-JDBC is integrated into the application, it only supports the Java language for now and is strongly dependent on the application. Nevertheless, Sharding-JDBC keeps all sharding configuration separate from the application code, so switching to other middleware later requires relatively small changes.

In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want to bother with introducing an intermediate layer.

Author

Sun Jinhua

A senior solutions architect at AWS, Sun is responsible for cloud architecture design and consulting, providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for the e-commerce platforms of automotive companies. He also worked at a leading global communications equipment company as a senior engineer, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture design, databases, middleware, IoT, and more.


How to Configure the Interceptor With Spring Boot Application

In the video in this article, we take a closer look at how to configure the interceptor with the Spring Boot application! Let’s take a look!

#spring boot #spring boot tutorial #interceptor #interceptors #spring boot interceptor #spring boot tutorial for beginners

Veronica Roob

LaravelS: An Out-Of-The-Box Adapter Between Swoole and Laravel/Lumen

 _                               _  _____ 
| |                             | |/ ____|
| |     __ _ _ __ __ ___   _____| | (___  
| |    / _` | '__/ _` \ \ / / _ \ |\___ \ 
| |___| (_| | | | (_| |\ V /  __/ |____) |
|______\__,_|_|  \__,_| \_/ \___|_|_____/ 
                                           

🚀 LaravelS is an out-of-the-box adapter between Swoole and Laravel/Lumen.

Please Watch this repository to get the latest updates.

Chinese documentation (中文文档)

Features

Built-in Http/WebSocket server

Multi-port mixed protocol

Custom process

Memory resident

Asynchronous event listening

Asynchronous task queue

Millisecond cron job

Common Components

Gracefully reload

Automatically reload after modifying code

Supports both Laravel and Lumen, with good compatibility

Simple & Out of the box

Benchmark

Which is the fastest web framework?

TechEmpower Framework Benchmarks

Requirements

Dependency      Requirement
PHP             >= 5.5.9 (PHP 7+ recommended)
Swoole          >= 1.7.19 (4.5.0+ recommended; PHP 5 no longer supported since Swoole 2.0.12)
Laravel/Lumen   >= 5.1 (8.0+ recommended)

Install

1.Require package via Composer(packagist).

composer require "hhxsv5/laravel-s:~3.7.0" -vvv
# Make sure that your composer.lock file is under the VCS

2.Register service provider(pick one of two).

Laravel: in the config/app.php file. Laravel 5.5+ supports package discovery automatically; if you are on 5.5+, you can skip this step.

'providers' => [
    //...
    Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class,
],

Lumen: in bootstrap/app.php file

$app->register(Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class);

3.Publish configuration and binaries.

After upgrading LaravelS, you need to republish; click here to see the change notes of each version.

php artisan laravels publish
# Configuration: config/laravels.php
# Binary: bin/laravels bin/fswatch bin/inotify

4.Change config/laravels.php: listen_ip, listen_port, refer Settings.

5.Performance tuning

Adjust kernel parameters

Number of Workers: LaravelS uses Swoole's synchronous IO mode. The larger the worker_num setting, the better the concurrency performance, but it also causes more memory usage and process-switching overhead. If one request takes 100ms, then to provide 1000 QPS of concurrency at least 100 worker processes need to be configured: worker_num = 1000 QPS / (1s / 100ms per request) = 100. Incremental load testing is still needed to find the best worker_num.

Number of Task Workers

Run

Please read the notices carefully before running: Important notices (IMPORTANT).

  • Commands: php bin/laravels {start|stop|restart|reload|info|help}.
Command   Description
start     Start LaravelS; list the processes with "ps -ef|grep laravels"
stop      Stop LaravelS, and trigger the onStop method of custom processes
restart   Restart LaravelS: stop gracefully before starting; the service is unavailable until startup is complete
reload    Reload all Task/Worker/Timer processes which contain your business code, and trigger the onReload method of custom processes; CANNOT reload the Master/Manager processes. After modifying config/laravels.php, you have to call restart
info      Display component version information
help      Display help information
  • Boot options for the commands start and restart.
Option           Description
-d|--daemonize   Run as a daemon; this option overrides the swoole.daemonize setting in laravels.php
-e|--env         The environment the command should run under, e.g. --env=testing uses the configuration file .env.testing first; this feature requires Laravel 5.2+
-i|--ignore      Ignore checking the PID file of the Master process
-x|--x-version   The version (branch) of the current project, stored in $_ENV/$_SERVER; access via $_ENV['X_VERSION'], $_SERVER['X_VERSION'] or $request->server->get('X_VERSION')
  • Runtime files: start will automatically execute php artisan laravels config and generate these files, developers generally don't need to pay attention to them, it's recommended to add them to .gitignore.
File                                    Description
storage/laravels.conf                   LaravelS's runtime configuration file
storage/laravels.pid                    PID file of the Master process
storage/laravels-timer-process.pid      PID file of the Timer process
storage/laravels-custom-processes.pid   PID file of all custom processes

Deploy

It is recommended to supervise the main process through Supervisord, provided that the -d option is not used and swoole.daemonize is set to false.

[program:laravel-s-test]
directory=/var/www/laravel-s-test
command=/usr/local/bin/php bin/laravels start -i
numprocs=1
autostart=true
autorestart=true
startretries=3
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log

Cooperate with Nginx (Recommended)

Demo.

gzip on;
gzip_min_length 1024;
gzip_comp_level 2;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml;
gzip_vary on;
gzip_disable "msie6";
upstream swoole {
    # Connect IP:Port
    server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
    # Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
    #server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
    #server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
    #server 192.168.1.2:5200 backup;
    keepalive 16;
}
server {
    listen 80;
    # Don't forget to bind the host
    server_name laravels.com;
    root /yourpath/laravel-s-test/public;
    access_log /yourpath/log/nginx/$server_name.access.log  main;
    autoindex off;
    index index.html index.htm;
    # Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
    location / {
        try_files $uri @laravels;
    }
    # Response 404 directly when request the PHP file, to avoid exposing public/*.php
    #location ~* \.php$ {
    #    return 404;
    #}
    location @laravels {
        # proxy_connect_timeout 60s;
        # proxy_send_timeout 60s;
        # proxy_read_timeout 120s;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        # "swoole" is the upstream
        proxy_pass http://swoole;
    }
}

Cooperate with Apache

LoadModule proxy_module /yourpath/modules/mod_proxy.so
LoadModule proxy_balancer_module /yourpath/modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module /yourpath/modules/mod_lbmethod_byrequests.so
LoadModule proxy_http_module /yourpath/modules/mod_proxy_http.so
LoadModule slotmem_shm_module /yourpath/modules/mod_slotmem_shm.so
LoadModule rewrite_module /yourpath/modules/mod_rewrite.so
LoadModule remoteip_module /yourpath/modules/mod_remoteip.so
LoadModule deflate_module /yourpath/modules/mod_deflate.so

<IfModule deflate_module>
    SetOutputFilter DEFLATE
    DeflateCompressionLevel 2
    AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml
</IfModule>

<VirtualHost *:80>
    # Don't forget to bind the host
    ServerName www.laravels.com
    ServerAdmin hhxsv5@sina.com

    DocumentRoot /yourpath/laravel-s-test/public
    DirectoryIndex index.html index.htm
    <Directory "/">
        AllowOverride None
        Require all granted
    </Directory>

    RemoteIPHeader X-Forwarded-For

    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy balancer://laravels>  
        BalancerMember http://192.168.1.1:5200 loadfactor=7
        #BalancerMember http://192.168.1.2:5200 loadfactor=3
        #BalancerMember http://192.168.1.3:5200 loadfactor=1 status=+H
        ProxySet lbmethod=byrequests
    </Proxy>
    #ProxyPass / balancer://laravels/
    #ProxyPassReverse / balancer://laravels/

    # Apache handles the static resources, LaravelS handles the dynamic resource.
    RewriteEngine On
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
    RewriteRule ^/(.*)$ balancer://laravels%{REQUEST_URI} [P,L]

    ErrorLog ${APACHE_LOG_DIR}/www.laravels.com.error.log
    CustomLog ${APACHE_LOG_DIR}/www.laravels.com.access.log combined
</VirtualHost>

Enable WebSocket server

The listening address of the WebSocket server is the same as that of the HTTP server.

1.Create a WebSocket handler class that implements the interface WebSocketHandlerInterface. The instance is automatically created at startup; you do not need to create it manually.

namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
/**
 * @see https://www.swoole.co.uk/docs/modules/swoole-websocket-server
 */
class WebSocketService implements WebSocketHandlerInterface
{
    // Declare constructor without parameters
    public function __construct()
    {
    }
    // public function onHandShake(Request $request, Response $response)
    // {
           // Custom handshake: https://www.swoole.co.uk/docs/modules/swoole-websocket-server-on-handshake
           // The onOpen event will be triggered automatically after a successful handshake
    // }
    public function onOpen(Server $server, Request $request)
    {
        // Before the onOpen event is triggered, the HTTP request to establish the WebSocket has passed the Laravel route,
        // so Laravel's Request, Auth information are readable, Session is readable and writable, but only in the onOpen event.
        // \Log::info('New WebSocket connection', [$request->fd, request()->all(), session()->getId(), session('xxx'), session(['yyy' => time()])]);
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
        $server->push($request->fd, 'Welcome to LaravelS');
    }
    public function onMessage(Server $server, Frame $frame)
    {
        // \Log::info('Received message', [$frame->fd, $frame->data, $frame->opcode, $frame->finish]);
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
        $server->push($frame->fd, date('Y-m-d H:i:s'));
    }
    public function onClose(Server $server, $fd, $reactorId)
    {
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
    }
}

2.Modify config/laravels.php.

// ...
'websocket'      => [
    'enable'  => true, // Note: set enable to true
    'handler' => \App\Services\WebSocketService::class,
],
'swoole'         => [
    //...
    // Must set dispatch_mode in (2, 4, 5), see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
    'dispatch_mode' => 2,
    //...
],
// ...

3.Optionally, use SwooleTable to bind FD & UserId (see the Swoole Table Demo). You can also use other global storage services, like Redis/Memcached/MySQL, but be careful: FDs may conflict between multiple Swoole servers.

4.Cooperate with Nginx (Recommended)

Refer WebSocket Proxy

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
upstream swoole {
    # Connect IP:Port
    server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
    # Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
    #server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
    #server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
    #server 192.168.1.2:5200 backup;
    keepalive 16;
}
server {
    listen 80;
    # Don't forget to bind the host
    server_name laravels.com;
    root /yourpath/laravel-s-test/public;
    access_log /yourpath/log/nginx/$server_name.access.log  main;
    autoindex off;
    index index.html index.htm;
    # Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
    location / {
        try_files $uri @laravels;
    }
    # Response 404 directly when request the PHP file, to avoid exposing public/*.php
    #location ~* \.php$ {
    #    return 404;
    #}
    # Http and WebSocket are concomitant, Nginx identifies them by "location"
    # !!! The location of WebSocket is "/ws"
    # Javascript: var ws = new WebSocket("ws://laravels.com/ws");
    location =/ws {
        # proxy_connect_timeout 60s;
        # proxy_send_timeout 60s;
        # proxy_read_timeout: Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds; At the same time, this close behavior is also affected by heartbeat setting of Swoole.
        # proxy_read_timeout 60s;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://swoole;
    }
    location @laravels {
        # proxy_connect_timeout 60s;
        # proxy_send_timeout 60s;
        # proxy_read_timeout 60s;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        proxy_pass http://swoole;
    }
}

5.Heartbeat setting

Heartbeat setting of Swoole

// config/laravels.php
'swoole' => [
    //...
    // All connections are traversed every 60 seconds. If a connection does not send any data to the server within 600 seconds, the connection will be forced to close.
    'heartbeat_idle_time'      => 600,
    'heartbeat_check_interval' => 60,
    //...
],

Proxy read timeout of Nginx

# Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds
proxy_read_timeout 60s;

6.Push data in controller

namespace App\Http\Controllers;
class TestController extends Controller
{
    public function push()
    {
        $fd = 1; // Find fd by userId from a map [userId=>fd].
        /**@var \Swoole\WebSocket\Server $swoole */
        $swoole = app('swoole');
        $success = $swoole->push($fd, 'Push data to fd#1 in Controller');
        var_dump($success);
    }
}

Listen events

System events

Usually, you can reset/destroy some global/static variables, or change the current Request/Response object.

laravels.received_request After LaravelS parsed Swoole\Http\Request to Illuminate\Http\Request, before Laravel's Kernel handles this request.

// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen(). 
$events->listen('laravels.received_request', function (\Illuminate\Http\Request $req, $app) {
    $req->query->set('get_key', 'hhxsv5');// Change query of request
    $req->request->set('post_key', 'hhxsv5'); // Change post of request
});

laravels.generated_response After Laravel's Kernel handled the request, before LaravelS parses Illuminate\Http\Response to Swoole\Http\Response.

// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen(). 
$events->listen('laravels.generated_response', function (\Illuminate\Http\Request $req, \Symfony\Component\HttpFoundation\Response $rsp, $app) {
    $rsp->headers->set('header-key', 'hhxsv5');// Change header of response
});

Customized asynchronous events

This feature depends on Swoole's AsyncTask; you need to set swoole.task_worker_num in config/laravels.php first. The performance of asynchronous event processing is influenced by the number of Swoole task processes, so set task_worker_num appropriately.

1.Create event class.

use Hhxsv5\LaravelS\Swoole\Task\Event;
class TestEvent extends Event
{
    protected $listeners = [
        // Listener list
        TestListener1::class,
        // TestListener2::class,
    ];
    private $data;
    public function __construct($data)
    {
        $this->data = $data;
    }
    public function getData()
    {
        return $this->data;
    }
}

2.Create listener class.

use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Task\Listener;
class TestListener1 extends Listener
{
    /**
     * @var TestEvent
     */
    protected $event;
    
    public function handle()
    {
        \Log::info(__CLASS__ . ':handle start', [$this->event->getData()]);
        sleep(2);// Simulate the slow codes
        // Deliver task in CronJob, but NOT support callback finish() of task.
        // Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
        $ret = Task::deliver(new TestTask('task data'));
        var_dump($ret);
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
    }
}

3.Fire event.

// Create instance of event and fire it, "fire" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Event;
$event = new TestEvent('event data');
// $event->delay(10); // Delay 10 seconds to fire event
// $event->setTries(3); // When an error occurs, try 3 times in total
$success = Event::fire($event);
var_dump($success); // Returns true on success, otherwise false

Asynchronous task queue

This feature depends on Swoole's AsyncTask; you need to set swoole.task_worker_num in config/laravels.php first. The performance of task processing is influenced by the number of Swoole task processes, so set task_worker_num appropriately.

1.Create task class.

use Hhxsv5\LaravelS\Swoole\Task\Task;
class TestTask extends Task
{
    private $data;
    private $result;
    public function __construct($data)
    {
        $this->data = $data;
    }
    // The logic of task handling, run in task process, CAN NOT deliver task
    public function handle()
    {
        \Log::info(__CLASS__ . ':handle start', [$this->data]);
        sleep(2);// Simulate the slow codes
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
        $this->result = 'the result of ' . $this->data;
    }
    // Optional, finish event, the logic of after task handling, run in worker process, CAN deliver task 
    public function finish()
    {
        \Log::info(__CLASS__ . ':finish start', [$this->result]);
        Task::deliver(new TestTask2('task2 data')); // Deliver the other task
    }
}

2.Deliver task.

// Create instance of TestTask and deliver it, "deliver" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Task;
$task = new TestTask('task data');
// $task->delay(3);// delay 3 seconds to deliver task
// $task->setTries(3); // When an error occurs, try 3 times in total
$ret = Task::deliver($task);
var_dump($ret);// Returns true on success, otherwise false

Millisecond cron job

A cron job wrapper based on Swoole's millisecond timer, which can replace Linux Crontab.

1.Create cron job class.

namespace App\Jobs\Timer;
use App\Tasks\TestTask;
use Swoole\Coroutine;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Timer\CronJob;
class TestCronJob extends CronJob
{
    protected $i = 0;
    // !!! The `interval` and `isImmediate` of a cron job can be configured in two ways (pick one): override the corresponding methods, or pass parameters when registering the cron job.
    // --- Override the corresponding method to return the configuration: begin
    public function interval()
    {
        return 1000;// Run every 1000ms
    }
    public function isImmediate()
    {
        return false;// Whether to trigger `run` immediately after setting up
    }
    // --- Override the corresponding method to return the configuration: end
    public function run()
    {
        \Log::info(__METHOD__, ['start', $this->i, microtime(true)]);
        // do something
        // sleep(1); // Swoole < 2.1
        Coroutine::sleep(1); // Swoole>=2.1 Coroutine will be automatically created for run().
        $this->i++;
        \Log::info(__METHOD__, ['end', $this->i, microtime(true)]);

        if ($this->i >= 10) { // Run 10 times only
            \Log::info(__METHOD__, ['stop', $this->i, microtime(true)]);
            $this->stop(); // Stop this cron job, but it will run again after restart/reload.
            // Deliver task in CronJob, but NOT support callback finish() of task.
            // Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
            $ret = Task::deliver(new TestTask('task data'));
            var_dump($ret);
        }
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
    }
}

2.Register cron job.

// Register cron jobs in file "config/laravels.php"
[
    // ...
    'timer'          => [
        'enable' => true, // Enable Timer
        'jobs'   => [ // The list of cron job
            // Enable LaravelScheduleJob to run `php artisan schedule:run` every 1 minute, replace Linux Crontab
            // \Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob::class,
            // Two ways to configure parameters:
            // [\App\Jobs\Timer\TestCronJob::class, [1000, true]], // Pass in parameters when registering
            \App\Jobs\Timer\TestCronJob::class, // Override the corresponding method to return the configuration
        ],
        'max_wait_time' => 5, // Max waiting time of reloading
        // Enable the global lock to ensure that only one instance starts the timer when deploying multiple instances. This feature depends on Redis, please see https://laravel.com/docs/7.x/redis
        'global_lock'     => false,
        'global_lock_key' => config('app.name', 'Laravel'),
    ],
    // ...
];

3.Note: multiple timers will be launched when building a server cluster, so make sure that only one timer is launched to avoid running tasks repeatedly.

4.Since v3.4.0, LaravelS supports hot restarting [Reload] of the Timer process. After LaravelS receives the SIGUSR1 signal, it waits max_wait_time (default 5) seconds for the process to end, then the Manager process pulls up the Timer process again.

5.If you only need minute-level scheduled tasks, it is recommended to enable Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob instead of Linux Crontab, so that you can follow Laravel's task scheduling conventions and configure tasks in the Kernel.

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // runInBackground() will start a new child process to execute the task. This is asynchronous and will not affect the execution timing of other tasks.
    $schedule->command(TestCommand::class)->runInBackground()->everyMinute();
}

Automatically reload after modifying code

Via inotify, supports Linux only.

1.Install inotify extension.

2.Turn on the switch in Settings.

3.Note: only file modifications made inside Linux will trigger the file change events. It's recommended to use the latest Docker. Vagrant Solution.

Via fswatch, supports OS X/Linux/Windows.

1.Install fswatch.

2.Run command in your project root directory.

# Watch current directory
./bin/fswatch
# Watch app directory
./bin/fswatch ./app

Via inotifywait, supports Linux.

1.Install inotify-tools.

2.Run command in your project root directory.

# Watch current directory
./bin/inotify
# Watch app directory
./bin/inotify ./app

When the above methods do not work, the last resort is to set max_request=1 and worker_num=1 so that the Worker process restarts after handling each request. The performance of this approach is very poor, so use it in the development environment only.

Get the instance of SwooleServer in your project

/**
 * $swoole is the instance of `Swoole\WebSocket\Server` if enable WebSocket server, otherwise `Swoole\Http\Server`
 * @var \Swoole\WebSocket\Server|\Swoole\Http\Server $swoole
 */
$swoole = app('swoole');
var_dump($swoole->stats());
$swoole->push($fd, 'Push WebSocket message');

Use SwooleTable

1.Define Table, support multiple.

All defined tables will be created before Swoole starts.

// in file "config/laravels.php"
[
    // ...
    'swoole_tables'  => [
        // Scene:bind UserId & FD in WebSocket
        'ws' => [// The Key is table name, will add suffix "Table" to avoid naming conflicts. Here defined a table named "wsTable"
            'size'   => 102400,// The max size
            'column' => [// Define the columns
                ['name' => 'value', 'type' => \Swoole\Table::TYPE_INT, 'size' => 8],
            ],
        ],
        //...Define the other tables
    ],
    // ...
];

2.Access Table: all table instances are bound to the Swoole server and can be accessed via app('swoole')->xxxTable.

namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
class WebSocketService implements WebSocketHandlerInterface
{
    /**@var \Swoole\Table $wsTable */
    private $wsTable;
    public function __construct()
    {
        $this->wsTable = app('swoole')->wsTable;
    }
    // Scene:bind UserId & FD in WebSocket
    public function onOpen(Server $server, Request $request)
    {
        // var_dump(app('swoole') === $server);// The same instance
        /**
         * Get the currently logged in user
         * This feature requires that the path to establish a WebSocket connection go through middleware such as Authenticate.
         * E.g:
         * Browser side: var ws = new WebSocket("ws://127.0.0.1:5200/ws");
         * Then the /ws route in Laravel needs to add the middleware like Authenticate.
         * Route::get('/ws', function () {
         *     // Respond any content with status code 200
         *     return 'websocket';
         * })->middleware(['auth']);
         */
        // $user = Auth::user();
        // $userId = $user ? $user->id : 0; // 0 means a guest user who is not logged in
        $userId = mt_rand(1000, 10000);
        // if (!$userId) {
        //     // Disconnect the connections of unlogged users
        //     $server->disconnect($request->fd);
        //     return;
        // }
        $this->wsTable->set('uid:' . $userId, ['value' => $request->fd]);// Bind map uid to fd
        $this->wsTable->set('fd:' . $request->fd, ['value' => $userId]);// Bind map fd to uid
        $server->push($request->fd, "Welcome to LaravelS #{$request->fd}");
    }
    public function onMessage(Server $server, Frame $frame)
    {
        // Broadcast
        foreach ($this->wsTable as $key => $row) {
            if (strpos($key, 'uid:') === 0 && $server->isEstablished($row['value'])) {
                $content = sprintf('Broadcast: new message "%s" from #%d', $frame->data, $frame->fd);
                $server->push($row['value'], $content);
            }
        }
    }
    public function onClose(Server $server, $fd, $reactorId)
    {
        $uid = $this->wsTable->get('fd:' . $fd);
        if ($uid !== false) {
            $this->wsTable->del('uid:' . $uid['value']); // Unbind uid map
        }
        $this->wsTable->del('fd:' . $fd);// Unbind fd map
        $server->push($fd, "Goodbye #{$fd}");
    }
}

Multi-port mixed protocol

For more information, please refer to Swoole Server AddListener

To make our main server support more protocols not just Http and WebSocket, we bring the feature multi-port mixed protocol of Swoole in LaravelS and name it Socket. Now, you can build TCP/UDP applications easily on top of Laravel.

Create Socket handler class, and extend Hhxsv5\LaravelS\Swoole\Socket\{TcpSocket|UdpSocket|Http|WebSocket}.

namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\TcpSocket;
use Swoole\Server;
class TestTcpSocket extends TcpSocket
{
    public function onConnect(Server $server, $fd, $reactorId)
    {
        \Log::info('New TCP connection', [$fd]);
        $server->send($fd, 'Welcome to LaravelS.');
    }
    public function onReceive(Server $server, $fd, $reactorId, $data)
    {
        \Log::info('Received data', [$fd, $data]);
        $server->send($fd, 'LaravelS: ' . $data);
        if ($data === "quit\r\n") {
            $server->send($fd, 'LaravelS: bye' . PHP_EOL);
            $server->close($fd);
        }
    }
    public function onClose(Server $server, $fd, $reactorId)
    {
        \Log::info('Close TCP connection', [$fd]);
        $server->send($fd, 'Goodbye');
    }
}

These Socket connections share the same Worker processes with your HTTP/WebSocket connections, so delivering tasks, using SwooleTable, and even using Laravel components such as DB and Eloquent are not a problem at all. In addition, you can access the Swoole\Server\Port object directly via the member property swoolePort.

public function onReceive(Server $server, $fd, $reactorId, $data)
{
    $port = $this->swoolePort; // Get the `Swoole\Server\Port` object
}
namespace App\Http\Controllers;
class TestController extends Controller
{
    public function test()
    {
        /**@var \Swoole\Http\Server|\Swoole\WebSocket\Server $swoole */
        $swoole = app('swoole');
        // $swoole->ports: Traverse all Port objects, https://www.swoole.co.uk/docs/modules/swoole-server/multiple-ports
        $port = $swoole->ports[0]; // Get the `Swoole\Server\Port` object; $swoole->ports[0] is the port of the main server
        foreach ($port->connections as $fd) { // Traverse all connections
            // $swoole->send($fd, 'Send tcp message');
            // if($swoole->isEstablished($fd)) {
            //     $swoole->push($fd, 'Send websocket message');
            // }
        }
    }
}

Register Sockets.

// Edit `config/laravels.php`
//...
'sockets' => [
    [
        'host'     => '127.0.0.1',
        'port'     => 5291,
        'type'     => SWOOLE_SOCK_TCP,// Socket type: SWOOLE_SOCK_TCP/SWOOLE_SOCK_TCP6/SWOOLE_SOCK_UDP/SWOOLE_SOCK_UDP6/SWOOLE_UNIX_DGRAM/SWOOLE_UNIX_STREAM
        'settings' => [// Swoole settings:https://www.swoole.co.uk/docs/modules/swoole-server-methods#swoole_server-addlistener
            'open_eof_check' => true,
            'package_eof'    => "\r\n",
        ],
        'handler'  => \App\Sockets\TestTcpSocket::class,
        'enable'   => true, // whether to enable, default true
    ],
],

The heartbeat configuration can only be set on the main server and cannot be configured on a Socket, but each Socket inherits the heartbeat configuration of the main server.

For TCP socket, onConnect and onClose events will be blocked when dispatch_mode of Swoole is 1/3, so if you want to unblock these two events please set dispatch_mode to 2/4/5.

'swoole' => [
    //...
    'dispatch_mode' => 2,
    //...
];

Test.

TCP: telnet 127.0.0.1 5291

UDP: [Linux] echo "Hello LaravelS" > /dev/udp/127.0.0.1/5292

Registration examples of other protocols; a minimal UDP sketch is shown after this list.

  • UDP
  • Http
  • WebSocket: The main server must turn on WebSocket, that is, set websocket.enable to true.
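
For reference, here is a minimal sketch of a UDP handler. It assumes the UdpSocket base class exposes an onPacket callback and that $clientInfo carries the sender's address and port; check the Socket classes shipped with your LaravelS version before relying on it.

namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\UdpSocket;
use Swoole\Server;
class TestUdpSocket extends UdpSocket
{
    public function onPacket(Server $server, $data, array $clientInfo)
    {
        // Log the received datagram and echo it back to the sender ('address'/'port' keys are assumed)
        \Log::info('Received UDP packet', [$data, $clientInfo]);
        $server->sendto($clientInfo['address'], $clientInfo['port'], 'LaravelS: ' . $data);
    }
}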

Coroutine

Swoole Coroutine

Warning: The execution order of code in coroutines is out of order, so request-level data should be isolated by coroutine ID. However, Laravel/Lumen contains many singletons and static attributes, so data from different requests would affect each other: it is unsafe. For example, the database connection is a singleton, and the same database connection shares the same PDO resource. This is fine in synchronous blocking mode, but it does not work in asynchronous coroutine mode: each query needs a different connection and the IO state of each connection must be maintained, which requires a connection pool.

DO NOT enable the coroutine, only the custom process can use the coroutine.

Custom process

Supports creating special worker processes for monitoring, reporting, or other special tasks. Refer to addProcess.

Create a Process class that implements CustomProcessInterface.

namespace App\Processes;
use App\Tasks\TestTask;
use Hhxsv5\LaravelS\Swoole\Process\CustomProcessInterface;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Swoole\Coroutine;
use Swoole\Http\Server;
use Swoole\Process;
class TestProcess implements CustomProcessInterface
{
    /**
     * @var bool Quit tag for Reload updates
     */
    private static $quit = false;

    public static function callback(Server $swoole, Process $process)
    {
        // The callback method cannot exit; once it exits, the Manager process will automatically re-create the process.
        while (!self::$quit) {
            \Log::info('Test process: running');
            // sleep(1); // Swoole < 2.1
            Coroutine::sleep(1); // Swoole>=2.1: Coroutine & Runtime will be automatically enabled for callback().
             // Deliver task in custom process, but NOT support callback finish() of task.
            // Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
            $ret = Task::deliver(new TestTask('task data'));
            var_dump($ret);
            // The upper layer will catch the exception thrown in the callback and record it in the Swoole log, and then this process will exit. The Manager process will re-create the process after 3 seconds, so developers need to try/catch to catch the exception by themselves to avoid frequent process creation.
            // throw new \Exception('an exception');
        }
    }
    // Requirements: LaravelS >= v3.4.0 & callback() must be async non-blocking program.
    public static function onReload(Server $swoole, Process $process)
    {
        // Stop the process...
        // Then end process
        \Log::info('Test process: reloading');
        self::$quit = true;
        // $process->exit(0); // Force exit process
    }
    // Requirements: LaravelS >= v3.7.4 & callback() must be async non-blocking program.
    public static function onStop(Server $swoole, Process $process)
    {
        // Stop the process...
        // Then end process
        \Log::info('Test process: stopping');
        self::$quit = true;
        // $process->exit(0); // Force exit process
    }
}

Register TestProcess.

// Edit `config/laravels.php`
// ...
'processes' => [
    'test' => [ // Key name is process name
        'class'    => \App\Processes\TestProcess::class,
        'redirect' => false, // Whether redirect stdin/stdout, true or false
        'pipe'     => 0,     // The type of pipeline, 0: no pipeline 1: SOCK_STREAM 2: SOCK_DGRAM
        'enable'   => true,  // Whether to enable, default true
        //'num'    => 3   // To create multiple processes of this class, default is 1
        //'queue'    => [ // Enable message queue as inter-process communication, configure empty array means use default parameters
        //    'msg_key'  => 0,    // The key of the message queue. Default: ftok(__FILE__, 1).
        //    'mode'     => 2,    // Communication mode, default is 2, which means contention mode
        //    'capacity' => 8192, // The length of a single message, is limited by the operating system kernel parameters. The default is 8192, and the maximum is 65536
        //],
        //'restart_interval' => 5, // After the process exits abnormally, how many seconds to wait before restarting the process, default 5 seconds
    ],
],

Note: callback() must not quit; if it quits, the Manager process will re-create the process.

Example: Write data to a custom process.

// config/laravels.php
'processes' => [
    'test' => [
        'class'    => \App\Processes\TestProcess::class,
        'redirect' => false,
        'pipe'     => 1,
    ],
],
// app/Processes/TestProcess.php
public static function callback(Server $swoole, Process $process)
{
    while ($data = $process->read()) {
        \Log::info('TestProcess: read data', [$data]);
        $process->write('TestProcess: ' . $data);
    }
}
// app/Http/Controllers/TestController.php
public function testProcessWrite()
{
    /**@var \Swoole\Process $process */
    $process = app('swoole')->customProcesses['test'];
    $process->write('TestController: write data' . time());
    var_dump($process->read());
}

Common components

Apollo

LaravelS will pull the Apollo configuration and write it to the .env file when starting. At the same time, LaravelS will start the custom process apollo to monitor the configuration and automatically reload when the configuration changes.

Enable Apollo: add --enable-apollo and Apollo parameters to the startup parameters.

php bin/laravels start --enable-apollo --apollo-server=http://127.0.0.1:8080 --apollo-app-id=LARAVEL-S-TEST

Supports hot updates (optional).

// Edit `config/laravels.php`
'processes' => Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),
// When there are other custom process configurations
'processes' => [
    'test' => [
        'class'    => \App\Processes\TestProcess::class,
        'redirect' => false,
        'pipe'     => 1,
    ],
    // ...
] + Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),

List of available parameters.

apollo-server: Apollo server URL. Default: -. Example: --apollo-server=http://127.0.0.1:8080
apollo-app-id: Apollo APP ID. Default: -. Example: --apollo-app-id=LARAVEL-S-TEST
apollo-namespaces: The namespace(s) to which the APP belongs; multiple values are supported. Default: application. Example: --apollo-namespaces=application --apollo-namespaces=env
apollo-cluster: The cluster to which the APP belongs. Default: default. Example: --apollo-cluster=default
apollo-client-ip: IP of the current instance, can also be used for grayscale publishing. Default: local intranet IP. Example: --apollo-client-ip=10.2.1.83
apollo-pull-timeout: Timeout (seconds) when pulling the configuration. Default: 5. Example: --apollo-pull-timeout=5
apollo-backup-old-env: Whether to back up the old .env configuration file when updating it. Default: false. Example: --apollo-backup-old-env

Prometheus

Supports Prometheus monitoring and alerting; monitoring metrics can be viewed visually in Grafana. Please refer to Docker Compose for setting up the Prometheus and Grafana environment.

Requires the APCu extension >= 5.0.0; install it via pecl install apcu.

Copy the configuration file prometheus.php to the config directory of your project. Modify the configuration as appropriate.

# Execute commands in the project root directory
cp vendor/hhxsv5/laravel-s/config/prometheus.php config/

If your project is Lumen, you also need to manually load the configuration $app->configure('prometheus'); in bootstrap/app.php.

Configure the global middleware Hhxsv5\LaravelS\Components\Prometheus\RequestMiddleware::class. To measure request time as accurately as possible, RequestMiddleware must be the first global middleware, placed before all other middleware.

Register ServiceProvider: Hhxsv5\LaravelS\Components\Prometheus\ServiceProvider::class.
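
As a rough illustration, the two registrations above could look like this in a typical Laravel project; the file locations and surrounding entries are assumptions about your application, not part of LaravelS itself.

// app/Http/Kernel.php: RequestMiddleware should be the first global middleware
protected $middleware = [
    \Hhxsv5\LaravelS\Components\Prometheus\RequestMiddleware::class,
    // ... the other global middleware
];

// config/app.php: register the ServiceProvider
'providers' => [
    // ...
    \Hhxsv5\LaravelS\Components\Prometheus\ServiceProvider::class,
],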

Configure the CollectorProcess in config/laravels.php to collect the metrics of Swoole Worker/Task/Timer processes regularly.

'processes' => Hhxsv5\LaravelS\Components\Prometheus\CollectorProcess::getDefinition(),

Create the route to output metrics.

use Hhxsv5\LaravelS\Components\Prometheus\Exporter;

Route::get('/actuator/prometheus', function () {
    $result = app(Exporter::class)->render();
    return response($result, 200, ['Content-Type' => Exporter::REDNER_MIME_TYPE]);
});

Complete the configuration of Prometheus and start it.

global:
  scrape_interval: 5s
  scrape_timeout: 5s
  evaluation_interval: 30s
scrape_configs:
- job_name: laravel-s-test
  honor_timestamps: true
  metrics_path: /actuator/prometheus
  scheme: http
  follow_redirects: true
  static_configs:
  - targets:
    - 127.0.0.1:5200 # The ip and port of the monitored service
# Dynamically discovered using one of the supported service-discovery mechanisms
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
# - job_name: laravels-eureka
#   honor_timestamps: true
#   scrape_interval: 5s
#   metrics_path: /actuator/prometheus
#   scheme: http
#   follow_redirects: true
  # eureka_sd_configs:
  # - server: http://127.0.0.1:8080/eureka
  #   follow_redirects: true
  #   refresh_interval: 5s

Start Grafana, then import panel json.

Grafana Dashboard

Other features

Configure Swoole events

Supported events:

ServerStart (Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface): Occurs when the Master process is starting; this event should not handle complex business logic and can only do some simple initialization work.
ServerStop (Hhxsv5\LaravelS\Swoole\Events\ServerStopInterface): Occurs when the server exits normally; async or coroutine related APIs CANNOT be used in this event.
WorkerStart (Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface): Occurs after a Worker/Task process has started and the Laravel initialization has been completed.
WorkerStop (Hhxsv5\LaravelS\Swoole\Events\WorkerStopInterface): Occurs after a Worker/Task process exits normally.
WorkerError (Hhxsv5\LaravelS\Swoole\Events\WorkerErrorInterface): Occurs when an exception or fatal error occurs in a Worker/Task process.

1.Create an event class to implement the corresponding interface.

namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface;
use Swoole\Atomic;
use Swoole\Http\Server;
class ServerStartEvent implements ServerStartInterface
{
    public function __construct()
    {
    }
    public function handle(Server $server)
    {
        // Initialize a global counter (available across processes)
        $server->atomicCount = new Atomic(2233);

        // Invoked in controller: app('swoole')->atomicCount->get();
    }
}
namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface;
use Swoole\Http\Server;
class WorkerStartEvent implements WorkerStartInterface
{
    public function __construct()
    {
    }
    public function handle(Server $server, $workerId)
    {
        // Initialize a database connection pool
        // DatabaseConnectionPool::init();
    }
}

2.Configuration.

// Edit `config/laravels.php`
'event_handlers' => [
    'ServerStart' => [\App\Events\ServerStartEvent::class], // Trigger events in array order
    'WorkerStart' => [\App\Events\WorkerStartEvent::class],
],

Serverless

Alibaba Cloud Function Compute

Function Compute.

1.Modify bootstrap/app.php and set the storage directory. Because the project directory is read-only, only the /tmp directory can be read and written.

$app->useStoragePath(env('APP_STORAGE_PATH', '/tmp/storage'));

2.Create a shell script laravels_bootstrap and grant executable permission.

#!/usr/bin/env bash
set +e

# Create storage-related directories
mkdir -p /tmp/storage/app/public
mkdir -p /tmp/storage/framework/cache
mkdir -p /tmp/storage/framework/sessions
mkdir -p /tmp/storage/framework/testing
mkdir -p /tmp/storage/framework/views
mkdir -p /tmp/storage/logs

# Set the environment variable APP_STORAGE_PATH, please make sure it's the same as APP_STORAGE_PATH in .env
export APP_STORAGE_PATH=/tmp/storage

# Start LaravelS
php bin/laravels start

3.Configure template.xml.

ROSTemplateFormatVersion: '2015-09-01'
Transform: 'Aliyun::Serverless-2018-04-03'
Resources:
  laravel-s-demo:
    Type: 'Aliyun::Serverless::Service'
    Properties:
      Description: 'LaravelS Demo for Serverless'
    fc-laravel-s:
      Type: 'Aliyun::Serverless::Function'
      Properties:
        Handler: laravels.handler
        Runtime: custom
        MemorySize: 512
        Timeout: 30
        CodeUri: ./
        InstanceConcurrency: 10
        EnvironmentVariables:
          BOOTSTRAP_FILE: laravels_bootstrap

Important notices

Singleton Issue

Under FPM mode, singleton instances are instantiated and recycled in every request: request start => instantiate instance => request end => recycle instance.

Under Swoole Server, all singleton instances are held in memory, which is a different lifetime from FPM: request start => instantiate instance => request end => the singleton instance is NOT recycled. So the developer needs to maintain the state of singleton instances in every request.

Common solutions:

Write an XxxCleaner class to clean up the singleton object state. This class implements the interface Hhxsv5\LaravelS\Illuminate\Cleaners\CleanerInterface and is then registered in cleaners of laravels.php.

Reset the state of singleton instances via middleware; a minimal sketch follows this list.

Re-register the ServiceProvider: add XxxServiceProvider into register_providers of laravels.php, so that singleton instances are re-initialized in every request. Refer to the documentation.
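
As an illustration of the middleware approach, here is a minimal sketch; App\Support\SomeSingleton and its reset() method are hypothetical placeholders for whatever stateful singleton your application holds, and the middleware must be registered as a global middleware so it runs on every request.

// app/Http/Middleware/ResetSingletonState.php (hypothetical example)
namespace App\Http\Middleware;
use Closure;
class ResetSingletonState
{
    public function handle($request, Closure $next)
    {
        // Clear any state left over from the previous request before handling this one
        app(\App\Support\SomeSingleton::class)->reset();
        return $next($request);
    }
}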

Cleaners

Configuration cleaners.

Known issues

Known issues: known issues with certain packages and their solutions.

Debugging method

Logging: to output to the console, you can use the stderr channel, e.g. Log::channel('stderr')->debug('debug message').

Laravel Dump Server (integrated by default since Laravel 5.7).

Read request

Read the request via the Illuminate\Http\Request object. $_ENV is readable, $_SERVER is partially readable; $_GET/$_POST/$_FILES/$_COOKIE/$_REQUEST/$_SESSION/$GLOBALS CANNOT be used.

public function form(\Illuminate\Http\Request $request)
{
    $name = $request->input('name');
    $all = $request->all();
    $sessionId = $request->cookie('sessionId');
    $photo = $request->file('photo');
    // Call getContent() to get the raw POST body, instead of file_get_contents('php://input')
    $rawContent = $request->getContent();
    //...
}

Output response

Respond via the Illuminate\Http\Response object, compatible with echo/var_dump()/print_r(); the functions dd()/exit()/die()/header()/setcookie()/http_response_code() CANNOT be used.

public function json()
{
    return response()->json(['time' => time()])->header('header1', 'value1')->withCookie('c1', 'v1');
}

Persistent connection

Singleton connections are resident in memory; it is recommended to enable persistent connections for better performance.

Database connections reconnect automatically and immediately after a disconnect.

// config/database.php
'connections' => [
    'my_conn' => [
        'driver'    => 'mysql',
        'host'      => env('DB_MY_CONN_HOST', 'localhost'),
        'port'      => env('DB_MY_CONN_PORT', 3306),
        'database'  => env('DB_MY_CONN_DATABASE', 'forge'),
        'username'  => env('DB_MY_CONN_USERNAME', 'forge'),
        'password'  => env('DB_MY_CONN_PASSWORD', ''),
        'charset'   => 'utf8mb4',
        'collation' => 'utf8mb4_unicode_ci',
        'prefix'    => '',
        'strict'    => false,
        'options'   => [
            // Enable persistent connection
            \PDO::ATTR_PERSISTENT => true,
        ],
    ],
],

Redis connections do not reconnect automatically and immediately after a disconnect; an exception about the lost connection is thrown, and the connection is re-established on the next use. You need to make sure the correct DB is SELECTed every time before operating on Redis.

// config/database.php
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'), // It is recommended to use phpredis for better performance.
    'default' => [
        'host'       => env('REDIS_HOST', 'localhost'),
        'password'   => env('REDIS_PASSWORD', null),
        'port'       => env('REDIS_PORT', 6379),
        'database'   => 0,
        'persistent' => true, // Enable persistent connection
    ],
],
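
To illustrate the SELECT DB caveat above, something like the following can be done before a group of Redis operations; the connection name and DB index are only examples.

use Illuminate\Support\Facades\Redis;

// The resident connection may still be on a DB selected by a previous request,
// so explicitly select the intended DB before operating on Redis.
$redis = Redis::connection('default');
$redis->select(0);
$redis->set('key', 'value');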

About memory leaks

Avoid using global variables. If necessary, please clean or reset them manually.

Infinitely appending elements to a static/global variable will lead to OOM (Out of Memory).

class Test
{
    public static $array = [];
    public static $string = '';
}

// Controller
public function test(Request $req)
{
    // Out of Memory
    Test::$array[] = $req->input('param1');
    Test::$string .= $req->input('param2');
}

Memory leak detection method

Modify config/laravels.php: set worker_num=1 and max_request=1000000, and remember to change them back after testing;

Add a route /debug-memory-leak without any route middleware to observe the memory changes of the Worker process (a minimal sketch of such a route is shown after these steps);

Start LaravelS and request /debug-memory-leak until diff_mem is less than or equal to zero; if diff_mem is always greater than zero, it means that there may be a memory leak in Global Middleware or Laravel Framework;

After completing Step 3, alternately request the business routes and /debug-memory-leak (It is recommended to use ab/wrk to make a large number of requests for business routes), the initial increase in memory is normal. After a large number of requests for the business routes, if diff_mem is always greater than zero and curr_mem continues to increase, there is a high probability of memory leak; If curr_mem always changes within a certain range and does not continue to increase, there is a low probability of memory leak.

If you still can't solve it, max_request is the last guarantee.
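
A possible shape of the /debug-memory-leak route mentioned above is sketched below; the field names curr_mem/diff_mem simply mirror the wording of these steps, and the static variable only works because the Worker process is memory resident (worker_num=1 as set above).

// routes/web.php: minimal memory observation route, without any route middleware
use Illuminate\Support\Facades\Route;

Route::get('/debug-memory-leak', function () {
    static $previous = 0; // Persists across requests because the Worker process is resident
    $current = memory_get_usage();
    $diff = $current - $previous;
    $previous = $current;
    return response()->json(['curr_mem' => $current, 'diff_mem' => $diff]);
});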

Linux kernel parameter adjustment

Linux kernel parameter adjustment

Pressure test

Pressure test

Alternatives

Sponsor

PayPal

BTC

Gitee

License

MIT

Author: hhxsv5
Source Code: https://github.com/hhxsv5/laravel-s
License: MIT License

#php #laravel 

LaravelS: Glue for using Swoole in Laravel Or Lumen

🚀 LaravelS is an out-of-the-box adapter between Swoole and Laravel/Lumen.

Please Watch this repository to get the latest updates.

 _                               _  _____ 
| |                             | |/ ____|
| |     __ _ _ __ __ ___   _____| | (___  
| |    / _` | '__/ _` \ \ / / _ \ |\___ \ 
| |___| (_| | | | (_| |\ V /  __/ |____) |
|______\__,_|_|  \__,_| \_/ \___|_|_____/ 

Chinese Documentation (中文文档)

Features

Built-in Http/WebSocket server

Multi-port mixed protocol

Custom process

Memory resident

Asynchronous event listening

Asynchronous task queue

Millisecond cron job

Common Components

Gracefully reload

Automatically reload after modifying code

Support Laravel/Lumen both, good compatibility

Simple & Out of the box

Benchmark

Which is the fastest web framework?

TechEmpower Framework Benchmarks

Requirements

PHP: >= 5.5.9, PHP 7+ recommended
Swoole: >= 1.7.19 (PHP 5 no longer supported since 2.0.12), 4.5.0+ recommended
Laravel/Lumen: >= 5.1, 8.0+ recommended

Install

1.Require package via Composer(packagist).

composer require "hhxsv5/laravel-s:~3.7.0" -vvv
# Make sure that your composer.lock file is under the VCS

2.Register service provider(pick one of two).

Laravel: in the config/app.php file. Laravel 5.5+ supports package discovery automatically, so you can skip this step.

'providers' => [
    //...
    Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class,
],

Lumen: in bootstrap/app.php file

$app->register(Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class);

3.Publish configuration and binaries.

After upgrading LaravelS, you need to republish; click here to see the change notes of each version.

php artisan laravels publish
# Configuration: config/laravels.php
# Binary: bin/laravels bin/fswatch bin/inotify

4.Change config/laravels.php: listen_ip, listen_port, refer Settings.
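
For orientation, these two settings usually look like the excerpt below; the environment variable names and default values are assumptions that may differ between LaravelS versions.

// config/laravels.php (excerpt)
'listen_ip'   => env('LARAVELS_LISTEN_IP', '127.0.0.1'),
'listen_port' => env('LARAVELS_LISTEN_PORT', 5200),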

5.Performance tuning

Adjust kernel parameters

Number of Workers: LaravelS uses Swoole's synchronous IO mode. The larger the worker_num setting, the better the concurrency performance, but it also costs more memory and process-switching overhead. If one request takes 100ms, then to provide 1000 QPS of concurrency at least 100 Worker processes need to be configured: worker_num = 1000 QPS / (1s / 100ms) = 100. Incremental load testing is needed to find the best worker_num.

Number of Task Workers
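
As a starting point only (the right numbers come from the incremental load testing described above), the relevant Swoole settings might be configured as in this sketch; the values are purely illustrative.

// config/laravels.php (excerpt)
'swoole' => [
    // ...
    'worker_num'      => 100, // e.g. 1000 QPS * 0.1s per request, then tune by load testing
    'task_worker_num' => 20,  // Size it according to the volume of delivered tasks/events
    // ...
],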

Run

Please read the notices carefully before running, Important notices(IMPORTANT).

  • Commands: php bin/laravels {start|stop|restart|reload|info|help}.
start: Start LaravelS; list the processes via "ps -ef|grep laravels"
stop: Stop LaravelS and trigger the onStop method of custom processes
restart: Restart LaravelS; it stops gracefully before starting, and the service is unavailable until startup is complete
reload: Reload all Task/Worker/Timer processes which contain your business code, and trigger the onReload method of custom processes; the Master/Manager processes CANNOT be reloaded. After modifying config/laravels.php, you have to call restart for the changes to take effect
info: Display component version information
help: Display help information
  • Boot options for the commands start and restart.
-d|--daemonize: Run as a daemon; this option overrides the swoole.daemonize setting in laravels.php
-e|--env: The environment the command should run under, e.g. --env=testing will use the configuration file .env.testing first; this feature requires Laravel 5.2+
-i|--ignore: Ignore checking the PID file of the Master process
-x|--x-version: The version (branch) of the current project, stored in $_ENV/$_SERVER; access it via $_ENV['X_VERSION'], $_SERVER['X_VERSION'] or $request->server->get('X_VERSION')
  • Runtime files: start will automatically execute php artisan laravels config and generate these files. Developers generally don't need to pay attention to them; it's recommended to add them to .gitignore.
storage/laravels.conf: LaravelS's runtime configuration file
storage/laravels.pid: PID file of the Master process
storage/laravels-timer-process.pid: PID file of the Timer process
storage/laravels-custom-processes.pid: PID file of all custom processes

Deploy

It is recommended to supervise the main process via Supervisord; the prerequisites are to run without the -d option and to set swoole.daemonize to false.

[program:laravel-s-test]
directory=/var/www/laravel-s-test
command=/usr/local/bin/php bin/laravels start -i
numprocs=1
autostart=true
autorestart=true
startretries=3
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log

Cooperate with Nginx (Recommended)

Demo.

gzip on;
gzip_min_length 1024;
gzip_comp_level 2;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml;
gzip_vary on;
gzip_disable "msie6";
upstream swoole {
    # Connect IP:Port
    server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
    # Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
    #server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
    #server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
    #server 192.168.1.2:5200 backup;
    keepalive 16;
}
server {
    listen 80;
    # Don't forget to bind the host
    server_name laravels.com;
    root /yourpath/laravel-s-test/public;
    access_log /yourpath/log/nginx/$server_name.access.log  main;
    autoindex off;
    index index.html index.htm;
    # Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
    location / {
        try_files $uri @laravels;
    }
    # Response 404 directly when request the PHP file, to avoid exposing public/*.php
    #location ~* \.php$ {
    #    return 404;
    #}
    location @laravels {
        # proxy_connect_timeout 60s;
        # proxy_send_timeout 60s;
        # proxy_read_timeout 120s;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        # "swoole" is the upstream
        proxy_pass http://swoole;
    }
}

Cooperate with Apache

LoadModule proxy_module /yourpath/modules/mod_proxy.so
LoadModule proxy_balancer_module /yourpath/modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module /yourpath/modules/mod_lbmethod_byrequests.so
LoadModule proxy_http_module /yourpath/modules/mod_proxy_http.so
LoadModule slotmem_shm_module /yourpath/modules/mod_slotmem_shm.so
LoadModule rewrite_module /yourpath/modules/mod_rewrite.so
LoadModule remoteip_module /yourpath/modules/mod_remoteip.so
LoadModule deflate_module /yourpath/modules/mod_deflate.so

<IfModule deflate_module>
    SetOutputFilter DEFLATE
    DeflateCompressionLevel 2
    AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml
</IfModule>

<VirtualHost *:80>
    # Don't forget to bind the host
    ServerName www.laravels.com
    ServerAdmin hhxsv5@sina.com

    DocumentRoot /yourpath/laravel-s-test/public
    DirectoryIndex index.html index.htm
    <Directory "/">
        AllowOverride None
        Require all granted
    </Directory>

    RemoteIPHeader X-Forwarded-For

    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy balancer://laravels>  
        BalancerMember http://192.168.1.1:5200 loadfactor=7
        #BalancerMember http://192.168.1.2:5200 loadfactor=3
        #BalancerMember http://192.168.1.3:5200 loadfactor=1 status=+H
        ProxySet lbmethod=byrequests
    </Proxy>
    #ProxyPass / balancer://laravels/
    #ProxyPassReverse / balancer://laravels/

    # Apache handles the static resources, LaravelS handles the dynamic resource.
    RewriteEngine On
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
    RewriteRule ^/(.*)$ balancer://laravels%{REQUEST_URI} [P,L]

    ErrorLog ${APACHE_LOG_DIR}/www.laravels.com.error.log
    CustomLog ${APACHE_LOG_DIR}/www.laravels.com.access.log combined
</VirtualHost>

Enable WebSocket server

The listening address of the WebSocket server is the same as that of the Http server.

1.Create a WebSocket handler class that implements the interface WebSocketHandlerInterface. The instance is automatically instantiated at startup; you do not need to create it manually.

namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
/**
 * @see https://www.swoole.co.uk/docs/modules/swoole-websocket-server
 */
class WebSocketService implements WebSocketHandlerInterface
{
    // Declare constructor without parameters
    public function __construct()
    {
    }
    // public function onHandShake(Request $request, Response $response)
    // {
           // Custom handshake: https://www.swoole.co.uk/docs/modules/swoole-websocket-server-on-handshake
           // The onOpen event will be triggered automatically after a successful handshake
    // }
    public function onOpen(Server $server, Request $request)
    {
        // Before the onOpen event is triggered, the HTTP request to establish the WebSocket has passed the Laravel route,
        // so Laravel's Request, Auth information are readable, Session is readable and writable, but only in the onOpen event.
        // \Log::info('New WebSocket connection', [$request->fd, request()->all(), session()->getId(), session('xxx'), session(['yyy' => time()])]);
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
        $server->push($request->fd, 'Welcome to LaravelS');
    }
    public function onMessage(Server $server, Frame $frame)
    {
        // \Log::info('Received message', [$frame->fd, $frame->data, $frame->opcode, $frame->finish]);
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
        $server->push($frame->fd, date('Y-m-d H:i:s'));
    }
    public function onClose(Server $server, $fd, $reactorId)
    {
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
    }
}

2.Modify config/laravels.php.

// ...
'websocket'      => [
    'enable'  => true, // Note: set enable to true
    'handler' => \App\Services\WebSocketService::class,
],
'swoole'         => [
    //...
    // Must set dispatch_mode in (2, 4, 5), see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
    'dispatch_mode' => 2,
    //...
],
// ...

3.Use SwooleTable to bind FD & UserId (optional), Swoole Table Demo. You can also use other global storage services, like Redis/Memcached/MySQL, but be careful that FDs may conflict between multiple Swoole servers.
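
If you choose a global store such as Redis instead of SwooleTable, a minimal sketch of the binding inside onOpen could look like the following; the key naming and the per-server identifier are assumptions made to keep FDs from colliding across multiple Swoole servers.

use Illuminate\Support\Facades\Redis;

// Inside onOpen(): bind uid <-> fd, prefixing keys with a per-server identifier
$serverId = gethostname() . ':' . config('laravels.listen_port');
Redis::set("ws:{$serverId}:uid:{$userId}", $request->fd);
Redis::set("ws:{$serverId}:fd:{$request->fd}", $userId);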

4.Cooperate with Nginx (Recommended)

Refer WebSocket Proxy

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
upstream swoole {
    # Connect IP:Port
    server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
    # Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
    #server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
    #server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
    #server 192.168.1.2:5200 backup;
    keepalive 16;
}
server {
    listen 80;
    # Don't forget to bind the host
    server_name laravels.com;
    root /yourpath/laravel-s-test/public;
    access_log /yourpath/log/nginx/$server_name.access.log  main;
    autoindex off;
    index index.html index.htm;
    # Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
    location / {
        try_files $uri @laravels;
    }
    # Response 404 directly when request the PHP file, to avoid exposing public/*.php
    #location ~* \.php$ {
    #    return 404;
    #}
    # Http and WebSocket are concomitant, Nginx identifies them by "location"
    # !!! The location of WebSocket is "/ws"
    # Javascript: var ws = new WebSocket("ws://laravels.com/ws");
    location =/ws {
        # proxy_connect_timeout 60s;
        # proxy_send_timeout 60s;
        # proxy_read_timeout: Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds; At the same time, this close behavior is also affected by heartbeat setting of Swoole.
        # proxy_read_timeout 60s;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://swoole;
    }
    location @laravels {
        # proxy_connect_timeout 60s;
        # proxy_send_timeout 60s;
        # proxy_read_timeout 60s;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        proxy_pass http://swoole;
    }
}

5.Heartbeat setting

Heartbeat setting of Swoole

// config/laravels.php
'swoole' => [
    //...
    // All connections are traversed every 60 seconds. If a connection does not send any data to the server within 600 seconds, the connection will be forced to close.
    'heartbeat_idle_time'      => 600,
    'heartbeat_check_interval' => 60,
    //...
],

Proxy read timeout of Nginx

# Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds
proxy_read_timeout 60s;

6.Push data in controller

namespace App\Http\Controllers;
class TestController extends Controller
{
    public function push()
    {
        $fd = 1; // Find fd by userId from a map [userId=>fd].
        /**@var \Swoole\WebSocket\Server $swoole */
        $swoole = app('swoole');
        $success = $swoole->push($fd, 'Push data to fd#1 in Controller');
        var_dump($success);
    }
}

Listen events

System events

Usually, you can reset/destroy some global/static variables, or change the current Request/Response object.

laravels.received_request After LaravelS parsed Swoole\Http\Request to Illuminate\Http\Request, before Laravel's Kernel handles this request.

// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen(). 
$events->listen('laravels.received_request', function (\Illuminate\Http\Request $req, $app) {
    $req->query->set('get_key', 'hhxsv5');// Change query of request
    $req->request->set('post_key', 'hhxsv5'); // Change post of request
});

laravels.generated_response After Laravel's Kernel handled the request, before LaravelS parses Illuminate\Http\Response to Swoole\Http\Response.

// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen(). 
$events->listen('laravels.generated_response', function (\Illuminate\Http\Request $req, \Symfony\Component\HttpFoundation\Response $rsp, $app) {
    $rsp->headers->set('header-key', 'hhxsv5');// Change header of response
});

Customized asynchronous events

This feature depends on AsyncTask of Swoole, your need to set swoole.task_worker_num in config/laravels.php firstly. The performance of asynchronous event processing is influenced by number of Swoole task process, you need to set task_worker_num appropriately.

1.Create event class.

use Hhxsv5\LaravelS\Swoole\Task\Event;
class TestEvent extends Event
{
    protected $listeners = [
        // Listener list
        TestListener1::class,
        // TestListener2::class,
    ];
    private $data;
    public function __construct($data)
    {
        $this->data = $data;
    }
    public function getData()
    {
        return $this->data;
    }
}

2.Create listener class.

use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Task\Listener;
class TestListener1 extends Listener
{
    /**
     * @var TestEvent
     */
    protected $event;
    
    public function handle()
    {
        \Log::info(__CLASS__ . ':handle start', [$this->event->getData()]);
        sleep(2);// Simulate the slow codes
        // Deliver task in CronJob, but NOT support callback finish() of task.
        // Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
        $ret = Task::deliver(new TestTask('task data'));
        var_dump($ret);
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
    }
}

3.Fire event.

// Create instance of event and fire it, "fire" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Event;
$event = new TestEvent('event data');
// $event->delay(10); // Delay 10 seconds to fire event
// $event->setTries(3); // When an error occurs, try 3 times in total
$success = Event::fire($event);
var_dump($success);// Return true if sucess, otherwise false

Asynchronous task queue

This feature depends on AsyncTask of Swoole, your need to set swoole.task_worker_num in config/laravels.php firstly. The performance of task processing is influenced by number of Swoole task process, you need to set task_worker_num appropriately.

1.Create task class.

use Hhxsv5\LaravelS\Swoole\Task\Task;
class TestTask extends Task
{
    private $data;
    private $result;
    public function __construct($data)
    {
        $this->data = $data;
    }
    // The logic of task handling, run in task process, CAN NOT deliver task
    public function handle()
    {
        \Log::info(__CLASS__ . ':handle start', [$this->data]);
        sleep(2);// Simulate the slow codes
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
        $this->result = 'the result of ' . $this->data;
    }
    // Optional, finish event, the logic of after task handling, run in worker process, CAN deliver task 
    public function finish()
    {
        \Log::info(__CLASS__ . ':finish start', [$this->result]);
        Task::deliver(new TestTask2('task2 data')); // Deliver the other task
    }
}

2.Deliver task.

// Create instance of TestTask and deliver it, "deliver" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Task;
$task = new TestTask('task data');
// $task->delay(3);// delay 3 seconds to deliver task
// $task->setTries(3); // When an error occurs, try 3 times in total
$ret = Task::deliver($task);
var_dump($ret);// Return true if sucess, otherwise false

Millisecond cron job

Wrapper cron job base on Swoole's Millisecond Timer, replace Linux Crontab.

1.Create cron job class.

namespace App\Jobs\Timer;
use App\Tasks\TestTask;
use Swoole\Coroutine;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Timer\CronJob;
class TestCronJob extends CronJob
{
    protected $i = 0;
    // !!! The `interval` and `isImmediate` of cron job can be configured in two ways(pick one of two): one is to overload the corresponding method, and the other is to pass parameters when registering cron job.
    // --- Override the corresponding method to return the configuration: begin
    public function interval()
    {
        return 1000;// Run every 1000ms
    }
    public function isImmediate()
    {
        return false;// Whether to trigger `run` immediately after setting up
    }
    // --- Override the corresponding method to return the configuration: end
    public function run()
    {
        \Log::info(__METHOD__, ['start', $this->i, microtime(true)]);
        // do something
        // sleep(1); // Swoole < 2.1
        Coroutine::sleep(1); // Swoole>=2.1 Coroutine will be automatically created for run().
        $this->i++;
        \Log::info(__METHOD__, ['end', $this->i, microtime(true)]);

        if ($this->i >= 10) { // Run 10 times only
            \Log::info(__METHOD__, ['stop', $this->i, microtime(true)]);
            $this->stop(); // Stop this cron job, but it will run again after restart/reload.
            // Deliver task in CronJob, but NOT support callback finish() of task.
            // Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
            $ret = Task::deliver(new TestTask('task data'));
            var_dump($ret);
        }
        // The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
    }
}

2.Register cron job.

// Register cron jobs in file "config/laravels.php"
[
    // ...
    'timer'          => [
        'enable' => true, // Enable Timer
        'jobs'   => [ // The list of cron job
            // Enable LaravelScheduleJob to run `php artisan schedule:run` every 1 minute, replace Linux Crontab
            // \Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob::class,
            // Two ways to configure parameters:
            // [\App\Jobs\Timer\TestCronJob::class, [1000, true]], // Pass in parameters when registering
            \App\Jobs\Timer\TestCronJob::class, // Override the corresponding method to return the configuration
        ],
        'max_wait_time' => 5, // Max waiting time of reloading
        // Enable the global lock to ensure that only one instance starts the timer when deploying multiple instances. This feature depends on Redis, please see https://laravel.com/docs/7.x/redis
        'global_lock'     => false,
        'global_lock_key' => config('app.name', 'Laravel'),
    ],
    // ...
];

3.Note: it will launch multiple timers when build the server cluster, so you need to make sure that launch one timer only to avoid running repetitive task.

4.LaravelS v3.4.0 starts to support the hot restart [Reload] Timer process. After LaravelS receives the SIGUSR1 signal, it waits for max_wait_time(default 5) seconds to end the process, then the Manager process will pull up the Timer process again.

5.If you only need to use minute-level scheduled tasks, it is recommended to enable Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob instead of Linux Crontab, so that you can follow the coding habits of Laravel task scheduling and configure Kernel.

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // runInBackground() will start a new child process to execute the task. This is asynchronous and will not affect the execution timing of other tasks.
    $schedule->command(TestCommand::class)->runInBackground()->everyMinute();
}

Automatically reload after modifying code

Via inotify, support Linux only.

1.Install inotify extension.

2.Turn on the switch in Settings.

3.Notice: Modify the file only in Linux to receive the file change events. It's recommended to use the latest Docker. Vagrant Solution.

Via fswatch, support OS X/Linux/Windows.

1.Install fswatch.

2.Run command in your project root directory.

# Watch current directory
./bin/fswatch
# Watch app directory
./bin/fswatch ./app

Via inotifywait, support Linux.

1.Install inotify-tools.

2.Run command in your project root directory.

# Watch current directory
./bin/inotify
# Watch app directory
./bin/inotify ./app

When the above methods does not work, the ultimate solution: set max_request=1,worker_num=1, so that Worker process will restart after processing a request. The performance of this method is very poor, so only development environment use.

Get the instance of SwooleServer in your project

/**
 * $swoole is the instance of `Swoole\WebSocket\Server` if enable WebSocket server, otherwise `Swoole\Http\Server`
 * @var \Swoole\WebSocket\Server|\Swoole\Http\Server $swoole
 */
$swoole = app('swoole');
var_dump($swoole->stats());
$swoole->push($fd, 'Push WebSocket message');

Use SwooleTable

1.Define Table, support multiple.

All defined tables will be created before Swoole starting.

// in file "config/laravels.php"
[
    // ...
    'swoole_tables'  => [
        // Scene:bind UserId & FD in WebSocket
        'ws' => [// The key is the table name; the suffix "Table" is appended to avoid naming conflicts. Here a table named "wsTable" is defined
            'size'   => 102400,// The max size
            'column' => [// Define the columns
                ['name' => 'value', 'type' => \Swoole\Table::TYPE_INT, 'size' => 8],
            ],
        ],
        //...Define the other tables
    ],
    // ...
];

2.Access the Table: all table instances are bound to the SwooleServer instance and accessed via app('swoole')->xxxTable.

namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
class WebSocketService implements WebSocketHandlerInterface
{
    /**@var \Swoole\Table $wsTable */
    private $wsTable;
    public function __construct()
    {
        $this->wsTable = app('swoole')->wsTable;
    }
    // Scene:bind UserId & FD in WebSocket
    public function onOpen(Server $server, Request $request)
    {
        // var_dump(app('swoole') === $server);// The same instance
        /**
         * Get the currently logged in user
         * This feature requires that the path to establish a WebSocket connection go through middleware such as Authenticate.
         * E.g:
         * Browser side: var ws = new WebSocket("ws://127.0.0.1:5200/ws");
         * Then the /ws route in Laravel needs to add the middleware like Authenticate.
         * Route::get('/ws', function () {
         *     // Respond any content with status code 200
         *     return 'websocket';
         * })->middleware(['auth']);
         */
        // $user = Auth::user();
        // $userId = $user ? $user->id : 0; // 0 means a guest user who is not logged in
        $userId = mt_rand(1000, 10000);
        // if (!$userId) {
        //     // Disconnect the connections of unlogged users
        //     $server->disconnect($request->fd);
        //     return;
        // }
        $this->wsTable->set('uid:' . $userId, ['value' => $request->fd]);// Bind map uid to fd
        $this->wsTable->set('fd:' . $request->fd, ['value' => $userId]);// Bind map fd to uid
        $server->push($request->fd, "Welcome to LaravelS #{$request->fd}");
    }
    public function onMessage(Server $server, Frame $frame)
    {
        // Broadcast
        foreach ($this->wsTable as $key => $row) {
            if (strpos($key, 'uid:') === 0 && $server->isEstablished($row['value'])) {
                $content = sprintf('Broadcast: new message "%s" from #%d', $frame->data, $frame->fd);
                $server->push($row['value'], $content);
            }
        }
    }
    public function onClose(Server $server, $fd, $reactorId)
    {
        $uid = $this->wsTable->get('fd:' . $fd);
        if ($uid !== false) {
            $this->wsTable->del('uid:' . $uid['value']); // Unbind uid map
        }
        $this->wsTable->del('fd:' . $fd);// Unbind fd map
        $server->push($fd, "Goodbye #{$fd}");
    }
}

Multi-port mixed protocol

For more information, please refer to Swoole Server AddListener

To make the main server support protocols beyond Http and WebSocket, LaravelS integrates Swoole's multi-port mixed protocol feature and names it Socket. Now you can easily build TCP/UDP applications on top of Laravel.

Create Socket handler class, and extend Hhxsv5\LaravelS\Swoole\Socket\{TcpSocket|UdpSocket|Http|WebSocket}.

namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\TcpSocket;
use Swoole\Server;
class TestTcpSocket extends TcpSocket
{
    public function onConnect(Server $server, $fd, $reactorId)
    {
        \Log::info('New TCP connection', [$fd]);
        $server->send($fd, 'Welcome to LaravelS.');
    }
    public function onReceive(Server $server, $fd, $reactorId, $data)
    {
        \Log::info('Received data', [$fd, $data]);
        $server->send($fd, 'LaravelS: ' . $data);
        if ($data === "quit\r\n") {
            $server->send($fd, 'LaravelS: bye' . PHP_EOL);
            $server->close($fd);
        }
    }
    public function onClose(Server $server, $fd, $reactorId)
    {
        \Log::info('Close TCP connection', [$fd]);
        $server->send($fd, 'Goodbye');
    }
}

These Socket connections share the same Worker processes with your HTTP/WebSocket connections, so delivering tasks, using SwooleTable, and even using Laravel components such as DB and Eloquent all work without issue. At the same time, you can access the Swoole\Server\Port object directly through the member property swoolePort.

public function onReceive(Server $server, $fd, $reactorId, $data)
{
    $port = $this->swoolePort; // Get the `Swoole\Server\Port` object
}
namespace App\Http\Controllers;
class TestController extends Controller
{
    public function test()
    {
        /**@var \Swoole\Http\Server|\Swoole\WebSocket\Server $swoole */
        $swoole = app('swoole');
        // $swoole->ports: Traverse all Port objects, https://www.swoole.co.uk/docs/modules/swoole-server/multiple-ports
        $port = $swoole->ports[0]; // Get the `Swoole\Server\Port` object; ports[0] is the listener of the main server
        foreach ($port->connections as $fd) { // Traverse all connections
            // $swoole->send($fd, 'Send tcp message');
            // if($swoole->isEstablished($fd)) {
            //     $swoole->push($fd, 'Send websocket message');
            // }
        }
    }
}

Register Sockets.

// Edit `config/laravels.php`
//...
'sockets' => [
    [
        'host'     => '127.0.0.1',
        'port'     => 5291,
        'type'     => SWOOLE_SOCK_TCP,// Socket type: SWOOLE_SOCK_TCP/SWOOLE_SOCK_TCP6/SWOOLE_SOCK_UDP/SWOOLE_SOCK_UDP6/SWOOLE_UNIX_DGRAM/SWOOLE_UNIX_STREAM
        'settings' => [// Swoole settings:https://www.swoole.co.uk/docs/modules/swoole-server-methods#swoole_server-addlistener
            'open_eof_check' => true,
            'package_eof'    => "\r\n",
        ],
        'handler'  => \App\Sockets\TestTcpSocket::class,
        'enable'   => true, // whether to enable, default true
    ],
],

The heartbeat configuration can only be set on the main server and cannot be configured per Socket; each Socket inherits the heartbeat configuration of the main server.
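As a hedged illustration, the heartbeat options are the standard Swoole settings configured under the swoole key of config/laravels.php; a sketch:

// config/laravels.php — a sketch of main-server heartbeat settings (inherited by all Sockets)
'swoole' => [
    // ...
    'heartbeat_check_interval' => 60,  // Check connections every 60 seconds
    'heartbeat_idle_time'      => 600, // Close connections idle for more than 600 seconds
    // ...
],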

For TCP socket, onConnect and onClose events will be blocked when dispatch_mode of Swoole is 1/3, so if you want to unblock these two events please set dispatch_mode to 2/4/5.

'swoole' => [
    //...
    'dispatch_mode' => 2,
    //...
];

Test.

TCP: telnet 127.0.0.1 5291

UDP: [Linux] echo "Hello LaravelS" > /dev/udp/127.0.0.1/5292

Register example of other protocols.

  • UDP (see the sketch below)
  • Http
  • WebSocket: The main server must turn on WebSocket, that is, set websocket.enable to true.
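For the UDP case, a handler might look like the following sketch. It extends Hhxsv5\LaravelS\Swoole\Socket\UdpSocket as listed above; the onPacket signature follows Swoole's UDP packet event, so check it against your installed version.

namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\UdpSocket;
use Swoole\Server;
class TestUdpSocket extends UdpSocket
{
    public function onPacket(Server $server, $data, $clientInfo)
    {
        \Log::info('Received UDP packet', [$data, $clientInfo]);
        // Echo the data back to the sender
        $server->sendto($clientInfo['address'], $clientInfo['port'], 'LaravelS: ' . $data);
    }
}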

Coroutine

Swoole Coroutine

Warning: code inside a coroutine executes out of order, so request-level data must be isolated by coroutine ID. However, Laravel/Lumen contains many singletons and static attributes, so data from different requests would affect each other, which is unsafe. For example, the database connection is a singleton, and the same database connection shares the same PDO resource. This is fine in synchronous blocking mode, but it does not work in asynchronous coroutine mode, where each query needs its own connection and the IO state of each connection must be maintained separately, which requires a connection pool.

DO NOT enable the coroutine, only the custom process can use the coroutine.

Custom process

Developers can create custom worker processes for monitoring, reporting, or other special tasks. Refer to addProcess.

Create a Process class that implements CustomProcessInterface.

namespace App\Processes;
use App\Tasks\TestTask;
use Hhxsv5\LaravelS\Swoole\Process\CustomProcessInterface;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Swoole\Coroutine;
use Swoole\Http\Server;
use Swoole\Process;
class TestProcess implements CustomProcessInterface
{
    /**
     * @var bool Quit tag for Reload updates
     */
    private static $quit = false;

    public static function callback(Server $swoole, Process $process)
    {
        // The callback method cannot exit. Once it exits, the Manager process will automatically re-create the process.
        while (!self::$quit) {
            \Log::info('Test process: running');
            // sleep(1); // Swoole < 2.1
            Coroutine::sleep(1); // Swoole>=2.1: Coroutine & Runtime will be automatically enabled for callback().
             // Deliver task in custom process, but NOT support callback finish() of task.
            // Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
            $ret = Task::deliver(new TestTask('task data'));
            var_dump($ret);
            // The upper layer will catch the exception thrown in the callback and record it in the Swoole log, and then this process will exit. The Manager process will re-create the process after 3 seconds, so developers need to try/catch to catch the exception by themselves to avoid frequent process creation.
            // throw new \Exception('an exception');
        }
    }
    // Requirements: LaravelS >= v3.4.0 & callback() must be async non-blocking program.
    public static function onReload(Server $swoole, Process $process)
    {
        // Stop the process...
        // Then end process
        \Log::info('Test process: reloading');
        self::$quit = true;
        // $process->exit(0); // Force exit process
    }
    // Requirements: LaravelS >= v3.7.4 & callback() must be async non-blocking program.
    public static function onStop(Server $swoole, Process $process)
    {
        // Stop the process...
        // Then end process
        \Log::info('Test process: stopping');
        self::$quit = true;
        // $process->exit(0); // Force exit process
    }
}

Register TestProcess.

// Edit `config/laravels.php`
// ...
'processes' => [
    'test' => [ // Key name is process name
        'class'    => \App\Processes\TestProcess::class,
        'redirect' => false, // Whether redirect stdin/stdout, true or false
        'pipe'     => 0,     // The type of pipeline, 0: no pipeline 1: SOCK_STREAM 2: SOCK_DGRAM
        'enable'   => true,  // Whether to enable, default true
        //'num'    => 3   // To create multiple processes of this class, default is 1
        //'queue'    => [ // Enable message queue as inter-process communication, configure empty array means use default parameters
        //    'msg_key'  => 0,    // The key of the message queue. Default: ftok(__FILE__, 1).
        //    'mode'     => 2,    // Communication mode, default is 2, which means contention mode
        //    'capacity' => 8192, // The length of a single message, is limited by the operating system kernel parameters. The default is 8192, and the maximum is 65536
        //],
        //'restart_interval' => 5, // After the process exits abnormally, how many seconds to wait before restarting the process, default 5 seconds
    ],
],

Note: callback() cannot quit. If it quits, the Manager process will re-create the process.

Example: Write data to a custom process.

// config/laravels.php
'processes' => [
    'test' => [
        'class'    => \App\Processes\TestProcess::class,
        'redirect' => false,
        'pipe'     => 1,
    ],
],
// app/Processes/TestProcess.php
public static function callback(Server $swoole, Process $process)
{
    while ($data = $process->read()) {
        \Log::info('TestProcess: read data', [$data]);
        $process->write('TestProcess: ' . $data);
    }
}
// app/Http/Controllers/TestController.php
public function testProcessWrite()
{
    /**@var \Swoole\Process $process */
    $process = app('swoole')->customProcesses['test'];
    $process->write('TestController: write data' . time());
    var_dump($process->read());
}

Common components

Apollo

LaravelS will pull the Apollo configuration and write it to the .env file when starting. At the same time, LaravelS will start the custom process apollo to monitor the configuration and automatically reload when the configuration changes.

Enable Apollo: add --enable-apollo and Apollo parameters to the startup parameters.

php bin/laravels start --enable-apollo --apollo-server=http://127.0.0.1:8080 --apollo-app-id=LARAVEL-S-TEST

Support hot updates (optional).

// Edit `config/laravels.php`
'processes' => Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),
// When there are other custom process configurations
'processes' => [
    'test' => [
        'class'    => \App\Processes\TestProcess::class,
        'redirect' => false,
        'pipe'     => 1,
    ],
    // ...
] + Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),

List of available parameters.

| Parameter | Description | Default | Demo |
| --- | --- | --- | --- |
| apollo-server | Apollo server URL | - | --apollo-server=http://127.0.0.1:8080 |
| apollo-app-id | Apollo APP ID | - | --apollo-app-id=LARAVEL-S-TEST |
| apollo-namespaces | The namespace(s) to which the APP belongs; multiple values are supported | application | --apollo-namespaces=application --apollo-namespaces=env |
| apollo-cluster | The cluster to which the APP belongs | default | --apollo-cluster=default |
| apollo-client-ip | IP of the current instance, can also be used for grayscale publishing | Local intranet IP | --apollo-client-ip=10.2.1.83 |
| apollo-pull-timeout | Timeout (seconds) when pulling configuration | 5 | --apollo-pull-timeout=5 |
| apollo-backup-old-env | Whether to back up the old .env configuration file when updating it | false | --apollo-backup-old-env |

Prometheus

Supports Prometheus monitoring and alerting, with Grafana for visually viewing the monitoring metrics. Please refer to Docker Compose for setting up the Prometheus and Grafana environment.

Requires the APCu extension >= 5.0.0; install it via pecl install apcu.

Copy the configuration file prometheus.php to the config directory of your project. Modify the configuration as appropriate.

# Execute commands in the project root directory
cp vendor/hhxsv5/laravel-s/config/prometheus.php config/

If your project is Lumen, you also need to manually load the configuration $app->configure('prometheus'); in bootstrap/app.php.

Configure the global middleware: Hhxsv5\LaravelS\Components\Prometheus\RequestMiddleware::class. To count request time consumption as accurately as possible, RequestMiddleware must be the first global middleware, placed before all other middleware.
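In a typical Laravel project this means placing it at the top of the global middleware array; a sketch (adjust to your project's existing middleware list):

// app/Http/Kernel.php
protected $middleware = [
    \Hhxsv5\LaravelS\Components\Prometheus\RequestMiddleware::class, // Must stay first
    // ...the rest of your global middleware
];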

Register ServiceProvider: Hhxsv5\LaravelS\Components\Prometheus\ServiceProvider::class.
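For a standard Laravel project this is usually done in config/app.php (Lumen registers providers in bootstrap/app.php via $app->register() instead); a sketch:

// config/app.php
'providers' => [
    // ...
    Hhxsv5\LaravelS\Components\Prometheus\ServiceProvider::class,
],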

Configure the CollectorProcess in config/laravels.php to collect the metrics of Swoole Worker/Task/Timer processes regularly.

'processes' => Hhxsv5\LaravelS\Components\Prometheus\CollectorProcess::getDefinition(),

Create the route to output metrics.

use Hhxsv5\LaravelS\Components\Prometheus\Exporter;

Route::get('/actuator/prometheus', function () {
    $result = app(Exporter::class)->render();
    return response($result, 200, ['Content-Type' => Exporter::REDNER_MIME_TYPE]);
});

Complete the configuration of Prometheus and start it.

global:
  scrape_interval: 5s
  scrape_timeout: 5s
  evaluation_interval: 30s
scrape_configs:
- job_name: laravel-s-test
  honor_timestamps: true
  metrics_path: /actuator/prometheus
  scheme: http
  follow_redirects: true
  static_configs:
  - targets:
    - 127.0.0.1:5200 # The ip and port of the monitored service
# Dynamically discovered using one of the supported service-discovery mechanisms
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
# - job_name: laravels-eureka
#   honor_timestamps: true
#   scrape_interval: 5s
#   metrics_path: /actuator/prometheus
#   scheme: http
#   follow_redirects: true
  # eureka_sd_configs:
  # - server: http://127.0.0.1:8080/eureka
  #   follow_redirects: true
  #   refresh_interval: 5s

Start Grafana, then import panel json.

Grafana Dashboard

Other features

Configure Swoole events

Supported events:

| Event | Interface | When it happens |
| --- | --- | --- |
| ServerStart | Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface | Occurs when the Master process is starting; this event should not handle complex business logic and should only do simple initialization work. |
| ServerStop | Hhxsv5\LaravelS\Swoole\Events\ServerStopInterface | Occurs when the server exits normally; CANNOT use async or coroutine related APIs in this event. |
| WorkerStart | Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface | Occurs after the Worker/Task process is started and Laravel initialization has been completed. |
| WorkerStop | Hhxsv5\LaravelS\Swoole\Events\WorkerStopInterface | Occurs after the Worker/Task process exits normally. |
| WorkerError | Hhxsv5\LaravelS\Swoole\Events\WorkerErrorInterface | Occurs when an exception or fatal error occurs in the Worker/Task process. |

1.Create an event class to implement the corresponding interface.

namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface;
use Swoole\Atomic;
use Swoole\Http\Server;
class ServerStartEvent implements ServerStartInterface
{
    public function __construct()
    {
    }
    public function handle(Server $server)
    {
        // Initialize a global counter (available across processes)
        $server->atomicCount = new Atomic(2233);

        // Invoked in controller: app('swoole')->atomicCount->get();
    }
}
namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface;
use Swoole\Http\Server;
class WorkerStartEvent implements WorkerStartInterface
{
    public function __construct()
    {
    }
    public function handle(Server $server, $workerId)
    {
        // Initialize a database connection pool
        // DatabaseConnectionPool::init();
    }
}

2.Configuration.

// Edit `config/laravels.php`
'event_handlers' => [
    'ServerStart' => [\App\Events\ServerStartEvent::class], // Trigger events in array order
    'WorkerStart' => [\App\Events\WorkerStartEvent::class],
],

Serverless

Alibaba Cloud Function Compute

Function Compute.

1.Modify bootstrap/app.php and set the storage directory. Because the project directory is read-only, only the /tmp directory can be read and written.

$app->useStoragePath(env('APP_STORAGE_PATH', '/tmp/storage'));

2.Create a shell script laravels_bootstrap and grant executable permission.

#!/usr/bin/env bash
set +e

# Create storage-related directories
mkdir -p /tmp/storage/app/public
mkdir -p /tmp/storage/framework/cache
mkdir -p /tmp/storage/framework/sessions
mkdir -p /tmp/storage/framework/testing
mkdir -p /tmp/storage/framework/views
mkdir -p /tmp/storage/logs

# Set the environment variable APP_STORAGE_PATH, please make sure it's the same as APP_STORAGE_PATH in .env
export APP_STORAGE_PATH=/tmp/storage

# Start LaravelS
php bin/laravels start

3.Configure template.xml.

ROSTemplateFormatVersion: '2015-09-01'
Transform: 'Aliyun::Serverless-2018-04-03'
Resources:
  laravel-s-demo:
    Type: 'Aliyun::Serverless::Service'
    Properties:
      Description: 'LaravelS Demo for Serverless'
    fc-laravel-s:
      Type: 'Aliyun::Serverless::Function'
      Properties:
        Handler: laravels.handler
        Runtime: custom
        MemorySize: 512
        Timeout: 30
        CodeUri: ./
        InstanceConcurrency: 10
        EnvironmentVariables:
          BOOTSTRAP_FILE: laravels_bootstrap

Important notices

Singleton Issue

Under FPM mode, singleton instances are instantiated and recycled within each request: request start => instantiate instance => request end => recycle instance.

Under Swoole Server, all singleton instances are held in memory, with a different lifetime from FPM: request start => instantiate instance => request end => singleton instance is not recycled. So the developer needs to maintain the state of singleton instances across requests.

Common solutions:

Write an XxxCleaner class to clean up the singleton object's state. This class implements the interface Hhxsv5\LaravelS\Illuminate\Cleaners\CleanerInterface and is then registered in the cleaners option of laravels.php.

Reset the state of singleton instances via middleware (see the sketch below).

Re-register the ServiceProvider: add XxxServiceProvider to register_providers in the file laravels.php, so that singleton instances are reinitialized in every request (Refer).
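A minimal sketch of the middleware option above; the payment.gateway binding and its reset() method are hypothetical examples, not part of LaravelS:

// app/Http/Middleware/ResetSingletonState.php
namespace App\Http\Middleware;
use Closure;
class ResetSingletonState
{
    public function handle($request, Closure $next)
    {
        // Under Swoole the container lives across requests, so clear any
        // per-request state held by long-lived singletons before handling.
        app('payment.gateway')->reset(); // hypothetical singleton binding
        return $next($request);
    }
}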

Cleaners

Configure cleaners.

Known issues

Known issues: a package of known issues and solutions.

Debugging method

Logging; if you want to output to the console, you can use the stderr channel: Log::channel('stderr')->debug('debug message').

Laravel Dump Server (integrated by default since Laravel 5.7).

Read request

Read the request via the Illuminate\Http\Request object. $_ENV is readable and $_SERVER is partially readable; CANNOT USE $_GET/$_POST/$_FILES/$_COOKIE/$_REQUEST/$_SESSION/$GLOBALS.

public function form(\Illuminate\Http\Request $request)
{
    $name = $request->input('name');
    $all = $request->all();
    $sessionId = $request->cookie('sessionId');
    $photo = $request->file('photo');
    // Call getContent() to get the raw POST body, instead of file_get_contents('php://input')
    $rawContent = $request->getContent();
    //...
}

Output response

Respond via the Illuminate\Http\Response object, compatible with echo/var_dump()/print_r(); CANNOT USE the functions dd()/exit()/die()/header()/setcookie()/http_response_code().

public function json()
{
    return response()->json(['time' => time()])->header('header1', 'value1')->withCookie('c1', 'v1');
}

Persistent connection

Singleton connections are resident in memory; it is recommended to turn on persistent connections for better performance.

  1. Database connection: it reconnects automatically and immediately after a disconnect.
// config/database.php
'connections' => [
    'my_conn' => [
        'driver'    => 'mysql',
        'host'      => env('DB_MY_CONN_HOST', 'localhost'),
        'port'      => env('DB_MY_CONN_PORT', 3306),
        'database'  => env('DB_MY_CONN_DATABASE', 'forge'),
        'username'  => env('DB_MY_CONN_USERNAME', 'forge'),
        'password'  => env('DB_MY_CONN_PASSWORD', ''),
        'charset'   => 'utf8mb4',
        'collation' => 'utf8mb4_unicode_ci',
        'prefix'    => '',
        'strict'    => false,
        'options'   => [
            // Enable persistent connection
            \PDO::ATTR_PERSISTENT => true,
        ],
    ],
],
  2. Redis connection: it will not reconnect automatically and immediately after a disconnect; it throws a lost-connection exception and reconnects on the next use. You need to make sure that you SELECT the correct DB before operating on Redis every time (see the sketch after the configuration below).
// config/database.php
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'), // It is recommended to use phpredis for better performance.
    'default' => [
        'host'       => env('REDIS_HOST', 'localhost'),
        'password'   => env('REDIS_PASSWORD', null),
        'port'       => env('REDIS_PORT', 6379),
        'database'   => 0,
        'persistent' => true, // Enable persistent connection
    ],
],
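One way to guard against a reused persistent connection sitting on the wrong DB index is to select explicitly before use. A sketch, assuming the phpredis client (select() is proxied through the connection):

use Illuminate\Support\Facades\Redis;

$redis = Redis::connection('default');
$redis->select(0); // Explicitly select the DB index (0 assumed here) before operating
$redis->set('foo', 'bar');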

About memory leaks

Avoid using global variables. If necessary, please clean or reset them manually.

Infinitely appending elements to a static/global variable will lead to OOM (Out of Memory).

class Test
{
    public static $array = [];
    public static $string = '';
}

// Controller
public function test(Request $req)
{
    // Out of Memory
    Test::$array[] = $req->input('param1');
    Test::$string .= $req->input('param2');
}

Memory leak detection method

Modify config/laravels.php: set worker_num=1 and max_request=1000000; remember to change them back after the test;

Add a route /debug-memory-leak without any route middleware to observe the memory changes of the Worker process (a sketch is given after these steps);

Start LaravelS and request /debug-memory-leak until diff_mem is less than or equal to zero; if diff_mem is always greater than zero, it means that there may be a memory leak in Global Middleware or Laravel Framework;

After completing Step 3, alternately request the business routes and /debug-memory-leak (It is recommended to use ab/wrk to make a large number of requests for business routes), the initial increase in memory is normal. After a large number of requests for the business routes, if diff_mem is always greater than zero and curr_mem continues to increase, there is a high probability of memory leak; If curr_mem always changes within a certain range and does not continue to increase, there is a low probability of memory leak.

If you still can't solve it, max_request is the last guarantee.
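The /debug-memory-leak route itself is not provided out of the box here; a hand-rolled sketch that reports curr_mem and diff_mem per Worker could look like this:

// routes/web.php — a sketch only; works per Worker because the route closure stays in memory
Route::get('/debug-memory-leak', function () {
    static $lastMem = 0;
    $currMem = memory_get_usage();
    $diffMem = $currMem - $lastMem;
    $lastMem = $currMem;
    return response()->json([
        'curr_mem' => $currMem,
        'diff_mem' => $diffMem,
    ]);
});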

Linux kernel parameter adjustment


Pressure test


Alternatives

Sponsor

PayPal

BTC

Gitee

Author: hhxsv5
Source Code: https://github.com/hhxsv5/laravel-s 
License: MIT License

#php #laravel #http