Learn how to build modern APIs with Kotlin, a flexible programming language that runs on the JVM.
In this article, you will learn how to build GraphQL APIs with Kotlin, Spring Boot, and MongoDB. Also, as you wouldn’t want to publish insecure APIs, you will learn how to integrate Auth0 in your stack. You can find the final code developed throughout the article in this GitHub repository.
Before proceeding, make sure you have the tools needed to follow this article seamlessly: a recent JDK, a running MongoDB instance, and an IDE such as IntelliJ IDEA.
## What You Will Build
In this article, you will build a GraphQL API that performs some basic CRUD (Create, Retrieve, Update, and Delete) operations. The API will focus on snacks and reviews. Users (or client applications) will be able to use your API to list snacks and their reviews. However, beyond querying the API, they will also be able to update these snacks by issuing GraphQL mutations to create, update, and delete records in your database.
Spring Boot has an initializer tool that helps you bootstrap (or scaffold) your applications faster. So, open the initializer and fill in the options to put your project together:
Here, you are generating a Gradle project with Kotlin and Spring Boot 2.1.3. The group name for the app (or the main package, if you prefer) is `com.auth0`, while the artifact name is `kotlin-graphql`. After filling in these options, use the “search dependencies to add” field to include `Web` and `MongoDB`.
The `Web` dependency is a starter for building web applications, while `MongoDB` aids your database operations.
After adding these dependencies, click on the Generate Project button. This will download a zipped file that contains your project. Extract the project from this file, and use IntelliJ (or your preferred IDE) to open it.
Note: If you use Eclipse or NetBeans, apparently, you are covered. JetBrains (the creator of Kotlin and of IntelliJ IDEA) maintains plugins for both these IDEs (here and here). However, we haven’t tested these plugins and can’t guarantee they will work as expected.
Next, you will have to include some dependencies to help you add a GraphQL API to your Spring Boot and Kotlin application. To add these dependencies, open your `build.gradle` file and update it as follows:
// ./build.gradle
// ...
dependencies {
// ...
implementation 'com.graphql-java:graphql-spring-boot-starter:5.0.2'
implementation 'com.graphql-java:graphiql-spring-boot-starter:5.0.2'
implementation 'com.graphql-java:graphql-java-tools:5.2.4'
}
// ...
After that, open the `application.properties` file located in the `./src/main/resources/` directory and add the following properties:
server.port=9000
spring.data.mongodb.database=kotlin-graphql
spring.data.mongodb.port=27017
Here, you are defining which port your API will use to listen for requests (`9000` in this case), and you are defining some MongoDB connection properties (you might need to add a few more, depending on your MongoDB installation; see the sketch below). With that in place, you are ready to start developing your application.
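For reference, here is a hedged sketch of extra connection properties you might need if your MongoDB instance runs elsewhere or requires authentication (the values are placeholders, not part of this tutorial's setup):
# optional, depending on your MongoDB installation
spring.data.mongodb.host=localhost
spring.data.mongodb.username=<YOUR-DB-USER>
spring.data.mongodb.password=<YOUR-DB-PASSWORD>
spring.data.mongodb.authentication-database=admin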
An entity is a model class that is persisted to a database. Since your API performs CRUD operations, you will need to persist data that will be consumed later. In this section, you will define the entities you’ll use in the course of building your API.
First, create a new package called `entity` inside the `com.auth0.kotlingraphql` one. You will keep all the entities you need in this package. Next, create a new class called `Snack` inside this package and add the following code to it:
// ./src/main/kotlin/com/auth0/kotlingraphql/entity/Snack.kt
package com.auth0.kotlingraphql.entity

import org.springframework.data.annotation.Id
import org.springframework.data.annotation.Transient
import org.springframework.data.mongodb.core.mapping.Document

@Document(collection = "snack")
data class Snack(
        var name: String,
        var amount: Float
) {
    // unique identifier of the snack
    @Id
    var id: String = ""

    // filled in at query time from the reviews collection; not persisted here
    @Transient
    var reviews: List<Review> = ArrayList()
}
This class is a model of a single `Snack` that you will store in your database. Each snack has a `name`, an `amount`, an `id` (a unique identifier), and `reviews`. The `reviews` variable will hold all the reviews associated with a particular snack.
You will use this model when storing snacks in the database, hence the `@Document` annotation. The name of the collection is specified through the `collection` attribute of the annotation; if you do not specify it, Spring Boot will automatically use the class name.
The `id` variable is annotated with `@Id` to tell MongoDB that this variable holds the unique identifier for the entity. The `@Transient` annotation on `reviews` means this variable will not be persisted to the database (you will make the `Review` class persist the association).
Next, create another class called `Review` (still under the `entity` package) and add this snippet:
// ./src/main/kotlin/com/auth0/kotlingraphql/entity/Review.kt
package com.auth0.kotlingraphql.entity
import org.springframework.data.mongodb.core.mapping.Document
@Document(collection = "reviews")
data class Review(
var snackId: String,
var rating: Int,
var text: String
)
The approach used here is similar to the one used to create `Snack`. In this case, you are defining a class that represents a single review.
According to the official Spring documentation, a repository is “a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects.”
In other words, a repository is a class responsible for some form of data storage, retrieval, and manipulation. In this section, you will create repositories to match the two entities you created earlier.
First, create a new package called `repository` (again inside the `com.auth0.kotlingraphql` one) and, inside this package, create a Kotlin interface called `SnackRepository`. To this interface, add the following code:
// ./src/main/kotlin/com/auth0/kotlingraphql/repository/SnackRepository.kt
package com.auth0.kotlingraphql.repository
import com.auth0.kotlingraphql.entity.Snack
import org.springframework.data.mongodb.repository.MongoRepository
import org.springframework.stereotype.Repository
@Repository
interface SnackRepository : MongoRepository<Snack, String>
The interface you have just created extends `MongoRepository` to take advantage of its predefined methods, such as `findAll`, `saveAll`, and `findById`.
The `MongoRepository` interface takes two type parameters, `Snack` and `String`. The first (`Snack`) is the type of the data managed by the repository, while the second (`String`) is the type of its `id` property. As you can imagine, you need `String` here since this is the type of the `id` variable in the `Snack` entity.
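To see what this buys you, here is a short, hypothetical usage sketch (the `repositoryDemo` helper is not part of the tutorial code; it only illustrates the inherited methods):
import com.auth0.kotlingraphql.entity.Snack
import com.auth0.kotlingraphql.repository.SnackRepository
import java.util.UUID

// Hypothetical helper, for illustration only; Spring injects the repository.
fun repositoryDemo(snackRepository: SnackRepository) {
    val snack = Snack(name = "Pretzels", amount = 12.5f)
    snack.id = UUID.randomUUID().toString()
    snackRepository.save(snack)                   // insert (or update if the id exists)
    val allSnacks = snackRepository.findAll()     // List<Snack>
    val byId = snackRepository.findById(snack.id) // Optional<Snack>
    println("stored ${allSnacks.size} snack(s); found by id: ${byId.isPresent}")
}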
The `@Repository` annotation indicates that the class is a repository. Although `SnackRepository` would still work as expected without it, the annotation clearly marks the interface as a persistence component and makes it eligible for Spring’s exception translation, which converts store-specific exceptions into Spring’s unified `DataAccessException` hierarchy.
Next, you will create another Kotlin interface named `ReviewRepository` (still in the `repository` package) and add this:
// ./src/main/kotlin/com/auth0/kotlingraphql/repository/ReviewRepository.kt
package com.auth0.kotlingraphql.repository
import com.auth0.kotlingraphql.entity.Review
import org.springframework.data.mongodb.repository.MongoRepository
import org.springframework.stereotype.Repository
@Repository
interface ReviewRepository : MongoRepository<Review, String>
This is very similar to the first repository you created. The difference is that this new repository manages instances of `Review` (hence `<Review, String>`).
Unlike REST APIs, where you declare endpoints based on the resources they return, in GraphQL you define a schema. This schema describes the types your API exposes and the operations (queries and mutations) that clients can perform, and incoming requests are validated against it.
While a REST API has `POST`, `GET`, `PUT`, and other request methods, GraphQL has just `Query` (the equivalent of `GET` in REST) and `Mutation` (the equivalent of `PUT`, `POST`, `PATCH`, and `DELETE` in REST). In this section, you will learn how to define a GraphQL schema for your Spring Boot and Kotlin application.
For starters, create a new file called `snack.graphqls` in the `./src/main/resources/` directory and add this code to it:
type Query {
snacks: [Snack]
}
type Snack {
id: ID!
name: String
amount: Float
reviews: [Review]
}
type Mutation {
newSnack(name: String!, amount: Float!) : Snack!
deleteSnack(id: ID!) : Boolean
updateSnack(id:ID!, amount: Float!) : Snack!
}
In this file, you declared three types with their respective fields. The `Query` type is a standard type used by a client to request data; it has a field called `snacks` that returns a list of `Snack`. The `Snack` type here mimics the snack entity you created earlier. The `Mutation` type is another standard type that a client application will use to add, update, or delete data.
Now, still in the `./src/main/resources/` directory, create another file called `review.graphqls` and add this code to it:
extend type Query {
reviews(snackId: ID!): [Review]
}
type Review {
snackId: ID!
rating: Int
text: String!
}
extend type Mutation {
newReview(snackId: ID!, rating: Int, text:String!) : Review!
}
In this file, the `extend` keyword attached to `Query` and `Mutation` extends the types declared in the other file. Everything else is similar to the first schema you defined.
A resolver is a function that provides a value for a field or a type declared in your schema. In other words, a GraphQL resolver is responsible for translating your data into the schema you are using. As such, you now have to create corresponding Kotlin functions for the fields you declared in the last section: `snacks`, `newSnack`, `deleteSnack`, `updateSnack`, `reviews`, and `newReview`.
So, the first thing you will do is create a package called `resolvers` inside the main package (i.e., inside `com.auth0.kotlingraphql`). Then, you will create a class called `SnackQueryResolver` inside this new package. After creating this class, add the following code to it:
// ./src/main/kotlin/com/auth0/kotlingraphql/resolvers/SnackQueryResolver.kt
package com.auth0.kotlingraphql.resolvers
import com.auth0.kotlingraphql.entity.Review
import com.auth0.kotlingraphql.entity.Snack
import com.auth0.kotlingraphql.repository.SnackRepository
import com.coxautodev.graphql.tools.GraphQLQueryResolver
import org.springframework.data.mongodb.core.MongoOperations
import org.springframework.data.mongodb.core.query.Criteria
import org.springframework.data.mongodb.core.query.Query
import org.springframework.stereotype.Component
@Component
class SnackQueryResolver(val snackRepository: SnackRepository,
private val mongoOperations: MongoOperations) : GraphQLQueryResolver {
fun snacks(): List<Snack> {
val list = snackRepository.findAll()
for (item in list) {
item.reviews = getReviews(snackId = item.id)
}
return list
}
private fun getReviews(snackId: String): List<Review> {
val query = Query()
query.addCriteria(Criteria.where("snackId").`is`(snackId))
return mongoOperations.find(query, Review::class.java)
}
}
You are creating this class to support the queries defined in the `snack.graphqls` file, hence the name `SnackQueryResolver`. The class implements `GraphQLQueryResolver`, an interface provided by the GraphQL dependency you added earlier to your project. The class is also annotated with `@Component` to configure it as a Spring component (meaning that Spring will automatically detect this class for dependency injection).
Remember that the query type in `snack.graphqls` looks like this:
type Query {
snacks: [Snack]
}
As such, the `SnackQueryResolver` class contains one public function named `snacks`, which returns a list of snacks. Notice that the field name corresponds to the function name. This is important because, otherwise, Spring wouldn’t know that you want this function to resolve the `snacks` query.
In the `snacks` function, `snackRepository` is used to `findAll()` the snacks from the database. Then, for each snack, all of its reviews are fetched alongside.
The next class you should create is called `SnackMutationResolver`. Create it inside the `resolvers` package, then add the following code to it:
// ./src/main/kotlin/com/auth0/kotlingraphql/resolvers/SnackMutationResolver.kt
package com.auth0.kotlingraphql.resolvers
import com.auth0.kotlingraphql.entity.Snack
import com.auth0.kotlingraphql.repository.SnackRepository
import com.coxautodev.graphql.tools.GraphQLMutationResolver
import org.springframework.stereotype.Component
import java.util.*
@Component
class SnackMutationResolver (private val snackRepository: SnackRepository): GraphQLMutationResolver {
fun newSnack(name: String, amount: Float): Snack {
val snack = Snack(name, amount)
snack.id = UUID.randomUUID().toString()
snackRepository.save(snack)
return snack
}
fun deleteSnack(id:String): Boolean {
snackRepository.deleteById(id)
return true
}
fun updateSnack(id:String, amount:Float): Snack {
val snack = snackRepository.findById(id)
snack.ifPresent {
it.amount = amount
snackRepository.save(it)
}
return snack.get()
}
}
You are creating this class to resolve the mutations defined in the `snack.graphqls` file. As such, you have three functions in this class: `newSnack`, which creates a snack with a random UUID as its `id` and saves it; `deleteSnack`, which removes a snack by its `id`; and `updateSnack`, which updates the `amount` of an existing snack.
Next, you will create resolvers for the `review.graphqls` schema. So, create another class inside the `resolvers` package named `ReviewQueryResolver` and add this code to it:
// ./src/main/kotlin/com/auth0/kotlingraphql/resolvers/ReviewQueryResolver.kt
package com.auth0.kotlingraphql.resolvers
import com.auth0.kotlingraphql.entity.Review
import com.coxautodev.graphql.tools.GraphQLQueryResolver
import org.springframework.data.mongodb.core.MongoOperations
import org.springframework.data.mongodb.core.query.Criteria
import org.springframework.data.mongodb.core.query.Query
import org.springframework.stereotype.Component
@Component
class ReviewQueryResolver(val mongoOperations: MongoOperations) : GraphQLQueryResolver {
fun reviews(snackId: String): List<Review> {
val query = Query()
query.addCriteria(Criteria.where("snackId").`is`(snackId))
return mongoOperations.find(query, Review::class.java)
}
}
The `ReviewQueryResolver` class handles the `reviews` query defined in the `review.graphqls` file. As such, this class contains only one function, `reviews`, which returns a list of reviews from the database depending on the `snackId` passed in.
Finally, you will create the last class in this section (still in the `resolvers` package). You will call it `ReviewMutationResolver` and add this code to it:
// ./src/main/kotlin/com/auth0/kotlingraphql/resolvers/ReviewMutationResolver.kt
package com.auth0.kotlingraphql.resolvers
import com.coxautodev.graphql.tools.GraphQLMutationResolver
import com.auth0.kotlingraphql.entity.Review
import com.auth0.kotlingraphql.repository.ReviewRepository
import org.springframework.stereotype.Component
@Component
class ReviewMutationResolver (private val reviewRepository: ReviewRepository): GraphQLMutationResolver {
fun newReview(snackId: String, rating: Int, text:String): Review {
val review = Review(snackId, rating, text)
reviewRepository.save(review)
return review
}
}
The resolver here is for the mutation field in the `review.graphqls` file. In this function, a new review is added to the database using a snack `id`, a `rating` value, and a `text`.
With the resolvers you’ve just created, anytime a client application constructs a query, your functions will be able to provide the results for the requested fields. As such, you are ready to take your app for a spin. To run your Spring Boot and Kotlin application, you have two alternatives. You can either use the play button that is (most likely) available in your IDE, or you can use a terminal to issue the following command from project root:
./gradlew bootRun
After your application is up and running, open http://localhost:9000/graphiql on your browser. There, you will see a GraphiQL client app that you can use to test your API.
On that application, you can use the `newSnack` mutation to add a snack. To see this in action, copy and paste the following code into the left-hand side panel and click on the play button (or hit Ctrl + Enter on your keyboard):
mutation {
newSnack(name: "French Fries", amount: 40.5) {
id
name
amount
}
}
If everything runs as expected, you will get the following result back:
{
"data": {
"newSnack": {
"id": "da84885b-b160-4c09-a5ea-3484bac4d5f9",
"name": "French Fries",
"amount": 40.5
}
}
}
You just created a new snack. Awesome, right? Now, you can create a review for this snack:
mutation {
newReview(snackId:"SNACK_ID",
text: "Awesome snack!", rating:5
){
snackId, text, rating
}
}
Note: Replace `SNACK_ID` with the `id` returned by the previous mutation.
Running this command will result in the following response:
{
"data": {
"newReview": {
"snackId": "da84885b-b160-4c09-a5ea-3484bac4d5f9",
"text": "Awesome snack!",
"rating": 5
}
}
}
Now, to fetch the snacks and reviews persisted in your database, you can issue the following query:
query {
snacks {
name,
reviews {
text, rating
}
}
}
Running this query will get you back a response similar to this:
{
"data": {
"snacks": [
{
"name": "French Fries",
"reviews": [
{
"text": "Awesome snack!",
"rating": 5
}
]
}
]
}
}
This is the beauty of GraphQL: with just one query, you can decide the exact format you need for the result.
As expected, your GraphQL API is working perfectly. However, you need to add a little more spice to it. For example, you probably don’t want to allow unauthenticated users to consume your API, right? One easy way to fix this is to integrate your app with Auth0.
So, if you don’t have an Auth0 account yet, now is a good time to create a free one. Then, after signing up (or signing in), head to the APIs section of your Auth0 dashboard and click on the Create API button. Then, fill in the form that Auth0 shows as follows:
Then, click on the create button to finish the process and head back to your project. There, open your `build.gradle` file and add the Spring OAuth2 dependency:
// ...
dependencies {
// ...
implementation 'org.springframework.security.oauth.boot:spring-security-oauth2-autoconfigure:2.1.3.RELEASE'
}
// ...
Next, open the `application.properties` file (located in the `./src/main/resources/` directory) and add these two properties:
# ...
security.oauth2.resource.id=<YOUR-AUTH0-API-IDENTIFIER>
security.oauth2.resource.jwk.keySetUri=https://<YOUR-AUTH0-DOMAIN>/.well-known/jwks.json
Note: Replace `<YOUR-AUTH0-API-IDENTIFIER>` with the identifier you set for your Auth0 API and `<YOUR-AUTH0-DOMAIN>` with your Auth0 domain.
Now, create a new class called `SecurityConfig` inside the `com.auth0.kotlingraphql` package and add the following code to it:
// ./src/main/kotlin/com/auth0/kotlingraphql/SecurityConfig.kt
package com.auth0.kotlingraphql

import org.springframework.beans.factory.annotation.Value
import org.springframework.context.annotation.Configuration
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer
import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter
import org.springframework.security.oauth2.config.annotation.web.configurers.ResourceServerSecurityConfigurer
@Configuration
@EnableResourceServer
class SecurityConfig : ResourceServerConfigurerAdapter() {
@Value("\${security.oauth2.resource.id}")
private lateinit var resourceId: String
@Throws(Exception::class)
override fun configure(http: HttpSecurity) {
http.authorizeRequests()
.mvcMatchers("/graphql").authenticated()
.anyRequest().permitAll()
}
@Throws(Exception::class)
override fun configure(resources: ResourceServerSecurityConfigurer) {
resources.resourceId(resourceId)
}
}
Spring Boot will automatically detect this class and configure the integration with Auth0 for you (by using the properties you defined above). Also, as you can see in the code, this class ensures that any request to `/graphql` is `authenticated()` and that other requests (like those to the GraphiQL client app) are permitted (`permitAll()`).
With that in place, stop the running instance of your app (which is still insecure) then rerun it (remember, you can also use your IDE to run it):
./gradlew bootRun
After running your API, open the GraphiQL client tool, and you will see that you get an error instantaneously. This happens because this tool issues a query (without authentication) right after loading and because your API is secured now.
To be able to issue requests to your API again, you will need an access token. The process of getting a token depends on what type of client you are dealing with. This is out of scope here but, if you are dealing with a SPA (like those created with React, Angular, and Vue.js), you can use the [auth0-js](https://auth0.com/docs/libraries/auth0js/v9) NPM library. If you are dealing with some other type of client (e.g., a regular web application or a native application), check Auth0’s docs for more info.
Nevertheless, to see the whole thing in action, you can head back to your Auth0 dashboard, open the API you created before, and move to the Test section. In this section, you will see a button called Copy Token that provides a temporary token you can use to test your API.
After clicking on this button, Auth0 will copy the token to your clipboard, and you will be able to use it to issue requests. However, as the GraphiQL tool does not have a place to configure the access token on the request, you will need another client. For example, you can use Postman (a popular graphical HTTP client) or `curl` (a command-line program) to issue requests with headers.
No matter what HTTP client you choose, you will have to configure it to send a header called `Authorization` with a value that looks like `Bearer <YOUR-TOKEN>`. Note that you will have to replace `<YOUR-TOKEN>` with the token you copied from the Auth0 dashboard.
For example, if you are using `curl`, you can issue a query request to your GraphQL API as follows:
# set a local variable with the token
TOKEN=<YOUR-TOKEN>
# issue the query request
curl -X POST -H 'Authorization: Bearer '$TOKEN -H 'Content-Type: application/json' -d '{
"query": "{ snacks { name } }"
}' http://localhost:9000/graphql
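If you would rather call the secured endpoint from code, here is a hedged Kotlin sketch using the JDK 11 `HttpClient` (the `TOKEN` environment variable is an assumption that plays the same role as the shell variable above):
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // assumes you exported TOKEN first, as in the curl example
    val token = System.getenv("TOKEN") ?: error("set the TOKEN environment variable first")
    val body = """{"query": "{ snacks { name } }"}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:9000/graphql"))
        .header("Authorization", "Bearer $token")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body()) // the same JSON the curl call returns
}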
Done! You have just finished securing your Kotlin and GraphQL API with Auth0. How cool was that?
If you encounter an OAuth2 Spring error like [error creating a bean with name springSecurityFilterChain](https://stackoverflow.com/questions/47866963/oauth2-spring-error-creating-bean-with-name-springsecurityfilterchain), you will have to add these dependencies to your `build.gradle` file:
// ./build.gradle
// ...
dependencies {
// ...
implementation 'javax.xml.bind:jaxb-api:2.3.0'
implementation 'com.sun.xml.bind:jaxb-core:2.3.0'
implementation 'com.sun.xml.bind:jaxb-impl:2.3.0'
implementation 'javax.activation:activation:1.1.1'
}
// ...
After adding these new dependencies, sync your Gradle files and try running your app again. If you still have trouble, ping us on the comments box below.
*Originally published by Idorenyin Obong at https://auth0.com*
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown throughput increases of up to 5X over stock MySQL and 3X over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora brings enhancements and innovations in storage and computing, both horizontal and vertical.
Aurora supports up to 128TB of storage capacity, with dynamic scaling of the storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas: each region can have an additional 15 Aurora replicas. In addition, Aurora provides a multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions with low latency.
Aurora already provides great scalability as user data volume grows. Can it handle even more data and support more concurrent access? You may consider using sharding to support the configuration of multiple underlying Aurora clusters. To this end, this series of blogs, including this one, provides a reference for choosing between Proxy and JDBC for sharding.
AWS Aurora offers a single relational database. Primary-secondary, multi-primary, global database, and other forms of hosting architecture can satisfy the architectural scenarios above. However, Aurora doesn’t provide direct support for sharding scenarios, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, some problems have to be solved, such as cross-node database Join, associated queries, distributed transactions, SQL sorting, page turning, function calculation, global primary keys, capacity planning, and secondary capacity expansion after sharding.
It is generally accepted that when a MySQL table holds fewer than 10 million rows, query time is optimal, because the height of its BTREE index is then between 3 and 5. Data sharding reduces the amount of data in a single table and, at the same time, distributes the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
1. Advantages of vertical sharding: splitting by business domain keeps the rules clear and reduces the data volume and load of any single database.
2. Disadvantages of vertical sharding: cross-database Join can only be implemented by interface aggregation, which increases the complexity of development.
3. Advantages of horizontal sharding: a single table no longer becomes a bottleneck, since data volume and read/write load are spread across nodes.
4. Disadvantages of horizontal sharding: the performance of cross-shard Join is poor.
Based on the analysis above, and on the available studies of popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to introduce how the combination of these two products meets various forms of sharding and how it solves the problems brought by sharding.
ShardingSphere is an open source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
Sharding-JDBC is a lightweight Java framework that provides additional services at the JDBC layer: it connects directly to the database from the client side, needs no extra deployment, and is compatible with JDBC-based ORM frameworks.
Hybrid Structure Integrating Sharding-JDBC and Applications
Sharding-JDBC’s core concepts:
Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.
Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.
Logic table: The logical name of the horizontal sharding databases (tables) with the same schema. For instance, the logic table of the order product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: It refers to the primary table and the joiner table with the same sharding rules. For example, product_order table and product_order_item are sharded by order_id, so they are binding tables with each other. Cartesian product correlation will not appear in the multi-tables correlating query, so the query efficiency will increase greatly.
Broadcast table: It refers to tables that exist in all sharding database sources. The schema and data must be consistent in each database. It applies to small tables that need to be joined with big tables in queries, such as dictionary and configuration tables.
Download the example project code locally. In order to ensure the stability of the test code, we choose the shardingsphere-example-4.0.0 version.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties #library split profile only
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table split profile
application.properties #spring boot profile
Code logic description:
The entry class of the Spring Boot application is sketched below; execute it to run the project.
The demo’s execution logic is to insert data, print it, delete it, and print it again (see the processSuccess method shown later in this article).
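A minimal sketch of what such an entry point looks like, assuming a placeholder class name (ShardingExampleApplication); the actual shardingsphere-example project is written in Java, while this sketch uses Kotlin to match the rest of this page:
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

// Placeholder entry point: Spring Boot scans this package, wires the
// sharding-jdbc data sources from the active profile, and runs the demo.
@SpringBootApplication
class ShardingExampleApplication

fun main(args: Array<String>) {
    runApplication<ShardingExampleApplication>(*args)
}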
As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to meet users’ requirements to write and read with strong consistency, and a read-only endpoint to serve reads that do not need strong consistency. Aurora’s read and write latency is within single-digit milliseconds, much lower than MySQL’s binlog-based logical replication, so a lot of load can be directed to the read-only endpoint.
Through the one primary and multiple secondary configuration, query requests can be evenly distributed to multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but applications on the upper-layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of SQL statements and certain routing policies.
ShardingSphere-JDBC provides a read/write splitting feature that is integrated with application programs, so the complex configuration between applications and database clusters is kept out of the application code. Developers can manage the data sources through configuration files and combine them with ORM frameworks such as Spring JPA and MyBatis, completely separating this duplicated logic from the code, which greatly improves maintainability and reduces the coupling between code and database.
Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.
application.properties Spring Boot master profile description:
You need to replace the placeholder values with your own environment configuration.
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data souce-master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log,and you can see the conversion from logical SQL to actual SQL from the print
spring.shardingsphere.props.sql.show=true
As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.
As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in the form of polling.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project.
2. Perform a failover on Aurora’s console.
3. Execute the Rest API request.
4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers successfully.
5. The following figure shows the process of executing the failover. It takes about 37 seconds from the last successful SQL write before the failover to the next successful SQL write. That is, the application recovers automatically from the Aurora failover, and the recovery time is about 37 seconds.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties sharding-jdbc profile description:
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When the Sharding-JDBC routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table and there is only one instance here, a single t_address table is created. For t_order, two physical tables, t_order_0 and t_order_1, are created.
2. Write operation
As shown in the figure below, the Logic SQL inserts a record into t_order. When Sharding-JDBC executes it, data is distributed to t_order_0 and t_order_1 according to the table-splitting rules.
When t_order and t_order_item are bound, the records associated with order and order_item are placed on the same physical table.
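For example, with the modulo-2 rules above, an order with order_id = 7 lands in t_order_1 (7 % 2 = 1), and its items, sharded by the same order_id, land in t_order_item_1, so a join between them never needs to cross a shard.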
3. Read operation
As shown in the figure below, join queries on order and order_item under the binding table locate the physical shard precisely based on the binding relationship.
Join queries on order and order_item without the binding will traverse all shards.
Create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s database-splitting and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1; the three tables t_address, t_order, and t_order_item are created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
The sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the sharding column and routing policy.
3. Read operation
Querying order is routed to the corresponding Aurora instance according to the database routing rules.
Querying address: since address is a broadcast table, an instance is randomly selected from the nodes in use and queried.
As shown in the figure below, join queries on order and order_item under the binding table locate the physical shard precisely based on the binding relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the global table t_address will be created on both Aurora instances.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=jdbc:mysql://<your ds_0 endpoint>:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1; the three tables t_address, t_order, and t_order_item are created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
The sharded tables t_order and t_order_item are written to the tables on the corresponding instance according to the sharding column and routing policy.
3. Read operation
The read operation is similar to the database-splitting function verification described in section 2.4.3.
The following figure shows the physical tables of the created database instances.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties sharding-jdbc profile description:
The url, username, and password of the databases need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s database-splitting and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1; the three tables t_address, t_order, and t_order_item are created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
The sharded tables t_order and t_order_item are written to the tables on the corresponding instance according to the sharding column and routing policy.
3. Read operation
The join queries on order and order_item under the binding table are shown below.
3. Conclusion
As an open source product focused on database enhancement, ShardingSphere is pretty good in terms of community activity, product maturity, and documentation richness.
Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer like Proxy, so the complexity of operation and maintenance is reduced, and its latency is theoretically lower than Proxy’s thanks to the absence of that layer. In addition, ShardingSphere-JDBC can support a variety of relational databases based on SQL standards, such as MySQL, PostgreSQL, Oracle, and SQL Server.
However, because Sharding-JDBC is integrated with the application program, it only supports the Java language for now and depends strongly on the application. Nevertheless, Sharding-JDBC separates all sharding configuration from the application program, which means relatively small changes when switching to other middleware.
In conclusion, Sharding-JDBC is a good choice if you use a Java-based system, have to interconnect with different relational databases, and don’t want to bother with introducing an intermediate layer.
Author
Sun Jinhua
A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related architecture design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for e-commerce platforms of automotive companies. He also worked as a senior engineer at a leading global communications equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture, databases, middleware, IoT, and more.