Build a Microservices Architecture with Spring Boot and Spring Cloud


This tutorial shows you how to build a microservices architecture with Spring Boot and Spring Cloud.

Java is a great language to use when developing a microservice architecture. In fact, some of the biggest names in our industry use it. Have you ever heard of Netflix, Amazon, or Google? What about eBay, Twitter, and LinkedIn? Yes, major companies handling incredible traffic are doing it with Java.

Implementing a microservices architecture in Java isn’t for everyone. For that matter, implementing microservices, in general, isn’t often needed. Most companies do it to scale their people, not their systems. If you’re going to scale your people, hiring Java developers is one of the best ways to do it. After all, there are more developers fluent in Java than most other languages - though JavaScript seems to be catching up quickly!

The Java ecosystem has some well-established patterns for developing microservice architectures. If you’re familiar with Spring, you’ll feel right at home developing with Spring Boot and Spring Cloud. Since that’s one of the quickest ways to get started, I figured I’d walk you through a quick tutorial.

Create Java Microservices with Spring Cloud and Spring Boot

In most of my tutorials, I show you how to build everything from scratch. Today I’d like to take a different approach and step through a pre-built example with you. Hopefully, this will be a bit shorter and easier to understand.

You can start by cloning the @oktadeveloper/java-microservices-examples repository.

git clone https://github.com/oktadeveloper/java-microservices-examples.git
cd java-microservices-examples/spring-boot+cloud

In the spring-boot+cloud directory, there are three projects:

discovery-service: a Netflix Eureka server, used for service discovery.

car-service: a simple Car Service that uses Spring Data REST to serve up a REST API of cars.

api-gateway: an API gateway that has a /cool-cars endpoint that talks to the car-service and filters out cars that aren’t cool (in my opinion, of course).

I created all of these applications using start.spring.io’s REST API and HTTPie.

http https://start.spring.io/starter.zip bootVersion==2.2.5.RELEASE javaVersion==11 \
  artifactId==discovery-service name==eureka-service \
  dependencies==cloud-eureka-server baseDir==discovery-service | tar -xzvf -

http https://start.spring.io/starter.zip bootVersion==2.2.5.RELEASE \
  artifactId==car-service name==car-service baseDir==car-service \
  dependencies==actuator,cloud-eureka,data-jpa,h2,data-rest,web,devtools,lombok | tar -xzvf -

http https://start.spring.io/starter.zip bootVersion==2.2.5.RELEASE \
  artifactId==api-gateway name==api-gateway baseDir==api-gateway \
  dependencies==cloud-eureka,cloud-feign,data-rest,web,cloud-hystrix,lombok | tar -xzvf -

Java Service Discovery with Netflix Eureka

The discovery-service is configured the same way you'd configure most Eureka servers. It has an @EnableEurekaServer annotation on its main class, plus properties that set its port and keep the server from registering with itself.

server.port=8761
eureka.client.register-with-eureka=false
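
For reference, its main class looks something like this (a minimal sketch; the exact package and class names may differ slightly in the cloned project):

package com.example.eurekaservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@SpringBootApplication
public class EurekaServiceApplication {

    public static void main(String[] args) {
        // starts an embedded Eureka server on the port configured above (8761)
        SpringApplication.run(EurekaServiceApplication.class, args);
    }
}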

The car-service and api-gateway projects are configured in a similar fashion. Both have a unique name defined, and the car-service is configured to run on port 8090 so it doesn't conflict with the gateway's port 8080.

car-service/src/main/resources/application.properties

server.port=8090
spring.application.name=car-service

api-gateway/src/main/resources/application.properties

spring.application.name=api-gateway

The main class in both projects is annotated with @EnableDiscoveryClient.

Build a Java Microservice with Spring Data REST

The car-service provides a REST API that lets you CRUD (Create, Read, Update, and Delete) cars. It creates a default set of cars when the application loads using an ApplicationRunner bean.

car-service/src/main/java/com/example/carservice/CarServiceApplication.java

package com.example.carservice;

import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.NonNull;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import java.util.stream.Stream;

@EnableDiscoveryClient
@SpringBootApplication
public class CarServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(CarServiceApplication.class, args);
    }

    @Bean
    ApplicationRunner init(CarRepository repository) {
        return args -> {
            Stream.of("Ferrari", "Jaguar", "Porsche", "Lamborghini", "Bugatti",
                    "AMC Gremlin", "Triumph Stag", "Ford Pinto", "Yugo GV").forEach(name -> {
                repository.save(new Car(name));
            });
            repository.findAll().forEach(System.out::println);
        };
    }
}

@Data
@NoArgsConstructor
@Entity
class Car {

    public Car(String name) {
        this.name = name;
    }

    @Id
    @GeneratedValue
    private Long id;

    @NonNull
    private String name;
}

@RepositoryRestResource
interface CarRepository extends JpaRepository<Car, Long> {
}
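
Because the repository is annotated with @RepositoryRestResource, Spring Data REST exposes it at /cars with HAL+JSON responses. As a quick sanity check, you can exercise the API with HTTPie; note that in the cloned project, the security configuration described later will demand an access token, so these only work as-is in a fresh, unsecured project:

# list all cars (returned under _embedded.cars in the HAL response)
http :8090/cars

# create a new car
http POST :8090/cars name="DeLorean DMC-12"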

Spring Cloud + Feign and Hystrix in an API Gateway

Feign makes writing Java HTTP clients easier. Spring Cloud makes it possible to create a Feign client with just a few lines of code. Hystrix makes it possible to add failover capabilities to your Feign clients so they’re more resilient.

The api-gateway uses Feign and Hystrix to talk to the downstream car-service and failover to a fallback() method if it’s unavailable. It also exposes a /cool-cars endpoint that filters out cars you might not want to own.

api-gateway/src/main/java/com/example/apigateway/ApiGatewayApplication.java

package com.example.apigateway;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import lombok.Data;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.hateoas.CollectionModel;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.Collection;
import java.util.stream.Collectors;

@EnableFeignClients
@EnableCircuitBreaker
@EnableDiscoveryClient
@SpringBootApplication
public class ApiGatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayApplication.class, args);
    }
}

@Data
class Car {
    private String name;
}

@FeignClient("car-service")
interface CarClient {

    @GetMapping("/cars")
    @CrossOrigin
    CollectionModel<Car> readCars();
}

@RestController
class CoolCarController {

    private final CarClient carClient;

    public CoolCarController(CarClient carClient) {
        this.carClient = carClient;
    }

    private Collection<Car> fallback() {
        return new ArrayList<>();
    }

    @GetMapping("/cool-cars")
    @CrossOrigin
    @HystrixCommand(fallbackMethod = "fallback")
    public Collection<Car> goodCars() {
        return carClient.readCars()
                .getContent()
                .stream()
                .filter(this::isCool)
                .collect(Collectors.toList());
    }

    private boolean isCool(Car car) {
        return !car.getName().equals("AMC Gremlin") &&
                !car.getName().equals("Triumph Stag") &&
                !car.getName().equals("Ford Pinto") &&
                !car.getName().equals("Yugo GV");
    }
}

Run a Java Microservices Architecture

If you run all of these services with ./mvnw spring-boot:run in separate terminal windows, you can navigate to http://localhost:8761 and see they’ve registered with Eureka.

Eureka Server
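
To be explicit about what that means, you can start each service like this, one command per terminal (start the discovery service first so the others can register):

cd discovery-service && ./mvnw spring-boot:run
cd car-service && ./mvnw spring-boot:run
cd api-gateway && ./mvnw spring-boot:run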

If you cloned from GitHub to begin, and you navigate to http://localhost:8080/cool-cars in your browser, you’ll be redirected to Okta. What the?

Secure Java Microservices with OAuth 2.0 and OIDC

I’ve already configured security in this microservices architecture using OAuth 2.0 and OIDC. What’s the difference between the two? OIDC is an extension to OAuth 2.0 that provides identity. It also provides discovery so all the different OAuth 2.0 endpoints can be discovered from a single URL (called an issuer).
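
For example, with Okta, all of the OAuth 2.0 endpoints can be discovered from the issuer's well-known metadata URL (shown here with a placeholder domain):

https://{yourOktaDomain}/oauth2/default/.well-known/openid-configuration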

How did I configure security for all these microservices? I’m glad you asked!

I added Okta’s Spring Boot starter to the pom.xml in api-gateway and car-service:

<dependency>
    <groupId>com.okta.spring</groupId>
    <artifactId>okta-spring-boot-starter</artifactId>
    <version>1.4.0</version>
</dependency>

Then I created a new OIDC app in Okta, configured with authorization code flow. You’ll need to complete the following steps if you want to see everything in action.

Open a terminal window and navigate to the api-gateway project.

Create a Web Application in Okta

Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create. Select the default app name, or change it as you see fit. Choose Web and press Enter.

Select Okta Spring Boot Starter. Accept the default Redirect URI values provided for you. That is, a Login Redirect of http://localhost:8080/login/oauth2/code/okta and a Logout Redirect of http://localhost:8080.

What does the Okta CLI do?

The Okta CLI will create an OIDC Web App in your Okta Org. It will add the redirect URIs you specified and grant access to the Everyone group. You will see output like the following when it’s finished:

Okta application configuration has been written to:
  /path/to/app/src/main/resources/application.properties

Open src/main/resources/application.properties to see the issuer and credentials for your app.

okta.oauth2.issuer=https://dev-133337.okta.com/oauth2/default
okta.oauth2.client-id=0oab8eb55Kb9jdMIr5d6
okta.oauth2.client-secret=NEVER-SHOW-SECRETS

Copy these keys and values into the car-service project's application.properties file.

The Java code in the section below already exists, but I figured I’d explain it so you know what’s going on.

Configure Spring Security for OAuth 2.0 Login and Resource Server

In ApiGatewayApplication.java, I added Spring Security configuration to enable OAuth 2.0 login and enable the gateway as a resource server.

@Configuration
static class OktaOAuth2WebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // @formatter:off
        http
            .authorizeRequests().anyRequest().authenticated()
                .and()
            .oauth2Login()
                .and()
            .oauth2ResourceServer().jwt();
        // @formatter:on
    }
}

The resource server configuration is not used in this example, but I added it in case you want to hook up a mobile app or SPA to this gateway. If you're using a SPA, you'll also need to add a bean to configure CORS.

@Bean
public FilterRegistrationBean<CorsFilter> simpleCorsFilter() {
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowCredentials(true);
    config.setAllowedOrigins(Collections.singletonList("*"));
    config.setAllowedMethods(Collections.singletonList("*"));
    config.setAllowedHeaders(Collections.singletonList("*"));
    source.registerCorsConfiguration("/**", config);
    FilterRegistrationBean<CorsFilter> bean = new FilterRegistrationBean<>(new CorsFilter(source));
    bean.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return bean;
}

If you do use a CORS filter like this one, I recommend you change the origins, methods, and headers to be more specific, increasing security.
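
For example, if your SPA runs at http://localhost:3000 (a hypothetical origin), you might tighten the filter by replacing the singletonList("*") calls above with something like the following (add an import for java.util.Arrays if it isn't present):

// allow only the SPA's origin, the methods it uses, and the headers it sends
config.setAllowedOrigins(Collections.singletonList("http://localhost:3000"));
config.setAllowedMethods(Arrays.asList("GET", "POST"));
config.setAllowedHeaders(Arrays.asList("Authorization", "Content-Type"));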

The CarServiceApplication.java is only configured as a resource server since it’s not expected to be accessed directly.

@Configuration
static class OktaOAuth2WebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // @formatter:off
        http
            .authorizeRequests().anyRequest().authenticated()
                .and()
            .oauth2ResourceServer().jwt();
        // @formatter:on
    }
}

To make it possible for the API gateway to access the Car Service, I created a UserFeignClientInterceptor.java in the API gateway project.

api-gateway/src/main/java/com/example/apigateway/UserFeignClientInterceptor.java

package com.example.apigateway;

import feign.RequestInterceptor;
import feign.RequestTemplate;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClient;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientService;
import org.springframework.security.oauth2.client.authentication.OAuth2AuthenticationToken;
import org.springframework.security.oauth2.core.OAuth2AccessToken;
import org.springframework.stereotype.Component;

@Component
public class UserFeignClientInterceptor implements RequestInterceptor {
    private static final String AUTHORIZATION_HEADER = "Authorization";
    private static final String BEARER_TOKEN_TYPE = "Bearer";
    private final OAuth2AuthorizedClientService clientService;

    public UserFeignClientInterceptor(OAuth2AuthorizedClientService clientService) {
        this.clientService = clientService;
    }

    @Override
    public void apply(RequestTemplate template) {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        OAuth2AuthenticationToken oauthToken = (OAuth2AuthenticationToken) authentication;
        OAuth2AuthorizedClient client = clientService.loadAuthorizedClient(
                oauthToken.getAuthorizedClientRegistrationId(),
                oauthToken.getName());

        OAuth2AccessToken accessToken = client.getAccessToken();
        template.header(AUTHORIZATION_HEADER, String.format("%s %s", BEARER_TOKEN_TYPE, accessToken.getTokenValue()));
    }
}

I configured it as a RequestInterceptor in ApiGatewayApplication.java:

@Bean
public RequestInterceptor getUserFeignClientInterceptor(OAuth2AuthorizedClientService clientService) {
    return new UserFeignClientInterceptor(clientService);
}

And, I added two properties in api-gateway/src/main/resources/application.properties so Feign is Spring Security-aware.

feign.hystrix.enabled=true
hystrix.shareSecurityContext=true

See Java Microservices Running with Security Enabled

Run all the applications with ./mvnw spring-boot:run in separate terminal windows, or in your IDE if you prefer.

To make it simpler to run in an IDE, there is an aggregator pom.xml in the root directory. If you've installed IntelliJ IDEA's command line launcher, you just need to run idea pom.xml.

Navigate to http://localhost:8080/cool-cars and you’ll be redirected to Okta to log in.

Okta Login

Enter the username and password for your Okta developer account and you should see a list of cool cars.

Cool Cars

If you made it this far and got the example apps running, congratulations! You're super cool! 😎

Use Netflix Zuul and Spring Cloud to Proxy Routes

Another handy feature you might like in your microservices architecture is Netflix Zuul. Zuul is a gateway service that provides dynamic routing, monitoring, resiliency, and more.

To add Zuul, I added it as a dependency to api-gateway/pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>

Then I added @EnableZuulProxy to the ApiGatewayApplication class.

import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@EnableZuulProxy
@SpringBootApplication
public class ApiGatewayApplication {
    ...
}

To pass the access token to proxied routes, I created an AuthorizationHeaderFilter class that extends ZuulFilter.

package com.example.apigateway;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.core.Ordered;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClient;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientService;
import org.springframework.security.oauth2.client.authentication.OAuth2AuthenticationToken;
import org.springframework.security.oauth2.core.OAuth2AccessToken;

import java.util.Optional;

import static org.springframework.cloud.netflix.zuul.filters.support.FilterConstants.PRE_TYPE;

public class AuthorizationHeaderFilter extends ZuulFilter {

    private final OAuth2AuthorizedClientService clientService;

    public AuthorizationHeaderFilter(OAuth2AuthorizedClientService clientService) {
        this.clientService = clientService;
    }

    @Override
    public String filterType() {
        return PRE_TYPE;
    }

    @Override
    public int filterOrder() {
        return Ordered.LOWEST_PRECEDENCE;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        Optional<String> authorizationHeader = getAuthorizationHeader();
        authorizationHeader.ifPresent(s -> ctx.addZuulRequestHeader("Authorization", s));
        return null;
    }

    private Optional<String> getAuthorizationHeader() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        OAuth2AuthenticationToken oauthToken = (OAuth2AuthenticationToken) authentication;
        OAuth2AuthorizedClient client = clientService.loadAuthorizedClient(
                oauthToken.getAuthorizedClientRegistrationId(),
                oauthToken.getName());

        OAuth2AccessToken accessToken = client.getAccessToken();

        if (accessToken == null) {
            return Optional.empty();
        } else {
            String tokenType = accessToken.getTokenType().getValue();
            String authorizationHeaderValue = String.format("%s %s", tokenType, accessToken.getTokenValue());
            return Optional.of(authorizationHeaderValue);
        }
    }
}

You might notice that there’s code in the getAuthorizationHeader() method that’s very similar to the code that’s in UserFeignClientInterceptor. Since it’s only a few lines, I opted not to move these to a utility class. The Feign interceptor is for the @FeignClient, while the Zuul filter is for Zuul-proxied requests.

To make Spring Boot and Zuul aware of this filter, I registered it as a bean in the main application class.

@Bean
public AuthorizationHeaderFilter authHeaderFilter(OAuth2AuthorizedClientService clientService) {
    return new AuthorizationHeaderFilter(clientService);
}

To proxy requests from the API Gateway to the Car Service, I added routes to api-gateway/src/main/resources/application.properties.

zuul.routes.car-service.path=/cars
zuul.routes.car-service.url=http://localhost:8090

zuul.routes.home.path=/home
zuul.routes.home.url=http://localhost:8090

zuul.sensitive-headers=Cookie,Set-Cookie
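
With these routes in place, the gateway proxies matching requests to the Car Service, while the sensitive-headers setting keeps cookies from being forwarded downstream. For example:

http://localhost:8080/cars  ->  car-service at http://localhost:8090
http://localhost:8080/home  ->  car-service at http://localhost:8090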

I added a HomeController to the car-service project for the /home route.

package com.example.carservice;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationToken;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.security.Principal;

@RestController
public class HomeController {

    private final static Logger log = LoggerFactory.getLogger(HomeController.class);

    @GetMapping("/home")
    public String howdy(Principal principal) {
        String username = principal.getName();
        JwtAuthenticationToken token = (JwtAuthenticationToken) principal;
        log.info("claims: " + token.getTokenAttributes());
        return "Hello, " + username;
    }
}

Confirm Your Zuul Routes Work

Since these changes are already in the project you cloned, you should be able to view http://localhost:8080/cars and http://localhost:8080/home in your browser.

Home with Zuul

 

Original article source at https://developer.okta.com


Enhance Amazon Aurora Read/Write Capability with ShardingSphere-JDBC

1. Introduction

Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown up to 5x the throughput of stock MySQL and 3x that of stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora delivers enhancements and innovations in storage and compute, scaling both horizontally and vertically.

Aurora supports up to 128TB of storage capacity and dynamic scaling of the storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations with up to 15 additional Aurora replicas per region. In addition, Aurora provides a multi-primary architecture supporting up to four read/write nodes. Its Serverless option allows vertical scaling with typical scaling latency under a second, while Global Database enables a single database cluster to span multiple AWS Regions with low latency.

Aurora already provides great scalability as user data volume grows. Can it handle even more data and support more concurrent access? You may consider sharding across multiple underlying Aurora clusters. To this end, this series of blog posts, including this one, provides a reference for choosing between Proxy and JDBC when sharding.

1.1 Why sharding is needed

Amazon Aurora offers a single relational database whose hosting architectures (primary/secondary, multi-primary, global database, and so on) satisfy the scenarios above. However, Aurora doesn't provide direct support for sharding, which comes in several forms, such as vertical and horizontal. To further increase data capacity, several problems have to be solved, such as cross-node joins and associated queries, distributed transactions, SQL sorting and pagination, function computation, global primary keys, capacity planning, and secondary capacity expansion after sharding.

1.2 Sharding methods

It is generally accepted that query time is optimal when a MySQL table holds fewer than 10 million rows, because at that size the height of its B-tree index stays between 3 and 5. Data sharding reduces the amount of data in a single table and distributes read and write loads across different data nodes at the same time. Data sharding can be divided into vertical sharding and horizontal sharding.
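
For example, under that rule of thumb, a table expected to grow to 80 million rows (a hypothetical figure) could be split horizontally across 8 shards, keeping each shard at roughly 80,000,000 / 8 = 10,000,000 rows and its B-tree index shallow.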

1. Advantages of vertical sharding

  • Decouples business systems and makes responsibilities clearer.
  • Enables hierarchical management, maintenance, monitoring, and expansion of different businesses' data, similar to microservice governance.
  • In high-concurrency scenarios, vertical sharding removes single-machine bottlenecks in I/O, database connections, and hardware resources to some extent.

2. Disadvantages of vertical sharding

  • After splitting databases, joins can only be implemented through interface aggregation, which increases development complexity.
  • Distributed transactions become complex to process after the split.
  • A single table may still hold a large amount of data, in which case horizontal sharding is required anyway.

3. Advantages of horizontal sharding

  • It avoids the performance bottleneck of a large data volume and high concurrency on a single database, and increases system stability and load capacity.
  • Business modules do not need to be split, since only minor modifications to the application client are required.

4. Disadvantages of horizontal sharding

  • Transaction consistency across shards is hard to guarantee.
  • The performance of associated queries in cross-database joins is poor.
  • It is difficult to re-scale the data multiple times, and maintenance is a big workload.

Based on the analysis above and the available studies on popular sharding middleware, we selected the open source product ShardingSphere and combined it with Amazon Aurora to show how the two together support various forms of sharding and solve the problems sharding brings.

ShardingSphere is an open source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.

2. ShardingSphere Introduction

The characteristics of Sharding-JDBC are:

  1. With the client connecting directly to the database, it provides service in the form of a jar and requires no extra deployment or dependencies.
  2. It can be considered an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.
  3. It is applicable to any ORM framework based on JDBC, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct use of JDBC.
  4. It supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, or HikariCP.
  5. It supports any JDBC-standard database: MySQL, Oracle, SQL Server, PostgreSQL, and any database accessible via JDBC.
  6. Sharding-JDBC adopts a decentralized architecture, applicable to high-performance, lightweight OLTP applications developed in Java.

Hybrid Structure Integrating Sharding-JDBC and Applications
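
To make the "enhanced JDBC driver" point concrete, here is a minimal programmatic sketch, assuming the ShardingSphere 4.x Java API (the Spring Boot starter used in the tests below builds the equivalent data source from properties):

import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.Map;
import java.util.Properties;

public class ShardingDataSourceSketch {

    // dataSourceMap holds the real data sources (e.g. HikariCP pools) keyed as ds_0 and ds_1
    public static DataSource create(Map<String, DataSource> dataSourceMap) throws SQLException {
        ShardingRuleConfiguration ruleConfig = new ShardingRuleConfiguration();
        // logic table t_order, spread over ds_0/ds_1 and actual tables t_order_0/t_order_1
        ruleConfig.getTableRuleConfigs().add(
                new TableRuleConfiguration("t_order", "ds_$->{0..1}.t_order_$->{0..1}"));
        // the result is a plain javax.sql.DataSource usable by any ORM or raw JDBC code
        return ShardingDataSourceFactory.createDataSource(dataSourceMap, ruleConfig, new Properties());
    }
}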

Sharding-JDBC’s core concepts

Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.

Actual table: The physical table that actually exists in the horizontally sharded database, such as the product order tables product_order_0, product_order_1, and product_order_2.

Logic table: The logical name for horizontally sharded tables with the same schema. For instance, the logic table for the order tables product_order_0, product_order_1, and product_order_2 is product_order.

Binding table: The primary table and a joined table that share the same sharding rules. For example, product_order and product_order_item are both sharded by order_id, so they are binding tables of each other. Join queries between binding tables avoid Cartesian-product correlation, so query efficiency increases greatly.

Broadcast table: A table that exists in all sharded data sources, with identical schema and data in each database. It suits small tables that need to be joined with large tables in queries, such as dictionary and configuration tables.
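
Putting these concepts together, a logic table is mapped to its data nodes in configuration. A minimal sketch, using the same property names as the profiles tested below:

# logic table t_order maps to data nodes ds_0.t_order_0, ds_0.t_order_1, ds_1.t_order_0, ds_1.t_order_1
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
# t_order and t_order_item share sharding rules, so declare them as binding tables
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# t_address exists with identical schema and data in every shard
spring.shardingsphere.sharding.broadcast-tables=t_address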

3. Testing ShardingSphere-JDBC

3.1 Example project

Download the example project code locally. To ensure the stability of the test code, we use the shardingsphere-example-4.0.0 version.

git clone https://github.com/apache/shardingsphere-example.git

Project description:

shardingsphere-example
  ├── example-core
  │   ├── config-utility
  │   ├── example-api
  │   ├── example-raw-jdbc
  │   ├── example-spring-jpa #spring+jpa integration-based entity,repository
  │   └── example-spring-mybatis
  ├── sharding-jdbc-example
  │   ├── sharding-example
  │   │   ├── sharding-raw-jdbc-example
  │   │   ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
  │   │   ├── sharding-spring-boot-mybatis-example
  │   │   ├── sharding-spring-namespace-jpa-example
  │   │   └── sharding-spring-namespace-mybatis-example
  │   ├── orchestration-example
  │   │   ├── orchestration-raw-jdbc-example
  │   │   ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
  │   │   └── orchestration-spring-namespace-example
  │   ├── transaction-example
  │   │   ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
  │   │   └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
  │   ├── other-feature-example
  │   │   ├── hint-example
  │   │   └── encrypt-example
  ├── sharding-proxy-example
  │   └── sharding-proxy-boot-mybatis-example
  └── src/resources
        └── manual_schema.sql  

Configuration file description:

application-master-slave.properties              #read/write splitting profile
application-sharding-databases-tables.properties #database and table sharding profile
application-sharding-databases.properties        #database sharding only profile
application-sharding-master-slave.properties     #sharding plus read/write splitting profile
application-sharding-tables.properties           #table sharding only profile
application.properties                           #Spring Boot profile

Code logic description:

The entry class of the Spring Boot application is the project's main class; execute it to run the demo, which then walks through the scenarios verified in the sections below.

3.2 Verifying read/write splitting

As the business grows, read and write requests can be split across different database nodes to effectively improve the processing capability of the entire database cluster. Aurora provides a writer endpoint for reads and writes that require strong consistency, and a reader endpoint for reads that do not. Aurora's replication latency is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a lot of read load can be directed to the reader endpoint.

Through a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, which further improves the system's processing capability. Read/write splitting improves the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but upper-layer applications still need to manage multiple data sources, routing SQL requests to different nodes based on each statement's read/write type and certain routing policies.

ShardingSphere-JDBC provides read/write splitting, and because it is integrated with the application, the complex wiring between the application and the database cluster is kept out of the application code. Developers can manage shards through configuration files and combine them with ORM frameworks such as Spring JPA and MyBatis, completely separating this duplicated logic from the code. This greatly improves maintainability and reduces the coupling between code and database.

3.2.1 Setting up the database environment

Create a set of Aurora MySQL clusters for read/write splitting. The instance class is db.r5.2xlarge, and each cluster has one writer node and two reader nodes.

3.2.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description:

Replace the placeholder values below with your own environment configuration.

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties sharding-jdbc profile description:

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source-master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log,and you can see the conversion from logical SQL to actual SQL from the print
spring.shardingsphere.props.sql.show=true

 

3.2.3 Test and verification process description

  • Test environment data initialization: Spring JPA initialization automatically creates tables for testing.

  • Write data to the master instance

As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.

  • Data query operations are performed on the slave library.

As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in round-robin fashion.

[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, 
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0 
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id 
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, 
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1 

Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.

@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}

3.2.4 Verifying Aurora failover scenario

The Aurora database environment adopts the configuration described in Section 3.2.1.

3.2.4.1 Verification process description

1. Start the Spring Boot project.

2. Perform a failover on Aurora's console.

3. Execute the REST API request.

4. Repeatedly execute POST (http://localhost:8088/save-user) until a call fails to write to Aurora and then eventually succeeds again.

5. The following figure shows the failover process: it takes about 37 seconds from the last successful SQL write before the failover to the next successful SQL write. That is, the application recovers from an Aurora failover automatically, and the recovery takes about 37 seconds.

3.3 Testing table sharding-only function

3.3.1 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items so that sharding-jdbc uses the table sharding profile
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties sharding-jdbc profile description

# configure the data nodes and table splitting strategy for t_order
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.3.2 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. When Sharding-JDBC routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table and there is only one instance, a single t_address table is created. When t_order is created, two physical tables, t_order_0 and t_order_1, are created instead.

2. Write operation

As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed to t_order_0 or t_order_1 according to the table splitting rules; for example, with the order_id % 2 expression, even order_ids land in t_order_0 and odd ones in t_order_1.

Because t_order and t_order_item are bound, the associated order and order_item records are placed on the same physical shard.

3. Read operation

As shown in the figure below, join queries on order and order_item, which are binding tables, locate the physical shard precisely based on the binding relationship.

The same join query on unbound tables would traverse all shards.

3.4 Testing database sharding-only function

3.4.1 Setting up the database environment

Create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on the two Aurora instances.

3.4.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.4.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. When Sharding-JDBC's database sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the sharding rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1; in all, the three tables t_address, t_order, and t_order_item are created on each of ds_0 and ds_1.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Records for the sharded tables t_order and t_order_item are written to the tables on the corresponding instance according to the sharding column and routing policy.

3. Read operation

Queries on order are routed to the corresponding Aurora instance according to the database sharding rules.

Queries on address, a broadcast table, are served by an instance randomly selected from the available nodes.

As shown in the figure below, join queries on order and order_item, which are binding tables, locate the physical shard precisely based on the binding relationship.

3.5 Testing database sharding plus table sharding function

3.5.1 Setting up the database environment

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the broadcast table t_address will be created on the two Aurora instances.

3.5.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=Your ds_0 endpoint:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mdoe
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.5.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. When Sharding-JDBC's sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the sharding rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1, while the sharded physical tables for t_order and t_order_item (t_order_0/t_order_1 and t_order_item_0/t_order_item_1) are created on each data source.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Records for the sharded tables t_order and t_order_item are written to the tables on the corresponding instance according to the sharding column and routing policy.

3. Read operation

The read operation is similar to the database sharding verification described in Section 3.4.3.

3.6 Testing database sharding, table sharding and read/write splitting function

3.6.1 Setting up the database environment

The following figure shows the physical tables of the created database instances.

3.6.2 Configuring Sharding-JDBC

application.properties Spring Boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties sharding-jdbc profile description

The URL, username, and password of each database need to be changed to your own database parameters.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1.username= 
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.6.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. When Sharding-JDBC's database sharding and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the sharding rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1, while the sharded physical tables for t_order and t_order_item (t_order_0/t_order_1 and t_order_item_0/t_order_item_1) are created on each data source.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Records for the sharded tables t_order and t_order_item are written to the tables on the corresponding instance according to the sharding column and routing policy.

3. Read operation

The join query operations on order and order_item under the binding table are shown below.

4. Conclusion

As an open source product focusing on database enhancement, ShardingSphere is pretty good in terms of its community activity, product maturity, and documentation richness.

Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. Because it needs no intermediate layer such as a proxy, operation and maintenance are simpler, and its latency is theoretically lower than a proxy's. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server.

However, because Sharding-JDBC is integrated into the application, it currently supports only the Java language and is strongly dependent on the application itself. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, so switching to other middleware later requires relatively few changes.

In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want the burden of introducing an intermediate layer.

Author

Sun Jinhua

A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture for the e-commerce platforms of automotive companies. He also worked as a senior engineer at a globally leading communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in designing high-concurrency, high-availability systems, microservice architectures, databases, middleware, IoT, and more.

Adaline Kulas

Multi-cloud Spending: 8 Tips To Lower Cost

A multi-cloud approach means leveraging two or more cloud platforms to meet an enterprise's various business requirements. A multi-cloud IT environment incorporates clouds from multiple vendors and removes the dependence on any single public cloud provider, so enterprises can choose specific services from multiple public clouds and reap the benefits of each.

Given its affordability and agility, most enterprises now opt for a multi-cloud approach. A 2018 survey on the public cloud services market found that 81% of respondents use services from two or more providers, and the market has grown accordingly: according to IDC, the worldwide public cloud services market is set to reach $500 billion within the next four years.

By choosing multi-cloud solutions strategically, enterprises can optimize the benefits of cloud computing and gain key competitive advantages. They avoid the lengthy, cumbersome processes involved in buying, installing, and testing high-priced systems, and IaaS and PaaS solutions spare the budget from huge up-front capital expenditure.

However, cost optimization remains a challenge in a multi-cloud environment, and a large number of enterprises end up overpaying, whether they realize it or not. The tips below will help you ensure that money spent on cloud computing services is spent wisely.

  • Deactivate underused or unattached resources

Most organizations get simple things wrong, and these turn out to be the root cause of needless spending and resource wastage. The first step toward cost optimization in your cloud strategy is to identify the underutilized resources you have been paying for.

Enterprises often continue to pay for resources that were purchased earlier but are no longer useful. Identifying such unused and unattached resources and deactivating them on a regular basis brings you one step closer to cost optimization. If needed, you can deploy automated cloud-management tools, which are largely helpful in providing the analytics needed to optimize cloud spending and cut costs on an ongoing basis.

  • Figure out idle instances

Another key cost-optimization strategy is to identify idle computing instances and consolidate them into fewer instances. An idle instance may sit at a CPU utilization level of 1-5%, yet the service provider bills you for 100% of that instance.

Every enterprise has non-production instances like these that consume unnecessary storage space and lead to overpaying. Re-evaluating your resource allocations regularly and removing unnecessary storage can save you significant money. Resource allocation is not only a matter of CPU and memory; it is also linked to storage, network, and various other factors.

  • Deploy monitoring mechanisms

The key to efficient cost reduction in cloud computing lies in proactive monitoring. A comprehensive view of cloud usage helps enterprises monitor and minimize unnecessary spending, and various mechanisms are available for monitoring computing demand.

For instance, a heatmap lets you visualize the highs and lows in computing demand. It indicates when instances can safely be started and stopped, which in turn reduces costs, and you can deploy automated tools that schedule instances accordingly. By following a heatmap, you can tell whether it is safe to shut down servers on holidays or weekends.

#cloud computing services #all #hybrid cloud #cloud #multi-cloud strategy #cloud spend #multi-cloud spending #multi cloud adoption #why multi cloud #multi cloud trends #multi cloud companies #multi cloud research #multi cloud market

Roberta Ward

Consumer-Driven Contract Testing With Spring Cloud Contract

Introduction

This article demonstrates how to write a contract between a producer and a consumer, and how to implement the producer- and consumer-side test cases for Spring Cloud Contract over an HTTP request between two microservices.

Producer/Provider

The producer is a service that exposes an API (e.g., a REST endpoint) or sends a message (e.g., a Kafka producer that publishes messages to a Kafka topic).

Consumer

The consumer is a service that consumes the API exposed by the producer or listens for messages from the producer (e.g., a Kafka consumer that reads messages from a Kafka topic).

Contract

The contract is an agreement between the producer and the consumer on what the API or message will look like:

  • What endpoints can we use?
  • What input do the endpoints take?
  • What does the output look like?

Consumer-Driven Contract

Consumer-driven contracts (CDC) is an approach in which the consumer drives changes to the producer's API.

Consumer-driven contract testing formalizes the expectations mentioned above into a contract between each consumer-provider pair. Once a contract is established between the provider and the consumer, it ensures that the contract will not break suddenly.

Spring Cloud Contract

Spring Cloud Contract is a Spring Cloud project that helps users successfully implement the consumer-driven contracts (CDC) approach. Its Spring Cloud Contract Verifier tool enables the development of consumer-driven contracts, using a contract definition language (DSL) written in Groovy or YAML.
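To give a flavor of the DSL, here is a minimal sketch of what a contract for an employee-lookup endpoint could look like in YAML; the file path, URL, and field names are assumptions for illustration, not the exact contract from the demo below.

# Hypothetical contract, e.g. src/test/resources/contracts/shouldReturnEmployee.yml
description: should return an existing employee profile for a known identification number
request:
  method: GET
  url: /employee/ID-12345
response:
  status: 200
  headers:
    Content-Type: application/json
  body:
    firstName: "John"
    lastName: "Doe"
    identificationNumber: "ID-12345"
    status: "FOUND"

From a contract like this, the Verifier generates tests that run against the producer and a stub (backed by WireMock for HTTP) that the consumer can test against, so both sides are checked against the same agreement.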

Demo Application

To demonstrate the concepts of Spring Cloud Contract, I have implemented two simple microservices. The code for these applications can be found on my GitHub account.

Request and response between the consumer and the producer

Create-employee-application MS

This is the first microservice, responsible for creating an employee's profile from the given details. We pass only the employee's first name, last name, and identification number (e.g., a national ID). This microservice calls another microservice to first check, based on the identification number, whether a profile has already been created for the employee.

Get-employee-application MS

This is the second microservice; it simply checks whether an employee profile already exists. If a profile matching the provided identification number is found, it returns that profile; otherwise, it returns an empty profile with the EMPLOYEE_NOT_FOUND status.
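A rough sketch of that lookup endpoint could look like the following; the class, endpoint path, and sample data are illustrative assumptions rather than the repository's exact code.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller for illustration only.
@RestController
public class GetEmployeeController {

    @GetMapping("/employee/{identificationNumber}")
    public EmployeeProfile getEmployee(@PathVariable String identificationNumber) {
        // Stand-in for a real lookup: no database, just hard-coded sample data.
        if ("ID-12345".equals(identificationNumber)) {
            return new EmployeeProfile("John", "Doe", identificationNumber, "FOUND");
        }
        return new EmployeeProfile(null, null, identificationNumber, "EMPLOYEE_NOT_FOUND");
    }
}

// Simple response payload; public fields keep the sketch short and are
// serialized by Jackson.
class EmployeeProfile {
    public String firstName;
    public String lastName;
    public String identificationNumber;
    public String status;

    EmployeeProfile(String firstName, String lastName, String identificationNumber, String status) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.identificationNumber = identificationNumber;
        this.status = status;
    }
}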

The create-employee-application microservice depends on get-employee-application, so we have written a contract for get-employee-application. We are not using any database to store or retrieve employee details; simple in-memory logic stands in for fetching an existing employee profile.

Setup

Let's walk through how these applications are set up, looking at each microservice one by one.

#tutorial #microservices #spring boot #spring cloud #spring boot microservices #spring cloud contract #microservices testing

Fredy Larson

Build J2EE Microservices Architecture

I previously posted an article about a single-page application (UI); in this post, I'm going to show how to build a microservices architecture for a J2EE application with the Spring framework and the open-source SSO framework Keycloak. This post will cover the following aspects:

  • Keycloak setup
  • Eureka service registration and discovery
  • Spring Cloud API gateway
  • Spring Security (OAuth2 login) and the integration with Keycloak
  • Microservices

The code is available on my GitHub. Please check the docker-compose.yml first so that the rest of the post is easier to follow. One thing to note: you need to replace the IP address in the Keycloak server URL with your own before running the Docker containers.

version: '3.4'
services:
  api-gateway:
    build:
      context: ./api-gateway
    ports:
      - "8080:8080"
    restart: on-failure
    environment:
      # overriding spring application.properties
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:9091/eureka/
      - keycloak-client.server-url=http://10.0.0.17:18080/auth ## use host name or ip of the host machine
    depends_on:
      - eureka-server
  eureka-server:
    build:
      context: ./eureka-server
    ports:
      - "9091:9091"
    restart: on-failure
  microservice-consumer:
    build:
      context: ./microservice-consumer
    ports:
      - "9080:9080"
    restart: on-failure
    environment:
      # overriding spring application.properties
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:9091/eureka/
      - keycloak-client.server-url=http://10.0.0.17:18080/auth ## use host name or ip of the host machine
    depends_on:
      - eureka-server
  microservice-producer:
    build:
      context: ./microservice-producer
    ports:
      - "9081:9081"
    restart: on-failure
    environment:
      # overriding spring application.properties
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:9091/eureka/
      - keycloak-client.server-url=http://10.0.0.17:18080/auth ## use host name or ip of the host machine
    depends_on:
      - eureka-server
  keycloak:
    image: jboss/keycloak:11.0.0
    volumes:
      - ./keycloak-server/realm-export.json:/tmp/keycloak/config/realm-export.json
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      KEYCLOAK_IMPORT: /tmp/keycloak/config/realm-export.json
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
    ports:
      - "18080:18080"
    command:
      - "-b"
      - "0.0.0.0"
      - "-Djboss.socket.binding.port-offset=10000"
    restart: on-failure
    depends_on:
      - postgres
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
volumes:
  postgres_data:
    name: keycloak_postgres_data
    driver: local
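With the keycloak-client.server-url entries pointing at your machine's IP, running docker-compose up --build from the project root should bring up Postgres, Keycloak (listening on port 18080 thanks to the port offset), the Eureka server, the API gateway, and the consumer and producer microservices.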

#spring boot #microservice #spring cloud #keycloak #eureka server #spring cloud gateway #spring secuirty 5 #sso authentication #java microservice #jwt token

shaik hameed

Microservices Spring Boot | Microservices Full Course | Microservices Tutorial

https://youtu.be/grUXx47g7o0

#spring #spring-framework #spring-boot #microservices #cloud #springcloud