#spring-boot #testing #java
In this post, you’ll walk through how to build a simple Spring Boot application and test it with JUnit 5. An application without testing is the proverbial Pandora’s Box.
What good is your application if you don’t know that it will work under any condition? Adding a suite of tests builds confidence that your application can handle anything thrown at it. When building your tests, it is important to use a modern and comprehensive suite of tools. Using a modern framework ensures that you can keep up with the changes within your language and libraries. A comprehensive suite of tools ensures that you can adequately test all areas of your application without the burden of writing your own test utilities. JUnit 5 handles both requirements well.
The application used for this post will be a basic REST API with endpoints to calculate a few things about a person’s birthday! There are three POST endpoints you will be able to use to determine either the day of the week, the astrological sign, or the Chinese Zodiac sign for a passed-in birthday. This REST API will be secured with OAuth 2.0 and Okta. Once we have built the API, we will walk through unit testing the code with JUnit 5 and review the coverage of our JUnit tests.
The main advantage of using the Spring Framework is the ability to inject your dependencies, which makes it much easier to swap out implementations for various purposes, not least of all for unit testing. Spring Boot makes it even easier by allowing you to do much of the dependency injection with annotations instead of having to bother with a complicated applicationContext.xml file!
NOTE: For this post, I will be using Eclipse, as it is my preferred IDE. If you are using Eclipse as well, you will need to install a version of Oxygen or beyond in order to have JUnit 5 (Jupiter) test support included.
## Create a Spring Boot App for Testing with JUnit 5
For this tutorial, the structure of the project is as shown below. I will only discuss the file names; you can find their paths using the structure below, by looking through the full source, or by paying attention to the package names.
To get going, you’ll create a Spring Boot project from scratch.
NOTE: The following steps are for Eclipse. If you use a different IDE, there are likely equivalent steps. Optionally, you can create your own project directory structure and write the final pom.xml file in any text editor you like.
Create a new Maven Project from the File > New menu. Select the location of your new project, click Next twice, and then fill out the group id, artifact id, and version for your application. For this example, I used the following options:
com.example.joy
myFirstSpringBoot
0.0.1-SNAPSHOT
When done, this will produce a pom.xml file that looks like the following:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.3.RELEASE</version>
</parent>
<groupId>com.example.joy</groupId>
<artifactId>myFirstSpringBoot</artifactId>
<version>0.0.1-SNAPSHOT</version>
</project>
Next, you’ll want to update the pom.xml with some basic settings and dependencies to look like the following (add everything after version):
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.3.RELEASE</version>
</parent>
<groupId>com.example.joy</groupId>
<artifactId>myFirstSpringBoot</artifactId>
<version>0.0.1-SNAPSHOT</version>
<properties>
<java.version>1.8</java.version>
<spring.boot.version>2.1.3.RELEASE</spring.boot.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Take note that you need to exclude the default JUnit from the spring-boot-starter-test dependency. The junit-jupiter-engine dependency is for JUnit 5.
Let’s start with the main application file, which is the entry point for starting the Java API. This is a file called SpringBootRestApiApplication.java that looks like this:
package com.example.joy.myFirstSpringBoot;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication(scanBasePackages = {"com.example.joy"})
public class SpringBootRestApiApplication {
public static void main(String[] args) {
SpringApplication.run(SpringBootRestApiApplication.class, args);
}
}
The @SpringBootApplication annotation tells the application that it should support auto-configuration, component scanning (of the com.example.joy package and everything under it), and bean registration.
@SpringBootApplication(scanBasePackages = {"com.example.joy"})
This line launches the REST API application:
SpringApplication.run(SpringBootRestApiApplication.class, args);
BirthdayService.java is the interface for the birthday service. It is pretty straightforward, defining the four helper functions available.
package com.example.joy.myFirstSpringBoot.services;
import java.time.LocalDate;
public interface BirthdayService {
LocalDate getValidBirthday(String birthdayString);
String getBirthDOW(LocalDate birthday);
String getChineseZodiac(LocalDate birthday);
String getStarSign(LocalDate birthday);
}
BirthdayInfoController.java handles the three POST requests to get birthday information. It looks like this:
package com.example.joy.myFirstSpringBoot.controllers;
import java.time.LocalDate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import com.example.joy.myFirstSpringBoot.services.BirthdayService;
@RestController
@RequestMapping("/birthday")
public class BirthdayInfoController {
private final BirthdayService birthdayService;
public BirthdayInfoController(BirthdayService birthdayService) {
this.birthdayService = birthdayService;
}
@PostMapping("/dayOfWeek")
public String getDayOfWeek(@RequestBody String birthdayString) {
LocalDate birthday = birthdayService.getValidBirthday(birthdayString);
String dow = birthdayService.getBirthDOW(birthday);
return dow;
}
@PostMapping("/chineseZodiac")
public String getChineseZodiac(@RequestBody String birthdayString) {
LocalDate birthday = birthdayService.getValidBirthday(birthdayString);
String sign = birthdayService.getChineseZodiac(birthday);
return sign;
}
@PostMapping("/starSign")
public String getStarSign(@RequestBody String birthdayString) {
LocalDate birthday = birthdayService.getValidBirthday(birthdayString);
String sign = birthdayService.getStarSign(birthday);
return sign;
}
@ExceptionHandler(RuntimeException.class)
public final ResponseEntity<Exception> handleAllExceptions(RuntimeException ex) {
return new ResponseEntity<Exception>(ex, HttpStatus.INTERNAL_SERVER_ERROR);
}
}
First, you will notice the following annotations near the top. The @RestController annotation tells the system that this file is a “REST API Controller,” which simply means that it contains a collection of API endpoints. You could also use the @Controller annotation, but then you would have to add more boilerplate code to convert the responses to an HTTP OK response instead of simply returning the values. The second line tells it that all of the endpoints have the “/birthday” prefix in the path. I will show a full path for an endpoint later.
@RestController
@RequestMapping("/birthday")
Next, you will see a class variable for birthdayService (of type BirthdayService). This variable is initialized in the constructor of the class. Since Spring Framework 4.3, you no longer need to specify @Autowired when using constructor injection. This will have the effect of loading an instance of the BasicBirthdayService class, which we will look at shortly.
private final BirthdayService birthdayService;
public BirthdayInfoController(BirthdayService birthdayService){
this.birthdayService = birthdayService;
}
The next few methods (getDayOfWeek, getChineseZodiac, and getStarSign) are where it gets juicy. They are the handlers for the three different endpoints. Each one starts with a @PostMapping annotation, which tells the system the path of the endpoint. In this case, the path would be /birthday/dayOfWeek (the /birthday prefix comes from the @RequestMapping annotation above).
@PostMapping("/dayOfWeek")
Each endpoint method does the following:
- Takes in the birthday as a string from the request body.
- Calls the birthday service to validate the string and convert it to a LocalDate.
- Calls the birthday service to compute the requested value (day of the week, Chinese Zodiac, or star sign).
- Returns the result as the response.
Lastly, there is a method for error handling:
@ExceptionHandler(RuntimeException.class)
public final ResponseEntity<Exception> handleAllExceptions(RuntimeException ex) {
return new ResponseEntity<Exception>(ex, HttpStatus.INTERNAL_SERVER_ERROR);
}
Here, the @ExceptionHandler annotation tells it to catch any instance of RuntimeException within the endpoint functions and return a 500 response.
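For instance, here is a minimal sketch (using the BasicBirthdayService described next) of the failure path that ends in this handler:
// An unparseable date makes the service throw a RuntimeException; in the
// running API, handleAllExceptions maps it to an HTTP 500 INTERNAL_SERVER_ERROR.
BasicBirthdayService service = new BasicBirthdayService();
try {
    service.getValidBirthday("not-a-date");
} catch (RuntimeException ex) {
    System.out.println(ex.getMessage()); // "Must include valid birthday in yyyy-MM-dd format"
}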
BasicBirthdayService.java handles the bulk of the actual business logic in this application. It is the class that has a function to check whether a birthday string is valid, as well as functions that calculate the day of the week, Chinese Zodiac, and astrological sign from a birthday.
package com.example.joy.myFirstSpringBoot.services;
import org.springframework.stereotype.Service;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
@Service
public class BasicBirthdayService implements BirthdayService {
private static DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
@Override
public LocalDate getValidBirthday(String birthdayString) {
if (birthdayString == null) {
throw new RuntimeException("Must include birthday");
}
try {
LocalDate birthdate = LocalDate.parse(birthdayString, formatter);
return birthdate;
} catch (Exception e) {
throw new RuntimeException("Must include valid birthday in yyyy-MM-dd format");
}
}
@Override
public String getBirthDOW(LocalDate birthday) {
return birthday.getDayOfWeek().toString();
}
@Override
public String getChineseZodiac(LocalDate birthday) {
int year = birthday.getYear();
switch (year % 12) {
case 0:
return "Monkey";
case 1:
return "Rooster";
case 2:
return "Dog";
case 3:
return "Pig";
case 4:
return "Rat";
case 5:
return "Ox";
case 6:
return "Tiger";
case 7:
return "Rabbit";
case 8:
return "Dragon";
case 9:
return "Snake";
case 10:
return "Horse";
case 11:
return "Sheep";
}
return "";
}
@Override
public String getStarSign(LocalDate birthday) {
int day = birthday.getDayOfMonth();
int month = birthday.getMonthValue();
if (month == 12 && day >= 22 || month == 1 && day < 20) {
return "Capricorn";
} else if (month == 1 && day >= 20 || month == 2 && day < 19) {
return "Aquarius";
} else if (month == 2 && day >= 19 || month == 3 && day < 21) {
return "Pisces";
} else if (month == 3 && day >= 21 || month == 4 && day < 20) {
return "Aries";
} else if (month == 4 && day >= 20 || month == 5 && day < 21) {
return "taurus";
} else if (month == 5 && day >= 21 || month == 6 && day < 21) {
return "Gemini";
} else if (month == 6 && day >= 21 || month == 7 && day < 23) {
return "Cancer";
} else if (month == 7 && day >= 23 || month == 8 && day < 23) {
return "Leo";
} else if (month == 8 && day >= 23 || month == 9 && day < 23) {
return "Virgo";
} else if (month == 9 && day >= 23 || month == 10 && day < 23) {
return "Libra";
} else if (month == 10 && day >= 23 || month == 11 && day < 22) {
return "Scorpio";
} else if (month == 11 && day >= 22 || month == 12 && day < 22) {
return "Sagittarius";
}
return "";
}
}
The @Service annotation is what allows this class to be injected into the BirthdayInfoController constructor. Since this class implements the BirthdayService interface, and it is within the scan path for the application, Spring will find it, initialize it, and inject it into the constructor in BirthdayInfoController.
The rest of the class is simply a set of functions that implement the business logic called from the BirthdayInfoController.
At this point, you should have a working API. In Eclipse, just right-click on the SpringBootRestApiApplication file, click Run As > Java Application, and it will kick it off. To hit the endpoints, you can use curl to execute these commands:
Day of Week:
Request:
curl -X POST \
http://localhost:8080/birthday/dayOfWeek \
-H 'Content-Type: text/plain' \
-H 'accept: text/plain' \
-d 2005-03-09
Response:
WEDNESDAY
Chinese Zodiac:
Request:
curl -X POST \
http://localhost:8080/birthday/chineseZodiac \
-H 'Content-Type: text/plain' \
-H 'accept: text/plain' \
-d 2005-03-09
Response:
Rooster
Astrological Sign:
Request:
curl -X POST \
http://localhost:8080/birthday/starSign \
-H 'Content-Type: text/plain' \
-H 'accept: text/plain' \
-d 2005-03-09
Response:
Pisces
Now that we have the basic API created, let’s make it secure! You can do this quickly by using Okta’s OAuth 2.0 token verification. Why Okta? Okta is an identity provider that makes it easy to add authentication and authorization into your apps. It’s always on and friends don’t let friends write authentication.
After integrating Okta, the API will require the user to pass in an OAuth 2.0 access token. This token will be checked by Okta for validity and authenticity.
To do this, you will need to have a “Service Application” set up with Okta, add the Okta Spring Boot starter to the Java code, and have a way to generate tokens for this application. Let’s get started!
You will need to create an OpenID Connect Application in Okta to get your unique values to perform authentication.
To do this, you must first log in to your Okta Developer account (or sign up if you don’t have an account).
Once in your Okta Developer dashboard, click on the Applications tab at the top of the screen and then click on the Add Application button.
You will see the following screen. Click on the Service tile and then click Next.
The next screen will prompt you for a name for your application. Select something that makes sense and click Done.
The application will be created and you will be shown a screen that shows your client credentials including a Client ID and a Client secret. You can get back to this screen anytime by going to the Applications tab and then clicking on the name of the application you just created.
There are just a few steps to add authentication to your application.
Create a file called src/main/resources/application.properties with the following contents:
okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId={clientId}
okta.oauth2.clientSecret={clientSecret}
okta.oauth2.scope=openid
Replace the items inside {...} with your values. The {clientId} and {clientSecret} values will come from the application you just created. Once you have the application context configured, all you need to do is add a single dependency to your pom.xml file and make one more Java file.
For the dependencies, add the Okta Spring Boot starter to the pom.xml file in the dependencies section:
<!-- security - begin -->
<dependency>
<groupId>com.okta.spring</groupId>
<artifactId>okta-spring-boot-starter</artifactId>
<version>1.1.0</version>
</dependency>
<!-- security - end -->
And the last step is to update the SpringBootRestApiApplication to include a static configuration subclass called OktaOAuth2WebSecurityConfigurerAdapter. Your SpringBootRestApiApplication.java file should be updated to look like this:
package com.example.joy.myFirstSpringBoot;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
@SpringBootApplication(scanBasePackages = {"com.example.joy"})
public class SpringBootRestApiApplication {
public static void main(String[] args) {
SpringApplication.run(SpringBootRestApiApplication.class, args);
}
@Configuration
static class OktaOAuth2WebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests().anyRequest().authenticated()
.and().oauth2ResourceServer().jwt();
}
}
}
In order to test, you will need to be able to generate a valid token. Typically, the client application would be responsible for generating the tokens that it would use for authentication in the API. However, since you have no client application, you need a way to generate tokens in order to test the application.
An easy way to get a token is to generate one using the OpenID Connect Debugger. First, however, you must have a client web application set up in Okta to use with OpenID Connect’s implicit flow.
To do this, go back to the Okta developer console and select Applications > Add Application, but this time, select the Web tile.
On the next screen, you will need to fill out some information. Set the name to something you will remember as your web application. Set the Login redirect URIs field to https://oidcdebugger.com/debug and Grant Type Allowed to Hybrid. Click Done and copy the client ID for the next step.
Now, navigate to the OpenID Connect debugger website and fill the form in like the picture below (do not forget to fill in the client ID for your recently created Okta web application). The state field must be filled but can contain any characters. The Authorize URI should begin with your domain URL (found on your Okta dashboard):
Submit the form to start the authentication process. You’ll receive an Okta login form if you are not logged in or you’ll see the screen below with your custom token.
NOTE: The token will be valid for one hour, so you may have to repeat the process if you are testing for a long time.
You should now have a working secure API. Let’s see it in action! In Eclipse, just right-click on the SpringBootRestApiApplication file, click Run As > Java Application, and it will kick it off. To hit the endpoints, you can use curl to execute these commands, but be sure to include the new header that contains your token. Replace {token goes here} with the actual token from OpenID Connect:
Day of Week:
Request:
curl -X POST \
http://localhost:8080/birthday/dayOfWeek \
-H 'Authorization: Bearer {token goes here}' \
-H 'Content-Type: text/plain' \
-H 'accept: text/plain' \
-d 2005-03-09
Response:
WEDNESDAY
Chinese Zodiac:
Request:
curl -X POST \
http://localhost:8080/birthday/chineseZodiac \
-H 'Authorization: Bearer {token goes here}' \
-H 'Content-Type: text/plain' \
-H 'accept: text/plain' \
-d 2005-03-09
Response:
Rooster
Astrological Sign:
Request:
curl -X POST \
http://localhost:8080/birthday/starSign \
-H 'Authorization: Bearer {token goes here}' \
-H 'Content-Type: text/plain' \
-H 'accept: text/plain' \
-d 2005-03-09
Response:
Pisces
Congratulations! You now have a secure API that gives you handy information about any birthdate you can imagine! What’s left? Well, you should add some unit tests to ensure that it works well.
Many people make the mistake of mixing unit tests and integration tests (also called end-to-end or E2E tests). I will describe the difference between the two types below.
Before getting started on the unit tests, add one more dependency to the pom.xml file (in the <dependencies> section).
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-test</artifactId>
<scope>test</scope>
</dependency>
For the most part, unit tests are intended to test a small chunk (or unit) of code. That is usually limited to the code within a function, or sometimes extends to some helper functions called from that function. If a unit test is testing code that is dependent on another service or resource, like a database or a network resource, the unit test should “mock” and inject that dependency so as to have no actual impact on that external resource. It also limits the focus to just the unit being tested. To mock a dependency, you can either use a mock library like Mockito or simply pass in a different implementation of the dependency that you want to replace. Mocking is largely outside the scope of this article, and I will simply show examples of unit tests for the BasicBirthdayService.
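That said, here is a minimal sketch of the idea (not part of the original project code): hand the controller a Mockito mock of BirthdayService so that no real date logic or HTTP layer is exercised (Mockito ships with spring-boot-starter-test):
package com.example.joy.myFirstSpringBoot.controllers;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.time.LocalDate;
import org.junit.jupiter.api.Test;
import com.example.joy.myFirstSpringBoot.services.BirthdayService;
class BirthdayInfoControllerUnitTest {
    @Test
    void getDayOfWeekDelegatesToService() {
        // mock the dependency instead of using the real BasicBirthdayService
        BirthdayService mockService = mock(BirthdayService.class);
        LocalDate birthday = LocalDate.of(2005, 3, 9);
        when(mockService.getValidBirthday("2005-03-09")).thenReturn(birthday);
        when(mockService.getBirthDOW(birthday)).thenReturn("WEDNESDAY");
        // inject the mock through the same constructor Spring would use
        BirthdayInfoController controller = new BirthdayInfoController(mockService);
        assertEquals("WEDNESDAY", controller.getDayOfWeek("2005-03-09"));
    }
}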
The BasicBirthdayServiceTest.java file contains the unit tests of the BasicBirthdayService class.
package com.example.joy.myFirstSpringBoot.services;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.time.LocalDate;
import org.junit.jupiter.api.Test;
class BasicBirthdayServiceTest {
BasicBirthdayService birthdayService = new BasicBirthdayService();
@Test
void testGetBirthdayDOW() {
String dow = birthdayService.getBirthDOW(LocalDate.of(1979, 7, 14));
assertEquals("SATURDAY", dow);
dow = birthdayService.getBirthDOW(LocalDate.of(2018, 1, 23));
assertEquals("TUESDAY", dow);
dow = birthdayService.getBirthDOW(LocalDate.of(1972, 3, 17));
assertEquals("FRIDAY", dow);
dow = birthdayService.getBirthDOW(LocalDate.of(1945, 12, 2));
assertEquals("SUNDAY", dow);
dow = birthdayService.getBirthDOW(LocalDate.of(2003, 8, 4));
assertEquals("MONDAY", dow);
}
@Test
void testGetBirthdayChineseSign() {
String dow = birthdayService.getChineseZodiac(LocalDate.of(1979, 7, 14));
assertEquals("Sheep", dow);
dow = birthdayService.getChineseZodiac(LocalDate.of(2018, 1, 23));
assertEquals("Dog", dow);
dow = birthdayService.getChineseZodiac(LocalDate.of(1972, 3, 17));
assertEquals("Rat", dow);
dow = birthdayService.getChineseZodiac(LocalDate.of(1945, 12, 2));
assertEquals("Rooster", dow);
dow = birthdayService.getChineseZodiac(LocalDate.of(2003, 8, 4));
assertEquals("Sheep", dow);
}
@Test
void testGetBirthdayStarSign() {
String dow = birthdayService.getStarSign(LocalDate.of(1979, 7, 14));
assertEquals("Cancer", dow);
dow = birthdayService.getStarSign(LocalDate.of(2018, 1, 23));
assertEquals("Aquarius", dow);
dow = birthdayService.getStarSign(LocalDate.of(1972, 3, 17));
assertEquals("Pisces", dow);
dow = birthdayService.getStarSign(LocalDate.of(1945, 12, 2));
assertEquals("Sagittarius", dow);
dow = birthdayService.getStarSign(LocalDate.of(2003, 8, 4));
assertEquals("Leo", dow);
}
}
This test class is one of the most basic sets of unit tests you can make. It creates an instance of the BasicBirthdayService class and then tests the responses of the three service methods with various birthdates being passed in. This is a great example of a small unit being tested, as it only tests a single service and doesn’t even require any configuration or application context to be loaded. Because it is only testing the service, it doesn’t touch on security or the HTTP REST interface.
You can run this test from your IDE or using Maven:
mvn test -Dtest=BasicBirthdayServiceTest
Integration tests are intended to test the entire integrated code path (from end to end) for a specific use case. For example, an integration test of the Birthday application would be one that makes an HTTP POST call to the dayOfWeek endpoint and then tests that the results are as expected. This call will ultimately hit both the BirthdayInfoController code as well as the BasicBirthdayService code. It will also require interacting with the security layer in order to make these calls. In a more complex system, an integration test might hit a database, read or write from a network resource, or send an email.
Because they use actual dependencies/resources, integration tests should typically be considered possibly destructive and fragile (as backing data could be changed). For those reasons, integration tests should be “handled with care” and isolated from, and run independently of, normal unit tests. I personally like to use a separate system, particularly for REST API testing, rather than JUnit 5, as it keeps them completely separate from the unit tests.
If you do plan to write integration tests with JUnit 5, they should be named with a unique suffix like “IT”. Below is an example of the same tests you ran against BasicBirthdayService, except written as an integration test. This example mocks the web security for this particular test, as the scope is not to test OAuth 2.0, although an integration test may be used to test everything, including security.
The BirthdayInfoControllerIT.java file contains the integration tests of the three API endpoints that get birthday information.
package com.example.joy.myFirstSpringBoot.controllers;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.springframework.security.test.web.servlet.request.SecurityMockMvcRequestPostProcessors.csrf;
import static org.springframework.security.test.web.servlet.request.SecurityMockMvcRequestPostProcessors.user;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.MvcResult;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;
import com.example.joy.myFirstSpringBoot.services.BasicBirthdayService;
@AutoConfigureMockMvc
@ContextConfiguration(classes = {BirthdayInfoController.class, BasicBirthdayService.class})
@WebMvcTest
class BirthdayInfoControllerIT {
private final static String TEST_USER_ID = "user-id-123";
String bd1 = LocalDate.of(1979, 7, 14).format(DateTimeFormatter.ISO_DATE);
String bd2 = LocalDate.of(2018, 1, 23).format(DateTimeFormatter.ISO_DATE);
String bd3 = LocalDate.of(1972, 3, 17).format(DateTimeFormatter.ISO_DATE);
String bd4 = LocalDate.of(1945, 12, 2).format(DateTimeFormatter.ISO_DATE);
String bd5 = LocalDate.of(2003, 8, 4).format(DateTimeFormatter.ISO_DATE);
@Autowired
private MockMvc mockMvc;
@Test
public void testGetBirthdayDOW() throws Exception {
testDOW(bd1, "SATURDAY");
testDOW(bd2, "TUESDAY");
testDOW(bd3, "FRIDAY");
testDOW(bd4, "SUNDAY");
testDOW(bd5, "MONDAY");
}
@Test
public void testGetBirthdayChineseSign() throws Exception {
testZodiak(bd1, "Sheep");
testZodiak(bd2, "Dog");
testZodiak(bd3, "Rat");
testZodiak(bd4, "Rooster");
testZodiak(bd5, "Sheep");
}
@Test
public void testGetBirthdaytestStarSign() throws Exception {
testStarSign(bd1, "Cancer");
testStarSign(bd2, "Aquarius");
testStarSign(bd3, "Pisces");
testStarSign(bd4, "Sagittarius");
testStarSign(bd5, "Leo");
}
private void testDOW(String birthday, String dow) throws Exception {
MvcResult result = mockMvc.perform(MockMvcRequestBuilders.post("/birthday/dayOfWeek")
.with(user(TEST_USER_ID))
.with(csrf())
.content(birthday)
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON))
.andExpect(status().isOk())
.andReturn();
String resultDOW = result.getResponse().getContentAsString();
assertNotNull(resultDOW);
assertEquals(dow, resultDOW);
}
private void testZodiak(String birthday, String czs) throws Exception {
MvcResult result = mockMvc.perform(MockMvcRequestBuilders.post("/birthday/chineseZodiac")
.with(user(TEST_USER_ID))
.with(csrf())
.content(birthday)
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON))
.andExpect(status().isOk())
.andReturn();
String resultCZ = result.getResponse().getContentAsString();
assertNotNull(resultCZ);
assertEquals(czs, resultCZ);
}
private void testStarSign(String birthday, String ss) throws Exception {
MvcResult result = mockMvc.perform(MockMvcRequestBuilders.post("/birthday/starSign")
.with(user(TEST_USER_ID))
.with(csrf())
.content(birthday)
.contentType(MediaType.APPLICATION_JSON).accept(MediaType.APPLICATION_JSON))
.andExpect(status().isOk())
.andReturn();
String resultSS = result.getResponse().getContentAsString();
assertNotNull(resultSS);
assertEquals(ss, resultSS);
}
}
This test class has quite a bit to it; let’s go over a few key items.
There are a few lines of code that tell the system to mock security so you don’t need to generate a token before running this integration test. The following lines tell the system to pretend we have a valid user and token already:
.with(user(TEST_USER_ID))
.with(csrf())
MockMvc is simply a handy system built into the Spring Framework that allows us to make calls to a REST API. The @AutoConfigureMockMvc class annotation and the @Autowired annotation on the MockMvc member variable tell the system to automatically configure and initialize the MockMvc object (and, in the background, an application context) for this application. It will load the SpringBootRestApiApplication and allow the tests to make HTTP calls to it.
If you read about test slicing, you might find yourself down a rabbit hole and feel like pulling your hair out. However, if you back out of the rabbit hole, you can see that test slicing is simply the act of trimming down what is loaded within your app for a particular unit test or integration test class. For example, if you have 15 controllers in your web application, with autowired services for each, but your test is only testing one of them, why bother loading the other 14 and their autowired services? Instead, just load the controller you’re testing and the supporting classes needed for that controller! So, let’s see how test slices are used in this integration test!
@ContextConfiguration(classes = {BirthdayInfoController.class, BasicBirthdayService.class})
@WebMvcTest
The @WebMvcTest annotation is the core of slicing a WebMvc application. It tells the system that you are slicing, and the @ContextConfiguration annotation tells it precisely which controllers and dependencies to load. I have included the BirthdayInfoController because that is the controller I am testing. If I left that out, these tests would fail. I have also included the BasicBirthdayService, since this is an integration test and I want it to go ahead and autowire that service as a dependency of the controller. If this weren’t an integration test, I might mock that dependency instead of loading it in with the controller.
And that is it! Slicing doesn’t have to be over-complicated!
You can run this test from your IDE or using Maven:
mvn test -Dtest=BirthdayInfoControllerIT
In Eclipse, if you right-click on a folder and select Run As > JUnit Test, it will run all unit tests and integration tests with no prejudice. However, particularly when running as part of an automated process, it is often desirable to either run just the unit tests or run both. This way, there can be a quick sanity check of the units without running the sometimes destructive integration tests. There are many approaches to do this, but one easy way is to add the Maven Failsafe Plugin to your project. This is done by updating the <build> section of the pom.xml file as follows:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
</plugin>
</plugins>
</build>
The Failsafe Plugin differentiates the types of tests by their names. By default, it will consider any test that begins or ends with IT to be an integration test. It also considers tests that end in ITCase to be integration tests.
Once the pom.xml is set up, you can run the test or verify goals to run either the unit tests, or the unit and integration tests, respectively. From Eclipse, the test goal is run by right-clicking the project and selecting Run As > Maven test. For the verify goal, you must click on Run As > Maven build…, enter “verify” in the goals textbox, and click Run. From the command line, this can be done with mvn test and mvn verify.
The idea of “code coverage” is the question of how much of your code is tested with your unit and/or integration tests. There are a lot of tools a developer can use to do that, but since I like Eclipse, I generally use a tool called EclEmma. In older versions of Eclipse, we used to have to install this plugin separately, but it appears to be currently installed by default when installing Eclipse EE versions. If it can’t be found, you can always go to the Eclipse Marketplace (from the Eclipse Help Menu) and install it yourself.
From within Eclipse, running EclEmma is very simple. Just right-click on a single test class or a folder and select Coverage As > JUnit Test. This will execute your unit test or tests but also provide you with a coverage report (see the bottom of the image below). In addition, it will highlight any code in your application that is covered in green, and anything not covered in red. (Partial coverage, like an if statement that is tested as true but never as false, is highlighted in yellow.)
TIP: If you notice that it is evaluating the coverage of your test cases and want that removed, go to Preferences > Java > Code Coverage and set the “Only path entries matching” option to src/main/java.
☞ Spring & Hibernate for Beginners (includes Spring Boot)
☞ Spring Framework Master Class - Learn Spring the Modern Way!
☞ Master Microservices with Spring Boot and Spring Cloud
☞ Spring & Hibernate for Beginners (includes Spring Boot)
☞ Spring Framework Master Class - Learn Spring the Modern Way!
☞ Master Microservices with Spring Boot and Spring Cloud
☞ Build a Simple CRUD App with Spring Boot and Vue.js
☞ Build a Reactive App with Spring Boot and MongoDB
☞ Securing RESTful API with Spring Boot, Security, and Data MongoDB
☞ How to build GraphQL APIs with Kotlin, Spring Boot, and MongoDB?
*Originally published by Joy Foster at https://developer.okta.com*
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of high performance, Aurora MySQL and Aurora PostgreSQL have shown an increase in throughput of up to 5X over stock MySQL and 3X over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora achieves enhancements and innovations in storage and computing, and in horizontal and vertical scaling.
Aurora supports up to 128TB of storage capacity and dynamic scaling of the storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas: each region can have up to 15 additional Aurora replicas. In addition, Aurora provides a multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions with low latency.
Aurora already provides great scalability as user data volume grows. Can it handle more data and support more concurrent access? You may consider using sharding to support the configuration of multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides you with a reference for choosing between Proxy and JDBC for sharding.
AWS Aurora offers a single relational database. Primary-secondary, multi-primary, global database, and other forms of hosting architecture can satisfy the various architectural scenarios above. However, Aurora doesn’t provide direct support for sharding scenarios, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, some problems have to be solved, such as cross-node database Join, associated queries, distributed transactions, SQL sorting, page turning, function calculation, database global primary keys, capacity planning, and secondary capacity expansion after sharding.
It is generally accepted that when the capacity of a MySQL table is less than 10 million rows, query time is optimal, because at that size the height of its BTREE index is between 3 and 5. Data sharding can reduce the amount of data in a single table and at the same time distribute the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
1. Advantages of vertical sharding
2. Disadvantages of vertical sharding: Join can only be implemented by interface aggregation, which increases the complexity of development.
3. Advantages of horizontal sharding
4. Disadvantages of horizontal sharding: the performance of cross-shard Join is poor.
Based on the analysis above, and the available studies on popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to introduce how the combination of these two products meets various forms of sharding and how to solve the problems brought by sharding.
ShardingSphere is an open source ecosystem of distributed database middleware solutions, consisting of 3 independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
The characteristics of Sharding-JDBC are:
(Figure: Hybrid Structure Integrating Sharding-JDBC and Applications)
Sharding-JDBC’s core concepts:
Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.
Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.
Logic table: The logical name of horizontally sharded tables with the same schema. For instance, the logic table of the order tables product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: The primary table and the joined table that share the same sharding rules. For example, the product_order and product_order_item tables are both sharded by order_id, so they are binding tables of each other. Cartesian product correlation will not appear in multi-table correlating queries between binding tables, so query efficiency increases greatly.
Broadcast table: A table that exists in all sharding data sources, with identical schema and data in every database. It suits small tables that need to be correlated with big sharded tables in queries, such as dictionary and configuration tables.
Download the example project code locally. In order to ensure the stability of the test code, we choose the shardingsphere-example-4.0.0 version.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #database and table sharding profile
application-sharding-databases.properties #database sharding only profile
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table sharding only profile
application.properties #spring boot profile
Code logic description:
The following is the entry class of the Spring Boot application. Execute it to run the project. The execution logic of the demo is as follows:
As business grows, write and read requests can be split to different database nodes to effectively promote the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to meet users’ requirements to write and read with strong consistency, and a read-only endpoint to meet the requirements to read without strong consistency. Aurora’s read and write latency is within single-digit milliseconds, much lower than MySQL’s binlog-based logical replication, so a lot of load can be directed to a read-only endpoint.
Through the one-primary, multiple-secondary configuration, query requests can be evenly distributed to multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but applications on the upper layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of SQL statements and certain routing policies.
ShardingSphere-JDBC provides read/write splitting features, and it is integrated with application programs so that the complex configuration between application programs and database clusters can be separated from the application. Developers can manage the sharding through configuration files and combine it with ORM frameworks such as Spring JPA and MyBatis to completely separate the duplicated logic from the code, which greatly improves the maintainability of the code and reduces the coupling between code and database.
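As a minimal sketch of what this looks like from the application side (a hypothetical class, assuming a JdbcTemplate backed by the Sharding-JDBC logical DataSource), writes and reads go through one data source and are routed behind the scenes:
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
@Service
public class OrderQueryService {
    private final JdbcTemplate jdbcTemplate; // backed by the Sharding-JDBC logical DataSource
    public OrderQueryService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
    public void addOrder(long userId) {
        // a write: routed to the ds_master data source
        jdbcTemplate.update("INSERT INTO t_order (user_id, status) VALUES (?, ?)", userId, "INIT");
    }
    public List<Map<String, Object>> listOrders() {
        // a read outside a transaction: routed to ds_slave_0/ds_slave_1 by round_robin
        return jdbcTemplate.queryForList("SELECT * FROM t_order");
    }
}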
Create a set of Aurora MySQL read/write splitting clusters. The instance type is db.r5.2xlarge. Each cluster has one writer node and two reader nodes.
application.properties Spring Boot master profile description:
Replace the placeholder values (passwords, usernames, and JDBC URLs) with your own environment configuration.
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log, and you can see the conversion from logical SQL to actual SQL in the printed output
spring.shardingsphere.props.sql.show=true
As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.
As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in the form of polling.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project
2. Perform a failover on Aurora’s console
3. Execute the Rest API request
4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers successfully.
5. The following figure shows the process of executing code failover. It takes about 37 seconds from the time when the latest SQL write is successfully performed to the time when the next SQL write is successfully performed. That is, the application can automatically recover from Aurora failover, and the recovery time is about 37 seconds.
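The recovery window can be observed with a small hypothetical probe like the one below (not part of the sample project; the endpoint comes from step 4 above). It POSTs once per second and reports how long writes failed:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
public class FailoverProbe {
    public static void main(String[] args) throws Exception {
        long firstFailure = 0;
        while (true) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:8088/save-user").openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                try (OutputStream os = conn.getOutputStream()) {
                    os.write(new byte[0]); // empty body; the demo endpoint generates its own data
                }
                if (conn.getResponseCode() == 200 && firstFailure > 0) {
                    System.out.println("Recovered after " + (System.currentTimeMillis() - firstFailure) + " ms");
                    break;
                } else if (conn.getResponseCode() != 200 && firstFailure == 0) {
                    firstFailure = System.currentTimeMillis(); // first failed write
                }
            } catch (Exception e) {
                if (firstFailure == 0) firstFailure = System.currentTimeMillis(); // first failed write
            }
            Thread.sleep(1000);
        }
    }
}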
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties sharding-jdbc profile description
# t_order sharding policy (restored here to mirror the t_order_item rules below)
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
# configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When the Sharding-JDBC routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table and there is only one instance, a single t_address table is created. Two physical tables, t_order_0 and t_order_1, will be created when creating t_order.
2. Write operation
As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, data will be distributed to t_order_0 and t_order_1 according to the table splitting rules.
When t_order and t_order_item are bound, the associated order and order_item records are placed on the same physical table.
3. Read operation
As shown in the figure below, when you perform join query operations on order and order_item under the binding table, the physical shard is precisely located based on the binding relationship.
Join query operations on order and order_item under an unbound table will traverse all shards.
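As a toy illustration (plain Java, not the ShardingSphere API) of why the binding relationship matters, consider how the inline expressions shown earlier pick the physical tables:
public class ShardRoutingToy {
    public static void main(String[] args) {
        long orderId = 1001L;
        // inline expression t_order_$->{order_id % 2} picks the physical table
        String orderTable = "t_order_" + (orderId % 2);          // "t_order_1"
        String orderItemTable = "t_order_item_" + (orderId % 2); // "t_order_item_1"
        // Because t_order and t_order_item are bound on order_id, a join on
        // order_id is sent only to this matching pair of tables instead of
        // every t_order_x / t_order_item_y combination (the Cartesian product).
        System.out.println(orderTable + " JOIN " + orderItemTable);
    }
}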
Create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties sharding-jdbc profile description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s database splitting and routing rules are configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding tables according to the table splitting rules. Since t_address is a broadcast table, physical tables will be created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item will be created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written will also be written to the t_address tables of both ds_0 and ds_1.
The tables t_order and t_order_item are written to the table on the corresponding instance according to the database sharding column and routing policy.
3. Read operation
Queries on order are routed to the corresponding Aurora instance according to the database routing rules.
Queries on Address: since address is a broadcast table, an instance of address will be randomly selected from the nodes in use and queried.
As shown in the figure below, when you perform join query operations on order and order_item under the binding table, the physical shard is precisely located based on the binding relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1, plus the global table t_address, will be created on the two Aurora instances.
application.properties Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties: Sharding-JDBC profile description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
# replace <your-aurora-endpoint> with your own Aurora endpoint
spring.shardingsphere.datasource.ds_0.jdbc-url=jdbc:mysql://<your-aurora-endpoint>:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
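As a worked example of the two inline expressions above (the values are arbitrary), a row with user_id = 7 and order_id = 123 lands in ds_1.t_order_1:

public class DbAndTableRoutingSketch {
    public static void main(String[] args) {
        long userId = 7L, orderId = 123L;
        String database = "ds_" + (userId % 2);     // ds_$->{user_id % 2}       -> ds_1
        String table = "t_order_" + (orderId % 2);  // t_order_$->{order_id % 2} -> t_order_1
        System.out.println(database + "." + table); // ds_1.t_order_1
    }
}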
1. DDL operation
JPA automatically creates tables for testing. Once Sharding-JDBC's sharding and routing rules are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the table sharding rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1; t_order and t_order_item are likewise created on both ds_0 and ds_1, split into the physical tables t_order_0/t_order_1 and t_order_item_0/t_order_item_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
Rows for t_order and t_order_item are written to the corresponding instance's physical tables according to the sharding columns and routing policy.
3. Read operation
The read operation is similar to the database sharding verification described in Section 2.4.3.
The following figure shows the physical tables created on the database instances.
application.properties: Spring Boot master profile description
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties: Sharding-JDBC profile description
The URL, username, and password of the database need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
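A hedged sketch of what the master-slave rules above mean at runtime, again assuming a standard Spring Data repository (the repository and method names are mine, not the project's code):

import org.springframework.stereotype.Service;

@Service
public class ReadWriteSplitSketch {
    private final OrderRepository orderRepository;

    public ReadWriteSplitSketch(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    public void write(Order order) {
        // The INSERT is routed to the master of the shard selected by user_id,
        // e.g. ds_master_1 when user_id is odd.
        orderRepository.save(order);
    }

    public Order read(Long orderId) {
        // A plain SELECT is load-balanced across the slaves of the matching shard,
        // e.g. ds_master_1_slave_0 or ds_master_1_slave_1.
        return orderRepository.findById(orderId).orElse(null);
    }
}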
1. DDL operation
JPA automatically creates tables for testing. Once Sharding-JDBC's sharding and routing rules are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the table sharding rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1; t_order and t_order_item are likewise created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
Rows for t_order and t_order_item are written to the master of the corresponding shard according to the sharding column and routing policy.
3. Read operation
The join query operations on order and order_item under the binding-table relationship are shown below.
3. Conclusion
As an open-source product focused on database enhancement, ShardingSphere stands out for its community activity, product maturity, and documentation richness.
Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. Since there is no need to introduce an intermediate layer such as a proxy, operation and maintenance complexity is reduced, and its latency is theoretically lower than a proxy's because there is no middle layer. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases such as MySQL, PostgreSQL, Oracle, and SQL Server.
However, because Sharding-JDBC is integrated into the application, it only supports Java for now and is strongly coupled to the application. Nevertheless, Sharding-JDBC keeps all sharding configuration separate from the application code, which keeps the required changes relatively small when switching to other middleware.
In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want the burden of introducing an intermediate layer.
Author
Sun Jinhua
A senior solutions architect at AWS, Sun is responsible for cloud architecture design and consulting, providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business specializing in building e-commerce platforms and designing the overall architecture of e-commerce platforms for automotive companies. He also worked as a senior engineer at a leading global communication-equipment company, responsible for the development and architecture design of several subsystems of an LTE equipment system. He has rich experience in designing high-concurrency, high-availability systems, microservice architectures, databases, middleware, IoT, and more.
1624326103
Integration tests play a key role in ensuring the quality of an application, and a framework like Spring Boot makes it even easier to write them. Importantly, integration tests can exercise the application without deploying it to an application server.
Integration tests help to test the data access layer of your application and exercise multiple units together. For a Spring Boot application, the tests need a running ApplicationContext. Integration tests can also help in testing exception handling.
For this demo, we will build a simple Spring Boot application with REST APIs, using the H2 in-memory database to store the data. Finally, I will show how to write an integration test. The application reads a JSON file of vulnerabilities from the National Vulnerability Database and stores it in the H2 database; the REST APIs let a user fetch that data in a more readable format.
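A minimal sketch of such an integration test, assuming a hypothetical endpoint /vulnerabilities (the path and class names are illustrative, not the demo's actual code):

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class VulnerabilityApiIntegrationTest {

    @Autowired
    private TestRestTemplate restTemplate; // pre-configured against the random port

    @Test
    void fetchesVulnerabilities() {
        // Boots the full ApplicationContext (with the H2 database) and
        // exercises the REST layer end to end, no external server needed.
        String body = restTemplate.getForObject("/vulnerabilities", String.class);
        assertThat(body).isNotNull();
    }
}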
#java8 #spring-boot-2 #integration-testing #springboottest #spring-framework #integration-testing-in-spring-boot-application
1625714967
In this video, we will learn how to test the repository (DAO) layer using Spring Boot's @DataJpaTest annotation.
We will write JUnit test cases for the CRUD operations: create, read, update, and delete.
Spring Boot provides the @DataJpaTest annotation for testing persistence-layer components; it auto-configures an in-memory embedded database and scans for @Entity classes and Spring Data JPA repositories. The @DataJpaTest annotation does not load other Spring beans (@Component, @Controller, @Service, and other annotated beans) into the ApplicationContext.
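A minimal sketch of such a test, with a hypothetical Product entity and ProductRepository (the names are illustrative, not from the video):

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest // auto-configures an embedded in-memory database; loads only the JPA slice
class ProductRepositoryTest {

    @Autowired
    private ProductRepository productRepository;

    @Test
    void performsCrudOperations() {
        Product saved = productRepository.save(new Product("book"));       // create
        assertThat(productRepository.findById(saved.getId())).isPresent(); // read

        saved.setName("notebook");
        productRepository.save(saved);                                     // update

        productRepository.deleteById(saved.getId());                       // delete
        assertThat(productRepository.findById(saved.getId())).isEmpty();
    }
}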
#junit #spring #testing #spring-boot #springdatajpa