In this post, you’ll learn about microservices architecture and how to implement it using Spring Boot. After creating some projects with this technique, you will deploy the artifacts as Docker containers and, for simplicity, use Docker Compose to simulate a container orchestrator (such as Kubernetes).
The icing on the cake will be authentication integration using Spring Profiles; you will see how to enable it with a production profile.
But first, let’s talk about microservices.
A microservices architecture, as opposed to a monolith, divides your application into small, logically related pieces. These pieces are independent programs that communicate with each other using HTTP or messages, for example.
There is some discussion about how small “micro” is. Some say a microservice is software that can be created in a single sprint; others say a microservice can be bigger, as long as it stays logically cohesive (you can’t mix apples and oranges, for example). I agree with Martin Fowler: size doesn’t matter that much, and it’s more about the style.
There are many advantages to microservices:
However, there are some drawbacks:
Nowadays, it’s commonly accepted that you should avoid a microservices architecture at first. After some iterations, the code divisions, and the demands of your project, will become clearer. Handling microservices is often too expensive for a development team that is just starting a small project.
You’ll build two projects in this tutorial: a service (school-service) and a UI (school-ui). The service provides the persistence layer and business logic, and the UI provides the graphical user interface. Connecting them is possible with minimal configuration.
After the initial setup, I’ll talk about discovery and configuration services. Both are essential parts of any massively distributed architecture. To prove this point, you will integrate them with OAuth 2.0 and use the configuration project to set the OAuth 2.0 keys.
Finally, each project will be transformed into a Docker image. Docker Compose will be used to simulate a container orchestrator as Compose will manage every container with an internal network between the services.
Lastly, Spring profiles will be introduced to change the configuration based on the currently active environment. That way, you will have two OAuth 2.0 environments: one for development, and another for production.
Fewer words, more code! Clone this tutorial’s repository and check out the start branch.
```shell
git clone -b start https://github.com/oktadeveloper/okta-spring-microservices-docker-example.git
```
The root pom.xml
file is not a requirement. However, it can be helpful to manage multiple projects at once. Let’s look inside:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.okta.developer.docker_microservices</groupId>
    <artifactId>parent-pom</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>parent-project</name>
    <modules>
        <module>school-service</module>
        <module>school-ui</module>
    </modules>
</project>
```
This is called an aggregate project because it aggregates child projects. It is useful for running the same Maven task on all declared modules. The modules do not need to use the root module as a parent.
There are two modules available: a school service, and a school UI.
The school-service directory contains a Spring Boot project that acts as the project’s persistence layer and business rules. In a more complex scenario, you would have more services like this. The project was created using the always excellent Spring Initializr with the following configuration:

- Group: `com.okta.developer.docker_microservices`
- Artifact: `school-service`

You can get more details about this project by reading Spring Boot with PostgreSQL, Flyway, and JSONB. To summarize, it has the entities `TeachingClass`, `Course`, and `Student`, and uses `TeachingClassServiceDB` and `TeachingClassController` to expose some data through a REST API. To test it, open a terminal, navigate to the `school-service` directory, and run the command below:
```shell
./mvnw spring-boot:run
```
The application will start on port 8081
(as defined in file school-service/src/main/resources/application.properties
), so you should be able to navigate to [http://localhost:8081](http://localhost:8081)
and see the returned data.
```shell
> curl http://localhost:8081
[
  {
    "classId": 13,
    "teacherName": "Profesor Jirafales",
    "teacherId": 1,
    "courseName": "Mathematics",
    "courseId": 3,
    "numberOfStudents": 2,
    "year": 1988
  },
  {
    "classId": 14,
    "teacherName": "Profesor Jirafales",
    "teacherId": 1,
    "courseName": "Spanish",
    "courseId": 4,
    "numberOfStudents": 2,
    "year": 1988
  },
  {
    "classId": 15,
    "teacherName": "Professor X",
    "teacherId": 2,
    "courseName": "Dealing with unknown",
    "courseId": 5,
    "numberOfStudents": 2,
    "year": 1995
  },
  {
    "classId": 16,
    "teacherName": "Professor X",
    "teacherId": 2,
    "courseName": "Dealing with unknown",
    "courseId": 5,
    "numberOfStudents": 1,
    "year": 1996
  }
]
```
The school UI is, as the name says, the user interface that utilizes the School Service. It was created using Spring Initializr with the following options:

- Group: `com.okta.developer.docker_microservices`
- Artifact: `school-ui`
The UI is a single web page that lists the classes available in the database. To get the information, it connects to the school-service through a configuration in the file `school-ui/src/main/resources/application.properties`:

```properties
service.host=localhost:8081
```
The `SchoolController` class has all the logic to query the service:
```java
package com.okta.developer.docker_microservices.ui.controller;

import com.okta.developer.docker_microservices.ui.dto.TeachingClassDto;
import org.springframework.beans.factory.annotation.*;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.*;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.servlet.ModelAndView;

import java.util.List;

@Controller
@RequestMapping("/")
public class SchoolController {
    private final RestTemplate restTemplate;
    private final String serviceHost;

    public SchoolController(RestTemplate restTemplate, @Value("${service.host}") String serviceHost) {
        this.restTemplate = restTemplate;
        this.serviceHost = serviceHost;
    }

    @RequestMapping("")
    public ModelAndView index() {
        return new ModelAndView("index");
    }

    @GetMapping("/classes")
    public ResponseEntity<List<TeachingClassDto>> listClasses() {
        return restTemplate
            .exchange("http://" + serviceHost + "/class", HttpMethod.GET, null,
                new ParameterizedTypeReference<List<TeachingClassDto>>() {});
    }
}
```
As you can see, there is a hard-coded location for the service. You can override the property with a system property like `-Dservice.host=localhost:9090`, but it still has to be manually defined. How about having many instances of the school-service application? Impossible at the current stage.
With school-service turned on, start school-ui
, and navigate to it in a browser at [http://localhost:8080](http://localhost:8080)
:
```shell
./mvnw spring-boot:run
```
You should see a page like the following:
Now you have a working application that uses two services to provide information to the end user. What is wrong with it? In modern applications, developers (or operations) usually don’t know where or on what port an application might be deployed. The deployment should be automated so that no one cares about server names and physical locations. (Unless you work inside a data center. If you do, I hope you care!)
Nonetheless, it is essential to have a tool that helps the services to discover their counterparts. There are many solutions available, and for this tutorial, we are going to use Eureka from Netflix as it has outstanding Spring support.
Go back to start.spring.io and create a new project as follows:
- Group: `com.okta.developer.docker_microservices`
- Artifact: `discovery`
Edit the main DiscoveryApplication.java
class to add an @EnableEurekaServer
annotation:
```java
package com.okta.developer.docker_microservices.discovery;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryApplication.class, args);
    }
}
```
And, you’ll need to update its application.properties
file so it runs on port 8761 and doesn’t try to register with itself.
```properties
spring.application.name=discovery-server
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
```
Let’s define each property:

- `spring.application.name` - The name of the application, also used by the discovery service to discover a service. You’ll see that every other application has an application name too.
- `server.port` - The port the server runs on. `8761` is the default port for a Eureka server.
- `eureka.client.register-with-eureka` - Tells Spring not to register itself with the discovery service.
- `eureka.client.fetch-registry` - Indicates this instance should not fetch discovery information from the server.

Now, run the project and access [http://localhost:8761](http://localhost:8761).

```shell
./mvnw spring-boot:run
```
The screen above shows the Eureka server ready to register new services. Now, it is time to change school-service and school-ui to use it.
NOTE: If you receive a ClassNotFoundException: javax.xml.bind.JAXBContext
error on startup, it’s because you’re running on Java 11. You can add JAXB dependencies to your pom.xml
to fix this.
```xml
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.2</version>
</dependency>
```
First, it is important to add the required dependencies. Add the following to both `pom.xml` files (in the school-service and school-ui projects):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
```
This module is part of the Spring Cloud initiative and, as such, needs a new dependency management node as follows (don’t forget to add to both projects):
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
Now you need to configure both applications to register with Eureka.
In the `application.properties` file of both projects, add the following lines:

```properties
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER:http://localhost:8761/eureka}
spring.application.name=school-service
```

Don’t forget to change the application name from `school-service` to `school-ui` in the school-ui project. Notice there is a new kind of parameter in the first line: `${EUREKA_SERVER:http://localhost:8761/eureka}`. It means “if the environment variable EUREKA_SERVER exists, use its value; if not, here’s a default value.” This will be useful in future steps. ;)
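This fallback behavior can be sketched in plain Java. This is an illustration only: Spring actually resolves placeholders through its `Environment` abstraction, and the class and method names below are hypothetical.

```java
import java.util.Map;

public class PlaceholderDemo {
    // Resolve a ${VAR:default}-style placeholder: prefer the environment
    // variable's value, fall back to the default when it is absent.
    static String resolve(Map<String, String> env, String var, String defaultValue) {
        return env.getOrDefault(var, defaultValue);
    }

    public static void main(String[] args) {
        // No EUREKA_SERVER set: the default wins.
        System.out.println(resolve(Map.of(),
                "EUREKA_SERVER", "http://localhost:8761/eureka"));
        // EUREKA_SERVER set (e.g., inside a container): its value wins.
        System.out.println(resolve(Map.of("EUREKA_SERVER", "http://discovery:8761/eureka"),
                "EUREKA_SERVER", "http://localhost:8761/eureka"));
    }
}
```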
You know what? Both applications are now ready to register themselves with the discovery service. You don’t need to do anything more. Our primary objective is that the school-ui project does not need to know where school-service is. As such, you need to change `SchoolController` (in the `school-ui` project) to use the `school-service` name in its REST endpoint. You can also remove the `serviceHost` variable from this class.
```java
package com.okta.developer.docker_microservices.ui.controller;

import com.okta.developer.docker_microservices.ui.dto.TeachingClassDto;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.servlet.ModelAndView;

import java.util.List;

@Controller
@RequestMapping("/")
public class SchoolController {
    private final RestTemplate restTemplate;

    public SchoolController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @RequestMapping("")
    public ModelAndView index() {
        return new ModelAndView("index");
    }

    @GetMapping("/classes")
    public ResponseEntity<List<TeachingClassDto>> listClasses() {
        return restTemplate
            .exchange("http://school-service/class", HttpMethod.GET, null,
                new ParameterizedTypeReference<List<TeachingClassDto>>() {});
    }
}
```
Before integrating Eureka, you had a configuration pointing out where school-service was. Now, you’ve changed the service calls to use the name used by the other service: no ports, no hostname. The service you need is somewhere, and you don’t need to know where.
The school-service may have multiple instances running, so it would be a good idea to load balance the calls between them. Thankfully, Spring has a simple solution: on the `RestTemplate` bean creation, add the `@LoadBalanced` annotation as follows. Spring will then balance requests across the available instances each time you call the service.
```java
package com.okta.developer.docker_microservices.ui;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.servlet.config.annotation.*;

@SpringBootApplication
public class UIWebApplication implements WebMvcConfigurer {

    public static void main(String[] args) {
        SpringApplication.run(UIWebApplication.class, args);
    }

    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        if (!registry.hasMappingForPattern("/static/**")) {
            registry.addResourceHandler("/static/**")
                .addResourceLocations("classpath:/static/", "classpath:/static/js/");
        }
    }
}
```
Now, restart school-service and school-ui (and keep the discovery service up). Have a quick look at [http://localhost:8761](http://localhost:8761)
again:
Now your services are sharing info with the discovery server. You can test the application again and see that it works as always. Just go to [http://localhost:8080](http://localhost:8080) in your favorite browser.
While this configuration works, it’s even better to remove any trace of configuration values from the project’s source code. First, the service URL was removed from the project and became managed by the discovery service. Now, you can do a similar thing for every configuration value in the project using Spring Cloud Config.
First, create the configuration project using Spring Initializr and the following parameters:
- Group: `com.okta.developer.docker_microservices`
- Artifact: `config`
In the main class, add @EnableConfigServer
:
```java
package com.okta.developer.docker_microservices.config;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {
    ...
}
```
Add the following properties and values in the project’s application.properties
:
```properties
spring.application.name=CONFIGSERVER
server.port=8888
spring.profiles.active=native
spring.cloud.config.server.native.searchLocations=.
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER:http://localhost:8761/eureka}
```
Some explanation about the properties:

- `spring.profiles.active=native` - Indicates Spring Cloud Config must use the native file system to obtain the configuration. Normally a Git repository is used, but we are going to stick with the native filesystem for simplicity’s sake.
- `spring.cloud.config.server.native.searchLocations` - The path containing the configuration files. If you change this to a specific folder on your hard drive, make sure you create the `school-ui.properties` file in it.

Now, you need something to configure and apply to this example. How about Okta’s configuration? Let’s put our school-ui behind an authorization layer and use the property values provided by the configuration project.
You can register for a free-forever developer account that will enable you to create as many users and applications as you need! After creating your account, create a new Web application in Okta’s dashboard (Applications > Add Application):

And fill in the next form with the following values:

The page will return a client ID and a secret key. Keep them safe, and create a file called `school-ui.properties` in the root folder of the `config` project with the following contents. Don’t forget to populate the variable values:

```properties
okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId={yourClientId}
okta.oauth2.clientSecret={yourClientSecret}
```
Now, run the `config` project and check that it’s serving the configuration data properly:

```shell
./mvnw spring-boot:run
```

```shell
> curl http://localhost:8888/school-ui.properties
okta.oauth2.clientId: YOUR_CLIENT_ID
okta.oauth2.clientSecret: YOUR_CLIENT_SECRET
okta.oauth2.issuer: https://YOUR_DOMAIN/oauth2/default
```
Now you need to change the Spring UI project a little bit.
First, you need to change `school-ui/pom.xml` and add some new dependencies:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>com.okta.spring</groupId>
    <artifactId>okta-spring-boot-starter</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf.extras</groupId>
    <artifactId>thymeleaf-extras-springsecurity5</artifactId>
</dependency>
```
Create a new `SpringSecurityConfiguration` class in the `com.okta.developer.docker_microservices.ui` package:
```java
package com.okta.developer.docker_microservices.ui;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SpringSecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/").permitAll()
                .anyRequest().authenticated()
            .and()
                .logout().logoutSuccessUrl("/")
            .and()
                .oauth2Login();
    }
}
```
Change your `SchoolController` so only users with the `profile` scope are allowed (every authenticated user will have it):
```java
import org.springframework.security.access.prepost.PreAuthorize;
....

@GetMapping("/classes")
@PreAuthorize("hasAuthority('SCOPE_profile')")
public ResponseEntity<List<TeachingClassDto>> listClasses() {
    return restTemplate
        .exchange("http://school-service/class", HttpMethod.GET, null,
            new ParameterizedTypeReference<List<TeachingClassDto>>() {});
}
```
Some configuration needs to be defined at boot time. Spring has a clever solution to locate and extract configuration data properly before context startup. You need to create a file `src/main/resources/bootstrap.yml` like this:
```yaml
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_SERVER:http://localhost:8761/eureka}
spring:
  application:
    name: school-ui
  cloud:
    config:
      discovery:
        enabled: true
        service-id: CONFIGSERVER
```
The bootstrap file creates a pre-boot Spring Application Context responsible for extracting configuration before the real application starts. You need to move all properties from application.properties
to this file as Spring needs to know where your Eureka Server is located and how it should search for configuration. In the example above, you enabled configuration over discovery service (spring.cloud.config.discovery.enabled
) and specified the Configuration service-id
.
Change your application.properties
file so it only has one OAuth 2.0 property:
```properties
okta.oauth2.redirect-uri=/authorization-code/callback
```
The last file to modify is `src/main/resources/templates/index.html`. Adjust it to show a login button if the user is not authenticated, and a logout button if the user is logged in:

```html
<p th:if="${#authorization.expression('isAuthenticated()')}">
    Hello, world!
    <a th:href="@{/logout}">Logout</a>
</p>
<p th:unless="${#authorization.expression('isAuthenticated()')}">
    <a th:href="@{/oauth2/authorization/okta}">Login</a>
</p>
<h1>School classes</h1>
<table>
    <tr>
        <th>Course</th>
        <th>Teacher</th>
        <th>Year</th>
        <th>Number of students</th>
    </tr>
</table>
```
There are some Thymeleaf properties you should know about in this HTML:

- `@{/logout}` - returns the logout URL defined on the backend
- `th:if="${#authorization.expression('isAuthenticated()')}"` - only prints the HTML if the user is logged in
- `@{/oauth2/authorization/okta}` - this is the URL Spring Security redirects to for Okta. You could link to `/login` as well, but that just renders the same link and you have to click on it.
- `th:unless="${#authorization.expression('isAuthenticated()')}"` - only prints the HTML inside the node if the user is logged off

Now restart the configuration project and school-ui again. If you navigate to [http://localhost:8080](http://localhost:8080), you should see the following screen:
After logging in, the screen should look like this:
Congratulations, you created a microservices architecture using Spring Cloud Config and Eureka for service discovery! Now, let’s go one step further and Dockerize every service.
Docker is a marvelous technology that allows the creation of system images similar to virtual machines, but sharing the same kernel as the host operating system. This feature improves system performance and startup time. Also, Docker provides an ingenious build system that guarantees that once an image is created, it won’t change, ever. In other words: no more “it works on my machine!”
TIP: Need a deeper Docker background? Have a look at our Developer’s Guide To Docker.
You’ll need to create one Docker image for each project. Each image should have the same Maven configuration and Dockerfile
content in the root folder of each project (e.g., school-ui/Dockerfile
).
In each project’s pom, add the dockerfile-maven-plugin
:
```xml
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.9</version>
    <executions>
        <execution>
            <id>default</id>
            <goals>
                <goal>build</goal>
                <goal>push</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <repository>developer.okta.com/microservice-docker-${project.artifactId}</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>
```
This XML configures the Dockerfile Maven plugin to build a Docker image every time you run ./mvnw install
. Each image will be created with the name developer.okta.com/microservice-docker-${project.artifactId}
where project.artifactId
varies by project.
Create a Dockerfile
file in the root directory of each project.
```dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD target/*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
```
The Dockerfile
follows what is recommended by Spring Boot with Docker.
Now, change school-ui/src/main/resources/bootstrap.yml
to add a new failFast
setting:
```yaml
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_SERVER:http://localhost:8761/eureka}
spring:
  application:
    name: school-ui
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: CONFIGSERVER
      failFast: true
```
The `spring.cloud.config.failFast: true` setting tells Spring Cloud Config to terminate the application as soon as it can’t find the configuration server. This will be useful for the next step.
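The interplay between failing fast and the orchestrator’s restart policy can be sketched as a tiny simulation. All names here are hypothetical; Docker Compose does the real restarting, as shown in the next step.

```java
import java.util.function.BooleanSupplier;

public class FailFastDemo {
    // Simulates one container start: exit code 0 if the config server was
    // reachable, 1 if the app failed fast because it was not.
    static int startOnce(BooleanSupplier configServerUp) {
        return configServerUp.getAsBoolean() ? 0 : 1;
    }

    // Simulates a "restart: on-failure" loop: keep restarting the container
    // until it starts cleanly, counting how many restarts were needed.
    static int restartsUntilHealthy(BooleanSupplier configServerUp) {
        int restarts = 0;
        while (startOnce(configServerUp) != 0) {
            restarts++; // the orchestrator restarts the failed container
        }
        return restarts;
    }

    public static void main(String[] args) {
        // Pretend the config server only becomes ready on the fourth attempt.
        int[] attempts = {0};
        int restarts = restartsUntilHealthy(() -> ++attempts[0] > 3);
        System.out.println("restarts before a healthy start: " + restarts); // 3
    }
}
```

The point of the sketch: the application itself never retries; it crashes immediately, and the orchestrator provides the retry loop.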
Create a new file called docker-compose.yml
that defines how each project starts:
```yaml
version: '3'
services:
  discovery:
    image: developer.okta.com/microservice-docker-discovery:0.0.1-SNAPSHOT
    ports:
      - 8761:8761
  config:
    image: developer.okta.com/microservice-docker-config:0.0.1-SNAPSHOT
    volumes:
      - ./config-data:/var/config-data
    environment:
      - JAVA_OPTS=
        -DEUREKA_SERVER=http://discovery:8761/eureka
        -Dspring.cloud.config.server.native.searchLocations=/var/config-data
    depends_on:
      - discovery
    ports:
      - 8888:8888
  school-service:
    image: developer.okta.com/microservice-docker-school-service:0.0.1-SNAPSHOT
    environment:
      - JAVA_OPTS=
        -DEUREKA_SERVER=http://discovery:8761/eureka
    depends_on:
      - discovery
      - config
  school-ui:
    image: developer.okta.com/microservice-docker-school-ui:0.0.1-SNAPSHOT
    environment:
      - JAVA_OPTS=
        -DEUREKA_SERVER=http://discovery:8761/eureka
    restart: on-failure
    depends_on:
      - discovery
      - config
    ports:
      - 8080:8080
```
As you can see, each project is now a declared service in the Docker Compose file, with its ports exposed and some other properties set:

- Every service defines `-DEUREKA_SERVER=http://discovery:8761/eureka`, which tells it where to find the discovery server. Docker Compose creates a virtual network between the services, and the DNS name of each service is its name: that’s why it’s possible to use `discovery` as the hostname.
- The config service maps a volume to `/var/config-data` inside the Docker container, and the property `spring.cloud.config.server.native.searchLocations` is overwritten to the same value. You must store the `school-ui.properties` file in the folder specified in the volume mapping (in the example above, the relative folder `./config-data`).
- The school-ui service declares `restart: on-failure`. This sets Docker Compose to restart the application as soon as it fails. Used together with the `failFast` property, it allows the application to keep trying to start until the discovery and config projects are completely ready.

And that’s it! Now, build the images:
```shell
cd config && ./mvnw clean install
cd ../discovery && ./mvnw clean install
cd .. && ./mvnw clean install
```
The last command will likely fail with the following error in the school-ui
project:
```
java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: java.lang.IllegalStateException: No instances found of configserver (CONFIGSERVER)
```
To fix this, create a `school-ui/src/test/resources/test.properties` file and add properties so Okta’s config passes, and so it doesn’t use discovery or the config server when testing:

```properties
okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId=TEST
spring.cloud.discovery.enabled=false
spring.cloud.config.discovery.enabled=false
spring.cloud.config.enabled=false
```
Then modify UIWebApplicationTests.java
to load this file for test properties:
```java
import org.springframework.test.context.TestPropertySource;
...

@TestPropertySource(locations = "classpath:test.properties")
public class UIWebApplicationTests {
    ...
}
```
Now you should be able to run ./mvnw clean install
in the school-ui
project.
Once that completes, run Docker Compose to start all your containers (in the same directory where docker-compose.yml
is).
```shell
docker-compose up -d
```

```
Starting okta-microservice-docker-post-final_discovery_1      ... done
Starting okta-microservice-docker-post-final_config_1         ... done
Starting okta-microservice-docker-post-final_school-ui_1      ... done
Starting okta-microservice-docker-post-final_school-service_1 ... done
```
Now you should be able to browse the application as you did previously.
Now you’ve reached the last stage of today’s journey through microservices. Spring Profiles is a powerful tool. Using profiles, it is possible to completely modify program behavior by injecting different dependencies or configurations.
Imagine you have well-architected software with its persistence layer separated from the business logic. You also provide support for MySQL and PostgreSQL, for example. You can have different data access classes for each database, loaded only when the corresponding profile is active.
Another use case is for configuration: different profiles might have different configurations. Take authentication, for instance. Will your test environment have authentication? If it does, it shouldn’t use the same user directory as production.
Change your configuration project to have two apps in Okta: one default (for development) and another for production. Create a new Web application on Okta website and name it “okta-docker-production.”
Now, in your `config` project, create a new file called `school-ui-production.properties`. You already have `school-ui.properties`, which is used by every School UI instance. When you add an environment name to the end of the file name, Spring merges both files, with the more specific file taking precedence. Save the file with your production app’s client ID and secret, like this:

school-ui-production.properties:

```properties
okta.oauth2.clientId={YOUR_PRODUCTION_CLIENT_ID}
okta.oauth2.clientSecret={YOUR_PRODUCTION_CLIENT_SECRET}
```
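The precedence rule can be illustrated with `java.util.Properties` defaults. This is only an analogy: Spring merges property sources through its `Environment`, not this class, and the values below are placeholders.

```java
import java.util.Properties;

public class ProfileMergeDemo {
    // The shared file acts as a fallback; the profile-specific file only
    // overrides the keys it declares.
    static Properties merged() {
        // school-ui.properties: shared by every School UI instance.
        Properties shared = new Properties();
        shared.setProperty("okta.oauth2.issuer", "https://example.okta.com/oauth2/default");
        shared.setProperty("okta.oauth2.clientId", "dev-client-id");

        // school-ui-production.properties: overrides only what differs.
        Properties production = new Properties(shared); // 'shared' supplies defaults
        production.setProperty("okta.oauth2.clientId", "prod-client-id");
        return production;
    }

    public static void main(String[] args) {
        Properties props = merged();
        System.out.println(props.getProperty("okta.oauth2.clientId")); // prod-client-id (overridden)
        System.out.println(props.getProperty("okta.oauth2.issuer"));   // inherited from the shared file
    }
}
```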
Now, run the configuration project using Maven, then run the following two curl
commands:
```shell
./mvnw spring-boot:run
```

```shell
> curl http://localhost:8888/school-ui.properties
okta.oauth2.issuer: https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId: ==YOUR DEV CLIENT ID HERE==
okta.oauth2.clientSecret: ==YOUR DEV CLIENT SECRET HERE==

> curl http://localhost:8888/school-ui-production.properties
okta.oauth2.issuer: https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId: ==YOUR PROD CLIENT ID HERE==
okta.oauth2.clientSecret: ==YOUR PROD CLIENT SECRET HERE==
```
As you can see, even though the file `school-ui-production.properties` defines only two properties, the `config` project returns three for the production profile, since the configurations are merged.
Now, you can change the school-ui
service in the docker-compose.yml
to use the production
profile:
```yaml
  school-ui:
    image: developer.okta.com/microservice-docker-school-ui:0.0.1-SNAPSHOT
    environment:
      - JAVA_OPTS=
        -DEUREKA_SERVER=http://discovery:8761/eureka
        -Dspring.profiles.active=production
    restart: on-failure
    depends_on:
      - discovery
      - config
    ports:
      - 8080:8080
```
You’ll also need to copy school-ui-production.properties
to your config-data
directory. Then shut down all your Docker containers and restart them.
```shell
docker-compose down
docker-compose up -d
```
You should see the following printed in the logs of the school-ui
container:
```
The following profiles are active: production
```
That’s it! Now you have your microservices architecture running with a production profile. Huzzah!
TIP: If you want to prove your okta-docker-production
app is used and not okta-docker
, you can deactivate the okta-docker
app in Okta and confirm you can still log in at [http://localhost:8080](http://localhost:8080)
.
#java #spring-boot #docker #microservices
Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown throughput increases of up to 5X over stock MySQL and 3X over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora achieves enhancements and innovations in storage and computing, both horizontally and vertically.
Aurora supports up to 128TB of storage capacity and supports dynamic scaling of storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas. Each region can have an additional 15 Aurora replicas. In addition, Aurora provides multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions in low latency.
Aurora already provides great scalability as user data volume grows. Can it handle even more data and support more concurrent access? You may consider using sharding to distribute data across multiple underlying Aurora clusters. To this end, this series of blogs, including this one, provides a reference for choosing between Proxy and JDBC for sharding.
AWS Aurora offers a single relational database. Primary-secondary, multi-primary, global database, and other hosting architectures can satisfy the various scenarios above. However, Aurora doesn’t provide direct support for sharding, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, several problems have to be solved, such as cross-node database `Join`, associated queries, distributed transactions, SQL sorting, page turning, function calculation, global primary keys, capacity planning, and secondary capacity expansion after sharding.
It is generally accepted that when a MySQL table holds fewer than 10 million rows, query time is optimal because the height of its `BTREE` index stays between 3 and 5. Data sharding reduces the amount of data in a single table and distributes the read and write load across different data nodes at the same time. Data sharding can be divided into vertical sharding and horizontal sharding.
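The index-height claim can be sanity-checked with a rough estimate, `ceil(log_fanout(rows))`, where the fanout (keys per index page) is an assumption; a few hundred to around a thousand is plausible for InnoDB with small keys.

```java
public class BTreeHeightDemo {
    // Rough B-tree height estimate: the number of levels needed so that
    // fanout^height >= rows, i.e. ceil(log_fanout(rows)).
    static int estimatedHeight(long rows, int fanout) {
        return (int) Math.ceil(Math.log(rows) / Math.log(fanout));
    }

    public static void main(String[] args) {
        // 10 million rows with an assumed fanout of 300 keys per page:
        System.out.println(estimatedHeight(10_000_000L, 300)); // 3
        // A smaller fanout (wider keys) pushes the height up:
        System.out.println(estimatedHeight(10_000_000L, 100)); // 4
    }
}
```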
1. Advantages of vertical sharding
2. Disadvantages of vertical sharding
Join
can only be implemented by interface aggregation, which will increase the complexity of development.3. Advantages of horizontal sharding
4. Disadvantages of horizontal sharding: Join performance across shards is poor.
Based on the analysis above and the available studies on popular sharding middleware, we selected ShardingSphere, an open source product, combined with Amazon Aurora, to show how this combination supports the various forms of sharding and solves the problems that sharding introduces.
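As a minimal illustration of the horizontal-sharding idea discussed above, the sketch below shows how a router might map a sharding-key value onto one of several physical tables. This is a simplified, hypothetical helper, not ShardingSphere's implementation; the class and method names are invented for illustration.

```java
// Hypothetical sketch: how a horizontal-sharding router picks the
// physical table for a row, assuming shards keyed on a numeric column.
public class TableRouter {
    private final String logicTable;
    private final int shardCount;

    public TableRouter(String logicTable, int shardCount) {
        this.logicTable = logicTable;
        this.shardCount = shardCount;
    }

    /** Maps a sharding-key value to a physical table name, e.g. t_order_1. */
    public String route(long shardingKey) {
        return logicTable + "_" + Math.floorMod(shardingKey, (long) shardCount);
    }
}
```

With two shards, order 7 lands in `t_order_1` and order 10 in `t_order_0`; every query on the logic table must be rewritten to the physical table chosen this way, which is exactly the work sharding middleware takes off the application's hands.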
ShardingSphere is an open source ecosystem of distributed database middleware solutions, consisting of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.
The characteristics of Sharding-JDBC are:
Hybrid Structure Integrating Sharding-JDBC and Applications
Sharding-JDBC’s core concepts
Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.
Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.
Logic table: The logical name of the horizontally sharded databases (tables) with the same schema. For instance, the logic table of product_order_0, product_order_1, and product_order_2 is product_order.
Binding table: Refers to a primary table and its join table that share the same sharding rules. For example, the product_order and product_order_item tables are both sharded by order_id, so they are binding tables of each other. Correlated queries across binding tables will not produce a Cartesian product, so query efficiency increases greatly.
Broadcast table: Refers to tables that exist in all sharding data sources. The schema and data must be consistent in each database. It suits small tables that need to be joined with large sharded tables, such as dictionary and configuration tables.
Download the example project code locally. To ensure the stability of the test code, we use the shardingsphere-example-4.0.0 version.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa #spring+jpa integration-based entity,repository
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
│ │ └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties #library split profile only
application-sharding-master-slave.properties #sharding and read/write splitting profile
application-sharding-tables.properties #table split profile
application.properties #spring boot profile
Code logic description:
The following is the entry class of the Spring Boot application. Execute it to run the project.
The execution logic of demo is as follows:
As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to meet requirements for strongly consistent reads and writes, and a read-only endpoint for reads that do not require strong consistency. Aurora's replication latency is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a large share of the load can be directed to the read-only endpoint.
Through a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, further improving the processing capability of the system. Read/write splitting improves the throughput and availability of the system, but it can also lead to data inconsistency. Aurora provides the primary/secondary architecture in a fully managed form, but upper-layer applications still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of each SQL statement and certain routing policies.
ShardingSphere-JDBC provides read/write splitting and integrates with application programs, so the complex wiring between applications and database clusters can be kept out of application code. Developers manage the sharding rules through configuration files and combine them with ORM frameworks such as Spring JPA and MyBatis, completely separating this duplicated logic from the code. This greatly improves maintainability and reduces the coupling between code and database.
Create a set of Aurora MySQL read/write splitting clusters of instance class db.r5.2xlarge. Each cluster has one writer node and two reader nodes.
application.properties Spring Boot master profile description:
Replace the highlighted values with your own environment configuration.
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# enable the ShardingSphere log, so you can see the conversion from logical SQL to actual SQL in the printed output
spring.shardingsphere.props.sql.show=true
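The routing rule the configuration above describes can be sketched in a few lines of Java: writes go to the master data source, while reads rotate round-robin over the slave data sources. This is a simplified, hypothetical model of the behavior, not ShardingSphere's actual routing code.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the read/write-splitting rule configured above:
// writes go to the master, reads round-robin over the slaves.
public class MasterSlaveRouter {
    private final String master;
    private final List<String> slaves;
    private final AtomicInteger counter = new AtomicInteger();

    public MasterSlaveRouter(String master, List<String> slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    /** Picks a data source name for the given SQL statement. */
    public String route(String sql) {
        boolean isRead = sql.trim().toLowerCase().startsWith("select");
        if (!isRead) return master;                               // all writes hit the master
        int i = Math.floorMod(counter.getAndIncrement(), slaves.size());
        return slaves.get(i);                                     // round-robin over read replicas
    }
}
```

With `ds_master` and two slaves, an INSERT always routes to `ds_master`, while successive SELECTs alternate between `ds_slave_0` and `ds_slave_1`, matching the `round_robin` load-balance algorithm in the properties file.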
As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.
As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data sources in round-robin fashion.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if a transaction contains both reads and writes, Sharding-JDBC routes both the read and the write operations to the master library. If the read/write requests are not in the same transaction, the read requests are distributed to different read nodes according to the routing policy.
@Override
@Transactional // Inside a transaction, both reads and writes go through the master library; outside one, reads go to the slave libraries and writes to the master library
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
The Aurora database environment adopts the configuration described in Section 2.2.1.
3.2.4.1 Verification process description
1. Start the Spring-Boot project
2. Perform a failover on Aurora's console
3. Execute the REST API request
4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers successfully.
5. The following figure shows the failover process as executed by the code. About 37 seconds elapse between the last successful SQL write before the failover and the next successful SQL write. That is, the application recovers from the Aurora failover automatically, with a recovery time of about 37 seconds.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties sharding-jdbc profile description:
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
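The `actual-data-nodes` expressions in the profile above, such as `ds.t_order_item_$->{0..1}`, enumerate the physical data nodes behind a logic table. In ShardingSphere these inline expressions are evaluated as Groovy; the sketch below is a simplified, hypothetical expander that only handles the `$->{a..b}` range form, just to make the expansion concrete.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: expand a simple actual-data-nodes expression,
// e.g. "ds.t_order_item_$->{0..1}", into its concrete data nodes.
public class DataNodeExpander {
    private static final Pattern RANGE = Pattern.compile("\\$->\\{(\\d+)\\.\\.(\\d+)\\}");

    public static List<String> expand(String expression) {
        Matcher m = RANGE.matcher(expression);
        if (!m.find()) return List.of(expression);   // no range: a literal node
        int from = Integer.parseInt(m.group(1));
        int to = Integer.parseInt(m.group(2));
        List<String> nodes = new ArrayList<>();
        for (int i = from; i <= to; i++) {
            nodes.add(expression.substring(0, m.start()) + i + expression.substring(m.end()));
        }
        return nodes;
    }
}
```

So `ds.t_order_item_$->{0..1}` expands to the two data nodes `ds.t_order_item_0` and `ds.t_order_item_1`, which is why two physical tables appear for each logic table in the DDL step below.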
1. DDL operation
JPA automatically creates tables for testing. When the Sharding-JDBC routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table and there is only one master instance, a single t_address is created. Creating t_order produces the two physical tables t_order_0 and t_order_1.
2. Write operation
As shown in the figure below, the logic SQL inserts a record into t_order. When Sharding-JDBC executes it, data is distributed to t_order_0 and t_order_1 according to the table-splitting rules.
When t_order and t_order_item are bound, the records of an order and its associated order items are placed on the same physical table.
3. Read operation
As shown in the figure below, a join query on order and order_item under the binding table is precisely located on the physical shard based on the binding relationship.
A join query on order and order_item without the binding relationship traverses all shards.
Create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address will be created on both Aurora instances.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
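The database-splitting rule above routes each row to ds_0 or ds_1 by `user_id % 2`, while broadcast tables are written to every data source. A simplified, hypothetical sketch of that write-routing decision (class and method names invented for illustration):

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the rules configured above: rows route to
// ds_0/ds_1 by user_id % 2, while broadcast tables write to every source.
public class ShardingRouter {
    private static final Set<String> BROADCAST = Set.of("t_address");
    private static final List<String> SOURCES = List.of("ds_0", "ds_1");

    /** Returns every data source a write to `table` must reach. */
    public List<String> routeWrite(String table, long userId) {
        if (BROADCAST.contains(table)) return SOURCES;            // broadcast: all sources
        return List.of("ds_" + Math.floorMod(userId, 2L));        // sharded: exactly one
    }
}
```

A write to t_address reaches both instances, whereas a t_order row for user 5 lands only on ds_1; this is the behavior verified in the write-operation step below.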
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s library-splitting and routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, physical tables are created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
The sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the sharding column and routing policy.
3. Read operation
A query on order is routed to the corresponding Aurora instance according to the database routing rules.
A query on address: since address is a broadcast table, one of the instances holding it is selected at random and queried.
As shown in the figure below, a join query on order and order_item under the binding table is precisely located on the physical shard based on the binding relationship.
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1. When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the broadcast table t_address will be created on both Aurora instances.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties sharding-jdbc profile description:
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
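Under the combined strategy above, user_id picks the database (`ds_$->{user_id % 2}`) and order_id picks the table (`t_order_$->{order_id % 2}`), so each row lands on one of four data nodes. A simplified, hypothetical sketch of that resolution (names invented for illustration):

```java
// Hypothetical sketch of the combined database + table strategy above:
// user_id selects the database, order_id selects the table within it.
public class DataNodeRouter {
    /** Resolves the single data node (database.table) that stores an order row. */
    public String route(long userId, long orderId) {
        String ds = "ds_" + Math.floorMod(userId, 2L);
        String table = "t_order_" + Math.floorMod(orderId, 2L);
        return ds + "." + table;
    }
}
```

For example, an order with user_id 2 and order_id 3 resolves to `ds_0.t_order_1`; because t_order and t_order_item are binding tables sharded by the same order_id, the matching order items resolve to the same instance and shard suffix.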
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s sharding and routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
The sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the sharding column and routing policy.
3. Read operation
The read operation is similar to the library-splitting function verification described in section 2.4.3.
The following figure shows the physical table of the created database instance.
application.properties Spring Boot master profile description:
# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties sharding-jdbc profile description:
The URL, username, and password of each database need to be changed to your own database parameters.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true
1. DDL operation
JPA automatically creates tables for testing. When Sharding-JDBC’s library-splitting and routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on ds_0 and ds_1 respectively.
2. Write operation
For the broadcast table t_address, each record written is also written to the t_address tables of both ds_0 and ds_1.
The sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the sharding column and routing policy.
3. Read operation
The join query operations on order and order_item under the binding table are shown below.
3. Conclusion
As an open source product focused on database enhancement, ShardingSphere scores well in terms of community activity, product maturity, and documentation richness.
Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer such as a Proxy, so the complexity of operation and maintenance is reduced, and its latency is theoretically lower than a Proxy's because there is no intermediate hop. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server.
However, because Sharding-JDBC is integrated into the application program, it currently supports only the Java language and is strongly coupled to the application. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, which means relatively small changes when switching to other middleware.
In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want the burden of introducing an intermediate layer.
Author
Sun Jinhua
A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related design and consulting services. Before joining AWS, he ran his own business specializing in building e-commerce platforms and designed the overall architecture of e-commerce platforms for automotive companies. He also worked as a senior engineer at a globally leading communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture design, databases, middleware, IoT, etc.
Piggy Metrics is a simple financial advisor app built to demonstrate the Microservice Architecture Pattern using Spring Boot, Spring Cloud and Docker. The project is intended as a tutorial, but you are welcome to fork it and turn it into something else!
Piggy Metrics is decomposed into three core microservices. All of them are independently deployable applications organized around certain business domains.
Contains general input logic and validation: incomes/expenses items, savings and account settings.
Method | Path | Description | User authenticated | Available from UI |
---|---|---|---|---|
GET | /accounts/{account} | Get specified account data | ||
GET | /accounts/current | Get current account data | × | × |
GET | /accounts/demo | Get demo account data (pre-filled incomes/expenses items, etc) | × | |
PUT | /accounts/current | Save current account data | × | × |
POST | /accounts/ | Register new account | × |
Performs calculations on major statistics parameters and captures time series for each account. Datapoint contains values normalized to base currency and time period. This data is used to track cash flow dynamics during the account lifetime.
Method | Path | Description | User authenticated | Available from UI |
---|---|---|---|---|
GET | /statistics/{account} | Get specified account statistics | | |
GET | /statistics/current | Get current account statistics | × | × |
GET | /statistics/demo | Get demo account statistics | × | |
PUT | /statistics/{account} | Create or update time series datapoint for specified account | | |
Stores user contact information and notification settings (reminders, backup frequency etc). Scheduled worker collects required information from other services and sends e-mail messages to subscribed customers.
Method | Path | Description | User authenticated | Available from UI |
---|---|---|---|---|
GET | /notifications/settings/current | Get current account notification settings | × | × |
PUT | /notifications/settings/current | Save current account notification settings | × | × |
Spring Cloud provides powerful tools for developers to quickly implement common distributed-systems patterns.
Spring Cloud Config is a horizontally scalable, centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion.
In this project, we are going to use the native profile, which simply loads config files from the local classpath. You can see the shared directory in the Config service resources. Now, when Notification-service requests its configuration, the Config service responds with shared/notification-service.yml and shared/application.yml (which is shared between all client applications).
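The precedence the Config service applies (service-specific values override the shared defaults) can be pictured with a few lines of plain Java. This is an illustrative sketch only, with made-up property names, and is not Spring's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of layered configuration: later sources win, so service-specific
// properties override the shared ones. The property names are made up.
public class ConfigMerge {

    @SafeVarargs
    public static Map<String, String> merge(Map<String, String>... sources) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map<String, String> source : sources) {
            result.putAll(source); // each later source overwrites earlier keys
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> shared = Map.of(
                "logging.level.root", "INFO",
                "mail.port", "25");
        Map<String, String> notificationService = Map.of(
                "mail.port", "2525"); // overrides the shared default
        Map<String, String> effective = merge(shared, notificationService);
        System.out.println(effective.get("mail.port"));          // 2525
        System.out.println(effective.get("logging.level.root")); // INFO
    }
}
```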
Client side usage
Just build a Spring Boot application with the spring-cloud-starter-config dependency; autoconfiguration will do the rest.
Now you don't need any embedded properties in your application. Just provide bootstrap.yml with the application name and the Config service URL:
spring:
  application:
    name: notification-service
  cloud:
    config:
      uri: http://config:8888
      fail-fast: true
With Spring Cloud Config, you can change application config dynamically.
For example, the EmailService bean is annotated with @RefreshScope. That means you can change the e-mail text and subject without rebuilding and restarting the Notification service.
First, change the required properties in the Config server. Then, make a refresh call to the Notification service: curl -H "Authorization: Bearer #token#" -XPOST http://127.0.0.1:8000/notifications/refresh
You could also use repository webhooks to automate this process.
Notes
- @RefreshScope doesn't work with @Configuration classes and doesn't affect @Scheduled methods
- The fail-fast property means that the Spring Boot application will fail startup immediately if it cannot connect to the Config Service.

Authorization responsibilities are extracted to a separate server, which grants OAuth2 tokens for the backend resource services. The Auth Server is used for user authorization as well as for secure machine-to-machine communication inside the perimeter.
In this project, I use the Password credentials grant type for user authorization (since it's used only by the UI) and the Client credentials grant for service-to-service communication.
Spring Cloud Security provides convenient annotations and autoconfiguration to make this really easy to implement on both the server and client side. You can learn more about it in the documentation.
On the client side, everything works exactly the same as with traditional session-based authorization. You can retrieve the Principal object from the request and check user roles using expression-based access control and the @PreAuthorize annotation.
Each PiggyMetrics client has a scope: server for backend services and ui for the browser. We can use the @PreAuthorize annotation to protect controllers from external access:
@PreAuthorize("#oauth2.hasScope('server')")
@RequestMapping(value = "accounts/{name}", method = RequestMethod.GET)
public List<DataPoint> getStatisticsByAccountName(@PathVariable String name) {
return statisticsService.findByAccountName(name);
}
An API Gateway is a single entry point into the system, used to handle requests by routing them to the appropriate backend service or by aggregating results from a scatter-gather call. It can also be used for authentication, insights, stress and canary testing, service migration, static response handling and active traffic management.
Netflix open-sourced such an edge service, and Spring Cloud allows you to use it with a single @EnableZuulProxy annotation. In this project, we use Zuul to serve some static content (the UI application) and to route requests to the appropriate microservices. Here's a simple prefix-based routing configuration for the Notification service:
zuul:
  routes:
    notification-service:
      path: /notifications/**
      serviceId: notification-service
      stripPrefix: false
That means all requests starting with /notifications will be routed to the Notification service. There are no hardcoded addresses, as you can see. Zuul uses the Service discovery mechanism to locate Notification service instances, as well as the Circuit Breaker and Load Balancer described below.
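The prefix matching just described is easy to model in plain Java: the first route whose prefix matches the request path wins. This sketch is illustrative only and uses none of Zuul's actual APIs:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy prefix-based router: maps a path prefix (the part before "/**")
// to a service id, first matching prefix wins. Not Zuul code.
public class PrefixRouter {

    private final Map<String, String> routes = new LinkedHashMap<>();

    public void addRoute(String prefix, String serviceId) {
        routes.put(prefix, serviceId);
    }

    // Returns the service id for the request path, or null when nothing matches.
    public String route(String requestPath) {
        for (Map.Entry<String, String> entry : routes.entrySet()) {
            if (requestPath.startsWith(entry.getKey())) {
                return entry.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        PrefixRouter router = new PrefixRouter();
        router.addRoute("/notifications", "notification-service");
        router.addRoute("/statistics", "statistics-service");
        System.out.println(router.route("/notifications/settings/current")); // notification-service
        System.out.println(router.route("/unknown")); // null
    }
}
```

With stripPrefix disabled, as in the YAML above, the matched path would be forwarded to the target service unchanged.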
Service Discovery allows automatic detection of the network locations for all registered services. These locations might have dynamically assigned addresses due to auto-scaling, failures or upgrades.
The key part of Service discovery is the Registry. In this project, we use Netflix Eureka. Eureka is a good example of the client-side discovery pattern, where the client is responsible for looking up the locations of available service instances and load balancing between them.
With Spring Boot, you can easily build a Eureka Registry using the spring-cloud-starter-eureka-server dependency, the @EnableEurekaServer annotation and simple configuration properties.
Client support is enabled with the @EnableDiscoveryClient annotation and a bootstrap.yml with the application name:
spring:
  application:
    name: notification-service
This service will be registered with the Eureka Server and provided with metadata such as host, port, health indicator URL, home page, etc. Eureka receives heartbeat messages from each instance belonging to the service. If the heartbeats fail for longer than a configurable period, the instance is removed from the registry.
Also, Eureka provides a simple interface where you can track running services and a number of available instances: http://localhost:8761
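The lease-renewal and eviction behavior described above can be sketched in plain Java. This is an illustrative model only, not Eureka's implementation, and the instance ids are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy service registry: instances renew a lease via heartbeats, and any
// instance whose lease is older than the timeout gets evicted.
public class LeaseRegistry {

    private final long leaseTimeoutMillis;
    private final Map<String, Long> lastHeartbeat = new HashMap<>();

    public LeaseRegistry(long leaseTimeoutMillis) {
        this.leaseTimeoutMillis = leaseTimeoutMillis;
    }

    // Register or renew an instance at the given clock time.
    public void heartbeat(String instanceId, long nowMillis) {
        lastHeartbeat.put(instanceId, nowMillis);
    }

    // Drop every instance whose last heartbeat is older than the timeout,
    // returning the ids that are still alive.
    public List<String> evictExpired(long nowMillis) {
        lastHeartbeat.entrySet().removeIf(e -> nowMillis - e.getValue() > leaseTimeoutMillis);
        return new ArrayList<>(lastHeartbeat.keySet());
    }

    public static void main(String[] args) {
        LeaseRegistry registry = new LeaseRegistry(30_000);
        registry.heartbeat("notification-service:8000", 0);
        registry.heartbeat("account-service:6000", 25_000);
        // At t=40s the first instance has missed its renewals; the second is fine.
        System.out.println(registry.evictExpired(40_000)); // [account-service:6000]
    }
}
```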
Ribbon is a client-side load balancer which gives you a lot of control over the behaviour of HTTP and TCP clients. Compared to a traditional load balancer, there is no need for an additional network hop - you can contact the desired service directly.
Out of the box, it natively integrates with Spring Cloud and Service Discovery. Eureka Client provides a dynamic list of available servers so Ribbon could balance between them.
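The core idea of client-side load balancing is simple enough to sketch in plain Java: rotate through the instance list that service discovery hands you. Illustrative only, not Ribbon's API, and the addresses are made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy round-robin chooser over a fixed server list: the client itself
// picks the next instance, so no extra network hop is needed.
public class RoundRobinBalancer {

    private final List<String> servers;
    private final AtomicInteger position = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next server, wrapping around; thread-safe via the counter.
    public String choose() {
        int index = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer balancer =
                new RoundRobinBalancer(List.of("10.0.0.1:8000", "10.0.0.2:8000"));
        System.out.println(balancer.choose()); // 10.0.0.1:8000
        System.out.println(balancer.choose()); // 10.0.0.2:8000
        System.out.println(balancer.choose()); // 10.0.0.1:8000 again
    }
}
```

In the real setup the server list is not fixed: the Eureka client keeps refreshing it, which is what makes the combination dynamic.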
Hystrix is an implementation of the Circuit Breaker pattern, which gives us control over latency and network failures when communicating with other services. The main idea is to stop cascading failures in a distributed environment: that helps to fail fast and recover as soon as possible, important aspects of a fault-tolerant system that can self-heal.
Moreover, Hystrix generates metrics on execution outcomes and latency for each command, which we can use to monitor the system's behavior.
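The circuit-breaker life cycle (closed, open, then a half-open probe after a sleep window) can be modeled in a few lines of plain Java. This is an illustrative state machine, not Hystrix's implementation, and the thresholds are arbitrary:

```java
// Minimal circuit-breaker state machine: consecutive failures past a
// threshold open the circuit, calls are rejected while it is open, and
// after the sleep window one trial call is allowed through (half-open).
public class CircuitBreaker {

    enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private final long sleepWindowMillis;
    private int consecutiveFailures = 0;
    private long openedAt = -1;
    private State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold, long sleepWindowMillis) {
        this.failureThreshold = failureThreshold;
        this.sleepWindowMillis = sleepWindowMillis;
    }

    // May this call proceed? While OPEN, only allow a trial call once the
    // sleep window has elapsed (the half-open probe).
    public boolean allowRequest(long nowMillis) {
        if (state == State.CLOSED) return true;
        return nowMillis - openedAt >= sleepWindowMillis;
    }

    public void recordSuccess() {
        consecutiveFailures = 0;
        state = State.CLOSED;
    }

    public void recordFailure(long nowMillis) {
        consecutiveFailures++;
        if (consecutiveFailures >= failureThreshold) {
            state = State.OPEN;
            openedAt = nowMillis;
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3, 5_000);
        for (int i = 0; i < 3; i++) breaker.recordFailure(1_000);
        System.out.println(breaker.allowRequest(2_000)); // false: circuit is open
        System.out.println(breaker.allowRequest(7_000)); // true: half-open probe
    }
}
```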
Feign is a declarative HTTP client which seamlessly integrates with Ribbon and Hystrix. A single spring-cloud-starter-feign dependency and the @EnableFeignClients annotation give us a full set of tools, including a Load balancer, Circuit Breaker and HTTP client with reasonable default configuration.
Here is an example from the Account Service:
@FeignClient(name = "statistics-service")
public interface StatisticsServiceClient {
@RequestMapping(method = RequestMethod.PUT, value = "/statistics/{accountName}", consumes = MediaType.APPLICATION_JSON_UTF8_VALUE)
void updateStatistics(@PathVariable("accountName") String accountName, Account account);
}
Things worth noting:
- The client interface shares its @RequestMapping part between the Spring MVC controller and the Feign method
- There are no hardcoded URLs, only the service name statistics-service, thanks to auto-discovery through Eureka

In this project configuration, each microservice with Hystrix on board pushes metrics to Turbine via Spring Cloud Bus (with an AMQP broker). The Monitoring project is just a small Spring Boot application with the Turbine and Hystrix Dashboard.
Let's observe the behavior of our system under load: the Statistics Service imitates a delay during request processing. The response timeout is set to 1 second:
0 ms delay | 500 ms delay | 800 ms delay | 1100 ms delay |
---|---|---|---|
Well-behaved system. Throughput is about 22 rps. Small number of active threads in the Statistics service. Median service time is about 50 ms. | The number of active threads is growing. We can see a purple number of thread-pool rejections and therefore about 40% of errors, but the circuit is still closed. | Half-open state: the ratio of failed commands is higher than 50%, so the circuit breaker kicks in. After the sleep window amount of time, the next request goes through. | 100 percent of the requests fail. The circuit is now permanently open. Retrying after the sleep time won't close the circuit again, because a single request is too slow. |
Centralized logging can be very useful while attempting to identify problems in a distributed environment. Elasticsearch, Logstash and Kibana stack lets you search and analyze your logs, utilization and network activity data with ease.
Analyzing problems in distributed systems can be difficult, especially when trying to trace requests that propagate from one microservice to another.
Spring Cloud Sleuth solves this problem by providing support for distributed tracing. It adds two types of IDs to the logging: traceId and spanId. A spanId represents a basic unit of work, for example sending an HTTP request. A traceId contains a set of spans forming a tree-like structure. For example, with a distributed big-data store, a trace might be formed by a PUT request. Using the traceId and spanId for each operation, we know when and where our application is as it processes a request, which makes reading logs much easier.
The logs look as follows; notice the [appname,traceId,spanId,exportable] entries from the Slf4J MDC:
2018-07-26 23:13:49.381 WARN [gateway,3216d0de1384bb4f,3216d0de1384bb4f,false] 2999 --- [nio-4000-exec-1] o.s.c.n.z.f.r.s.AbstractRibbonCommand : The Hystrix timeout of 20000ms for the command account-service is set lower than the combination of the Ribbon read and connect timeout, 80000ms.
2018-07-26 23:13:49.562 INFO [account-service,3216d0de1384bb4f,404ff09c5cf91d2e,false] 3079 --- [nio-6000-exec-1] c.p.account.service.AccountServiceImpl : new account has been created: test
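Each bracketed [appname,traceId,spanId,exportable] section can be split apart mechanically. The following plain-Java sketch is illustrative only and is not part of Sleuth:

```java
import java.util.Map;

// Toy parser pulling the Sleuth MDC fields out of the first bracketed
// section of a log line in the [appname,traceId,spanId,exportable] format.
public class SleuthLogFields {

    public static Map<String, String> parse(String logLine) {
        int open = logLine.indexOf('[');
        int close = logLine.indexOf(']', open);
        String[] parts = logLine.substring(open + 1, close).split(",");
        return Map.of(
                "appname", parts[0],
                "traceId", parts[1],
                "spanId", parts[2],
                "exportable", parts[3]);
    }

    public static void main(String[] args) {
        String line = "2018-07-26 23:13:49.562 INFO "
                + "[account-service,3216d0de1384bb4f,404ff09c5cf91d2e,false] "
                + "3079 --- [nio-6000-exec-1] c.p.account.service.AccountServiceImpl : new account has been created: test";
        Map<String, String> fields = parse(line);
        System.out.println(fields.get("appname")); // account-service
        System.out.println(fields.get("spanId"));  // 404ff09c5cf91d2e
    }
}
```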
- appname: the name of the application that logged the span, from the property spring.application.name
- traceId: an ID assigned to a single request, job, or action
- spanId: the ID of a specific operation that took place
- exportable: whether the log should be exported to Zipkin

Deploying microservices, with their interdependence, is a much more complex process than deploying a monolithic application. It is really important to have a fully automated infrastructure. We can achieve the following benefits with a Continuous Delivery approach:
Here is a simple Continuous Delivery workflow, implemented in this project:
In this configuration, Travis CI builds tagged images for each successful git push. So, there is always a latest image for each microservice on Docker Hub, plus older images tagged with the git commit hash. It's easy to deploy any of them and quickly roll back, if needed.
Note that starting 8 Spring Boot applications, 4 MongoDB instances and RabbitMQ requires at least 4 GB of RAM.
- The required environment variables are listed in the .env file; change the values for more security, or leave them as they are.
- Build the artifacts: mvn package [-DskipTests]
In this mode, all the latest images will be pulled from Docker Hub. Just copy docker-compose.yml and hit docker-compose up.
If you'd like to build the images yourself, you have to clone the repository and build the artifacts using Maven. After that, run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
docker-compose.dev.yml inherits docker-compose.yml, with the additional ability to build images locally and expose all container ports for convenient development.
If you'd like to start the applications in IntelliJ IDEA, you need to either use the EnvFile plugin or manually export the environment variables listed in the .env file (make sure they were exported: printenv).
The Turbine stream is available at http://turbine-stream-service:8080/turbine/turbine.stream

PiggyMetrics is open source, and your help would be greatly appreciated. Feel free to suggest and implement any improvements.
Download Details:
Author: sqshq
Source Code: https://github.com/sqshq/piggymetrics
License: MIT License
In this article we will learn about “How to implement Fault Tolerance in Microservices using Resilience4j?”
https://javatechonline.com/how-to-implement-fault-tolerance-in-microservices-using-resilience4j/
When we develop an application, especially a microservices-based application, there is a high chance that we will experience some deviations while running it in real time. Sometimes it could be slow responses, network failures, REST call failures, failures due to a high number of requests, and much more. In order to tolerate these kinds of faults, we need to incorporate a fault tolerance mechanism in our application. To achieve it, we will make use of the Resilience4j library. Resilience4j is a lightweight, easy-to-use fault tolerance library inspired by Netflix Hystrix, but designed for Java 8 and functional programming. Get complete details of it with examples.
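As a taste of the pattern, here is a stripped-down retry-with-fallback helper in plain Java. It is an illustrative sketch of the idea only, not the Resilience4j API (which also offers circuit breakers, rate limiters and bulkheads), and the failure simulation is made up:

```java
import java.util.function.Supplier;

// Toy fault-tolerance helper: try the call up to maxAttempts times;
// if every attempt fails, return the fallback value instead of crashing.
public class RetryWithFallback {

    public static <T> T call(Supplier<T> action, int maxAttempts, T fallback) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                // swallow and retry; a real implementation would back off and log
            }
        }
        return fallback;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice, then succeeds on the third attempt.
        Supplier<String> flaky = () -> {
            if (++calls[0] < 3) throw new RuntimeException("connection refused");
            return "payload";
        };
        System.out.println(call(flaky, 3, "cached-default")); // payload
        System.out.println(call(() -> { throw new RuntimeException(); }, 2, "cached-default")); // cached-default
    }
}
```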
In this post, you’ll learn about microservices architecture and how to implement it using Spring Boot. After creating some projects with the technique, you will deploy the artifacts as Docker containers and simulate a container orchestrator (such as Kubernetes) using Docker Compose for simplicity.
The icing on the cake will be authentication integration using Spring Profiles; you will see how to enable it with a production profile.
But first, let’s talk about microservices.
Microservices architecture, as opposed to a monolith, dictates that you divide your application into small, logically related pieces. These pieces are independent pieces of software that communicate with the others using HTTP or messages, for example.
There is some discussion about how small micro is. Some say a microservice is software that can be created in a single sprint; others say microservices can be bigger, as long as they are logically related (you can’t mix apples and oranges, for example). I agree with Martin Fowler and think size doesn’t matter that much; it’s more about the style.
There are many advantages to microservices:
However, there are some drawbacks:
Nowadays, it’s commonly accepted that you should avoid a microservice architecture at first. After some iterations, the code division will become clearer, as will the demands of your project. Handling microservices is often too expensive until your development team has started with smaller projects.
You’ll build two projects in this tutorial: a service (school-service) and a UI (school-ui). The service provides the persistence layer and business logic, and the UI provides the graphical user interface. Connecting them is possible with minimal configuration.
After the initial setup, I’ll talk about discovery and configuration services. Both are an essential part of any massively distributed architecture. To prove this point, you will integrate them with OAuth 2.0 and use the configuration project to set the OAuth 2.0 keys.
Finally, each project will be transformed into a Docker image. Docker Compose will be used to simulate a container orchestrator as Compose will manage every container with an internal network between the services.
Lastly, Spring profiles will be introduced to change configuration based on the environment currently appropriately assigned. That way, you will have two OAuth 2.0 environments: one for development, and other for production.
Fewer words, more code! Clone this tutorial’s repository and check out the start branch.
git clone -b start https://github.com/oktadeveloper/okta-spring-microservices-docker-example.git
The root pom.xml file is not a requirement. However, it can be helpful for managing multiple projects at once. Let’s look inside:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.okta.developer.docker_microservices</groupId>
    <artifactId>parent-pom</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>parent-project</name>
    <modules>
        <module>school-service</module>
        <module>school-ui</module>
    </modules>
</project>
This is called an aggregate project because it aggregates child projects. It is useful for running the same Maven task on all declared modules. The modules do not need to use the root module as a parent.
There are two modules available: a school service, and a school UI.
The school-service directory contains a Spring Boot project that acts as the project’s persistence layer and business rules. In a more complex scenario, you would have more services like this. The project was created using the always excellent Spring Initializr with the following configuration:

Group: com.okta.developer.docker_microservices
Artifact: school-service

You can get more details about this project by reading Spring Boot with PostgreSQL, Flyway, and JSONB. To summarize, it has the entities TeachingClass, Course, and Student, and uses TeachingClassServiceDB and TeachingClassController to expose some data through a REST API. To test it, open a terminal, navigate to the school-service directory, and run the command below:
./mvnw spring-boot:run
The application will start on port 8081 (as defined in the file school-service/src/main/resources/application.properties), so you should be able to navigate to http://localhost:8081 and see the returned data.
> curl http://localhost:8081
[
{
"classId":13,
"teacherName":"Profesor Jirafales",
"teacherId":1,
"courseName":"Mathematics",
"courseId":3,
"numberOfStudents":2,
"year":1988
},
{
"classId":14,
"teacherName":"Profesor Jirafales",
"teacherId":1,
"courseName":"Spanish",
"courseId":4,
"numberOfStudents":2,
"year":1988
},
{
"classId":15,
"teacherName":"Professor X",
"teacherId":2,
"courseName":"Dealing with unknown",
"courseId":5,
"numberOfStudents":2,
"year":1995
},
{
"classId":16,
"teacherName":"Professor X",
"teacherId":2,
"courseName":"Dealing with unknown",
"courseId":5,
"numberOfStudents":1,
"year":1996
}
]
The school UI is, as the name says, the user interface that utilizes the School service. It was created using Spring Initializr with the following options:

Group: com.okta.developer.docker_microservices
Artifact: school-ui

The UI is a single web page that lists the classes available in the database. To get the information, it connects with the school-service through a configuration in the file school-ui/src/main/resources/application.properties:

service.host=localhost:8081

The SchoolController class has all the logic to query the service:
package com.okta.developer.docker_microservices.ui.controller;
import com.okta.developer.docker_microservices.ui.dto.TeachingClassDto;
import org.springframework.beans.factory.annotation.*;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.*;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.servlet.ModelAndView;
import java.util.List;
@Controller
@RequestMapping("/")
public class SchoolController {
private final RestTemplate restTemplate;
private final String serviceHost;
public SchoolController(RestTemplate restTemplate, @Value("${service.host}") String serviceHost) {
this.restTemplate = restTemplate;
this.serviceHost = serviceHost;
}
@RequestMapping("")
public ModelAndView index() {
return new ModelAndView("index");
}
@GetMapping("/classes")
public ResponseEntity<List<TeachingClassDto>> listClasses(){
return restTemplate
.exchange("http://"+ serviceHost +"/class", HttpMethod.GET, null,
new ParameterizedTypeReference<List<TeachingClassDto>>() {});
}
}
As you can see, there is a hard-coded location for the service. You can override the property with an environment variable like -Dservice.host=localhost:9090, but it still has to be manually defined. And what about having many instances of the school-service application? Impossible at the current stage.
With school-service running, start school-ui and navigate to http://localhost:8080 in a browser:
./mvnw spring-boot:run
You should see a page like the following:
Now you have a working application that uses two services to provide the information to end-user. What is wrong with it? In modern applications, developers (or operations) usually don’t know where or what port an application might be deployed on. The deployment should be automated so that no one cares about server names and physical location. (Unless you work inside a data center. If you do, I hope you care!)
Nonetheless, it is essential to have a tool that helps the services to discover their counterparts. There are many solutions available, and for this tutorial, we are going to use Eureka from Netflix as it has outstanding Spring support.
Go back to start.spring.io and create a new project as follows:

Group: com.okta.developer.docker_microservices
Artifact: discovery
Edit the main DiscoveryApplication.java class to add an @EnableEurekaServer annotation:
package com.okta.developer.docker_microservices.discovery;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApplication {
public static void main(String[] args) {
SpringApplication.run(DiscoveryApplication.class, args);
}
}
And you’ll need to update its application.properties file so it runs on port 8761 and doesn’t try to register with itself.
spring.application.name=discovery-server
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
Let’s define each property:
- spring.application.name - The name of the application, also used by the discovery service to identify a service. You’ll see that every other application has an application name too.
- server.port - The port the server is running on. 8761 is the default port for a Eureka server.
- eureka.client.register-with-eureka - Tells Spring not to register itself with the discovery service.
- eureka.client.fetch-registry - Indicates this instance should not fetch discovery information from the server.

Now, run the project and access http://localhost:8761.
./mvnw spring-boot:run
The screen above shows the Eureka server ready to register new services. Now, it is time to change school-service and school-ui to use it.
NOTE: If you receive a ClassNotFoundException: javax.xml.bind.JAXBContext error on startup, it’s because you’re running on Java 11. You can add JAXB dependencies to your pom.xml to fix this:
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.2</version>
</dependency>
First, it is important to add the required dependencies. Add the following to both pom.xml files (in the school-service and school-ui projects):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
This module is part of the Spring Cloud initiative and, as such, needs a new dependency management node as follows (don’t forget to add it to both projects):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
Now you need to configure both applications to register with Eureka.
In the application.properties file of both projects, add the following lines:
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER:http://localhost:8761/eureka}
spring.application.name=school-service
Don’t forget to change the application name from school-service to school-ui in the school-ui project. Notice there is a new kind of parameter in the first line: ${EUREKA_SERVER:http://localhost:8761/eureka}. It means “if the environment variable EUREKA_SERVER exists, use its value; if not, here’s a default value.” This will be useful in future steps. ;)
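The resolution rule for a ${NAME:default} placeholder can be shown in a few lines of plain Java. This sketch is illustrative only (Spring performs this resolution itself); the env map stands in for real environment variables so the behavior is easy to test:

```java
import java.util.Map;

// Toy ${NAME:default} resolver: use the environment value when present,
// otherwise fall back to the default after the first colon.
public class PlaceholderResolver {

    public static String resolve(String placeholder, Map<String, String> env) {
        // Expect the form ${NAME:default}; strip "${" and "}".
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        String name = body.substring(0, colon);
        String defaultValue = body.substring(colon + 1);
        return env.getOrDefault(name, defaultValue);
    }

    public static void main(String[] args) {
        String placeholder = "${EUREKA_SERVER:http://localhost:8761/eureka}";
        // No EUREKA_SERVER set: the default applies.
        System.out.println(resolve(placeholder, Map.of()));
        // Variable set (e.g. inside a Docker network): it wins.
        System.out.println(resolve(placeholder, Map.of("EUREKA_SERVER", "http://discovery:8761/eureka")));
    }
}
```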
You know what? Both applications are now ready to register themselves with the discovery service; you don’t need to do anything more. Our primary objective is that the school-ui project does not need to know where school-service is. As such, you need to change SchoolController (in the school-ui project) to use the school-service name in its REST endpoint. You can also remove the serviceHost variable from this class.
package com.okta.developer.docker_microservices.ui.controller;
import com.okta.developer.docker_microservices.ui.dto.TeachingClassDto;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.servlet.ModelAndView;
import java.util.List;
@Controller
@RequestMapping("/")
public class SchoolController {
private final RestTemplate restTemplate;
public SchoolController(RestTemplate restTemplate) {
this.restTemplate = restTemplate;
}
@RequestMapping("")
public ModelAndView index() {
return new ModelAndView("index");
}
@GetMapping("/classes")
public ResponseEntity<List<TeachingClassDto>> listClasses() {
return restTemplate
.exchange("http://school-service/classes", HttpMethod.GET, null,
new ParameterizedTypeReference<List<TeachingClassDto>>() {});
}
}
Before integrating Eureka, you had a configuration pointing out where school-service was. Now, you’ve changed the service calls to use the name used by the other service: no ports, no hostname. The service you need is somewhere, and you don’t need to know where.
The school-service may have multiple instances, and it would be a good idea to load balance the calls between them. Thankfully, Spring has a simple solution: on the RestTemplate bean creation, add the @LoadBalanced annotation as follows. Spring will then balance the calls across instances each time you ask something of the server.
package com.okta.developer.docker_microservices.ui;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.servlet.config.annotation.*;
@SpringBootApplication
public class UIWebApplication implements WebMvcConfigurer {
public static void main(String[] args) {
SpringApplication.run(UIWebApplication.class, args);
}
@Bean
@LoadBalanced
public RestTemplate restTemplate() {
return new RestTemplate();
}
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
if(!registry.hasMappingForPattern("/static/**")) {
registry.addResourceHandler("/static/**")
.addResourceLocations("classpath:/static/", "classpath:/static/js/");
}
}
}
Now, restart school-service and school-ui (and keep the Discovery service up). Have a quick look at http://localhost:8761 again:
Now your services are sharing info with the Discovery server. You can test the application again and see that it works as always. Just go to http://localhost:8080 in your favorite browser.
While this configuration works, it’s even better to remove any trace of configuration values from the project’s source code. First, the service URL was removed from the project and is now managed by the discovery service. Now, you can do a similar thing for every configuration on the project using Spring Cloud Config.
First, create the configuration project using Spring Initializr and the following parameters:

Group: com.okta.developer.docker_microservices
Artifact: config
In the main class, add @EnableConfigServer
:
package com.okta.developer.docker_microservices.config;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;
@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {
...
}
Add the following properties and values in the project’s application.properties:
spring.application.name=CONFIGSERVER
server.port=8888
spring.profiles.active=native
spring.cloud.config.server.native.searchLocations=.
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER:http://localhost:8761/eureka}
Some explanation about the properties:

- spring.profiles.active=native - Indicates Spring Cloud Config must use the native file system to obtain the configuration. Normally Git repositories are used, but we are going to stick with the native filesystem for simplicity’s sake.
- spring.cloud.config.server.native.searchLocations - The path containing the configuration files. If you change this to a specific folder on your hard drive, make sure to create the school-ui.properties file in it.

Now, you need something to configure and apply to this example. How about Okta’s configuration? Let’s put our school-ui behind an authorization layer and use the property values provided by the configuration project.
You can register for a free-forever developer account that will enable you to create as many users and applications as you need! After creating your account, create a new Web Application in Okta’s dashboard (Applications > Add Application):
And fill the next form with the following values:
The page will return an application ID and a secret key. Keep them safe, and create a file called school-ui.properties in the root folder of the config project with the following contents. Do not forget to populate the variable values:
okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId={yourClientId}
okta.oauth2.clientSecret={yourClientSecret}
Now, run the config project and check that it serves the configuration data properly:
./mvnw spring-boot:run
> curl http://localhost:8888/school-ui.properties
okta.oauth2.clientId: YOUR_CLIENT_ID
okta.oauth2.clientSecret: YOUR_CLIENT_SECRET
okta.oauth2.issuer: https://YOUR_DOMAIN/oauth2/default
Now you need to change the Spring UI project a little bit.
First, you need to change school-ui/pom.xml and add some new dependencies:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>com.okta.spring</groupId>
    <artifactId>okta-spring-boot-starter</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf.extras</groupId>
    <artifactId>thymeleaf-extras-springsecurity5</artifactId>
</dependency>
Create a new SpringSecurityConfiguration class in the com.okta...ui.config package:
package com.okta.developer.docker_microservices.ui;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SpringSecurityConfiguration extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.antMatchers("/").permitAll()
.anyRequest().authenticated()
.and()
.logout().logoutSuccessUrl("/")
.and()
.oauth2Login();
}
}
Change your SchoolController so only users with the profile scope are allowed (every authenticated user will have it):
import org.springframework.security.access.prepost.PreAuthorize;
....
@GetMapping("/classes")
@PreAuthorize("hasAuthority('SCOPE_profile')")
public ResponseEntity<List<TeachingClassDto>> listClasses(){
return restTemplate
.exchange("http://school-service/class", HttpMethod.GET, null,
new ParameterizedTypeReference<List<TeachingClassDto>>() {});
}
Some configurations need to be defined at application boot time. Spring has a clever solution to properly locate and extract configuration data before the context starts up. You need to create a file src/main/resources/bootstrap.yml
like this:
eureka:
client:
serviceUrl:
defaultZone: ${EUREKA_SERVER:http://localhost:8761/eureka}
spring:
application:
name: school-ui
cloud:
config:
discovery:
enabled: true
service-id: CONFIGSERVER
The bootstrap file creates a pre-boot Spring Application Context responsible for extracting configuration before the real application starts. You need to move all properties from application.properties
to this file because Spring needs to know where your Eureka server is located and how it should search for configuration. In the example above, you enabled configuration over the discovery service (spring.cloud.config.discovery.enabled
) and specified the configuration service-id
.
Change your application.properties
file so it only has one OAuth 2.0 property:
okta.oauth2.redirect-uri=/authorization-code/callback
The last file to modify is src/main/resources/templates/index.html
. Adjust it to show a login button if the user is not authenticated, and a logout button if the user is logged in.
<div th:if="${#authorization.expression('isAuthenticated()')}">
    <p>Hello, world!</p>
    <form method="post" th:action="@{/logout}">
        <button type="submit">Logout</button>
    </form>
</div>
<p th:unless="${#authorization.expression('isAuthenticated()')}">
    <a th:href="@{/oauth2/authorization/okta}">Login</a>
</p>
<h1>School classes</h1>
<table>
    <tr>
        <th>Course</th>
        <th>Teacher</th>
        <th>Year</th>
        <th>Number of students</th>
    </tr>
</table>
There are some Thymeleaf attributes you should know about in this HTML:

- @{/logout} - returns the logout URL defined on the backend
- th:if="${#authorization.expression('isAuthenticated()')}" - only renders the HTML if the user is logged in
- @{/oauth2/authorization/okta} - the URL Spring Security redirects to for Okta. You could link to /login as well, but that just renders the same link, and you would have to click on it.
- th:unless="${#authorization.expression('isAuthenticated()')}" - only renders the HTML inside the node if the user is logged off

Now restart the configuration project and school-ui again. If you navigate to http://localhost:8080
, you should see the following screen:
After logging in, the screen should look like this:
Congratulations, you created a microservices architecture using Spring Cloud config and Eureka for service discovery! Now, let’s go one step further and Dockerize every service.
Docker is a marvelous technology that allows you to create system images similar to virtual machines, but sharing the kernel of the host operating system. This improves system performance and startup time. Docker also provides an ingenious build system that guarantees that once an image is created, it will never change. In other words: no more "it works on my machine!"
TIP: Need a deeper Docker background? Have a look at our Developer’s Guide To Docker.
You’ll need to create one Docker image for each project. Each image should have the same Maven configuration and Dockerfile
content in the root folder of each project (e.g., school-ui/Dockerfile
).
In each project’s pom, add the dockerfile-maven-plugin
:
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.9</version>
    <executions>
        <execution>
            <id>default</id>
            <goals>
                <goal>build</goal>
                <goal>push</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <repository>developer.okta.com/microservice-docker-${project.artifactId}</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>
This XML configures the Dockerfile Maven plugin to build a Docker image every time you run ./mvnw install
. Each image will be created with the name developer.okta.com/microservice-docker-${project.artifactId}
where project.artifactId
varies by project.
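To make the naming concrete, here is a tiny, hypothetical sketch in plain Java (the method and class names are illustrative, not part of the plugin) of how the final image name is composed from the Maven properties:

```java
public class ImageNameDemo {
    // Mirrors the plugin configuration above: the repository embeds
    // ${project.artifactId}, and the tag is ${project.version}.
    static String imageName(String artifactId, String version) {
        return "developer.okta.com/microservice-docker-" + artifactId + ":" + version;
    }

    public static void main(String[] args) {
        // e.g. the UI project
        System.out.println(imageName("school-ui", "0.0.1-SNAPSHOT"));
        // → developer.okta.com/microservice-docker-school-ui:0.0.1-SNAPSHOT
    }
}
```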
Create a Dockerfile
file in the root directory of each project.
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD target/*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
The Dockerfile
follows what is recommended by Spring Boot with Docker.
Now, change school-ui/src/main/resources/bootstrap.yml
to add a new failFast
setting:
eureka:
client:
serviceUrl:
defaultZone: ${EUREKA_SERVER:http://localhost:8761/eureka}
spring:
application:
name: school-ui
cloud:
config:
discovery:
enabled: true
serviceId: CONFIGSERVER
failFast: true
The spring.cloud.config.failFast: true
setting tells Spring Cloud Config to terminate the application as soon as it can't find the configuration server. This will be useful for the next step.
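The interplay between failFast and Docker Compose's restart policy can be sketched in plain Java. This is a simplified illustration, not Spring's or Compose's actual code; the ConfigServer interface and retry loop are hypothetical stand-ins:

```java
public class FailFastDemo {
    interface ConfigServer {
        boolean reachable();
    }

    // failFast behavior: abort startup immediately if the config server
    // can't be reached, instead of running with missing configuration.
    static boolean startOnce(ConfigServer server) {
        return server.reachable();
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        // Simulate a config server that only becomes reachable on the 3rd try.
        ConfigServer server = () -> ++attempts[0] >= 3;

        int restarts = 0;
        while (!startOnce(server)) { // "restart: on-failure" keeps retrying
            restarts++;
        }
        System.out.println("Started after " + restarts + " restarts");
        // prints "Started after 2 restarts"
    }
}
```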
Create a new file called docker-compose.yml
that defines how each project starts:
version: '3'
services:
discovery:
image: developer.okta.com/microservice-docker-discovery:0.0.1-SNAPSHOT
ports:
- 8761:8761
config:
image: developer.okta.com/microservice-docker-config:0.0.1-SNAPSHOT
volumes:
- ./config-data:/var/config-data
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
-Dspring.cloud.config.server.native.searchLocations=/var/config-data
depends_on:
- discovery
ports:
- 8888:8888
school-service:
image: developer.okta.com/microservice-docker-school-service:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
depends_on:
- discovery
- config
school-ui:
image: developer.okta.com/microservice-docker-school-ui:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
restart: on-failure
depends_on:
- discovery
- config
ports:
- 8080:8080
As you can see, each project is now a declared service in the Docker Compose file, with its ports exposed and some other properties set.

- Every service except discovery receives -DEUREKA_SERVER=http://discovery:8761/eureka, telling it where to find the discovery server. Docker Compose creates a virtual network between the services, and the DNS name for each service is its service name: that's why it's possible to use discovery as the hostname.
- The config service maps the local ./config-data folder to /var/config-data inside the Docker container, and the property spring.cloud.config.server.native.searchLocations is overridden to the same path. You must store the school-ui.properties file in the folder specified in the volume mapping (in the example above, the relative folder ./config-data).
- The school-ui service declares restart: on-failure. This tells Docker Compose to restart the application as soon as it fails; used together with the failFast property, it allows the application to keep retrying to start until the discovery and config services are completely ready.

And that's it! Now, build the images:
cd config && ./mvnw clean install
cd ../discovery && ./mvnw clean install
cd .. && ./mvnw clean install
The last command will likely fail with the following error in the school-ui
project:
java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: java.lang.IllegalStateException: No instances found of configserver (CONFIGSERVER)
To fix this, create a school-ui/src/test/resources/test.properties
file and add properties so Okta’s config passes, and it doesn’t use discovery or the config server when testing.
okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId=TEST
spring.cloud.discovery.enabled=false
spring.cloud.config.discovery.enabled=false
spring.cloud.config.enabled=false
Then modify UIWebApplicationTests.java
to load this file for test properties:
import org.springframework.test.context.TestPropertySource;
...
@TestPropertySource(locations="classpath:test.properties")
public class UIWebApplicationTests {
...
}
Now you should be able to run ./mvnw clean install
in the school-ui
project.
Once that completes, run Docker Compose to start all your containers (in the same directory where docker-compose.yml
is).
docker-compose up -d
Starting okta-microservice-docker-post-final_discovery_1 ... done
Starting okta-microservice-docker-post-final_config_1 ... done
Starting okta-microservice-docker-post-final_school-ui_1 ... done
Starting okta-microservice-docker-post-final_school-service_1 ... done
Now you should be able to browse the application as you did previously.
Now you’ve reached the last stage of today’s journey through microservices. Spring Profiles is a powerful tool. Using profiles, it is possible to modify program behavior by injecting different dependencies or configurations completely.
Imagine you have a well-architected software that has its persistence layer separated from business logic. You also provide support for MySQL and PostgreSQL, for example. It is possible to have different data access classes for each database that will be only loaded by the defined profile.
Another use case is for configuration: different profiles might have different configurations. Take authentication, for instance. Will your test environment have authentication? If it does, it shouldn’t use the same user directory as production.
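In plain Java, the effect of profile-driven wiring looks roughly like the sketch below. The names (StudentDao, MySqlStudentDao, PostgresStudentDao) are hypothetical; in a real application, Spring's @Profile annotation performs this selection for you at context startup:

```java
public class ProfileWiringDemo {
    interface StudentDao {
        String database();
    }

    static class MySqlStudentDao implements StudentDao {
        public String database() { return "mysql"; }
    }

    static class PostgresStudentDao implements StudentDao {
        public String database() { return "postgres"; }
    }

    // What @Profile("mysql") / @Profile("postgres") achieve declaratively:
    // only the implementation matching the active profile is wired in.
    static StudentDao daoFor(String activeProfile) {
        return "postgres".equals(activeProfile)
                ? new PostgresStudentDao()
                : new MySqlStudentDao();
    }

    public static void main(String[] args) {
        System.out.println(daoFor("postgres").database()); // postgres
        System.out.println(daoFor("mysql").database());    // mysql
    }
}
```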
Your configuration project will now serve two apps in Okta: the default one (for development) and another for production. Create a new Web application on the Okta website and name it "okta-docker-production."
Now, in your config
project, create a new file called school-ui-production.properties
. You already have school-ui.properties
, which will be used by every School UI instance. When you add the environment name to the end of the file name, Spring merges both files, with the more specific file taking precedence. Save the new file with your production app's client ID and secret, like this:
school-ui-production.properties
okta.oauth2.clientId={YOUR_PRODUCTION_CLIENT_ID}
okta.oauth2.clientSecret={YOUR_PRODUCTION_CLIENT_SECRET}
Now, run the configuration project using Maven, then run the following two curl
commands:
./mvnw spring-boot:run
> curl http://localhost:8888/school-ui.properties
okta.oauth2.issuer: https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId: ==YOUR DEV CLIENT ID HERE==
okta.oauth2.clientSecret: ==YOUR DEV CLIENT SECRET HERE==
> curl http://localhost:8888/school-ui-production.properties
okta.oauth2.issuer: https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId: ==YOUR PROD CLIENT ID HERE==
okta.oauth2.clientSecret: ==YOUR PROD CLIENT SECRET HERE==
As you can see, even though the school-ui-production
file has only two properties, the config
project displays three (since the configurations are merged).
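The merge behavior can be illustrated with plain java.util maps. This is a simplified sketch of the precedence rule, not Spring Cloud Config's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class ProfileMergeDemo {
    // Profile-specific properties override the base file;
    // keys absent from the profile file fall through from the base.
    static Map<String, String> merge(Map<String, String> base,
                                     Map<String, String> profile) {
        Map<String, String> merged = new HashMap<>(base);
        merged.putAll(profile); // more specific file wins
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();       // school-ui.properties
        base.put("okta.oauth2.issuer", "https://dev.example/oauth2/default");
        base.put("okta.oauth2.clientId", "DEV_CLIENT_ID");

        Map<String, String> production = new HashMap<>(); // school-ui-production.properties
        production.put("okta.oauth2.clientId", "PROD_CLIENT_ID");

        Map<String, String> merged = merge(base, production);
        System.out.println(merged.get("okta.oauth2.clientId")); // PROD_CLIENT_ID
        System.out.println(merged.get("okta.oauth2.issuer"));   // inherited from the base file
    }
}
```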
Now, you can change the school-ui
service in the docker-compose.yml
to use the production
profile:
school-ui:
image: developer.okta.com/microservice-docker-school-ui:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
-Dspring.profiles.active=production
restart: on-failure
depends_on:
- discovery
- config
ports:
- 8080:8080
You’ll also need to copy school-ui-production.properties
to your config-data
directory. Then shut down all your Docker containers and restart them.
docker-compose down
docker-compose up -d
You should see the following printed in the logs of the school-ui
container:
The following profiles are active: production
That’s it! Now you have your microservices architecture running with a production profile. Huzzah!
TIP: If you want to prove your okta-docker-production
app is used and not okta-docker
, you can deactivate the okta-docker
app in Okta and confirm you can still log in at [http://localhost:8080](http://localhost:8080)
.
#java #spring-boot #docker #microservices
1602810603
I previously posted an article about a single-page application (UI); in this post, I'm going to show how to build a microservice architecture for a J2EE application with the Spring Framework and the open-source SSO framework Keycloak. This post will cover the following aspects:
The code is available on my GitHub; please check the docker-compose.yml first so the rest of the post is easier to follow. One thing to mention: you need to replace the IP address in the Keycloak server URL with your own before running the Docker containers.
version: '3.4'
services:
  api-gateway:
    build:
      context: ./api-gateway
    ports:
      - "8080:8080"
    restart: on-failure
    environment:
      #overriding spring application.properties
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:9091/eureka/
      - keycloak-client.server-url=http://10.0.0.17:18080/auth ## use host name or ip of the host machine
    depends_on:
      - eureka-server
  eureka-server:
    build:
      context: ./eureka-server
    ports:
      - "9091:9091"
    restart: on-failure
  microservice-consumer:
    build:
      context: ./microservice-consumer
    ports:
      - "9080:9080"
    restart: on-failure
    environment:
      #overriding spring application.properties
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:9091/eureka/
      - keycloak-client.server-url=http://10.0.0.17:18080/auth ## use host name or ip of the host machine
    depends_on:
      - eureka-server
  microservice-producer:
    build:
      context: ./microservice-producer
    ports:
      - "9081:9081"
    restart: on-failure
    environment:
      #overriding spring application.properties
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:9091/eureka/
      - keycloak-client.server-url=http://10.0.0.17:18080/auth ## use host name or ip of the host machine
    depends_on:
      - eureka-server
  keycloak:
    image: jboss/keycloak:11.0.0
    volumes:
      - ./keycloak-server/realm-export.json:/tmp/keycloak/config/realm-export.json
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      KEYCLOAK_IMPORT: /tmp/keycloak/config/realm-export.json
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
    ports:
      - "18080:18080"
    command:
      - "-b"
      - "0.0.0.0"
      - "-Djboss.socket.binding.port-offset=10000"
    restart: on-failure
    depends_on:
      - postgres
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
volumes:
  postgres_data:
    name: keycloak_postgres_data
    driver: local
#spring boot #microservice #spring cloud #keycloak #eureka server #spring cloud gateway #spring secuirty 5 #sso authentication #java microservice #jwt token