Implementation of the Saga pattern in a microservices architecture

In recent years, microservices have been one of the hottest topics in the industry, even in contexts where they are not needed. Often the architecture design is wrong, and the result is more of a micro-monolith than a microservices system. If you answer “Yes” to any of these basic questions, your architecture is probably wrong:

  • Do you have a single instance of your service?
  • Do you have a single database (or schema)?
  • Is the communication between services synchronous?

There are many more questions to ask, but in this post I’ll show you a simple microservices architecture that complies with the main patterns, based on the book “Microservices Patterns” by Chris Richardson.

The main idea in my example is to build management software for “McPaspao”, my hypothetical fast food :-D. Below is a preliminary domain-based analysis:

  • Orders Management
  • Kitchen Management
  • Delivery Management

The Orders Management handles hamburger orders, the Kitchen Management handles the kitchen work (e.g. cooking hamburgers or managing the fridge), and the Delivery Management handles the deliveries of the hamburgers. So I need at least three different services, each one with its own database, and each service needs to communicate with the others. In this scenario five other components are needed:

  • Orders Database
  • Kitchen Database
  • Delivery Database
  • Messaging Service
  • API Gateway

In a microservices architecture, API Gateway, Messaging Service and Database per Service are common patterns used to solve a number of problems, for example:

  • Messaging Service: services often collaborate to handle a request, so they must use an inter-process communication protocol, more specifically an asynchronous messaging system.
  • Database per Service: each service’s database must be part of its implementation to ensure loose coupling, so that the service can be developed, deployed and scaled independently.
  • API Gateway: in a microservices architecture there are a lot of services, protocols, addresses, ports, security policies, redundancy policies, etc. The API Gateway pattern addresses this complexity by giving clients a single entry point that manages all the listed aspects and more.

[Image: overall architecture of the McPaspao system]

Each microservice is implemented following the hexagonal architecture style: the core logic is embedded inside a hexagon, and the edges of the hexagon are the inputs and outputs. The aim is to layer the objects so that the core logic is isolated from outside elements: the core logic sits at the center of the picture, and all the other elements (DB, API, messaging) are integration points. Inbound adapters handle requests from the outside by invoking the business logic; outbound adapters are invoked by the business logic (to call external systems). A port defines a set of operations: it is how the business logic interacts with what is outside of it.

[Image: hexagonal architecture of a single microservice]
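Before looking at the real code, the style can be sketched in miniature. This is only an illustration of the port/adapter idea, not the project’s code: all the names here (GreetUseCase, GreetingStore, GreetService) are invented for the example.

```java
// Hexagonal style in miniature: the core depends only on ports,
// while adapters live at the edges of the hexagon.

// Inbound port: how the outside invokes the core.
interface GreetUseCase { String greet(String name); }

// Outbound port: what the core needs from the outside.
interface GreetingStore { void save(String greeting); }

// Core business logic: it knows the ports, never the adapters.
class GreetService implements GreetUseCase {
    private final GreetingStore store; // outbound port, injected

    GreetService(GreetingStore store) { this.store = store; }

    public String greet(String name) {
        String greeting = "Hello " + name;
        store.save(greeting); // invoked *by* the core via the outbound port
        return greeting;
    }
}

public class HexagonSketch {
    public static void main(String[] args) {
        // An inbound adapter (REST controller, CLI, test...) would call the
        // inbound port; here a trivial in-memory outbound adapter is plugged in.
        StringBuilder db = new StringBuilder();
        GreetUseCase core = new GreetService(db::append);
        System.out.println(core.greet("McPaspao")); // prints: Hello McPaspao
    }
}
```

Swapping the in-memory store for a MongoDB repository, or the caller for a REST controller, would not touch GreetService at all: that is the decoupling the hexagon buys you.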

To explain the internal architecture in detail, I’ll show a single microservice, the Delivery Service. It has a single API to monitor the status of a delivery, and it defines an inbound port, IDeliveryApi:

public interface IDeliveryApi {
    @ApiOperation(value = "View delivery status", response = DeliveryDTO.class, responseContainer = "list")
    @RequestMapping(value = "status", produces = MediaType.APPLICATION_JSON_VALUE, method = RequestMethod.GET)
    @ResponseBody
    List<DeliveryDTO> status();
}

The class DeliveryApi is an inbound adapter:

@RestController
@RequestMapping("/delivery/")
@Api(tags = "DeliveryServices")
public class DeliveryApi implements IDeliveryApi {

    @Autowired
    private DeliveryService deliveryService;

    @Override
    public List<DeliveryDTO> status() {
        return deliveryService.getAll();
    }
}

The class DeliveryService represents the business logic:

@Service
public class DeliveryService {

    @Autowired
    private DeliveryRepository deliveryRepository;

    @Autowired
    private DozerBeanMapper dozerBeanMapper;

    public List<DeliveryDTO> getAll() {
        List<Delivery> deliveryList = deliveryRepository.findAll();
        List<DeliveryDTO> res = null;
        if (deliveryList != null) {
            res = new ArrayList<>();
            for (Delivery delivery : deliveryList) {
                res.add(dozerBeanMapper.map(delivery, DeliveryDTO.class));
            }
        }
        return res;
    }
}

The interface IDeliveryPublisher is an outbound port:

public interface IDeliveryPublisher {
    void sendToOrderCallback(OrderDTO orderDTO) throws JsonProcessingException;
}

The class DeliveryPublisher is an outbound adapter:

@Service
public class DeliveryPublisher implements IDeliveryPublisher {

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Override
    public void sendToOrderCallback(OrderDTO orderDTO) throws JsonProcessingException {
        kafkaTemplate.send(TOPIC_ORDER_CALLBACK, objectMapper.writeValueAsString(orderDTO));
    }
}

Each microservice (in my example) uses this style of architecture internally to keep the software layers loosely coupled. But this is only the internal architecture of a single microservice; other microservices could use a layered architecture style, for example.

A simple use case that involves every microservice is order management: a browser makes a request for a hamburger, and the Order Service receives the order and writes it to its database. The order-management work is finished, but to complete the order the Order Service needs to contact the Kitchen Service, so it sends a message on a topic (to keep the inter-process communication asynchronous). The Kitchen Service listens on this topic, consumes the message and processes the order, giving feedback to the Order Service through another topic. When the Kitchen Service has cooked the hamburger, it sends a message to the Delivery Service, which processes the message, delivers the hamburger and sends its own feedback. Every communication between the microservices goes through the message broker, Kafka in my example: this is the Choreography Saga pattern.
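The choreography just described can be sketched with an in-memory stand-in for the broker. The topic names mirror the real ones used in the project, but the handlers and the event payload are invented for the illustration; each service simply reacts to an event by publishing the next one, with no central coordinator.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal choreography sketch: an in-memory "broker" stands in for Kafka.
public class ChoreographySketch {

    // topic name -> subscribers
    static final Map<String, List<Consumer<String>>> broker = new HashMap<>();

    public static final List<String> log = new ArrayList<>();

    static void subscribe(String topic, Consumer<String> handler) {
        broker.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    static void publish(String topic, String event) {
        broker.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }

    public static void main(String[] args) {
        // Kitchen Service: consumes new orders, cooks, notifies delivery.
        subscribe("kitchenservice", order -> {
            log.add("COOKING " + order);
            publish("deliveryservice", order);
        });
        // Delivery Service: consumes cooked orders, delivers, sends feedback.
        subscribe("deliveryservice", order -> {
            log.add("DELIVERED " + order);
            publish("orderservicecallback", order);
        });
        // Order Service: consumes the callback and closes the saga.
        subscribe("orderservicecallback", order -> log.add("CLOSED " + order));

        // The Order Service starts the saga by publishing the new order.
        publish("kitchenservice", "order-1");
        System.out.println(log);
        // prints: [COOKING order-1, DELIVERED order-1, CLOSED order-1]
    }
}
```

Notice that no component knows the whole flow: each service only knows the topics it reads from and writes to, which is exactly what distinguishes a choreography-based saga from an orchestration-based one.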

To bring up the entire architecture, I use the docker-compose.yml (a docker app) listed below:

version: '3.2'
services:

  order-service:
    image: paspaola/order-service:0.0.1
    ports:
      - 8090:8090
    depends_on:
      - mongodb-order
      - kafkabroker
    networks:
      - mcpaspao

  kitchen-service:
    image: paspaola/kitchen-service:0.0.1
    ports:
      - 8080:8080
    depends_on:
      - mongodb-kitchen
      - kafkabroker
    networks:
      - mcpaspao

  delivery-service:
    image: paspaola/delivery-service:0.0.1
    ports:
      - 8070:8070
    depends_on:
      - mongodb-delivery
      - kafkabroker
    networks:
      - mcpaspao

  mongodb-delivery:
    image: mongo:3.4.22-xenial
    ports:
      - 27017:27017
    networks:
      - mcpaspao

  mongodb-order:
    image: mongo:3.4.22-xenial
    ports:
      - 27018:27017
    networks:
      - mcpaspao

  mongodb-kitchen:
    image: mongo:3.4.22-xenial
    ports:
      - 27019:27017
    networks:
      - mcpaspao

  kafkabroker:
    image: paspaola/kafka-mcpaspao
    ports:
      - 2181:2181
      - 9092:9092
    environment:
      - KAFKA_ADVERTISED_LISTENERS=${advertised.addr}
    networks:
      - mcpaspao

  kong-mcpaspao:
    image: paspaola/kong-mcpaspao:0.0.1
    ports:
      - 8000:8000
      - 8443:8443
      - 8001:8001
      - 8444:8444
    networks:
      - mcpaspao
    depends_on:
      - delivery-service
      - kitchen-service
      - order-service

networks:
  mcpaspao:

As in the big picture above, there are three services and three databases, plus the Kafka broker, a customized image that already has all the needed topics on board:

  • orderservice
  • orderservicecallback
  • kitchenservice
  • deliveryservice

The Kafka container also includes an instance of Zookeeper, needed to start Kafka; you can read how to build the image here.

The last component is the API Gateway, Kong: the classic installation uses a database such as PostgreSQL, but it is also possible (for development use) to start Kong declaratively, with the simple kong.yml configuration below:

_format_version: "1.1"

services:
  - name: order-service
    url: http://order-service:8090
    routes:
      - name: order-service
        paths:
          - /order-service

  - name: kitchen-service
    url: http://kitchen-service:8080
    routes:
      - name: kitchen-service
        paths:
          - /kitchen-service

  - name: delivery-service
    url: http://delivery-service:8070
    routes:
      - name: delivery-service
        paths:
          - /delivery-service

plugins:
  - name: request-transformer
    service: kitchen-service
    config:
      add:
        headers:
          - x-forwarded-prefix:/kitchen-service

  - name: request-transformer
    service: order-service
    config:
      add:
        headers:
          - x-forwarded-prefix:/order-service

  - name: request-transformer
    service: delivery-service
    config:
      add:
        headers:
          - x-forwarded-prefix:/delivery-service

In this example I’m using the API Gateway in the simplest way, without any authentication/authorization service, service replicas, service discovery, etc., to keep the focus on the main aspect: the implementation of the Choreography Saga pattern.

To build the project, you can use Maven and then start every service manually, or you can build everything with the multistage Dockerfile (you have to enable the experimental features on Docker 19.x):

docker buildx build --target=order-service -t paspaola/order-service:0.0.1 --load . &&\
docker buildx build --target=kitchen-service -t paspaola/kitchen-service:0.0.1 --load . &&\
docker buildx build --target=delivery-service -t paspaola/delivery-service:0.0.1 --load . &&\
docker buildx build --target=kong-mcpaspao -t paspaola/kong-mcpaspao:0.0.1 --load .

and then start everything with the command:

docker app render -s advertised.addr="your docker host ip" mcpaspao.dockerapp | docker-compose -f - up

It’s time to test!

You can verify that every microservice is running using the Swagger user interface.

Now I want a hamburger!!! The kitchen needs some hamburgers because the fridge is empty, so (you have to install jq):

curl -X POST "http://localhost:8000/kitchen-service/kitchen/add?hamburgerType=KOBE&quantity=2" -H "accept: application/json"|jq -C && \
 \
curl -X GET "http://localhost:8000/kitchen-service/kitchen/status" -H "accept: application/json"|jq -C

I have added two hamburgers; now I place an order for two hamburgers:

printf "\n--START--\n" && \
curl -X POST "http://localhost:8000/order-service/order/create" -H "accept: application/json" -H "Content-Type: application/json" -d "{ \"addressDTO\": { \"number\": \"string\", \"street\": \"string\" }, \"cookingType\": \"BLOOD\", \"hamburgerList\": [ { \"hamburgerType\": \"KOBE\", \"quantity\": 2 } ], \"price\": 10}" |jq -C && \
printf "\n---------\n" && \
 \
curl -X GET "http://localhost:8000/order-service/order/view" -H "accept: application/json"|jq -C  && sleep 5 && \
printf "\n---------\n" && \
curl -X GET "http://localhost:8000/order-service/order/view" -H "accept: application/json"|jq -C && sleep 5 && \
printf "\n---------\n" && \
curl -X GET "http://localhost:8000/order-service/order/view" -H "accept: application/json"|jq -C && sleep 5 && \
printf "\n---------\n" && \
curl -X GET "http://localhost:8000/order-service/order/view" -H "accept: application/json"|jq -C && \
printf "\n---------\n" && \
 \
curl -X GET "http://localhost:8000/delivery-service/delivery/status" -H "accept: application/json"|jq -C && \
printf "\n--END--\n"

In the first step the order is in WAITING status, then moves through COOKING, PACKAGING and finally DELIVERED. If you run the script again, the system doesn’t have enough hamburgers, so the next order will go to WAITING and then ABORTED.
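The lifecycle above can be captured as a small state machine. This enum is a sketch based on the statuses observed in the test run, not the project’s actual code; the ABORTED transition is the saga’s compensating path when the kitchen cannot fulfil the order.

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the order lifecycle: WAITING -> COOKING -> PACKAGING -> DELIVERED
// on the happy path, WAITING -> ABORTED when there are not enough hamburgers.
public enum OrderStatus {
    WAITING, COOKING, PACKAGING, DELIVERED, ABORTED;

    // The statuses legally reachable from this one.
    public Set<OrderStatus> next() {
        switch (this) {
            case WAITING:   return EnumSet.of(COOKING, ABORTED);
            case COOKING:   return EnumSet.of(PACKAGING);
            case PACKAGING: return EnumSet.of(DELIVERED);
            default:        return EnumSet.noneOf(OrderStatus.class); // terminal
        }
    }
}
```

Making the allowed transitions explicit like this is a cheap way for each service to reject out-of-order events, which can always arrive in an asynchronous, choreography-based flow.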

I hope this guide helps clarify both the power and the complexity of a microservices architecture. This is only a practical example implemented with simple, basic components, but it should help you judge when to use this architecture and when not to. Thank you for reading.

#microservices #spring-boot #docker #devops
