Fredy Larson

Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation

On Wednesday, March 11, 2020, I conducted the webinar titled “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and especially during the webinar. Some of them were answered, but a lot of them were not. I want to answer all open questions in this series of seven blog posts. Today I am posting the final two in the series.

Note that we have also started to experiment with the Camunda Question Corner and are discussing making it more frequent, so keep an eye on our community for more opportunities to ask anything (especially as in-person events are canceled for some time).

Part 1: BPMN & modeling related questions (6 answers)

Part 2: Architecture related questions (12)

Part 3: Stack & technology questions (6)

Part 4: Camunda product-related questions (5)

Part 5: Camunda Optimize specific questions (3)

Part 6: Questions about best practices (5)

Part 7: Questions around project layout, journey and value proposition (3)

Questions about best practices

Q: Business data versus workflow data: if you cannot tear them apart, how can you keep them consistent? Are the eventual/transactional consistency problems simpler or more complex with Camunda BPM in the equation?

This is quite a complex question, as it depends on the exact architecture and technology you want to use.

Example 1: You use Camunda embedded as a library, probably via the Spring Boot starter. In this case, your business data can live in the same database as the workflow context, so both writes can join one ACID transaction and everything stays strongly consistent.
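To make this concrete, here is a minimal, hypothetical sketch of such an embedded setup: a Spring service method saves a business entity and starts a process instance inside the same transaction, so either both changes are committed or neither is. The OrderRepository, the Order entity and the "order-fulfillment" process key are made up for illustration; only the Camunda RuntimeService API is real.

```java
import org.camunda.bpm.engine.RuntimeService;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.util.Map;

@Service
public class OrderService {

    private final OrderRepository orderRepository; // hypothetical JPA repository
    private final RuntimeService runtimeService;   // provided by the Camunda Spring Boot starter

    public OrderService(OrderRepository orderRepository, RuntimeService runtimeService) {
        this.orderRepository = orderRepository;
        this.runtimeService = runtimeService;
    }

    @Transactional
    public void placeOrder(Order order) {
        // Business data and workflow state are written in the same ACID transaction:
        // if starting the process fails, the order insert is rolled back as well.
        orderRepository.save(order);
        runtimeService.startProcessInstanceByKey(
                "order-fulfillment",                       // hypothetical process key
                Map.of("orderId", order.getId()));
    }
}
```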

Example 2: You leverage Camunda Cloud and code your service in Node.js, storing data in some database. Now you have no shared transaction, so you start living in the eventually consistent world and need to rely on “at-least-once” semantics. This is not a problem per se, but it does require some thinking about the situations that can arise. I should probably write a separate piece about that, but I have used this picture in the past to explain the problem (and this very basic blog post might also help):

So you can end up with money charged on the credit card while the workflow does not yet know about it. But in that case you leverage the retry capabilities and will be fine soon (= eventually).
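To illustrate what “at-least-once” means in practice: the example above mentions Node.js, but the same pattern written against the Zeebe Java client looks roughly like the sketch below. The worker charges a credit card idempotently, so if the job is redelivered after a crash or timeout the charge is not executed twice. The PaymentClient call and the "charge-credit-card" job type are assumptions for illustration; the ZeebeClient worker API itself is real, though the package name depends on the client version.

```java
import io.camunda.zeebe.client.ZeebeClient;

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ChargeCardWorker {

    // In a real service this would be a database table of processed charge IDs;
    // an in-memory set is used here only to keep the sketch self-contained.
    private static final Set<String> processedCharges = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) {
        try (ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500")   // assumption: local Zeebe gateway
                .usePlaintext()
                .build()) {

            client.newWorker()
                    .jobType("charge-credit-card")   // hypothetical job type from the BPMN model
                    .handler((jobClient, job) -> {
                        String chargeId = (String) job.getVariablesAsMap().get("chargeId");

                        // At-least-once delivery: the same job may be handed to us again,
                        // so the actual charge must be guarded by an idempotency check.
                        if (processedCharges.add(chargeId)) {
                            // chargeCreditCard(chargeId); // hypothetical call to a payment provider
                        }

                        jobClient.newCompleteCommand(job.getKey()).send().join();
                    })
                    .open();

            // Keep the worker running (simplified for the sketch).
            Thread.currentThread().join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```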

#microservices #monitoring-microservices #microservices-or #workflow-automation #process-automation #bpmn #workflow #developers-workflow

Roberta Ward

Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation

On Wednesday, March 11, 2020, I conducted the webinar titled “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and especially during the webinar. Some of them were answered, but a lot of them were not. I want to answer all open questions in this series of seven blog posts.

Part 1: BPMN & modeling related questions (6 answers)

Part 2: Architecture related questions (12)

Part 3: Stack & technology questions (6)

Part 4: Camunda product-related questions (5)

Part 5: Camunda Optimize specific questions (3)

Part 6: Questions about best practices (5)

Part 7: Questions around project layout, journey and value proposition (3)

Note that we have also started to experiment with the Camunda Question Corner and are discussing making it more frequent, so keep an eye on our community for more opportunities to ask anything (especially as in-person events are canceled for some time).

Camunda product-related questions

Q: What is the difference between Camunda BPM and Zeebe?

Or different forms of asking the same question: How do you position Camunda BPM vs Zeebe in relation to this presentation? Is Camunda BPM still the best/most reliable solution for microservice architecture with orchestration flows? Or is Zeebe the recommended route for such a new project?

To get everybody on the same page first, within Camunda we have two open-source workflow engine projects:

  • Camunda BPM: A BPMN workflow engine that persists state via a relational database. The engine itself is stateless, and if you cluster the engine, all nodes meet in the database.
  • Zeebe: A BPMN workflow engine that persists state on its own (a kind of event sourcing). Zeebe forms its own distributed system and replicates its state to other nodes using the Raft protocol. If you want to learn more about it, check out Zeebe.io — a horizontally scalable distributed workflow engine.

#microservices #workflow-autom #microservices-or #monitoring-micro #bpmn-workflow #workflow-modeling #camunda #zeebe

Monitor & Orchestrate Your Microservices Landscape via Workflow Automation (Part 1 of 7)

Back in March, I conducted the webinar: “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. You can find the recording of the webinar online, as well as the slides. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and, especially, during the webinar. I was able to answer some of these, but ran out of time to answer them all.

So I want to answer all open questions in this series of seven blog posts covering:

  • Part 1: BPMN & modeling-related questions (6 answers)
  • Part 2: Architecture related questions (12)
  • Part 3: Stack & technology questions (6)
  • Part 4: Camunda product-related questions (5)
  • Part 5: Camunda Optimize specific questions (3)
  • Part 6: Questions about best practices (5)
  • Part 7: Questions around project layout, journey and value proposition (3)

We’ve also started to experiment with the Camunda Question Corner - an open opportunity to put your questions to Camunda experts in an interactive and fun online webinar - so keep an eye on the Camunda Forum for more details.

BPMN & modeling-related questions

Q: How do you present BPMN diagrams so that non-technical people can understand them?

There is no simple answer to that. But in my experience most people can understand a basic subset of BPMN easily. For our Real-Life BPMN book, we created a chart showing the elements we see used most often in automation-related projects:

#monitoring-microservices #microservices-orchestration #workflow-automation #bpmn #workflow-modeling #bpmn-diagrams #camunda-process-engine #hackernoon-top-story

Einar Hintz

The Principles of Chaos Engineering

Resilience is something those who use Kubernetes to run apps and microservices in containers aim for. When a system is resilient, it can handle losing a portion of its microservices and components without the entire system becoming inaccessible.

Resilience is achieved by integrating loosely coupled microservices. When a system is resilient, microservices can be updated or taken down without having to bring the entire system down. Scaling becomes easier too, since you don’t have to scale the whole cloud environment at once.

That said, resilience is not without its challenges. Building microservices that are independent yet work well together is not easy. You also have to create and maintain a reliable system with high fault tolerance. This is where Chaos Engineering comes into play.

What Is Chaos Engineering?

Chaos Engineering has been around for almost a decade now, but it is still a relevant and useful practice for improving your overall system architecture. In essence, Chaos Engineering is the process of deliberately triggering and injecting faults into a system. Instead of waiting for errors to occur, engineers take deliberate steps to cause (or simulate) errors in a controlled environment.
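As a toy illustration of what “injecting faults deliberately” can look like at the code level, here is a minimal, hypothetical Java sketch of a wrapper that randomly adds latency or throws failures around calls to a downstream operation. Real Chaos Engineering tools (such as the Chaos Monkey mentioned below) operate at the infrastructure level, but the underlying principle is the same; the probabilities and the wrapped call here are invented for the example.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

/**
 * Hypothetical fault-injecting wrapper: with a configurable probability it adds
 * latency or throws an exception instead of returning the real result.
 */
public class ChaosWrapper {

    private final double faultProbability;    // e.g. 0.3 = 30% of calls are disturbed
    private final long injectedLatencyMillis; // delay used for latency faults

    public ChaosWrapper(double faultProbability, long injectedLatencyMillis) {
        this.faultProbability = faultProbability;
        this.injectedLatencyMillis = injectedLatencyMillis;
    }

    public <T> T call(Supplier<T> operation) {
        if (ThreadLocalRandom.current().nextDouble() < faultProbability) {
            // Half of the injected faults are latency, half are hard failures.
            if (ThreadLocalRandom.current().nextBoolean()) {
                try {
                    Thread.sleep(injectedLatencyMillis);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            } else {
                throw new RuntimeException("chaos: injected failure");
            }
        }
        return operation.get();
    }

    public static void main(String[] args) {
        ChaosWrapper chaos = new ChaosWrapper(0.3, 500);
        for (int i = 0; i < 5; i++) {
            try {
                // The "downstream call" is just a string here to keep the sketch runnable.
                System.out.println(chaos.call(() -> "response " + System.currentTimeMillis()));
            } catch (RuntimeException e) {
                System.out.println("caller observed: " + e.getMessage());
            }
        }
    }
}
```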

Chaos Engineering allows for better, more advanced resilience testing. Developers can now experiment in cloud-native distributed systems. Experiments involve testing both the physical infrastructure and the cloud ecosystem.

Chaos Engineering is not a new approach. In fact, companies like Netflix have been practicing resilience testing for years with Chaos Monkey, an in-house Chaos Engineering framework designed to strengthen their cloud infrastructure.

When dealing with a large-scale distributed system, Chaos Engineering provides an empirical way of building confidence by anticipating faults instead of reacting to them. The chaotic condition is triggered intentionally for this purpose.

There are a lot of analogies depicting how Chaos Engineering works, but the traffic light analogy represents the concept best. Conventional testing is similar to testing traffic lights individually to make sure that they work.

Chaos Engineering, on the other hand, means closing out a busy array of intersections to see how traffic reacts to the chaos of losing traffic lights. Since the test is run deliberately, more insights can be collected from the process.

#devops #chaos engineering #high fault tolerance #microservice-based architecture #microservices #microservices architecture #resilience engineering

Einar Hintz

Testing Microservices Applications

The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.

This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and close attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let’s get started, shall we?

A Brave New World

Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.

Testing also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a shared code base that ties everything together, and the app is designed to run as a whole.

Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.

Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.

The only monolithic or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running in your test environment, you can perform tests at any time.

Keep in mind that there are additional things to focus on when testing microservices.

  • Microservices rely on network communications to talk to each other, so network reliability and requirements must be part of the testing.
  • Automation and infrastructure elements are now defined as code, and you have to make sure that they also run properly when microservices are pushed through the pipeline.
  • While containerization is universal, you still have to pay attention to specific dependencies and create a testing strategy that allows for those dependencies to be included.

Test containers are the method of choice for many developers. Unlike monolith apps, which let you rely on stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
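As a minimal sketch of this idea, the snippet below uses the Testcontainers library to spin up a throwaway PostgreSQL container for the duration of a test run. The microservice under test is omitted, the image tag is only an assumption, and a PostgreSQL JDBC driver is assumed to be on the classpath; the point is simply that each test run gets its own disposable database instead of a shared environment.

```java
import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PostgresContainerSketch {

    public static void main(String[] args) throws Exception {
        // Start a disposable PostgreSQL instance for the duration of the test run.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13-alpine")) {
            postgres.start();

            // The microservice under test would be pointed at this JDBC URL;
            // here we only check that the database is reachable.
            try (Connection conn = DriverManager.getConnection(
                        postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("database is up, SELECT 1 returned " + rs.getInt(1));
            }
        } // the container is stopped and removed automatically when closed
    }
}
```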

Contract Testing as an Approach

As mentioned before, there are many ways to test microservices effectively, but one approach developers now rely on is contract testing. Loosely coupled microservices can be tested effectively and efficiently this way because the approach focuses on contracts; in other words, on how components or microservices communicate with each other.

Syntax and semantics define how components communicate with each other. By specifying them in a standardized way and testing microservices on their ability to produce the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
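To make the idea concrete without tying it to a specific tool such as Pact, here is a minimal, hypothetical Java sketch of a consumer-side contract check: the consumer declares the fields and types it expects in the provider's payload and verifies a sample response against that contract. The field names and the simulated payload are invented for illustration.

```java
import java.util.Map;
import java.util.Set;

public class OrderContractCheck {

    // The "contract": fields the consumer expects in the provider's payload.
    static final Set<String> REQUIRED_FIELDS = Set.of("orderId", "status", "totalCents");

    // Verify that a provider response satisfies the contract.
    static boolean satisfiesContract(Map<String, Object> response) {
        return REQUIRED_FIELDS.stream().allMatch(response::containsKey)
                && response.get("totalCents") instanceof Integer;
    }

    public static void main(String[] args) {
        // Simulated provider payload; in a real pipeline this would come from the
        // provider running in a test container or from a recorded interaction.
        Map<String, Object> payload = Map.of(
                "orderId", "o-42",
                "status", "CONFIRMED",
                "totalCents", 1999);

        System.out.println("contract satisfied: " + satisfiesContract(payload));
    }
}
```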

#testing #software testing #test automation #microservice architecture #microservice #test #software test automation #microservice best practices #microservice deployment #microservice components