Fredy Larson

Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation

On Wednesday, March 11, 2020, I conducted the webinar titled “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and especially during the webinar. Some of them were answered, but a lot of them were not. I want to answer all open questions in this series of seven blog posts. Today I am posting the final two in the series.

Note that we have also started to experiment with the Camunda Question Corner and are discussing making it more frequent, so keep an eye on our community for more opportunities to ask anything (especially as in-person events are canceled for some time).

Part 1: BPMN & modeling related questions (6 answers)

Part 2: Architecture related questions (12)

Part 3: Stack & technology questions (6)

Part 4: Camunda product-related questions (5)

Part 5: Camunda Optimize specific questions (3)

Part 6: Questions about best practices (5)

Part 7: Questions around project layout, journey and value proposition (3)

Questions about best practices

Q: Business data versus workflow data: if you cannot tear them apart, how can you keep them consistent? Are the eventual/transactional consistency problems simpler or more complex with Camunda BPM in the equation?

This is quite a complex question, as it depends on the exact architecture and technology you want to use.

Example 1: You use Camunda embedded as a library, probably via the Spring Boot starter. In this case, your business data can live in the same database as the workflow context, both writes can join one ACID transaction, and everything will be strongly consistent.
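
To make that concrete, here is a minimal sketch of the pattern, assuming the Camunda Spring Boot starter; the Order entity, the OrderRepository, the process definition key and the variable names are hypothetical:

```java
import java.util.Map;

import org.camunda.bpm.engine.RuntimeService;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final RuntimeService runtimeService;
    private final OrderRepository orderRepository; // hypothetical Spring Data repository for business data

    public OrderService(RuntimeService runtimeService, OrderRepository orderRepository) {
        this.runtimeService = runtimeService;
        this.orderRepository = orderRepository;
    }

    // Business data (the order) and workflow state (the new process instance) are
    // written in the same database transaction: either both commit or neither does.
    @Transactional
    public void placeOrder(Order order) {
        orderRepository.save(order);
        runtimeService.startProcessInstanceByKey(
                "order-fulfillment",                // hypothetical process definition key
                Map.of("orderId", order.getId()));  // pass a reference, not the whole entity
    }
}
```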

Example 2: You leverage Camunda Cloud and code your service in Node.js, storing data in some database. Now you have no shared transaction, so you start living in the eventually consistent world and need to rely on “at-least-once” semantics. This is not a problem per se, but it requires some thinking about the situations that can arise. I should probably write a separate piece about that, but I have used this picture in the past to explain the problem (and this very basic blog post might help as well).

So you can end up with money charged on the credit card but the workflow not yet knowing about it. In that case you leverage the retry capabilities, and the two will be consistent soon (that is, eventually).
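
A common way to live with at-least-once semantics is to make each worker idempotent, for example by deduplicating on a business key. Here is a minimal sketch, assuming the Zeebe Java client (package names vary by version); the job type, the variable names and the chargeIfNotAlreadyCharged helper are hypothetical:

```java
import io.camunda.zeebe.client.ZeebeClient;

public class ChargeCardWorker {

    public static void main(String[] args) {
        ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500") // assumption: local gateway without TLS
                .usePlaintext()
                .build();

        client.newWorker()
                .jobType("charge-credit-card") // hypothetical job type
                .handler((jobClient, job) -> {
                    String orderId = (String) job.getVariablesAsMap().get("orderId");
                    // At-least-once delivery means this handler can run more than once
                    // for the same work item, so the charge must be idempotent: use the
                    // orderId as a deduplication key before calling the payment provider.
                    chargeIfNotAlreadyCharged(orderId);
                    jobClient.newCompleteCommand(job.getKey()).send().join();
                })
                .open();
        // In a real service you would keep the JVM alive here; the worker polls
        // for jobs on background threads.
    }

    private static void chargeIfNotAlreadyCharged(String orderId) {
        // Placeholder: look up an existing charge for orderId first, so a
        // retried job does not charge the card twice.
    }
}
```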

#microservices #monitoring-microservices #microservices-or #workflow-automation #process-automation #bpmn #workflow #developers-workflow


Olivia Green

Many people mistakenly think that automating business processes and implementing modern tools such as https://www.airslate.com/product/create-web-forms is expensive and very difficult. They say you need a huge staff of employees, high qualifications, and bags of money. In fact, there is nothing complicated about automating business processes. On the contrary, it is a fascinating process, during which you begin to understand your business better.


Roberta Ward

Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation

On Wednesday, March 11, 2020, I conducted the webinar titled “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and especially during the webinar. Some of them were answered, but a lot of them were not. I want to answer all open questions in this series of seven blog posts.

Part 1: BPMN & modeling related questions (6 answers)

Part 2: Architecture related questions (12)

Part 3: Stack & technology questions (6)

Part 4: Camunda product-related questions (5)

Part 5: Camunda Optimize specific questions (3)

Part 6: Questions about best practices (5)

Part 7: Questions around project layout, journey and value proposition (3)

Note that we have also started to experiment with the Camunda Question Corner and are discussing making it more frequent, so keep an eye on our community for more opportunities to ask anything (especially as in-person events are canceled for some time).

Camunda product-related questions

Q: What is the difference between Camunda BPM and Zeebe?

Or different forms of asking the same question: How do you position Camunda BPM vs Zeebe in relation to this presentation? Is Camunda BPM still the best/most reliable solution for microservice architecture with orchestration flows? Or is Zeebe the recommended route for such a new project?

To get everybody on the same page first, within Camunda we have two open-source workflow engine projects:

  • Camunda BPM: A BPMN workflow engine that persists state via a relational database. The engine itself is stateless, and if you cluster the engine, all nodes meet in the database.
  • Zeebe: A BPMN workflow engine that persists state on its own (a kind of event sourcing). Zeebe forms its own distributed system and replicates its state to other nodes using the Raft protocol. If you want to learn more about it, check out Zeebe.io — a horizontally scalable distributed workflow engine.

#microservices #workflow-autom #microservices-or #monitoring-micro #bpmn-workflow #workflow-modeling #camunda #zeebe

Monitor & Orchestrate Your Microservices Landscape via Workflow Automation (Part 1 of 7)

Back in March, I conducted the webinar: “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. You can find the recording of the webinar online, as well as the slides. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and, especially, during the webinar. I was able to answer some of these, but ran out of time to answer them all.

So I want to answer all open questions in this series of seven blog posts covering:

  • Part 1: BPMN & modeling-related questions (6 answers)
  • Part 2: Architecture related questions (12)
  • Part 3: Stack & technology questions (6)
  • Part 4: Camunda product-related questions (5)
  • Part 5: Camunda Optimize specific questions (3)
  • Part 6: Questions about best practices (5)
  • Part 7: Questions around project layout, journey and value proposition (3)

We’ve also started to experiment with the Camunda Question Corner - an open opportunity to put your questions to Camunda experts in an interactive and fun online webinar - so keep an eye on the Camunda Forum for more details.

BPMN & modeling-related questions

Q: How to present BPMN diagrams so that common people can understand them?

There is no simple answer to that. But in my experience most people can understand a basic subset of BPMN easily. For our Real-Life BPMN book, we created a chart showing the elements we see used most often in automation-related projects.

#monitoring-microservices #microservices-orchestration #workflow-automation #bpmn #workflow-modeling #bpmn-diagrams #camunda-process-engine #hackernoon-top-story

Carmen Grimes

How to Monitor Third Party API Integrations

Many enterprises and SaaS companies depend on a variety of external API integrations in order to build an awesome customer experience. Some integrations outsource certain business functionality, such as handling payments or search, to companies like Stripe and Algolia. You may have integrated other partners which expand the functionality of your product offering. For example, if you want to add real-time alerts to an analytics tool, you might want to integrate the PagerDuty and Slack APIs into your application.

If you’re like most companies though, you’ll soon realize you’re integrating hundreds of different vendors and partners into your app. Any one of them could have performance or functional issues impacting your customer experience. Worse yet, the reliability of an integration may be less visible than that of your own APIs and backend. If the login functionality is broken, you’ll have many customers complaining they cannot log into your website. However, if your Slack integration is broken, only the customers who added Slack to their account will be impacted. On top of that, since the integration is asynchronous, your customers may not realize it is broken until days later, when they notice they haven’t received any alerts for some time.

How do you ensure your API integrations are reliable and high performing? After all, if you’re selling a feature like real-time alerting, your alerts had better be real-time and have at-least-once guaranteed delivery. Dropping alerts because your Slack or PagerDuty integration failed is unacceptable from a customer experience perspective.

What to monitor

Latency

Specific API integrations that have an exceedingly high latency could be a signal that your integration is about to fail. Maybe your pagination scheme is incorrect or the vendor has not indexed your data in the best way for you to efficiently query.

Latency best practices

Average latency only tells you half the story. An API that consistently takes one second to complete is usually better than an API with high variance. For example, if an API takes 30 milliseconds on average but 1 out of 10 calls takes up to five seconds, you have high variance in your customer experience, which makes bugs much harder to track down and degrades the experience in ways the average hides. This is why the 90th and 95th percentiles are important to look at.
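
As an illustration, here is a minimal sketch of computing percentiles from a batch of recorded latencies (nearest-rank method; in production you would more likely use your metrics library’s histograms):

```java
import java.util.Arrays;

public class LatencyPercentiles {

    // Nearest-rank percentile over a sorted copy of the samples.
    static double percentile(double[] latenciesMs, double p) {
        double[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Nine fast calls and one slow outlier: the average looks acceptable,
        // but p95 exposes the tail your slowest customers actually experience.
        double[] samples = {30, 28, 31, 29, 30, 32, 27, 30, 29, 5000};
        System.out.printf("avg = %.1f ms%n", Arrays.stream(samples).average().orElse(0));
        System.out.printf("p90 = %.1f ms%n", percentile(samples, 90));
        System.out.printf("p95 = %.1f ms%n", percentile(samples, 95));
    }
}
```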

Reliability

Reliability is a key metric to monitor, especially since you’re integrating APIs that you don’t have control over. What percentage of API calls are failing? In order to track reliability, you should have a rigid definition of what constitutes a failure.

Reliability best practices

While any API call with a response status code in the 4xx or 5xx family may be considered an error, you might have specific business cases where the API appears to complete successfully yet the call should still be considered a failure. For example, a data API integration that consistently returns no matches or no content could be considered failing even though the status code is always 200 OK. Another API could be returning bogus or incomplete data. Data validation is critical for measuring whether the data returned is correct and up to date.

Not every API provider and integration partner follows the suggested status code mapping.
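
As an illustration, here is a minimal sketch of a failure definition that goes beyond status codes (the empty-results rule is a hypothetical business case from the example above, not a standard):

```java
public class ApiCallClassifier {

    // Failure definition that goes beyond HTTP status codes: a 200 OK with an
    // empty result set from a data API is still counted as a failure here.
    static boolean isFailure(int statusCode, String body) {
        if (statusCode >= 400) {
            return true; // 4xx/5xx: protocol-level failure
        }
        // Hypothetical business rule: an empty "results" array counts as failing.
        return body == null || body.isBlank() || body.contains("\"results\": []");
    }

    public static void main(String[] args) {
        System.out.println(isFailure(500, "{}"));                 // true
        System.out.println(isFailure(200, "{\"results\": []}"));  // true: bogus success
        System.out.println(isFailure(200, "{\"results\": [1]}")); // false
    }
}
```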

Availability

While reliability is specific to errors and functional correctness, availability (or uptime) is a pure infrastructure metric that measures how often a service has an outage, even a temporary one. Availability is usually measured as a percentage of uptime per year, or as a number of 9’s.

Availability %             | Downtime per year   | Downtime per month  | Downtime per week   | Downtime per day
90% (“one nine”)           | 36.53 days          | 73.05 hours         | 16.80 hours         | 2.40 hours
99% (“two nines”)          | 3.65 days           | 7.31 hours          | 1.68 hours          | 14.40 minutes
99.9% (“three nines”)      | 8.77 hours          | 43.83 minutes       | 10.08 minutes       | 1.44 minutes
99.99% (“four nines”)      | 52.60 minutes       | 4.38 minutes        | 1.01 minutes        | 8.64 seconds
99.999% (“five nines”)     | 5.26 minutes        | 26.30 seconds       | 6.05 seconds        | 864.00 milliseconds
99.9999% (“six nines”)     | 31.56 seconds       | 2.63 seconds        | 604.80 milliseconds | 86.40 milliseconds
99.99999% (“seven nines”)  | 3.16 seconds        | 262.98 milliseconds | 60.48 milliseconds  | 8.64 milliseconds
99.999999% (“eight nines”) | 315.58 milliseconds | 26.30 milliseconds  | 6.05 milliseconds   | 864.00 microseconds
99.9999999% (“nine nines”) | 31.56 milliseconds  | 2.63 milliseconds   | 604.80 microseconds | 86.40 microseconds
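
The numbers follow directly from downtime = (1 - availability) * period. A quick sketch reproducing the “three nines” row:

```java
public class Downtime {
    public static void main(String[] args) {
        double availability = 0.999;       // "three nines"
        double hoursPerYear = 24 * 365.25; // the table assumes a 365.25-day year
        double downtimeHours = (1 - availability) * hoursPerYear;
        System.out.printf("%.2f hours of downtime per year%n", downtimeHours); // prints 8.77
    }
}
```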

Usage

Many API providers are priced on API usage. Even if the API is free, they most likely have some sort of rate limiting implemented on the API to ensure bad actors are not starving out good clients. This means tracking your API usage with each integration partner is critical to understand when your current usage is close to the plan limits or their rate limits.

Usage best practices

It’s recommended to tie usage back to your end users, even if the API integration is quite downstream from your customer experience. This enables measuring the direct ROI of specific integrations and finding trends. For example, let’s say your product is a CRM and you are paying Clearbit $199 a month to enrich up to 2,500 companies. That is a direct cost tied to your customers’ usage. If free-tier customers are using most of your Clearbit quota, you may want to reconsider your pricing strategy; potentially, Clearbit enrichment should be on the paid tiers only to reduce your own cost.
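
A minimal sketch of attributing integration usage back to end users (the quota mirrors the Clearbit example above; the in-memory counters are a hypothetical stand-in for a real metrics store):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class IntegrationUsageTracker {

    private static final int MONTHLY_QUOTA = 2_500; // plan limit from the example above

    // customerId -> enrichment calls this month (in-memory stand-in for a real store)
    private final Map<String, AtomicInteger> usageByCustomer = new ConcurrentHashMap<>();
    private final AtomicInteger totalThisMonth = new AtomicInteger();

    // Record one outbound enrichment call, attributed to the end user who caused it.
    // Returns false when the plan quota is exhausted so callers can skip the call.
    public boolean recordEnrichmentCall(String customerId) {
        if (totalThisMonth.incrementAndGet() > MONTHLY_QUOTA) {
            totalThisMonth.decrementAndGet();
            return false; // over quota: a signal for alerting and pricing decisions
        }
        usageByCustomer.computeIfAbsent(customerId, id -> new AtomicInteger())
                       .incrementAndGet();
        return true;
    }

    public int usageOf(String customerId) {
        AtomicInteger n = usageByCustomer.get(customerId);
        return n == null ? 0 : n.get();
    }
}
```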

How to monitor API integrations

Monitoring API integrations seems like the correct remedy to stay on top of these issues. However, traditional Application Performance Monitoring (APM) tools like New Relic and AppDynamics focus more on monitoring the health of your own websites and infrastructure. This includes infrastructure metrics like memory usage and requests per minute, along with application-level health such as Apdex scores and latency. Of course, if you’re consuming an API that’s running in someone else’s infrastructure, you can’t just ask your third-party providers to install an APM agent that you have access to. This means you need a way to monitor the third-party APIs indirectly, or via some other instrumentation methodology.

#monitoring #api integration #api monitoring #monitoring and alerting #monitoring strategies #monitoring tools #api integrations #monitoring microservices

Einar Hintz

Testing Microservices Applications

The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.

This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and immense attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let’s get started, shall we?

A Brave New World

Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.

Testing also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a base code that ties everything together, and the app is designed to run as a complete app to work properly.

Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.

Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.

The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.

Keep in mind that there are additional things to focus on when testing microservices.

  • Microservices rely on network communications to talk to each other, so network reliability and requirements must be part of the testing.
  • Automation and infrastructure elements are now defined as code, and you have to make sure that they also run properly when microservices are pushed through the pipeline.
  • While containerization is universal, you still have to pay attention to specific dependencies and create a testing strategy that allows for those dependencies to be included.

Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
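
For example, a minimal sketch using the Java Testcontainers library (assumes Docker is available; the throwaway Postgres stands in for whatever backing service your microservice needs):

```java
import org.testcontainers.containers.PostgreSQLContainer;

public class PersistenceTestSketch {

    public static void main(String[] args) {
        // Spin up a real, disposable Postgres in Docker instead of mocking the
        // database, so the service under test talks to genuine infrastructure.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15")) {
            postgres.start();
            // Point the microservice under test at the container and run tests.
            System.out.println("test database ready at " + postgres.getJdbcUrl());
        } // the container is stopped and removed when the block exits
    }
}
```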

Contract Testing as an Approach

As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.

Syntax and semantics define how components communicate with each other. By specifying them in a standardized way and testing microservices on their ability to generate the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
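
As a sketch of the idea (hand-rolled for illustration; in practice you would reach for a contract-testing tool such as Pact, and the endpoint and field names here are hypothetical):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderApiContractTest {

    // The "contract": fields this consumer relies on in the provider's response.
    private static final String[] REQUIRED_FIELDS = {"\"orderId\"", "\"status\"", "\"total\""};

    // Stand-in for a call to the provider, or to a stubbed/recorded response.
    private String fetchOrderResponse() {
        return "{\"orderId\": \"42\", \"status\": \"SHIPPED\", \"total\": 19.99}";
    }

    @Test
    void responseContainsEveryFieldTheConsumerDependsOn() {
        String body = fetchOrderResponse();
        for (String field : REQUIRED_FIELDS) {
            // If the provider renames or drops a field, this fails before the
            // broken integration reaches production.
            assertTrue(body.contains(field), "missing contract field: " + field);
        }
    }
}
```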

#testing #software testing #test automation #microservice architecture #microservice #test #software test automation #microservice best practices #microservice deployment #microservice components