Building microservices through Event Driven Architecture part09: Handling updates
During this journey, I will talk about how to handle updates in an Event Sourcing system.
In the previous steps, I stored all business changes of the system as events instead of storing the current state, and I rebuild the current state by applying all the events to the aggregate.
I have built a list of domain events: business changes that happened in the past, expressed in the Ubiquitous Language, for example ThePackageHasBeenDeliveredToCustomer.
Domain events are immutable: once an event has happened, it cannot be changed.
So, to correct a mistake in an event, I have to create a compensating event with the correct values, just like bank account transactions.
The aggregate records committed events and protects business invariants; it is the transaction boundary. To deal with concurrency, I will use Optimistic Concurrency Control (OCC) with versioning.
Instead of acquiring locks, each transaction verifies that no other transaction has modified the data it has read. If the data has not been changed, the transaction is committed; if the data has been changed by someone else, the transaction rolls back and can be restarted.
With versioning, the user reads the current state of the aggregate, then sends commands carrying that version number. If the version number matches the current version of the aggregate, the transaction is committed.
If the version number does not match the current version of the aggregate, it means the data has been updated by someone else, so the user should read the data again to get the correct version and retry.
In this tutorial, I will show how to update the Speech entity. It has the following properties: Title, Description, Url and Type. The update of each property is therefore an event and should be stored in the event store.
TEST CASE 1: ChangeTitle when title is null or empty should raise ArgumentNullAggregateException
Here I will test that if the Title is null or empty, the system should raise an exception.
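Expressed with xUnit, this test might look like the sketch below (test class boilerplate and using directives are omitted, as in the other snippets of this post). The Speech constructor, its value objects and the ChangeTitle(string, long) signature are my assumptions based on the previous parts, not the exact code of the repository.
[Fact]
public void ChangeTitleWhenTitleIsNullOrEmptyShouldRaiseArgumentNullAggregateException()
{
    // Arrange: a valid aggregate (constructor shape and value objects are assumptions)
    var speech = new Domain.SpeechAggregate.Speech(Guid.NewGuid(),
        new Title("a valid original title, long enough to pass validation"),
        new UrlValue("http://url-of-the-speech.com"),
        new Description("a valid description of the speech, long enough to pass validation"),
        SpeechType.Conferences);

    // Act & Assert: a null or empty title must be rejected before any event is applied
    Assert.Throws<ArgumentNullAggregateException>(() => speech.ChangeTitle(null, 0));
}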
TEST CASE 2: ChangeTitle when expected version is not equal to aggregate version should raise ConcurrencyException
Here I will test that if the expectedVersion is not equal to the aggregateVersion, the system should raise an exception.
Because I create a new speech, the aggregate version is equal to zero, so if I set expectedVersion to one, the test should raise an exception.
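Here is a matching sketch, reusing a hypothetical CreateValidSpeech() helper that performs the same arrange step as in the previous test:
[Fact]
public void ChangeTitleWhenExpectedVersionIsNotEqualToAggregateVersionShouldRaiseConcurrencyException()
{
    // Arrange: a freshly created aggregate has Version == 0 (CreateValidSpeech is a hypothetical helper)
    var speech = CreateValidSpeech();

    // Act & Assert: expectedVersion = 1 does not match the aggregate version (0)
    Assert.Throws<ConcurrencyException>(() => speech.ChangeTitle("a brand new title, long enough", 1));
}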
TEST CASE 3: ChangeTitle with valid arguments should apply SpeechTitleChangedEvent
Here I will test that if there are no errors, the newTitle should be applied to the title of the speech. In other words: Speech.Title = "value of new title after updates".
Because the Apply function applies the event to the aggregate, the Title of the speech should be equal to the value carried by the SpeechTitleChangedEvent.
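And a sketch of the happy path, again with the hypothetical helper:
[Fact]
public void ChangeTitleWithValidArgumentsShouldApplySpeechTitleChangedEvent()
{
    // Arrange
    var speech = CreateValidSpeech();
    const string newTitle = "value of new title after updates";

    // Act: version 0 matches the freshly created aggregate
    speech.ChangeTitle(newTitle, 0);

    // Assert: the Apply function has mutated the aggregate state
    Assert.Equal(newTitle, speech.Title.Value);
}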
ChangeTitle final implementation:
The final implementation of ChangeTitle should look like this.
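The full listing is not reproduced in this post, so here is only a minimal sketch of what it could contain; ApplyChange and the SpeechTitleChangedEvent constructor arguments are my assumptions based on the AggregateRoot plumbing from the previous steps.
public void ChangeTitle(string newTitle, long expectedVersion)
{
    if (string.IsNullOrWhiteSpace(newTitle))
    {
        throw new ArgumentNullAggregateException(nameof(newTitle));
    }

    // Optimistic concurrency: reject the command if the caller worked on a stale version
    ValidateVersion(expectedVersion);

    // Record the domain event and apply it to the aggregate (ApplyChange is assumed to come from AggregateRoot)
    ApplyChange(new SpeechTitleChangedEvent(Id, newTitle));
}

// Invoked by the Apply mechanism; it only mutates state and never validates
private void Apply(SpeechTitleChangedEvent @event)
{
    Title = new Title(@event.Title);
}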
Very simple: if the title is not null or empty, apply a SpeechTitleChangedEvent. The Apply function sets the speech title with the value carried by the SpeechTitleChangedEvent.
The code that checks the version of the aggregate was developed in the previous steps (see the AggregateRoot.cs class):
public void ValidateVersion(long expectedVersion)
{
    if (Version != expectedVersion)
    {
        throw new ConcurrencyException($"Invalid version specified: expectedVersion = {expectedVersion} but originalVersion = {Version}.");
    }
}
ChangeDescription, ChangeUrl and ChangeType should follow the same scenario as ChangeTitle.
TEST CASE 1: Handling Update when Command is null should raise ApplicationArgumentNullException
Here I will test that if the updateCommand is null, then the system should raise an exception.
So I should mock all external dependencies: IUnitOfWork, ISpeechRepository and IEventSourcingSubscriber.
I will provide a null command and verify that an ApplicationArgumentNullException is raised.
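A possible shape for this test with Moq and xUnit is sketched below; the use case class name (UpdateSpeechUseCase), the IEventStoreRepository interface, the constructor order and the Handle method are my assumptions.
[Fact]
public async Task HandlingUpdateWhenCommandIsNullShouldRaiseApplicationArgumentNullException()
{
    // Arrange: mock every external dependency; none of them should ever be reached
    var moqUnitOfWork = new Mock<IUnitOfWork>();
    var moqSpeechRepository = new Mock<ISpeechRepository>();
    var moqEventSourcingSubscriber = new Mock<IEventSourcingSubscriber>();
    var moqEventStoreRepository = new Mock<IEventStoreRepository>();

    var sut = new UpdateSpeechUseCase(moqUnitOfWork.Object, moqSpeechRepository.Object,
        moqEventSourcingSubscriber.Object, moqEventStoreRepository.Object);

    // Act & Assert: a null command is rejected before any dependency is used
    await Assert.ThrowsAsync<ApplicationArgumentNullException>(() => sut.Handle(null));
}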
TEST CASE 2: Handling update when speech does not exist should raise ApplicationNotFoundException
Here I will test that if the speech to update does not exist, then the system should raise an exception (ApplicationNotFoundException).
I have to arrange my repository mock so that it returns a null speech:
moqEventStoreRepository.Setup(m => m.GetByIdAsync<Domain.SpeechAggregate.Speech>(command.SpeechId))
    .Returns(Task.FromResult((Domain.SpeechAggregate.Speech)null));
and that’s it.
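That arrangement is the key part. For completeness, the surrounding test could look like the sketch below, where the command is created before the Setup shown above and the handler is then invoked (the command type, its properties and the Handle method are assumptions):
// Hypothetical command; it is created first so that command.SpeechId is available to the Setup above
var command = new UpdateSpeechCommandMessage
{
    SpeechId = Guid.NewGuid(),
    Title = "new title of the speech",
    Version = 0
};

// ... moqEventStoreRepository.Setup(...) as shown above ...

// Act & Assert: an unknown speech identifier must raise ApplicationNotFoundException
await Assert.ThrowsAsync<ApplicationNotFoundException>(() => sut.Handle(command));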
TEST CASE 3: Handling Update when Command is not null should update speech Title
Here I will test that if the command is not null and the speech to update exists in the database, then the title should be updated.
A way to verify that the Speech title is modified is to check its value before it is sent to the repository; it should be equal to the value of the new title:
moqSpeechRepository.Verify(m =>
    m.UpdateAsync(It.Is<Domain.SpeechAggregate.Speech>(n =>
        n.Title.Value.Equals(command.Title)
    )), Times.Once);
TEST CASE 4: Handling Update when Expected version is not equal to aggregate version should raise ConcurrencyException
Here I will test that if the expectedVersion is not equal to the aggregateVersion, then the system should raise an exception.
The aggregate version is equal to zero because I instantiate a new speech, so if expectedVersion is not equal to zero, the system should raise a ConcurrencyException.
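The handler implementation itself is not listed in this post. Putting the four test cases above together, it might look roughly like the sketch below; the private field names, the Subscribe and Commit calls and the command type are my assumptions.
public async Task Handle(UpdateSpeechCommandMessage command)
{
    // TEST CASE 1: a null command is rejected immediately
    if (command == null)
    {
        throw new ApplicationArgumentNullException(nameof(command));
    }

    // Rebuild the aggregate from its event stream
    var speech = await _eventStoreRepository.GetByIdAsync<Domain.SpeechAggregate.Speech>(command.SpeechId);

    // TEST CASE 2: the speech must exist
    if (speech == null)
    {
        throw new ApplicationNotFoundException($"Speech with id {command.SpeechId} was not found");
    }

    // TEST CASES 3 and 4: each property change is a domain event, and ChangeTitle
    // validates the expected version (optimistic concurrency)
    speech.ChangeTitle(command.Title, command.Version);

    // Persist the new state and hand the uncommitted events to the event sourcing subscriber
    await _eventSourcingSubscriber.Subscribe(speech);
    await _speechRepository.UpdateAsync(speech);
    await _unitOfWork.Commit();
}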
TEST CASE 1: Handling Update when Speech is null should raise RepositoryArgumentNullException
TEST CASE 2: Handling Update when the speech does not exist should raise NotFoundRepositoryException
TEST CASE 3: Handling Update when the speech is valid and exists should perform the update
And the final implementation:
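The listing is not reproduced here; one plausible EF Core sketch that satisfies the three test cases above could look like this (the DbContext, the persistence entity and its property mapping are assumptions):
public async Task UpdateAsync(Domain.SpeechAggregate.Speech speech)
{
    // TEST CASE 1: a null aggregate is rejected
    if (speech == null)
    {
        throw new RepositoryArgumentNullException(nameof(speech));
    }

    // TEST CASE 2: the speech must already exist in the [dbo].[Speech] table
    var entity = await _context.Speech.FindAsync(speech.Id);
    if (entity == null)
    {
        throw new NotFoundRepositoryException($"Speech with id {speech.Id} was not found");
    }

    // TEST CASE 3: map the aggregate state onto the persistence model;
    // the unit of work commits the change later
    entity.Title = speech.Title.Value;
    entity.Description = speech.Description.Value;
    entity.Url = speech.Url.Value;
    entity.Type = speech.Type.Value;
}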
TEST CASE 1: Update Speech When ModelState Is Invalid Should Return BadRequest
TEST CASE 2: UpdateSpeech When An Exception Occurred Should Raise InternalServerError
As with RegisterSpeech, the ExceptionMiddleware handles this case.
TEST CASE 3: Update Speech When ModelState Is Valid With No Errors Should Return Ok
And the final implementation:
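The controller action is not listed here either; a minimal ASP.NET Core sketch consistent with the three test cases could be the following (the command type and the injected use case are assumptions):
[HttpPut]
public async Task<IActionResult> UpdateSpeech([FromBody] UpdateSpeechCommandMessage command)
{
    // TEST CASE 1: an invalid model returns 400 BadRequest
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    // TEST CASE 2: any exception thrown below is translated into a 500 InternalServerError
    // by the ExceptionMiddleware, exactly as for RegisterSpeech
    await _updateSpeechUseCase.Handle(command);

    // TEST CASE 3: everything went fine
    return Ok();
}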
Hit F5, then start Postman and SQL Server.
Let's start SQL Server and see what's going on.
Here I have two tables, [dbo].[Speech] and [dbo].[EventStore]. Let us run a SELECT query; you can see that these two tables are empty.
Let us start Postman and run a POST request to create a speech: http://localhost:62694/api/speech
OK, let's go.
The Postman scripts are here: LogCorner.EduSync.Command\src\Postman\BLOG.postman_collection.json
Now I should have a newly created speech and an event LogCorner.EduSync.Speech.Domain.Events.SpeechCreatedEvent, LogCorner.EduSync.Speech.Domain, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null.
Note that the version is equal to 0.
For each new speech, the version should be equal to zero.
If I inspect the payload, I should see my event:
{
    "Title": {
        "Value": "Le Lorem Ipsum est simplement du faux texte"
    },
    "Url": {
        "Value": "http://www.yahoo_1.fr"
    },
    "Description": {
        "Value": "Le Lorem Ipsum est simplement du faux texte employé dans la composition et la mise en page avant impression. Le Lorem Ipsum est le faux texte standard de l'imprimerie depuis les années 1500, quand un imprimeur anonyme assembla ensemble des morceaux de texte pour réaliser un livre spécimen de polices de texte"
    },
    "Type": {
        "Value": 3
    },
    "AggregateId": "7c8ea8a0-1900-4616-9739-7cb008d37f74",
    "EventId": "a688cc8a-ed56-4662-bbad-81e66ed917a0",
    "AggregateVersion": 0,
    "OcurrendOn": "2020-01-19T15:49:59.3913833Z"
}
To update the title of the speech, I run the following request: http://localhost:62694/api/speech
It is a PUT request.
I grab the identifier of the newly created speech, CF17D255-9991-4B7B-B08E-F65B54AA9335.
Let us copy it from SQL Server and paste it into the request body.
OK, now I can run the PUT request.
Come back to SQL Server to verify the result:
SELECT * FROM [dbo].[Speech]
SELECT * FROM [dbo].[EventStore]
I should see the updated title and a new event LogCorner.EduSync.Speech.Domain.Events.SpeechTitleChangedEvent.
The version should be 1 and the payload should be the update event:
{
    "Title": "UPDATE_1__Le Lorem Ipsum est simplement du faux texte",
    "AggregateId": "7c8ea8a0-1900-4616-9739-7cb008d37f74",
    "EventId": "de253f69-ea89-4a54-8927-e09553cc43c7",
    "AggregateVersion": 1,
    "OcurrendOn": "2020-01-19T15:55:14.1734365Z"
}
The source code of this article is available here (Feature/Task/EventSourcingApplication).
#dotnet #aspdotnet #csharp #microservices
Companies need to be thinking long-term before even starting a software development project. These needs are addressed at the level of architecture: business owners want to ensure agility, scalability, and performance.
The top contenders for scalable solutions are serverless and microservices. Both architectures prioritize security but approach it in their own ways. Let's take a look at how businesses can benefit from adopting serverless architecture versus microservices, and examine their differences, advantages, and use cases.
#serverless #microservices #architecture #software-architecture #serverless-architecture #microservice-architecture #serverless-vs-microservices #hackernoon-top-story
Event-driven architecture, or EDA, is an integration pattern where applications are oriented around publishing events and responding to events. It provides five key benefits to modern application architecture: scalability, resilience, agility, data sharing, and cloud-enabling.
This article explores how EDA fits into enterprise integration, its three styles, how it enables business strategy, its benefits and trade-offs, and the next steps to start an EDA implementation.
Although there are many brokers you can use to publish event messages, the open-source software Apache Kafka has emerged as the market leader in this space. This article is focused on a Kafka-based EDA, but much of the principles here apply to any EDA implementation.
If asked to describe integration a year ago, I would have said there are two modes: application integration and data integration. Today I’d say that integration is on a spectrum, with data on one end, application on the other end, and event integration in the middle.
The spectrum of integration.
Application integration is REST, SOAP, ESB, etc. These are patterns for making functionality in one application run in response to a request from another app. It's especially strong for B2B partnerships and for exposing value in one application to another. It's less strong for many data use cases, like BI reporting and ML pipelines, since most application integrations wait passively to be invoked by a client rather than actively pushing data where it needs to go.
Data integration covers patterns for getting data from point A to point B, including ETL, managed file transfer, etc. These are strong for BI reporting, ML pipelines, and other data movement tasks, but weaker than application integration for many B2B partnerships and for applications sharing functionality.
Event integration has one foot in data and the other in application integration, and it largely gets the benefits of both. When one application subscribes to another app’s events, it can trigger application code in response to those events, which feels a bit like an API from application integration. The events triggering this functionality also carry with them a significant amount of data, which feels a bit like data integration.
EDA strikes a balance between the two classic integration modes. Refactoring traditional application integrations into an event integration pattern opens more doors for analytics, machine learning, BI, and data synchronization between applications. It gets the best of application and data integration patterns. This is especially relevant for companies moving towards an operating model of leveraging data to drive new channels and partnerships. If your integration strategy does not unlock your data, then that strategy will fail. But if your integration strategy unlocks data at the expense of application architecture that’s scalable and agile, then again it will fail. Event integration strikes a balance between both those needs.
EDA often begins with isolated teams as a tactic for delivering projects. Ideally, such a project would have a deliberative approach to EDA and a common event message broker, usually cloud-native brokers on AWS, Azure, etc. Different teams select different brokers to meet their immediate needs. They do not consider integration beyond their project scope. Eventually, they may face the need for enterprise integration at a later date.
A major transition in EDA maturity happens when the investment in EDA shifts from a project tactic to enterprise strategy via a common event bus, usually Apache Kafka. Events can take a role in the organization’s business and technical innovation across the enterprise. Data becomes more rapidly shareable across the enterprise and also between you and your external strategic partners.
Before discussing the benefits of EDA, let’s cover the three common styles of EDA: event notification, event-carried state transfer, and event sourcing.
This pattern publishes events with minimal information: the event type, a timestamp, and a key value such as an account number or some other key of the entity that raised the event. This informs subscribers that an event occurred, but if subscribers need any information about how that event changed things (like which fields changed), they must invoke a data retrieval service on the system of record. This is the simplest form of EDA, but it provides the least benefit.
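For contrast with the next style, an event-notification message can be as small as the following C# record; the type and field names are purely illustrative and not taken from the article.
// A bare notification: subscribers learn that something changed, not what changed.
// Anyone needing the details must call a retrieval service on the system of record.
public record AccountChangedNotification(
    string EventType,      // e.g. "AccountUpdated"
    string AccountNumber,  // key of the entity that raised the event
    DateTimeOffset OccurredOn);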
In this pattern, the events carry all information about the state change, typically a before and after image. Subscribing systems can then store their cache of data without the need to retrieve it from the system of record.
This builds resilience since the subscribing systems can function if the source becomes unavailable. It helps performance, as there’s no remote call required to access source information. For example, if an inventory system publishes the full state of all inventory changes, a sales service subscribing to it can know the current inventory without retrieving from the inventory system — it can simply use the cache it built from the inventory events, even during an inventory service outage.
It also helps performance because the subscriber's data storage can be custom-tuned just for that subscriber's unique performance needs. Using the previous example, perhaps the inventory service is best suited to a relational database, but the sales service could get better performance from a NoSQL database like MongoDB. Since the sales service no longer needs to retrieve data from the inventory service, it is at liberty to use a different DBMS than the inventory service. Additionally, if the inventory service is having an outage, the sales service would be unaffected, since it pulls inventory data from its local cache.
The cons are that a lot of data is copied around, and there is more complexity on the receivers, since they have to manage all the state they are receiving.
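As an illustration only, the following C# sketch (using the Confluent.Kafka client) shows how a sales service could build its own inventory cache from full-state inventory events; the topic name, the event shape and the in-memory cache are assumptions, not code from the article.
using System.Collections.Concurrent;
using System.Text.Json;
using Confluent.Kafka;

// Full-state event: it carries the before and after image, so no call back to the inventory service is needed
public record InventoryChangedEvent(string Sku, int QuantityBefore, int QuantityAfter);

public class SalesInventoryCache
{
    // Local, subscriber-owned cache; it keeps working even during an inventory service outage
    private readonly ConcurrentDictionary<string, int> _stockBySku = new();

    public void Run()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "sales-service",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<string, string>(config).Build();
        consumer.Subscribe("inventory-changes");

        while (true)
        {
            var result = consumer.Consume();
            var inventoryEvent = JsonSerializer.Deserialize<InventoryChangedEvent>(result.Message.Value);
            if (inventoryEvent == null) continue;

            // The event carries the full after-state, so the cache entry can simply be overwritten
            _stockBySku[inventoryEvent.Sku] = inventoryEvent.QuantityAfter;
        }
    }
}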
#integration #microservices #data #kafka #enterprise architecture #event driven architecture #application architecture
We have been building software applications for many years using various tools, technologies, architectural patterns and best practices. It is evident that many software applications become large, complex monoliths over time for various reasons. A monolithic software application is like a large ball of spaghetti with criss-cross dependencies among its constituent modules. It becomes increasingly complex to develop, deploy and maintain monoliths, constraining the agility and competitive advantage of development teams. Also, let us not underestimate the challenge of clearing whatever technical debt monoliths accumulate, as changing part of a monolith's code may have the cascading effect of destabilizing working software in production.
Over the years, architectural patterns such as Service Oriented Architecture (SOA) and Microservices have emerged as alternatives to Monoliths.
SOA was arguably the first architectural pattern aimed at solving the typical monolith issues by breaking down a large, complex software application into sub-systems or "services". All these services communicate over a common enterprise service bus (ESB). However, these sub-systems or services are actually mid-sized monoliths, as they share the same database. Also, more and more service-aware logic gets added to the ESB, and it becomes a single point of failure.
Microservices as an architectural pattern has gathered steam due to large-scale adoption by companies like Amazon, Netflix, SoundCloud, Spotify, etc. It breaks down a large software application into a number of loosely coupled microservices. Each microservice is responsible for a specific, discrete task, can have its own database, and can communicate with other microservices through Application Programming Interfaces (APIs) to solve a large, complex business problem. Each microservice can be developed, deployed and maintained independently as long as it operates without breaching the well-defined set of APIs, called a contract, that it uses to communicate with other microservices.
#microservice architecture #microservice #scaling #thought leadership #microservices build #microservice
The software industry has come a long way, and throughout this journey software architecture has evolved a lot. 1-tier (single-node), 2-tier (client/server), 3-tier, and distributed architectures are some of the patterns we have seen along the way.
The majority of software companies are moving from monolithic architecture to microservices architecture, and microservices architecture is taking over the software industry day by day. While monolithic architecture has many benefits, it also has many shortcomings when catering to modern software development needs, making it very difficult to meet the demands of modern-world software requirements; as a result, microservices architecture is aggressively taking over software development. Microservices architecture enables us to deploy our applications more frequently, independently, and reliably, meeting modern-day software application development requirements.
#microservice architecture #istio #microservice best practices #linkerd #microservice communication #microservice design #envoy proxy #kubernetes architecture #api gateways #service mesh architecture
Are you trying to claw your way out of the web of API calls that ties your microservices together? Does a seemingly innocent change or bug fix result in a ripple effect across several business serving capabilities? Well, you’re not alone.
Microservices have been gaining steam since their introduction as an architectural style in 2011. Initially pioneered by companies like Amazon and Netflix as an alternative to their exploding monolithic codebases, they are now increasingly popular even at companies operating at a much smaller scale than the behemoths. And with good reason. When designed well, microservices are a great alternative to the problems often seen with their monolithic counterparts. The key phrase there though is “when designed well”. It’s simple enough when you have ten microservices - so scalable, such fun! But when those quickly grow to 50, and then 100, and then 500, you have a real problem on hand if you haven’t paid attention to how they all talk to each other.
Imagine you have several microservices all communicating via API calls. In this web of tightly coupled services, changes to one service may necessitate corresponding changes across multiple other services, and scaling one service would necessitate scaling a number of others as well. This problem was first described as the Death Star Architecture.
In the world of microservices, Death Star architecture is an anti-pattern where poorly designed microservices become highly interdependent, forming a complex network of interservice communication. When this happens, the entire thing becomes slow, inflexible, and fragile – and easy to blow up. (ref.)
At this point, you've lost many of the benefits of having microservices in the first place and, in reality, are left with a distributed monolith.
So how do you avoid the Death Star Architecture trap and allow your microservices to scale? How do you keep your microservice relatively isolated but still remain an integral node in the set of business flows that it serves? Enter the Event-Driven Microservice Architecture. The golden rule of Event-Driven microservices is that all communication is asynchronous. No API calls for us! Microservices instead publish records of their doings, also known as events. An Event is a record of a business action and must contain all information relevant to that action. Events are published to messaging infrastructure (think Kafka, RabbitMQ) and it is left to consuming microservices to figure out how to operate on them. By removing this tight coupling between services, it’s possible to truly reap the benefits offered by the microservices architecture pattern.
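As a toy illustration of "publish a record of a business action and let consumers decide how to react", here is a hedged C# sketch using the Confluent.Kafka producer; the OrderPlacedEvent shape and the topic name are invented for the example.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

// An event is a record of a business action and carries everything relevant to that action
public record OrderPlacedEvent(Guid OrderId, Guid CustomerId, decimal Total, DateTime OccurredOn);

public static class OrderEventPublisher
{
    public static async Task PublishAsync(OrderPlacedEvent orderPlaced)
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Publish and move on: consuming services decide for themselves how to operate on the event
        await producer.ProduceAsync("orders.placed", new Message<string, string>
        {
            Key = orderPlaced.OrderId.ToString(),
            Value = JsonSerializer.Serialize(orderPlaced)
        });
    }
}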
Event-Driven Messaging comes in two flavors - choreography and orchestration.
What is the choreography pattern?
Choreography is pretty much what it sounds like! Each dancer in a ballet troupe knows their position and performs their routine based on musical cues. Choreographed microservices behave in the same way: each service (dancer) is aware of its place in the business flow and acts on certain cues (events).
Let's look at a simplified example of an order processing flow. The customer completes checking out their cart, and the following steps need to happen next.
#event-driven #microservices #microservice-architecture #scaling