The Theory and Motive Behind Active/Active Multi-Region Architectures


The theory and motive behind active/active multi-region architectures: how cloud applications maintain high availability and build resilient software systems.

The date was 24 December 2012, Christmas Eve. The world’s largest video streaming service, Netflix, experienced one of the worst incidents in company history: an outage of video playback on TV devices for customers in Canada, the United States, and Latin America. Fortunately, the enduring efforts of responders at Netflix and at AWS, where disruptions to the Amazon Elastic Load Balancer service had caused the incident, managed to restore service just in time for Christmas. Thinking back on the events that unfolded at Netflix and AWS that day, they are comparable to all those save-Christmas movies we love to watch around that time of year.

The discipline of incident management stems from a simple fact: incidents will happen. This is no secret, and it was best immortalized by Amazon VP and CTO Werner Vogels when he said, “Everything fails all the time.” It is therefore understood that things will break, but the question that persists is: can we do anything to mitigate the impact of these inevitable incidents? The answer, of course, is yes.

The incident that Netflix incurred was not a ‘sudden’ wake-up call to build more resilient systems. In fact, Netflix was already aware of the risks in its system and was experimenting with a more resilient architecture, termed Isthmus, albeit for a different purpose. The incident did, however, highlight the importance of such work and brought forward the need for an active/active multi-region architecture.

The concept of active/active architectures is not a new one; it can be traced back to the 1970s, when digital database systems were first being introduced to the public sphere. Now, as cloud vendors roll out new services, one of the complexities they abstract away for users is the setup of such a system. After all, one of the major promises of moving to the cloud is the abstraction of exactly these kinds of complexities, along with the promise of reliability. Today, an effective active/active multi-region architecture can be built on almost any cloud vendor out there.

Considering the ability and maturity of cloud services in the market today, this article will not act as a tutorial on how to build the intended architecture. There are already various workshop guides and talks on the matter. In fact, one of the champions of resilient and highly available cloud architectures, Adrian Hornsby, Principal Technical Evangelist at AWS, has a great series of blogs guiding the reader through active/active multi-region architectures on AWS.

However, what is missing, or at least what has been lost, is the theory and a clear understanding of the motive behind implementing such an architecture. With cloud services abstracting most of the complexity away, it is easy to write off the ‘how’ and ‘why’ of such architectures as magic under the hood. Therefore, this article aims to present the distributed systems knowledge behind active/active multi-region systems and to show how a basic understanding of the concept can empower us to build it on any cloud vendor, taking that vendor’s services into consideration.

Back to the Basics

Things fail. Yes, this is established, but it is not a fact one need submit to on the journey to building great products. The main driver of failure, viewed from a broad perspective, is the need to scale services combined with the increasing velocity of their development. The probability of failure grows as both scale and velocity grow, a phenomenon observed in the field and validated in the academic literature.

The active/active architectural concept, which, as mentioned, Netflix and others have turned towards, provides measures to mitigate the consequences of inevitable failure. It must be noted that it does not reduce the probability of failure but rather its impact, and this is the defining point of the concept on which the practice is built. The concept accepts the premise that failures are inevitable, and so aims to tackle downtime instead.

The goal here is for the Mean Time To Resolve (MTTR) to be low enough that it does not meaningfully affect the consumer’s perception of the service’s availability. Hence, resolution should not be measured from the point of view of the impacted service but rather from the consumer’s point of view: low MTTR values translate into increased availability of the service as perceived by the user.
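The link between MTTR and perceived availability can be made concrete with the standard steady-state availability model, Availability = MTBF / (MTBF + MTTR). A minimal sketch (the figures below are illustrative, not from the Netflix incident):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability as a fraction, using the standard
    MTBF / (MTBF + MTTR) model."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A service failing roughly once a month (~720 h) with a 6-hour manual recovery:
slow = availability(720, 6)            # ~0.9917, about "two nines"

# The same failure rate, but with a 30-second automated regional failover:
fast = availability(720, 30 / 3600)    # ~0.99999, approaching "five nines"
```

Note that the failure rate (MTBF) is identical in both cases; only the recovery time changes. This is exactly the active/active proposition: shrink MTTR rather than chase an impossible zero failure rate.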

The way in which active/active architectures achieve this, in a nutshell, is by continuously being aware of the available service resources and routing user traffic accordingly. When one of the resources or services experiences an incident, the overall architecture should be built such that customer requests get serviced by other available resources or services. This is, of course, a very high-level description of what the active/active concept entails. When diving deeper into how to actually execute this idea, we come across concepts such as redundancy, replication, statelessness, and eventual consistency. Nevertheless, the problem that the designers of an active/active architecture have to wrestle with can be divided into two points:

  • How to route customer traffic to available services or resources while staying aware of disrupted ones.
  • How to ensure that every available service or resource is consistent with the others, so that customers do not face discrepancies when suddenly being served by another service or resource.
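The first of these two points, health-aware routing with failover, can be sketched in a few lines. This is a toy model, not any vendor's implementation; the node names are hypothetical AWS-style region labels:

```python
class Node:
    """A serving node (compute service, datastore, etc.) in one region."""
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"


class Router:
    """Routes each request to the first node passing a health check,
    mimicking DNS/load-balancer failover at a very small scale."""
    def __init__(self, nodes):
        self.nodes = nodes

    def route(self, request: str) -> str:
        for node in self.nodes:
            if node.healthy:            # awareness of disrupted nodes
                return node.handle(request)
        raise RuntimeError("no healthy nodes available")


us_east = Node("us-east-1")
eu_west = Node("eu-west-1")
router = Router([us_east, eu_west])

print(router.route("GET /play"))   # served by us-east-1
us_east.healthy = False            # simulate a regional outage
print(router.route("GET /play"))   # traffic fails over to eu-west-1
```

Real systems replace the boolean `healthy` flag with continuous health checks and DNS or anycast routing, but the control flow is the same: detect, then redirect.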

From here on out, these various services or resources will be termed nodes. That is because, theoretically, everything can fail. Not only compute services but also resources such as datastores, event buses, and routing services can all fail, and each should be replaceable with copies or other like services, softening the blow of the failure. Hence, for simplicity, we shall refer to all of these components as nodes.

Redundancy and Request Routing

Before addressing the traffic routing problem, let us revisit the idea of redundancy. In the world of distributed systems, redundancy can be termed as the existence of services or resources, nodes as we would like to term them, that are not strictly necessary for achieving the business logic and functioning of the system. This may seem counterintuitive and against the intrinsic principles of software such as DRY, but is critical to the idea of active/active systems. This is because, with these redundant nodes, the overall system becomes more resilient to outages, from the customer perspective. Whenever one node is down, another is brought in for service. So yes, the overall system is littered with redundant resources, but what is achieved is the much sought-after resiliency of the system with increased fault-tolerance.

As a result, the theory is that customer traffic comes into a redundant network of nodes, where each node has access to a datastore. Theoretically, any user can be connected to any processing node. Of course, in the practical world there are issues such as GDPR compliance and application localization, so clusters of processing nodes for cloud applications are found in specific regions; nevertheless, the concept is the same. Two or more processing nodes are available, many of them redundant, acting as standby for when the primary node experiences disruptions.

Now the question is how to be aware of node outages and route traffic accordingly. There are several methods to solve the issue, and some of the more popular ones work by rerouting the request automatically when a disrupted node has been hit. The fundamental question of how to reroute to another available node is an intriguing area of research in academic computer science.

Both industry-backed R&D and academic institutions are continuously exploring more advanced and optimal rerouting algorithms. The field has become even more interesting as machine learning has been added to the mix in recent years. For example, the telecommunications giant Ericsson has been exploring graph machine learning in distributed networks, an innovation that also has applications in optimally routing traffic to the next healthy node when a disrupted one is hit.

In the industry, cloud vendors such as AWS offer their own services to perform this routing, such as Amazon Route 53. Netflix employs these services but has also built its own ancillary technology, one example being Zuul: although Zuul is primarily an edge gateway, Netflix has added further capabilities to it to aid with routing traffic in its active/active architecture.

Overall, the idea of routing traffic is well understood, and advancements in how best to route traffic continue to be made. However, this is only half of the work. The second part is to ensure that all nodes in this redundant network are practically the same. After all, for the customer they must be, as the experience of using the platform must go unhindered when the primary serving node fails. Hence the importance of stateless compute services and data replication; both of these notions must be enforced effectively to ensure synchronization across the network.

Synchronizing in Active/Active

As mentioned, when pursuing synchrony across an active/active architecture we must be mindful of both stateless nodes and data replication across the available data stores. The former is a much easier notion to tackle, while the latter runs into a well-defined conceptual barrier.

Stateless can be defined as a service being able to handle an incoming request without any awareness of previous requests. It is evident why this is a crucial building block for an active/active architecture: any node apart from the primary servicing node may begin receiving requests the moment the primary node fails. The idea of stateless services can be applied to datastores as well as compute services.
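One common way to achieve this is to carry all session context inside the request itself, so that any replica can serve any request. A minimal sketch of the idea (the token here is unsigned base64 for brevity; a real system would use a signed cookie or JWT, and the `cart` example is purely illustrative):

```python
import base64
import json


def make_token(user_id: str, cart: list) -> str:
    """Pack session context into a token that travels with each request."""
    payload = {"user": user_id, "cart": cart}
    return base64.b64encode(json.dumps(payload).encode()).decode()


def handle_request(token: str, item: str) -> str:
    """Stateless handler: every bit of state it needs arrives in the request,
    so this function can run on any node in any region."""
    state = json.loads(base64.b64decode(token))
    state["cart"].append(item)
    return make_token(state["user"], state["cart"])


# Successive requests can hit different nodes; nothing lives in node memory:
t = make_token("alice", [])
t = handle_request(t, "movie-1")   # handled by node A
t = handle_request(t, "movie-2")   # handled by node B after a failover
assert json.loads(base64.b64decode(t))["cart"] == ["movie-1", "movie-2"]
```

Because the handler holds no memory between calls, a failover mid-session is invisible to the user: the replacement node reconstructs everything it needs from the incoming token.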

When considering stateless architectures with stateless compute services, this is where serverless offerings can be leveraged. Serverless does not always mean stateless, but it does promote the notion: serverless compute services such as AWS Lambda do not retain state when their instances are torn down, so when utilizing them we must always be mindful of their stateless nature. This idea is not a new one, and there have been many explorations of how serverless can be used for stateless architectures.

Now, when thinking of statelessness in data stores, this is something that cannot always be achieved. Statelessness only goes so far; eventually, when considering the actual data within the system, there must be some form of synchronization among the various data stores in the active/active architecture. This is where the notion of data replication kicks in, but there is an inherent barrier we need to address, and that is the CAP theorem.
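To make the replication problem concrete, here is a toy sketch of one common approach to converging replicas: last-write-wins (LWW) timestamps with periodic anti-entropy merges. This is one of several conflict-resolution strategies (not the one the source prescribes), and the region names and keys are purely illustrative:

```python
import time


class Replica:
    """Toy key-value replica using last-write-wins timestamps, one common
    way geographically separated stores converge eventually."""
    def __init__(self, name: str):
        self.name = name
        self.data = {}  # key -> (timestamp, value)

    def write(self, key: str, value, ts=None):
        ts = time.time() if ts is None else ts
        current = self.data.get(key)
        if current is None or ts > current[0]:
            self.data[key] = (ts, value)  # newer write wins

    def merge_from(self, other: "Replica"):
        """Anti-entropy pass: absorb the other replica's entries."""
        for key, (ts, value) in other.data.items():
            self.write(key, value, ts)


us = Replica("us-east-1")
eu = Replica("eu-west-1")
us.write("profile:alice", {"plan": "basic"}, ts=1.0)
eu.write("profile:alice", {"plan": "premium"}, ts=2.0)  # later write

# The replicas diverge until an anti-entropy pass runs both ways:
us.merge_from(eu)
eu.merge_from(us)
assert us.data == eu.data  # eventually consistent
```

The catch, and the reason the CAP theorem matters here, is the window before the merge: during a network partition, both regions keep accepting writes and can serve stale or conflicting answers, trading consistency for availability.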
