1600034400
The software industry has come a long way, and throughout this journey software architecture has evolved considerably. 1-tier (single-node), 2-tier (client/server), 3-tier, and distributed architectures are some of the patterns we have seen along the way.
The majority of software companies are moving from monolithic architecture to microservices architecture, and microservices is taking over the software industry day by day. While monolithic architecture has many benefits, it also has serious shortcomings when catering to modern software development needs, and with those shortcomings it is very difficult to meet modern software requirements. As a result, microservices architecture is rapidly taking over software development: it enables us to deploy our applications more frequently, independently, and reliably, meeting modern-day application development requirements.
#microservice architecture #istio #microservice best practices #linkerd #microservice communication #microservice design #envoy proxy #kubernetes architecture #api gateways #service mesh architecture
1598169240
Over the last 10 years, the rapid adoption of microservices architecture has left enterprises with hundreds (or sometimes even thousands) of services. With the growth of containerization technologies like Docker and Kubernetes, microservice patterns have seen their strongest growth, resulting in a complex dependency matrix between these microservices. Monitoring, supporting, and maintaining these services is becoming a challenge, so most enterprises have invested in some kind of microservices management tool.
This article will explore some of the common aspects of microservice management. Then we’ll take a closer look at the centralized gateway pattern and its limitations (most enterprises started with, or still use, this pattern). Finally, we will look at a newer pattern called the “Service Mesh”, which has gained a lot of attention in the last 3–4 years and is often also referred to as the “Sidecar Proxy” pattern. So let’s get started!
As enterprises build more and more microservices, it’s becoming clear that some aspects are common across all of them, so it makes sense to provide a common platform for managing these shared concerns. Below are some of the key common aspects:
**Service Registration and Discovery:** A common place to register, document, search, and discover microservices.
**Service Version Management:** Ability to run multiple versions of a microservice.
**Authentication and Authorization:** Handle authentication and authorization, including mutual TLS (mTLS) between services.
**Service Observability:** Ability to monitor end-to-end traffic between services and response times, and to quickly identify failures and bottlenecks.
**Rate Limiting:** Define threshold limits on the traffic services can handle.
**Circuit Breaker:** Ability to configure and introduce a circuit breaker in failure scenarios (to avoid flooding downstream services with requests).
**Retry Logic:** Ability to configure and introduce retry logic dynamically in services.
So it’s a good idea to build these concerns into a common framework or service management tool, so that microservice development teams don’t have to build these aspects into each service itself. For instance, in a service mesh these policies can be declared in configuration rather than code, as sketched below.
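As a rough illustration (not from the original article), here is how retries and circuit breaking might be declared in Istio configuration instead of being coded into the service; the service name orders and the specific thresholds are assumptions for the sketch:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
      retries:
        attempts: 3                  # retry a failed call up to 3 times
        perTryTimeout: 2s            # each attempt gets 2 seconds
        retryOn: 5xx,connect-failure
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  trafficPolicy:
    outlierDetection:                # circuit breaking: eject misbehaving endpoints
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50

With rules like these, retry and circuit-breaker behaviour lives in the mesh layer and can be tuned per service without rebuilding or redeploying the service code.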
#service-mesh #istio-service-mesh #microservices #gateway-service #envoy-proxy
1593220572
Before deploying the service into Istio, let’s first understand what a service mesh is.
A service mesh is a dedicated infrastructure layer for handling service-to-service communication.
Basically, it is a way to control how the different microservices deployed on Kubernetes communicate with each other: it manages secure communication and traffic between them and takes care of cross-cutting concerns like logging, security, and so on.
The Istio service mesh comes with a lot of features, but we will not cover them here. Let’s jump to how we can deploy a service on it; we can categorize the deployment process into three phases.
To download the latest version, refer to the Istio release page. Download the tar.gz file and unzip it; in the extracted directory we will find the istioctl client, which we will use for the installation.
Now add the istioctl client to your machine’s PATH. For installation we need to choose a configuration profile; Istio ships with a set of configuration profiles, and we are going to use the demo profile, which enables a representative set of components with default settings.
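For example, on Linux or macOS the download and PATH setup might look like the following; the istio.io download script is the standard one, while <version> stands in for whatever release you pick:

curl -L https://istio.io/downloadIstio | sh -
cd istio-<version>              # the script extracts into a version-named directory
export PATH=$PWD/bin:$PATH      # make the istioctl client available on the PATH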
Use the following command to install the demo configuration profile:
istioctl install --set profile=demo
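Optionally, before moving on, you can confirm the control plane pods are running (the demo profile installs them into the istio-system namespace):

kubectl get pods -n istio-system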
Istio automatically injects Envoy sidecar proxies using a mutating webhook admission controller when we deploy services into a labelled namespace. To enable this feature, we need to add the istio-injection label to the namespace where we will deploy the application:
kubectl label namespace default istio-injection=enabled
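To double-check, you can list the namespace with its injection label shown as a column:

kubectl get namespace default -L istio-injection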
Now let’s deploy the sample application by applying the following YAML file.
apiVersion: v1
kind: Service
metadata:
  name: sample
  namespace: default
  labels:
    app: sample
spec:
  selector:
    app: sample
  ports:
    - name: http
      port: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
      version: 'v1'
  template:
    metadata:
      labels:
        app: sample
        version: 'v1'
    spec:
      initContainers:
        - name: init-ds
          image: busybox:latest
          command:
            - '/bin/sh'
            - '-c'
            - |
              while true
              do
                # A DB reachability probe (for example `nc -z <db-host> <db-port>`)
                # is assumed to run here so that $? below reflects its result.
                if [ $? -eq 0 ]; then
                  echo "DB is UP"
                  break
                fi
                echo "DB is not yet reachable; sleep for 10s before retry"
                sleep 10
              done
      containers:
        - name: sample-app
          image: lokesh/bundle123:latest
          imagePullPolicy: Always
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: 'x-request-id,x-ot-span-context'
            - name: JAVA_OPTS
              value: ' -Xmx256m -Xms256m'
          resources:
            requests:
              memory: '256Mi'
              cpu: '50m'
            limits:
              memory: '512Mi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sample
spec:
  hosts:
    - "*"
  gateways:
    - sample-gateway
  http:
    - match:
        - uri:
            exact: /getStudents
        - uri:
            exact: /accounts/create
        - uri:
            exact: /istio/auth
        - uri:
            prefix: /getTeacher
      route:
        - destination:
            host: sample
            port:
              number: 8081
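Note that the VirtualService above binds to a gateway named sample-gateway, which is not shown in this excerpt. A minimal sketch of what that Gateway resource could look like (assuming Istio’s default istio-ingressgateway and plain HTTP on port 80) is:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sample-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway    # route through Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

With the manifests saved in a single file (say, sample.yaml), the whole stack can be applied with kubectl apply -f sample.yaml; once sidecar injection is working, kubectl get pods shows the application pod with 2/2 containers, the second being the injected Envoy proxy.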
#devops #microservices #scala #tech blogs #deploy microservice #istio #service mesh
1595350140
When an application is broken down into multiple smaller service components, those components are known as microservices. Compared to the traditional monolithic approach, a microservice architecture treats each microservice as a standalone entity/module, which eases the maintenance of its code and related infrastructure. Each microservice of an application can be written in a different technology stack, and can be deployed, optimized, and managed independently.
Though in theory a microservice architecture particularly benefits the build of complex, large-scale applications, it is also widely used for small-scale builds (for example, a simple shopping cart), with an eye to scaling further.
A modern cloud-native application running on a microservice architecture relies on a few critical components. These components are what allow applications in a cloud-native stack to scale under load and keep performing even during partial failures of the cloud environment.
When a large application is broken down into multiple microservices, each possibly using a different technology stack (language, database, etc.) and requiring its own environment, the result is a complex architecture to manage. Docker containerization helps to manage and deploy individual microservices by running each as a set of processes in separate containers, but inter-service communication remains critically complicated: you have to deal with overall system health, fault tolerance, and multiple points of failure.
Let us understand this through how a shopping cart works on a microservice architecture. The microservices here would include the inventory database, the payment gateway service, the product suggestion algorithm based on the customer’s browsing history, and so on. While all of these services are stand-alone mini-modules in theory, they do need to interact with each other; in fact, service-to-service communication is what makes microservices possible.
Now that you know the importance of service-to-service communication in a microservice architecture, it is clearly essential that the communication channel stay fault-tolerant, secure, highly available, and robust. This is where a service mesh comes in as an infrastructure component: it ensures controlled service-to-service communication by deploying multiple service proxies. A service mesh is responsible for fine-tuning communication among different services rather than adding new functionality.
In a service mesh, the pattern of deploying a proxy alongside each individual service to enable inter-service communication is widely known as the Sidecar Pattern. The sidecars (proxies) may handle any functionality critical to inter-service communication, such as load balancing, circuit breaking, and service discovery.
Through a service mesh, you can fine-tune how your services communicate without changing the services themselves, for example by shifting a share of traffic to a new version of a service, as sketched below.
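As a rough illustration (not from the original article), here is how an Istio-based mesh could split traffic between two versions of a hypothetical service named cart; the subsets v1 and v2 and the 90/10 split are assumptions for the sketch:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cart
spec:
  host: cart
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cart
spec:
  hosts:
    - cart
  http:
    - route:
        - destination:
            host: cart
            subset: v1
          weight: 90             # keep most traffic on the stable version
        - destination:
            host: cart
            subset: v2
          weight: 10             # trial the new version with a small share

Because the routing rule lives in the mesh, the split can be adjusted at any time without touching the cart service itself.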
#serverless #microservice architecture #cloud native #istio #service mesh