The introduction of cloud-native applications and containers has changed how people structure and provision applications. Deploying and configuring physical servers at every edge location, then trying to balance the load across them, is simply not feasible for dynamic workloads and hybrid infrastructures. Microservices address this problem.
In addition to supporting dynamic scalability, the move from monolithic applications to microservices avoids the bottleneck of a central database and supports portability and code reuse. It also makes automation, integration into CI/CD pipelines, observability, and high availability easier, although securing and managing microservices introduces new challenges of its own.
Microservices are typically run on a container orchestration platform such as Kubernetes, which manages containers as a unified, scalable distributed system for running applications.
Google, Twitter, Netflix, Amazon, and other leading SaaS and cloud companies pioneered this approach and lead the movement to re-architect monolithic applications as microservices, turning it into a new standard.
What microservices bring to the table is ease of application delivery. When an application is split into small, self-contained chunks running on immutable infrastructure, development becomes much simpler. Tight coupling between components is reduced. Different chunks can be written in different languages. Updating an interface or an external API no longer requires touching the entire monolithic app. There is also code reuse: a microservice that works well can deliver its service across multiple Kubernetes pods and multiple applications.
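To make the "small, contained chunks" idea concrete, here is a minimal sketch (not from the article) of a single-responsibility service exposing one narrow HTTP API using only the Python standard library. The "pricing" service name, route, and data are hypothetical; the point is that other services, possibly written in other languages, depend only on this HTTP contract, not on shared code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical single-purpose "pricing" microservice: it owns one concern
# and exposes it over a narrow HTTP API. Consumers depend on the contract
# (GET /<plan> -> JSON), not on this service's internals.
PRICES = {"basic": 10, "pro": 25}

class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        plan = self.path.strip("/")  # e.g. GET /pro
        if plan in PRICES:
            body = json.dumps({"plan": plan, "price": PRICES[plan]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown plan"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=8080):
    """Run the service; in practice this would live in its own container image."""
    HTTPServer(("127.0.0.1", port), PricingHandler).serve_forever()
```

Because the service owns only one concern, it can be packaged in its own container image and replicated across many Kubernetes pods, or reused by several applications, without touching anything else.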
There are also a number of additional benefits to this architecture: