Microservices vs. Monolith Architecture

Should we start with a monolith or microservices, and how do we decide? That's the question we'll tackle today.

Originally published by Alex Barashkov at https://dev.to

The evolution of technologies has changed the way we build the architecture of applications. Docker, cloud services, and container orchestration services brought us the ability to develop distributed, more scalable, and more reliable solutions. In this article, we will compare microservices and monolith architecture, discuss which type of architecture suits which teams and projects, and explore their advantages and disadvantages.

At a glance, the difference between the two comes down to this: a monolith packages all functionality into a single deployable application, while microservices split it across many small, independently deployable services.

It is not strictly true that monolith apps are always simple, but a microservices setup is often an order of magnitude larger in terms of moving parts and almost always requires more resources.

Let’s discuss the pros and cons of each, point by point.

Deployment

Monolith apps allow you to set up your deployment once and then simply adjust it as the project changes. At the same time, however, deployment is a single point of failure: if something goes wrong, you could break your entire project.

Microservices require much more work; you will need to deploy each microservice independently, worry about orchestration tools, and try to unify the format of your CI/CD pipelines to reduce the amount of time required to set them up for each new microservice. There is a bright side, however; if something goes wrong, you will only break one small microservice, which is less problematic than breaking the entire project. It’s also much easier to roll back one small microservice than an entire monolith app.

Maintenance

If you plan to use a microservices architecture, get a DevOps engineer on your team and prepare yourself. Not every developer will be familiar with Docker or orchestration tools, such as Kubernetes, Docker Swarm, Mesosphere, or any similar tool that helps you manage infrastructure with a lot of moving parts. Someone has to monitor and maintain the CI configuration for each microservice and the whole infrastructure.

Reliability

Microservices architecture is the obvious winner here. Breaking one microservice affects only one part and causes issues for the clients that use it, but no one else. If, for example, you’re building a banking app and the microservice responsible for money withdrawal is down, this is definitely less serious than the whole app being forced to stop.

Scalability

For scalability, microservices are again better suited. Monolith apps are hard to scale because, even if you run more workers, every worker runs the entire project, which is an inefficient use of resources. Worse, you may write your code in a way that makes horizontal scaling impossible, leaving only vertical scaling for your monolith app. With microservices, this is much easier: resources can be allocated more carefully, and you can scale only the parts that require more resources.

Cost

Cost is tricky to calculate because monolith architecture is cheaper in some scenarios, but not in others. For example, with the better scalability of microservices, you could set up auto-scaling and pay for extra capacity only when the volume of users really requires it. At the same time, to keep that infrastructure up and running, you need a DevOps engineer, who needs to be paid. A small monolith app could run on a $5-$20 host with snapshots enabled. A larger monolith app may need a very expensive instance, because it can’t be spread over multiple small, cheap hosts.

Development

In one of our projects, we have 16 microservices, and I can tell you from experience that this can be tricky to deal with. The best way to cope with microservices is to build your docker-compose file from the beginning and develop through Docker. This reduces the time spent onboarding new people: they can simply bring the system up from scratch and launch all microservices as needed. Opening 10+ terminal windows and executing commands to start each individual service is a pain.
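
To make that concrete, here is a minimal sketch of what such a docker-compose file might look like; the service names, images, and ports are hypothetical and not taken from our actual project:

version: "3.8"
services:
  api-gateway:            # hypothetical entry-point service
    build: ./api-gateway
    ports:
      - "8080:8080"
    depends_on:
      - users
  users:                  # hypothetical microservice with its own database
    build: ./users
    environment:
      - DB_HOST=users-db
  users-db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=example

With something like this in place, a single docker-compose up command starts the whole system.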

On the other hand, when you develop one microservice, you may not need to run other parts of the application at all. That means fewer git conflicts, thanks to a better breakdown of tasks and the ability to isolate developers across microservices.

Doing code review and QA is simpler with microservices; you may even be able to write microservices in different languages.

Releasing

Smaller microservices with a well-designed communication architecture allow you to release new features faster by reducing QA time, build time, and test execution time. Monolith apps have a lot of internal dependencies that cannot be broken up. There is also a higher risk that something you have committed depends on unfinished changes from your team members, which could potentially postpone releases.

Which architecture is better for you?

Use monolith architecture if you:

  • have a small team.
  • build the MVP version of a new product.
  • did not get millions in investment to hire DevOps engineers or spend extra time on complex architecture.
  • have experience developing on solid frameworks, such as Ruby on Rails, Laravel, etc.
  • don’t see performance bottlenecks for some key functionality.
  • are tempted by microservices only because they are cool and trendy.

Keep in mind that if you later find that your project really needs microservices, a monolith can always be broken down into these smaller services.

Use microservices architecture if you:

  • don’t have a tight deadline; microservices require research and architecture planning to ensure they work.
  • have a team with knowledge of different languages.
  • worry a lot about the scalability and reliability of your product.
  • potentially have a few development departments (maybe even in different countries/time zones).
  • have an existing monolith app and see problems with parts of your application that could be split across multiple microservices.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Further reading about Microservices

An Introduction to Microservices

What is Microservices?

Build Spring Microservices and Dockerize Them for Production

Best Java Microservices Interview Questions In 2019

Build a microservices architecture with Spring Boot and Spring Cloud

Design patterns for microservices

Kotlin Microservices With Micronaut, Spring Cloud, and JPA

Secure Service-to-Service Spring Microservices with HTTPS and OAuth 2.0

Build Secure Microservices with AWS Lambda and ASP.NET Core

How to build a basic Serverless Application using AWS Lambda

In this post, I’m going to show you how easy it is to build a basic Serverless Application using a few AWS Services (AWS Lambda)

One of the architectural patterns I’ve championed is Serverless Architecture. In short, a serverless application utilizes an array of cloud applications and services to perform logic that would normally be constrained to a single application hosted on a constantly running server.

Serverless itself has many benefits. It typically has reduced operational cost, as you are only paying for an application to run when it needs to. It also typically has reduced development cost, as many things you’d typically have to worry about with a legacy application, such as OS and System configurations, are handled for you by most cloud providers.

In this post, I’m going to show you how easy it is to build a basic serverless application using a few AWS services.

AWS Lambda

AWS Lambda is a service that allows you to run functions when they need to run, and only pay for them when they do. AWS allows you to upload an executable in a variety of languages, and handles spinning up and tearing down containers for you as it needs them. In this blog, I’ll be using Golang, also called Go, a language built at Google with scalability and concurrency in mind. Some of the other languages supported out of the box by AWS Lambda include Python, Node.js, and Java, to name a few. Some others, such as C++, can be used if you supply a custom runtime.

Building a Lambda Function

Let’s say we want to build a really simple API that returns information about cars. It’s not something that’s going to be used very often, and it doesn’t have to perform any complex logic. It’s just retrieving data and sending it to a client. For brevity, we’re going to avoid having a database of any kind and pretend that we magically have all of this information in memory.

In Go, there are no classes. Instead, objects are represented by structs, similar to C. Let’s build a quick struct to represent a car:

type Car struct {
  Model string `json:"model"`
  Color string `json:"color"`
  Year int `json:"year"`
}

As you can see, we have a Car struct with three fields: Model, Color, and Year. The fields are capitalized so that they are exported, which is similar to public fields in some other languages. They are annotated with json struct tags that tell the program how to serialize the struct to JSON and deserialize JSON into an instance of it. More on that later.

Okay, so we have our struct now. Let’s say we want to build an HTTP endpoint that returns one of my favorite cars, a red 1999 Chevrolet Corvette. The response could look something like this:

GET /myfavoritecar
{   "model": "Corvette",   "color": "red",   "year": 1999}

AWS Lambda programs need to conform to a specific format which depends on the language they’re written in. All of them will have a handler function of some kind, which will usually include two parameters: a context and an input, the latter of which is usually some kind of JSON payload. In our example, we’re going to have our lambda function be triggered by requests to an application load balancer, which requires a specific contract be followed. More on that can be read here.

After following the AWS documentation, structs to represent an input to our Lambda function and its output will look something like this:

type LambdaPayload struct {
   RequestContext struct {
      Elb struct {
         TargetGroupArn string `json:"targetGroupArn"`
      } `json:"elb"`
   } `json:"requestContext"`
   HTTPMethod            string            `json:"httpMethod"`
   Path                  string            `json:"path"`
   Headers               map[string]string `json:"headers"`
   QueryStringParameters map[string]string `json:"queryStringParameters"`
   Body                  string            `json:"body"`
   IsBase64Encoded       bool              `json:"isBase64Encoded"`
}


type LambdaResponse struct {
   IsBase64Encoded   bool   `json:"isBase64Encoded"`
   StatusCode        int    `json:"statusCode"`
   StatusDescription string `json:"statusDescription"`
   Headers           struct {
      SetCookie   string `json:"Set-cookie"`
      ContentType string `json:"Content-Type"`
   } `json:"headers"`
   Body string `json:"body"`
}

There are fields that represent some meta info about our requests, such as Request Methods and response codes, as well as fields for request paths (in our case, /myfavoritecar), maps for headers and query parameters, the bodies of the requests and responses, and whether or not either are base 64 encoded. We can assume our lambda will automatically conform to this contract, as AWS will ensure that, and start writing our function!

When all is said and done, our function will look something like this:

func lambda_handler(ctx context.Context, payload LambdaPayload) (LambdaResponse, error) {
  // Default to a 400 Bad Request; we only overwrite this for the one route we serve.
  response := &LambdaResponse{}
  response.Headers.ContentType = "text/html"
  response.StatusCode = http.StatusBadRequest
  response.StatusDescription = http.StatusText(http.StatusBadRequest)
  if payload.HTTPMethod == http.MethodGet && payload.Path == "/myfavoritecar" {
    car := &Car{}
    car.Model = "Corvette"
    car.Color = "red"
    car.Year = 1999
    // Marshal the struct into the JSON string that becomes the response body.
    res, err := json.Marshal(car)
    if err != nil {
      fmt.Println(err)
      response.StatusCode = http.StatusInternalServerError
      response.StatusDescription = http.StatusText(http.StatusInternalServerError)
      return *response, err
    }
    response.Headers.ContentType = "application/json"
    response.Body = string(res)
    response.StatusCode = http.StatusOK
    response.StatusDescription = http.StatusText(http.StatusOK)
    return *response, nil
  } else {
    return *response, nil
  }
}

As you can see, we check that the request is a GET request and that we are looking for my favorite car by checking the path of the request. Otherwise, we return a Bad Request response. If we are serving the right request, we create a Car struct, assign the proper values, and marshal it as JSON (convert the struct to a JSON string) using Go's (great) built-in JSON library. We create an instance of the Lambda response struct in either case, populate the relevant fields (and the content type where applicable), and return it. Now, we just need a main function that specifies the above should run, and we're done:

func main() {
  lambda.Start(lambda_handler)
}
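
For completeness, the handler and main function above rely on a handful of imports, including the lambda package from the official aws-lambda-go module:

import (
  "context"
  "encoding/json"
  "fmt"
  "net/http"

  "github.com/aws/aws-lambda-go/lambda"
)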

That’s it! Now let’s deploy it to AWS and see how it runs.

Deploying the Lambda to AWS

The first thing we need to do is create a function package. This differs from language to language; for Golang it is quite simple. Here’s a sample bash script for doing so:

env GOOS=linux GOARCH=amd64 go build -o /tmp/lambda golang/*.go
zip -j /tmp/lambda /tmp/lambda
aws lambda update-function-code --function-name car-lambda --region us-east-1 --zip-file fileb:///tmp/lambda.zip

As you can see, we use the Go toolchain to build an executable for the amd64 architecture and Linux as the OS (as Go executables differ depending on where they are meant to run); for AWS Lambda, we need to target that platform. The build looks in a folder named golang for any .go files and compiles them into an executable. Then, we zip it and call the AWS CLI to update the function code of our car-lambda function, which we haven’t created yet. Let’s do that now.

We can create our Lambda by going to the AWS Lambda page in the AWS Console, and clicking Create Function. We can name it car-lambda, and choose Go 1.x as our runtime. For execution role, we can choose the “Create a new role with basic Lambda permissions” option. That’s it! Now, we can run our script (assuming we have the AWS and Go CLI installed) and see output similar to the following:

{
  "FunctionName": "car-lambda",
  "FunctionArn": "arn:aws:lambda:us-east-1: YOUR_ACCOUNT_ID:function:car-lambda",
  "Runtime": "go1.x",
  "Role": "arn:aws:iam::YOUR_ACCOUNT_ID:role/service-role/car-lambda-role-w4cmi5fv",
  "Handler": "hello",
  "CodeSize": 4887939,
  "Description": "",
  "Timeout": 15,
  "MemorySize": 512,
  "LastModified": "2019–09–20T19:46:19.393+0000",
  "CodeSha256": "6h9d94EOfBj6uArZZKkDalbat7+xtlC83rOR+vBEaws=",
  "Version": "$LATEST",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "7d4cbfd8–2174–4a85-a9e0–043016f0a1d0"
}

We’re almost done deploying our lambda! However, you’ll see that the “handler” is set to “hello”. This is an AWS default. We need to change it to “lambda” (as that’s what I called my lambda program). You can do so by going to the Lambda’s page on AWS and scrolling down to the “handler” text field. You can change it to whatever you named your lambda program if you named it something different.
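
If you prefer not to click through the console, the same change should be possible from the CLI with something along these lines:

aws lambda update-function-configuration --function-name car-lambda --handler lambda --region us-east-1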

All right. NOW we’re done. Phew.

The next thing we need to do on AWS is create an ALB (Application Load Balancer). We can do so on the EC2 page by clicking the Load Balancers link, followed by Create Load Balancer. On the following screen, we click “Create” on the ALB square, which brings up the load balancer configuration page; fill in the Name field and make the ALB internet-facing.

For security purposes, I won’t go into my own VPC setup here, but you’d want to choose a VPC and public subnets that are relevant to your AWS account. On the following screens, we need to set up a target group that routes traffic from the ALB to our Lambda function. Select “New target group” from the dropdown, and “Lambda” as the target type. I named it car-lambda-tg.

For Step 5, choose the Lambda we created earlier. Then create the load balancer and wait for AWS to finish provisioning it.

If your ALB is in a VPC, make sure it is reachable from whichever network you’re on. You can do so by configuring it with public subnets and attaching a security group to the ALB (go to the VPC page, and then click Security Groups) that allows traffic from specific (or all) IP addresses. Read more about security groups in the AWS documentation. Then, when you go to the ALB’s DNS name in your browser, you will (probably) see a Bad Request page.

This is good! It’s the Bad Request response we programmed earlier. Now, if we go to the /myfavoritecar endpoint, we get back the JSON for my favorite car.
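
If you’d rather test from a terminal than a browser, a quick curl against the ALB’s DNS name (a placeholder below) should return the same JSON:

curl http://car-lambda-alb-1234567890.us-east-1.elb.amazonaws.com/myfavoritecar
{"model":"Corvette","color":"red","year":1999}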

We did it! Our Lambda is now fully up and running on AWS. We have an API that can return responses, and we don’t have to worry about setting up and paying for an entire server and its pain points. We can now extend it as much as we want through other APIs, endpoints, a database, more Lambdas, and more. The possibilities are endless. I hope this blog post was helpful for you. As you can see, it’s super easy to get up and running with a serverless API.

Top Ten Cloud Tools From AWS

Count down the top ten from the number one cloud vendor in the world.

Amazon Web Services offers robust, secure, and easy-to-operate tools for databases, storage, running operations, and so on. Some of the popular global companies that use AWS are Netflix, Unilever, Airbnb, BMW, the Met Office, and others. We’ve put this list together to help you run your business more profitably.

Here are the criteria for adding the services to the AWS services overview list:

  • Each tool responds to current trends in the cloud market
  • It will still be in demand in 2020
  • It’s secure and robust
  • It’s cost-efficient

1. Simple Storage Service (S3)

S3 takes first place in the list of AWS services. It stores and protects any amount of data for a wide range of use cases: websites, mobile apps, backup and recovery, archiving, enterprise applications, IoT devices, and big data analytics.

S3 offers straightforward administration tools that let you organize data and fine-tune access restrictions to meet your commercial or legal requirements.

What kind of value you get with S3:

  • Painless increase and reduction of storage resources in accordance with current needs.
  • Data storage in various classes of S3 storage that provide different levels of access to information at the corresponding prices. You will get a significant cost reduction.
  • Protection of data from unauthorized access with encryption and access restriction tools.
  • Classification of your data, simple operational, and reporting tools.
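
As a quick illustration (not part of the original list), uploading a backup to a bucket is a one-liner with the AWS CLI; the bucket name here is just an example:

aws s3 cp ./backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz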

2. Elastic Compute Cloud (EC2)

Elastic Compute Cloud (EC2) is one of the most widely-used AWS services, providing secure, scalable computing assets in the cloud. It helps developers by simplifying cloud computing across the web.

EC2’s easy-to-use web interface lets you access and set up computing assets with almost no effort. It gives users full control over the resources.

What value you will get:

  • Amazon EC2 lets you increase or decrease computing power in minutes, not hours or days.
  • You have complete control over your instances, including root access and all the features available on any other machine.
  • Adaptable cloud hosting. The service allows you to choose different types of operating systems and software packages.

3. Lambda

AWS Lambda is the AWS service that lets you run code with no need to provision or operate servers. You pay only for the compute time used; when the code is not executing, no fee is charged.

With Lambda, you can run almost any kind of software or server services without the need for any administrative operations. All you need to do is to upload the code, and Lambda will provide all the necessary resources for its execution, scaling availability on demand.

You may like these features:

  • Lambda allows you to automatically run program codes without the need for server provisioning.
  • It automatically scales the program up or down by running the program code in response to each trigger.
  • When working with AWS Lambda, you pay for every 100 ms of program code execution and the number of triggers.

4. Glacier

Glacier is a safe, reliable, and highly cost-effective cloud storage solution for data backup and long-term archival storage.

The cost of the service is only $0.004 per month for storing a gigabyte of data. This is a great cost reduction when compared to local storage solutions.

You have three options for retrieving data, covering various use cases: Expedited retrieval takes 1-5 minutes, Standard takes 3-5 hours, and Bulk takes 5-12 hours.
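
For objects already archived in the Glacier storage class of S3, a restore can be requested with the AWS CLI roughly like this; the bucket and key are placeholders, and the tier maps to the options above:

aws s3api restore-object --bucket my-archive-bucket --key logs/2019.tar.gz --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'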

Here are some more benefits that you can get from Glacier:

  • Glacier offers enhanced integration with AWS CloudTrail for auditing, monitoring, and retaining storage API call data. Several encryption options are available.
  • Glacier is developed to be the most cost-effective object storage class.
  • There is a community around Amazon’s object storage services comprising thousands of consulting companies, system integrators, and independent software vendors.

5. Lex

Lex is one of the top AWS services, offering a scalable, secure, and easy-to-use end-to-end solution for creating, publishing, and monitoring bots. It combines automatic speech recognition with natural language understanding to build systems that understand speech.

Lex offers two kinds of requests:

  • Confirmation requests that allow you to confirm a specific action before it is executed
  • Error-handling requests that allow you to ask the user to re-enter something to clarify information

Lex, by default, supports integration with AWS Lambda to retrieve data, update, and execute business logic. One more thing that you may like is single-click multi-platform deployment.

6. Polly

Polly is Amazon's text-to-speech service with a significant number of available languages. Polly is accessible via an API, so you can add generated audio directly to your application.

You pay only for the number of symbols that you transcribe into voice. The book Harry Potter and the Sorcerer’s Stone contains about 385,000 characters and text-to-voice conversion could cost as little as $2. You can benefit from Polly if you use it for commercial or personal purposes.

Here are the key benefits:

  • Natural voice
  • Speech storage and distribution
  • Real-time streaming
  • Configuring and managing voice output
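
Trying Polly out is a single CLI call; the voice, text, and output file below are just example values:

aws polly synthesize-speech --output-format mp3 --voice-id Joanna --text "Hello from Polly" hello.mp3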

7. Simple Queue Service (SQS)

Simple Queue Service is a fully managed message queuing service that lets you decouple and scale microservices, distributed systems, and serverless programs.

SQS offers two types of message queues. Standard queues provide maximum throughput, best-effort ordering, and at-least-once delivery. FIFO queues, which have lower throughput, guarantee that messages are processed exactly once, in the exact order they are sent.
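
As a rough sketch (not from the original article), sending a message to a standard queue with the aws-sdk-go library for Go looks something like this; the region and queue URL are placeholders:

package main

import (
  "fmt"

  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
  // Open a session in the region where the queue lives.
  sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
  svc := sqs.New(sess)

  // Placeholder queue URL; use your own queue's URL here.
  out, err := svc.SendMessage(&sqs.SendMessageInput{
    QueueUrl:    aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"),
    MessageBody: aws.String("order #42 created"),
  })
  if err != nil {
    fmt.Println("send failed:", err)
    return
  }
  fmt.Println("sent message with ID:", *out.MessageId)
}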

8. Simple Notification Service (SNS)

Simple Notification Service is a highly reliable, secure, fully managed Pub/Sub messaging service. It can also decouple microservices, distributed systems, and serverless programs.

SNS is one of the most versatile AWS services. You can deliver messages to any operating system at any time of the day or night. You can use your own software that will add messages to the SNS, and it will send them to your subscribers. It’s fast and cost-efficient.

There are different pricing options, but you will pay about $2 per 100,000 email notifications, which is pretty cheap.

9. Internet of Things (IoT)

AWS IoT is one of the trendier AWS services, offering device software, control services, and data services. It allows you to connect devices safely, gather information, and act on that information locally, even if there is no Internet connection.

Control services let you supervise, manage, and protect a large and diverse fleet of devices. Data services help you capitalize on IoT data.

What you can get with IoT:

  • Device software to connect and operate edge devices
  • Control services for protecting, managing, and monitoring devices in the cloud
  • Data services for analyzing device data and acting on it

10. Athena

Athena is an interactive query service that simplifies the process of analyzing data in S3 using standard SQL. Athena is serverless, so there is no infrastructure to configure or operate, and you can start analyzing data right away. You don’t even need to load data into Athena, since the service works directly with data stored in S3.

What kind of benefits you get with Athena:

  • It’s very easy to get started with Athena Console
  • You can easily create queries using the standard SQL language
  • You pay per request
  • Integration with AWS Glue gives you optimized query performance and cost reductions

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Further reading

AWS Certified Solution Architect Associate

AWS Lambda vs. Azure Functions vs. Google Functions

Running TensorFlow on AWS Lambda using Serverless

Deploy Docker Containers With AWS CodePipeline

A Complete Guide on Deploying a Node app to AWS with Docker

Create and Deploy AWS and AWS Lambda using Serverless Framework

Introduction To AWS Lambda

AWS DevOps: Introduction to DevOps on AWS

AWS DevOps: Introduction to DevOps on AWS

This is the story of how DevOps met AWS, and how their union can benefit you.

Technology has evolved over time, and with it, the ways and needs to handle technology have also evolved. The last two decades have seen a great shift in computation and software development lifecycles. We have seen a huge demand for online DevOps training and AWS certification.

This blog focuses on the following points:

  1. What Is DevOps?
  2. What Is AWS?
  3. AWS DevOps

What Is DevOps?

In these fast-paced times, we see more emphasis being placed on faster delivery of software deployment. In order to stay competitive in the market, companies are expected to deploy quality software within defined timelines. Thus, the roles of software developers and system admins have become very important. A lot of juggling of responsibilities happens between the two teams. Let us take a look at how these individuals contribute to the deployment process.

A programmer or software developer is responsible for developing the software. In simple words, they are supposed to develop software that has:

  • New features
  • Security Upgrades
  • Bug Fixes

But a developer may have to wait for weeks for the product to get deployed, which is also known as “time to market” in business terms. This delay may put pressure on the developer, who is forced to re-adjust dependent activities like:

  • Pending code
  • Old code
  • New products
  • New features

When the product is put into the production environment, the product may show some unforeseen errors. This is because the developer writes code in the development environment, which may be different from the production environment.

Let us go ahead and take a look at this process from the operations point of view. The operations team, or system administration team, is responsible for maintaining and ensuring the uptime of the production environment. As the company invests time and money in more products and services, the number of servers admins have to take care of also keeps growing.

This gives rise to more challenges, because the tools that were used to manage the previous number of servers may not be sufficient for the growing number of servers. The operations team also needs to make slight changes to the code so that it fits the production environment, so these deployments have to be scheduled accordingly, which leads to time delays.

When the code is deployed, the operations team is also responsible for handling code changes and minor errors. At times, the operations team may feel pressured, and it may seem as though developers have pushed their responsibilities over to operations’ side of the wall. As you may come to realize, neither side can be blamed.

What if these two teams could work together? What if they:

  • Could break down silos?
  • Share responsibilities?
  • Start thinking alike?
  • Work as a team?

Well, this is what DevOps does. It gets software developers and operations in sync to improve productivity. DevOps is the practice of integrating development and operations teams in order to improve collaboration and productivity. This is done by automating workflows and infrastructure and continuously measuring application performance.

DevOps focuses on automating everything, which lets teams write small chunks of code that can be tested, monitored, and deployed in hours, as opposed to writing large chunks of code that take weeks to deploy. Let us move ahead and understand more about AWS and how it forms a crucial pairing with DevOps to give you AWS DevOps.


What Is AWS?

If you go back a decade, the scenario of handling and storing data was different. Companies preferred storing data using their private servers. However, with more and better usage of the internet, the trend has seen a paradigm shift for companies, as they are moving their data to the cloud. This enables companies to focus more on core competencies and stop worrying about storing and computation.

For example, Netflix is a popular video streaming service that the whole world uses today. Back in 2008, Netflix suffered a major database corruption, and for three days its operations were halted. The problem was scaling up, which is when they realized the need for highly reliable, horizontally scalable, distributed systems in the cloud. They began using cloud services, and since then their growth has been off the charts.

Gartner says that by 2020, a corporate “no-cloud” policy will be as rare as a “no-internet” policy today. Interesting, isn’t it?

Almost every company has started to adopt cloud services, and AWS, in particular, is the leading cloud service provider in the market. Let us understand more about it.


AWS

Amazon’s AWS has built a strong customer base, ranging from small-scale companies to big enterprises like D-Link.


AWS DevOps

AWS is one of the best cloud service providers and DevOps is the popular and efficient implementation of the software development lifecycle, making AWS DevOps a highly popular amalgamation.


AWS CloudFormation

DevOps teams are required to create and release cloud instances and services more frequently than traditional development teams. AWS CloudFormation enables you to do just that. Templates of AWS resources like EC2 instances, ECS containers, and S3 storage buckets let you set up the entire stack without you having to bring everything together yourself.
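
As a tiny illustration (not from the original article), a CloudFormation template describing a single S3 bucket can be this short; the bucket name is hypothetical and must be globally unique:

AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack with one S3 bucket
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-devops-artifacts-example

Deploying it is then one CLI call: aws cloudformation deploy --template-file template.yml --stack-name example-stack.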


AWS EC2

AWS EC2 speaks for itself. You can run containers inside EC2 instances, so you can leverage the AWS Security and management features, yet another reason why AWS DevOps is a lethal combo.


AWS CloudWatch

This monitoring tool lets you track every resource that AWS has to offer. Plus it makes it very easy to use third-party tools for monitoring.


AWS CodePipeline

CodePipeline is one popular feature from AWS which simplifies the way you manage your CI/CD toolset. It lets you integrate with tools like GitHub, Jenkins, and CodeDeploy, enabling you to visually control the flow of app updates from build to production.


Instances In AWS

AWS frequently creates and adds new instance types, and the level of customization these instances allow makes AWS and DevOps easy to use together.

All these reasons make AWS one of the best platforms for DevOps.


Originally published by Vishal Padghan at https://dzone.com