What is Microservices?

Microservices are a software development technique—a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services.

Originally published by Tom Huston at https://smartbear.com

Microservice architecture, or simply microservices, is a distinctive method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations. The trend has grown popular in recent years as enterprises look to become more Agile and move toward DevOps and continuous testing.

Microservices have many benefits for Agile and DevOps teams - as Martin Fowler points out, Netflix, eBay, Amazon, Twitter, PayPal, and other tech stars have all evolved from monolithic to microservices architecture. Unlike microservices, a monolith application is built as a single, autonomous unit. This makes changes to the application slow, as any change affects the entire system. A modification made to a small section of code might require building and deploying an entirely new version of the software. Scaling specific functions of an application also means you have to scale the entire application.

Microservices solve these challenges of monolithic systems by being as modular as possible. In the simplest form, they help build an application as a suite of small services, each running in its own process and independently deployable. These services may be written in different programming languages and may use different data storage techniques. While this results in systems that are scalable and flexible, it demands a more dynamic approach to building and operating software. Microservices are often connected via APIs, and can leverage many of the same tools and solutions that have grown in the RESTful and web service ecosystem. Testing these APIs can help validate the flow of data and information throughout your microservice deployment, as in the sketch below.
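
As a sketch of what such testing can look like (in Go, with a hypothetical endpoint, port, and field names), a small program can call a service's API and verify that the response still honors the agreed contract:

package main

import (
  "encoding/json"
  "fmt"
  "net/http"
)

// User mirrors the JSON contract we expect the (hypothetical) user service to honor.
type User struct {
  ID   int    `json:"id"`
  Name string `json:"name"`
}

// checkUserContract calls the service and verifies the response decodes into the agreed shape.
func checkUserContract(baseURL string) error {
  resp, err := http.Get(baseURL + "/users/42")
  if err != nil {
    return fmt.Errorf("user service unreachable: %w", err)
  }
  defer resp.Body.Close()
  if resp.StatusCode != http.StatusOK {
    return fmt.Errorf("unexpected status: %d", resp.StatusCode)
  }
  var u User
  if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
    return fmt.Errorf("response violates contract: %w", err)
  }
  return nil
}

func main() {
  if err := checkUserContract("http://localhost:8080"); err != nil {
    fmt.Println("contract check failed:", err)
    return
  }
  fmt.Println("contract check passed")
}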

Understanding Microservice Architecture

Just as there is no formal definition of the term microservices, there’s no standard model that you’ll see represented in every system based on this architectural style. But you can expect most microservice systems to share a few notable characteristics. 

The Six Characteristics Of Microservices

1) Multiple Components

Software built as microservices can, by definition, be broken down into multiple component services. Why? So that each of these services can be deployed, tweaked, and then redeployed independently without compromising the integrity of an application. As a result, you might only need to change one or more distinct services instead of having to redeploy entire applications. But this approach does have its downsides, including expensive remote calls (instead of in-process calls), coarser-grained remote APIs, and increased complexity when redistributing responsibilities between components.

2) Built For Business

The microservices style is usually organized around business capabilities and priorities. Unlike a traditional monolithic development approach—where different teams each have a specific focus on, say, UIs, databases, technology layers, or server-side logic—microservice architecture utilizes cross-functional teams. The responsibility of each team is to make specific products based on one or more individual services communicating via a message bus. In microservices, a team owns the product for its lifetime, as in Amazon’s oft-quoted maxim “You build it, you run it.”

3) Simple Routing

Microservices act somewhat like the classical UNIX system: they receive requests, process them, and generate a response accordingly. This is the opposite of how many other products, such as ESBs (Enterprise Service Buses), work, where high-tech systems for message routing, choreography, and applying business rules are utilized. You could say that microservices have smart endpoints that process info and apply logic, and dumb pipes through which the info flows.

4) Decentralized

Since microservices involve a variety of technologies and platforms, old-school methods of centralized governance aren’t optimal. Decentralized governance is favored by the microservices community because its developers strive to produce useful tools that can then be used by others to solve the same problems. Just like decentralized governance, microservice architecture also favors decentralized data management. Monolithic systems use a single logical database across different applications. In a microservice application, each service usually manages its unique database.

5) Failure Resistant

Like a well-rounded child, microservices are designed to cope with failure. Since several unique and diverse services are communicating together, it’s quite possible that a service could fail for one reason or another (e.g., when the supplier isn’t available). In these instances, the client should allow its neighboring services to function while it bows out in as graceful a manner as possible. Monitoring microservices can help reduce the risk of such failures. For obvious reasons, this requirement adds more complexity to microservices as compared to monolithic systems architecture.
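
One common way to “bow out gracefully” is to give every remote call a deadline and a fallback. Here is a minimal sketch in Go (the service URL and fallback values are hypothetical):

package main

import (
  "context"
  "fmt"
  "net/http"
  "time"
)

// fetchRecommendations asks a (hypothetical) supplier service for data, but gives up
// after 500ms and falls back to a safe default so the caller keeps functioning.
func fetchRecommendations() []string {
  ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
  defer cancel()
  req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://recommendations.internal/top", nil)
  resp, err := http.DefaultClient.Do(req)
  if err != nil {
    // Supplier unavailable or too slow: degrade gracefully with a static default.
    return []string{"bestsellers"}
  }
  defer resp.Body.Close()
  // ... decode and return the real response here ...
  return []string{"personalized-pick"}
}

func main() {
  fmt.Println(fetchRecommendations())
}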

6) Evolutionary

Microservices architecture is an evolutionary design and, again, is ideal for evolutionary systems where you can’t fully anticipate the types of devices that may one day be accessing your application. Many applications start out with a monolithic architecture, but as unforeseen requirements surface, they can be slowly revamped into microservices that interact with the older monolithic architecture through APIs.

Examples of Microservices

Netflix has a widespread architecture that has evolved from monolithic to SOA. It receives more than one billion calls every day, from more than 800 different types of devices, to its streaming-video API. Each API call then prompts around five additional calls to the backend service.

Amazon has also migrated to microservices. They get countless calls from a variety of applications—including applications that manage the web service API as well as the website itself—which would have been simply impossible for their old, two-tiered architecture to handle.

The auction site eBay is yet another example that has gone through the same transition. Its core application comprises several autonomous applications, with each one executing the business logic for a different functional area.

Microservice Pros and Cons

Microservices are not a silver bullet, and by implementing them you will expose communication, teamwork, and other problems that may have been previously implicit but are now forced out into the open. But API gateways in microservices can greatly reduce build and QA time and effort.

One common issue involves sharing schema/validation logic across services. What service A requires in order to consider some data valid doesn’t always apply to service B, if B has different needs. The best recommendation is to apply versioning and distribute schemas in shared libraries. Changes to libraries then become discussions between teams. Also, with strong versioning come dependencies, which can cause more overhead. The best practice to overcome this is planning around backwards compatibility, and accepting regression tests from external services/teams. These prompt you to have a conversation before you disrupt someone else’s business process, not after.
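
As a sketch of what planning for backwards compatibility can look like (in Go, with hypothetical field names), a consumer can accept both old and new payload versions by making new fields optional and defaulting them sensibly:

package main

import (
  "encoding/json"
  "fmt"
)

// Order adds an optional Currency field on top of the original schema;
// producers still on the old schema simply omit it.
type Order struct {
  ID       string  `json:"id"`
  Amount   float64 `json:"amount"`
  Currency string  `json:"currency,omitempty"` // new in v2 of the schema
}

// decodeOrder accepts both old and new payloads by defaulting the new field.
func decodeOrder(payload []byte) (Order, error) {
  var o Order
  if err := json.Unmarshal(payload, &o); err != nil {
    return o, err
  }
  if o.Currency == "" {
    o.Currency = "USD" // default for v1 payloads that predate the field
  }
  return o, nil
}

func main() {
  v1 := []byte(`{"id":"42","amount":9.99}`)
  v2 := []byte(`{"id":"43","amount":9.99,"currency":"EUR"}`)
  for _, p := range [][]byte{v1, v2} {
    o, _ := decodeOrder(p)
    fmt.Printf("%+v\n", o)
  }
}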

As with anything else, whether or not microservice architecture is right for you depends on your requirements, because they all have their pros and cons. Here’s a quick rundown of some of the good and bad:

Pros

  • Microservice architecture gives developers the freedom to independently develop and deploy services
  • A microservice can be developed by a fairly small team
  • Code for different services can be written in different languages (though many practitioners discourage it)
  • Easy integration and automatic deployment (using open-source continuous integration tools such as Jenkins, Hudson, etc.)
  • Easy for developers to understand and modify, which can help a new team member become productive quickly
  • The developers can make use of the latest technologies
  • The code is organized around business capabilities
  • Starts the web container more quickly, so the deployment is also faster
  • When change is required in a certain part of the application, only the related service can be modified and redeployed—no need to modify and redeploy the entire application
  • Better fault isolation: if one microservice fails, the other will continue to work (although one problematic area of a monolith application can jeopardize the entire system)
  • Easy to scale and integrate with third-party services
  • No long-term commitment to technology stack

Cons

  • Due to distributed deployment, testing can become complicated and tedious
  • Increasing number of services can result in information barriers
  • The architecture brings additional complexity, as the developers have to handle fault tolerance, network latency, a variety of message formats, and load balancing
  • Being a distributed system, it can result in duplication of effort
  • When the number of services increases, integrating and managing whole products can become complicated
  • In addition to several complexities of monolithic architecture, the developers have to deal with the additional complexity of a distributed system
  • Developers have to put additional effort into implementing the mechanism of communication between the services
  • Handling use cases that span more than one service without using distributed transactions is not only tough but also requires communication and cooperation between different teams

How Microservice Architecture Works

1) Monoliths and Conway’s Law

To begin with, let’s explore Conway’s Law, which states: “Organizations which design systems…are constrained to produce designs which are copies of the communication structures of these organizations.”

Imagine Company X with two teams: Support and Accounting. Instinctively, we separate out the high-risk activities; the difficulty lies in deciding responsibilities for activities like customer refunds. Consider how we might answer questions like “Does the Accounting team have enough people to process both customer refunds and credits?” or “Wouldn’t it be a better outcome to have our Support people be able to apply credits and deal with frustrated customers?” The answers get resolved by Company X’s new policy: Support can apply a credit, but Accounting has to process a refund to return money to a customer. The roles and responsibilities in this interconnected system have been successfully split, while gaining customer satisfaction and minimizing risks.

Likewise, at the beginning of designing any software application, companies typically assemble a team and create a project. Over time, the team grows, and multiple projects on the same codebase are completed. More often than not, this leads to competing projects: two people working in the same area of code will find it difficult to avoid working at cross purposes or introducing tradeoffs. And adding more people to the equation only makes the problem worse. As Fred Brooks puts it, nine women can’t make a baby in one month.

Moreover, in Company X or in any dev team, priorities frequently shift, resulting in management and communication issues. Last month’s highest priority item may have caused our team to push hard to ship code, but now a user is reporting an issue, and we no longer have time to resolve it because of this month’s priority. This is the most compelling reason to adopt SOA, including the microservices variety. Service-oriented approaches recognize the frictions involved between change management, domain knowledge, and business priorities, allowing dev teams to explicitly separate and address them. Of course, this in itself is a tradeoff—it requires coordination—but it allows you to centralize friction and introduce efficiency, as opposed to suffering from a large number of small inefficiencies.

Most importantly, smartly implementing an SOA or microservice architecture forces you to apply the Interface Segregation Principle. Due to the connected nature of mature systems, when isolating issues of concern, the typical approach is to find a seam or communication point and then draw a dotted line between two halves of the system. Without careful thought, however, this can lead to accidentally creating two smaller but growing monoliths, now connected with some kind of bridge. The consequence of this can be marooning important code on the wrong side of a barrier: Team A doesn’t bother to look after it, while Team B needs it, so they reinvent it.

2) Microservices: Avoiding the Monoliths

We’ve named some problems that commonly emerge; now let’s begin to look at some solutions.

How do you deploy relatively independent yet integrated services without spawning accidental monoliths? Well, suppose you have a large application, as in the sample from our Company X below, and are splitting up the codebase and teams to scale. Instead of finding an entire section of an application to split off, you can look for something on the edge of the application graph. You can tell which sections these are because nothing depends on them. In our example, the arrows pointing to Printer and Storage suggest they’re two things that can be easily removed from our main application and abstracted away. Whether it’s printing a Job or an Invoice is irrelevant: a Printer just wants printable data. Turning these—Printer and Storage—into external services avoids the monolith problem alluded to before. It also makes sense, as they are used multiple times and there’s little that can be reinvented. Use cases are well known from past experience, so you can avoid accidentally removing key functionality.

3) Service Objects and Identifying Data

So how do we go from monoliths to services? One way is through service objects. Without removing code from your application, you effectively just begin to structure it as though it were completely external. To do that, you’ll first need to differentiate the actions that can be done and the data that is present as inputs and outputs of those actions. Consider the code below, with a notion of doing something useful and a status of that task.

# A class to model a core transaction and execute it

      class Job
        def initialize
          @status = 'Queued'
        end

        def do_useful_work
          # ... do the actual work here ...
          @status = 'Finished'
        end

        def finished?
          return @status == 'Finished'
        end

        def ready?
          return @status == 'Queued'
        end
      end

To prepare this to begin looking like a microservice, what’s next?

# Service to do useful work and modify a status

    class JobService
      def do_useful_work(job_status)
        # ... do the actual work here ...

        job_status.finish!

        return job_status
      end
    end

    # A model of our Job's status

    class JobStatus
      def initialize
        @status = 'Queued'
      end

      def finished?
        return @status == 'Finished'
      end

      def ready?
        return @status == 'Queued'
      end

      def finish!
        @status = 'Finished'
      end
    end

Now we’ve distinguished two distinct classes: one that models the data, and one that performs the operations. Importantly, our JobService class has little or no state—you can call the same actions over and over, changing only the data, and expect to get consistent results. If JobService somehow started taking place over a network, our otherwise monolithic application wouldn’t care. Shifting these types of classes into a library, and substituting a network client for the previous implementation, would allow you to transform the existing code into a scalable external service.

This is Hexagonal Architecture, where the core of your application and the coordination is in the center, and the external components are orchestrated around it to achieve your goals.
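
The same idea as a sketch in Go (the names and endpoint are hypothetical): because the core only depends on an interface, an in-process implementation and a network-backed client are interchangeable.

package main

import "fmt"

// JobService is the seam: the core application only knows this interface.
type JobService interface {
  DoUsefulWork(status string) (string, error)
}

// LocalJobService is the original in-process implementation.
type LocalJobService struct{}

func (LocalJobService) DoUsefulWork(status string) (string, error) {
  // ... do the actual work ...
  return "Finished", nil
}

// RemoteJobService would call the extracted service over the network,
// e.g. by POSTing to BaseURL+"/jobs" and reading the status back.
type RemoteJobService struct{ BaseURL string }

func (r RemoteJobService) DoUsefulWork(status string) (string, error) {
  // Network call elided; the caller can't tell the difference.
  return "Finished", nil
}

func run(svc JobService) {
  status, _ := svc.DoUsefulWork("Queued")
  fmt.Println(status)
}

func main() {
  run(LocalJobService{})                                 // monolith today
  run(RemoteJobService{BaseURL: "http://jobs.internal"}) // external service tomorrow
}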

4) Coordination and Dumb Pipes

Now let’s take a closer look at what makes something a microservice as opposed to a traditional SOA.

Perhaps the most important distinction is side effects. Microservices avoid them. To see why, let’s look at an older approach: Unix pipes.

ls | wc -l

Above, two programs are chained together: the first lists all of the files in a directory, the second reads the number of lines in a stream of input. Imagine writing a comparable program, then having to modify it into the below:

ls | less

Composing small pieces of functionality relies on repeatable results, a standard mechanism for input and output, and an exit code for a program to indicate success or lack thereof. We know this works from observational evidence, and we also know that a Unix pipe is a “dumb” interface because it has no control statements. The pipe applies the single responsibility principle (SRP) by pushing data from A to B, and it’s up to the members of the pipeline to decide whether the input is acceptable.
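
A microservice worker can honor the same contract. Below is a sketch in Go of a pipeline-friendly filter: it reads lines from stdin, writes results to stdout, and signals failure through its exit code, leaving the pipe itself dumb:

package main

import (
  "bufio"
  "fmt"
  "os"
  "strings"
)

func main() {
  scanner := bufio.NewScanner(os.Stdin)
  for scanner.Scan() {
    line := scanner.Text()
    // The endpoint, not the pipe, decides whether input is acceptable.
    if strings.TrimSpace(line) == "" {
      fmt.Fprintln(os.Stderr, "blank input line")
      os.Exit(1) // non-zero exit code tells the pipeline we failed
    }
    fmt.Println(strings.ToUpper(line)) // do one job and pass the data along
  }
}

Compiled to a binary named filter (a name chosen here for illustration), it slots straight into a pipeline: ls | ./filter | wc -l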

Let’s go back to Company X’s Job and Invoice systems. Each controls a transaction and can be used together or separately: Invoices can be created for jobs, jobs can be created without an invoice, and invoices can be created without a job. Unlike Unix shell commands, the systems that own jobs and invoices have their own users working independently. But without falling back to a policy, it’s impossible to enforce rules for either system globally.

Say we want to extract out the key operations that can be repeatedly executed—the services for sending an invoice, mutating a job status and mutating an invoice status. These are completely separate from the task of persisting data.

This allows us to wire the discrete components together into two pipelines:

User creates a manual invoice

  • Adds data to invoice, status created; invokes BillingPolicyService to determine when an invoice is payable for a given customer
  • Invoice is issued to customer
  • Persists to the invoice data service, status sent

User finishes a job, creating an invoice

  • Validates job is completable
  • Adds data to invoice, status created; invokes BillingPolicyService to determine when an invoice is payable for a given customer
  • Invoice is issued to customer
  • Persists to the invoice data service, status sent

The invoice-calculation steps are idempotent, and it then becomes trivial to compose a draft invoice, or preview the amounts payable by the customer, by leveraging our new dedicated microservices, as sketched below.
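
Here is a sketch of that composition (function and service names are hypothetical): both pipelines reuse the same small, repeatable steps, so adding a draft-invoice or preview flow means rewiring, not rewriting.

package main

import "fmt"

type Invoice struct{ Status string }

// Small, repeatable steps, each standing in for a call to a dedicated service.
func addInvoiceData(inv *Invoice)  { inv.Status = "created" } // would invoke BillingPolicyService
func issueToCustomer(inv *Invoice) {}                         // sends the invoice out
func persist(inv *Invoice)         { inv.Status = "sent" }    // invoice data service

func validateJobCompletable() bool { return true }

// Pipeline 1: user creates a manual invoice.
func manualInvoice() Invoice {
  inv := Invoice{}
  addInvoiceData(&inv)
  issueToCustomer(&inv)
  persist(&inv)
  return inv
}

// Pipeline 2: finishing a job reuses the same steps after its own validation.
func finishJob() (Invoice, error) {
  if !validateJobCompletable() {
    return Invoice{}, fmt.Errorf("job is not completable")
  }
  return manualInvoice(), nil
}

func main() {
  fmt.Println(manualInvoice())
  inv, _ := finishJob()
  fmt.Println(inv)
}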

Unlike traditional SOA, the difference here is that we have low-level details exposed via a simple interface, as compared to a high-level API call that might execute an entire business action. With a high-level API, in fact, it becomes difficult to rewire small components together, since the service designer has removed many of the seams or choices we can take by providing a one-shot interface.

By this point, the repetition of business logic, policies, and rules traditionally leads many to push this complexity into a service bus or a single, centralized workflow-orchestration tool. The crucial advantage of microservice architecture, however, is not that we never share business rules/processes/policies, but that we push them into discrete packages, aligned to business needs. Not only does this mean that policy is distributed, but it also means that you can change your business processes with far less risk.

SOA vs. Microservices

“Wait a minute,” some of you may be murmuring over your morning coffee, “isn’t this just another name for SOA?” Service-Oriented Architecture (SOA) sprang up during the first few years of this century, and microservice architecture (abbreviated by some as MSA) bears a number of similarities. Traditional SOA, however, is a broader framework and can mean a wide variety of things. Some microservices advocates reject the SOA tag altogether, while others consider microservices to be simply an ideal, refined form of SOA. In any event, we think there are clear enough differences to justify a distinct “microservice” concept (at least as a special form of SOA, as illustrated above).

The typical SOA model, for example, usually depends more heavily on ESBs, while microservices use faster, lighter-weight messaging mechanisms. SOA also focuses on imperative programming, whereas microservices architecture favors a responsive-actor programming style. Moreover, SOA models tend to have an outsized relational database, while microservices frequently use NoSQL or micro-SQL databases (which can be connected to conventional databases). But the real difference has to do with the architecture methods used to arrive at an integrated set of services in the first place.

Since everything changes in the digital world, agile development techniques that can keep up with the demands of software evolution are invaluable. Most of the practices used in microservices architecture come from developers who have created software applications for large enterprise organizations, and who know that today’s end users expect dynamic yet consistent experiences across a wide range of devices. Scalable, adaptable, modular, and quickly accessible cloud-based applications are in high demand. And this has led many developers to change their approach.

The Future of Microservice Architecture

Whether or not microservice architecture becomes the preferred style of developers in the future, it’s clearly a potent idea that offers serious benefits for designing and implementing enterprise applications. Many developers and organizations, without ever using the name or even labeling their practice as SOA, have been using an approach toward leveraging APIs that could be classified as microservices.

We’ve also seen a number of existing technologies try to address parts of the segmentation and communication problems that microservices aim to resolve. SOAP does well at describing the operations available on a given endpoint and where to discover it via WSDLs. UDDI is theoretically a good step toward advertising what a service can do and where it can be found. But these technologies have been compromised by a relatively complex implementation, and tend not to be adopted in newer projects. REST-based services face the same issues, and although you can use WSDLs with REST, it is not widely done.

Assuming discovery is a solved problem, sharing schema and meaning across unrelated applications still remains a difficult proposition for anything other than microservices and other SOA systems. Technologies such as RDFS, OWL, and RIF exist and are standardized, but are not commonly used. JSON-LD and Schema.org offer a glimpse of what an entire open web that shares definitions looks like, but these aren’t yet adopted in large private enterprises.

The power of shared, standardized definitions is making inroads within government, though. Tim Berners-Lee has been widely advocating Linked Data, and the results are visible in data.gov and data.gov.uk, which expose a large number of data sets as well-described linked data. If a large number of standardized definitions can be agreed upon, the next steps are most likely toward agents: small programs that orchestrate microservices from a large number of vendors to achieve certain goals. When you add the increasing complexity and communication requirements of SaaS apps, wearables, and the Internet of Things into the overall picture, it’s clear that microservice architecture probably has a very bright future ahead.

Further reading about Microservices

Build Spring Microservices and Dockerize Them for Production

Best Java Microservices Interview Questions In 2019

Build a microservices architecture with Spring Boot and Spring Cloud

Design patterns for microservices 🍂 🍂 🍂

Kotlin Microservices With Micronaut, Spring Cloud, and JPA

Secure Service-to-Service Spring Microservices with HTTPS and OAuth 2.0

Build Secure Microservices with AWS Lambda and ASP.NET Core

Create and Deploy AWS and AWS Lambda using Serverless Framework

An introduction to using AWS and AWS Lambda with the Serverless Framework.

AWS - Introduction

The Serverless Framework helps you develop and deploy your AWS Lambda functions, along with the AWS infrastructure resources they require. It's a CLI that offers structure, automation, and best practices out of the box, allowing you to focus on building sophisticated, event-driven, serverless architectures made up of Functions and Events.

The Serverless Framework is different from other application frameworks because:

  • It manages your code as well as your infrastructure
  • It supports multiple languages (Node.js, Python, Java, and more)

Core Concepts

Here are the Framework's main concepts and how they pertain to AWS and Lambda...

Functions

A Function is an AWS Lambda function. It's an independent unit of deployment, like a microservice. It's merely code, deployed in the cloud, that is most often written to perform a single job such as:

  • Saving a user to the database
  • Processing a file in a database
  • Performing a scheduled task

You can perform multiple jobs in your code, but we don't recommend doing that without good reason. Separation of concerns is best and the Framework is designed to help you easily develop and deploy Functions, as well as manage lots of them.

Events

Anything that triggers an AWS Lambda Function to execute is regarded by the Framework as an Event. Events are infrastructure events on AWS such as:

  • An AWS API Gateway HTTP endpoint request (e.g., for a REST API)
  • An AWS S3 bucket upload (e.g., for an image)
  • A CloudWatch timer (e.g., run every 5 minutes)
  • An AWS SNS topic (e.g., a message)
  • A CloudWatch Alert (e.g., something happened)
  • And more...

When you define an event for your AWS Lambda functions in the Serverless Framework, the Framework will automatically create any infrastructure necessary for that event (e.g., an API Gateway endpoint) and configure your AWS Lambda Functions to listen to it.

Resources

Resources are AWS infrastructure components which your Functions use such as:

  • An AWS DynamoDB Table (e.g., for saving Users/Posts/Comments data)
  • An AWS S3 Bucket (e.g., for saving images or files)
  • An AWS SNS Topic (e.g., for sending messages asynchronously)
  • Anything that can be defined in CloudFormation is supported by the Serverless Framework

The Serverless Framework not only deploys your Functions and the Events that trigger them, but it also deploys the AWS infrastructure components your Functions depend upon.

Services

A Service is the Framework's unit of organization. You can think of it as a project file, though you can have multiple services for a single application. It's where you define your Functions, the Events that trigger them, and the Resources your Functions use, all in one file entitled serverless.yml (or serverless.json or serverless.js). It looks like this:

# serverless.yml

service: users

functions: # Your "Functions"
  usersCreate:
    events: # The "Events" that trigger this function
      - http: post users/create
  usersDelete:
    events:
      - http: delete users/delete

resources: # The "Resources" your "Functions" use. Raw AWS CloudFormation goes in here.

When you deploy with the Framework by running serverless deploy, everything in serverless.yml is deployed at once.

Plugins

You can overwrite or extend the functionality of the Framework using Plugins. Every serverless.yml can contain a plugins: property, which features multiple plugins.

# serverless.yml

plugins:
  - serverless-offline
  - serverless-secrets

Why Use Microservices?

In this article, we'll better understand the features of monolithic versus microservices architectures, and what issues microservices help to address.

Originally published by Rahul Agarwal at https://dzone.com   

Microservices are very trendy these days. Almost everybody is into them. It is not just Netflix, Amazon, or Google — it appears that almost everyone has adopted this architecture style. Although microservices have been around for quite some time now and a lot has already been written about them, I thought of writing yet another piece today, so please bear with me.

To understand the need for microservices, we need to understand problems with our typical 3-tier monolithic architecture.

What Is Monolithic Architecture?

Monolithic means composed all in one piece. A monolithic application is one which is self-contained. All components of the application must be present in order for the code to work.

Take the case of a typical 3-tier traditional web application built in three parts: a user interface, a database, and a server-side application. This server-side application is called a monolith, which is further divided into 3 layers — presentation, business layer, and data layer. The entire code is maintained in the same codebase. In order for the code to work, it is deployed as a single unit. Any small change requires the entire application to be built and deployed.

A typical monolithic application.

What Is Microservices Architecture?

Microservices architecture is an architectural style where the entire application is divided and designed as loosely-coupled, independent services modeled around a business domain. The "micro" in microservices is very deceiving. It has been debated a lot, but in my humble opinion, it does not dictate how small or big a service has to be. Again, this is another discussion we should have another day. Let's move forward.

The important point at this stage is that each independent service has a business boundary and can be independently developed, tested, deployed, monitored, and scaled. These services can even be developed in different programming languages.

A typical microservices application.

In microservices-based architecture, each component or service has its own database. There is no centralized database, as in the case of a monolith. You can even use NoSQL, RDBMS, or any other database as needed for each of the individual microservices. This makes microservices truly independent.

Let's now see what concerns microservices address.

Concerns With the Monolith

Difficult to Scale

These applications can only be horizontally scaled by running multiple instances of the entire application behind a load balancer. If a specific service within the application requires scaling, there is no simple option. You need to scale the application in its entirety, which is an unnecessary waste of resources.

In contrast, a microservices-based application allows you to scale individual services independently as per your requirements. In the above diagram, if service B needs to be scaled, you can have maybe 10 instances of it while keeping the others as is. This can be changed on the fly, as needed.

Long Time to Ship

The entire codebase is deployed rather than just the impacted code. Any change made in any portion/layer of a monolithic application requires building and deploying the entire application. The individual developer is also required to download the entire application code and not just his/her impacted module for fixing and testing. This also impacts continuous deployments.

On the other hand, in microservices architecture, if a change is only needed in one of the hundred microservices, only the changed microservice is built and deployed. There is no need to deploy everything. In fact, a microservice can even be deployed several times during the day, if needed.

Complexities of Growing Applications

As a monolithic application grows (features, functionality, etc.), so does the team, and soon the application becomes complex and intertwined. As different teams keep modifying the code, it slowly becomes more and more difficult to maintain a modular structure, and the result is spaghetti code. This not only impacts code quality, but also impacts the organization as a whole.

In a microservices-based application, each team works on a separate microservice, which makes it much easier to keep the code from becoming intertwined.

No Clear Ownership

In monolithic applications, teams that look independent are not actually independent. They simultaneously work on the same codebase but are heavily dependent on each other.

In microservices-based applications, the independent teams work on separate microservices. A team will own an entire microservice. There is clear ownership of work with clear control of everything about the service, including development, deployment, and monitoring.

Failure Cascade

The failure of one part of a monolithic application can cascade and result in bringing down the entire system, if not properly designed.

In the case of microservices-based architecture, we can make use of a circuit breaker to avoid such failures.
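
Here is a minimal sketch of the idea in Go (the thresholds and cooldown are illustrative; production systems typically reach for a battle-tested library): after repeated failures, the breaker "opens" and fails fast instead of hammering a dead service.

package main

import (
  "errors"
  "fmt"
  "time"
)

var ErrOpen = errors.New("circuit open: failing fast")

// CircuitBreaker trips after maxFailures consecutive errors and
// rejects calls until the cooldown elapses.
type CircuitBreaker struct {
  failures    int
  maxFailures int
  openedAt    time.Time
  cooldown    time.Duration
}

func (cb *CircuitBreaker) Call(fn func() error) error {
  if cb.failures >= cb.maxFailures && time.Since(cb.openedAt) < cb.cooldown {
    return ErrOpen // open: reject immediately without calling the dependency
  }
  if err := fn(); err != nil {
    cb.failures++
    if cb.failures >= cb.maxFailures {
      cb.openedAt = time.Now() // (re)open the circuit
    }
    return err
  }
  cb.failures = 0 // success closes the circuit again
  return nil
}

func main() {
  cb := &CircuitBreaker{maxFailures: 3, cooldown: 5 * time.Second}
  flaky := func() error { return errors.New("downstream service failed") }
  for i := 0; i < 5; i++ {
    fmt.Println(cb.Call(flaky)) // three real failures, then two fast rejections
  }
}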

Wall Between Dev and Ops

Dev teams normally do the development and testing, and once the application is deployed, simply toss the ownership of maintenance and support to the operations team. The dev team is disbanded, and the ops team takes ownership and struggles to support the monolithic application in production.

In microservices-based applications, teams are organized with the understanding that "you build it, you run it." The dev team continues to own the application in production.

Stuck in a Technology/Language

With a monolith, one gets locked into the implemented technology/language. The entire application must be rewritten if a technology/language change is needed.

With microservices, each service can be implemented in a different technology or language as per the requirements and the business. Any decision to change the technology/language of a service will only require rewriting of that particular service since all microservices are independent of each other. 

Availability of the Right Tools/Technologies to Support Microservices

A few years back, the appropriate tools and technologies were not available to support microservices. Ever since Docker containers and cloud infrastructure (especially PaaS) became available to the masses, microservices have been adopted at large scale, thanks to the freedom these tools provide from traditional provisioning procedures.

Conclusion

We have talked in detail about both monolithic and microservices architecture styles. We also discussed the various key problems of monolithic applications and how microservices come forward to solve them in the new world. In a nutshell, choose microservices architecture for the following benefits:

  • Independently develop and deploy services
  • Speed and agility
  • Better code quality
  • Code created/organized around business functionality
  • Increased productivity
  • Easier to scale
  • Freedom (in a way) to choose the implementation technology/language

Even with all the benefits offered by microservices architecture, it is not a silver bullet. It has complexities of its own. Think of multiple instances of hundreds of services in a big project. How will you monitor these? In case of any service failures, how will an error be tracked, traced, and debugged?

All these are overheads which need to be addressed for an efficient application.

Further reading about Microservices

An Introduction to Microservices

What is Microservices?

Build Spring Microservices and Dockerize Them for Production

Best Java Microservices Interview Questions In 2019

Build a microservices architecture with Spring Boot and Spring Cloud

Design patterns for microservices 🍂 🍂 🍂

Kotlin Microservices With Micronaut, Spring Cloud, and JPA

Secure Service-to-Service Spring Microservices with HTTPS and OAuth 2.0

Build Secure Microservices with AWS Lambda and ASP.NET Core

How to build a basic Serverless Application using AWS Lambda

In this post, I’m going to show you how easy it is to build a basic Serverless Application using a few AWS Services (AWS Lambda)

One of the architectural patterns I’ve championed is Serverless Architecture. In short, a serverless application utilizes an array of cloud applications and services to perform logic that would normally be constrained to a single application hosted on a constantly running server.

Serverless itself has many benefits. It typically has reduced operational cost, as you are only paying for an application to run when it needs to. It also typically has reduced development cost, as many things you’d typically have to worry about with a legacy application, such as OS and System configurations, are handled for you by most cloud providers.

In this post, I’m going to show you how easy it is to build a basic serverless application using a few AWS services.

AWS Lambda

AWS Lambda is a service that allows you to run functions when they need to run, and only pay for them when they do. AWS allows you to upload an executable in a variety of languages, and handles spinning up and tearing down containers for you as it needs them. In this blog, I’ll be using Golang, also called Go, a language built at Google with scalability and concurrency in mind. Some of the other languages supported out of the box by AWS Lambda include Python, Node.js, and Java, to name a few. Some others, such as C++, can be used if you supply a custom runtime.

Building a Lambda Function

Let’s say we want to build a really simple API that returns information about cars. It’s not something that’s going to be used very often, and it doesn’t have to perform any complex logic. It’s just retrieving data and sending it to a client. For brevity, we’re going to avoid having a database of any kind and pretend that we magically have all of this information in memory.

In Go, there are no classes. Instead, objects are represented by structs, similar to C. Let’s build a quick struct to represent a car:

type Car struct {
  Model string `json:"model"`
  Color string `json:"color"`
  Year int `json:"year"`
}

As you can see, we have a Car struct with three fields: Model, Color, and Year. The fields are capitalized so that they are exported, which is similar to public fields in some other languages. They are annotated with a json tag to instruct the program how to deserialize JSON into an instance of the struct. More on that later.

Okay, so we have our struct now. Let’s say we want to build an HTTP endpoint that returns one of my favorite cars, a red 1999 Chevrolet Corvette. The response could look something like this:

GET /myfavoritecar

{
  "model": "Corvette",
  "color": "red",
  "year": 1999
}

AWS Lambda programs need to conform to a specific format which depends on the language they’re written in. All of them will have a handler function of some kind, which will usually include two parameters: a context and an input, the latter of which is usually some kind of JSON payload. In our example, we’re going to have our lambda function be triggered by requests to an application load balancer, which requires a specific contract be followed. More on that can be found in the AWS documentation.

After following the AWS documentation, structs to represent an input to our Lambda function and its output will look something like this:

type LambdaPayload struct {
   RequestContext struct {
      Elb struct {
         TargetGroupArn string `json:"targetGroupArn"`
      } `json:"elb"`
   } `json:"requestContext"`
   HTTPMethod            string            `json:"httpMethod"`
   Path                  string            `json:"path"`
   Headers               map[string]string `json:"headers"`
   QueryStringParameters map[string]string `json:"queryStringParameters"`
   Body                  string            `json:"body"`
   IsBase64Encoded       bool              `json:"isBase64Encoded"`
}


type LambdaResponse struct {
   IsBase64Encoded   bool   `json:"isBase64Encoded"`
   StatusCode        int    `json:"statusCode"`
   StatusDescription string `json:"statusDescription"`
   Headers           struct {
      SetCookie   string `json:"Set-cookie"`
      ContentType string `json:"Content-Type"`
   } `json:"headers"`
   Body string `json:"body"`
}

There are fields that represent some meta info about our requests, such as request methods and response codes, as well as fields for request paths (in our case, /myfavoritecar), maps for headers and query parameters, the bodies of the requests and responses, and whether or not either is Base64-encoded. We can assume our lambda will automatically conform to this contract, as AWS will ensure that, and start writing our function!

When all is said and done, our function will look something like this:

func lambda_handler(ctx context.Context, payload LambdaPayload) (LambdaResponse, error) {
  response := &LambdaResponse{}
  response.Headers.ContentType = "text/html"
  response.StatusCode = http.StatusBadRequest
  response.StatusDescription = http.StatusText(http.StatusBadRequest)
  if payload.HTTPMethod == http.MethodGet && payload.Path == "/myfavoritecar" {
    car := &Car{}
    car.Model = "Corvette"
    car.Color = "Red"
    car.Year = 1999
    res, err := json.Marshal(car)
    if err != nil {
      fmt.Println(err)
      response.StatusCode = http.StatusInternalServerError
      response.StatusDescription = http.StatusText(http.StatusInternalServerError)
      return *response, err
    }
    response.Headers.ContentType = "application/json"
    response.Body = string(res)
    response.StatusCode = http.StatusOK
    response.StatusDescription = http.StatusText(http.StatusOK)
    return *response, nil
  } else {
    return *response, nil
  }
}

As you can see, we check that the request is a GET request and that we are looking for my favorite car by checking the path of the request. Otherwise, we return a Bad Request response. If we are serving the right request, we create a Car struct, assign the proper values, and marshal it as JSON (convert the struct to a JSON string) using Golang’s (great) built-in JSON library. We create an instance of the Lambda response struct in either case, populate the relevant fields (and the content type where applicable), and return it. Now, we just need a main function that specifies the above should run, and we’re done:

func main(){
  lambda.Start(lambda_handler)
}
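
For completeness, these snippets all live in one file, which needs a package declaration and the following imports (assuming the official aws-lambda-go library, which provides lambda.Start):

package main

import (
  "context"       // handler's first parameter
  "encoding/json" // json.Marshal for the Car response
  "fmt"           // error logging
  "net/http"      // method and status constants

  "github.com/aws/aws-lambda-go/lambda" // lambda.Start
)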

That’s it! Now let’s deploy it to AWS and see how it runs.

Deploying the Lambda to AWS

The first thing we need to do is create a function package. This differs from language to language; for Golang it is quite simple. Here’s a sample bash script for doing so:

# Build a Linux/amd64 binary from the .go files in the golang folder
env GOOS=linux GOARCH=amd64 go build -o /tmp/lambda golang/*.go
# Zip just the binary (-j junks the directory path), producing /tmp/lambda.zip
zip -j /tmp/lambda /tmp/lambda
# Upload the zip as the function's new code
aws lambda update-function-code --function-name car-lambda --region us-east-1 --zip-file fileb:///tmp/lambda.zip

As you can see, we use the Golang CLI to build an executable for the amd64 architecture, and Linux as an OS (as go executables differ depending on where they are meant to be run). For AWS lambda, we need to target the above architecture. It looks in a folder golang for any .go files, and compiles them into an executable. Then, we zip it in a location and call the AWS CLI to update the function code of our car-lambda function, which we haven’t created yet. Let’s do that now.

We can create our Lambda by going to the AWS Lambda page in the AWS Console, and clicking Create Function. We can name it car-lambda, and choose Go 1.x as our runtime. For execution role, we can choose the “Create a new role with basic Lambda permissions” option. That’s it! Now, we can run our script (assuming we have the AWS and Go CLI installed) and see output similar to the following:

{
  "FunctionName": "car-lambda",
  "FunctionArn": "arn:aws:lambda:us-east-1:YOUR_ACCOUNT_ID:function:car-lambda",
  "Runtime": "go1.x",
  "Role": "arn:aws:iam::YOUR_ACCOUNT_ID:role/service-role/car-lambda-role-w4cmi5fv",
  "Handler": "hello",
  "CodeSize": 4887939,
  "Description": "",
  "Timeout": 15,
  "MemorySize": 512,
  "LastModified": "2019-09-20T19:46:19.393+0000",
  "CodeSha256": "6h9d94EOfBj6uArZZKkDalbat7+xtlC83rOR+vBEaws=",
  "Version": "$LATEST",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "7d4cbfd8-2174-4a85-a9e0-043016f0a1d0"
}

We’re almost done deploying our lambda! However, you’ll see that the “handler” is set to “hello”. This is an AWS default. We need to change it to “lambda” (as that’s what I called my lambda program). You can do so by going to the Lambda’s page on AWS and scrolling down to the “handler” text field. You can change it to whatever you named your lambda program if you named it something different.

All right. NOW we’re done. Phew.

The next thing we need to do on AWS is create an ALB (Application Load Balancer). We can do so on the EC2 page by clicking the Load Balancers link, followed by Create Load Balancer. On the following screen, we click “Create” on the ALB square and fill in the setup page (I’ve gone ahead and filled in the Name field and made it internet-facing).

For security purposes, I haven’t selected a VPC here, but you’d want to choose a VPC and public subnets that are relevant to your AWS account. On the following screens, we need to set up a target group that routes traffic from the ALB to our lambda function. Select “New target group” from the dropdown, and “Lambda” as the target type. I named it car-lambda-tg.

For Step 5, choose the lambda we created earlier, then finish the wizard and give AWS a moment to provision the ALB for you.

If your ALB is in a VPC, make sure it is reachable from whichever network you’re on. You can do so by configuring it with public subnets and attaching a security group to the ALB (go to the VPC page, then click Security Groups) that allows traffic from specific (or all) IP addresses; the AWS documentation on security groups covers this in detail. Then, when you try to go to the ALB’s DNS name in your browser, you will (probably) see a Bad Request response.

This is good! We see the Bad Request we programmed earlier. Now, if we go to the /myfavoritecar endpoint, we get back the JSON for the red 1999 Corvette.

We did it! Our lambda is now fully up and running on AWS. We have an API that can return responses, and we don’t have to worry about setting up and paying for an entire server and its pain points. We can now extend it as much as we want through other APIs, endpoints, a database, more lambdas, and more. The possibilities are endless. I hope this blog post was helpful for you. As you can see, it’s super easy to get up and running with a serverless API.