How to Run Java Microservices on OpenShift Using Source-2-Image

<strong>With Source-2-Image (S2I), you don't have to provide Kubernetes YAML templates or build Docker images, OpenShift will do it for you. Read on to see how it works!</strong>


One of the reasons you might prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes, you need to provide an already-built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, which builds reproducible Docker images from application source code. With S2I, you don't have to provide any Kubernetes YAML templates or build Docker images yourself; OpenShift will do it for you. Let's see how it works. The best way to test it locally is via Minishift. But the first step is to prepare our sample application's source code.

1. Prepare the Application Code

I have already described how to run Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code here, so you can compare the two approaches. Our source code is available on GitHub in the **sample-spring-microservices-new** repository. We will modify the version used for Kubernetes a little by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the openshift branch.

Our sample system consists of three microservices which communicate with each other and use a Mongo database on the backend. Here’s the diagram that illustrates our architecture.

Every microservice is a Spring Boot application that uses Maven as a build tool. After including spring-boot-maven-plugin, it can generate a single fat JAR with all the necessary dependencies, which is required by the Source-2-Image builder.
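For reference, the relevant pom.xml fragment typically looks like the following. This is a sketch of the standard plugin configuration, not copied verbatim from the repository:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```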


Every application includes starters for Spring Web, Spring Actuator, and Spring Data MongoDB for integration with our Mongo database. We will also include libraries for generating Swagger API documentation and, for the applications that call REST endpoints exposed by other microservices, Spring Cloud OpenFeign.


Every Spring Boot application exposes a REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Autowired
    EmployeeRepository repository;

    @PostMapping
    public Employee add(@RequestBody Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.save(employee);
    }

    @GetMapping("/{id}")
    public Employee findById(@PathVariable("id") String id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id).get();
    }

    @GetMapping
    public Iterable<Employee> findAll() {
        LOGGER.info("Employee find");
        return repository.findAll();
    }

    @GetMapping("/department/{departmentId}")
    public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartmentId(departmentId);
    }

    @GetMapping("/organization/{organizationId}")
    public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
        LOGGER.info("Employee find: organizationId={}", organizationId);
        return repository.findByOrganizationId(organizationId);
    }
}


The application expects environment variables pointing to the database name, user, and password. In application.yml this can look like the following (the MONGO_* variables come from the secret we will inject later in this article):

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}

Inter-service communication is realized through the OpenFeign declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

    @GetMapping("/organization/{organizationId}")
    List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);

}


The address of the target service accessed by the Feign client is set inside the application.yml file. The communication goes through OpenShift/Kubernetes services, and the name of each service is injected through an environment variable:

spring:
  application:
    name: organization

microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally, you just have to download it, copy minishift.exe (on Windows) to a directory on your PATH, and start it with the minishift start command. For more details, you can refer to my previous article about OpenShift and Java applications, A Quick Guide to Deploying Java Apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.

After starting Minishift, we need to run some additional oc commands to enable Source-2-Image for Java apps. First, we grant the admin user the privileges needed to access the openshift project. In this project, OpenShift stores all the built-in templates and the image streams used, for example, as S2I builders. Let's begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon, we are able to log in to Minishift as a cluster admin. Now we can grant the cluster-admin role to the admin user.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the credentials admin/admin. You should be able to see the openshift project. But that's not all. The image used for building runnable Java apps (openjdk18-openshift) is not available by default in Minishift. We can import it manually from the Red Hat registry using the oc import-image command, or just enable and apply the xpaas addon. I prefer the second option.

$ minishift addons apply xpaas

Now, go to the Minishift web console, select the openshift project, and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

The newest version of that image is 1.3. Surprisingly, that is not the newest version available on OpenShift Container Platform, which already has version 1.5. However, the newest builder images have been moved to a registry that requires authentication.

3. Deploying a Java App Using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. This is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What's important for us is that OpenShift generates a secret, which is by default available under the name mongodb.

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for use with Maven-based Java standalone projects that are run via the main class, for example, Spring Boot applications. If you do not provide any builder while creating a new app, the application type is auto-detected by OpenShift, and source code written in Java will be deployed on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.

Let's create our first application on OpenShift. We begin with the employee microservice. Under normal circumstances, each microservice would be located in a separate Git repository. In our sample, all of them are placed in a single repository, so we have to provide the location of the current app with the --context-dir parameter. We will also override the default branch with openshift, which was created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~ --name=employee --context-dir=employee-service

All our microservices connect to the Mongo database, so we also have to inject the connection settings and credentials into the application pod. This can be achieved by injecting the mongodb secret into the BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_

BuildConfig is one of the OpenShift objects created after running the oc new-app command. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After the application is created, a new build starts. First, it downloads the source code from the Git repository, then builds it using Maven, assembles the build results into a Docker image, and, finally, saves the image in the registry.

Now, we can create the next application, department. For simplicity, all three microservices connect to the same database, which is not recommended under normal circumstances. The only other difference between the department and employee apps is the environment variable EMPLOYEE_SERVICE, which is set as a parameter on the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~ --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee

Here we do the same as before: we inject the mongodb secret into the BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build is started just after we create the new application, but we can also start it manually by executing the following command:

$ oc start-build department

Finally, we are deploying the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~ --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. Deep Look Into the OpenShift Objects We Created

The list of builds may be displayed in the web console under Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available, one for each application. The same list can be displayed with the command oc get bc.

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button, as shown below.

We can always display the YAML configuration file with the BuildConfig definition. But it is also possible to perform a similar action using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

Every build generates a Docker image with the application and saves it in the Minishift internal registry. The list of available image streams can be found under the section Builds -> Images.

Every application is automatically exposed via services on ports 8080 (HTTP), 8443 (HTTPS), and 8778 (Jolokia). You can also expose these services outside Minishift by creating an OpenShift Route with the oc expose command.

5. Testing the Sample System

To proceed with the tests, we should first expose our microservices outside Minishift. To do that, just run the following commands:

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

After that, we can access the applications at the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}, as shown below.

Each microservice provides Swagger2 API documentation available at swagger-ui.html. Thanks to that, we can easily test every single endpoint exposed by the service.

It’s worth noting that every application is making use of three approaches to inject environment variables into the pod:

  1. The version number is stored in the source code repository in the file .s2i/environment. The S2I builder reads all the properties defined inside that file and sets them as environment variables, first for the builder pod and then for the application pod. Our property is named VERSION; it is injected using Spring's @Value and set on the Swagger API (the code is below).
  2. I have already set the names of dependent services as the ENV vars while executing the command oc new-app for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.
@Value("${VERSION}")
String version;

public static void main(String[] args) {
    SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
    return new Docket(DocumentationType.SWAGGER_2)
        .apiInfo(new ApiInfoBuilder()
            .version(version)
            .title("Department API")
            .description("Documentation Department API v" + version)
            .build());
}


Today, I've shown you that deploying applications on OpenShift can be very simple. You don't have to create any YAML descriptor files or build Docker images yourself to run your app; it is built directly from your source code. You can compare this with deployment on Kubernetes, described in one of my previous articles, A Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.

Originally published by Piotr Mińkowski at


Java Microservices: Code Examples, Tutorials, and More


Microservices are replacing monoliths every day. So, let's explore how Java devs can put them to work with the help of their favorite frameworks.

Originally published by Angela Stringfellow

Microservices are increasingly used in the development world as developers work to create larger, more complex applications that are better developed and managed as a combination of smaller services that work cohesively together for larger, application-wide functionality. Tools are rising to meet the need to think about and build apps using a piece-by-piece methodology that is, frankly, less mind-boggling than considering the whole of the application at once. Today, we’ll take a look at microservices, the benefits of using this capability, and a few code examples.

What Are Microservices?

Microservices are a form of service-oriented architecture style (one of the most important skills for Java developers) wherein applications are built as a collection of different smaller services rather than one whole app. Instead of a monolithic app, you have several independent applications that can run on their own and may be created using different coding or programming languages. Big and complicated applications can be made up of simpler and independent programs that are executable by themselves. These smaller programs are grouped together to deliver all the functionalities of the big, monolithic app.

A microservice captures a business scenario, answering the question "What problem are you trying to solve?" It is usually developed by an engineering team with only a few members and can be written in any programming language and utilize any framework. Each of the involved programs is independently versioned, executed, and scaled. These microservices can interact with other microservices and can have unique URLs or names while remaining available and consistent even when failures occur.

What Are the Benefits of Microservices?

There are several benefits to using microservices. For one, because these smaller applications are not dependent on the same coding language, the developers can use the programming language that they are most familiar with. That helps developers come up with a program faster with lower costs and fewer bugs. The agility and low costs can also come from being able to reuse these smaller programs on other projects, making it more efficient.

Examples of Microservices Frameworks for Java

There are several microservices frameworks that you can use for developing for Java. Some of these are:

  • Spring Boot: This is probably the best-known Java microservices framework; it works on top of various languages for Inversion of Control, Aspect-Oriented Programming, and more.
  • Jersey: This open-source framework supports the JAX-RS APIs in Java and is very easy to use.
  • Swagger: Helps you document your APIs and gives you a development portal that allows users to test your APIs.

Others that you can consider include: Dropwizard, Ninja Web Framework, Play Framework, RestExpress, Restlet, Restx, and Spark Framework.

How to Create Microservices Using Dropwizard

DropWizard pulls together mature and stable Java libraries in lightweight packages that you can use for your own applications. It uses Jetty for HTTP, Jersey for REST, and Jackson for JSON, along with Metrics, Guava, Logback, Hibernate Validator, Apache HttpClient, Liquibase, Mustache, Joda Time, and Freemarker.

You can set up a Dropwizard application using Maven. How?

In your POM, add a dropwizard.version property using the latest version of Dropwizard, then list the dropwizard-core library:

<properties>
    <dropwizard.version>LATEST VERSION</dropwizard.version>
</properties>
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>${dropwizard.version}</version>
</dependency>

This will set up a Maven project for you. From here, you can create a configuration class, an application class, a representation class, a resource class, and a health check, and you can also build fat JARs, then run your application.

Check out the Dropwizard user manual at this link. The GitHub library is here.

Sample code:

package com.example.helloworld;

import com.yammer.dropwizard.config.Configuration;
import com.fasterxml.jackson.annotation.JsonProperty;
import org.hibernate.validator.constraints.NotEmpty;

public class HelloWorldConfiguration extends Configuration {

    @NotEmpty
    @JsonProperty
    private String template;

    @NotEmpty
    @JsonProperty
    private String defaultName = "Stranger";

    public String getTemplate() {
        return template;
    }

    public String getDefaultName() {
        return defaultName;
    }
}

Microservices With Spring Boot

Spring Boot lets you run Java applications with an embedded server. It uses Tomcat by default, so you do not have to use Java EE containers. A sample Spring Boot tutorial is at this link.

You can find all Spring Boot projects here, and you will realize that Spring Boot has all the infrastructures that your applications need. It does not matter if you are writing apps for security, configuration, or big data; there is a Spring Boot project for it.

Spring Boot projects include:

  • Spring IO Platform: Enterprise-grade distribution for versioned applications.
  • Spring Framework: For transaction management, dependency injection, data access, messaging, and web apps.
  • Spring Cloud: For distributed systems and used for building or deploying your microservices.
  • Spring Data: For microservices that are related to data access, be it map-reduce, relational or non-relational.
  • Spring Batch: For high levels of batch operations.
  • Spring Security: For authorization and authentication support.
  • Spring REST Docs: For documenting RESTful services.
  • Spring Social: For connecting to social media APIs.
  • Spring Mobile: For mobile Web apps.

Sample code:

import org.springframework.boot.*;
import org.springframework.boot.autoconfigure.*;
import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;

@Controller
@EnableAutoConfiguration
public class Example {

    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Hello World!";
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Example.class, args);
    }
}

Jersey

The Jersey RESTful framework is open source and based on the JAX-RS specification. Jersey applications can extend existing JAX-RS implementations and add features and utilities that make RESTful services simpler, as well as making client development easier.

The best thing about Jersey is that it has great documentation that is filled with examples. It is also fast and has extremely easy routing.

The documentation on how to get started with Jersey is at this link, while more documentation can be found here.

A sample code that you can try:

package org.glassfish.jersey.examples.helloworld;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("helloworld")
public class HelloWorldResource {

    public static final String CLICHED_MESSAGE = "Hello World!";

    @GET
    @Produces("text/plain")
    public String getHello() {
        return CLICHED_MESSAGE;
    }
}

Jersey is very easy to use with other libraries, such as Netty or Grizzly, and it supports asynchronous connections. It does not need servlet containers. It does, however, have an unpolished dependency injection implementation.

Play Framework

Play Framework gives you an easier way to build, create, and deploy web applications using Scala and Java. It is ideal for RESTful applications that need to handle remote calls in parallel. It is also very modular and supports async. Play also has one of the biggest communities of all the microservices frameworks.

Sample code you can try:

package controllers;

import play.mvc.*;

public class Application extends Controller {

    public static void index() {
        render();
    }

    public static void sayHello(String myName) {
        render(myName);
    }
}

Restlet

Restlet helps developers create fast and scalable web APIs that adhere to the RESTful architectural pattern. It has good routing and filtering and is available for Java SE/EE, OSGi, Google App Engine (part of Google Compute), Android, and other major platforms.

Restlet comes with a steep learning curve that is made worse by a closed community, but you can probably get help from people at StackOverflow.

Sample code:

package firstSteps;

import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

/**
 * Resource which has only one representation.
 */
public class HelloWorldResource extends ServerResource {

    @Get
    public String represent() {
        return "hello, world";
    }
}


Java and MicroProfile: Building microservices in style


Learn the steps required to design, build, deploy, and orchestrate a cloud-native microservice architecture using Java and Eclipse MicroProfile. We'll use Red Hat's MicroProfile implementation (formerly WildFly Swarm), known as Thorntail, optimized for deployment to OpenShift and integrated with Azure services.


Java vs Golang: Choosing a language for Freshdesk Microservices


Originally published at

Ruby on Rails has an amazing set of features that has aided Freshworks in delivering various product enhancements at a rapid pace. Today, the biggest challenge that we face is scaling the Ruby-based services and keeping up with the exponentially growing customer base. It becomes essential to find an alternative language with good performance that will continue to help us develop features quickly. At this point, while we are trying to break our monolithic code and extract microservices out of it, it feels apt to start considering different alternative languages.


The following are the considerations we had for selecting the language:

  • Comparing use cases
    • RESTful APIs (preferably with options to generate server code from a Swagger spec)
    • Consuming messages from one of the queues/streams like Kafka, SQS
    • Working with the following resources:
      • MySQL
      • Consuming and producing messages from/to Kafka at high volume
      • Consuming and producing messages from/to SQS
      • Redis
      • DynamoDB
    • Calling other internal and external RESTful APIs (preferably with an option to generate client code from a Swagger spec)
    • SDK for other AWS services
  • Performance
    • Latency for RESTful API services
    • Throughput for stream consumers
    • Memory footprint
  • DevOps
    • Monitoring
    • Dockerization
  • Developer productivity
    • Language features
    • Available talent pool and trainability
    • IDE
    • Formatting
    • Static code analyzers
    • Readability and maintainability of existing code
    • Code-test cycle

After initial deliberation, Golang and Java were the last two contenders.

Comparing Go and Java

Comparing Use Cases

RESTful API server

Both Java and Go have well-supported platforms for developing RESTful APIs. The swagger-codegen project and a few other tools can generate server stubs for both Java and Go.

REST API libraries in Java mostly use annotations to configure various aspects of an API, like the endpoint a method serves, the request/response format, and so on. Go libraries, on the other hand, employ explicit code for specifying routing. Hence, Go requires more code to set up endpoints. However, it can be argued that HTTP routing defined in a single place is much easier for a new developer to understand than having each class define the path it is responsible for. Similarly, annotations can be used to validate requests in Java, while Go requires validation to be hand-coded.

Given that we'll most likely generate the server stubs from the Swagger spec, all this code will be auto-generated in both scenarios, so this use case has minimal impact on the decision. Still, the mature libraries in Java give it a slight advantage here.
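To make the style difference concrete, here is a hypothetical sketch of Go-style explicit routing written in plain Java, using only the JDK's built-in HttpServer (the route, class name, and payload are illustrative, not taken from either ecosystem's frameworks). All routes are registered in one place, whereas the annotation-driven style spreads paths across resource classes.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;

public class ExplicitRouting {

    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            // Every route is visible here, in one block of code,
            // instead of being declared via annotations on handler classes.
            server.createContext("/employees", exchange -> {
                byte[] body = "[]".getBytes();   // placeholder payload
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            return server;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Small helper used to exercise the server.
    public static String get(String url) {
        try {
            return java.net.http.HttpClient.newHttpClient()
                .send(java.net.http.HttpRequest.newBuilder(java.net.URI.create(url)).build(),
                      java.net.http.HttpResponse.BodyHandlers.ofString())
                .body();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Passing port 0 asks the OS for an ephemeral port, which makes the sketch easy to try without clashing with anything already running.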

Consuming messages from one of the queues/streams like Kafka, SQS

Both Java and Go have numerous libraries to consume from each of the popular queue/stream implementations.

Given that Java has been around for a long time, it has a plethora of options available for concurrent programming. With the availability of threads at the core, there are a lot of utilities available for managing threads, distributing work across them, very flexible synchronization options, and so on. Extremely sophisticated solutions can be built using these. However, given the amount of complexity these libraries/utilities bring in, it is very common for many applications to suffer greatly from contentions and/or race conditions. Also, a lot of time will be required to understand and effectively apply the various options available. Usually, synchronization and concurrent programming are among the most troubling areas for Java candidates during interviews. Given the complexities and taking into consideration that threads are considerably expensive, the majority of the programmers choose to write big chunks of logic running in a single thread unless absolutely necessary.

On the other hand, Golang has a very simple model for concurrency based on goroutines (lightweight threads) and channels (a blocking queue). Whilst the Go standard library has support for mutual exclusion, using goroutines and channels is the idiomatic way of programming in Go in the majority of cases. This model is so simple and efficient that I expect programmers to engage with it more often and produce applications composed of many small chunks of work executed by goroutines. This will also ensure that the results are more predictable and less buggy.

Though a similar model could be built in Java, it is rarely employed. Hence, Go beats Java in this area by a big margin.
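As a rough illustration of what that "similar model" can look like in Java, here is a hypothetical sketch where a BlockingQueue plays the role of a Go channel and two threads stand in for goroutines (the class, the sentinel value, and the toUpperCase "work" are all illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChannelStylePipeline {

    // Sentinel ("poison pill") marking end-of-stream, since Java
    // queues have no equivalent of Go's close(channel).
    static final String CLOSE = "__close__";

    public static List<String> process(List<String> messages) {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);
        List<String> results = new ArrayList<>();

        // Producer "goroutine": sends each message into the channel.
        Thread producer = new Thread(() -> {
            try {
                for (String m : messages) channel.put(m);
                channel.put(CLOSE);                    // "close" the channel
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer "goroutine": receives until the channel is "closed".
        Thread consumer = new Thread(() -> {
            try {
                String m;
                while (!(m = channel.take()).equals(CLOSE)) {
                    results.add(m.toUpperCase());      // the unit of work
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results;
    }
}
```

The bounded queue even gives you Go-like backpressure: put() blocks when the consumer falls behind, just as sending on a full Go channel does.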

Working with datastores/other REST APIs

Both the languages have a huge collection of libraries for interacting with all the popular datastores.

Typically, Java libraries try to hide the details of the underlying implementation behind a custom interface. For instance, JPA has defined its own querying language that exposes associations through object fields, whereas Go libraries are normally simple wrappers on top of the underlying system. This means developers need to learn the details of the underlying system (e.g. SQL).

This is highly debatable. On one hand, the Java libraries can hide the complexities through a sophisticated interface (associations in JPA are simple object’s field access). However, if the behavior of the library has to be changed (say, making JPA support MySQL sharding), fighting the library could be a harrowing ordeal. Similarly, if something isn’t working the way we expected it to, the debugging could be very hard. Also, the learning curve could be much steeper.

Given the longer life of Java, Java libraries are generally better documented. As the Go libraries are pretty straightforward, going through the code and understanding what's going on is fairly simple, which could be preferred in some cases. Clients/SDKs for many services we use (Kafka and all of the AWS SDK) are implemented in Java first, with the Go libraries released later.

This consideration is very subjective and depends on personal preference.


Performance

As the JVM has been around for over 20 years, it has undergone a lot of tuning that in turn has provided us with amazing performance. It also gives us many options to choose from. For instance, we can choose a garbage collector tuned for real-time, low-latency workloads; another for background, high-throughput workloads; yet another for low-powered, single-core servers; and so on. Most of them use a generational memory layout, which is very efficient for a majority of use cases. Pretty much everything (size of each generation, expected pause, max/min memory, etc.) can be configured through various knobs depending on the garbage collector used. Though the basic settings give decent performance, getting every ounce of juice out of the JVM can be a daunting process.

Go, on the other hand, has a single garbage collector algorithm that is highly optimized for very low latency with GC pauses in the order of microseconds. If we look at our own Go based real-time notification server, it handles millions of messages every day with end-to-end latency of about 5ms at 95th percentile. That is impressive considering that each message hops through 4 services with persistence in between. Of course, this isn’t a silver bullet and has its own cost. As Go matures more, we can expect better collectors being implemented for catering to other use cases. Already, the Go team is working on a new collector called Request Oriented Collector, which is optimized for web server-like workloads.

Generally, the memory footprint of Go is much smaller than Java's. In our Go-based real-time notification server, some of the services run with around 70 MB of memory per process in production. With Java, pretty much nothing runs in less than 512 MB of memory. This is especially beneficial with Docker, which allows multiple services to run on the same machine.
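If you want to eyeball JVM footprint numbers like these yourself, a quick (and very rough) starting point is the Runtime API; note that the process's actual resident memory is larger than what the heap-level numbers below report:

```java
public class Footprint {

    // Heap currently in use by the JVM, in bytes.
    // This is only the Java heap: metaspace, thread stacks, and
    // native allocations are not included.
    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}
```

For real measurements you would look at the container's RSS (e.g. docker stats) rather than heap counters, but this is handy for a first impression.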

On the throughput side, each language wins different head-to-head benchmark battles about equally often. It looks like Go is gaining more ground over time and might tilt the scale in its favor as it matures.


DevOps

Java is very mature and has numerous libraries and tools for monitoring. We have enough in-house exposure, too. NewRelic offers great support for monitoring various aspects of an application, like HTTP requests and DB queries.

NewRelic also has support for Go, which covers HTTP requests and DB queries. However, as Go is still fairly young, it is unclear on how comprehensive the support is and which frameworks/libraries are covered. Go has libraries that can send standard Go runtime metrics to many of the available monitoring system like StatsD, InfluxDB, etc.

For clients of datastores that don't have NewRelic support, some already have integrations with these metrics libraries and can easily be used. If they aren't available, we might have to implement our own metrics collector, which will pass the necessary information to the metrics library.

Dockerizing both applications is pretty straightforward. However, when it comes to monitoring maturity, Java has a clear edge.

Developer productivity

Language features

The Go authors were quite deliberate in limiting the features of the language and keeping the syntax very simple. This means that Go's syntax and its core concepts can be learned pretty quickly. Java, by contrast, has of late added a huge number of features, syntactic sugar, concurrency libraries, and more.

Hence, the complexity of learning the language has increased. As mentioned earlier, learning all the nitty-gritty of implementing concurrent processing correctly takes a lot of effort. On top of this, the mostly de facto frameworks like Spring, Hibernate, Spring MVC/Jersey, and Jetty/Tomcat stretch the learning curve out for a very long time. On the other hand, once these frameworks/libraries/features are learned, they provide a big productivity boost, as they take care of a lot of complexity underneath.

Arguably, the biggest feature missing in Go is generics (aka templates in C++). Because of this, many utility functions (for instance, array.Contains()) need to be implemented for each type repeatedly. Given that Go encourages defining new types for pretty much everything (for instance, AccountID, which probably is just an integer, can be a new type), these utility functions end up repeated many times over.
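For illustration, here is what the absence of generics means in practice. The `AccountID` type and the `contains` helpers below are invented for this example, not taken from any real codebase; note that the two function bodies are character-for-character identical:

```go
package main

import "fmt"

// AccountID is a domain-specific type over a plain integer, the kind
// of new type Go code tends to define for everything.
type AccountID int

// Without generics, a "contains" helper has to be written once per
// element type, even though the logic never changes.
func containsInt(xs []int, target int) bool {
	for _, x := range xs {
		if x == target {
			return true
		}
	}
	return false
}

func containsAccountID(xs []AccountID, target AccountID) bool {
	for _, x := range xs {
		if x == target {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsInt([]int{1, 2, 3}, 2))
	fmt.Println(containsAccountID([]AccountID{10, 20}, AccountID(30)))
}
```

Code generation (`go generate` with a template tool) can stamp out such helpers mechanically, but the duplication still lives in the repository.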

Go has a code generation tool that can help to some extent here. The Go authors say there isn't any urgency in implementing generics at the moment, as it isn't a mandatory feature (the Java camp used to say the same until generics were implemented). I hope we won't have to wait too long.

Another controversial Go language choice is its approach to error handling. Errors are returned as part of a method's return value, and the caller is supposed to check the error and either handle it or propagate it further up. This causes a lot of "if err != nil" boilerplate all over the place. However, experienced Go developers claim that this produces much better error handling than exceptions in Java. I suppose I haven't had that "Wow" moment yet. On top of that, Go has panics too, which will bring the whole process down if unhandled. The recommendation is to let the process crash by not handling panics. If one event has a rare input that isn't handled properly, should the whole pipeline stop until the code is fixed? That sounds scary.
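A minimal sketch of the pattern being described, using a hypothetical `parsePort` function (the name and validation rules are mine); the `if err != nil` check shows up at every call site:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort returns a value and an error. Nothing forces the caller to
// look at the error, but by convention every call site checks it.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	if port, err := parsePort("8080"); err == nil {
		fmt.Println("listening on", port)
	}
	// The boilerplate repeats for every fallible call.
	if _, err := parsePort("oops"); err != nil {
		fmt.Println("error:", err)
	}
}
```

Whether this reads as honest, explicit control flow or as noise compared to Java's try/catch is exactly the debate the article refers to.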

The biggest advantage of Go is the simplicity of concurrent programming as mentioned earlier.
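As a small sketch of that simplicity, here is a hypothetical fan-out that runs each unit of work on its own goroutine and collects the results over a channel; the whole pattern needs only `go`, a channel, and a `sync.WaitGroup`:

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut squares each input concurrently and gathers the results.
func fanOut(inputs []int) []int {
	results := make(chan int, len(inputs))
	var wg sync.WaitGroup
	for _, n := range inputs {
		wg.Add(1)
		go func(n int) { // spawning a worker is just "go func(...)"
			defer wg.Done()
			results <- n * n
		}(n)
	}
	wg.Wait()
	close(results)

	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(fanOut([]int{1, 2, 3})) // result order is not guaranteed
}
```

The equivalent in Java typically involves an ExecutorService, Futures, and explicit shutdown handling.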

Java's lambda implementation is hacky at best. Go supports functions as first-class citizens, so passing functions as parameters and defining anonymous functions feel natural. In Go, any type that implements all the methods defined in an interface is considered to implement that interface. This is a brilliant approach and allows many elegant solutions.
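To illustrate both points, here is a small sketch (the `Notifier`/`EmailNotifier` names are invented for the example): `EmailNotifier` never declares that it implements `Notifier`; having the right method is enough. The message transform is passed in as an ordinary function value:

```go
package main

import "fmt"

// Notifier is an interface; no type ever writes "implements Notifier".
type Notifier interface {
	Notify(msg string) string
}

// EmailNotifier satisfies Notifier structurally, simply by defining
// a Notify method with the matching signature.
type EmailNotifier struct{ Addr string }

func (e EmailNotifier) Notify(msg string) string {
	return "email to " + e.Addr + ": " + msg
}

// sendAll takes a function value to transform each message, since
// functions are first-class values in Go.
func sendAll(n Notifier, msgs []string, transform func(string) string) []string {
	var out []string
	for _, m := range msgs {
		out = append(out, n.Notify(transform(m)))
	}
	return out
}

func main() {
	n := EmailNotifier{Addr: "ops@example.com"}
	// An anonymous function passed inline, with no ceremony.
	fmt.Println(sendAll(n, []string{"disk full"}, func(s string) string {
		return "[alert] " + s
	}))
}
```

Structural (implicit) interface satisfaction also means a third-party type can satisfy your interface without either package knowing about the other.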

The talent pool available for Java is much bigger than Go's. However, the simplicity of the Go language will greatly help in training new developers and might compensate for the smaller pool of available developers.

This is very subjective. I personally believe that the simplicity of the Go language outweighs most of the small productivity gains offered by Java's syntactic sugar. I feel generics is an exception and hope it gets included in Go soon. Overall, Go code is simpler and easier to follow than Java, especially for beginners.

IDEs and Tools

Both languages have high-quality, comparable IDEs. Java developers mostly use IntelliJ and Eclipse, both of which have been around for a long time and are very mature. GoLand (a Go IDE from the same company as IntelliJ) and VS Code seem to take the lead when it comes to Go. I'm a long-time user of IntelliJ and I love it. Using GoLand feels very natural and polished for such a young product. Of course, IntelliJ has far more intention actions and refactoring options, but GoLand is fast catching up.

The Go toolchain is very hard to match. Compared to it, build tools for Java like Gradle require much more setup work. Dependency management in Go has improved drastically in the last year; however, it still has some gaps in dealing with transitive dependencies. On the other hand, build tools for Java support robust dependency management schemes, though they're a bit hard to tame.

Java has static code analyzers that enforce coding guidelines and flag some potential bugs, like those that could cause a NullPointerException. Go does this much better. For instance, compilation will fail if a variable is defined but not used. Many such checks are done by the compiler itself, and many more are available through other linters, which are very easy to integrate with the Go toolchain. Go IDEs have an option to automatically format the code on save, and even if a developer isn't using an IDE, the same formatting is available through the command line. I personally like this approach, as it forces everybody to use the same formatting and avoids bikeshedding.

Readability and Maintainability of existing code

Given that straightforward code with little magic is encouraged in Go, Go code tends to be very easy to read and reason about. Everything is expressed explicitly. The very simple concurrency constructs also make it easy to understand what is going on, even when multiple things are happening at the same time. Generally, Java code is not bad either. However, when advanced concurrency constructs are used, or when heavy use of reflection introduces magic (dependency injection by Spring, the many not-so-obvious things done by JPA, etc.), developers without a good understanding of these libraries/frameworks can be left in the dark.

Go wins this one hands down.

Code-test cycle

When a developer is working on a feature, the cycle of changing code and verifying the functionality should be very fast. Likewise, during TDD, the Red-Green-Refactor loop relies heavily on compiling code and running tests quickly.

Building Go code is unbelievably fast. Builds of reasonably big codebases typically complete in under 2 seconds. Running the application or tests is also extremely fast, since Go compiles to machine code and incurs no bootstrapping delay. Compiling Java code, by contrast, is considerably slower. Incremental compilation by IDEs (recompiling only changed and dependent files) helps quite a bit, but it can still be slow. Starting the application or test suite takes a couple of seconds for the JVM to bootstrap. HotSwap (reloading modified classes without restarting the JVM) can help here, though its applicability is very limited (only method-body changes can be reloaded). Overall, running tests and restarting applications in Java can be quite slow compared to the lightning-fast compile-and-start of Go applications. I have been running a file-watcher tool (CompileDaemon) that automatically rebuilds and restarts the app on every file save. The app is ready to serve requests before I can even switch from the IDE to the Postman client, which is very convenient!

Conclusion

Go is a very simple language with many advantages. Given that the language is very easy to learn and the libraries are generally straightforward, bringing in people from other languages should be fairly easy. The extremely simple concurrency constructs also lower the barrier for newcomers. As we expect the majority of our microservices to be simple and single-purpose, Go lends itself well to those use cases. The whole team can be trained on Go easily and become productive very fast.

If a service is complex, with a lot of DB tables and numerous API endpoints, we feel that the productivity boost coming from the richer syntax of Java and the abstraction of complexity by libraries like JPA could be considerable enough to pay off the training effort. In such situations, Java could be considered based on the use case.
