Shreya Joshi

Building microservice applications with Kubernetes and Docker

Docker is an open source platform that’s used to build, ship and run distributed services. Kubernetes is an open source orchestration platform for automating deployment, scaling and the operations of application containers across clusters of hosts. Microservices structure an application into several modular services. Here’s a quick look at why these are so useful today.

In one of my previous posts, I described an example of a continuous delivery configuration for building microservices with Docker and Jenkins. It was a simple configuration where I decided to use only the Docker Pipeline Plugin for building and running containers with microservices. That solution had one big disadvantage – we had to link all the containers to one another to enable communication between the microservices deployed inside them. Today I’m going to present a smarter solution that helps us avoid that problem – Kubernetes.

Theory

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It was originally designed by Google. It has many features which are especially useful for applications running in production, like service naming and discovery, load balancing, application health checking, horizontal auto-scaling, and rolling updates. There are several important concepts around Kubernetes we should know before going into the sample.

Pod – This is the basic unit in Kubernetes. It can consist of one or more containers that are guaranteed to be co-located on the host machine and share the same resources. All containers deployed inside a pod can reach one another via localhost. Each pod has a unique IP address within the cluster.

Service – This is a set of pods that work together. By default, a service is exposed inside a cluster but it can also be exposed onto an external IP address outside your cluster. We can expose it using one of four available behaviors: ClusterIP, NodePort, LoadBalancer and ExternalName.

Replication Controller – This is a specific type of Kubernetes controller. It handles replication and scaling by running a specified number of copies of a pod across the cluster. It is also responsible for pod replacement if the underlying node fails.

Minikube

The configuration of a highly available Kubernetes cluster is not an easy task. Fortunately, there is a tool that makes it easy to run Kubernetes locally – Minikube. It can run a single-node cluster inside a VM, which is really useful for developers who want to try it out. Getting started is easy. On Windows, for example, you download minikube.exe and kubectl.exe and add them to the PATH environment variable. Then you can start the cluster from the command line with the minikube start command and use almost all Kubernetes features by calling the kubectl command. An alternative to the command line is the Kubernetes Dashboard, which can be launched by calling the minikube dashboard command. From the dashboard we can create, update, or delete deployments, and also list and view the configuration of all pods, services, ingresses, replication controllers, etc. Here’s the Kubernetes Dashboard with the list of deployments for our sample:

[Image: Kubernetes Dashboard – list of deployments]
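If you prefer the command line over the dashboard, the whole local setup boils down to a few commands. This is just a sketch – the exact output and required flags depend on your Minikube version and hypervisor:

minikube start                 # start the single-node cluster inside a VM
kubectl get nodes              # verify that the node is up and Ready
minikube dashboard             # open the Kubernetes Dashboard in a browser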

Application

The concept of our sample microservices architecture is pretty similar to the one from my article about continuous delivery with Docker and Jenkins, which I mentioned at the beginning of this article. We again have account and customer microservices. The customer service calls the account service when searching for a customer’s accounts. We do not use gateway (Zuul) and discovery (Eureka) Spring Boot services, because Kubernetes provides such mechanisms out of the box. Here’s a picture illustrating the architecture of the presented solution. Each microservice’s pod consists of two containers: the first with the microservice application, and the second with a Mongo database. The account and customer microservices each have their own database where all data is stored. Each pod is exposed as a service and can be looked up by name on Kubernetes. We also configure a Kubernetes Ingress, which acts as a gateway for our microservices.
[Image: Architecture of the sample solution]

The sample application source code is available on GitHub. It consists of two modules: account-service and customer-service. It is based on the Spring Boot framework, but doesn’t use any Spring Cloud projects except the Feign client. Here’s the Dockerfile from the account service. We use a small OpenJDK image – alpine. Thanks to that, the resulting image is about 120 MB instead of roughly 650 MB with the standard openjdk base image.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

To enable MongoDB support, I add the spring-boot-starter-data-mongodb dependency to pom.xml. We also have to provide the connection settings in application.yml and annotate the entity class with @Document. The last thing is to declare a repository interface extending MongoRepository, which already provides the basic CRUD methods. We add two custom finder methods:

public interface AccountRepository extends MongoRepository<Account, String> {
    public Account findByNumber(String number);
    public List<Account> findByCustomerId(String customerId);
}
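For reference, here’s a minimal sketch of what the Account entity annotated with @Document could look like. Only the number and customerId fields are implied by the finder methods above; the rest of the class is an assumption, not code taken from the original repository:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Account {

    @Id
    private String id;          // MongoDB document id
    private String number;      // used by findByNumber
    private String customerId;  // used by findByCustomerId

    // getters and setters omitted for brevity

}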

In the customer service, we are going to call an API method from the account service. Here’s the declarative REST client declaration (@FeignClient). All the pods with the account service are available under the account-service name and the default service port – 2222. These settings are the result of the service configuration on Kubernetes, which I will describe in the next section.

@FeignClient(name = "account-service", url = "http://account-service:2222")
public interface AccountClient {
    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") String customerId);
}
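For the @FeignClient interface to be picked up, Feign scanning has to be enabled in the customer service’s main class. Here’s a minimal sketch, assuming the Spring Cloud Feign starter is on the classpath; the class name is hypothetical, and in newer Spring Cloud releases the annotation lives in the org.springframework.cloud.openfeign package instead:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
public class CustomerApplication {

    public static void main(String[] args) {
        SpringApplication.run(CustomerApplication.class, args);
    }

}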

The Docker image of each microservice can be built with the commands below. After the build, you should push the image to the official Docker Hub or your private registry. In the next section, I’ll describe how to use these images on Kubernetes. Docker images of the described microservices are also available in my public Docker Hub repositories as piomin/account-service and piomin/customer-service.

docker build -t piomin/account-service .
docker push piomin/account-service

Deployment

You can create a deployment on Kubernetes using the kubectl run command, the Minikube dashboard, or YAML configuration files with the kubectl create command. I’m going to show you how to create all resources from YAML configuration files, because we need to create multi-container deployments in one step. Here’s the deployment configuration file for account-service. We have to provide the deployment name, image name, and exposed port. In the replicas property, we set the requested number of pods.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: account-service
  labels:
    run: account-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: account-service
    spec:
      containers:
      - name: account-service
        image: piomin/account-service
        ports:
        - containerPort: 2222
          protocol: TCP
      - name: mongo
        image: library/mongo
        ports:
        - containerPort: 27017
          protocol: TCP

We create the new deployment by running the command below. The same command is used for creating services and ingresses – only the YAML file is different:

kubectl create -f deployment-account.yaml
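After the command completes, you can verify that everything has been created directly from the command line – an optional check, since the dashboard shows the same information:

kubectl get deployments
kubectl get pods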

Now, let’s take a look at the service configuration file. We have already created the deployment; as you could see in the dashboard, the image has been pulled from Docker Hub and the pod and replica set have been created. Now we would like to expose our microservice outside the cluster, which is why a service is needed. We also expose the Mongo database on its default port, so that we can connect to the database and create collections from a MongoDB client.

kind: Service
apiVersion: v1
metadata:
  name: account-service
spec:
  selector:
    run: account-service
  ports:
    - name: port1
      protocol: TCP
      port: 2222
      targetPort: 2222
    - name: port2
      protocol: TCP
      port: 27017
      targetPort: 27017
  type: NodePort


After creating a similar configuration for the customer service, we have both microservices exposed. Inside Kubernetes, they are visible under their service names on the default ports (2222 and 3333). That’s why inside the customer service REST client (@FeignClient) we declared the URL http://account-service:2222. No matter how many pods have been created, the service will always be available on that URL, and requests are load balanced between all pods. If we would like to access a service from outside Kubernetes, for example in a web browser, we need to call it on the NodePort that was assigned to it – in this sample it is port 31638 for the account service and port 31171 for the customer service. If you run Minikube on Windows, your Kubernetes cluster is probably available under the 192.168.99.100 address, so you could try to call the account service using the URL http://192.168.99.100:31638/accounts. Before such a test, you need to create a collection in the Mongo database and the user micro/micro, which is configured for that service inside application.yml.
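The NodePort numbers above come from my environment. To find the ports assigned in yours, you can ask Kubernetes or Minikube directly, for example:

kubectl get service account-service      # shows the NodePort mapped to each service port
minikube service account-service --url   # prints the external URL(s) of the service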


OK, we now have our two microservices available on two different ports. That is not exactly what we need. We need some kind of gateway, available on a single address, which proxies our requests to the right service by matching the request path. Fortunately, such an option is also available on Kubernetes: Ingress. Here’s the YAML ingress configuration file. There are two rules defined, the first for account-service and the second for customer-service. Our gateway is available under the micro.all host name and the default HTTP port.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: micro.all
    http:
      paths:
      - path: /account
        backend:
          serviceName: account-service
          servicePort: 2222
      - path: /customer
        backend:
          serviceName: customer-service
          servicePort: 3333
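Note that a plain Minikube installation does not necessarily run an ingress controller out of the box. Depending on your Minikube version, you may first need to enable the bundled NGINX ingress addon:

minikube addons enable ingress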

To make the gateway reachable by host name, we also need to add the following entry to the system hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows). After that, you can call http://micro.all/accounts or http://micro.all/customers/{id} from your web browser – the latter also calls the account service in the background.

[MINIKUBE_IP] micro.all
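The [MINIKUBE_IP] placeholder is the address returned by the minikube ip command. On Linux or macOS the entry can be appended in one step, as in the sketch below; on Windows, edit the hosts file manually as an administrator:

echo "$(minikube ip) micro.all" | sudo tee -a /etc/hosts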

Kubernetes is a great tool for microservices clustering and orchestration. It is still a relatively new solution under active development. It can be used together with the Spring Boot stack or as an alternative to Spring Cloud Netflix OSS, which currently seems to be the most popular solution for microservices. It also has a UI dashboard where you can manage and monitor all resources. A production-grade configuration is certainly more complicated than a single-host development setup with Minikube, but I don’t think that is a solid argument against Kubernetes.


Thanks for reading!

#microservices #kubernetes #docker
