August Murray

Docker Swarm: Container Orchestration Using Docker Swarm

Introduction

A swarm consists of multiple Docker hosts that run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles.

When Docker is running in swarm mode, you can still run standalone containers on any of the Docker hosts participating in the swarm, as well as swarm services. A key difference between standalone containers and swarm services is that only swarm managers can manage a swarm, while standalone containers can be started on any daemon.

In this demonstration, we will see how to configure Docker Swarm and how to perform basic tasks.

Pre-requisites

  1. For our demonstration, we will be using CentOS 7.
  2. We will be using 3 machines for our lab: 1 as the swarm manager node and 2 as swarm worker nodes. The servers have the following IP details:

192.168.33.76 managernode.unixlab.com

192.168.33.77 workernode1.unixlab.com

192.168.33.78 workernode2.unixlab.com

  3. Each node should have at least 2 GB of memory and at least 2 CPU cores.
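With the nodes prepared, bringing up the swarm takes only a couple of commands. Here is a minimal sketch, assuming Docker is already installed on all three hosts and using the manager IP listed above (the `SWMTKN-...` token is a placeholder for the real token that `init` prints):

```shell
# On the manager node: initialize the swarm, advertising its own IP
docker swarm init --advertise-addr 192.168.33.76

# The init command prints a worker join token; it can also be
# retrieved again later with:
docker swarm join-token worker

# On each worker node: join the swarm using that token
# (SWMTKN-... is a placeholder for the actual token)
docker swarm join --token SWMTKN-... 192.168.33.76:2377

# Back on the manager: verify that all three nodes are listed
docker node ls
```
These commands must run against a live Docker daemon on each host; `docker node ls` should show one node with manager status `Leader` and two workers in state `Ready`.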

#docker #containers #container-orchestration #docker-swarm



August Murray

Docker Swarm: Performing Rolling Upgrade of a Service

As per business requirements, we might want to perform rolling updates to our services, for configuration changes or new Docker image versions, without any downtime.

In this part of the tutorial, we will deploy a service based on the Redis 3.0.6 container tag. Then we will upgrade the service to the Redis 3.0.7 container image using a rolling update.

Pre-requisites

  1. For our demonstration, we will be using CentOS 7.
  2. We will be using 3 machines for our lab: 1 as the swarm manager node and 2 as swarm worker nodes. The servers have the following IP details:

192.168.33.76 managernode.unixlab.com

192.168.33.77 workernode1.unixlab.com

192.168.33.78 workernode2.unixlab.com

  3. Each node should have at least 2 GB of memory and at least 2 CPU cores.

  4. You have already configured Docker Swarm. Read my previous article to understand how to configure it.
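The rolling update described above can be sketched with the standard `docker service` commands; the service name, replica count, and update delay here are illustrative:

```shell
# On the manager: create a replicated Redis service pinned to 3.0.6,
# with a 10-second delay between updating each task
docker service create \
  --replicas 3 \
  --name redis \
  --update-delay 10s \
  redis:3.0.6

# Trigger a rolling update to the 3.0.7 image; tasks are replaced
# one at a time, honoring the configured delay
docker service update --image redis:3.0.7 redis

# Inspect the tasks to watch the update progress and final state
docker service ps redis
```
Because `--update-delay` was set at creation time, the update pauses between tasks, so at least some replicas keep serving traffic throughout the upgrade.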

#container-orchestration #rolling-updates #docker-container #docker #docker-swarm

Kubernetes vs Docker Swarm

· Installation and cluster configuration

Kubernetes:

Setting up a cluster manually is complex, and the configuration differs between operating systems. It requires a lot of pre-planning, components like storage and networking need explicit configuration, and additional tooling such as kubectl is required.

Docker Swarm:

Installing a Docker Swarm cluster is simple. It only takes a few commands to set up a cluster and then add further worker or manager nodes. The setup is also OS-independent, so developers don’t have to spend time learning new commands for each OS.

· Load balancing

Kubernetes:

It has to be set up manually but is not very complicated. An Ingress can be used for load balancing, and Pods are exposed as Services.
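As a rough sketch of what "Pods are exposed as a Service" looks like in practice (the Deployment name `web` and the ports are hypothetical):

```shell
# Expose a Deployment's Pods behind a cluster-internal Service on port 80
kubectl expose deployment web --port=80 --target-port=8080

# Or expose it externally through a cloud load balancer instead
kubectl expose deployment web --port=80 --type=LoadBalancer
```
An Ingress resource can then route external HTTP traffic to such Services, which is where the manual load-balancing setup mentioned above comes in.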

Docker Swarm:

Load balancing is provided by default, and ports are assigned automatically. All containers in a cluster join a common network.
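For example, publishing a port on a service makes Swarm's routing mesh balance requests across all replicas, reachable on every node in the cluster (the service name, image, and ports here are illustrative):

```shell
# Create a 3-replica nginx service with port 80 published on 8080;
# requests to port 8080 on ANY swarm node are load-balanced
# across the replicas by the routing mesh
docker service create \
  --name web \
  --replicas 3 \
  --publish published=8080,target=80 \
  nginx
```
No extra load-balancer configuration is needed, which is the "done by default" behavior described above.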

#docker #container-orchestration #containers #kubernetes #docker-swarm

Mikel Okuneva

Ever Wondered Why We Use Containers In DevOps?

At some point we’ve all said the words, “But it works on my machine.” It usually happens during testing or when you’re trying to get a new project set up. Sometimes it happens when you pull down changes from an updated branch.

Every machine has different underlying states depending on the operating system, other installed programs, and permissions. Getting a project to run locally could take hours or even days because of weird system issues.

The worst part is that this can also happen in production. If the server is configured differently than what you’re running locally, your changes might not work as you expect and cause problems for users. There’s a way around all of these common issues using containers.

What is a container

A container is a piece of software that packages code and its dependencies so that the application can run in any computing environment. They basically create a little unit that you can put on any operating system and reliably and consistently run the application. You don’t have to worry about any of those underlying system issues creeping in later.

Although containers had been used in Linux for years, they have become much more popular recently. Most of the time when people talk about containers, they’re referring to Docker containers. These containers are built from images that include all of the dependencies needed to run an application.

When you think of containers, virtual machines might also come to mind. They are very similar, but the big difference is that containers virtualize the operating system instead of the hardware. That’s what makes them so easy to run on all of the operating systems consistently.

What containers have to do with DevOps

The same odd issues that occur when you move code from one computing environment to another also arise when moving code between the environments in a DevOps process. You don’t want to have to deal with system differences between staging and production. That would require more work than it should.

Once you have an artifact built, you should be able to use it in any environment from local to production. That’s the reason we use containers in DevOps. It’s also invaluable when you’re working with microservices. Docker containers used with something like Kubernetes will make it easier for you to handle larger systems with more moving pieces.

#devops #containers #containers-devops #devops-containers #devops-tools #devops-docker #docker #docker-image

Lindsey Koepp

AWS Bottlerocket vs. Google Container-Optimized OS: Which Should You Use and When

What’s the difference between popular Container-Centric OS choices, Google’s Container-Optimized OS, and AWS’s Bottlerocket? The concepts underlying containers have been around for many years. Container technologies like Docker, Kubernetes, and an entire ecosystem of products, as well as best practices, have emerged in the last few years. This has enabled different kinds of applications to be containerized.

Web service providers like Amazon AWS and Google are giving a further boost to container innovation, for enterprises to adopt and use containers at scale. This will help them to reap the benefits containers bring, including increased portability and greater efficiency.

AWS Bottlerocket is a new Linux-based OS option, designed for running containers on virtual machines (VMs) or bare-metal hosts. In this article, you will learn the core uses of, and differences between, the two open-source OSs.

**AWS Bottlerocket**

It is an open-source, stripped-down Linux distribution, similar to projects like Google’s Container-Optimized OS. Its single-step update process helps reduce management overhead.

_It makes OS updates easy to automate using container orchestration services such as Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)._

**Google Container-Optimized OS**

It’s an OS image for Google Compute Engine VMs that’s optimized for running Docker containers. It allows you to bring up your Docker containers on Google Cloud Platform securely and quickly. It is based on the open-source Chromium OS project and is maintained by Google.

But before diving into the core differences, let us give you a basic overview of containers, VMs, and container-optimized OSs, and their underlying challenges, to better understand the differences.

If you are already aware of all the underlying processes of containers, then you can skip ahead to the main differences between AWS Bottlerocket and Google Container-Optimized OS.

#containers #amazon-aws #google-cloud #container-optimized-os #aws-containers #docker-containers #linux-based-os #orchestration