1598029020
Great Learning brings you this live session on “Docker Swarm step-by-step”. Docker Swarm is an orchestration tool put out by the Docker organization to compete with Kubernetes, another orchestration service. This live session will help you cover Docker Swarm step by step. We will be covering the following topics,
Once you are done learning all these concepts, you will have an adequate idea of what Docker Swarm is, and you can then apply the concepts learned here to actual application deployment.
#docker #devops
1601301859
Both Kubernetes and Docker Swarm are container orchestration tools. The rise of interest in containers has in turn brought higher demand for their deployment and management. Both Kubernetes and Docker Swarm are important tools used to deploy containers inside a cluster. So the question that arises here is: which one should you use?
So let's discuss them one by one and see the differences between them.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Kubernetes is an open-source, portable, and extensible platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes manages the containers that run the applications and ensures that there is no downtime, even in a huge-scale production environment.
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines. Docker Swarm is designed to work around four key principles:
Here you can see that, in some manner, both are the same. So now let's check out the differences:
#devops #docker #docker swarm #kubernetes #swarm
1595249460
Following the second video about Docker basics, in this video I explain Docker architecture and the different building blocks of the Docker Engine: the Docker client, the API, and the Docker daemon. I also explain what a Docker registry is, and I finish the video with a demo explaining and illustrating how to use Docker Hub.
In this video lesson you will learn:
#docker #docker hub #docker host #docker engine #docker architecture #api
1624332660
In this guide, we will talk about setting up a Selenium Grid using Docker Swarm on any cloud service, such as GCP or AWS.
Let’s start with the basics first, i.e. what is Selenium Grid and Docker Swarm.
Selenium Grid allows the execution of WebDriver scripts on remote machines (virtual or real) by routing commands sent by the client to remote browser instances. It aims to provide an easy way to run tests in parallel on multiple machines.
Selenium Grid allows us to run tests in parallel on multiple machines, and to manage different browser versions and browser configurations centrally (instead of in each individual test).
Generally speaking, there are two reasons why you might want to use Grid.
Grid is used to speed up the execution of a test pass by using multiple machines to run tests in parallel. For example, if you have a suite of 100 tests, but you set up Grid to support 4 different machines (VMs or separate physical machines) to run those tests, your test suite will complete in (roughly) one-fourth the time as it would if you ran your tests sequentially on a single machine.
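To make this concrete, here is a minimal sketch of how a Grid could be started as swarm services. The image tags, service names, and network name below are illustrative assumptions, and HUB_HOST is the variable the Selenium 3 node images use to find the hub:

$ docker network create --driver overlay grid
$ docker service create --name selenium-hub --network grid -p 4444:4444 selenium/hub:3.141.59
$ docker service create --name chrome-node --network grid --replicas 4 -e HUB_HOST=selenium-hub selenium/node-chrome:3.141.59

With four chrome-node replicas, the 100-test suite from the example above would fan out across four browser instances at once.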
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
One of the key benefits associated with operating a Docker swarm is the high level of availability offered for applications. In a Docker swarm, there are typically several worker nodes and at least one manager node that is responsible for handling the worker nodes' resources and ensuring that the cluster operates efficiently.
Docker Swarm has two types of services: replicated and global.
**Replicated services:** Swarm-mode replicated services work by you specifying the number of replica tasks for the swarm manager to assign to available nodes.
**Global services:** Global services work by using the swarm manager to schedule one task on each available node that meets the service's constraints and resource requirements.
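As a quick, hedged illustration (the service and image names here are assumptions, not from this guide), the two modes correspond to the following commands:

$ docker service create --name web --replicas 3 nginx
$ docker service create --name agent --mode global nginx

The first asks the swarm manager to keep exactly three tasks running somewhere in the cluster; the second places one task on every eligible node.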
#docker-swarm #docker #selenium #docker swarm
1617996180
Docker Swarm has an excellent feature out of the box: Docker Swarm secrets. Using it, you can safely store your sensitive data, such as credentials, TLS certificates, etc.
In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code. You can use Docker secrets to centrally manage this data and securely transmit it to only those containers that need access to it.
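For example, a secret can be created from standard input and then listed (the name db_password is purely illustrative):

$ echo "my-db-password" | docker secret create db_password -
$ docker secret ls

Swarm then exposes the secret to authorized services as an in-memory file at /run/secrets/db_password inside their containers.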
So, if we want to use it to store our certificates, first we need a certificate. Here we have two options:
We will use self-signed:
$ mkdir certs && sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./certs/nginx.key -out ./certs/nginx.crt
The command above generates a certificate that expires in 1 year and places it in the ./certs/ directory.
Now we have the key and crt files, and we can already use them. But besides that, we should always monitor the certificate expiration date. Sure, there are a few ways to do it, but that is out of scope for this topic. Just keep in mind that you can use alerts (Prometheus + Blackbox exporter) on the certificate expiration date to trigger your script, which in turn updates the secret with the renewed certificate.
As the next step, we need to create an Nginx Docker service with our certificate. Here is a docker-compose file with a secrets section:
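Since the file itself is not reproduced here, the following is a minimal sketch of what it might look like; the service name, image tag, and published port are assumptions:

version: "3.7"
services:
  nginx:
    image: nginx:latest
    ports:
      - "443:443"
    secrets:
      - nginx.crt
      - nginx.key
secrets:
  nginx.crt:
    file: ./certs/nginx.crt
  nginx.key:
    file: ./certs/nginx.key

Deployed with docker stack deploy -c docker-compose.yml nginx, the swarm mounts the secrets inside the container at /run/secrets/nginx.crt and /run/secrets/nginx.key, where the Nginx configuration can reference them.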
#devops #docker #ssl #nginx #docker swarm #swarm
1615124700
A swarm consists of multiple Docker hosts that run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles.
When Docker is running in swarm mode, you can still run standalone containers on any of the Docker hosts participating in the swarm, as well as swarm services. A key difference between standalone containers and swarm services is that only swarm managers can manage a swarm, while standalone containers can be started on any daemon.
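To see the difference in practice, compare a standalone container with a swarm service (nginx here is just an example image):

$ docker run -d nginx            # standalone container; can be started on any Docker daemon
$ docker service create nginx    # swarm service; accepted only by a manager node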
In this demonstration, we will see how to configure a Docker swarm and how to perform basic tasks.
192.168.33.76 managernode.unixlab.com
192.168.33.77 workernode1.unixlab.com
192.168.33.78 workernode2.unixlab.com
3. Each node should have at least 2 GB of memory and at least 2 CPU cores.
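Assuming the three hosts listed above, the basic configuration would look roughly like this (the worker join token is printed by the init command; the placeholder below stands in for it):

On managernode.unixlab.com:
$ docker swarm init --advertise-addr 192.168.33.76

On workernode1 and workernode2:
$ docker swarm join --token <worker-token> 192.168.33.76:2377

Back on the manager, running docker node ls should then show all three nodes in the swarm.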
#docker #containers #container-orchestration #docker-swarm