In this video, I talked about Docker Secrets, a tool that provides encrypted management of sensitive and confidential information in Docker. Together, we worked through examples of handling important data such as passwords and database credentials.
#dockertutorial #whatisdocker #dockersecret
Basically, both Kubernetes and Docker Swarm are container orchestration tools. The rise in interest in containers has in turn brought higher demand for tools to deploy and manage them. Both Kubernetes and Docker Swarm are important tools used to deploy containers inside a cluster, so the question arises: which one should you use?
So let's discuss them one by one and look at the differences between them.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Kubernetes is an open-source, portable, and extensible platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes manages the containers that run the applications and ensures that there is no downtime, even in a huge-scale production environment.
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines. Docker Swarm is designed around four key principles.
From this you can see that the two are, in some ways, the same, so now let's check out the differences:
#devops #docker #docker swarm #kubernetes #swarm
Following the second video about Docker basics, in this video I explain the Docker architecture and the different building blocks of the Docker Engine: the Docker client, the API, and the Docker daemon. I also explain what a Docker registry is, and I finish the video with a demo explaining and illustrating how to use Docker Hub.
In this video lesson you will learn:
#docker #docker hub #docker host #docker engine #docker architecture #api
Docker Swarm has an excellent feature out of the box: Docker Swarm secrets. Using it, you can easily keep your sensitive data, such as credentials and TLS certificates, safe.
In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code. You can use Docker secrets to centrally manage this data and securely transmit it to only those containers that need access to it.
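As a sketch of this workflow (the secret and service names here are examples of mine, not from the original, and the commands require a node running in Swarm mode):

```shell
# Secrets only exist in Swarm mode, so initialize it first if needed
docker swarm init

# Create a secret from stdin; "db_password" is an example name
echo "s3cret-value" | docker secret create db_password -

# List secrets; note that secret values can never be read back via the CLI
docker secret ls

# Grant a service access to the secret; inside the container it appears
# as an in-memory file at /run/secrets/db_password
docker service create --name db --secret db_password postgres
```

The application then reads the secret from `/run/secrets/<name>` at runtime instead of from an environment variable or a baked-in config file.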
So, if we want to use it to store our certificates, first we need a certificate. Here we have two options:
We will use a self-signed certificate:
$ mkdir certs && sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./certs/nginx.key -out ./certs/nginx.crt
The command above generates a certificate that expires in one year and places it in the ./certs/ directory.
Now we have the key and crt files, and we can already use them. But besides that, we should always monitor the certificate's expiration date. There are a few ways to do this, but they are out of scope for this topic. Just keep in mind that you can use alerts on the certificate expiration date (Prometheus + Blackbox exporter) to trigger a script, which in turn updates the secret with the renewed certificate.
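One possible building block for such monitoring is a small shell helper around `openssl x509 -checkend`, which exits non-zero when a certificate expires within a given window (the function name and the 30-day default are my assumptions):

```shell
# Warn if a certificate expires within N days (default 30).
# -checkend takes a window in seconds and exits non-zero if the
# certificate expires inside that window.
check_cert_expiry() {
  local cert="$1" days="${2:-30}"
  if openssl x509 -checkend "$((days * 86400))" -noout -in "$cert" >/dev/null; then
    echo "OK: $cert is valid for at least $days more days"
  else
    echo "WARNING: $cert expires within $days days"
  fi
}

# Example (after generating the certificate as above):
# check_cert_expiry ./certs/nginx.crt 30
```

A WARNING here could then trigger the certificate renewal and a `docker secret` update.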
As the next step, we need to create an Nginx Docker service with our certificate, using a docker-compose file with a secrets section.
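A minimal sketch of what such a compose file could look like (the service name, image tag, and file paths are assumptions, not the author's original file):

```yaml
version: "3.8"

services:
  nginx:
    image: nginx:stable
    ports:
      - "443:443"
    secrets:
      - nginx.crt
      - nginx.key

secrets:
  nginx.crt:
    file: ./certs/nginx.crt
  nginx.key:
    file: ./certs/nginx.key
```

In Swarm mode, each secret is mounted into the container at `/run/secrets/<name>`, so the Nginx config would point `ssl_certificate` and `ssl_certificate_key` at `/run/secrets/nginx.crt` and `/run/secrets/nginx.key`, and the stack would be deployed with `docker stack deploy`.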
#devops #docker #ssl #nginx #docker swarm #swarm
In this guide, we will talk about setting up a Selenium Grid using Docker Swarm on any of the cloud services like GCP or AWS.
Let’s start with the basics first, i.e. what Selenium Grid and Docker Swarm are.
Selenium Grid allows the execution of WebDriver scripts on remote machines (virtual or real) by routing commands sent by the client to remote browser instances. It aims to provide an easy way to run tests in parallel on multiple machines.
Selenium Grid allows us to run tests in parallel on multiple machines, and to manage different browser versions and browser configurations centrally (instead of in each individual test).
Generally speaking, there are two reasons why you might want to use Grid.
Grid is used to speed up the execution of a test pass by using multiple machines to run tests in parallel. For example, if you have a suite of 100 tests, but you set up Grid to support 4 different machines (VMs or separate physical machines) to run those tests, your test suite will complete in (roughly) one-fourth the time as it would if you ran your tests sequentially on a single machine.
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
One of the key benefits of running a Docker swarm is the high level of availability offered for applications. In a Docker swarm, there are typically several worker nodes and at least one manager node that is responsible for handling the worker nodes’ resources and ensuring that the cluster operates efficiently.
Docker Swarm has two types of services: replicated and global.
**Replicated services:** With replicated services, you specify the number of replica tasks for the swarm manager to assign to available nodes.
**Global services:** With global services, the swarm manager schedules one task on each available node that meets the service’s constraints and resource requirements.
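The two modes can be sketched with `docker service create` (the service names and images here are illustrative examples, and the commands need a running Swarm):

```shell
# Replicated mode: the swarm manager places the requested number of
# replicas on whatever nodes have capacity
docker service create --name web --replicas 3 nginx:stable

# Global mode: exactly one task on every node that meets the service's
# constraints; nodes joining later automatically receive a task too
docker service create --name node-exporter --mode global prom/node-exporter
```

Replicated mode suits stateless workloads you scale by count; global mode suits per-node agents such as monitoring exporters or log shippers.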
#docker-swarm #docker #selenium #docker swarm
Intro to volumes and storage.
TL;DR: Overview of how Docker storage and volumes work and how to manage them.
In this fourth part of the Dockerventure series, we are going to focus on how Docker handles storage, how it manages container file systems, and showcase how we can effectively manage our data with volumes.
Default Docker File System
By default, at creation time, Docker creates the directory /var/lib/docker, where it stores all of its data regarding containers, images, volumes, etc.
When a new container is started, a new **read-write container layer** is added on top of the read-only image layers that were created during the build phase.
This container layer exists only while the container exists; when the container is killed, this layer, along with all the changes we made on top of it, is lost.
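Volumes are how data outlives that writable layer. A short sketch (the volume name `app-data` and the alpine image are arbitrary examples; the commands need a local Docker daemon):

```shell
# Create a named volume; its data is stored outside any container
docker volume create app-data

# Mount the volume into a container and write to it
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/greeting'

# A second container sees the same data, because the volume lives
# outside the container's writable layer
docker run --rm -v app-data:/data alpine cat /data/greeting

# Inspect where Docker keeps the volume on the host
# (under /var/lib/docker/volumes/ by default)
docker volume inspect app-data
```

Even though both containers were removed (`--rm`), the second `docker run` still prints the file written by the first, because the volume persists independently of any container lifecycle.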
#docker-mount #docker-volume #docker-storage #docker