1625283480
Podman vs Docker in comparison! We'll talk about what Podman is, how it works, and whether you should consider switching from Docker to Podman for better security.
LINKS:
Documentation - https://podman.io
Installation Instructions - https://podman.io/getting-started/installation
Portainer: https://youtu.be/ljDI5jykjE8
Watchtower: https://youtu.be/5lP_pdjcVMo
Learn more about Docker: https://www.youtube.com/playlist?list=PLj-2elZxVPZ8k8z6a2q6-J79Y-9BUQllW
BECOME A MEMBER AND BE A CODE HERO!
Get Help & Chat: https://discord.com/invite/bz2SN7d
Support me: https://www.youtube.com/channel/UCZNhwA1B5YqiY1nLzmM0ZRg/join
AWESOME COURSES & TRAINING:
ITProTV-*: https://itpro.tv/thedigitallife
HOST YOUR APPS & SERVERS:
DigitalOcean-*: https://m.do.co/c/e9f31a8c7756
FOLLOW ME EVERYWHERE:
Twitter News: https://twitter.com/christian_tdl
Instagram Vlog: https://instagram.com/christian_tdl
GitHub Projects: https://github.com/xcad2k
Gaming and Coding: https://twitch.tv/The_Digital_Life_
Read my Blog: https://www.the-digital-life.com
OTHER COOL STUFF:
My Equipment: https://kit.co/thedigitallife
Geek Merch: https://the-digital-life-store.creator-spring.com/
TIMESTAMPS:
00:00 - Introduction
01:00 - What is wrong with Docker? Why replace it with Podman?
02:25 - How Podman works
05:05 - Podman and sudo privileges
06:33 - Should you stop using Docker now?
All links with "*" are affiliate links.
#podman #docker
1595249460
Following the second video about Docker basics, in this video I explain the Docker architecture and the building blocks of the Docker Engine: the Docker client, the API, and the Docker daemon. I also explain what a Docker registry is, and I finish the video with a demo illustrating how to use Docker Hub.
In this video lesson you will learn:
#docker #docker hub #docker host #docker engine #docker architecture #api
1598839687
If you are undertaking mobile app development for your start-up or enterprise, you are likely wondering whether to use React Native. As a popular development framework, React Native helps you develop near-native mobile apps. However, you are probably also wondering how close you can get to a native app by using React Native. How native is React Native?
In this article, we discuss the similarities between native mobile development and development using React Native. We also touch upon where they differ and how to bridge the gaps. Read on.
Let's set the context first by briefly touching upon what React Native is and how it differs from earlier hybrid frameworks.
React Native is a popular open-source JavaScript framework created by Facebook. You can use it to code natively rendering Android and iOS mobile apps, and to develop web apps too.
Facebook has developed React Native based on React, its JavaScript library. The first release of React Native came in March 2015. At the time of writing this article, the latest stable release of React Native is 0.62.0, and it was released in March 2020.
Although relatively new, React Native has acquired a high degree of popularity. The "Stack Overflow Developer Survey 2019" report identifies it as the 8th most loved framework. Facebook, Walmart, and Bloomberg are some of the top companies that use React Native.
The popularity of React Native comes from its advantages, which include the following:
Are you wondering whether React Native is just another of those hybrid frameworks like Ionic or Cordova? It's not! React Native is fundamentally different from these earlier hybrid frameworks.
React Native is very close to native. Consider the following aspects as described on the React Native website:
Due to these factors, React Native offers many more advantages compared to those earlier hybrid frameworks. We now review them.
#android app #frontend #ios app #mobile app development #benefits of react native #is react native good for mobile app development #native vs #pros and cons of react native #react mobile development #react native development #react native experience #react native framework #react native ios vs android #react native pros and cons #react native vs android #react native vs native #react native vs native performance #react vs native #why react native #why use react native
1619564940
If you have recently come across the world of containers, it's probably not a bad idea to understand the underlying elements that work together to offer containerisation benefits. But before that, there's a question that you may ask. What problem do containers solve?
After building an application in a typical development lifecycle, the developer sends it to the tester for testing purposes. However, since the development and testing environments are different, the code fails to work.
Now, predominantly, there are two solutions to this: either you use a virtual machine or a containerised environment such as Docker. In the old days, organisations used to deploy VMs for running multiple applications.
So, why did they start adopting containerisation over VMs? In this article, we will provide detailed answers to all such questions.
#docker containers #docker engine #docker #docker architecture
1599914520
Hello, in this post I will show you how to set up the official Apache Airflow image with PostgreSQL and LocalExecutor using docker and docker-compose. In this post, I won't be going through what Airflow is and how it is used; please check the official documentation for more information about that.
Before setting up and running Apache Airflow, please install Docker and Docker Compose.
In this chapter, I will show you the files and directories needed to run Airflow, and in the next chapter I will go file by file, line by line, explaining what is going on.
Firstly, in the root directory create three more directories: dags, logs, and scripts. Then create the following files: **.env**, **docker-compose.yml**, **entrypoint.sh**, and **dummy_dag.py**. Please make sure those files and directories follow the structure below.
#project structure
root/
├── dags/
│   └── dummy_dag.py
├── scripts/
│   └── entrypoint.sh
├── logs/
├── .env
└── docker-compose.yml
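If you prefer to script this setup, the structure above can also be created with a short Python sketch; the root directory name airflow-project is just an example, not something from the article:

```python
from pathlib import Path

# Example root directory name; any name works.
root = Path("airflow-project")

# Create the three directories: dags, scripts, and logs.
for sub in ("dags", "scripts", "logs"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# Create the (initially empty) files from the structure above.
for name in ("dags/dummy_dag.py", "scripts/entrypoint.sh",
             ".env", "docker-compose.yml"):
    (root / name).touch()
```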
Created files should contain the following:
#docker-compose.yml
version: '3.8'
services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
  scheduler:
    image: apache/airflow
    command: scheduler
    restart_policy:
      condition: on-failure
    depends_on:
      - postgres
    env_file:
      - .env
    volumes:
      - ./dags:/opt/airflow/dags
      - ./logs:/opt/airflow/logs
  webserver:
    image: apache/airflow
    entrypoint: ./scripts/entrypoint.sh
    restart_policy:
      condition: on-failure
    depends_on:
      - postgres
      - scheduler
    env_file:
      - .env
    volumes:
      - ./dags:/opt/airflow/dags
      - ./logs:/opt/airflow/logs
      - ./scripts:/opt/airflow/scripts
    ports:
      - "8080:8080"
#entrypoint.sh
#!/usr/bin/env bash
airflow initdb
airflow webserver
#.env
AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CORE__EXECUTOR=LocalExecutor
#dummy_dag.py
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from datetime import datetime
with DAG('example_dag', start_date=datetime(2016, 1, 1)) as dag:
    op = DummyOperator(task_id='op')
From the root directory, executing "docker-compose up" in the terminal should make Airflow accessible on localhost:8080. The image below shows the final result.
If you encounter permission errors, please run "chmod -R 777" on all subdirectories, e.g. "chmod -R 777 logs/".
For the curious ones...
In layman's terms, docker is used when managing individual containers, while docker-compose is used to manage multi-container applications. It also moves many of the options you would pass to docker run into the docker-compose.yml file for easier reuse. It works as a front-end "script" on top of the same Docker API used by docker; you can do everything docker-compose does with plain docker commands and a lot of shell scripting.
Before running our multi-container Docker application, docker-compose.yml must be configured. In that file, we define the services that will be run on docker-compose up.
The first attribute of docker-compose.yml is version, which is the Compose file format version. For the most recent version of the file format and all configuration options, click here.
The second attribute is services, and all attributes one level below services denote the containers used in our multi-container application. These are postgres, scheduler, and webserver. Each container has an image attribute, which points to the base image used for that service.
For each service, we define the environment variables used inside its container. For postgres they are defined by the environment attribute, but for scheduler and webserver they are defined in the .env file. Because .env is an external file, we must point to it with the env_file attribute.
By opening the .env file, we can see two variables defined: one defines the executor to be used, and the other denotes the connection string. Each connection string must be defined in the following manner:
dialect+driver://username:password@host:port/database
Dialect names include the identifying name of the SQLAlchemy dialect, a name such as sqlite, mysql, postgresql, oracle, or mssql. Driver is the name of the DBAPI to be used to connect to the database, using all lowercase letters. In our case, the connection string is defined by:
postgresql+psycopg2://airflow:airflow@postgres/airflow
Omitting the port after the host part denotes that we will be using the default Postgres port (5432) defined in its own Dockerfile.
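As a quick sanity check, the parts of such a connection string can be pulled apart with Python's standard library alone. This is only an illustration of the format, not something Airflow requires:

```python
from urllib.parse import urlparse

# The connection string from the .env file above.
url = urlparse("postgresql+psycopg2://airflow:airflow@postgres/airflow")

print(url.scheme)            # dialect+driver: 'postgresql+psycopg2'
print(url.username)          # 'airflow'
print(url.hostname)          # 'postgres'
print(url.port)              # None, so the default port is used
print(url.path.lstrip("/"))  # database name: 'airflow'
```

Note that the host is simply postgres: inside the compose network, each service is reachable by its service name.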
Every service can define a command which will be run inside its Docker container. If a service needs to execute multiple commands, this can be done by defining an optional .sh file and pointing to it with the entrypoint attribute. In our case, entrypoint.sh inside the scripts folder, once executed, runs airflow initdb and airflow webserver; both are mandatory for Airflow to run properly.
By defining the depends_on attribute, we can express dependencies between services. In our example, webserver starts only after both scheduler and postgres have started, and scheduler starts only after postgres has started.
In case our container crashes, we can restart it via restart_policy. The restart_policy configures if and how to restart containers when they exit. Its additional options are condition, delay, max_attempts, and window.
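For reference, a sketch with all four options might look like the fragment below; the values are illustrative, not taken from the article. Be aware that in the current Compose file reference, restart_policy is a sub-key of deploy and is honoured by Docker Swarm, while plain docker-compose uses the top-level restart key (e.g. restart: on-failure) instead:

```yaml
deploy:
  restart_policy:
    condition: on-failure  # restart only when the container exits with an error
    delay: 5s              # wait 5 seconds between restart attempts
    max_attempts: 3        # give up after three failed restarts
    window: 120s           # how long to wait before deciding a restart succeeded
```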
Once a service is running, it is served on the container's defined port. To access that service, we need to expose the container's port to a port on the host, which is done by the ports attribute. In our case, we are exposing port 8080 of the container to TCP port 8080 on 127.0.0.1 (localhost) of the host machine. The left side of the colon defines the host machine's port and the right-hand side defines the container's port.
Lastly, the volumes attribute defines shared volumes (directories) between the host file system and the Docker container. Because Airflow's default working directory is /opt/airflow/, we need to map our designated directories from the root folder into the Airflow container's working directory. This is done by the following mapping:
#general case for airflow
- ./<our-root-subdir>:/opt/airflow/<our-root-subdir>
#our case
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./scripts:/opt/airflow/scripts
...
This way, when the scheduler or webserver writes logs to its logs directory, we can access them from our file system within the logs directory. When we add a new DAG to the dags folder, it will automatically be added to the container's DagBag, and so on.
Originally published by Ivan Rezic at Towards Data Science
#docker #how-to #apache-airflow #docker-compose #postgresql