Script only processes 1st Line of input file

I have a bash script that reads a list of servers from a file and runs commands on each of them remotely. It currently only processes the first line (server1).

I need to run three commands remotely on server1, then server2, and so on.

#!/bin/bash

while read line; do
sshpass -f password ssh -o StrictHostKeyChecking=no [email protected]$line zgrep "^A30=" /var/tmp/logs1/messages.* | >> Output.txt
sshpass -f password ssh -o StrictHostKeyChecking=no [email protected]$line zgrep "^A30=" /var/tmp/logs2/messages.* | >> Output.txt
sshpass -f password ssh -o StrictHostKeyChecking=no [email protected]$line zgrep "^A30=" /var/tmp/logs3/messages.* | >> Output.txt
done < file1

file1:

server1
server2
server3
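
A likely cause is that ssh reads from the same standard input as the while loop, so the first ssh call consumes the rest of file1 and the loop ends after server1. A minimal sketch of a fix is to give ssh its own stdin with -n and to drop the stray | before the append redirection; the user@ part below is a placeholder, since the original username was lost to e-mail obfuscation in the post:

#!/bin/bash

while IFS= read -r server; do
    for dir in logs1 logs2 logs3; do
        # -n keeps ssh from swallowing the while loop's stdin (file1)
        sshpass -f password ssh -n -o StrictHostKeyChecking=no "user@$server" \
            "zgrep '^A30=' /var/tmp/$dir/messages.*" >> Output.txt
    done
done < file1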


How to Install Docker on Windows 10 Home?

If you’ve ever tried to install Docker for Windows, you’ve probably come to realize that the installer won’t run on Windows 10 Home. Only Windows Pro, Enterprise or Education support Docker. Upgrading your Windows license is pricey, and also pointless, since you can still run Linux containers on Windows without relying on Hyper-V technology, a requirement for Docker for Windows.

If you plan on running Windows Containers, you’ll need a specific version and build of Windows Server. Check out the Windows container version compatibility matrix for details.

99.999% of the time, you only need a Linux Container, since it supports software built using open-source and .NET technologies. In addition, Linux Containers can run on any distro and on popular CPU architectures, including x86_64, ARM and IBM.

In this tutorial, I’ll show you how to quickly setup a Linux VM on Windows Home running Docker Engine with the help of Docker Machine. Here’s a list of software you’ll need to build and run Docker containers:

  • Docker Machine: a CLI tool for installing Docker Engine on virtual hosts
  • Docker Engine: runs on top of the Linux Kernel; used for building and running containers
  • Docker Client: a CLI tool for issuing commands to Docker Engine via REST API
  • Docker Compose: a tool for defining and running multi-container applications

I’ll show how to perform the installation in the following environments:

  1. On Windows using Git Bash
  2. On Windows Subsystem for Linux 2 (running Ubuntu 18.04)

First, allow me to explain how the Docker installation will work on Windows.

How it Works

As you probably know, Docker requires a Linux kernel to run Linux Containers. For this to work on Windows, you’ll need to set up a Linux virtual machine to run as guest in Windows 10 Home.

[Figure: Docker Engine running in a Linux VM on Windows 10 Home]

Setting up the Linux VM can be done manually. The easiest way is to use Docker Machine to do this work for you with a single command. This Docker Linux VM can either run on your local system or on a remote server. The Docker client talks to the Docker Engine inside the VM over a TLS-secured TCP connection (Docker Machine itself uses SSH to provision and manage the VM). Whenever you build and run images, the actual process happens within the VM, not on your host (Windows).

Let’s dive into the next section to set up the environment needed to install Docker.

Initial Setup

You may or may not have the following applications installed on your system. I’ll assume you don’t. If you do, make sure to upgrade to the latest versions. I’m also assuming you’re running the latest stable version of Windows. At the time of writing, I’m using Windows 10 Home version 1903. Let’s start installing the following:

  1. Install Git Bash for Windows. This will be our primary terminal for running Docker commands.

  2. Install Chocolatey, a package manager for Windows. It will make the work of installing the rest of the programs easier.

  3. Install VirtualBox and its extension. Alternatively, if you have finished installing Chocolatey, you can simply execute this command inside an elevated PowerShell terminal:

    C:\ choco install virtualbox
    
    
  4. If you’d like to try running Docker inside the WSL2 environment, you’ll need to set up WSL2 first. You can follow this tutorial for step-by-step instructions.

Docker Engine Setup

Installing Docker Engine is quite simple. First we need to install Docker Machine.

  1. Install Docker Machine by following instructions on this page. Alternatively, you can execute this command inside an elevated PowerShell terminal:

    C:\ choco install docker-machine
    
    
  2. From the Git Bash terminal, use Docker Machine to install Docker Engine. This will download a Linux image containing the Docker Engine and run it as a VM using VirtualBox. Simply execute the following command:

    $ docker-machine create --driver virtualbox default
    
    
  3. Next, we need to configure which ports are exposed when running Docker containers. Doing this will allow us to access our applications via localhost:<port>. Feel free to add as many as you want. To do this, you’ll need to launch Oracle VM VirtualBox from your start menu. Select the default VM in the side menu, then click on Settings > Network > Adapter 1 > Port Forwarding. You should find the ssh forwarding port already set up for you. You can add more like so:

    [Figure: VirtualBox port-forwarding rules for the default VM]

  4. Next, we need to allow Docker to mount volumes located on your hard drive. By default, you can only mount from the C:\Users\ directory. To add a different path, open the Oracle VM VirtualBox GUI, select the default VM and go to Settings > Shared Folders. Add a new one by clicking the plus symbol and fill in the fields as shown below. If there’s an option called Permanent, enable it.

    [Figure: VirtualBox shared-folder settings for the default VM]

  5. To get rid of the invalid settings error seen in the above screenshot, simply increase Video Memory under the Display tab of the VM’s settings. Video memory is not important in this case, as we’ll run the VM in headless mode.

  6. To start the Linux VM, simply execute the following command in Git Bash. Give it some time for the boot process to complete; it shouldn’t take more than a minute. You’ll need to do this every time you boot your host OS:

    $ docker-machine start default
    
    
  7. Next, we need to set up our Docker environment variables. This is to allow the Docker client and Docker Compose to communicate with the Docker Engine running in the Linux VM named default. You can do this by executing the following commands in Git Bash:

    # Print out docker machine instance settings
    $ docker-machine env default
    
    # Set environment variables using Linux 'export' command
    $ eval $(docker-machine env default --shell linux)
    
    

    You’ll need to set the environment variables every time you start a new Git Bash terminal. If you’d like to avoid this, you can copy the eval output and save it in your .bashrc file. It should look something like this:

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="C:\Users\Michael Wanyoike\.docker\machine\machines\default"
    export DOCKER_MACHINE_NAME="default"
    export COMPOSE_CONVERT_WINDOWS_PATHS="true"
    
    

    IMPORTANT: for DOCKER_CERT_PATH, you’ll need to convert the Linux-style file path that eval prints into Windows path format. Also note that the IP address assigned to the default VM may change each time it starts, so the value you saved for DOCKER_HOST can go stale.
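
    If you suspect the IP has changed, a quick check and refresh looks like this (both are standard docker-machine subcommands):

    # Print the IP address currently assigned to the default VM
    $ docker-machine ip default

    # Re-apply the environment variables with the current values
    $ eval $(docker-machine env default --shell linux)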

In the next section, we’ll install Docker Client and Docker Compose.

Docker Tools Setup

For this section, you’ll need to install the following tools using PowerShell in admin mode. These tools are packaged inside the Docker for Windows installer. Since the installer refuses to run on Windows 10 Home, we’ll install these programs individually using Chocolatey:

C:\ choco install docker-cli
C:\ choco install docker-compose

Once the installation process is complete, you can switch back to Git Bash terminal. You can continue using PowerShell, but I prefer Linux syntax to execute commands. Let’s execute the following commands to ensure Docker is running:

# Start Docker VM
$ docker-machine start default

# Confirm Docker VM is running
$ docker-machine ls

# Configure the Docker environment to use the Docker VM
$ eval $(docker-machine env default --shell linux)

# Confirm Docker is connected. Should output Docker VM specs
$ docker info

# Run hello-world docker image. Should output "Hello from Docker"
$ docker run hello-world

If all the above commands run without errors, it means you’ve successfully installed Docker. If you want to try out a more ambitious example, I have a small Node.js application that I’ve configured to run on Docker containers. First, you’ll need to install GNU Make using PowerShell with Admin privileges:

C:\ choco install make

Running this Node.js example will ensure you have no problems with exposed ports and mounted volumes on the Windows filesystem. First, navigate to a folder that you’ve already mounted in the VirtualBox settings, then execute the following commands:

$ git clone git@github.com:brandiqa/docker-node.git

$ cd docker-node/website

$ make

When you hit the last command, you should expect a similar output:

docker volume create nodemodules
nodemodules
docker-compose -f docker-compose.builder.yml run --rm install
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

audited 9731 packages in 21.405s

docker-compose up
Starting website_dev_1 ... done
Attaching to website_dev_1
dev_1  |
dev_1  | > [email protected] start /usr/src/app
dev_1  | > parcel src/index.html --hmr-port 1235
dev_1  |
dev_1  | Server running at http://localhost:1234

Getting the above output means that volume mounting occurred successfully. Open localhost:1234 to confirm that the website can be accessed. This will confirm that you have properly configured the ports. You can edit the source code, for example change the h1 title in App.jsx. As soon as you save the file, the browser page should refresh automatically. This means hot module reloading works from a Docker container.

I’d like to bring your attention to the docker-compose.yml file in use. Getting hot module reloading to work from a Docker container in Windows requires the following:

  1. When using parcel, specify HMR port in your package.json start script:

    parcel src/index.html --hmr-port 1235

  2. In the VM’s Port Forwarding rules, make sure these ports are exposed to the host system:

    • 1234
    • 1235
  3. inotify doesn’t work on vboxsf filesystems, so file changes can’t be detected. The workaround is to set polling for Chokidar via environment variables in docker-compose.yml. Here’s the full file so that you can see how it’s set:

    version: '3'
    services:
      dev:
        image: node:10-jessie-slim
        volumes:
          - nodemodules:/usr/src/app/node_modules
          - ./:/usr/src/app
        working_dir: /usr/src/app
        command: npm start
        ports:
          - 1234:1234
          - 1235:1235
        environment:
          - CHOKIDAR_USEPOLLING=1
    volumes:
      nodemodules:
        external: true
    
    

Now that we have a fully working implementation of Docker on Windows 10 home, let’s set it up on WSL2 for those who are interested.

Windows Subsystem for Linux 2

Installing Docker on WSL2 is not as straightforward as it seems. Unfortunately, the latest version of Docker Engine can’t run on WSL2. However, there’s an older version, docker-ce=17.09.0~ce-0~ubuntu, that’s capable of running well in WSL2. I won’t be covering that here. Instead, I’ll show you how to access Docker Engine running in the VM we set up earlier from a WSL terminal.

All we have to do is install the Docker client and Docker Compose. Assuming you’re running the WSL2 Ubuntu terminal, execute the following:

  1. Install Docker using the official instructions:

    # Update the apt package list.
    sudo apt-get update -y
    
    # Install Docker's package dependencies.
    sudo apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common
    
    # Download and add Docker's official public PGP key.
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
    # Verify the fingerprint.
    sudo apt-key fingerprint 0EBFCD88
    
    # Add the `stable` channel's Docker upstream repository.
    #
    # If you want to live on the edge, you can change "stable" below to "test" or
    # "nightly". I highly recommend sticking with stable!
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    
    # Update the apt package list (for the new apt repo).
    sudo apt-get update -y
    
    # Install the latest version of Docker CE.
    sudo apt-get install -y docker-ce
    
    # Allow your user to access the Docker CLI without needing root access.
    sudo usermod -aG docker $USER
    
    
  2. Install Docker Compose using this official guide. An alternative is to use PIP, which will simply install the latest stable version:

    # Install Python and PIP.
    sudo apt-get install -y python python-pip
    
    # Install Docker Compose into your user's home directory.
    pip install --user docker-compose
    
    
  3. Fix the Docker mounting issue in WSL terminal by inserting this content in /etc/wsl.conf. Create the file if it doesn’t exist:

    [automount]
    root = /
    options = "metdata"
    
    

    You’ll need to restart your machine for this setting to take effect.

  4. Assuming that the Linux Docker VM is running, you’ll need to connect the Docker tools in the WSL environment to it. If you can access docker-machine from the Ubuntu terminal, run the eval command. Otherwise, you can insert the following Docker variables in your .bashrc file. Here is an example of mine:

    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="/c/Users/Michael Wanyoike/.docker/machine/machines/default"
    export DOCKER_MACHINE_NAME="default"
    export COMPOSE_CONVERT_WINDOWS_PATHS="true"
    
    

    You’ll need to restart your terminal or execute source ~/.bashrc for the settings to take effect. Running Docker commands should work properly in WSL without a hitch.
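
    To double-check that the WSL tools can reach the Docker Engine inside the VM, a couple of harmless commands are enough (assuming the default VM is up):

    # Should print the engine details of the Docker VM rather than an error
    docker info

    # Should print the version installed via pip
    docker-compose version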

Switching to Linux

We’re now coming to the end of this article. Setting up Docker on Windows 10 Home is a bit of a lengthy process. If you ever reformat your machine, you’ll have to go through the same process again. It’s worse if your job is to install Docker on multiple machines running Windows 10 Home.

A simpler solution is to switch to Linux for development. You can create a partition and set up dual booting. You can also use VirtualBox to install and run a full Linux distro inside Windows. Check out the popular distros and pick one you’d like to try. I use Linux Lite because it’s lightweight and based on Ubuntu. With VirtualBox, you can try out as many distros as you wish.

[Figure: Linux Lite running as a VirtualBox VM]

If you’re using a distro based on Ubuntu, you can install Docker easily with these commands:

# Install https://snapcraft.io/ package manager
sudo apt install snapd

# Install Docker Engine, Docker Client and Docker Compose
sudo snap install docker

# Run Docker commands without sudo
sudo usermod -aG docker $USER

You’ll need to log out and then log in for the last command to take effect. After that, you can run any Docker command without issue. You don’t need to worry about issues with mounting or ports. Docker Engine runs as a service in Linux, which by default starts automatically. No need for provisioning a Docker VM. Everything works out of the box without relying on a hack.
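
A quick smoke test after logging back in is the same one used earlier, for example:

# Should pull the image and print "Hello from Docker!" without sudo
docker run hello-world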

Summary

I hope you’ve had smooth sailing installing and running Docker on Windows 10 Home. I believe this technique should work on older versions such as Windows 7. In case you run into a problem, just go through the instructions to see if you missed something. Do note, however, that I haven’t covered every Docker feature. You may encounter a bug or an unsupported feature that requires a workaround, or may have no solution at all. If that’s the case, I’d recommend you just switch to Linux if you want a smoother development experience using Docker.

How to Install Postman in Linux

Postman is a useful tool built for API development; it helps you develop APIs faster. Postman started out as an extension for the Google Chrome browser, but as its popularity grew it was developed into an independent application. Today Postman is available for all major operating systems, including Linux, Windows, and macOS. In this tutorial, you are going to learn how to install Postman in Linux.

Prerequisites

Before you start installing Postman in Linux, you must have a non-root user account with sudo privileges.

Install Postman in Linux

Installing Postman on Linux is straightforward. We are going to install Postman on Ubuntu by downloading the official tarball and extracting it into /opt.

Download Postman by running the following command on your Linux system:

wget https://dl.pstmn.io/download/latest/linux64 -O postman-linux-x64.tar.gz

Extract the downloaded archive into the /opt directory by running the following command:

sudo tar -xvzf postman-linux-x64.tar.gz -C /opt

Finally, create a symbolic link by running the following command in a terminal:

sudo ln -s /opt/Postman/Postman /usr/bin/postman

After completing the above process you have successfully installed Postman on your Linux system.

Now, to create a desktop entry for Postman, you can run the command below:

cat << EOF > ~/.local/share/applications/postman2.desktop
[Desktop Entry]
Name=Postman
GenericName=API Client
X-GNOME-FullName=Postman API Client
Comment=Make and view REST API calls and responses
Keywords=api;
Exec=/opt/Postman/Postman
Terminal=false
Type=Application
Icon=/opt/Postman/app/resources/app/assets/icon.png
Categories=Development;Utilities;
EOF

You have now completed the installation of Postman on your Linux system.

Using Postman

To start using Postman, go to Applications -> Postman and launch it, or simply run the following command:

postman

On first launch, you will see the following window asking you to sign up using an email or Google account. You can also sign in if you have an existing account.

[Figure: Postman sign-up / sign-in screen]

After signing in or creating a new account, you are ready to start using Postman for API development.

The following is an example of sending a GET request to a URL.

[Figure: Sending a GET request in Postman]

Conclusion

In this tutorial, you have learned how to install Postman in Linux. If you have any questions about it, you can comment below.

Originally published on https://linux4one.com



How To Install Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 18.04?

Introduction

The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx (pronounced like “Engine-X”) web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.

This guide demonstrates how to install a LEMP stack on an Ubuntu 18.04 server. The Ubuntu operating system takes care of the first requirement. We will describe how to get the rest of the components up and running.

Step 1 – Installing the Nginx Web Server

In order to display web pages to our site visitors, we are going to employ Nginx, a modern, efficient web server.

All of the software used in this procedure will come from Ubuntu’s default package repositories. This means we can use the apt package management suite to complete the necessary installations.

Since this is our first time using apt for this session, start off by updating your server’s package index. Following that, install the server:

sudo apt update
sudo apt install nginx

On Ubuntu 18.04, Nginx is configured to start running upon installation.
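
You can confirm that the service is up before moving on, for example:

# The service should be reported as active (running)
sudo systemctl status nginx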

If you have the ufw firewall running, as outlined in the initial setup guide, you will need to allow connections to Nginx. Nginx registers itself with ufw upon installation, so the procedure is rather straightforward.

It is recommended that you enable the most restrictive profile that will still allow the traffic you want. Since you haven’t configured SSL for your server in this guide, you will only need to allow traffic on port 80.

Enable this by typing:

sudo ufw allow 'Nginx HTTP'

You can verify the change by running:

sudo ufw status

This command’s output will show that HTTP traffic is allowed:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx HTTP                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.

If you do not have a domain name pointed at your server and you do not know your server’s public IP address, you can find it by running the following command:

ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

This will print out a few IP addresses. You can try each of them in turn in your web browser.

As an alternative, you can check which IP address is accessible, as viewed from other locations on the internet:

curl -4 icanhazip.com

Type the address that you receive in your web browser and it will take you to Nginx’s default landing page:

http://server_domain_or_IP

[Figure: Nginx default landing page]

If you see the above page, you have successfully installed Nginx.

Step 2 – Installing MySQL to Manage Site Data

Now that you have a web server, you need to install MySQL (a database management system) to store and manage the data for your site.

Install MySQL by typing:

sudo apt install mysql-server

The MySQL database software is now installed, but its configuration is not yet complete.

To secure the installation, MySQL comes with a script that will ask whether we want to modify some insecure defaults. Initiate the script by typing:

sudo mysql_secure_installation

This script will ask if you want to configure the VALIDATE PASSWORD PLUGIN.

Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.

Answer Y for yes, or anything else to continue without enabling.

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No:

If you’ve enabled validation, the script will also ask you to select a level of password validation. Keep in mind that if you enter 2 – for the strongest level – you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1

Next, you’ll be asked to submit and confirm a root password:

Please set the password for root here.

New password:

Re-enter new password:

For the rest of the questions, you should press Y and hit the ENTER key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.

Note that in Ubuntu systems running MySQL 5.7 (and later versions), the root MySQL user is set to authenticate using the auth_socket plugin by default rather than with a password. This allows for some greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) to access the user.

If using the auth_socket plugin to access MySQL fits with your workflow, you can proceed to Step 3. If, however, you prefer to use a password when connecting to MySQL as root, you will need to switch its authentication method from auth_socket to mysql_native_password. To do this, open up the MySQL prompt from your terminal:

sudo mysql

Next, check which authentication method each of your MySQL user accounts use with the following command:

SELECT user,authentication_string,plugin,host FROM mysql.user;

+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *CC744277A401A7D25BE1CA89AFF17BF607F876FF | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

In this example, you can see that the root user does in fact authenticate using the auth_socket plugin. To configure the root account to authenticate with a password, run the following ALTER USER command. Be sure to change password to a strong password of your choosing:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';

Then, run FLUSH PRIVILEGES which tells the server to reload the grant tables and put your new changes into effect:

FLUSH PRIVILEGES;

Check the authentication methods employed by each of your users again to confirm that root no longer authenticates using the auth_socket plugin:

SELECT user,authentication_string,plugin,host FROM mysql.user;

+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | *3636DACC8616D997782ADD0839F92C1571D6D78F | mysql_native_password | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *CC744277A401A7D25BE1CA89AFF17BF607F876FF | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

You can see in this example output that the root MySQL user now authenticates using a password. Once you confirm this on your own server, you can exit the MySQL shell:

exit

Note: After configuring your root MySQL user to authenticate with a password, you’ll no longer be able to access MySQL with the sudo mysql command used previously. Instead, you must run the following:

mysql -u root -p

After entering the password you just set, you will see the MySQL prompt.

At this point, your database system is now set up and you can move on to installing PHP.

Step 3 – Installing PHP and Configuring Nginx to Use the PHP Processor

You now have Nginx installed to serve your pages and MySQL installed to store and manage your data. However, you still don’t have anything that can generate dynamic content. This is where PHP comes into play.

Since Nginx does not contain native PHP processing like some other web servers, you will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.

Note: Depending on your cloud provider, you may need to add Ubuntu’s universe repository, which includes free and open-source software maintained by the Ubuntu community, before installing the php-fpm package. You can do this by typing:

sudo add-apt-repository universe

Install the php-fpm module along with an additional helper package, php-mysql, which will allow PHP to communicate with your database backend. The installation will pull in the necessary PHP core files. Do this by typing:

sudo apt install php-fpm php-mysql

You now have all of the required LEMP stack components installed, but you still need to make a few configuration changes in order to tell Nginx to use the PHP processor for dynamic content.

This is done on the server block level (server blocks are similar to Apache’s virtual hosts). To do this, open a new server block configuration file within the /etc/nginx/sites-available/ directory. In this example, the new server block configuration file is named example.com, although you can name yours whatever you’d like:

sudo nano /etc/nginx/sites-available/example.com

By editing a new server block configuration file, rather than editing the default one, you’ll be able to easily restore the default configuration if you ever need to.

Add the following content, which was taken and slightly modified from the default server block configuration file, to your new server block configuration file:

server {
        listen 80;
        root /var/www/html;
        index index.php index.html index.htm index.nginx-debian.html;
        server_name example.com;

        location / {
                try_files $uri $uri/ =404;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }

        location ~ /\.ht {
                deny all;
        }
}

Here’s what each of these directives and location blocks do:

  • listen — Defines what port Nginx will listen on. In this case, it will listen on port 80, the default port for HTTP.
  • root — Defines the document root where the files served by the website are stored.
  • index — Configures Nginx to prioritize serving files named index.php when an index file is requested, if they’re available.
  • server_name — Defines which server block should be used for a given request to your server. Point this directive to your server’s domain name or public IP address.
  • location / — The first location block includes a try_files directive, which checks for the existence of files matching a URI request. If Nginx cannot find the appropriate file, it will return a 404 error.
  • location ~ \.php$ — This location block handles the actual PHP processing by pointing Nginx to the fastcgi-php.conf configuration file and the php7.2-fpm.sock file, which declares what socket is associated with php-fpm.
  • location ~ /\.ht — The last location block deals with .htaccess files, which Nginx does not process. By adding the deny all directive, if any .htaccess files happen to find their way into the document root they will not be served to visitors.

After adding this content, save and close the file. Enable your new server block by creating a symbolic link from your new server block configuration file (in the /etc/nginx/sites-available/ directory) to the /etc/nginx/sites-enabled/ directory:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

Then, unlink the default configuration file from the /sites-enabled/ directory:

sudo unlink /etc/nginx/sites-enabled/default

Note: If you ever need to restore the default configuration, you can do so by recreating the symbolic link, like this:

sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/

Test your new configuration file for syntax errors by typing:

sudo nginx -t

If any errors are reported, go back and recheck your file before continuing.

When you are ready, reload Nginx to make the necessary changes:

sudo systemctl reload nginx

This concludes the installation and configuration of your LEMP stack. However, it’s prudent to confirm that all of the components can communicate with one another.

Step 4 – Creating a PHP File to Test Configuration

Your LEMP stack should now be completely set up. You can test it to validate that Nginx can correctly hand .php files off to the PHP processor.

To do this, use your text editor to create a test PHP file called info.php in your document root:

sudo nano /var/www/html/info.php

Enter the following lines into the new file. This is valid PHP code that will return information about your server:

<?php
phpinfo();

When you are finished, save and close the file.

Now, you can visit this page in your web browser by visiting your server’s domain name or public IP address followed by /info.php:

http://your_server_domain_or_IP/info.php

You should see a web page that has been generated by PHP with information about your server:

[Figure: PHP info page]

If you see a page that looks like this, you’ve set up PHP processing with Nginx successfully.

After verifying that Nginx renders the page correctly, it’s best to remove the file you created as it can actually give unauthorized users some hints about your configuration that may help them try to break in. You can always regenerate this file if you need it later.

For now, remove the file by typing:

sudo rm /var/www/html/info.php

With that, you now have a fully-configured and functioning LEMP stack on your Ubuntu 18.04 server.

Conclusion

A LEMP stack is a powerful platform that will allow you to set up and serve nearly any website or application from your server.

A Deep Dive Into Docker

Docker: Enterprise Container Platform. Docker is a set of coupled software-as-a-service and platform-as-a-service products that use operating-system-level virtualization to develop and deliver software in packages called containers.

Introduction

Since its open-source launch in 2013, Docker has become one of the most popular pieces of technology out there. A lot of companies contribute to it, and a huge number of people use and adopt it. But why is it so popular? What does it offer that was not there before? In this blog post we want to dive deeper into the internals of Docker to understand how it works.

The first part of this post will give a quick overview about the basic architectural concepts. In the second part we will introduce four main functionalities that form the foundation for isolation in Docker containers: 1) cgroups, 2) namespaces, 3) stackable image-layers and copy-on-write, and 4) virtual network bridges. In the third section there will be a discussion about opportunities and challenges when using containers and Docker. We conclude by answering some frequently asked questions about Docker.

Basic Architecture

"Docker is an open-source project that automates the deployment of applications inside software containers." - Wikipedia
People usually refer to containers when talking about operating system level virtualization. Operating system level virtualization is a method in which the kernel of an operating system allows the existence of multiple isolated application instances. There are many implementations of containers available, one of which is Docker.

Docker launches containers based on images. An image is like a blueprint, defining what should be inside the container when it is created. The usual way to define an image is through a Dockerfile. A Dockerfile contains instructions on how to build your image step by step (don't worry, you will understand more about what is going on internally later on). The following Dockerfile, for example, will start from an image containing OpenJDK, install Python 3, copy requirements.txt into the image and then install all Python packages from the requirements file.

FROM openjdk:8u212-jdk-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        python3=3.5.3-1 \
        python3-pip=9.0.1-2+deb9u1 \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt requirements.txt
RUN pip3 install --upgrade -r requirements.txt
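
To turn a Dockerfile like this into an image and start a container from it, you would run something along these lines (the tag my-app is just a placeholder):

# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Start a throwaway container and check the Python version baked into the image
docker run --rm my-app python3 --version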

Images are usually stored in image repositories called Docker registries. Dockerhub is a public Docker registry. In order to download images and start containers you need to have a Docker host. The Docker host is a Linux machine which runs the Docker daemon (a daemon is a background process that is always running, waiting for work to be done).

In order to launch a container, you can use the Docker client, which submits the necessary instructions to the Docker daemon. The Docker daemon also talks to the Docker registry if it cannot find the requested image locally. The following picture illustrates the basic architecture of Docker:

What is important to note already is that Docker itself does not provide the actual containerization but merely uses what is available in Linux. Let's dive into the technical details.

Container Isolation

Docker achieves isolation of different containers through the combination of four main concepts: 1) cgroups, 2) namespaces, 3) stackable image-layers and copy-on-write, and 4) virtual network bridges. In the following sub sections we are going to explain these concepts in detail.

Control Groups (cgroups)

The Linux operating system manages the available hardware resources (memory, CPU, disk I/O, network I/O, ...) and provides a convenient way for processes to access and utilize them. The CPU scheduler of Linux, for example, takes care that every thread will eventually get some time on a CPU core so that no applications are stuck waiting for CPU time.

Control groups (cgroups) are a way to assign a subset of resources to a specific group of processes. This can be used to, e.g., ensure that even if your CPU is super busy with Python scripts, your PostgreSQL database still gets dedicated CPU and RAM. The following picture illustrates this in an example scenario with 4 CPU cores and 16 GB RAM.

All Zeppelin notebooks started in the zeppelin-grp will utilize only cores 1 and 2, while the PostgreSQL processes share cores 3 and 4. The same applies to the memory. Cgroups are an important building block in container isolation, as they allow hardware resource isolation.
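
Docker exposes these cgroup limits directly as docker run flags; a small sketch (the container name and image tag are just examples) might look like this:

# Pin a Zeppelin container to CPU cores 1 and 2 and cap it at 8 GB of RAM
docker run -d --name zeppelin \
    --cpuset-cpus="1,2" \
    --memory="8g" \
    apache/zeppelin:0.9.0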

Namespaces

While cgroups isolate hardware resources, namespaces isolate and virtualize system resources. Examples of system resources that can be virtualized include process IDs, hostnames, user IDs, network access, interprocess communication, and filesystems. Let's first dive into an example of process ID (PID) namespaces to make this more clear and then briefly discuss other namespaces as well.

PID Namespaces

The Linux operating system organizes processes in a so-called process tree. The root of the tree is the first process that gets started after the operating system is booted, and it has PID 1. Only one process tree can exist, and all other processes (e.g. Firefox, terminal emulators, SSH servers) need to be started, directly or indirectly, by this process. Because this process initializes all other processes, it is often referred to as the init process.

The following figure illustrates parts of a typical process tree where the init process started a logging service (syslogd), a scheduler (cron) and a login shell (bash):

1 /sbin/init
+-- 196 /usr/sbin/syslogd -s
+-- 354 /usr/sbin/cron -s
+-- 391 login
    +-- 400 bash
        +-- 701 /usr/local/bin/pstree

Inside this tree, every process can see every other process and can send signals (e.g. to request that a process stop) if it wishes. Using PID namespaces virtualizes the PIDs for a specific process and all its sub processes, making it think that it has PID 1. It will then also not be able to see any processes other than its own children. The following figure illustrates how different PID namespaces isolate the process sub trees of two Zeppelin processes.

Host process tree:

1 /sbin/init
+-- ...
+-- 506 /usr/local/zeppelin
+-- 511 /usr/local/zeppelin

Inside the PID namespace of the first Zeppelin process:

1 /usr/local/zeppelin
+-- 2 interpreter.sh
+-- 3 interpreter.sh

Inside the PID namespace of the second Zeppelin process:

1 /usr/local/zeppelin
+-- 2 java
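
You can observe the same effect with any container: the process started by Docker sees itself as PID 1 and sees no host processes. For example:

# ps runs as PID 1 inside the container and lists only the container's own processes
docker run --rm alpine ps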

Filesystem Namespaces

Another use case for namespaces is the Linux filesystem. Similar to PID namespaces, filesystem namespaces virtualize and isolate parts of a tree - in this case the filesystem tree. The Linux filesystem is organized as a tree and it has a root, typically referred to as /.

In order to achieve isolation on a filesystem level, the namespace will map a node in the filesystem tree to a virtual root inside that namespace. Browsing the filesystem inside that namespace, Linux does not allow you to go beyond your virtualized root. The following drawing shows part of a filesystem that contains multiple "virtual" filesystem roots inside the /drives/xx folders, each containing different data.

Other Namespaces

Besides the PID and the filesystem namespaces there are also other kinds of namespaces. Docker allows you to utilize them in order to achieve the amount of isolation you require. The user namespace, e.g., allows you to map a user inside a container to a different user outside. This can be used to map the root user inside the container to a non-root user outside, so the process inside the container acts like an admin inside but outside it has no special privileges.
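
As a sketch, enabling this remapping for the Docker daemon is a single configuration option; note that the daemon needs a restart afterwards and will then store its images and containers separately from the non-remapped ones:

# Remap container root to an unprivileged user range on the host
# (this overwrites any existing /etc/docker/daemon.json)
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker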

Stackable Image Layers and Copy-On-Write

Now that we have a more detailed understanding of how hardware and system resource isolation helps us to build containers, we are going to take a look into the way that Docker stores images. As we saw earlier, a Docker image is like a blueprint for a container. It comes with all dependencies required to start the application that it contains. But how are these dependencies stored?

Docker persists images in stackable layers. A layer contains the changes relative to the previous layer. If you, for example, first install Python and then copy a Python script, your image will have two additional layers: one containing the Python executables and another containing the script. The following picture shows a Zeppelin, a Spring and a PHP image, all based on Ubuntu.

In order not to store Ubuntu three times, layers are immutable and shared. Docker uses copy-on-write to only make a copy of a file if there are changes.

When starting a container based on an image, the Docker daemon provides you with all the layers contained in that image and puts them in an isolated filesystem namespace for this container. The combination of stackable layers, copy-on-write, and filesystem namespaces enables you to run a container completely independently of the things "installed" on the Docker host without wasting a lot of space. This is one of the reasons why containers are more lightweight compared to virtual machines.
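
You can inspect this layering on any image you have locally; docker history lists the stacked layers and the instruction that produced each one, for example:

# Each line is one immutable, shareable layer of the image
docker history openjdk:8u212-jdk-slim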

Virtual Network Bridge

Now we know ways to isolate hardware resources (cgroups) and system resources (namespaces) and how to provide each container with a predefined set of dependencies to be independent from the host system (image layers). The last building block, the virtual network bridge, helps us in isolating the network stack inside a container.

A network bridge is a computer networking device that creates a single aggregate network from multiple communication networks or network segments. Let's look at a typical setup of a physical network bridge connecting two network segments (LAN 1 and LAN 2):

Usually we only have a limited number of network interfaces (e.g. physical network cards) on the Docker host, and all processes somehow need to share access to them. In order to isolate the networking of containers, Docker allows you to create a virtual network interface for each container. It then connects all the virtual network interfaces to the host network adapter, as shown in the following picture:

The two containers in this example have their own eth0 network interface inside their network namespace. These are mapped to corresponding virtual network interfaces veth0 and veth1 on the Docker host. The virtual network bridge docker0 connects the host network interface eth0 to all container network interfaces.

Docker gives you a lot of freedom in configuring the bridge, so that you can expose only specific ports to the outside world or directly wire two containers together (e.g. a database container and an application which needs access to it) without exposing anything to the outside.
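
In day-to-day use, port publishing and user-defined networks are how you configure that bridge; a small sketch (the container and network names are arbitrary, and my-app is a placeholder image):

# Expose only port 8080 of this container to the outside world
docker run -d --name web -p 8080:80 nginx

# Wire an application and its database together on a private bridge, exposing nothing
docker network create backend
docker run -d --name db --network backend -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network backend my-app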

Connecting the Dots

Taking the techniques and features described in the previous sub sections, we are now able to "containerize" our applications. While it is possible to manually create containers using cgroups, namespaces, virtual network adapters, and so on, Docker is a tool that makes this convenient, with almost no overhead. It handles all the manual, configuration-intensive tasks, making containers accessible to software developers and not only to Linux specialists.

In fact there is a nice talk available from one of the Docker engineers where he demonstrates how to manually create a container, also explaining the details we covered in this sub section.

Opportunities and Challenges of Docker

By now, many people are using Docker on a daily basis. What benefits do containers add? What does Docker offer that was not there before? In the end everything you need for containerizing your applications was already available in Linux for a long time, wasn't it?

Let's look at some opportunities (not an exhaustive list of course) that you have when moving to a container-based setup. Of course there are not only opportunities, but also challenges that might give you a hard time when adopting Docker. We are also going to name a few in this section.

Opportunities

Docker enables DevOps. The DevOps philosophy tries to connect development and operations activities, empowering developers to deploy their applications themselves. You build it, you run it. Having a Docker based deployment, developers can ship their artifacts together with the required dependencies directly without having to worry about dependency conflicts. Also it allows developers to write more sophisticated tests and execute them faster, e.g., creating a real database in another container and linking it to their application on their laptop in a few seconds (see Testcontainers).

Containers increase the predictability of your deployment. No more "runs on my machine". No more failing application deployments because one machine has a different version of Java installed. You build the image once and you can run it anywhere (given there is a Linux Kernel and Docker installed).

High adoption rate and good integration with many prominent cluster managers. One big part about using Docker is the software ecosystem around it. If you are planning to operate at scale, you won't get around using one or the other cluster manager. It doesn't matter if you decide to let someone else manage your deployment (e.g. Google Cloud, Docker Cloud, Heroku, AWS, ...) or want to maintain your own cluster manager (e.g. Kubernetes, Nomad, Mesos), there are plenty of solutions out there.

Lightweight containers enable fast failure recovery or auto-scaling. Imagine running an online shop. During Christmas time, people will start hitting your web servers and your current setup might not be sufficient in terms of capacity. Given that you have enough free hardware resources, starting a few more containers hosting your web application will take only a few seconds. Also failing machines can be recovered by just migrating the containers to a new machine.

Challenges

Containers give a false sense of security. There are many pitfalls when it comes to securing your applications. It is wrong to assume that one way to secure them is to put them inside containers. Containers do not secure anything, per se. If someone hacks your containerized web application he might be locked into the namespaces but there are several ways to escape this depending on the setup. Be aware of this and put as much effort into security as you would without Docker.

Docker makes it easy for people to deploy half-baked solutions. Pick your favorite piece of software and enter its name into the Google search bar, adding "Docker". You will probably find at least one, if not dozens, of publicly available images containing your software on Dockerhub. So why not just execute it and give it a shot? What can go wrong? Many things can go wrong. Things happen to look shiny and awesome when put into containers, and people stop paying attention to the actual software and configuration inside.

The fat container anti-pattern results in large, hard-to-manage deployment artifacts. I have seen Docker images which require you to expose more than 20 ports for different applications running inside the container. The philosophy of Docker is that one container should do one job; you should compose containers instead of making them heavier. If you end up putting all your tools together in one container you lose all the advantages, might have different versions of Java or Python inside, and end up with a 20 GB, unmanageable image.

Deep Linux knowledge might still be required to debug certain situations. You might have heard a colleague saying that XXX does not work with Docker. There are multiple reasons why this could happen. Some applications have issues running inside a bridged network namespace if they do not distinguish properly between the network interface they bind to and the one they advertise. Another issue can be related to cgroups and namespaces, where default settings in terms of shared memory are not the same as on your favorite Linux distribution, leading to OOM errors when running inside containers. However, most of these issues are not actually related to Docker but to the application not being designed properly, and they are not that frequent. Still, they require some deeper understanding of how Linux and Docker work, which not every Docker user has.

Frequently Asked Questions

Q: What's the difference between a container and a virtual machine?

Without diving too much into details about the architecture of virtual machines (VMs), let us look at the main difference between the two on a conceptual level. Containers run inside an operating system, using kernel features to isolate applications. VMs on the other hand require a hypervisor which runs inside an operating system. The hypervisor then creates virtual hardware which can be accessed by another set of operating systems. The following illustration compares a virtual machine based application setup and a container-based setup.

As you can see, the container-based setup has less overhead as it does not require an additional operating system for each application. This is possible because the container manager (e.g. Docker) uses operating system functionality directly to isolate applications in a more lightweight fashion.

Does that mean that containers are superior to virtual machines? It depends. Both technologies have their use cases, and it sometimes even makes sense to combine them, running a container manager inside a VM. There are many blog posts out there discussing the pros and cons of both solutions, so we're not going to go into detail right now. It is important to understand the difference and not to see containers as some kind of "lightweight VM", because internally they are different.

Q: Do containers contain?

Looking at the definition of containers and what we've learned so far, we can safely say that it is possible to use Docker to deploy isolated applications. By combining control groups and namespaces with stackable image layers and virtual network interfaces plus a virtual network bridge, we have all the tools required to completely isolate an application, possibly also locking the process in the container. In reality it's not that easy, though. First, it needs to be configured correctly, and second, you will notice that completely isolated containers don't make a lot of sense most of the time.

In the end your application somehow needs to have some side effect (persisting data to disk, sending packets over the network, ...). So you will end up breaking the isolation by forwarding network traffic or mounting host volumes into your filesystem namespace. Also it is not required to use all available namespace features. While the network, PID and filesystem namespace features are enabled by default, using the user ID namespace requires you to add extra configuration options.

So it is false to assume that just putting something inside a container makes it secure. AWS, e.g., uses a lightweight VM engine called Firecracker for secure and multi-tenant execution of short-lived workloads.

Q: Do containers make my production environment more stable?

Some people argue that containers increase stability because they isolate errors. While this is true to the extent that properly configured namespaces and cgroups will limit side effects of one process going rogue, in practice there are some things to keep in mind.

As mentioned earlier, containers only contain if configured properly, and most of the time you want them to interact with other parts of your system. It is therefore fair to say that containers can help to increase stability in your deployment, but you should always keep in mind that they do not protect your applications from failing.

Conclusion

Docker is a great piece of technology to independently deploy applications in a more or less reproducible and isolated way. As always, there is no one-size-fits-all solution and you should understand your requirements in terms of security, performance, deployability, observability, and so on, before choosing Docker as the tool of your choice.