Build Docker Images and Host a Docker Image Repository with GitLab

In this tutorial, you'll learn how to build Docker images and host a Docker image repository with GitLab. You'll set up a new GitLab runner to build Docker images, create a private Docker registry to store them in, and update a Node.js app so it is built and tested inside Docker containers.

Introduction

Containerization is quickly becoming the most accepted method of packaging and deploying applications in cloud environments. The standardization it provides, along with its resource efficiency (when compared to full virtual machines) and flexibility, make it a great enabler of the modern DevOps mindset. Many interesting cloud native deployment, orchestration, and monitoring strategies become possible when your applications and microservices are fully containerized.

Docker containers are by far the most common container type today. Though public Docker image repositories like Docker Hub are full of containerized open source software images that you can docker pull and use today, for private code you'll need to either pay a service to build and store your images, or run your own software to do so.

GitLab Community Edition is a self-hosted software suite that provides Git repository hosting, project tracking, CI/CD services, and a Docker image registry, among other features. In this tutorial we will use GitLab's continuous integration service to build Docker images from an example Node.js app. These images will then be tested and uploaded to our own private Docker registry.

Prerequisites

Before we begin, we need to set up a secure GitLab server, and a GitLab CI runner to execute continuous integration tasks. The sections below will provide links and more details.

A GitLab Server Secured with SSL

To store our source code, run CI/CD tasks, and host the Docker registry, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. Additionally, we'll secure the server with SSL certificates from Let's Encrypt. To do so, you'll need a domain name pointed at the server.

A GitLab CI Runner

Set Up Continuous Integration Pipelines with GitLab CI on Ubuntu 16.04 will give you an overview of GitLab's CI service, and show you how to set up a CI runner to process jobs. We will build on top of the demo app and runner infrastructure created in this tutorial.

Step 1 — Setting Up a Privileged GitLab CI Runner

In the prerequisite GitLab continuous integration tutorial, we set up a GitLab runner using sudo gitlab-runner register and its interactive configuration process. This runner is capable of running builds and tests of software inside of isolated Docker containers.

However, in order to build Docker images, our runner needs full access to a Docker service itself. The recommended way to configure this is to use Docker's official docker-in-docker image to run the jobs. This requires granting the runner a special privileged execution mode, so we'll create a second runner with this mode enabled.

Note: Granting the runner privileged mode basically disables all of the security advantages of using containers. Unfortunately, the other methods of enabling Docker-capable runners also carry similar security implications. Please look at the official GitLab documentation on Docker Build to learn more about the different runner options and which is best for your situation.

Because there are security implications to using a privileged runner, we are going to create a project-specific runner that will only accept Docker jobs on our hello_hapi project (GitLab admins can always manually add this runner to other projects at a later time). From your hello_hapi project page, click Settings at the bottom of the left-hand menu, then click CI/CD in the submenu.

Now click the Expand button next to the Runners settings section.

There will be some information about setting up a Specific Runner, including a registration token. Take note of this token. When we use it to register a new runner, the runner will be locked to this project only.

While we're on this page, click the Disable shared Runners button. We want to make sure our Docker jobs always run on our privileged runner. If a non-privileged shared runner was available, GitLab might choose to use that one, which would result in build errors.

Log in to the server that has your current CI runner on it. If you don't have a machine set up with runners already, go back and complete the Installing the GitLab CI Runner Service section of the prerequisite tutorial before proceeding.

Now, run the following command to set up the privileged project-specific runner:

    sudo gitlab-runner register -n \
      --url https://gitlab.example.com/ \
      --registration-token your-token \
      --executor docker \
      --description "docker-builder" \
      --docker-image "docker:latest" \
      --docker-privileged

Output

Registering runner... succeeded                     runner=61SR6BwV
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Be sure to substitute your own information. We set all of our runner options on the command line instead of using the interactive prompts, because the prompts don't allow us to specify --docker-privileged mode.
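
You can also confirm the registration from the runner host itself. This is a quick optional check, not part of the original steps: gitlab-runner list reads the local config, and gitlab-runner verify asks the GitLab server to confirm each registered runner:

# List runners registered in this host's config file
sudo gitlab-runner list

# Ask the GitLab server to confirm each runner token is still valid
sudo gitlab-runner verify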

Your runner is now set up, registered, and running. To verify, switch back to your browser. Click the wrench icon in the main GitLab menu bar, then click Runners in the left-hand menu. Your runners will be listed.

Now that we have a runner capable of building Docker images, let's set up a private Docker registry for it to push images to.

Step 2 — Setting Up GitLab's Docker Registry

Setting up your own Docker registry lets you push and pull images from your own private server, increasing security and reducing the dependencies your workflow has on outside services.

GitLab will set up a private Docker registry with just a few configuration updates. First we'll set up the URL where the registry will reside. Then we will (optionally) configure the registry to use an S3-compatible object storage service to store its data.

SSH into your GitLab server, then open up the GitLab configuration file:

sudo nano /etc/gitlab/gitlab.rb

Scroll down to the Container Registry settings section. We're going to uncomment the registry_external_url line and set it to our GitLab hostname with a port number of 5555:

/etc/gitlab/gitlab.rb

registry_external_url 'https://gitlab.example.com:5555'

Next, add the following two lines to tell the registry where to find our Let's Encrypt certificates:

/etc/gitlab/gitlab.rb

registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"

Save and close the file, then reconfigure GitLab:

sudo gitlab-ctl reconfigure

Output

gitlab Reconfigured!

Update the firewall to allow traffic to the registry port:

sudo ufw allow 5555

Now switch to another machine with Docker installed, and log in to the private Docker registry. If you don’t have Docker on your local development computer, you can use whichever server is set up to run your GitLab CI jobs, as it has Docker installed already:

docker login gitlab.example.com:5555

You will be prompted for your username and password. Use your GitLab credentials to log in.

Output
Login Succeeded 

Success! The registry is set up and working. Currently it will store files on the GitLab server's local filesystem. If you'd like to use an object storage service instead, continue with this section. If not, skip down to Step 3.
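
If you'd like to smoke-test the registry beyond logging in, you can push a throwaway image. This optional check assumes the hello_hapi project exists under the sammy namespace, since GitLab's registry only accepts image paths that correspond to an existing project:

# Tag a tiny public image with our registry's path, then push it
docker pull hello-world
docker tag hello-world gitlab.example.com:5555/sammy/hello_hapi:smoke-test
docker push gitlab.example.com:5555/sammy/hello_hapi:smoke-test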

To set up an object storage backend for the registry, we need to know the following information about our object storage service:

  • Access Key
  • Secret Key
  • Region (us-east-1, for example) if using Amazon S3, or Region Endpoint if using an S3-compatible service (https://nyc3.digitaloceanspaces.com, for example)
  • Bucket Name

If you're using DigitalOcean Spaces, you can find out how to set up a new Space and get the above information by reading How To Create a DigitalOcean Space and API Key.

When you have your object storage information, open the GitLab configuration file:

sudo nano /etc/gitlab/gitlab.rb

Once again, scroll down to the container registry section. Look for the registry['storage'] block, uncomment it, and update it to the following, again making sure to substitute your own information where appropriate:

/etc/gitlab/gitlab.rb

registry['storage'] = {
  's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
    'region' => 'nyc3',
    'regionendpoint' => 'https://nyc3.digitaloceanspaces.com'
  }
}

If you're using Amazon S3, you only need region and not regionendpoint. If you're using an S3-compatible service like Spaces, you'll need regionendpoint. In this case region doesn't actually configure anything and the value you enter doesn't matter, but it still needs to be present and not blank.
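
For comparison, a plain Amazon S3 configuration would drop regionendpoint and use a real AWS region. A sketch with placeholder credentials:

/etc/gitlab/gitlab.rb

registry['storage'] = {
  's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
    'region' => 'us-east-1'
  }
}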

Save and close the file.

Note: There is currently a bug where the registry will shut down after thirty seconds if your object storage bucket is empty. To avoid this, put a file in your bucket before running the next step. You can remove it later, after the registry has added its own objects.

If you are using DigitalOcean Spaces, you can drag and drop to upload a file using the Control Panel interface.
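
If you'd rather upload the placeholder from the command line, the AWS CLI can talk to Spaces as well. A sketch, assuming your Spaces key pair is configured via aws configure and reusing the nyc3 endpoint from above:

echo placeholder > keep.txt
aws s3 cp keep.txt s3://your-bucket-name/keep.txt --endpoint-url https://nyc3.digitaloceanspaces.com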

Reconfigure GitLab one more time:

sudo gitlab-ctl reconfigure

On your other Docker machine, log in to the registry again to make sure all is well:

docker login gitlab.example.com:5555

You should get a Login Succeeded message.

Now that we've got our Docker registry set up, let's update our application's CI configuration to build and test our app, and push Docker images to our private registry.

Step 3 — Updating .gitlab-ci.yml and Building a Docker Image

Note: If you didn't complete the prerequisite article on GitLab CI you'll need to copy over the example repository to your GitLab server. Follow the Copying the Example Repository From GitHub section to do so.

To get our app building in Docker, we need to update the .gitlab-ci.yml file. You can edit this file right in GitLab by clicking on it from the main project page, then clicking the Edit button. Alternately, you could clone the repo to your local machine, edit the file, then git push it back to GitLab. That would look like this:

    git clone git@gitlab.example.com:sammy/hello_hapi.git
    cd hello_hapi
    # edit the file w/ your favorite editor
    git commit -am "updating ci configuration"
    git push

First, delete everything in the file, then paste in the following configuration:

.gitlab-ci.yml

image: docker:latest
services:
- docker:dind

stages:
- build
- test
- release

variables:
  TEST_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN gitlab.example.com:5555

build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE npm test

release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
    - master

Be sure to update the URLs and usernames above with your own information, then save with the Commit changes button in GitLab. If you're updating the file outside of GitLab, commit the changes and git push back to GitLab.

This new config file tells GitLab to use the latest docker image (image: docker:latest) and link it to the docker-in-docker service (docker:dind). It then defines build, test, and release stages. The build stage builds the Docker image using the Dockerfile provided in the repo, then uploads it to our Docker image registry. If that succeeds, the test stage will download the image we just built and run the npm test command inside it. If the test stage is successful, the release stage will pull the image, tag it as hello_hapi:latest and push it back to the registry.
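
The build stage relies on the Dockerfile already present in the hello_hapi repo, which this tutorial doesn't reproduce. If you're adapting the pipeline to another Node.js app, a minimal Dockerfile along these lines would satisfy all three stages (it assumes npm start launches a server on port 3000 and npm test runs your test suite):

FROM node:10
WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm install

# Copy in the application source
COPY . .

EXPOSE 3000
CMD ["npm", "start"]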

Depending on your workflow, you could also add additional test stages, or even deploy stages that push the app to a staging or production environment.

Updating the configuration file should have triggered a new build. Return to the hello_hapi project in GitLab and click on the CI status indicator for the commit.

On the resulting page you can then click on any of the stages to see their progress.

Eventually, all stages should indicate they were successful by showing green check mark icons. We can find the Docker images that were just built by clicking the Registry item in the left-hand menu.

If you click the little "document" icon next to the image name, it will copy the appropriate docker pull ... command to your clipboard. You can then pull and run your image:

    docker pull gitlab.example.com:5555/sammy/hello_hapi:latest
    docker run -it --rm -p 3000:3000 gitlab.example.com:5555/sammy/hello_hapi:latest

Output

> hello_hapi@<version> start /usr/src/app
> node app.js

Server running at: http://56fd5df5ddd3:3000

The image has been pulled down from the registry and started in a container. Switch to your browser and connect to the app on port 3000 to test. In this case we're running the container on our local machine, so we can access it via localhost at the following URL:

http://localhost:3000/hello/test

Output

Hello, test!

Success! You can stop the container with CTRL-C. From now on, every time we push new code to the master branch of our repository, we'll automatically build and test a new hello_hapi:latest image.

Conclusion

In this tutorial we set up a new GitLab runner to build Docker images, created a private Docker registry to store them in, and updated a Node.js app to be built and tested inside of Docker containers.

How to Install Docker on Windows 10 Home?

If you've ever tried to install Docker for Windows, you've probably come to realize that the installer won't run on Windows 10 Home. Only Windows Pro, Enterprise, or Education support Docker. Upgrading your Windows license is pricey, and also pointless, since you can still run Linux containers on Windows without relying on Hyper-V technology, a requirement for Docker for Windows.

If you plan on running Windows Containers, you’ll need a specific version and build of Windows Server. Check out the Windows container version compatibility matrix for details.

99.999% of the time, you only need a Linux Container, since it supports software built using open-source and .NET technologies. In addition, Linux Containers can run on any distro and on popular CPU architectures, including x86_64, ARM and IBM.

In this tutorial, I'll show you how to quickly set up a Linux VM on Windows 10 Home running Docker Engine with the help of Docker Machine. Here's the list of software you'll need to build and run Docker containers:

  • Docker Machine: a CLI tool for installing Docker Engine on virtual hosts
  • Docker Engine: runs on top of the Linux Kernel; used for building and running containers
  • Docker Client: a CLI tool for issuing commands to Docker Engine via REST API
  • Docker Compose: a tool for defining and running multi-container applications

I’ll show how to perform the installation in the following environments:

  1. On Windows using Git Bash
  2. On Windows Subsystem for Linux 2 (running Ubuntu 18.04)

First, allow me to explain how the Docker installation will work on Windows.

How it Works

As you probably know, Docker requires a Linux kernel to run Linux Containers. For this to work on Windows, you’ll need to set up a Linux virtual machine to run as guest in Windows 10 Home.

Setting up the Linux VM can be done manually. The easiest way is to use Docker Machine to do this work for you by running a single command. This Docker Linux VM can either run on your local system or on a remote server. The Docker client will communicate with Docker Engine over a TLS-secured TCP connection (Docker Machine itself uses SSH when provisioning the VM). Whenever you create and run images, the actual process will happen within the VM, not on your host (Windows).

Let’s dive into the next section to set up the environment needed to install Docker.

Initial Setup

You may or may not have the following applications installed on your system. I’ll assume you don’t. If you do, make sure to upgrade to the latest versions. I’m also assuming you’re running the latest stable version of Windows. At the time of writing, I’m using Windows 10 Home version 1903. Let’s start installing the following:

  1. Install Git Bash for Windows. This will be our primary terminal for running Docker commands.

  2. Install Chocolatey, a package manager for Windows. It will make the work of installing the rest of the programs easier.

  3. Install VirtualBox and its extension pack. Alternatively, if you have finished installing Chocolatey, you can simply execute this command inside an elevated PowerShell terminal:

    C:\ choco install virtualbox
    
    
  4. If you’d like to try running Docker inside the WSL2 environment, you’ll need to set up WSL2 first. You can follow this tutorial for step-by-step instructions.

Docker Engine Setup

Installing Docker Engine is quite simple. First we need to install Docker Machine.

  1. Install Docker Machine by following the instructions on its documentation page. Alternatively, you can execute this command inside an elevated PowerShell terminal:

    C:\ choco install docker-machine
    
    
  2. Using Git Bash terminal, use Docker Machine to install Docker Engine. This will download a Linux image containing the Docker Engine and have it run as a VM using VirtualBox. Simply execute the following command:

    $ docker-machine create --driver virtualbox default
    
    
  3. Next, we need to configure which ports are exposed when running Docker containers. Doing this will allow us to access our applications via localhost:<port>. Feel free to add as many as you want. To do this, you'll need to launch Oracle VM VirtualBox from your start menu. Select the default VM in the side menu. Next click on Settings > Network > Adapter 1 > Port Forwarding. You should find the ssh forwarding port already set up for you; you can add more rules the same way (or script them with VBoxManage; see the sketch after this list).

  4. Next, we need to allow Docker to mount volumes located on your hard drive. By default, you can only mount from the C:\Users directory. To add a different path, go to the Oracle VM VirtualBox GUI. Select the default VM and go to Settings > Shared Folders. Add a new one by clicking the plus symbol, then fill in the folder path and name. If there's an option called Permanent, enable it.

  5. If VirtualBox reports an invalid settings error for the VM, simply increase Video Memory under the Display tab in the settings option. Video memory is not important in this case, as we'll run the VM in headless mode.

  6. To start the Linux VM, simply execute this command in Git Bash. The Linux VM will launch. Give it some time for the boot process to complete. It shouldn’t take more than a minute. You’ll need to do this every time you boot your host OS:

    $ docker-machine start default
    
    
  7. Next, we need to set up our Docker environment variables. This allows the Docker client and Docker Compose to communicate with the Docker Engine running in the Linux VM named default. You can do this by executing these commands in Git Bash:

    # Print out docker machine instance settings
    $ docker-machine env default
    
    # Set environment variables using Linux 'export' command
    $ eval $(docker-machine env default --shell linux)
    
    

    You'll need to set the environment variables every time you start a new Git Bash terminal. If you'd like to avoid this, you can copy the eval output and save it in your .bashrc file. It should look something like this:

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="C:\Users\Michael Wanyoike\.docker\machine\machines\default"
    export DOCKER_MACHINE_NAME="default"
    export COMPOSE_CONVERT_WINDOWS_PATHS="true"
    
    

    IMPORTANT: for the DOCKER_CERT_PATH, you'll need to change the Linux file path to a Windows path format. Also note that the assigned IP address may change each time you start the default VM, so the value you saved may need updating.
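
If you prefer not to click through the VirtualBox GUI for the port-forwarding and shared-folder steps above, VBoxManage can script the same changes. A sketch, assuming the VM is named default and is powered off, and using D:\projects as a hypothetical folder to share:

# Add a NAT port-forwarding rule: name,protocol,host-ip,host-port,guest-ip,guest-port
VBoxManage modifyvm "default" --natpf1 "app,tcp,,1234,,1234"

# Share a host folder with the VM; it appears in the guest under the name "projects"
VBoxManage sharedfolder add "default" --name "projects" --hostpath "D:\projects" --automount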

In the next section, we’ll install Docker Client and Docker Compose.

Docker Tools Setup

For this section, you’ll need to install the following tools using PowerShell in admin mode. These tools are packaged inside the Docker for Windows installer. Since the installer refuses to run on Windows 10 Home, we’ll install these programs individually using Chocolatey:

C:\ choco install docker-cli
C:\ choco install docker-compose

Once the installation process is complete, you can switch back to Git Bash terminal. You can continue using PowerShell, but I prefer Linux syntax to execute commands. Let’s execute the following commands to ensure Docker is running:

# Start Docker VM
$ docker-machine start default

# Confirm Docker VM is running
$ docker-machine ls

# Configure the Docker environment to use the Docker VM
$ eval $(docker-machine env default --shell linux)

# Confirm Docker is connected. Should output Docker VM specs
$ docker info

# Run hello-world docker image. Should output "Hello from Docker"
$ docker run hello-world

If all the above commands run successfully, it means you've successfully installed Docker. If you want to try out a more ambitious example, I have a small Node.js application that I've configured to run on Docker containers. First, you'll need to install GNU Make using PowerShell with Admin privileges:

C:\ choco install make

Next, run this Node.js example to make sure you have no problems with exposed ports and mounted volumes on the Windows filesystem. First, navigate to a folder that you've already mounted in the VirtualBox settings, then execute the following commands:

$ git clone git@github.com:brandiqa/docker-node.git

$ cd docker-node/website

$ make

When you run the last command, you should see output similar to this:

docker volume create nodemodules
nodemodules
docker-compose -f docker-compose.builder.yml run --rm install
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@<version> (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@<version>: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

audited 9731 packages in 21.405s

docker-compose up
Starting website_dev_1 ... done
Attaching to website_dev_1
dev_1  |
dev_1  | > <package>@<version> start /usr/src/app
dev_1  | > parcel src/index.html --hmr-port 1235
dev_1  |
dev_1  | Server running at http://localhost:1234

Getting the above output means that volume mounting occurred successfully. Open localhost:1234 to confirm that the website can be accessed. This will confirm that you have properly configured the ports. You can edit the source code, for example change the h1 title in App.jsx. As soon as you save the file, the browser page should refresh automatically. This means hot module reloading works from a Docker container.

I'd like to bring your attention to the docker-compose.yml file in use. Getting hot module reloading to work from a Docker container in Windows requires the following:

  1. When using parcel, specify HMR port in your package.json start script:

    parcel src/index.html --hmr-port 1235

  2. In the VM’s Port Forwarding rules, make sure these ports are exposed to the host system:

    • 1234
    • 1235
  3. inotify doesn’t work on vboxsf filesystems, so file changes can’t be detected. The workaround is to set polling for Chokidar via environment variables in docker-compose.yml. Here’s the full file so that you can see how it’s set:

    version: '3'
    services:
      dev:
        image: node:10-jessie-slim
        volumes:
          - nodemodules:/usr/src/app/node_modules
          - ./:/usr/src/app
        working_dir: /usr/src/app
        command: npm start
        ports:
          - 1234:1234
          - 1235:1235
        environment:
          - CHOKIDAR_USEPOLLING=1
    volumes:
      nodemodules:
        external: true
    
    

Now that we have a fully working implementation of Docker on Windows 10 home, let’s set it up on WSL2 for those who are interested.

Windows Subsystem for Linux 2

Installing Docker on WSL2 is not as straightforward as it seems. Unfortunately, the latest version of Docker Engine can’t run on WSL2. However, there’s an older version, docker-ce=17.09.0~ce-0~ubuntu, that’s capable of running well in WSL2. I won’t be covering that here. Instead, I’ll show you how to access Docker Engine running in the VM we set up earlier from a WSL terminal.

All we have to do is install the Docker client and Docker Compose. Assuming you're running a WSL2 Ubuntu terminal, execute the following:

  1. Install Docker using the official instructions:

    # Update the apt package list.
    sudo apt-get update -y
    
    # Install Docker's package dependencies.
    sudo apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common
    
    # Download and add Docker's official public PGP key.
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
    # Verify the fingerprint.
    sudo apt-key fingerprint 0EBFCD88
    
    # Add the `stable` channel's Docker upstream repository.
    #
    # If you want to live on the edge, you can change "stable" below to "test" or
    # "nightly". I highly recommend sticking with stable!
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    
    # Update the apt package list (for the new apt repo).
    sudo apt-get update -y
    
    # Install the latest version of Docker CE.
    sudo apt-get install -y docker-ce
    
    # Allow your user to access the Docker CLI without needing root access.
    sudo usermod -aG docker $USER
    
    
  2. Install Docker Compose using this official guide. An alternative is to use PIP, which will simply install the latest stable version:

    # Install Python and PIP.
    sudo apt-get install -y python python-pip
    
    # Install Docker Compose into your user's home directory.
    pip install --user docker-compose
    
    
  3. Fix the Docker mounting issue in WSL terminal by inserting this content in /etc/wsl.conf. Create the file if it doesn’t exist:

    [automount]
    root = /
    options = "metadata"
    
    

    You’ll need to restart your machine for this setting to take effect.

  4. Assuming that the Linux Docker VM is running, you'll need to connect the Docker tools in the WSL environment to it. If you can access docker-machine from the Ubuntu terminal, run the eval command. Otherwise, you can insert the following Docker variables in your .bashrc file. Here is an example of mine:

    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="/c/Users/Michael Wanyoike/.docker/machine/machines/default"
    export DOCKER_MACHINE_NAME="default"
    export COMPOSE_CONVERT_WINDOWS_PATHS="true"
    
    

    You’ll need to restart your terminal or execute source ~/.bashrc for the settings to take effect. Running Docker commands should work properly in WSL without a hitch.
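
As a quick way to confirm the WSL side can reach the engine (assuming the default VM is up and the variables above are loaded):

# Client and server versions should both print if the connection works
docker version

# Compose reads the same environment variables
docker-compose version

# End-to-end test: runs a container on the engine inside the VM
docker run --rm hello-world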

Switching to Linux

We're now coming to the end of this article. Setting up Docker on Windows 10 Home is a bit of a lengthy process. If you plan to reformat your machine, you'll have to go through the same process again. It's worse if your job is to install Docker on multiple machines running Windows 10 Home.

A simpler solution is to switch to Linux for development. You can create a partition and set up dual booting. You can also use VirtualBox to install and run a full Linux distro inside Windows. Check out a few popular distros to see which one you'd like to try. I use Linux Lite because it's lightweight and based on Ubuntu. With VirtualBox, you can try out as many distros as you wish.

If you’re using a distro based on Ubuntu, you can install Docker easily with these commands:

# Install https://snapcraft.io/ package manager
sudo apt install snapd

# Install Docker Engine, Docker Client and Docker Compose
sudo snap install docker

# Run Docker commands without sudo
sudo usermod -aG docker $USER

You’ll need to log out and then log in for the last command to take effect. After that, you can run any Docker command without issue. You don’t need to worry about issues with mounting or ports. Docker Engine runs as a service in Linux, which by default starts automatically. No need for provisioning a Docker VM. Everything works out of the box without relying on a hack.

Summary

I hope you’ve had smooth sailing installing and running Docker on Windows 10 Home. I believe this technique should work on older versions such as Windows 7. In case you run into a problem, just go through the instructions to see if you missed something. Do note, however, that I haven’t covered every Docker feature. You may encounter a bug or an unsupported feature that requires a workaround, or may have no solution at all. If that’s the case, I’d recommend you just switch to Linux if you want a smoother development experience using Docker.

How to Install Postman in Linux

Postman is a useful tool built for API development that helps you develop APIs faster. Postman began as an extension for the Google Chrome browser, but as its popularity grew it was re-released as a standalone application. Today Postman is available for all major operating systems: Linux, Windows, and macOS. In this tutorial, you are going to learn how to install Postman on Linux.

Prerequisites

Before you start installing Postman, you must have a non-root user account with sudo privileges on your system.

Install Postman in Linux

Installing Postman on Linux is straightforward. We are going to install Postman on Ubuntu by downloading the official archive and extracting it into the /opt directory.

Download Postman by running the following command on your Linux system:

wget https://dl.pstmn.io/download/latest/linux64 -O postman-linux-x64.tar.gz

Extract the downloaded archive into the /opt directory by running the following command:

sudo tar -xvzf postman-linux-x64.tar.gz -C /opt

Finally, create a symbolic link by running the following command in the terminal:

sudo ln -s /opt/Postman/Postman /usr/bin/postman

After completing the above steps, you have successfully installed Postman on your Linux system.
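
If you'd like to verify the installation before launching the app, a quick optional check:

# The symlink should be on your PATH and resolve to the extracted app
which postman
readlink -f /usr/bin/postman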

To create a desktop launcher, run the command below:

cat << EOF > ~/.local/share/applications/postman2.desktop
[Desktop Entry]
Name=Postman
GenericName=API Client
X-GNOME-FullName=Postman API Client
Comment=Make and view REST API calls and responses
Keywords=api;
Exec=/opt/Postman/Postman
Terminal=false
Type=Application
Icon=/opt/Postman/app/resources/app/assets/icon.png
Categories=Development;Utilities;
EOF

Postman will now also appear in your applications menu.

Using Postman

To start using Postman, go to Applications -> Postman and launch it, or simply run the following command:

postman

On first launch, you will see a window prompting you to sign up with an email or Google account, or to sign in if you have an existing account.

After signing in or creating a new account, you are ready to start using Postman for API development. As an example, you can send a GET request to a URL and inspect the response.

Conclusion

In this tutorial, you have learned how to install Postman on Linux. If you have any questions, you can comment below.

Originally published on https://linux4one.com



How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 16.04

In this guide, we will demonstrate how to install and configure some components on Ubuntu 16.04 to support and serve Django applications. We will be setting up a PostgreSQL database instead of using the default SQLite database. We will configure the Gunicorn application server to interface with our applications. We will then set up Nginx to reverse proxy to Gunicorn, giving us access to its security and performance features to serve our apps.

Introduction

Django is a powerful web framework that can help you get your Python application or website off the ground. Django includes a simplified development server for testing your code locally, but for anything even slightly production related, a more secure and powerful web server is required.

Prerequisites and Goals

In order to complete this guide, you should have a fresh Ubuntu 16.04 server instance with a non-root user with sudo privileges configured. You can learn how to set this up by running through our initial server setup guide.

We will be installing Django within a virtual environment. Installing Django into an environment specific to your project will allow your projects and their requirements to be handled separately.

Once we have our database and application up and running, we will install and configure the Gunicorn application server. This will serve as an interface to our application, translating client requests in HTTP to Python calls that our application can process. We will then set up Nginx in front of Gunicorn to take advantage of its high performance connection handling mechanisms and its easy-to-implement security features.

Let’s get started.

Install the Packages from the Ubuntu Repositories

To begin the process, we’ll download and install all of the items we need from the Ubuntu repositories. We will use the Python package manager pip to install additional components a bit later.

We need to update the local apt package index and then download and install the packages. The packages we install depend on which version of Python your project will use.

If you are using Python 2, type:

sudo apt-get update
sudo apt-get install python-pip python-dev libpq-dev postgresql postgresql-contrib nginx


If you are using Django with Python 3, type:

sudo apt-get update
sudo apt-get install python3-pip python3-dev libpq-dev postgresql postgresql-contrib nginx


This will install pip, the Python development files needed to build Gunicorn later, the Postgres database system and the libraries needed to interact with it, and the Nginx web server.

Create the PostgreSQL Database and User

We’re going to jump right in and create a database and database user for our Django application.

By default, Postgres uses an authentication scheme called “peer authentication” for local connections. Basically, this means that if the user’s operating system username matches a valid Postgres username, that user can log in with no further authentication.

During the Postgres installation, an operating system user named postgres was created to correspond to the postgres PostgreSQL administrative user. We need to use this user to perform administrative tasks. We can use sudo and pass in the username with the -u option.

Log into an interactive Postgres session by typing:

sudo -u postgres psql


You will be given a PostgreSQL prompt where we can set up our requirements.

First, create a database for your project:

CREATE DATABASE myproject;


Every Postgres statement must end with a semi-colon, so make sure that your command ends with one if you are experiencing issues.

Next, create a database user for our project. Make sure to select a secure password:

CREATE USER myprojectuser WITH PASSWORD 'password';


Afterwards, we’ll modify a few of the connection parameters for the user we just created. This will speed up database operations so that the correct values do not have to be queried and set each time a connection is established.

We are setting the default encoding to UTF-8, which Django expects. We are also setting the default transaction isolation scheme to “read committed”, which blocks reads from uncommitted transactions. Lastly, we are setting the timezone. By default, our Django projects will be set to use UTC. These are all recommendations from the Django project itself:

ALTER ROLE myprojectuser SET client_encoding TO 'utf8';
ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed';
ALTER ROLE myprojectuser SET timezone TO 'UTC';


Now, we can give our new user access to administer our new database:

GRANT ALL PRIVILEGES ON DATABASE myproject TO myprojectuser;


When you are finished, exit out of the PostgreSQL prompt by typing:

\q


Create a Python Virtual Environment for your Project

Now that we have our database, we can begin getting the rest of our project requirements ready. We will be installing our Python requirements within a virtual environment for easier management.

To do this, we first need access to the virtualenv command. We can install this with pip.

If you are using Python 2, upgrade pip and install the package by typing:

sudo -H pip install --upgrade pip
sudo -H pip install virtualenv


If you are using Python 3, upgrade pip and install the package by typing:

sudo -H pip3 install --upgrade pip
sudo -H pip3 install virtualenv


With virtualenv installed, we can start forming our project. Create and move into a directory where we can keep our project files:

mkdir ~/myproject
cd ~/myproject


Within the project directory, create a Python virtual environment by typing:

virtualenv myprojectenv


This will create a directory called myprojectenv within your myproject directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for our project.

Before we install our project’s Python requirements, we need to activate the virtual environment. You can do that by typing:

source myprojectenv/bin/activate


Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (myprojectenv)user@host:~/myproject$.

With your virtual environment active, install Django, Gunicorn, and the psycopg2 PostgreSQL adaptor with the local instance of pip:

Note

Regardless of which version of Python you are using, when the virtual environment is activated, you should use the pip command (not pip3).

pip install django gunicorn psycopg2


You should now have all of the software needed to start a Django project.
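
If you want to double-check the environment before moving on (a quick sanity check, not strictly required), each package should report a version from inside the active virtualenv:

python -m django --version
gunicorn --version
python -c "import psycopg2; print(psycopg2.__version__)"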

Create and Configure a New Django Project

With our Python components installed, we can create the actual Django project files.

Create the Django Project

Since we already have a project directory, we will tell Django to install the files here. It will create a second level directory with the actual code, which is normal, and place a management script in this directory. The key to this is that we are defining the directory explicitly instead of allowing Django to make decisions relative to our current directory:

django-admin.py startproject myproject ~/myproject


At this point, your project directory (~/myproject in our case) should have the following content:

  • ~/myproject/manage.py: A Django project management script.
  • ~/myproject/myproject/: The Django project package. This should contain the __init__.py, settings.py, urls.py, and wsgi.py files.
  • ~/myproject/myprojectenv/: The virtual environment directory we created earlier.

Adjust the Project Settings

The first thing we should do with our newly created project files is adjust the settings. Open the settings file in your text editor:

nano ~/myproject/myproject/settings.py


Start by locating the ALLOWED_HOSTS directive. This defines a list of the server’s addresses or domain names that may be used to connect to the Django instance. Any incoming request with a Host header that is not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.

In the square brackets, list the IP addresses or domain names that are associated with your Django server. Each item should be listed in quotations, with entries separated by a comma. If you wish to respond to requests for an entire domain and any subdomains, prepend a period to the beginning of the entry. In the snippet below, there are a few commented out examples used to demonstrate:

~/myproject/myproject/settings.py

. . .
# The simplest case: just add the domain name(s) and IP addresses of your Django server
# ALLOWED_HOSTS = [ 'example.com', '203.0.113.5']
# To respond to 'example.com' and any subdomains, start the domain with a dot
# ALLOWED_HOSTS = ['.example.com', '203.0.113.5']
ALLOWED_HOSTS = ['your_server_domain_or_IP', 'second_domain_or_IP', . . .]


Next, find the section that configures database access. It will start with DATABASES. The configuration in the file is for a SQLite database. We already created a PostgreSQL database for our project, so we need to adjust the settings.

Change the settings with your PostgreSQL database information. We tell Django to use the psycopg2 adaptor we installed with pip. We need to give the database name, the database username, the database user’s password, and then specify that the database is located on the local computer. You can leave the PORT setting as an empty string:

~/myproject/myproject/settings.py

. . .

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject',
        'USER': 'myprojectuser',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'PORT': '',
    }
}

. . .


Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. This is necessary so that Nginx can handle requests for these items. The following line tells Django to place them in a directory called static in the base project directory:

~/myproject/myproject/settings.py

. . .

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')


Save and close the file when you are finished.

Complete Initial Project Setup

Now, we can migrate the initial database schema to our PostgreSQL database using the management script:

~/myproject/manage.py makemigrations
~/myproject/manage.py migrate


Create an administrative user for the project by typing:

~/myproject/manage.py createsuperuser


You will have to select a username, provide an email address, and choose and confirm a password.

We can collect all of the static content into the directory location we configured by typing:

~/myproject/manage.py collectstatic


You will have to confirm the operation. The static files will then be placed in a directory called static within your project directory.

If you followed the initial server setup guide, you should have a UFW firewall protecting your server. In order to test the development server, we’ll have to allow access to the port we’ll be using.

Create an exception for port 8000 by typing:

sudo ufw allow 8000


Finally, you can test out your project by starting up the Django development server with this command:

~/myproject/manage.py runserver 0.0.0.0:8000


In your web browser, visit your server’s domain name or IP address followed by :8000:

http://server_domain_or_IP:8000


You should see the default Django index page.

If you append /admin to the end of the URL in the address bar, you will be prompted for the administrative username and password you created with the createsuperuser command.

After authenticating, you can access the default Django admin interface.

When you are finished exploring, hit CTRL-C in the terminal window to shut down the development server.

Testing Gunicorn’s Ability to Serve the Project

The last thing we want to do before leaving our virtual environment is test Gunicorn to make sure that it can serve the application. We can do this by entering our project directory and using gunicorn to load the project’s WSGI module:

cd ~/myproject
gunicorn --bind 0.0.0.0:8000 myproject.wsgi


This will start Gunicorn on the same interface that the Django development server was running on. You can go back and test the app again.

Note: The admin interface will not have any of the styling applied since Gunicorn does not know about the static CSS content responsible for this.

We passed Gunicorn a module by specifying the relative directory path to Django’s wsgi.py file, which is the entry point to our application, using Python’s module syntax. Inside of this file, a function called application is defined, which is used to communicate with the application. To learn more, see the WSGI specification.
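
For reference, the myproject/wsgi.py file that startproject generated looks roughly like this (exact comments and layout vary by Django version):

import os

from django.core.wsgi import get_wsgi_application

# Point Django at this project's settings before building the WSGI callable
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# Gunicorn imports this module and calls `application` for each request
application = get_wsgi_application()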

When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

We’re now finished configuring our Django application. We can back out of our virtual environment by typing:

deactivate


The virtual environment indicator in your prompt will be removed.

Create a Gunicorn systemd Service File

We have tested that Gunicorn can interact with our Django application, but we should implement a more robust way of starting and stopping the application server. To accomplish this, we’ll make a systemd service file.

Create and open a systemd service file for Gunicorn with sudo privileges in your text editor:

sudo nano /etc/systemd/system/gunicorn.service


Start with the [Unit] section, which is used to specify metadata and dependencies. We’ll put a description of our service here and tell the init system to only start this after the networking target has been reached:

/etc/systemd/system/gunicorn.service

[Unit]
Description=gunicorn daemon
After=network.target


Next, we’ll open up the [Service] section. We’ll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We’ll give group ownership to the www-data group so that Nginx can communicate easily with Gunicorn.

We’ll then map out the working directory and specify the command to use to start the service. In this case, we’ll have to specify the full path to the Gunicorn executable, which is installed within our virtual environment. We will bind it to a Unix socket within the project directory since Nginx is installed on the same computer. This is safer and faster than using a network port. We can also specify any optional Gunicorn tweaks here. For example, we specified 3 worker processes in this case:

/etc/systemd/system/gunicorn.service

[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/sammy/myproject/myproject.sock myproject.wsgi:application


Finally, we’ll add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

/etc/systemd/system/gunicorn.service

[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/sammy/myproject/myproject.sock myproject.wsgi:application

[Install]
WantedBy=multi-user.target


With that, our systemd service file is complete. Save and close it now.

We can now start the Gunicorn service we created and enable it so that it starts at boot:

sudo systemctl start gunicorn
sudo systemctl enable gunicorn


We can confirm that the operation was successful by checking for the socket file.

Check for the Gunicorn Socket File

Check the status of the process to find out whether it was able to start:

sudo systemctl status gunicorn


Next, check for the existence of the myproject.sock file within your project directory:

ls /home/sammy/myproject

Output
manage.py  myproject  myprojectenv  myproject.sock  static


If the systemctl status command indicated that an error occurred or if you do not find the myproject.sock file in the directory, it’s an indication that Gunicorn was not able to start correctly. Check the Gunicorn process logs by typing:

sudo journalctl -u gunicorn


Take a look at the messages in the logs to find out where Gunicorn ran into problems. There are many reasons that you may have run into problems, but often, if Gunicorn was unable to create the socket file, it is for one of these reasons:

  • The project files are owned by the root user instead of a sudo user
  • The WorkingDirectory path in the service file does not point to the project directory
  • The ExecStart path does not point to the Gunicorn executable inside the virtual environment, or its options are incorrect

If you make changes to the /etc/systemd/system/gunicorn.service file, reload the daemon to reread the service definition and restart the Gunicorn process by typing:

sudo systemctl daemon-reload
sudo systemctl restart gunicorn


Make sure you troubleshoot any of the above issues before continuing.

Configure Nginx to Proxy Pass to Gunicorn

Now that Gunicorn is set up, we need to configure Nginx to pass traffic to the process.

Start by creating and opening a new server block in Nginx’s sites-available directory:

sudo nano /etc/nginx/sites-available/myproject


Inside, open up a new server block. We will start by specifying that this block should listen on the normal port 80 and that it should respond to our server’s domain name or IP address:

/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name server_domain_or_IP;
}


Next, we will tell Nginx to ignore any problems with finding a favicon. We will also tell it where to find the static assets that we collected in our ~/myproject/static directory. All of these files have a standard URI prefix of “/static”, so we can create a location block to match those requests:

/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name server_domain_or_IP;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sammy/myproject;
    }
}


Finally, we’ll create a location / {} block to match all other requests. Inside of this location, we’ll include the standard proxy_params file included with the Nginx installation and then we will pass the traffic to the socket that our Gunicorn process created:

/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name server_domain_or_IP;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sammy/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/sammy/myproject/myproject.sock;
    }
}


Save and close the file when you are finished. Now, we can enable the file by linking it to the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled


Test your Nginx configuration for syntax errors by typing:

sudo nginx -t


If no errors are reported, go ahead and restart Nginx by typing:

sudo systemctl restart nginx


Finally, we need to open up our firewall to normal traffic on port 80. Since we no longer need access to the development server, we can remove the rule to open port 8000 as well:

sudo ufw delete allow 8000
sudo ufw allow 'Nginx Full'


You should now be able to go to your server’s domain or IP address to view your application.

Note

After configuring Nginx, the next step should be securing traffic to the server using SSL/TLS. This is important because without it, all information, including passwords, is sent over the network in plain text.

If you have a domain name, the easiest way to get an SSL certificate to secure your traffic is with Let’s Encrypt. Follow this guide to set up Let’s Encrypt with Nginx on Ubuntu 16.04.

If you do not have a domain name, you can still secure your site for testing and learning with a self-signed SSL certificate.

Troubleshooting Nginx and Gunicorn

If this last step does not show your application, you will need to troubleshoot your installation.

Nginx Is Showing the Default Page Instead of the Django Application

If Nginx displays the default page instead of proxying to your application, it usually means that you need to adjust the server_name within the /etc/nginx/sites-available/myproject file to point to your server’s IP address or domain name.

Nginx uses the server_name to determine which server block to use to respond to requests. If you are seeing the default Nginx page, it is a sign that Nginx wasn’t able to match the request to a server block explicitly, so it’s falling back on the default block defined in /etc/nginx/sites-available/default.

The server_name in your project’s server block must be more specific than the one in the default server block to be selected.
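
For example, explicitly listing your real domain names and IP address (placeholders shown here) in the project’s server block ensures it is selected ahead of the catch-all default:

/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name example.com www.example.com 203.0.113.5;

    . . .
}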

Nginx Is Displaying a 502 Bad Gateway Error Instead of the Django Application

A 502 error indicates that Nginx is unable to successfully proxy the request. A wide range of configuration problems express themselves with a 502 error, so more information is required to troubleshoot properly.

The primary place to look for more information is in Nginx’s error logs. Generally, this will tell you what conditions caused problems during the proxying event. Follow the Nginx error logs by typing:

sudo tail -F /var/log/nginx/error.log


Now, make another request in your browser to generate a fresh error (try refreshing the page). You should see a fresh error message written to the log. If you look at the message, it should help you narrow down the problem.

You might see some of the following messages:

connect() to unix:/home/sammy/myproject/myproject.sock failed (2: No such file or directory)

This indicates that Nginx was unable to find the myproject.sock file at the given location. You should compare the proxy_pass location defined within /etc/nginx/sites-available/myproject file to the actual location of the myproject.sock file generated in your project directory.

If you cannot find a myproject.sock file within your project directory, it generally means that the gunicorn process was unable to create it. Go back to the section on checking for the Gunicorn socket file to step through the troubleshooting steps for Gunicorn.

connect() to unix:/home/sammy/myproject/myproject.sock failed (13: Permission denied)

This indicates that Nginx was unable to connect to the Gunicorn socket because of permissions problems. Usually, this happens when the procedure is followed using the root user instead of a sudo user. While the Gunicorn process is able to create the socket file, Nginx is unable to access it.

This can happen if there are limited permissions at any point between the root directory (/) and the myproject.sock file. We can see the permissions and ownership values of the socket file and each of its parent directories by passing the absolute path to our socket file to the namei command:

namei -nom /home/sammy/myproject/myproject.sock

Output
f: /home/sammy/myproject/myproject.sock
 drwxr-xr-x root  root     /
 drwxr-xr-x root  root     home
 drwxr-xr-x sammy sammy    sammy
 drwxrwxr-x sammy sammy    myproject
 srwxrwxrwx sammy www-data myproject.sock


The output displays the permissions of each of the directory components. By looking at the permissions (first column), owner (second column) and group owner (third column), we can figure out what type of access is allowed to the socket file.

In the above example, the socket file and each of the directories leading up to the socket file have world read and execute permissions (the permissions column for the directories end with r-x instead of ---). The Nginx process should be able to access the socket successfully.

If any of the directories leading up to the socket do not have world read and execute permission, Nginx will not be able to access the socket without allowing world read and execute permissions or making sure group ownership is given to a group that Nginx is a part of. For sensitive locations like the /root directory, both of the above options are dangerous. It’s better to move the project files outside of the directory, where you can safely control access without compromising security.
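
As a sketch of those two remedies, assuming your project lives under /home/sammy and Nginx runs as the www-data user:

# Option 1: allow everyone to traverse (execute) each parent directory
sudo chmod o+x /home/sammy

# Option 2: add the www-data user to your group, then allow group traversal
sudo usermod -aG sammy www-data
sudo chmod g+x /home/sammy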

Django Is Displaying: “could not connect to server: Connection refused”

One message that you may see from Django when attempting to access parts of the application in the web browser is:

OperationalError at /admin/login/
could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?


This indicates that Django is unable to connect to the Postgres database. Make sure that the Postgres instance is running by typing:

sudo systemctl status postgresql


If it is not, you can start it and enable it to start automatically at boot (if it is not already configured to do so) by typing:

sudo systemctl start postgresql
sudo systemctl enable postgresql


If you are still having issues, make sure the database settings defined in the ~/myproject/myproject/settings.py file are correct.

Further Troubleshooting

For additional troubleshooting, the logs can help narrow down root causes. Check each of them in turn and look for messages indicating problem areas.

The following logs may be helpful:

  • Check the Nginx process logs by typing: sudo journalctl -u nginx
  • Check the Nginx access logs by typing: sudo less /var/log/nginx/access.log
  • Check the Nginx error logs by typing: sudo less /var/log/nginx/error.log
  • Check the Gunicorn application logs by typing: sudo journalctl -u gunicorn

As you update your configuration or application, you will likely need to restart the processes to adjust to your changes.

If you update your Django application, you can restart the Gunicorn process to pick up the changes by typing:

sudo systemctl restart gunicorn


If you change the Gunicorn systemd service file, reload the daemon and restart the process by typing:

sudo systemctl daemon-reload
sudo systemctl restart gunicorn


If you change the Nginx server block configuration, test the configuration and then restart Nginx by typing:

sudo nginx -t && sudo systemctl restart nginx


These commands are helpful for picking up changes as you adjust your configuration.

Conclusion

In this guide, we’ve set up a Django project in its own virtual environment. We’ve configured Gunicorn to translate client requests so that Django can handle them. Afterwards, we set up Nginx to act as a reverse proxy to handle client connections and serve the correct project depending on the client request.

Django makes creating projects and applications simple by providing many of the common pieces, allowing you to focus on the unique elements. By leveraging the general tool chain described in this article, you can easily serve the applications you create from a single server.
