Deploying a python-django application using docker


Originally published by Lewis Kori at lewiskori.com

Hey there! I was inspired to write this post by my experience trying to move my deployments over to docker, particularly for django applications, and not being able to find a comprehensive article that covered what I needed. Hopefully this article will help anyone out there who is feeling as stuck as I was.

A lot of you might have already heard this phrase being thrown around almost everywhere you turn. You've probably googled the term docker and even tried experimenting with it, but might have given up along the way. Heck, to be honest, I held off on it once or twice before taking the full dive. It can be a little intimidating at first, but oh boy! Once you start using docker, there's no going back. The ease of moving between development and production environments is simply mind-blowing, to say the least!

So, enough rambling, let's get started.

What is docker?

Docker is an open-source tool that automates the deployment of an application inside a software container. Containers are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system.

For detailed information on the workings of docker, I'd recommend reading this article, and for those not comfortable reading long posts, this tutorial series on youtube was especially useful in introducing me to the concepts of docker.

Installing docker.

In case you don't have docker installed on your machine, follow the detailed steps below as per your operating system:

1. Windows 10 Pro

2. Windows 10 (non-Pro editions)

3. Ubuntu

Going forward, I'll assume you already have an existing django application, so this tutorial will just be a guide on how to containerize it.

Getting started

For deploying a typical django application, you'll need the following services to get it running:

  1. Nginx - to serve static files and act as the web server/reverse proxy
  2. Postgres (or any database of your choice)
  3. Python with gunicorn installed

To launch each of these services, you'll need a Dockerfile. This is basically a text document listing all the commands and steps you would normally run on the command line to assemble an image.

1. Python image

FROM python:3.6

RUN mkdir /code
WORKDIR /code

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

  1. The first line has to start with the FROM keyword. It tells docker which base image to base your image on. In this case, we are creating an image from the python 3.6 image.
  2. RUN is used to execute instructions against the image; here we create a directory named code. After this, WORKDIR sets the code directory as the working directory, so that any further instructions in the Dockerfile run within this directory.
  3. COPY copies specific files from the host machine to the image we are creating. The requirements.txt file is copied into the working directory set previously. After this, we RUN pip install to install the python packages needed for the project.
  4. Finally, COPY your current working directory's project files from the host machine onto the docker image.

To build this image, run the simple command

docker build .

from the directory containing the Dockerfile.
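In practice you'll usually also tag the image so you can refer to it by name later; for example (the image name here is just an illustration):

docker build -t mysite_web .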

For our use case we'll have multiple images, and running this command for every image will be tiresome. Hence the need for docker-compose. More on that as we finalize.

2. Nginx image

FROM nginx

RUN rm /etc/nginx/conf.d/default.conf
COPY mysite.conf /etc/nginx/conf.d

The commands are the same as for python, only specific to nginx.

In this case we use the nginx base image, delete the default configuration file that ships with nginx, and replace it with our custom config file, which might look something like this:

upstream my_site {
    server web:8080;
}

server {
    listen 80;
    charset utf-8;
    server_name 127.0.0.1;

    client_max_body_size 4G;
    access_log /code/logs/nginx-access.log;
    error_log /code/logs/nginx-error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://my_site;
            break;
        }
    }

    location /static/ {
        autoindex on;
        alias /code/static_cdn/;
    }

    location /media/ {
        autoindex on;
        alias /code/media_cdn/;
    }
}

The file locations will, of course, be relative to your own configuration. Note that the upstream host web matches the name we'll give the django service in docker-compose; docker's embedded DNS resolves service names to container addresses.

3. Postgres image

And lastly, we get to the database. In this use case, I used postgres.

FROM postgres:latest

COPY ./init/01-db_setup.sh /docker-entrypoint-initdb.d/01-db-setup.sh

And now you're thinking:

"But Lewis, what's this init file?"

For context, let's take a look at the postgres directory within our project:

postgres
├── Dockerfile
└── init
    └── 01-db_setup.sh

This is a shell script (a docker entrypoint) specifying what commands to run on the database container: things like creating the database and user, and granting privileges to the said user.

#!/bin/sh

psql -U postgres -c "CREATE USER $POSTGRES_USER PASSWORD '$POSTGRES_PASSWORD'"
psql -U postgres -c "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE $POSTGRES_DB TO $POSTGRES_USER"

Note: when you create this file, don't forget to make it executable by running

sudo chmod u+x filename.sh

4. Wrapping things up with docker-compose

At this point, you've probably noticed that we have a lot of Dockerfiles. With docker-compose, we can conveniently build all these images using the command

docker-compose build

First off, we'll need to create a docker-compose.yml file within our project directory. We'll specify the services needed for our webapp to run within this file.

version: '3'

services:

  web:
    build: .
    container_name: great
    volumes:
      - .:/code
      - static:/code/static_cdn
      - media:/code/media_cdn
    depends_on:
      - postgres
    expose:
      - 8080
    command: bash -c "python manage.py collectstatic --no-input && python manage.py makemigrations && python manage.py migrate && gunicorn --workers=3 projectname.wsgi -b 0.0.0.0:8080"

  postgres:
    build: ./postgres
    restart: unless-stopped
    expose:
      - "5432"
    environment: # will be used by the init script
      LC_ALL: C.UTF-8
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data/

  nginx:
    restart: always
    build: ./nginx/
    volumes:
      - ./nginx/:/etc/nginx/conf.d
      - ./logs/:/code/logs
      - static:/code/static_cdn
      - media:/code/media_cdn
    ports:
      - "1221:80"
    links:
      - web

volumes:
  pgdata:
  media:
  static:

Going through these entries line by line:

  1. version - specifies the docker-compose syntax version we'll be using.
  2. services - from this point, we list the different services we'll be launching. As specified above, these are nginx, python and postgres, and we can name them as we want. In my case, I've named them nginx, web and postgres.
  3. build - remember all those Dockerfiles we spent time writing? Good. Using the build key, you can specify the location of each individual Dockerfile, and based on the instructions in these files, an image will be built.
  4. container_name - gives the container the name you specified once the containers are up and running.
  5. volumes - a way of sharing data between the docker containers and the host machine. Volumes also allow data to persist even after the containers are destroyed and recreated, which is something you'll find yourself doing often. For more on volumes and how to use them, check out this article.
  6. ports - specifies which ports from the docker containers are mapped to the host machine. Taking the nginx service, for example, the container's port 80 is mapped to the host machine's port 1221.
  7. expose - exposing a port makes it accessible to linked services, but not from the host machine.
  8. restart - specifies the behaviour of the container in case of an unforeseen shutdown.
  9. command - instructs the container what to run on startup; in this case, the chained commands in the web service collect static files, create and apply database migrations, and bind gunicorn to port 8080.

5. Final steps

To build the images, it's now a matter of simply running

docker-compose build

This might take a few minutes, as the base images will be downloaded if you don't already have them locally.

To start the various service containers, simply run

docker-compose up

or, if you want to specify which compose file to run (in case of multiple docker-compose files within one directory):

docker-compose -f filename.yml up

DISCLAIMER: don't forget to set DEBUG = False and update ALLOWED_HOSTS in django's settings.py file to reflect the domain name or IP address you'll be using.
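For example (the host values here are placeholders; use your own domain or server IP):

DEBUG = False
ALLOWED_HOSTS = ['example.com', '127.0.0.1']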

In addition to this, change the database settings from the default sqlite3 that ships with django to reflect the database name and user we specified in the environment section of the postgres service, like so:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'postgres',
        'PORT': 5432,
    }
}

And that's it.

To view the running site, visit:

  1. localhost:1221
  2. virtual-box-machine-ip:1221 (for those using docker-toolbox)
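If the site doesn't come up, you can confirm that all three containers are running and check their port mappings with:

docker-compose ps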

In case you want to stop the containers:

docker-compose stop

To start the stopped containers:

docker-compose start

To destroy the containers:

docker-compose down

If you made changes to the Dockerfiles and need those changes applied:

docker-compose down && docker-compose build && docker-compose up

Now, to get the site up and running on the web, create a configuration file for your server's own nginx (or apache) and point it at the docker container running your django app. In this case, you'll point it at the nginx container, which is reachable on the host at

127.0.0.1:1221
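As a rough sketch, a host-level nginx config for this might look like the following (example.com is a placeholder for your domain):

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:1221;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}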

To get a list of common commands you'll need for docker, read this concise post.

For laravel developers, here's something to get you started with docker.

Thank you very much for your time, and I hope this article was useful. If you want more of this, feel free to contact me.


===================================================================


Python Django with Docker and Gitlab CI

In this article I will describe how we set up Gitlab CI to run tests for a Django project. But first, a couple of words about what tools we were using.

For a project I was specifically asked to build an API using Python Django. So, my first starting point was to google "django cookiecutter", which immediately brought me to this amazing cookiecutter project. What I am going to demonstrate here is how to quickly set up the project (for the sake of completeness) and use Gitlab Continuous Integration to automatically run unit tests, run linters, generate documentation, build a container and release it.

Setup the project

We start by initializing the project using the mentioned cookiecutter, although you can also use another cookiecutter or build on your existing project; you'll probably need to make some alterations here and there. Here is a small list of prerequisites:

  • you have docker installed locally
  • you have Python installed locally
  • you have a Gitlab account and you can push using ssh keys

Now, install cookiecutter and generate the project:

pip install "cookiecutter>=1.4.0"
cookiecutter https://github.com/pydanny/cookiecutter-django

Provide the options in any way you like, so the Django project will be created. Type y when asked to include Docker (because that is why we are here!).

Walk through the options for the Django cookiecutter

Enter the project, create a git repo and push it there:

cd my_django_api
git init
git add .
git commit -m "first awesome commit"
git remote add origin git@gitlab.com:jwdobken/my-django-api.git
git push -u origin master

Obviously replace my-django-api with your project name and jwdobken with your own Gitlab account name.

You can read here how to develop while running docker locally. It's something I do with all my projects of any type; the dev and production environments are much more alike, and it has been years since I last worked with something like virtual environments. I am not missing them!
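Assuming the local.yml compose file the cookiecutter generates, local development typically boils down to something like:

docker-compose -f local.yml build
docker-compose -f local.yml up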

Add a test environment

Make a test environment by copying the local environment:

cp local.yml test.yml
cp requirements/local.txt requirements/test.txt
cp -r compose/local compose/test

In compose/test/django/Dockerfile, change requirements/local.txt to requirements/test.txt. You can make more alterations to the test environment later.
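The change itself is a one-liner; assuming the generated Dockerfile installs requirements with a pip install instruction, the test variant ends up looking something like:

RUN pip install --no-cache-dir -r /requirements/test.txt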

The Gitlab-CI file

Finally, we get to the meat. Here is the .gitlab-ci.yml file:

image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_BUILD_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest

stages:
  - test
  - build
  - release

test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f test.yml build
    # - docker-compose -f test.yml run --rm django pydocstyle
    - docker-compose -f test.yml run --rm django flake8
    - docker-compose -f test.yml run django coverage run -m pytest
    - docker-compose -f local.yml run --rm django coverage html
    - docker-compose -f local.yml run --rm django /bin/sh -c "cd docs && apk add make && make html"
    - docker-compose -f local.yml run django coverage report
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"
  artifacts:
    paths:
      - htmlcov
      - docs/_build
    expire_in: 5 days

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $CONTAINER_TEST_IMAGE -f compose/production/django/Dockerfile .
    - docker push $CONTAINER_TEST_IMAGE

release:
  stage: release
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

pages:
  stage: release
  script:
    - mkdir -p public/coverage
    - mv htmlcov/* public/coverage
    - mkdir -p public/docs
    - mv -v docs/_build/html/* public/docs
  artifacts:
    paths:
      - public
    expire_in: 30 days
  only:
    - master

The test stage builds the container stack in the test environment, runs the flake8 linter and the unit tests, copies the html coverage report and captures the total coverage. Also, we misuse the test build to generate the sphinx documentation, for which we need to install Make.

The build stage builds the production container and pushes it to the Gitlab container registry.

The release stage pulls the build container and tags it as the latest release before pushing it to the container registry.

The pages job publishes the test coverage and documentation artifacts with Gitlab Pages.

Push your code to Gitlab where you should find a running pipeline.

the pipeline is running

the pipeline has successfully finished

In the container registry of the project you can find two images: the latest master image and the latest release image. The page itself explains how to pull images from here to anywhere.

Badges

Gitlab lets you add badges on the repo page to surface useful information. On the Gitlab project page, go to Settings > Badges. Here you can add the following badges:

Pipeline status of the master branch:

  • Link: https://gitlab.com/%{project_path}/pipelines

  • Badge image URL: https://gitlab.com/%{project_path}/badges/%{default_branch}/pipeline.svg

Test coverage and report:

  • Link: https://<username>.gitlab.io/my-django-api/coverage/

  • Badge image URL: https://gitlab.com/%{project_path}/badges/%{default_branch}/coverage.svg?job=test

Documentation:

  • Link: https://<username>.gitlab.io/my-django-api/docs/

  • Badge image URL: https://img.shields.io/static/v1.svg?label=sphinx&message=documentation&color=blue

Note that the URL of Gitlab Pages, for the test coverage report and documentation, is not straightforward. Replace the username with a group name if you work in a group; in the case of a subgroup, provide the full path. Usually I end up with a bit of trial and error; this article explains most of it.

status badges shown on the repo page

Pydocstyle

Finally, I highly recommend checking the existence and quality of your docstrings using pydocstyle. Add the following line to requirements/test.txt and requirements/local.txt in the Code quality section:

pydocstyle==3.0.0  # https://github.com/PyCQA/pydocstyle

Add the following lines to setup.cfg to configure pydocstyle:

[pydocstyle]
match = (?!\d{4}_).*\.py

And finally, add the following line to .gitlab-ci.yml in the script section of the test stage (just after the build):

- docker-compose -f test.yml run --rm django pydocstyle

Be warned that the generated project does not comply with pydocstyle by default, so you will have to complete the code with docstrings to pass the test.
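As a minimal illustration, a module-level docstring like this one (the wording is up to you) is enough to silence the missing-docstring error for that module:

"""Views for the users app."""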

Finally

We now have a fresh Django project with a neat CI pipeline on Gitlab for automated unit tests, documentation and container image release. You can later include Continuous Deployment to the pipeline; I left it out of the scope, because it depends too much on your production environment. You can read more about Gitlab CI here.

Currently the pipeline is quite slow, mainly due to building the images. The running time can be reduced by caching dependencies.
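One common approach, sketched here rather than taken from the pipeline above, is to pull the last released image and reuse its layers as a build cache in the build stage:

- docker pull $CONTAINER_RELEASE_IMAGE || true
- docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_TEST_IMAGE -f compose/production/django/Dockerfile .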

There is a soft (10GB) size restriction for registry on GitLab.com, as part of the repository size limit. Therefore, when the number of images increases, you probably need to archive old images manually.

===================================================================


How to Create Docker Containers for Python

This tutorial walks you through the full process of containerizing an existing Python application using Docker and pushing the app image to a Docker registry, all within Visual Studio Code. The tutorial also demonstrates how to use base container images that include production-ready web servers (uwsgi and nginx), and how to configure those servers for both Django and Flask web apps, which is helpful to know no matter what your deployment target.

If you have any problems, feel free to file an issue for this tutorial in the VS Code documentation repository.

An introduction to containers

Docker is a system that allows you to deploy and run apps using containers rather than setting up dedicated environments like virtual machines. A container is a lightweight runtime environment that shares the resources of the host operating system with other containers. Docker is the layer that sits above the operating system to manage resources on behalf of containers.

A container is specifically an instance of a Docker image, an executable package that contains everything needed to run your app: app code, configuration files, runtimes, and all of the app's dependencies. An image can be used to instantiate any number of identical containers, which is especially useful when scaling out a cloud-based web app. Because container images are much smaller than virtual machine images, instances can be started and stopped much more quickly than virtual machines, enabling your app to be highly responsive to varying loads at a minimal cost. (When used to scale web apps, containers are often managed in clusters, which are then managed by an orchestration agent such as Kubernetes.)

Images, for their part, are built in multiple layers. The lowest or base layers of an image are typically common elements like the Python runtime; the higher layers contain more specialized elements like your application code. Because of layering, it takes very little time to rebuild an image when changing only the top layer with your app code. Similarly, when you push an image to a container registry, an online repository for images from which you can deploy to cloud services like Azure, only the modified layers need be uploaded and redeployed. As a result, using containers has only a small impact on your develop-test-deploy loop.

You experience the basics of containers and images in the course of this tutorial. For additional background, including helpful diagrams, refer to the Docker documentation.

Prerequisites

App code

If you don't already have an app you'd like to work with, use one of the samples referenced later in this tutorial (python-sample-vscode-flask-tutorial or python-sample-vscode-django-tutorial), which already include the Docker-related files described here.

After verifying that your app runs properly, generate a requirements.txt file (using pip freeze > requirements.txt, for example) so that those dependencies can be automatically installed in the Docker image. The samples each include a requirements.txt file.

Create a container registry

As mentioned earlier, a container registry is an online repository for container images that allows a cloud service, like Azure App Service, to acquire the image whenever it needs to start a container instance. Because the registry manages images separate from container instances, the same image in a registry can be used to start any number of concurrent instances, as happens when scaling out a web app to handle increased loads.

Because setting up a registry is a one-time affair, you do that step now before creating images that you then push to that registry.

Registry options include the following:

  • The Azure Container Registry (ACR), a private, secure, hosted registry for your images.
  • Docker Hub, Docker's own hosted registry that provides a free way to share images.
  • A private registry running on your own server, as described on Docker registry in the Docker documentation.

To create an Azure Container Registry, as shown later in this tutorial, do the following:

  1. Follow the first part of Quickstart: Create a container registry using the Azure portal through the "Log in to ACR" section. You don't need to complete the sections "Push image to ACR" and later because you do those steps within VS Code as part of this tutorial.

  2. Make sure that the registry endpoint you created is visible under Registries in the Docker explorer of VS Code:

Create a container image

A container image is a bundle of your app code and its dependencies. To create an image, Docker needs a Dockerfile that describes how to structure the app code in the container and how to get that code running. The Dockerfile, in other words, is the template for your image. The Docker extension helps you create these files with customization for production servers.

Create the Docker files
  1. In VS Code, open the Command Palette (⇧⌘P (Windows, Linux Ctrl+Shift+P)) and select the Docker: Add Docker files to workspace command.

  2. When the prompt appears after a few moments, select Python as the app type.

  3. Specify the port on which your app listens, such as 8000 (as in the Django sample) or 5000 (as in the Flask sample). The port value ends up only in the Docker compose files (see below) and has no impact on your container image.

  4. With all this information, the Docker extension creates the following files:

    • The Dockerfile file describes the contents of your app's layer in the image. Your app layer is added on top of the base image indicated in the Dockerfile. By default, the name of the image is the name of the workspace folder in VS Code.

    • A .dockerignore file that reduces image size by excluding files and folders that aren't needed in the image, such as .git and .vscode. For Python, add another line to the file for __pycache__ (a minimal example follows this list).

    • docker-compose.yml and docker-compose.debug.yml files that are used with Docker compose. For the purposes of this tutorial, you can ignore or delete these files.
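A minimal .dockerignore for a Python project might look like this (the exact contents depend on your project):

    .git
    .vscode
    __pycache__
    *.pyc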

Tip: VS Code provides great support for Docker files. See the Working with Docker article to learn about rich language features like smart suggestions, completions, and error detection.

Using production servers

For Python, the Docker extension by default specifies the base image python:alpine in the Dockerfile and includes commands to run only the Flask development server. These defaults obviously don't accommodate Django, for one, and when deploying to the cloud, as with Azure App Service, you should also use production-ready web servers instead of a development server. (If you've used Flask, you're probably accustomed to seeing the development server's warning in this regard!)

For this reason, you need to modify the Dockerfile to use a base image with production servers, then provide the necessary configuration for your app. The following sections provide details for both Flask and Django.

Changes for Flask apps

A good base image for Flask is tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7, which is also available for other versions of Python (see the tiangolo/uwsgi-nginx-flask repository on GitHub). This image already contains Flask and the production-ready uwsgi and nginx servers.

By default, the image assumes that (a) your app code is located in an app folder, (b) the Flask app object is named app, and (c) the app object is located in main.py. Because your app may have a different structure, you can indicate the correct folders in the Dockerfile and provide the necessary parameters to the uwsgi server in a uwsgi.ini file.

The following steps summarize the configuration used in the python-sample-vscode-flask-tutorial app, which you can adapt for your own code.

  1. The Dockerfile indicates the location and name of the Flask app object, the location of static files for nginx, and the location of the uwsgi.ini file. (The Dockerfile in the sample contains further explanatory comments that are omitted here.)

    FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
    
    ENV LISTEN_PORT=5000
    EXPOSE 5000
    
    # Indicate where uwsgi.ini lives
    ENV UWSGI_INI uwsgi.ini
    
    # Tell nginx where static files live.
    ENV STATIC_URL /hello_app/static
    
    # Set the folder where uwsgi looks for the app
    WORKDIR /hello_app
    
    # Copy the app contents to the image
    COPY . /hello_app
    
    # If you have additional requirements beyond Flask (which is included in the
    # base image), generate a requirements.txt file with pip freeze and uncomment
    # the next three lines.
    #COPY requirements.txt /
    #RUN pip install --no-cache-dir -U pip
    #RUN pip install --no-cache-dir -r /requirements.txt
    
  2. The uwsgi.ini file, which is in the root of the sample project folder, provides configuration arguments for the uwsgi server. For the sample, the configuration below says that the Flask app object is found in the hello_app/webapp.py module, and that it's named (that is, "callable" as) app. The other values are additional common uwsgi settings:

    [uwsgi]
    module = hello_app.webapp
    callable = app
    uid = 1000
    master = true
    threads = 2
    processes = 4
    

Changes for Django apps

A good base image for Django is tiangolo/uwsgi-nginx:python3.6-alpine3.7, which is also available for other versions of Python (see the tiangolo/uwsgi-nginx repository on GitHub).

This base image already contains the production-ready uwsgi and nginx servers, but does not include Django. It's also necessary to provide settings to uwsgi so it can find the app's startup code.

The following steps summarize the configuration used in the python-sample-vscode-django-tutorial app that you can adapt for your own code.

  1. Make sure you have a requirements.txt file in your project that contains Django and its dependencies. You can generate requirements.txt using the command pip freeze > requirements.txt.

  2. In your Django project's settings.py file, modify the ALLOWED_HOSTS list to include the root URL to which you intend to deploy the app. For example, the following code assumes deployment to an Azure App Service (azurewebsites.net) named "vsdocs-django-sample-container":

    ALLOWED_HOSTS = [
        # Example host name only; customize to your specific host
        "vsdocs-django-sample-container.azurewebsites.net"
    ]
    

    Without this entry, you'll eventually get all the way through the deployment only to see a "DisallowedHost" message that instructs you to add the domain to ALLOWED_HOSTS, which requires that you rebuild, push, and redeploy the image all over again!

  3. Create a uwsgi.ini file in the Django project folder (alongside manage.py) that contains startup arguments for the uwsgi server. In the sample, the Django project is in a folder called web_project, which is where the wsgi.py and settings.py files live.

    [uwsgi]
    chdir = .
    module = web_project.wsgi:application
    env = DJANGO_SETTINGS_MODULE=web_project.settings
    uid = 1000
    master = true
    threads = 2
    processes = 4
    
  4. To serve static files, copy the nginx.conf file from the django-react-devcontainer repo into your Django project folder.

  5. Modify the Dockerfile to indicate the location of uwsgi.ini, set the location of static files for nginx, and make sure the SQLite database file is writable. (The Dockerfile in the sample contains further explanatory comments that are omitted here.)

    FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
    
    ENV LISTEN_PORT=8000
    EXPOSE 8000
    
    # Indicate where uwsgi.ini lives
    ENV UWSGI_INI uwsgi.ini
    
    # Tell nginx where static files live (as typically collected using Django's
    # collectstatic command).
    ENV STATIC_URL /app/static_collected
    
    # Copy the app files to a folder and run it from there
    WORKDIR /app
    ADD . /app
    
    # Make app folder writable for the sake of db.sqlite3, and make that file also writable.
    RUN chmod g+w /app
    RUN chmod g+w /app/db.sqlite3
    
    # Make sure dependencies are installed
    RUN python3 -m pip install -r requirements.txt
    

Note: When building a Docker image on Windows, you typically see the message below, which is why the Dockerfile shown here includes the two chmod commands. If you need to make other files writable, add the appropriate chmod commands to your Dockerfile.

SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

Build and test the image

With the necessary Dockerfile in place, you're ready to build the Docker image and run it locally:

  1. Make sure that Docker is running on your computer.

  2. On the VS Code Command Palette (⇧⌘P (Windows, Linux Ctrl+Shift+P)), select Docker: Build Image.

  3. When prompted for the Docker file, choose the Dockerfile that you created in the previous section. (VS Code remembers your selection so you won't need to enter it again to rebuild.)

  4. When prompted for a name to give the image, use a name that follows the conventional form of <registry or username>/<image name>:<tag>, where <tag> is typically latest. Here are some examples (when using the Azure Container Registry):

    # Examples for Azure Container Registry, prefixed with the registry name
    vsdocsregistry.azurecr.io/python-sample-vscode-django-tutorial:latest
    vsdocsregistry.azurecr.io/python-sample-vscode-flask-tutorial:latest
    vsdocsregistry.azurecr.io/myexpressapp:latest
    
    # Examples for Docker hub, prefixed with your username
    vsdocs-team/python-sample-vscode-django-tutorial:latest
    vsdocs-team/python-sample-vscode-flask-tutorial:latest
    vsdocs-team/myexpressapp:latest
    
  5. Each step of Docker's build process appears in the VS Code Terminal panel, including any errors that occur running the steps in the Dockerfile.

    Tip: every time you run the Docker: Build image command, the Docker extension opens another Terminal in VS Code in which to run the command. You can close each terminal once the build is complete. Alternately, you can reuse the same terminal to build the image by scrolling up in the command history using the up arrow.

  6. When the build is complete, the image appears in the Docker explorer under Images:

  7. Run and test your container locally by using the following command, replacing <image name> with your specific image, and changing the port numbers as needed. For web apps, you can then open a browser to localhost:<port> to see the running app.

    # For Flask sample
    docker run --rm -it -p 5000:5000 <image name>

    # For Django sample
    docker run --rm -it -p 8000:8000 <image name>
    

Two useful features of the Docker extension

The Docker extension provides a simple UI to manage and even run your images rather than using the Docker CLI. Just expand the Image node in the Docker explorer, right-click any image, and select any of the menu items:

In addition, at the top of the Docker explorer, next to the refresh button, is a button for System Prune. This command cleans up any dangling and otherwise unused images on your local computer. It's a good idea to run it periodically to reclaim space on your file system.
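If you prefer the command line, the equivalent is Docker's built-in prune command:

docker system prune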

Push the image to a registry

Once you're confident that your image works, the next step is to push it to your container registry:

  1. On the Command Palette (⇧⌘P (Windows, Linux Ctrl+Shift+P)), select Docker: Push.

  2. Choose the image you just built to push the image to the registry; upload progress appears in the Terminal.

  3. Once completed, expand the Registries > Azure (or DockerHub) node in the Docker explorer, then expand the registry and image name to see the exact image. (You may need to refresh the Docker explorer.)

Tip: The first time you push an image, you see that VS Code uploads all of the different layers that make up the image. Subsequent push operations, however, upload only those layers that have changed. Because it's typically only your app code that changes, those uploads happen much more quickly, making for a tight edit-build-deploy-test loop. To see this, make a small change to your code, rebuild the image, and then push again to the registry. The whole process typically completes in a matter of seconds.

The end

Now that you've created a container with your app, you're ready to deploy it to any container-ready cloud service. For details on deploying to Azure App Service, see Deploy a container.

You can also learn more about the Docker extension for VS Code by visiting the vscode-docker repository on GitHub.

Thank you for reading!