By the end of this tutorial, you will be able to deploy a Django application to a DigitalOcean droplet with Docker and configure GitLab CI/CD to build and deploy it on every push.
Dependencies: along with Django and Docker, the demo project that we’ll be using includes Postgres, Nginx, and Gunicorn.
Start by cloning down the base project:
$ git clone https://gitlab.com/testdriven/django-gitlab-digitalocean.git --branch base --single-branch
$ cd django-gitlab-digitalocean
To test locally, build the images and spin up the containers:
$ docker-compose up -d --build
Navigate to http://localhost:8000/. You should see:
{
"hello": "world"
}
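You can also hit the endpoint from the terminal:
$ curl http://localhost:8000/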
Let’s set up DigitalOcean to work with our application.
First, you’ll need to sign up for a DigitalOcean account (if you don’t already have one), and then generate an access token so you can access the DigitalOcean API.
Add the token to your environment:
$ export DIGITAL_OCEAN_ACCESS_TOKEN=[your_digital_ocean_token]
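If you’d like to sanity-check the token before moving on, the account endpoint should echo back your account details as JSON:
$ curl \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
"https://api.digitalocean.com/v2/account"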
Next, create a new Droplet with Docker pre-installed:
$ curl -X POST \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
-d '{"name":"django-docker","region":"sfo2","size":"s-2vcpu-4gb","image":"docker-18-04"}' \
"https://api.digitalocean.com/v2/droplets"
Check the status:
$ curl \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
"https://api.digitalocean.com/v2/droplets?name=django-docker"
If you have jq installed, then you can parse the JSON response like so:
$ curl \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
"https://api.digitalocean.com/v2/droplets?name=django-docker" \
| jq '.droplets[0].status'
The root password should be emailed to you. Retrieve it. Then, once the status of the droplet is active, SSH into the instance as root and update the password when prompted.
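For example (replace <instance-ip> with the droplet’s public IP address from the status call above):
$ ssh root@<instance-ip>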
Next, generate a new SSH key:
$ ssh-keygen -t rsa
Save the key to /root/.ssh/id_rsa and don’t set a passphrase. This will generate a private and a public key, id_rsa and id_rsa.pub, respectively. To set up passwordless SSH login, copy the public key over to the authorized_keys file and set the proper permissions:
$ cat ~/.ssh/id_rsa.pub
$ vi ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/id_rsa
Copy the contents of the private key:
$ cat ~/.ssh/id_rsa
Set it as an environment variable on your local machine:
export PRIVATE_KEY='-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA04up8hoqzS1+APIB0RhjXyObwHQnOzhAk5Bd7mhkSbPkyhP1
...
iWlX9HNavcydATJc1f0DpzF0u4zY8PY24RVoW8vk+bJANPp1o2IAkeajCaF3w9nf
q/SyqAWVmvwYuIhDiHDaV2A==
-----END RSA PRIVATE KEY-----'
Add the key to the ssh-agent:
$ ssh-add - <<< "${PRIVATE_KEY}"
To test, run:
$ ssh -o StrictHostKeyChecking=no root@<instance-ip> whoami
root
Then, create a new directory for the app:
$ ssh -o StrictHostKeyChecking=no root@<instance-ip> mkdir /app
Moving along, let’s spin up a production Postgres database via DigitalOcean’s Managed Databases:
$ curl -X POST \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
-d '{"name":"django-docker-db","region":"sfo2","engine":"pg","version":"11","size":"db-s-2vcpu-4gb","num_nodes":1}' \
"https://api.digitalocean.com/v2/databases"
Check the status:
$ curl \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
"https://api.digitalocean.com/v2/databases?name=django-docker-db" \
| jq '.databases[0].status'
It should take a few minutes to spin up. Once the status is online, grab the connection information:
$ curl \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
"https://api.digitalocean.com/v2/databases?name=django-docker-db" \
| jq '.databases[0].connection'
Example response:
{
  "protocol": "postgresql",
  "uri": "postgresql://doadmin:si2p6hg0vfj84efv@django-docker-db-do-user-778274-0.db.ondigitalocean.com:25060/defaultdb?sslmode=require",
  "database": "defaultdb",
  "host": "django-docker-db-do-user-778274-0.db.ondigitalocean.com",
  "port": 25060,
  "user": "doadmin",
  "password": "si2p6hg0vfj84efv",
  "ssl": true
}
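If you have the psql client installed locally, you can also sanity-check connectivity with the uri field (the values below come from the example response above, so substitute your own):
$ psql "postgresql://doadmin:si2p6hg0vfj84efv@django-docker-db-do-user-778274-0.db.ondigitalocean.com:25060/defaultdb?sslmode=require"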
Sign up for a GitLab account (if necessary), and then create a new project (again, if necessary).
Next, add a GitLab CI/CD config file called .gitlab-ci.yml to the project root:
image:
  name: docker/compose:1.24.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

build:
  stage: build
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
    - export NGINX_IMAGE=$IMAGE:nginx
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:web || true
    - docker pull $IMAGE:nginx || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker push $IMAGE:nginx
Here, we defined a single build stage where we set the IMAGE, WEB_IMAGE, and NGINX_IMAGE environment variables, install bash, run setup_env.sh to generate the required .env file, log in to the GitLab Container Registry, pull the existing images if they exist, build the new images, and push them up to the registry.
Add the setup_env.sh file to the project root:
#!/bin/sh
echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env
This file will create the required .env file, based on the environment variables found in your GitLab project’s CI/CD settings (Settings > CI / CD > Variables). Add the variables based on the database connection information retrieved earlier.
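Using the example connection info from above, the values would look something like this (SECRET_KEY is just a placeholder here; use whatever secret string you want Django to sign with, while the SQL_* values map directly to the fields in the connection response):
SECRET_KEY=<your-django-secret-key>
SQL_DATABASE=defaultdb
SQL_USER=doadmin
SQL_PASSWORD=si2p6hg0vfj84efv
SQL_HOST=django-docker-db-do-user-778274-0.db.ondigitalocean.com
SQL_PORT=25060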
Once done, commit and push your code up to GitLab to trigger a new build. Make sure it passes. You should see the images in the GitLab Container Registry.
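If you want to verify the pushed images from your own machine, you should be able to log in to the registry and pull them (replace <namespace>/<project> with your GitLab namespace and project name):
$ docker login registry.gitlab.com
$ docker pull registry.gitlab.com/<namespace>/<project>:web
$ docker pull registry.gitlab.com/<namespace>/<project>:nginx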
Next, add a deploy stage to .gitlab-ci.yml and create a global before_script that’s used for both stages:
image:
  name: docker/compose:1.24.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - export NGINX_IMAGE=$IMAGE:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $IMAGE:web || true
    - docker pull $IMAGE:nginx || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker push $IMAGE:nginx

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DIGITAL_OCEAN_IP_ADDRESS:/app
    - bash ./deploy.sh
So, in the deploy stage, we add the private SSH key to the SSH agent, copy the .env and docker-compose.prod.yml files over to the remote server, and run the deploy script.
Add deploy.sh to the project root:
#!/bin/sh
ssh -o StrictHostKeyChecking=no root@$DIGITAL_OCEAN_IP_ADDRESS << 'ENDSSH'
cd /app
export $(cat .env | xargs)
docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
docker pull $IMAGE:web
docker pull $IMAGE:nginx
docker-compose -f docker-compose.prod.yml up -d
ENDSSH
So, after SSHing into the server, we change to the /app directory, load the environment variables from the .env file, log in to the GitLab Container Registry, pull the images, and spin up the containers. Note that since the heredoc delimiter is quoted ('ENDSSH'), the variables inside the block are expanded on the server from the .env file rather than locally in the CI job.
Add the DIGITAL_OCEAN_IP_ADDRESS and PRIVATE_KEY environment variables to GitLab.
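If you need to look up the droplet’s IP address, you should be able to pull it from the DigitalOcean API with jq (this assumes the public IPv4 address is the first entry under networks.v4; adjust the index if your droplet has more than one):
$ curl \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
"https://api.digitalocean.com/v2/droplets?name=django-docker" \
| jq '.droplets[0].networks.v4[0].ip_address'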
Update the setup_env.sh file:
#!/bin/sh
echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env
echo WEB_IMAGE=$IMAGE:web >> .env
echo NGINX_IMAGE=$IMAGE:nginx >> .env
echo CI_REGISTRY_USER=$CI_REGISTRY_USER >> .env
echo CI_JOB_TOKEN=$CI_JOB_TOKEN >> .env
echo CI_REGISTRY=$CI_REGISTRY >> .env
echo IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME >> .env
Next, add the server’s IP to the ALLOWED_HOSTS list in the Django settings.
Commit and push your code to trigger a new build. Once the build passes, navigate to the IP of your instance. You should see:
{
"hello": "world"
}
Finally, update the deploy stage so that it runs only when changes are made to the master branch:
deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DIGITAL_OCEAN_IP_ADDRESS:/app
    - bash ./deploy.sh
  only:
    - master
To test, create a new develop branch. Add an exclamation point after world in urls.py:
def home(request):
    return JsonResponse({"hello": "world!"})
Commit and push your changes to GitLab. Ensure only the build stage runs. Once the build passes, open a merge request against the master branch and merge the changes. This will trigger a new pipeline with both the build and deploy stages. Ensure the deploy works as expected:
{
"hello": "world!"
}
That’s it! You can find the final code in the repo.
Hope this tutorial helps you! Please share it if you liked it!
Originally published on testdriven.io/blog
#docker #django #python #web-development