Find Security Vulnerabilities by Scanning Your Docker Images

How to find security vulnerabilities before it’s too late: scan your Docker images for vulnerabilities.

So you’ve crafted a Dockerfile, tested your container on your development workstation, and you’re waiting for the CI/CD pipeline to pick it up. Eventually, pre-prod is updated, integration tests pass, and functional testers give the green light. Is it now time to roll out to prod? Not so fast.

Docker Image Layers Inheritance

Each batch of files added to an image ends up creating a new layer. Your Docker image is the concatenation of all these layers in the specific order in which they were originally created.

The same principle applies when you create an image inheriting a parent image using the FROM directive in your Dockerfile. Your final image will include all the layers of your parent image, augmented with the layers you’ve created yourself.

What if you use a parent image that also uses another parent image, that may also use another parent image, that finally uses a base image like Ubuntu or Alpine? I guess you see where this is going: You end up inheriting multiple layers of content (i.e. files and executables) from upstream images that you have never seen (let alone controlled) yourself.
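
If you want to see this chain of inherited layers for yourself, docker history lists every layer of a local image, including the ones contributed by its parent images (the image name below is only an example):

docker pull openjdk:8-jre && docker history openjdk:8-jre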

What if a security vulnerability is included in any of these upstream layers? We’ll look next at how to detect these. But first, what exactly is a security vulnerability?

Security Vulnerabilities

If you inspect the openjdk:8-jre image, you will see that it consists of multiple layers. You can also visualise the files included in that image, courtesy of the Dive¹ tool. Many of those files are executables and, as with all source code we write, susceptible to security issues and vulnerabilities.
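
If you want to reproduce that view yourself, Dive takes an image name as its only required argument, so something along these lines should work once the tool is installed:

dive openjdk:8-jre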

If those were files in your local filesystem you’d probably run a virus scan and, by all means, do so when feasible. In a broader sense, a virus could be regarded as a security vulnerability itself. However, a computer virus is a type of computer program that, when executed, replicates itself by modifying other computer programs and inserting its own code. When this replication succeeds, the affected areas are then said to be infected with a computer virus².

Security vulnerabilities are not viruses.

Security vulnerabilities usually exist in well-intentioned source code that has a logical or technical flaw resulting in a system weakness that can be exploited to compromise a system. Such vulnerabilities may exist undiscovered for years until someone finds them, either while actively looking for them or by mere luck.

Vulnerability databases

The responsible thing to do when you discover a vulnerability, which could affect thousands or millions of users, is to report it: first privately to the owner of the source code, providing enough time for a fix to be pushed out, and then publicly to raise awareness for everybody else.

There are currently many well-established online vulnerability databases that can be used for such public announcements, such as CVE³, NVD⁴, and VULDB⁵.

Docker Static Vulnerability Scanning

Let’s recap on what we’ve established so far:

  • A Docker image consists of layers with files and executables.
  • Security vulnerabilities of executable (or library) source code are publicly held in online databases.

What if we combine those two points? Could we try to compare the executables found in our layers against the entries of an online vulnerability database to find out if our Docker image is exposed to already-known threats?

Let’s try that next.

Anchore Engine

There are many tools available, both open-source and commercial, that allow you to scan your images for known vulnerabilities. Such tools can be run as part of your CI/CD pipeline or can be connected to your image registry to scan new images as they become available. Some of these tools include Clair, Dagda, Nexus Repository Pro, Black Duck, Qualys, Harbor, and Twistlock.

For the hands-on part of this post, I’m going to show you how to use Anchore⁶. Anchore consists of a commercial edition (Anchore Enterprise) and an open-source edition (Anchore Engine).

Anchore has an impressive clientele comprising companies like Cisco, eBay, Atlassian, Nvidia, and RedHat. The commercial edition provides you with an extra UI, RBAC, and support, among other things; however, it still uses the underlying open-source edition, Anchore Engine, which we’re about to use here.

Installation

Anchore Engine is provided as a set of Docker images that can be run standalone or on a container orchestration platform such as Kubernetes, Docker Swarm, Rancher, or Amazon ECS. You can quickly boot up a local instance of Anchore Engine using Docker Compose and the following one-liner:

curl https://raw.githubusercontent.com/anchore/anchore-engine/master/docker-compose.yaml | docker-compose -p anchore -f - up

The above docker-compose.yaml will create five containers and then try to fetch online vulnerability databases, so it may take a few minutes to complete.

Running the client

Anchore Engine is accessed via a command-line client. You can conveniently run the CLI client from another Docker image:

docker run --rm -e ANCHORE_CLI_URL=http://anchore_engine-api_1:8228/v1/ --network anchore_default -it anchore/engine-cli

You now have a shell to the Anchore CLI client where, for now, you can execute a test command such as anchore-cli --version.
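
For example, the commands below check the client version and then ask the engine whether all of its services are up; the second command is my own addition to the original walkthrough and assumes the default credentials shipped with the quickstart docker-compose.yaml:

anchore-cli --version
anchore-cli system status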

Checking-in a Docker image

Anchore Engine provides you with a vulnerability assessment report in two steps. You first need to add an image to be scanned, and then you can request the vulnerability report for that image, allowing enough time between those two commands for the image to be downloaded and scanned.

In the following example, we will be using an old WordPress image known to have vulnerabilities.

If you intend to use WordPress with Docker, make sure you use a recent image instead.

So, time to add our first Docker image with the CLI client:

anchore-cli image add wordpress:4.6.0 && anchore-cli image wait wordpress:4.6.0

With the above command, we add a new image to be analysed and wait until Anchore reports that the analysis is completed.
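
If you want to check progress yourself, anchore-cli can also list the images it knows about, together with their analysis status (not part of the original steps, but a handy sanity check):

anchore-cli image list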

Viewing vulnerabilities

To see the discovered security vulnerabilities you can execute the following command:

anchore-cli image vuln wordpress:4.6.0 all

With an old image like the one we used above, we can get many, many vulnerabilities. In fact, Anchore reported 1420 known vulnerabilities for our WordPress test image from 2016.

As you can see, instantiating a Docker container from the above image is a high-risk action. If this were an image you had created to distribute your own application, you should probably block the release until a vulnerability assessment takes place.
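
Anchore can also turn that decision into a simple pass/fail verdict by evaluating the image against a policy bundle. As a sketch of what a release gate could look like (using the default policy and the same test image), the command below should exit with a non-zero status when the evaluation fails, which is exactly what a CI/CD job needs in order to block a roll-out:

anchore-cli evaluate check wordpress:4.6.0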

Conclusion

Software is (still) written by humans and humans make mistakes. Don’t let such mistakes haunt your Docker images. Use a Docker image security vulnerability scanner and, at least, be protected from already-discovered security issues. Integrate vulnerability scanning as part of your CI/CD pipeline and establish rules to conditionally block release roll-out when vulnerabilities are discovered.

Thank you for reading!

Develop this one fundamental skill if you want to become a successful developer

Throughout my career, a multitude of people have asked me what it takes to become a successful developer.

It’s a common question newbies and those looking to switch careers often ask — mostly because they see the potential paycheck. There is also a Hollywood level of coolness attached to working with computers nowadays. Being a programmer or developer is akin to being a doctor or lawyer. There is job security.

But a lot of people who try to enter the profession don’t make it. So what is it that separates those who make it and those who don’t? 

Read full article here

Docker for front-end developers

This is a short and simple guide to Docker, useful for front-end developers.

Since Docker’s release in 2013, the use of containers has been on the rise, and it’s now become a part of the stack in most tech companies out there. Sadly, when it comes to front-end development, this concept is rarely touched.

Therefore, when front-end developers have to interact with containerization, they often struggle a lot. That is exactly what happened to me a few weeks ago when I had to interact with some services in my company that I normally don’t deal with.

The task itself was quite easy, but due to a lack of knowledge of how containerization works, it took almost two full days to complete it. After this experience, I now feel more secure when dealing with containers and CI pipelines, but the whole process was quite painful and long.

The goal of this post is to teach you the core concepts of Docker and how to manipulate containers so you can focus on the tasks you love!

The what and why for Docker

Let’s start by defining what Docker is in plain, approachable language (with some help from Docker Curriculum):

Docker is a tool that allows developers, sys-admins, etc. to easily deploy their applications in a sandbox (called containers) to run on the host operating system.
The key benefit of using containers is that they package up code and all its dependencies so the application runs quickly and reliably regardless of the computing environment.

This decoupling allows container-based applications to be deployed easily and consistently regardless of where the application will be deployed: a cloud server, internal company server, or your personal computer.

Terminology

In the Docker ecosystem, there are a few key definitions you’ll need to know to understand what the heck they are talking about:

  • Image: The blueprints of your application, which forms the basis of containers. It is a lightweight, standalone, executable package of software that includes everything needed to run an application, i.e., code, runtime, system tools, system libraries, and settings.
  • Containers: These are defined by the image and any additional configuration options provided on starting the container, including but not limited to the network connections and storage options.
  • Docker daemon: The background service running on the host that manages the building, running, and distribution of Docker containers. The daemon is the process that runs in the OS the clients talk to.
  • Docker client: The CLI that allows users to interact with the Docker daemon. Clients can also take other forms, such as those providing a UI.
  • Docker Hub: A registry of images. You can think of the registry as a directory of all available Docker images. If required, you can host your own Docker registries and pull images from there.
‘Hello, World!’ demo

To fully understand the aforementioned terminologies, let’s set up Docker and run an example.

The first step is installing Docker on your machine. To do that, go to the official Docker page, choose your current OS, and start the download. You might have to create an account, but don’t worry, they won’t charge you in any of these steps.

After installing Docker, open your terminal and execute docker run hello-world. You should see the following message:

➜ ~ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Let’s see what actually happened behind the scenes:

  1. docker is the command that enables you to communicate with the Docker client.
  2. When you run docker run [name-of-image], the Docker daemon will first check if you have a local copy of that image on your computer. Otherwise, it will pull the image from Docker Hub. In this case, the name of the image is hello-world.
  3. Once you have a local copy of the image, the Docker daemon will create a container from it, which will produce the message Hello from Docker!
  4. The Docker daemon then streams the output to the Docker client and sends it to your terminal.
Node.js demo

The “Hello, World!” Docker demo was quick and easy, but the truth is we were not using all Docker’s capabilities. Let’s do something more interesting. Let’s run a Docker container using Node.js.

So, as you might guess, we need to somehow set up a Node environment in Docker. Luckily, the Docker team has created an amazing marketplace where you can search for Docker images inside their public Docker Hub. To look for a Node.js image, you just need to type “node” in the search bar, and you most probably will find this one.

So the first step is to pull the image from the Docker Hub, as shown below:

➜ ~ docker pull node

Then you need to set up a basic Node app. Create a file called node-test.js, and let’s do a simple HTTP request using JSON Placeholder. The following snippet will fetch a Todo and print the title:

const https = require('https');

https
  .get('https://jsonplaceholder.typicode.com/todos/1', response => {
    let todo = '';

    response.on('data', chunk => {
      todo += chunk;
    });

    response.on('end', () => {
      console.log(`The title is "${JSON.parse(todo).title}"`);
    });
  })
  .on('error', error => {
    console.error('Error: ' + error.message);
  });

I wanted to avoid using external dependencies like node-fetch or axios to keep the focus of the example just on Node and not on the dependency manager.

Let’s see how to run a single file using the Node image and explain the docker run flags:

➜ ~ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node node node-test.js

  • -it runs the container in the interactive mode, where you can execute several commands inside the container.
  • --rm automatically removes the container after finishing its execution.
  • --name [name] provides a name to the process running in the Docker daemon.
  • -v [local-path]:[docker-path] mounts a local directory into the container, which allows exchanging information with, and access to, the file system of the current system. This is one of my favorite features of Docker!
  • -w [docker-path] sets the working directory (the starting path). By default, this is /.
  • node is the name of the image to run. It always comes after all the docker run flags.
  • node node-test.js are instructions for the container. These always come after the name of the image.

The output of running the previous command should be: The title is "delectus aut autem".

React.js demo

Since this post is focused on front-end developers, let’s run a React application in Docker!

Let’s start with a base project. For that, I recommend using the create-react-app CLI, but you can use whatever project you have at hand; the process will be the same.

➜ ~ npx create-react-app react-test
➜ ~ cd react-test
➜ ~ yarn start

You should be able to see the homepage of the create-react-app project. Then, let’s introduce a new concept, the Dockerfile.

In essence, a Dockerfile is a simple text file with instructions on how to build your Docker images. In this file, you’d normally specify the image you want to use, which files will be inside, and whether you need to execute some commands before building.

Let’s now create a file inside the root of the react-test project. Name this Dockerfile, and write the following:

# Select the image to use
FROM node

# Install dependencies in the root of the container
COPY package.json yarn.lock ./
ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn

# Add project files to /app route in Container
ADD . /app

# Set working dir to /app
WORKDIR /app

# Expose port 3000
EXPOSE 3000

When working with Yarn projects, the recommendation is to remove node_modules from /app and move it to the root. This is to take advantage of the cache that Yarn provides. Therefore, you can freely run rm -rf node_modules/ inside your React application.
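
An alternative I would suggest (not part of the original setup) is a .dockerignore file in the project root, so the local node_modules/ is never sent to the Docker build context in the first place and ADD . /app cannot copy it into the image:

➜ ~ echo "node_modules" > .dockerignore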

After that, you can build a new image given the above Dockerfile, which will run the commands defined step by step.

➜ ~ docker image build -t react:test .

To check if the Docker image is available, you can run docker image ls.

➜ ~ docker image ls
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
react         test     b530cde7aba1   50 minutes ago   1.18GB
hello-world   latest   fce289e99eb9   7 months ago     1.84kB

Now it’s time to run the container by using the command you used in the previous examples: docker run.

➜ ~ docker run -it -p 3000:3000 react:test /bin/bash

Be aware of the -it flag, which, after you run the command, will give you a prompt inside the container. Here, you can run the same commands as in your local environment, e.g., yarn start or yarn build.

To quit the container, just type exit, but remember that the changes you make in the container won’t remain when you restart it. In case you want to keep the changes to the container in your file system, you can use the -v flag and mount the current directory into /app.

➜ ~ docker run -it -p 3000:3000 -v $(pwd):/app react:test /bin/bash

root@<container-id>:/app# yarn build

After the command is finished, you can check that you now have a /build folder inside your local project.

Conclusion

This has been an amazing journey into the fundamentals of how Docker works.

Thanks for reading

How to Secure Credentials for PHP with Docker

A simple configuration file is all it takes to set up an environment and can be used to rebuild it at any time. So, if we move forward with current technology, we need a way to secure our credentials in a Docker-based environment that makes use of PHP-FPM and Nginx.

Environment Overview

Before we dive into the configurations and files required to make this all happen, I wanted to take a brief look at the pieces involved here and how they'll fit together. I've mentioned some of them above in passing but here's the list all in one place:

  1. Docker to build and manage the environment
  2. Nginx to handle the web requests and responses
  3. PHP-FPM to parse and execute the PHP for the request
  4. Vault to store and manage the secrets

I'll also be making use of a simple Vault client, psecio/vaultlib, to make the requests to Vault for the secrets. Using a combination of these technologies and a bit of configuration, getting a working system isn't too difficult.

Protecting Credentials

There are several ways to get secrets into a Docker-based environment, some being more secure than others. Here's a list of some of these options and their pros and cons:

Passing them in as command-line options

One option that Docker allows is the passing in of values on the command-line when you're bringing up the container. For example, if you wanted to execute a command inside of a container and pass in values that become environment variables, you could use the -e option:

docker run -e "test=foo" whoami

In this command, we're executing the whoami command and passing in an environment variable of test with a value of foo. While this is useful, it's limited to only being used in single commands and not in the environment as a whole when it starts up. Additionally, when you run a command on the command-line, the command and all of its arguments could show up in the process list. This would expose the plain-text version of the variable to anyone with access to the server.

Using Docker "secrets"

Another option that's one of the most secure in the list is the use of Docker's own "secrets" handling. This functionality allows you to store secret values inside of an encrypted storage location but still allows them to be accessed from inside of the Docker containers. You use the docker secret command to set the value and grant access to the services that should have access. Their documentation has several examples of setting it up and how to use it in more real-world situations (such as a WordPress blog).
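
As a rough sketch of what that looks like (the secret name and value here are placeholders, and a swarm must be initialised first):

docker swarm init
echo "my-database-password" | docker secret create db_password -
docker service create --name web --secret db_password nginx:latest

Inside the running service, the value is then readable from /run/secrets/db_password rather than from an environment variable.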

While this storage option is one of the better ones, it also comes with a caveat: it can only be used in a Docker Swarm situation. Docker Swarm is functionality built into Docker that makes it easier to manage a cluster of Docker instances rather than just one. If you're not using Swarm mode, you're out of luck on using this "secrets" storage method.

Hard-coding them in the docker-compose configuration

There's another option with Docker Compose to get values pushed into the environment as variables: through settings in the docker-compose.yml configuration file.

The Setup

Before I get too far along in the setup, I want to outline the file and directory structure of what we'll be working with. There are several configuration files involved and I wanted to call them out so they're all in place.

For the examples, we'll be working in a project1/ directory which will contain the following files:

  • docker-compose.yml
  • www.conf
  • site.conf
  • .env
Starting with Docker

To start, we need to build out the environment our application is going to live in. This is a job for Docker or, more specifically Docker Compose. For those not familiar with Docker Compose, you can think of it as a layer that sits on top of Docker and makes building out the environments simpler than having a bunch of Dockerfile configuration files lying around. It joins the different containers together as "services" and provides a configuration structure that abstracts away much of the manual commands that just using the docker command line tool would require.

In a Compose configuration file, you define the "services" that you want to create and various settings about them. For example, if we just wanted to create a simple server with Nginx running on port 8080, we could create a docker-compose.yml configuration like this:

version: '2'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

Easy, right? You can create the same kind of thing with just Dockerfile configurations but Compose makes it a bit simpler.
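
If you save that snippet as docker-compose.yml, bringing the environment up and checking that Nginx answers on port 8080 looks roughly like this:

docker-compose up -d
curl -I http://localhost:8080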

The docker-compose.yml configuration

Using this structure we're going to create our environment that includes:

  • A container running Nginx that mounts our code/ directory to its document root
  • A container running PHP-FPM (PHP 7) to handle the incoming PHP requests (linked to the Nginx container)
  • The Vault container that runs the Vault service (linked to the PHP container)

Here's what that looks like:

version: '2'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
    links:
      - php

  php:
    image: php:7-fpm
    volumes:
      - ./www.conf:/usr/local/etc/php-fpm.d/www.conf
      - ./code:/code
    environment:
      - VAULT_KEY=${VAULT_KEY}
      - VAULT_TOKEN=${VAULT_TOKEN}
      - ENC_KEY=${ENC_KEY}

  vault:
    image: vault:latest
    links:
      - php
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200
Let's walk through this so you can understand each part. First we create the web service - this is our Nginx container built from the nginx:latest image. It then defines the ports to use, mapping port 8080 on the local machine to port 80 inside the container (the default port for HTTP). The volumes section defines two things to mount from the local system into the container: our code/ directory and the site.conf file that's copied over to the Nginx configuration path of /etc/nginx/conf.d/site.conf. Finally, in the links section, we tell Docker that we want to link the web and php containers so they're aware of each other. This link makes it possible for the Nginx configuration to call PHP-FPM as a handler for *.php requests. The contents of the site.conf file are explained in a section later in this article.

Next is the php service. This service installs from the php:7-fpm image, loading in the latest version of PHP-FPM that uses a 7.x version. Again we have a volumes section that copies over the code/ to the container but this time we're moving in a different configuration file: the www.conf configuration. This is the configuration PHP-FPM uses when processing PHP requests. More on this configuration will be shared later too.

What about the environment settings in the php service, you might be asking. Don't worry, I'll get to those later but those are one of the keys to how we'll be getting values from Docker pushed into the service containers for later use.

Finally, we get to the vault service. This service uses the vault:latest image to pull in the latest version of the Vault container and runs the setup process. There's also a link over to the php service so that Vault and PHP can talk. The last part there, the environment setting, is just a Vault-specific setting so that we know a predictable address and port to access the Vault service from PHP.
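
With all three services defined, one way to build and start the whole stack and confirm the containers are running is:

docker-compose up -d --build
docker-compose ps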

The site.conf configuration (Nginx)

I mentioned this configuration before when walking through the docker-compose.yml configuration but lets get into a bit more detail. First, here's the contents of our site.conf:

server {
    index index.php index.html;
    server_name php-docker.local;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

If you've ever worked with PHP-FPM and Nginx before, this configuration probably looks pretty similar. It sets up the server configuration (with the hostname php-docker.local; add this to 127.0.0.1 in /etc/hosts, as shown below) to hand off any requests for .php scripts to PHP-FPM via FastCGI. Our index setting lets us use either an index.php or index.html file for the base without having to specify it in the URL. Pretty simple, right?
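
For reference, adding that hosts entry is a one-liner on Linux or macOS (adjust for your own system):

echo "127.0.0.1 php-docker.local" | sudo tee -a /etc/hosts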

When we fire up Docker Compose this configuration will be copied into the container at the /etc/nginx/conf.d/site.conf path. With that settled, we'll move on to the next file: the PHP-FPM configuration.

The www.conf configuration (PHP-FPM)

This configuration sets up how the PHP-FPM process behaves when Nginx passes the incoming request over to it. I've reduced down the contents of the file (removing extra comments) to help make it clearer here. Here are the contents of the file:

[www]
user = www-data
group = www-data

listen = 127.0.0.1:9000

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

clear_env = no

env[VAULT_KEY] = $VAULT_KEY
env[VAULT_TOKEN] = $VAULT_TOKEN

While most of this configuration is default settings, there are a few things to note here, starting with the clear_env line. PHP-FPM, by default, will not import any environment variables that were set when the process started up. Setting clear_env to no tells it to import those values and make them accessible to the PHP process. In the next few lines, a few values are manually defined with the env[] directive. These are variables that come from the environment and are then passed along to the PHP process as $_ENV values.
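
A quick way to verify that the variables at least made it into the php container's environment (my own check, run from the project1/ directory once the stack is up) is:

docker-compose exec php php -r 'var_dump(getenv("VAULT_KEY"));'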

If you're paying attention, you might notice how things are starting to line up between the configuration files and how environment variables are being passed around.

This configuration will be copied into place by Compose to the /usr/local/etc/php-fpm.d/www.conf path. With this file in place, we get to the last piece of the puzzle: the .env file.

The .env configuration (Environment variables)

One of the handy things about Docker Compose is its ability to read from a default .env file when the build/up commands are run and automatically import them. In this case we have a few settings that we don't want to hard-code in the docker-compose.yml configuration and don't want to hard-code in our actual PHP code:

  • the key to seal/unseal the Vault
  • the token used to access the Vault API
  • the key used for the encryption of configuration values

We can define these in our .env file in the base project1/ directory:

VAULT_KEY=[ key to use for locking/unlocking ]
VAULT_TOKEN=[ token to use for API requests ]
ENC_KEY=[ key to use for encrypting configuration values ]

Obviously, you'll want to replace the [...] strings with your values when creating the files.

NOTE: DO NOT use the root token and key in a production environment. Using them here is only for example purposes, so we don't have to get into further setup and configuration of other credentials on the Vault instance. For more information, see the Vault documentation on authentication method options.

One of the tricky things to note here is that, when you (re)build the Vault container, it starts from scratch and will drop any users you've created (and even reset the root key/token). The key here is to grab these values once the environment is built, put them into the project1/.env and then rebuild the php service to pull the new environment values in:

docker-compose up -d --build php
It's all about the code

Alright, now that we've worked through the four configuration files needed to set up the environment we need to talk about code. In this case, it's the PHP code that lives in project1/code/. Since we're going to keep this super simple, the example will only have one file: index.php. The basic idea behind the code is to be able to extract secrets values from the Vault server that we'll need in our application. Since we're going to use the psecio/vaultlib library, we need to install it via Composer:

composer require psecio/vaultlib

If you run that on your local system in the project1/code/ directory, it will set up the vendor/ directory with everything you need. Since the code/ directory is mounted as a volume on the php service, the container will use your local copy (including vendor/) when you make the web request.

With this installed, we can then initialize our Vault connection and set our first value:

<?php
require_once __DIR__.'/vendor/autoload.php';

$accessToken = $_ENV['VAULT_TOKEN'];
$baseUrl = 'http://vault:8200';

$client = new \Psecio\Vaultlib\Client($accessToken, $baseUrl);

// If the vault is sealed, unseal it
if ($client->isSealed() == true) {
    $client->unseal($_ENV['VAULT_KEY']);
}

// Now set our secret value for "my-secret"
$result = $client->setSecret('my-secret', ['testing1' => 'foo']);
echo 'Result: '.var_export($result, true);

Now if you make a request to the local instance on port 8080 and all goes well, you should see the message "Result: true". If you see exceptions there might be something up with the container build. You can use docker-compose down to destroy all of the current instances and then docker-compose build; docker-compose up to bring them all back up. If you do this, be sure to swap out the Vault token and key and rebuild the php service.

In the code above we create an instance of the Psecio\Vaultlib\Client and pass in our token pulled from an environment variable. This variable exists because of a few special lines in our configuration file. Here's the flow:

  1. The values are set in the .env file for Docker to pull in.
  2. Those values are pushed into the php container using the environment section in the docker-compose.yml configuration.
  3. The PHP-FPM configuration then imports the environment variables and makes them available for use in the $_ENV superglobal.

These secrets exist in-memory in the containers and don't have to be written to a file inside of the container itself where they could potentially be compromised at rest. Once the Docker containers have started up, the .env file can be removed without impacting the values inside of the containers.

The tricky part here is that, if you remove the .env file once the containers are up and running, you'll need to put it back if there's ever a need to run the build command again.
But why is this good?

I started this article off by giving examples of a few methods you could use for secret storage when Docker is in use but they all had rather large downsides. With this method, there's a huge plus that you won't get with the other methods: the secrets defined in the .env file will only live in-memory but are still accessible to the PHP processes. This provides a pretty significant layer of protection for them and makes it more difficult for an attacker to access them directly.

I will say one thing, however. Much like the fact that nothing is 100% secure, this method isn't either. It does protect the secrets by not requiring them to be sitting at rest somewhere but it doesn't prevent the $_ENV values from being accessed directly. If an attacker were able to perform a remote code execution attack - tricking your application to run their code - they would be able to access these values.

Unfortunately, because of the way that PHP works there's not a very good built-in method for protecting values. That's why Vault is included in this environment. It's designed specifically to store secret values and protect them at rest. By only passing in the token and key to access it, we're reducing the risk level of the system overall. Vault also includes controls to let you fine-tune the access levels of your setup. This would allow you to do something like creating a read-only user your application can use. Even if there was a compromise, at least your secret values would be protected from change.

Hopefully, with the code, configuration and explanation I've provided here, you have managed to get an environment up and running and can use it to test out your own applications and secrets management.

I hope this tutorial helps you. If you liked it, please consider sharing it with others.

This post was originally published here