Top 17 Tips and Tricks | Docker


These tips cover speed, configuration, storage, and more. So, if you aspire to learn Docker and build a successful career, read through the most important Docker tips and tricks in this post.

Before proceeding to the best Docker tips, let us take a quick recap of Docker. A thorough understanding of how Docker works can help in getting the most out of its powerful features. Docker is a container engine that uses containers as an easier and more effective approach for packaging and distributing software with simple instructions. Containers are beneficial because of the predictability of their execution.

The most noticeable advantage of Docker containers is their speed. Docker performance tuning is one of the reasons for its exceptionally fast performance. So, closer attention to Docker tips and tricks could help you leverage the maximum potential and advantages of Docker. The following discussion presents seventeen different tips to help you make the most of Docker with ease.

Top 17 Docker Tips and Tricks

Now that you are familiar with the introduction to Docker, we move on to the tips and tricks that you can implement while using Docker. Though you usually pick up these tricks through significant experience and continual reading, there are a number of Docker books that can help you broaden your knowledge and skills. These tips cover speed, configuration, storage, and more. So, without any further hold-up, let's get into the world of the most important Docker tips and tricks.

1. Try exploring the New Things

Many people are not aware of the uses of multi-stage builds. Also, you should know that Docker now manages configs and secrets. Therefore, these Docker tips imply that you should keep an eye on what is happening in the ecosystem now. Try getting involved in one project under the Docker or Moby organizations on GitHub. The vast assortment of projects in these organizations can provide new insights into how Docker works.
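As a quick illustration of what a multi-stage build looks like, here is a minimal sketch (the image names and paths are illustrative, not from any specific project): the first stage installs dependencies and compiles the app, and the final image copies only the build output, leaving the build toolchain behind.

```dockerfile
# Stage 1: build the application with the full Node toolchain
FROM node AS builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn
COPY . .
RUN yarn build

# Stage 2: ship only the static output on a small web server image
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
```

The final image contains nginx and the static files, not Node, yarn, or node_modules, which keeps it dramatically smaller.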

2. Emphasizing on Docker Configuration

You need to note that Docker does not provide optimal performance out of the box! Therefore, a prominent mention among must-know Docker tips is configuration before running Docker. Docker configuration is crucial, so you need to ensure the availability of sufficient system resources for performing the desired workloads. You can also use the capabilities of various cloud providers to set triggers that modify or launch machines under special conditions.
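As a sketch of what managing system resources can mean in practice, Docker lets you cap a container's memory and CPU at run time. The image name and limit values below are placeholders, not recommendations.

```shell
# Sketch: cap memory and CPU so one container cannot starve the host.
# "myapp:latest" and the limit values are placeholders.
run_with_limits() {
  # --memory sets a hard RAM ceiling; --cpus limits how many CPU
  # cores' worth of time the container may consume
  docker run -d --memory="512m" --cpus="1.5" myapp:latest
}
# Usage (requires a running Docker daemon):  run_with_limits
```

Setting explicit ceilings like these makes a container's footprint predictable, which is exactly what capacity triggers on cloud providers need.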

3. Review the Application Performance First

One of the prominent Docker tips relates to finding the infrastructure element responsible for affecting Docker performance. Sometimes, the infrastructure itself or the application running inside the container can influence Docker performance. So, you need to note among Docker tips that applications with poor design cannot just get better by bringing Docker on board.

Therefore, you should aim to evaluate application performance before adopting Docker. The first proven method for evaluating application performance is the use of visualization tools, which show the current status of the software's execution. Application logs are also a proven measure: they contain metadata emitted by a running application and indicate its performance.

4. Improve Network Latencies

The next of the best Docker tips focuses on improving network latencies. Various aspects of the Docker build process involve the internet, and large images generally lead to performance issues. Why? Because of the constant pushing and pulling of images across the internet. Docker first checks your machine for the base image specified for building a Docker image.

If it does not find the base image locally, Docker Hub comes into the picture, resulting in latency issues. In such cases, the risks of depending on Docker Hub become prominent. Therefore, Docker tricks suggest creating your own registry that is easy to reach within your organization and infrastructure. As a result, the speed of pushing and pulling images increases, alongside providing additional redundancy in case of a Docker Hub outage.
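A minimal sketch of that idea, using the official registry image (the "myapp" image name is a placeholder): start a registry on your own host, then re-tag and push images to it instead of Docker Hub.

```shell
# Sketch: run a private registry and push an image to it.
# "registry:2" is the official registry image; "myapp" is a placeholder.
start_local_registry() {
  # Serve a registry on port 5000 of this host
  docker run -d -p 5000:5000 --restart=always --name registry registry:2
}
push_to_local_registry() {
  # Point the image name at the private registry, then push there
  docker tag myapp:latest localhost:5000/myapp:latest
  docker push localhost:5000/myapp:latest
}
# Usage (requires a running Docker daemon):
#   start_local_registry && push_to_local_registry
```

Pulls against `localhost:5000/myapp` now stay inside your own network, so they are fast and survive a Docker Hub outage.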

5. Start Small

Smaller beginnings tend to be the staple trait in almost every great story! So, the next entry in Docker tips suggests starting small by using Docker in development. Deploy Docker to a single server and learn from your mistakes gradually. The key note here is that you should not be afraid of using Docker. Just remember that Docker is not all about deploying a multi-datacenter load-balanced cluster of services. You can build your way up gradually to learn more about its applications at different scales and skill levels.

6. Use a VPS for Additional Speed

The most interesting mention among Docker tips would be to use a VPS. The objective of this pointer is to ensure that Docker runs at additional speed. Therefore, you could use a Virtual Private Server (VPS) provider like Linode or DigitalOcean to obtain better bandwidth on pulls and pushes.

This trick can work effectively for users who face constant issues with the bandwidth of their internet connectivity. Additional investment in a VPS of your choice could go a long way in helping you enjoy a seamless Docker experience. The investment would provide helpful returns by restricting the possibilities for any downtime or loss of work due to slow internet connectivity.

7. Editor Highlighting and Linting

Once we venture into the docker-compose.yml file, we suddenly have more keywords to remember than just FROM, RUN, and WORKDIR.

No matter which editor/IDE you are using, having a linter and code highlighting is going to prove useful here. You don't want to spend an hour figuring out why your file doesn't work, only to find out that it is a typo (we have all been there).
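For reference, a docker-compose.yml is exactly the kind of file where a highlighter and linter pay off. A minimal sketch (the service names and images here are illustrative) looks like this, and a single mis-indented key is enough to break it:

```yaml
version: "3.8"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```

A YAML-aware editor flags a bad indent or an unknown key the moment you type it, instead of at `docker-compose up` time.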

8. Keep the Docker Images Lightweight

The next mention among Docker tips and tricks relates to keeping the Docker images lightweight. The first concern on your mind should be creating a Dockerfile. A Dockerfile is a set of instructions that describes the process of building an image. It specifies the files to include, the necessary environment variables, installation steps, relevant commands, and networking details.

The build context of a Dockerfile has a huge influence on build-time performance: larger contexts lead to slower Docker builds. So, you should add unneeded files to the ".dockerignore" file, which excludes the listed files from the build. Many Docker tricks note that large asset files, as well as additional library files, influence build-time performance heavily.

9. Bash Completion

Referring to --help or using a cheat sheet is certainly useful. But sometimes you know what you want but just can't remember the exact command. Bash completion can save you a whole lot of time.

Treating Bash as your friend is another addition to almost every Docker cheat sheet. You have probably used many aliases for git to save keystrokes. You can do the same for Docker, especially if you use it heavily. Create little shortcuts, add them to your "~/.bashrc" or its equivalent, and make your Docker usage easier.
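As a sketch (the shortcut names here are my own picks, not a standard), shell functions in ~/.bashrc behave like aliases but also pass along any extra arguments:

```shell
# Docker shortcuts for ~/.bashrc; the names are arbitrary.
dkps()   { docker ps "$@"; }        # list running containers
dkpsa()  { docker ps -a "$@"; }     # list all containers
dki()    { docker image ls "$@"; }  # list images
dkrm()   { docker rm "$@"; }        # remove container(s)
dklogs() { docker logs -f "$@"; }   # follow a container's logs
# Reload with:  source ~/.bashrc
```

After reloading, `dklogs my-container` saves a dozen keystrokes every time you debug.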

10. Bringing the Nyan-cat Instantly

Another promising mention among must-know Docker tips is instant access to Nyan Cat. You have Docker, and you want a Nyan Cat in your terminal! All you need to do is run one command to get the desired result. The command to launch the Nyan Cat animation is "docker run -it supertest2014/nyan".

11. Using Wetty for In-browser Terminals

An uncommon addition to this Docker cheat sheet would be Wetty. Wetty is a JavaScript-powered in-browser terminal emulator that provides better opportunities to develop engaging web applications. All you need to do is create a container running an instance of Wetty. Wetty helps users embed isolated terminal applications in web applications as required, with the added advantage of controlling the execution environment precisely.

12. There Is Something Called .dockerignore

Many times we start learning something by referring to tutorials online and testing it on our own side projects. Somehow, none of the tutorials that I tried really explain or even use .dockerignore. Maybe it is not in the scope of those tutorials, but in my opinion, it is really important to know that it exists and what it does.

The .dockerignore file allows us to define rules and exceptions for files and folders to be excluded from the build context. Many times in our Dockerfile, we have ADD or COPY instructions. With .dockerignore, Docker first looks at the rules and excludes whatever is defined there.
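As an illustration, a typical .dockerignore for a Node project might look like the following (the exact entries depend on your own tree). Everything matched here never enters the build context, so COPY and ADD cannot pick it up:

```
node_modules
npm-debug.log
.git
.env
*.md
```

Excluding node_modules alone often shrinks the build context from hundreds of megabytes to a few, and keeping .env out prevents secrets from being baked into images.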

13. Use docker logs -f

Once you start getting the hang of using Docker, especially with the compose command, you tend to run it with the -d detach flag. Once in a while, the Docker container is running but does not work as expected. I had this issue with a container running Postgres. I would usually just kill the container and start it again. It usually works(??)!

In cases like this, use docker logs -f CONTAINER_ID to find out what exactly happened. I managed to fix my problem after looking at the logs. Somehow, whenever I run without the -d flag, the error just doesn't happen.

So, the next time a container does not work as expected, even if you are in detached mode, you can still look at the logs and debug it.

14. Dealing with Troublesome Middleware

In the case of immutable infrastructure, you may come across middleware that uses the filesystem as a cache. Many would want to avoid persisting such behavior. Therefore, Docker tips suggest constraining such middleware by running it as a read-only container to identify exactly when it needs to access the filesystem. Following this, creating a volume for the actual persistent data directory and a tmpfs for caches and log files is the ideal step.
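A sketch of that recipe with docker run (the image name and paths are placeholders for whatever your middleware actually writes to): the root filesystem is read-only, real data lives in a named volume, and caches and logs go to tmpfs, so any other write fails loudly and shows you exactly where the middleware touches the filesystem.

```shell
# Sketch: read-only container with a volume for data and tmpfs for
# caches/logs. "some-middleware" and the paths are placeholders.
run_readonly_middleware() {
  docker run -d \
    --read-only \
    -v middleware-data:/var/lib/middleware \
    --tmpfs /var/cache/middleware \
    --tmpfs /var/log/middleware \
    some-middleware:latest
}
# Usage (requires a running Docker daemon):  run_readonly_middleware
```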

15. Shorthand to Remove a Container

While learning and playing with containers, we very often need to delete them after use. I often run docker container ls -a to list all containers, copy the CONTAINER_ID, and paste it into docker rm CONTAINER_ID to remove it.

But actually, we can just type the first few characters of the CONTAINER ID (even one character, as long as it is unambiguous), and it will work as well.

16. Subscribe to courses online!

Nothing works better than following an expert course online. However, before diving straight into courses, I would suggest messing around with Docker yourself first. I notice that I learn a lot faster when messing with Docker on my side projects, referring to the official documentation and other tutorials and articles that I find online.

Once I felt it was time to get serious about Docker, I looked for popular courses online, such as those on Udemy.

17. Participate in the Community

The final mention among Docker tips in this discussion reflects largely on the need for information and community involvement. You can join the #docker channel on the Freenode IRC network. This is an ideal place to meet many Docker peers online and ask questions. The best thing about the channel is that you can get helpful tips and guidance from experts.

You can find almost 1,000 or more people on the channel at any given point in time, so it serves the purpose of a community. It also serves as a prolific learning resource. Now, many people hesitate to get on IRC due to the complexity of setting it up and using it. However, keep in mind that your efforts will ultimately lead to promising returns in the form of knowledge. All you need to do is follow a few simple steps to get started. First, download an IRC client. Then, connect to the Freenode network and join the #docker channel.

Final Words

The Docker tips above showed us how to use simple tweaks to harness the optimal power of Docker. Docker has changed the way we look at application development. The future of Docker is likely to emphasize improving speed and resource-effectiveness. Furthermore, the open-source container engine also presents adequate prospects for new developments.

As a result, we can expect many new features in Docker shortly. Therefore, community involvement also stands as a formidable requirement for learning about Docker comprehensively. On a concluding note, learning never stops, and so, you should always strive to find ways to make Docker usage easier!

Thank you for reading!


Docker for front-end developers


This is a short and simple guide to Docker, useful for front-end developers.

Since Docker’s release in 2013, the use of containers has been on the rise, and it’s now become a part of the stack in most tech companies out there. Sadly, when it comes to front-end development, this concept is rarely touched.

Therefore, when front-end developers have to interact with containerization, they often struggle a lot. That is exactly what happened to me a few weeks ago when I had to interact with some services in my company that I normally don’t deal with.

The task itself was quite easy, but due to a lack of knowledge of how containerization works, it took almost two full days to complete it. After this experience, I now feel more secure when dealing with containers and CI pipelines, but the whole process was quite painful and long.

The goal of this post is to teach you the core concepts of Docker and how to manipulate containers so you can focus on the tasks you love!

The what and why for Docker

Let’s start by defining what Docker is in plain, approachable language (with some help from Docker Curriculum):

Docker is a tool that allows developers, sys-admins, etc. to easily deploy their applications in a sandbox (called containers) to run on the host operating system.
The key benefit of using containers is that they package up code and all its dependencies so the application runs quickly and reliably regardless of the computing environment.

This decoupling allows container-based applications to be deployed easily and consistently regardless of where the application will be deployed: a cloud server, internal company server, or your personal computer.


In the Docker ecosystem, there are a few key definitions you’ll need to know to understand what the heck they are talking about:

  • Image: The blueprint of your application, which forms the basis of containers. It is a lightweight, standalone, executable package of software that includes everything needed to run an application, i.e., code, runtime, system tools, system libraries, and settings.
  • Containers: These are defined by the image and any additional configuration options provided on starting the container, including but not limited to the network connections and storage options.
  • Docker daemon: The background service running on the host that manages the building, running, and distribution of Docker containers. The daemon is the process, running in the OS, that clients talk to.
  • Docker client: The CLI that allows users to interact with the Docker daemon. Clients can also come in other forms, such as those providing a UI.
  • Docker Hub: A registry of images. You can think of the registry as a directory of all available Docker images. If required, you can host your own Docker registries and pull images from there.
‘Hello, World!’ demo

To fully understand the aforementioned terminologies, let’s set up Docker and run an example.

The first step is installing Docker on your machine. To do that, go to the official Docker page, choose your current OS, and start the download. You might have to create an account, but don’t worry, they won’t charge you in any of these steps.

After installing Docker, open your terminal and execute docker run hello-world. You should see the following message:

➜ ~ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Let’s see what actually happened behind the scenes:

  1. docker is the command that enables you to communicate with the Docker client.
  2. When you run docker run [name-of-image], the Docker daemon will first check if you have a local copy of that image on your computer. Otherwise, it will pull the image from Docker Hub. In this case, the name of the image is hello-world.
  3. Once you have a local copy of the image, the Docker daemon will create a container from it, which will produce the message Hello from Docker!
  4. The Docker daemon then streams the output to the Docker client and sends it to your terminal.
Node.js demo

The “Hello, World!” Docker demo was quick and easy, but the truth is we were not using all Docker’s capabilities. Let’s do something more interesting. Let’s run a Docker container using Node.js.

So, as you might guess, we need to somehow set up a Node environment in Docker. Luckily, the Docker team has created an amazing marketplace where you can search for Docker images inside their public Docker Hub. To look for a Node.js image, you just need to type “node” in the search bar, and you most probably will find this one.

So the first step is to pull the image from the Docker Hub, as shown below:

➜ ~ docker pull node

Then you need to set up a basic Node app. Create a file called node-test.js, and let’s do a simple HTTP request using JSON Placeholder. The following snippet will fetch a Todo and print the title:

const https = require('https');

https
  .get('https://jsonplaceholder.typicode.com/todos/1', response => {
    let todo = '';

    response.on('data', chunk => {
      todo += chunk;
    });

    response.on('end', () => {
      console.log(`The title is "${JSON.parse(todo).title}"`);
    });
  })
  .on('error', error => {
    console.error('Error: ' + error.message);
  });
I wanted to avoid using external dependencies like node-fetch or axios to keep the focus of the example on Node itself and not on dependency management.

Let’s see how to run a single file using the Node image and explain the docker run flags:

➜ ~ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node node node-test.js

  • -it runs the container in the interactive mode, where you can execute several commands inside the container.
  • --rm automatically removes the container after finishing its execution.
  • --name [name] provides a name to the process running in the Docker daemon.
  • -v [local-path: docker-path] mounts a local directory into Docker, which allows exchanging information or access to the file system of the current system. This is one of my favorite features of Docker!
  • -w [docker-path] sets the working directory (start route). By default, this is /.
  • node is the name of the image to run. It always comes after all the docker run flags.
  • node node-test.js are instructions for the container. These always come after the name of the image.

The output of running the previous command should be: The title is "delectus aut autem".

React.js demo

Since this post is focused on front-end developers, let’s run a React application in Docker!

Let’s start with a base project. For that, I recommend using the create-react-app CLI, but you can use whatever project you have at hand; the process will be the same.

➜ ~ npx create-react-app react-test
➜ ~ cd react-test
➜ ~ yarn start

You should be able to see the homepage of the create-react-app project. Then, let’s introduce a new concept, the Dockerfile.

In essence, a Dockerfile is a simple text file with instructions on how to build your Docker images. In this file, you’d normally specify the image you want to use, which files will be inside, and whether you need to execute some commands before building.

Let’s now create a file inside the root of the react-test project. Name it Dockerfile and write the following:

# Select the image to use
FROM node

## Install dependencies in the root of the Container
COPY package.json yarn.lock ./
ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn

# Add project files to /app route in Container
ADD . /app

# Set working dir to /app
WORKDIR /app

# Expose port 3000
EXPOSE 3000
When working on yarn projects, the recommendation is to keep node_modules out of /app and move it to the root. This takes advantage of the cache that yarn provides. Therefore, you can freely run rm -rf node_modules/ inside your React application.
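Instead of deleting node_modules by hand before every build, a .dockerignore file in the project root keeps it out of the build context automatically. A minimal sketch for this project:

```
node_modules
build
.git
```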

After that, you can build a new image given the above Dockerfile, which will run the commands defined step by step.

➜ ~ docker image build -t react:test .

To check if the Docker image is available, you can run docker image ls.

➜ ~ docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
react         test      b530cde7aba1   50 minutes ago   1.18GB
hello-world   latest    fce289e99eb9   7 months ago     1.84kB

Now it’s time to run the container by using the command you used in the previous examples: docker run.

➜ ~ docker run -it -p 3000:3000 react:test /bin/bash

Be aware of the -it flag, which, after you run the command, will give you a prompt inside the container. Here, you can run the same commands as in your local environment, e.g., yarn start or yarn build.

To quit the container, just type exit, but remember that the changes you make in the container won’t remain when you restart it. In case you want to keep the changes to the container in your file system, you can use the -v flag and mount the current directory into /app.

➜ ~ docker run -it -p 3000:3000 -v $(pwd):/app react:test /bin/bash

root@<container-id>:/app# yarn build

After the command is finished, you can check that you now have a /build folder inside your local project.


This has been an amazing journey into the fundamentals of how Docker works.


Thanks for reading

WordPress in Docker. Part 1: Dockerization


This entry-level guide will tell you why and how to Dockerize your WordPress projects.
