Differences Between Docker And Kubernetes Explained Under 5 Minutes

After Kubernetes announced that it was ending support for Docker in late 2021, people on Twitter went crazy and started asking questions: "How can they remove Docker support?", "Docker == containers, so what will I use now?", "Isn't k8s just automation for Docker?" To give you a heads-up, I have written this article to explain the core differences between Kubernetes and Docker.

Let’s discuss some similarities first, shall we?

Well, to say the least, I can understand why people get so bogged down: both of these platforms have a lot in common. Naturally, it is confusing, especially for beginners reading articles about Docker and Kubernetes. Still, only a few ideas are actually common to the two platforms.

  1. Both Docker and Kubernetes were designed for, and work well with, microservice architectures.
  2. Both platforms have dedicated open-source communities, and unlike some others, these communities are genuinely welcoming to newcomers.
  3. You can learn either platform without knowing the other. Knowing both gives you an edge, of course, but if you are running an application on a single machine with Docker, you don't need Kubernetes, and vice versa.


Christa  Stehr



50+ Useful Kubernetes Tools for 2020 - Part 2


Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.



Iliana  Welch



Docker Explained: Docker Architecture | Docker Registries

Following the second video about Docker basics, in this video I explain the Docker architecture and the different building blocks of the Docker Engine: the Docker client, the API, and the Docker daemon. I also explain what a Docker registry is, and I finish the video with a demo illustrating how to use Docker Hub.

In this video lesson you will learn:

  • What a Docker host is
  • What the Docker Engine is
  • Learn about the Docker architecture
  • Learn about the Docker client and the Docker daemon
  • Docker Hub and registries
  • A simple demo of using images from registries


Maud  Rosenbaum



Kubernetes in the Cloud: Strategies for Effective Multi Cloud Implementations

Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.

This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.

Kubernetes: Your Multi Cloud Strategy

Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.

In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking applications down to update operating systems or applications. There were many chances for error.

Kubernetes can provide an alternative to the old method, enabling teams to deploy applications independent of the environment in containers. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.

In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.

The Compelling Attributes of Multi Cloud Kubernetes

Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.


In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.


Kubernetes Vs. Docker: Primary Differences You Should Know

Kubernetes vs Docker is an essential topic of debate among professionals. Both are related to containerization, and each has its own set of features, so the community is divided into two camps, which can lead to confusion.

That’s why you should read this article as we’ve discussed all the significant differences between these two solutions. Let’s get started.


小泉  晃



Some Docker Best Practices for Python Developers

This article looks at some best practices to follow when writing Dockerfiles and working with Docker. While most of the practices listed apply to all developers regardless of language, a few apply only to those developing Python-based applications.



Use Multi-stage Builds

Take advantage of multi-stage builds to create leaner, more secure Docker images.

Multi-stage Docker builds allow you to break up your Dockerfile into multiple stages. For example, you can have a stage for compiling and building your application, which can then be copied into subsequent stages. Since only the final stage is used to create the image, the dependencies and tools associated with building the application are discarded, leaving a lean, modular, production-ready image.


# temp stage
FROM python:3.9-slim as builder

WORKDIR /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc

COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt

# final stage
FROM python:3.9-slim

WORKDIR /app

COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .

RUN pip install --no-cache /wheels/*

In this example, the GCC compiler is required for installing certain Python packages, so we added a temporary, build-time stage to handle the build phase. Since the final run-time image does not contain GCC, it's much lighter and more secure.

Size comparison:


REPOSITORY                 TAG                    IMAGE ID       CREATED          SIZE
docker-single              latest                 8d6b6a4d7fb6   16 seconds ago   259MB
docker-multi               latest                 813c2fa9b114   3 minutes ago    156MB


Another example, building Jupyter and Pandas wheels in a temporary stage:

# temp stage
FROM python:3.9 as builder

RUN pip wheel --no-cache-dir --no-deps --wheel-dir /wheels jupyter pandas

# final stage
FROM python:3.9-slim

WORKDIR /notebooks

COPY --from=builder /wheels /wheels
RUN pip install --no-cache /wheels/*


REPOSITORY                  TAG                   IMAGE ID       CREATED         SIZE
ds-multi                    latest                b4195deac742   2 minutes ago   357MB
ds-single                   latest                7c23c43aeda6   6 minutes ago   969MB

In summary, multi-stage builds can decrease the size of your production images, helping you save time and money. In addition, this will simplify your production containers. Also, due to the smaller size and simplicity, there's potentially a smaller attack surface.

Order Dockerfile Commands Appropriately

Pay close attention to the order of your Dockerfile commands to leverage layer caching.

Docker caches each step (or layer) in a particular Dockerfile to speed up subsequent builds. When a step changes, the cache will be invalidated not only for that particular step but all succeeding steps.


FROM python:3.9-slim

WORKDIR /app

COPY sample.py .

COPY requirements.txt .

RUN pip install -r requirements.txt

In this Dockerfile, we copied over the application code before installing the requirements. Now, each time we change sample.py, the build will reinstall the packages. This is very inefficient, especially when using a Docker container as a development environment. Therefore, it's crucial to keep the files that frequently change towards the end of the Dockerfile.

You can also help prevent unwanted cache invalidations by using a .dockerignore file to exclude unnecessary files from being added to the Docker build context and the final image. More on this here shortly.

So, in the above Dockerfile, you should move the COPY sample.py . command to the bottom:

FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install -r requirements.txt

COPY sample.py .


  1. Always put layers that are likely to change as low as possible in the Dockerfile.
  2. Combine RUN apt-get update and RUN apt-get install commands. (This also helps to reduce the image size. We'll touch on this shortly.)
  3. If you want to turn off caching for a particular Docker build, add the --no-cache=True flag.

Use Small Docker Base Images

Smaller Docker images are more modular and secure.

Building, pushing, and pulling images is quicker with smaller images. They also tend to be more secure since they only include the necessary libraries and system dependencies required for running your application.

Which Docker base image should you use?

Unfortunately, it depends.

Here's a size comparison of various Docker base images for Python:

REPOSITORY   TAG                 IMAGE ID       CREATED      SIZE
python       3.9.6-alpine3.14    f773016f760e   3 days ago   45.1MB
python       3.9.6-slim          907fc13ca8e7   3 days ago   115MB
python       3.9.6-slim-buster   907fc13ca8e7   3 days ago   115MB
python       3.9.6               cba42c28d9b8   3 days ago   886MB
python       3.9.6-buster        cba42c28d9b8   3 days ago   886MB

While the Alpine flavor, based on Alpine Linux, is the smallest, it can often lead to increased build times if you can't find compiled binaries that work with it. As a result, you may end up having to build the binaries yourself, which can increase the image size (depending on the required system-level dependencies) and the build times (due to having to compile from the source).

Refer to The best Docker base image for your Python application and Using Alpine can make Python Docker builds 50× slower for more on why it's best to avoid using Alpine-based base images.

In the end, it's all about balance. When in doubt, start with a *-slim flavor, especially in development mode, as you're building your application. You want to avoid having to continually update the Dockerfile to install necessary system-level dependencies when you add a new Python package. As you harden your application and Dockerfile(s) for production, you may want to explore using Alpine for the final image from a multi-stage build.

Also, don't forget to update your base images regularly to improve security and boost performance. When a new version of a base image is released -- i.e., 3.9.6-slim -> 3.9.7-slim -- you should pull the new image and update your running containers to get all the latest security patches.

Minimize the Number of Layers

It's a good idea to combine the RUN, COPY, and ADD commands as much as possible since each of them creates a layer. Layers are cached and add to the size of the image, so as the number of layers increases, the size increases as well.

You can test this out with the docker history command:

$ docker images

REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
dockerfile   latest    180f98132d02   51 seconds ago   259MB

$ docker history 180f98132d02

IMAGE          CREATED              CREATED BY                                      SIZE      COMMENT
180f98132d02   58 seconds ago       COPY . . # buildkit                             6.71kB    buildkit.dockerfile.v0
<missing>      58 seconds ago       RUN /bin/sh -c pip install -r requirements.t…   35.5MB    buildkit.dockerfile.v0
<missing>      About a minute ago   COPY requirements.txt . # buildkit              58B       buildkit.dockerfile.v0
<missing>      About a minute ago   WORKDIR /app

Take note of the sizes. Only the RUN, COPY, and ADD commands add size to the image. You can reduce the image size by combining commands wherever possible. For example:

RUN apt-get update
RUN apt-get install -y netcat

Can be combined into a single RUN command:

RUN apt-get update && apt-get install -y netcat

Thus, creating a single layer instead of two, which reduces the size of the final image.

While it's a good idea to reduce the number of layers, it's much more important for that to be less of a goal in itself and more a side-effect of reducing the image size and build times. In other words, focus more on the previous three practices -- multi-stage builds, order of your Dockerfile commands, and using a small base image -- than trying to optimize every single command.


Notes:

  1. RUN, COPY, and ADD each create layers.
  2. Each layer contains the differences from the previous layer.
  3. Layers increase the size of the final image.


Tips:

  1. Combine related commands.
  2. Remove unnecessary files in the same RUN step that created them.
  3. Minimize the number of times apt-get upgrade is run since it upgrades all packages to the latest version.
  4. With multi-stage builds, don't worry too much about overly optimizing the commands in temp stages.

Finally, for readability, it's a good idea to sort multi-line arguments alphanumerically:

RUN apt-get update && apt-get install -y \
    gcc \
    git \
    matplotlib \
    pillow \
    && rm -rf /var/lib/apt/lists/*

Use Unprivileged Containers

By default, Docker runs container processes as root inside of a container. However, this is a bad practice since a process running as root inside the container is running as root on the Docker host. Thus, if an attacker gains access to your container, they have all the root privileges and can perform several attacks against the Docker host, such as:

  1. copying sensitive info from the host's filesystem to the container
  2. executing remote commands

To prevent this, make sure to run container processes with a non-root user:

RUN addgroup --system app && adduser --system --group app

USER app

You can take it a step further and remove shell access and ensure there's no home directory as well:

RUN addgroup --gid 1001 --system app && \
    adduser --no-create-home --shell /bin/false --disabled-password --uid 1001 --system --group app

USER app


$ docker run -i sample id

uid=1001(app) gid=1001(app) groups=1001(app)

Here, the app within the container runs under a non-root user. Keep in mind, though, that the Docker daemon and the containers themselves still run with root privileges. Be sure to review "Run the Docker daemon as a non-root user" for help with running both the daemon and your containers as a non-root user.




Prefer COPY Over ADD

Both commands allow you to copy files from a specific location into a Docker image:

ADD <src> <dest>
COPY <src> <dest>

  • COPY is used for copying local files or directories from the Docker host to the image.
  • ADD can be used for the same thing, as well as for downloading external files. Also, if you use a compressed file (tar, gzip, bzip2, etc.) as the <src> parameter, ADD will automatically unpack its contents to the given location.

# copy local files on the host  to the destination
COPY /source/path  /destination/path
ADD /source/path  /destination/path

# download external file and copy to the destination
ADD http://external.file/url  /destination/path

# copy and extract a local compressed file
ADD source.file.tar.gz /destination/path

Cache Python Packages to the Docker Host

When a requirements file changes, the image needs to be rebuilt to install the new packages, and the packages all get re-downloaded. You can avoid this by mapping the pip cache directory to a directory on the host machine. Then, for each rebuild, the cached versions persist and can improve the build speed.

Add the volume to the docker run command as -v $HOME/.cache/pip-docker/:/root/.cache/pip, or as a mapping in your Docker Compose file.

Moving the cache from the Docker image to the host can also save space in the final image.
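The same host-directory mapping can be expressed in a Compose file (the service name and paths here are illustrative, not from the article):

```yaml
services:
  web:
    build: .
    volumes:
      # persist pip's download cache on the host between rebuilds
      - $HOME/.cache/pip-docker/:/root/.cache/pip
```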

If you're using Docker BuildKit, use a BuildKit cache mount to manage the cache instead:

# syntax = docker/dockerfile:1.2

FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .

RUN --mount=type=cache,target=/root/.cache/pip \
        pip install -r requirements.txt




Run Only One Process Per Container

Suppose your application stack consists of two web servers and a database. While you could easily run all three from a single container, you should run each in a separate container to make it easier to reuse and scale each individual service.

  1. Scaling - With each service in a separate container, you can scale one of your web servers horizontally as needed to handle more traffic.
  2. Reusability - Perhaps you have another service that needs a containerized database. You can simply reuse the same database container without bringing two unnecessary services along with it.
  3. Logging - Coupled containers make logging much more complex. We'll address this in further detail later in this article.
  4. Portability and predictability - It's much easier to make security patches or debug an issue when there's less surface area to work with.
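A sketch of the stack above as separate Compose services (the service names and images are illustrative, not from the article):

```yaml
services:
  web1:
    build: ./web          # first web server
    ports:
      - "8001:8000"
  web2:
    build: ./web          # second web server, scaled independently
    ports:
      - "8002:8000"
  db:
    image: postgres:13-alpine
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

Each service can now be scaled, replaced, or reused on its own; with Compose you would typically scale via deploy.replicas or docker compose up --scale rather than duplicating service blocks, but the separation is the point here.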


Prefer Array Over String Syntax

You can write the CMD and ENTRYPOINT commands in your Dockerfiles in either array (exec) or string (shell) format:

# array (exec)
CMD ["gunicorn", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "main:app"]

# string (shell)
CMD "gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app"

Both are valid and achieve nearly the same thing; however, you should use the exec format whenever possible. From the Docker documentation:

  1. Make sure you're using the exec form of CMD and ENTRYPOINT in your Dockerfile.
  2. For example, use ["program", "arg1", "arg2"] not "program arg1 arg2". Using the string form causes Docker to run your process using bash, which doesn't handle signals properly. Compose always uses the JSON form, so don't worry if you override the command or entrypoint in your Compose file.

So, since most shells don't forward signals to child processes, if you use the shell format, CTRL-C (which generates a SIGTERM) may not stop a child process.
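The signal-handling difference is easy to demonstrate in plain Python. Here's a minimal sketch (not from the article) of a process that traps SIGTERM and shuts down cleanly; run via the exec form, this process is PID 1 and actually receives the SIGTERM that docker stop sends, while via the shell form the shell receives it instead and the handler may never fire:

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # set a flag so the main loop can finish in-flight work and exit
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# simulate docker stop sending SIGTERM to this process
os.kill(os.getpid(), signal.SIGTERM)

print("graceful shutdown:", shutting_down)
```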


FROM ubuntu:18.04

# BAD: shell format
ENTRYPOINT top -d

# GOOD: exec format
ENTRYPOINT ["top", "-d"]

Try them both. Note that with the shell format flavor, CTRL-C won't kill the process; instead, you'll see ^C^C^C^C^C^C^C^C^C^C^C.

Another caveat is that with the shell format, PID 1 belongs to the shell, not to the process itself.

# array format
root@18d8fd3fd4d2:/app# ps ax
    1 ?        Ss     0:00 python manage.py runserver
    7 ?        Sl     0:02 /usr/local/bin/python manage.py runserver
   25 pts/0    Ss     0:00 bash
  356 pts/0    R+     0:00 ps ax

# string format
root@ede24a5ef536:/app# ps ax
    1 ?        Ss     0:00 /bin/sh -c python manage.py runserver
    8 ?        S      0:00 python manage.py runserver
    9 ?        Sl     0:01 /usr/local/bin/python manage.py runserver
   13 pts/0    Ss     0:00 bash
  342 pts/0    R+     0:00 ps ax


Understand the Difference Between ENTRYPOINT and CMD

Should you use ENTRYPOINT or CMD to run container processes? There are two ways to run commands in a container:


CMD ["gunicorn", "config.wsgi", "-b", ""]

# and

ENTRYPOINT ["gunicorn", "config.wsgi", "-b", ""]

Both essentially do the same thing: start the application at config.wsgi with a Gunicorn server and bind it to 0.0.0.0:8000.

CMD is easily overridden. If you run docker run <image_name> uvicorn config.asgi, the above CMD gets replaced by the new arguments -- e.g., uvicorn config.asgi. To override the ENTRYPOINT command, you must specify the --entrypoint option:

docker run --entrypoint uvicorn config.asgi <image_name>




You can also use ENTRYPOINT and CMD together: ENTRYPOINT defines the command that is always executed, while CMD supplies default arguments that are easily overridden:

ENTRYPOINT ["gunicorn", "config.wsgi", "-w"]
CMD ["4"]


When the container runs, this becomes:

gunicorn config.wsgi -w 4


The default arguments can then be overridden on their own:

docker run <image_name> 6

This will start the container with six Gunicorn workers instead of four.


Include a HEALTHCHECK Instruction

Use a HEALTHCHECK to determine if the process running in the container is not only up and running, but is also "healthy".

Docker exposes an API for checking the status of the process running in the container, which provides much more information than just whether the process is "running" or not since "running" covers "it is up and working", "still launching", and even "stuck in some infinite loop error state". You can interact with this API via the HEALTHCHECK instruction.

For example, if you're serving up a web app, you can use the following to determine if the / endpoint is up and can handle serving requests:

HEALTHCHECK CMD curl --fail http://localhost:8000 || exit 1

If you run docker ps, you can see the status of the HEALTHCHECK.

Healthy example:

CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS                            PORTS                                       NAMES
09c2eb4970d4   healthcheck   "python manage.py ru…"   10 seconds ago   Up 8 seconds (health: starting)>8000/tcp, :::8000->8000/tcp   xenodochial_clarke

Unhealthy example:

CONTAINER ID   IMAGE         COMMAND                  CREATED              STATUS                          PORTS                                       NAMES
09c2eb4970d4   healthcheck   "python manage.py ru…"   About a minute ago   Up About a minute (unhealthy)>8000/tcp, :::8000->8000/tcp   xenodochial_clarke

You can take it a step further and set up a custom endpoint used only for health checks and then configure the HEALTHCHECK to test against the returned data. For example, if the endpoint returns a JSON response of {"ping": "pong"}, you can instruct the HEALTHCHECK to validate the response body.
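Sticking with that {"ping": "pong"} example, the validation itself is only a few lines of Python. A sketch of such a check (the script name and endpoint are hypothetical; the container would invoke it via something like HEALTHCHECK CMD python healthcheck.py instead of plain curl):

```python
import json

def is_healthy(body: bytes) -> bool:
    # the check fails on invalid JSON or an unexpected payload
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return isinstance(payload, dict) and payload.get("ping") == "pong"

print(is_healthy(b'{"ping": "pong"}'))   # True
print(is_healthy(b'<html>oops</html>'))  # False
```

The script would fetch the health endpoint, call is_healthy on the response body, and exit 0 or 1 accordingly so Docker can mark the container healthy or unhealthy.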

Here's how you view the status of the health check status using docker inspect:

❯ docker inspect --format "{{json .State.Health }}" ab94f2ac7889
{
  "Status": "healthy",
  "FailingStreak": 0,
  "Log": [
    {
      "Start": "2021-09-28T15:22:57.5764644Z",
      "End": "2021-09-28T15:22:57.7825527Z",
      "ExitCode": 0,
      "Output": "..."
    }
  ]
}

Here, the output is trimmed as it contains the whole HTML output.

You can also add a health check to a Docker Compose file:

version: "3.8"

services:
  web:
    build: .
    ports:
      - '8000:8000'
    healthcheck:
      test: curl --fail http://localhost:8000 || exit 1
      interval: 10s
      timeout: 10s
      start_period: 10s
      retries: 3


  • test: The command to test.
  • interval: The interval to test for -- i.e., test every x unit of time.
  • timeout: Max time to wait for the response.
  • start_period: When to start the health check. It can be used when additional tasks are performed before the containers are ready, like running migrations.
  • retries: Maximum retries before designating a test as failed.

If you're using an orchestration tool other than Docker Swarm -- i.e., Kubernetes or AWS ECS -- it's highly likely that the tool has its own internal system for handling health checks. Refer to the docs of the particular tool before adding the HEALTHCHECK instruction.


Version Docker Images

Whenever possible, avoid using the latest tag.





Instead, use descriptive tags based on, for example:

  1. Timestamps
  2. Docker image IDs
  3. Git commit hashes
  4. Semantic versions

For more options, check out the answers in the "Properly Versioning Docker Images" Stack Overflow question.


For example:

docker build -t web-prod-a072c4e5d94b5a769225f621f08af3d4bf820a07-0.1.4 .

Here, the tag combines:

  1. Project name: web
  2. Environment name: prod
  3. Git commit hash: a072c4e5d94b5a769225f621f08af3d4bf820a07
  4. Semantic version: 0.1.4
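A tiny helper (hypothetical, e.g. for a CI script) makes the scheme concrete; in practice the hash would come from git rev-parse HEAD:

```python
def image_tag(project: str, env: str, git_sha: str, version: str) -> str:
    # <project>-<environment>-<git commit hash>-<semantic version>
    return f"{project}-{env}-{git_sha}-{version}"

tag = image_tag("web", "prod", "a072c4e5d94b5a769225f621f08af3d4bf820a07", "0.1.4")
print(tag)  # web-prod-a072c4e5d94b5a769225f621f08af3d4bf820a07-0.1.4
```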



Don't Store Secrets in Images

Secrets are sensitive pieces of information such as passwords, database credentials, SSH keys, tokens, and TLS certificates, to name a few. These should not be baked into your images without being encrypted, since unauthorized users who gain access to the image can simply examine the layers to extract the secrets.

Do not add secrets to your Dockerfiles in plaintext, especially if you're pushing the images to a public registry like Docker Hub:

FROM python:3.9-slim

ENV DATABASE_PASSWORD "SuperSecret"

Instead, secrets should be injected via:

  1. Environment variables (at run-time)
  2. Build-time arguments (at build-time)
  3. An orchestration tool like Docker Swarm (via Docker secrets) or Kubernetes (via Kubernetes secrets)



Finally, be explicit about what files are getting copied over to the image rather than copying all files recursively:

# bad
COPY . .

# good
COPY ./app.py .

Being explicit also helps to limit cache-busting.

Environment Variables

You can pass secrets via environment variables, but they will be visible in all child processes, linked containers, and logs, as well as via docker inspect. It's also difficult to update them.

$ docker run --detach --env "DATABASE_PASSWORD=SuperSecretSauce" python:3.9-slim


$ docker inspect --format='{{range .Config.Env}}{{println .}}{{end}}' d92cf5cf870eb0fdbf03c666e7fcf18f9664314b79ad58bc7618ea3445e39239


This is the most straightforward approach to secrets management. While it's not the most secure, it will keep the honest people honest since it provides a thin layer of protection, helping to keep the secrets hidden from curious wandering eyes.

Passing secrets in using a shared volume is a better solution, but they should be encrypted, via Vault or AWS Key Management Service (KMS), since they are saved to disk.

Build-time Arguments

You can pass secrets in at build-time using build-time arguments, but they will be visible to those who have access to the image via docker history.


FROM python:3.9-slim

ARG DATABASE_PASSWORD

Then, pass the argument at build time:

$ docker build --build-arg "DATABASE_PASSWORD=SuperSecretSauce" .

If you only need to use the secrets temporarily as part of the build -- i.e., SSH keys for cloning a private repo or downloading a private package -- you should use a multi-stage build since the builder history is ignored for temporary stages:

# temp stage
FROM python:3.9-slim as builder

# secret
ARG PRIVATE_SSH_KEY
# install git
RUN apt-get update && \
    apt-get install -y --no-install-recommends git

# use ssh key to clone repo
RUN mkdir -p /root/.ssh/ && \
    echo "${PRIVATE_SSH_KEY}" > /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN git clone git@github.com:testdrivenio/not-real.git

# final stage
FROM python:3.9-slim


# copy the repository from the temp image
COPY --from=builder /your-repo /app/your-repo

# use the repo for something!


You can also use the new --secret option in docker build to pass secrets to Docker images; these secrets are not stored in the image.

# "docker_is_awesome" > secrets.txt

FROM alpine

# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret



Build the image with the secret:

docker build --no-cache --progress=plain --secret id=mysecret,src=secrets.txt .

# output
#4 [1/2] FROM docker.io/library/alpine
#4 sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7

#5 [2/2] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#5 sha256:75601a522ebe80ada66dedd9dd86772ca932d30d7e1b11bba94c04aa55c237de
#5 0.635 docker_is_awesome
#5 DONE 0.7s

#6 exporting to image


The secret is likewise absent from the image history:

❯ docker history 49574a19241c
IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
49574a19241c   5 minutes ago   CMD ["/bin/sh"]                                 0B        buildkit.dockerfile.v0
<missing>      5 minutes ago   RUN /bin/sh -c cat /run/secrets/mysecret # b…   0B        buildkit.dockerfile.v0
<missing>      4 weeks ago     /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>      4 weeks ago     /bin/sh -c #(nop) ADD file:aad4290d27580cc1a…   5.6MB

For more on build-time secrets, review Don't leak your Docker image's build secrets.


Docker Secrets

If you're using Docker Swarm, you can manage secrets with Docker secrets.

For example, initialize Docker Swarm mode:

$ docker swarm init


Create a Docker secret:

$ echo "supersecretpassword" | docker secret create postgres_password -

$ docker secret ls
ID                          NAME                DRIVER    CREATED         UPDATED
qdqmbpizeef0lfhyttxqfbty0   postgres_password             4 seconds ago   4 seconds ago

When a container is granted access to the above secret, it will be mounted at /run/secrets/postgres_password. This file contains the actual value of the secret in plaintext.
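On the application side, reading such a secret is just reading a file. A sketch (the helper and its environment-variable fallback are assumptions for illustration, not part of Docker's API):

```python
import os
from pathlib import Path
from typing import Optional

def read_secret(name: str, default: Optional[str] = None) -> Optional[str]:
    # Docker Swarm mounts each secret as a plaintext file at /run/secrets/<name>
    secret_file = Path("/run/secrets") / name
    if secret_file.exists():
        return secret_file.read_text().strip()
    # fall back to an environment variable (e.g. for local development)
    return os.environ.get(name.upper(), default)

password = read_secret("postgres_password", default="dev-only-password")
```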


If you're using Kubernetes or another orchestrator, refer to its documentation instead:

  1. AWS EKS - Using AWS Secrets Manager secrets with Kubernetes
  2. DigitalOcean Kubernetes - Recommended Steps to Secure a DigitalOcean Kubernetes Cluster
  3. Google Kubernetes Engine - Using Secret Manager with other products
  4. Nomad - Vault Integration and Retrieving Dynamic Secrets

Use a .dockerignore File

We've mentioned using a .dockerignore file a few times already. This file is used to specify the files and folders that you don't want added to the initial build context, which is sent to the Docker daemon that then builds your image. Put another way, you can use it to define exactly the build context that you need.

When a Docker image is built, the entire Docker context -- i.e., the root of your project -- is sent to the Docker daemon before the COPY or ADD commands are evaluated. This can be quite expensive, especially if your project contains many dependencies, large data files, or build artifacts. Plus, the Docker CLI and daemon may not be on the same machine, so if the daemon runs on a remote machine, you should pay even more attention to the size of the build context.


What should you add to the .dockerignore file?

  1. Temporary files and folders
  2. Build logs
  3. Local secrets
  4. Local development files like docker-compose.yml
  5. Version control folders like ".git", ".hg", and ".svn"
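A minimal .dockerignore covering those categories might look like this (the entries are illustrative):

```
**/.git
**/.hg
**/.svn
**/__pycache__
*.log
.env
docker-compose.yml
```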




In summary, a properly structured .dockerignore file can help:

  1. Decrease the size of the Docker image
  2. Speed up the build process
  3. Prevent unnecessary cache invalidation
  4. Prevent leaking secrets

Lint and Scan Your Dockerfiles and Images

Linting is the process of checking your source code for programmatic and stylistic errors and bad practices that could lead to potential flaws. Just like with programming languages, static files can also be linted. With your Dockerfiles specifically, linters can help ensure they are maintainable, avoid deprecated syntax, and adhere to best practices. Linting your images should be a standard part of your CI pipelines.

Hadolint is the most popular Dockerfile linter:

$ hadolint Dockerfile

Dockerfile:1 DL3006 warning: Always tag the version of an image explicitly
Dockerfile:7 DL3042 warning: Avoid the use of cache directory with pip. Use `pip install --no-cache-dir <package>`
Dockerfile:9 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.
Dockerfile:17 DL3025 warning: Use arguments JSON notation for CMD and ENTRYPOINT arguments

You can see it in action online at https://hadolint.github.io/hadolint/. There is also a VS Code extension.

You can couple linting your Dockerfiles with scanning your images and containers for vulnerabilities. Some options:

  1. Snyk is the exclusive provider of native vulnerability scanning for Docker. You can use the docker scan CLI command to scan images.
  2. Trivy can be used to scan container images, file systems, git repositories, and other configuration files.
  3. Clair is an open source project for the static analysis of vulnerabilities in application containers.
  4. Anchore is an open source project that provides a centralized service for the inspection, analysis, and certification of container images.

In summary, lint and scan your Dockerfiles and images to surface potential issues that deviate from best practices.



Sign and Verify Images

How do you know that the images used to run your production code haven't been tampered with? Tampering can occur over the wire via man-in-the-middle (MITM) attacks, or from the registry itself being compromised.

Docker Content Trust (DCT) enables the signing and verifying of Docker images from remote registries.




To enable DCT, set the DOCKER_CONTENT_TRUST environment variable. Then, if you try to pull an image that hasn't been signed, you'll receive an error like:

Error: remote trust data does not exist for docker.io/namespace/unsigned-image:
notary.docker.io does not have trust data for docker.io/namespace/unsigned-image

You can learn about signing images from the Signing Images with Docker Content Trust documentation.

When downloading images from Docker Hub, make sure to use either official images or verified images from trusted sources. Larger teams should consider using their own internal private container registry.


Use Python Virtual Environments

Should you use a virtual environment inside a container? Since the container itself provides isolation, a virtual environment is usually unnecessary; still, in a multi-stage build it gives you a clean way to copy installed packages between stages, as an alternative to building wheel files.

Example with wheels:
# temp stage
FROM python:3.9-slim as builder

WORKDIR /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc

COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt

# final stage
FROM python:3.9-slim

WORKDIR /app

COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .

RUN pip install --no-cache /wheels/*

Example using virtualenv:

# temp stage
FROM python:3.9-slim as builder



RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc

RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

COPY requirements.txt .
RUN pip install -r requirements.txt

# final stage
FROM python:3.9-slim

COPY --from=builder /opt/venv /opt/venv

# make sure the virtualenv is used
ENV PATH="/opt/venv/bin:$PATH"

Set Memory and CPU Limits

It's a good idea to limit the memory usage of your Docker containers, especially if you're running multiple containers on a single machine. This prevents any single container from using all available memory and thereby crippling the rest.

The easiest way to limit resource usage is with the --memory and --cpus options in the Docker CLI:

$ docker run --cpus=2 -m 512m nginx

The above command limits the container to 2 CPUs and 512 megabytes of memory.

You can do the same sort of thing in a Docker Compose file like so:

version: "3.9"

services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M
        reservations:
          cpus: '1'
          memory: 256M

Note the reservations field. It's used to set a soft limit, which takes priority when the host machine is low on memory or CPU resources.

Additional resources:

  1. Runtime options with Memory, CPUs, and GPUs
  2. Docker Compose resource constraints


Log to stdout or stderr

Applications running within your Docker containers should write log messages to standard output (stdout) and standard error (stderr) rather than to a file.

You can then configure the Docker daemon to send your log messages to a centralized logging solution (like CloudWatch Logs or Papertrail).

For more, check out Treat logs as event streams from The Twelve-Factor App and Configuring logging drivers from the Docker docs.
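For a Python app, this just means pointing the logging module at sys.stdout instead of a file; a minimal sketch (the logger name and format are illustrative):

```python
import logging
import sys

def configure_logging() -> logging.Logger:
    # write to stdout so the Docker logging driver can collect the messages
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.handlers = [handler]
    return logger

logger = configure_logging()
logger.info("container started")
```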

Use a Shared Memory Mount for the Gunicorn Heartbeat

Gunicorn uses a file-based heartbeat system to ensure that all of the forked worker processes are alive.

In most cases, the heartbeat files live in "/tmp", which is often held in memory via tmpfs. Since Docker does not leverage tmpfs by default, the files are instead stored on a disk-backed file system. This can cause problems, such as random freezes, since the heartbeat system uses os.fchmod, which can block a worker if the directory is in fact on a disk-backed file system.

Luckily, a simple fix is to change the heartbeat directory to a memory-mapped directory via the --worker-tmp-dir flag:

gunicorn --worker-tmp-dir /dev/shm config.wsgi -b


This article looked at several best practices to make your Dockerfiles and images cleaner, leaner, and more secure.

Source: https://testdriven.io
