Quan Huynh


How Kubernetes works with Container Runtime

Continuing the previous post, where we learned about the Container Runtime, in this post we are going to explore a rather exciting topic: how Kubernetes uses the Container Runtime, and the types of Container Runtimes that Kubernetes supports.

#devops #kubernetes #docker #containers 


Quan Huynh


Deep into Container Runtime

Continuing the previous post, where we learned that the two main building blocks of containers are Linux Namespaces and Cgroups, in this post we are going to learn what a Container Runtime is and how it works with containers.

#devops #docker #containers 


Quan Huynh


Linux Namespaces and Cgroups: What are containers made from?

If we do DevOps, we are probably familiar with Kubernetes, Docker, and Containers. But have we ever wondered what Docker actually is? What are containers? Is Docker a container? Docker is not a container, and I will explain what it is in this post.


 #devops #containers #kubernetes 

Waylon Bruen


Moby: A Collaborative Project for The Container Ecosystem

The Moby Project

Moby is an open-source project created by Docker to enable and accelerate software containerization.

It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas. Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.


Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience. It is open to the community to help set its direction.

  • Modular: the project includes lots of components that have well-defined functions and APIs that work together.
  • Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped out for different implementations.
  • Usable security: Moby provides secure defaults without compromising usability.
  • Developer focused: The APIs are intended to be functional and useful to build powerful tools. They are not necessarily intended as end user tools but as components aimed at developers. Documentation and UX are aimed at developers, not end users.


The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers. It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.

Relationship with Docker

The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project. New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product. However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.

The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful. The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.


Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Moby may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Author: Moby
Source Code: https://github.com/moby/moby 
License: Apache-2.0 license

#go #golang #docker #containers 

坂本 篤司









5 Simple Tips for Debugging Docker Containers

Sometimes, Docker containers can be a black box. Whether you built the underlying image yourself or are using a public one, a misbehaving container is frustrating. Figuring out what is going on can be difficult because of the way containers run and how they handle logging.

In this article, we will explore some basic commands and parameters you can use to troubleshoot particularly finicky containers. Whether a container won't start, blows up intermittently, or you simply want more detail about an image, these simple options are a real game changer.

1. Better Logging and Timestamps

The first and simplest example is using the logging tools Docker already provides. Most people already know how to look at the logs inside a container:

docker logs <container_id>

But what if this particular container has been running for a long time and has a log the size of Texas? In cases like this, you can simply add the --tail parameter:

docker logs --tail 10 <container_id>

Using the --tail option, you can see only the last n lines of the log. Passing the number of lines you want lets you jump straight to the most relevant, recent information.

If the log output from inside your container does not contain timestamps by default, you can add them too. Docker lets you pass the -t flag to logs, which will prefix every line with a timestamp:

docker logs -t <container_id>

These options can also be combined to form a precise troubleshooting instrument. Now you can tell exactly when something happened without having to change anything inside the container.
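As a small illustration (with a placeholder container id), the two flags from this section can be passed in a single command:

```shell
docker logs -t --tail 10 <container_id>
```

This prints only the last 10 log lines, each prefixed with its timestamp.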

2. Running Commands as Root

If you are using an image that runs as the root user by default, this is not a problem. When the image does not run as root and instead uses an unprivileged user, this is a great troubleshooting tool.

If you run:

docker exec -it <container_id> /bin/sh

this will always execute as the user defined in the underlying image. If that user does not have root privileges, it can be hard to exec into a running container to troubleshoot (especially if you need to install something).

If you want to jump into the container as the root user, all you have to do is pass the following:

docker exec -u 0 -it <container_id> /bin/sh

This tells Docker to use the user with ID 0, which is root. Now, when you enter the container, you will be ready to debug with full privileges.

3. Committing a Container as an Image

This is an often-overlooked Docker feature. You can actually create a new image from an existing container. That means if you have been poking around in a container and made some changes to fix a few bugs, you can spin up new containers from it right away. You don't even have to go back and rebuild the Dockerfile.

The following command will commit a new image from an existing container:

docker commit <container_name> <new_image>

This creates a new image with the name you specify, and you can use it immediately to create new containers.

Another added benefit of the commit command is that you can pass it Dockerfile syntax during the commit process. If you wanted to commit an existing container but change one of its environment variables, you could use the --change flag to pass that in:

docker commit --change="ENV foo=bar" <container_name> <new_image>

You can pass several different changes to the commit command, making it easy to create impressively granular images right on the command line.

4. Matching Image Hashes

If you are troubleshooting a container that has existed for a while, you may not know which particular version of an image it was created from. If you use a container registry such as Docker Hub or Elastic Container Registry, you can easily grab the image hash to compare against your container.

A quick way to capture all the metadata about a container is the inspect command. That works, but it gives you a mountain of information. If all you are after is the image hash, you can get it with a little formatting magic like this:

docker inspect --format "{{ .Image }}" <container_id>

This should output the sha256 hash of the image the container is running. That hash can then be compared against the one in your registry to determine when it was built.

Now you can be absolutely sure of which version is running and where.

5. Skipping the Build Cache

If you are really struggling to understand why a build is failing, erroring out, or simply not picking up some changes you made, it might be time to drop the cache. Although Docker should recognize changes to layers and rebuild as needed, sometimes you need the peace of mind of starting from scratch.

If you want to build an image without leveraging any of the existing build cache, you can run the following command:

docker build --tag <tag> --no-cache .

This will ignore anything previously built in the cache and force everything to be rebuilt from scratch. It is useful if you are working through several iterations of an image and want to make sure you pick up very subtle changes to some layers.

Thanks for reading!

This story was originally published at https://betterprogramming.pub/5-simple-tips-for-debugging-docker-containers-271cb3dee77a

#docker #containers 

Lawson Wehner


Docker2: Simple Library for Viewing & Controlling Docker Images

Docker CLI is a Dart library for controlling docker images and containers.

Docker CLI wraps the docker cli tooling.




Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add docker2

With Flutter:

 $ flutter pub add docker2

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  docker2: ^2.2.5

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:docker2/docker2.dart';


import 'package:dcli/dcli.dart';
import 'package:docker2/docker2.dart';

void main(List<String> args) {
  final image = Docker().findImageByName('alpine');
  if (image != null) {
    print('Found an existing alpine image');
  }

  /// If we don't have the image pull it.
  final alpineImage = Docker().pull('alpine');

  /// If the container exists then lets delete it so we can recreate it.
  final existing = Docker().findContainerByName('alpine_sleep_inifinity');
  if (existing != null) {
    existing.delete();
  }

  /// create container named alpine_sleep_inifinity
  final container =
      alpineImage.create('alpine_sleep_inifinity', argString: 'sleep infinity');

  if (Docker().findContainerByName('alpine_sleep_inifinity') != null) {
    print('Houston we have a container');
  }

  // start the container.
  container.start();

  /// stop the container.
  container.stop();

  // wait until the container has actually stopped.
  while (container.isRunning) {
    sleep(1);
  }
}

Author: Noojee
Source Code: https://github.com/noojee/docker2 
License: MIT license

#docker #flutter #dart #containers 

Annie Emard


Vulnerability Static Analysis for Containers


Note: The main branch may be in an unstable or even broken state during development. Please use releases instead of the main branch in order to get stable binaries.

Clair is an open source project for the static analysis of vulnerabilities in application containers (currently including OCI and docker).

Clients use the Clair API to index their container images and can then match them against known vulnerabilities.

Our goal is to enable a more transparent view of the security of container-based infrastructure. Thus, the project was named Clair after the French term which translates to clear, bright, transparent.

The book contains all the documentation on Clair's architecture and operation.



See CONTRIBUTING for details on submitting patches and the contribution workflow.

Author: quay
Source Code: https://github.com/quay/clair
License: Apache-2.0 License

#analysis #containers 


A Flake8 Plugin to Ensure A Consistent format For Multiline Containers


A Flake8 plugin to ensure a consistent format for multiline containers.


Install from pip with:

pip install flake8-multiline-containers


JS101: Multi-line container not broken after opening character
JS102: Multi-line container does not close on same column as opening


# Right: Opens and closes on same line
foo = {'a': 'hello', 'b': 'world'}

# Right: Line break after parenthesis, closes on same column as opening
foo = {
    'a': 'hello',
    'b': 'world',
}

# Right: Line break after parenthesis, closes on same column as opening
foo = [
    'hello', 'world',
]

# Wrong: JS101
foo = {'a': 'hello',
       'b': 'world',
}

# Wrong: JS101, JS102
foo = {'a': 'hello',
       'b': 'world'}

# Wrong: JS101, JS102
foo = {'hello',
       'world'}

Author: jsfehler
Source Code: https://github.com/jsfehler/flake8-multiline-containers
License: MIT license

#python #containers 

Jade Bird


Miniboss: Python App for Running, Rebuilding and Restarting a Collection of Docker Services


miniboss is a Python application for locally running a collection of interdependent docker services, individually rebuilding and restarting them, and managing application state with lifecycle hooks. Service definitions can be written in Python, allowing the use of programming logic instead of markup.

Why not docker-compose?

First and foremost, good old Python instead of YAML. docker-compose is in the school of yaml-as-service-description, which means that going beyond a static description of a service set necessitates templates, or some kind of scripting. One could just as well use a full-blown programming language, while trying to keep simple things simple. Another thing sorely missing in docker-compose is lifecycle hooks, i.e. a mechanism whereby scripts can be executed when the state of a container changes. Lifecycle hooks have been requested multiple times, but were not deemed to be in the domain of docker-compose.


miniboss is on PyPi; you can install it with the following:

pip install miniboss


Here is a very simple service specification:

#! /usr/bin/env python3
import miniboss

miniboss.group_name("readme-example")

class Database(miniboss.Service):
    name = "appdb"
    image = "postgres:10.6"
    env = {"POSTGRES_PASSWORD": "dbpwd",
           "POSTGRES_USER": "dbuser",
           "POSTGRES_DB": "appdb"}
    ports = {5432: 5433}

class Application(miniboss.Service):
    name = "python-todo"
    image = "afroisalreadyin/python-todo:0.0.1"
    env = {"DB_URI": "postgresql://dbuser:dbpwd@appdb:5432/appdb"}
    dependencies = ["appdb"]
    ports = {8080: 8080}
    stop_signal = "SIGINT"

if __name__ == "__main__":
    miniboss.cli()

The first use of miniboss is in the call to miniboss.group_name, which specifies a name for this group of services. If you don't set it, a slugified form of the directory name will be used. The group name is used to identify the services and the network defined in a miniboss file. Setting it manually to a non-default value allows miniboss to manage multiple collections in the same directory.

A service is defined by subclassing miniboss.Service and overriding, in the minimal case, the fields image and name. The env field specifies the environment variables. As in the case of the appdb service, you can use ordinary variables anywhere Python accepts them. The other available fields are explained in the section Service definition fields. In the above example, we are creating two services: The application service python-todo (a simple Flask todo application defined in the sample-apps directory) depends on appdb (a Postgresql container), specified through the dependencies field. As in docker-compose, this means that python-todo will get started after appdb reaches running status.

The miniboss.cli function is the main entry point; you need to call it in the main section of your script. Let's run the script above without arguments, which leads to the following output:

Usage: miniboss-main.py [OPTIONS] COMMAND [ARGS]...

  --help  Show this message and exit.


We can start our small collection of services by running ./miniboss-main.py start. After spitting out some logging text, you will see that starting the containers failed, with the python-todo service throwing an error that it cannot reach the database. The reason for this error is that the Postgresql process has started, but is still initializing, and does not accept connections yet. The standard way of dealing with this issue is to include backoff code in your application that checks on the database port regularly, until the connection is accepted. miniboss offers an alternative with lifecycle events. For the time being, you can simply rerun ./miniboss-main.py start, which will restart only the python-todo service, as the other one is already running. You should be able to navigate to http://localhost:8080 and view the todo app page.

You can also exclude services from the list of services to be started with the --exclude argument; ./miniboss-main.py start --exclude python-todo will start only appdb. If you exclude a service that is depended on by another, you will get an error. If a service fails to start (i.e. container cannot be started or the lifecycle events fail), it and all the other services that depend on it are registered as failed.

Stopping services

Once you are done working with a collection, you can stop the running services with miniboss-main.py stop. This will stop the services in the reverse order of dependency, i.e. first python-todo and then appdb. Exclusion is possible also when stopping services with the same --exclude argument. Running ./miniboss-main.py stop --exclude appdb will stop only the python-todo service. If you exclude a service whose dependency will be stopped, you will get an error. If, in addition to stopping the service containers, you want to remove them, include the option --remove. If you don't remove the containers, miniboss will restart the existing containers (modulo changes in service definition) instead of creating new ones the next time it's called with start. This behavior can be modified with the always_start_new field; see the details in Service definition fields.

Reloading a service

miniboss also allows you to reload a specific service by building a new container image from a directory. You need to provide the path to the directory in which the Dockerfile and build context of a service resides in order to use this feature. You can also provide an alternative Dockerfile name. Here is an example:

class Application(miniboss.Service):
    name = "python-todo"
    image = "afroisalreadyin/python-todo:0.0.1"
    env = {"DB_URI": "postgresql://dbuser:dbpwd@appdb:5432/appdb"}
    dependencies = ["appdb"]
    ports = {8080: 8080}
    build_from = "python-todo/"
    dockerfile = "Dockerfile"

The build_from option has to be a path relative to the main miniboss file. With such a service configuration, you can run ./miniboss-main.py reload python-todo, which will cause miniboss to build the container image, stop the running service container, and restart the new image. Since the context generated at start is saved in a file, any context values used in the service definition are available to the new container.

Lifecycle events

One of the differentiating features of miniboss is lifecycle events: hooks that can be customized to execute code at certain points in a service's, or the whole collection's, lifecycle.

Per-service events

For per-service events, miniboss.Service has three methods that can be overridden in order to correctly change states and execute actions on the container:

Service.pre_start(): Executed before the service is started. Can be used for things like initializing mount directory contents or downloading online content.

Service.ping(): Executed repeatedly right after the service starts with a 0.1 second delay between executions. If this method does not return True within a given timeout value (can be set with the --timeout argument, default is 300 seconds), the service is registered as failed. Any exceptions in this method will be propagated, and also cause the service to fail. If there is already a service instance running, it is not pinged.

Service.post_start(): This method is executed after a successful ping. It can be used to prime a service by e.g. creating data on it, or bringing it to a certain state. You can also use the global context in this method; see The global context for details. If there is already a service running, or an existing container image is started instead of creating a new one, this method is not called.

These methods are noop by default. A service is not registered as properly started before its lifecycle methods execute successfully; only then are the dependent services started.

The ping method is particularly useful if you want to avoid the situation described above, where a container starts, but the main process has not completed initializing before any dependent services start. Here is an example for how one would ping the appdb service to make sure the Postgresql database is accepting connections:

import psycopg2

class Database(miniboss.Service):
    # fields same as above

    def ping(self):
        try:
            connection = psycopg2.connect("postgresql://dbuser:dbpwd@localhost:5433/appdb")
            cur = connection.cursor()
            cur.execute('SELECT 1')
        except psycopg2.OperationalError:
            return False
        return True
One thing to pay attention to is that, in the call to psycopg2.connect, we are using localhost:5433 as the host and port, whereas the python-todo environment variable DB_URI has appdb:5432 instead. This is because the ping method is executed on the host computer. The next section explains the details.

Collection events

It is possible to hook into collection change commands using the following hooks. You set a hook by calling the corresponding registration function on the base miniboss module, passing the hook in as the sole argument, e.g. as follows:

import miniboss

def print_services(service_list):
    print("Started ", ' '.join(service_list))

miniboss.on_start_services(print_services)


on_start_services hook is called after the miniboss.start command is executed. The single argument is a list of the names of the services that were successfully started.

on_stop_services hook is called after the miniboss.stop command is executed. The single argument is a list of the services that were stopped.

on_reload_service hook is called after the miniboss.reload command is executed. The single argument is the name of the service that was reloaded.

Ports and hosts

miniboss starts services on an isolated bridge network, mapping no ports by default. The name of this network can be specified with the --network-name argument when starting a group. If it's not specified, the name will be generated from the group name by prefixing it with miniboss-. On the collection network, services can be contacted under the service name as hostname, on the ports they are listening on. The appdb Postgresql service above, for example, can be contacted on the port 5432, the default port on which Postgresql listens. This is the reason the host part of the DB_URI environment variable on the python-todo service is appdb:5432. If you want to reach appdb on the port 5433 from the host system, which would be necessary to implement the ping method as above, you need to make this mapping explicit with the ports field of the service definition. This field accepts a dictionary of integer keys and values. The key is the service container port, and the value is the host port. In the case of appdb, the Postgresql port of the container is mapped to port 5433 on the local machine, in order not to collide with any local Postgresql instances. With this configuration, the appdb database can be accessed at localhost:5433.
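Note that the mapping direction is the reverse of the docker CLI's -p flag, which uses host:container order. As a plain-Python sketch (the helper name is made up for illustration; miniboss itself does this internally via the Docker SDK):

```python
# Illustrative helper, not part of miniboss: translate a miniboss-style
# ports dict ({container_port: host_port}) into docker-CLI -p flags,
# which use the opposite host:container order.
def to_publish_flags(ports):
    return ["-p {}:{}".format(host, container)
            for container, host in ports.items()]

print(to_publish_flags({5432: 5433}))  # ['-p 5433:5432']
```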

The global context

The object miniboss.Context, derived from the standard dict class, can be used to store values that are accessible to other service definitions, especially in the env field. For example, if you create a user in the post_start method of a service, and would like to make the ID of this user available to a dependent service, you can set it on the context with Context['user_id'] = user.id. In the definition of the second service, you can refer to this value in a field with the standard Python keyword formatting syntax, as in the following:

class DependantService(miniboss.Service):
    # other fields
    env = {'USER_ID': '{user_id}'}

You can of course also programmatically access it as Context['user_id'] once a value has been set.
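Since the extrapolation uses standard Python keyword formatting, the step can be sketched with stdlib-only code (this mimics what miniboss does with the env field; the names are purely illustrative, not miniboss internals):

```python
# Sketch of env-field extrapolation using plain str.format, the same
# syntax miniboss accepts in service definitions.
context = {'user_id': 42}
env_template = {'USER_ID': '{user_id}'}

resolved = {name: template.format(**context)
            for name, template in env_template.items()}
print(resolved)  # {'USER_ID': '42'}
```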

When a service collection is started, the generated context is saved in the file .miniboss-context, in order to be used when the same containers are restarted or a specific service is reloaded.

Service definition fields

name: The name of the service. Must be non-empty and unique for one miniboss definition module. The container can be contacted on the network under this name; it must therefore be a valid hostname.

image: Container image of the service. Must be non-empty. You can use a repository URL here; if the image is not locally available, it will be pulled. You are highly advised to specify a tag, even if it's latest, because otherwise miniboss will not be able to identify which container image was used for a service, and will start a new container each time. If the tag of the image is latest, and the build_from directory option is specified, the container image will be built each time the service is started.

entrypoint: Container entrypoint, the executable that is run when the container starts. See Docker documentation for details.

cmd: CMD option for a container. See Docker documentation for details.

user: USER option for a container. See Docker documentation for details.

dependencies: A list of the dependencies of a service by name. If there are any invalid or circular dependencies, an exception will be raised.

env: Environment variables to be injected into the service container, as a dict. The values of this dict can contain extrapolations from the global context; these extrapolations are executed when the service starts.

ports: A mapping of the ports that must be exposed on the running host. Keys are ports local to the container, values are the ports of the running host. See Ports and hosts for more details on networking.

volumes: Directories to be mounted inside the services as volumes, and the mount points to bind them to. The value of volumes can be either a list of strings, in the format "directory:mount_point:mode", or in the dictionary format {directory: {"bind": mount_point, "mode": mode}}. In both cases, mode is optional. See the Using volumes section of Docker Python SDK documentation for details.
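The two accepted volume formats carry the same information; a stdlib-only sketch of the conversion between them (the helper name is hypothetical, and defaulting the optional mode to "rw" is an assumption of this sketch, not documented miniboss behavior):

```python
# Illustrative conversion between the two volume formats:
# "directory:mount_point[:mode]"  ->  {directory: {"bind": ..., "mode": ...}}
def volume_string_to_dict(spec):
    parts = spec.split(':')
    directory, mount_point = parts[0], parts[1]
    mode = parts[2] if len(parts) > 2 else 'rw'  # assumed default
    return {directory: {'bind': mount_point, 'mode': mode}}

print(volume_string_to_dict('/tmp/data:/var/lib/data:ro'))
# {'/tmp/data': {'bind': '/var/lib/data', 'mode': 'ro'}}
```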

always_start_new: Whether to create a new container each time a service is started, or to restart an existing but stopped container. The default value is False, meaning that by default an existing container will be restarted.

stop_signal: Which stop signal Docker should use to stop the container, by name (not by integer value, so don't use values from the signal standard library module here). Default is SIGTERM. Accepted values are SIGINT, SIGTERM, SIGKILL and SIGQUIT.

build_from: The directory from which a service can be reloaded. It should be either absolute, or relative to the main script. Required if you want to be able to reload a service. If this option is specified, and the tag of the image option is latest, the container image will be built each time the service is started.

dockerfile: Dockerfile to use when building a service from the build_from directory. Default is Dockerfile.

Release notes


  • Linting
  • Pull container image if it doesn't exist
  • Integration tests
  • Mounting volumes
  • Pre-start lifetime event


  • Don't fail on start if excluded services depend on each other
  • Destroy service if it cannot be started
  • Log when custom post_start is done
  • Don't start new if int-string env keys don't differ
  • Don't run pre-start if container found
  • Multiple clusters on single host with group id
  • Build container if tag doesn't exist and it has build_from
  • Better pypi readme with release notes


  • Tests for CLI commands
  • Collection lifecycle hooks


  • Removed group name requirement
  • Logging fixes
  • Sample app fixes


  • Entrypoint, cmd and user fields on service
  • Type hints
  • Use tbump for version bumping


  • Corrected docker client library version in dependencies


  •  User attrs properly with types
  •  Add stop-only command
  •  Add start-only command
  •  Making easier to test on the cloud??
  •  Run tests in container (how?)
  •  Exporting environment values for use in shell
  •  Running one-off containers
  •  Configuration object extrapolation
  •  Running tests once system started
  •  Using context values in tests
  •  Dependent test suites and setups

Download Details: 
Author: afroisalreadyinu
Source Code: https://github.com/afroisalreadyinu/miniboss 
License: MIT

#python #testing #docker #containers


Kubernetes: Production-Grade Container Scheduling and Management

Kubernetes (K8s)

Kubernetes, also known as K8s, is an open source system for managing containerized applications across multiple hosts. It provides basic mechanisms for deployment, maintenance, and scaling of applications.

Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.

Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If your company wants to help shape the evolution of technologies that are container-packaged, dynamically scheduled, and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.

To start using K8s

See our documentation on kubernetes.io.

Try our interactive tutorial.

Take a free course on Scalable Microservices with Kubernetes.

To use Kubernetes code as a library in other applications, see the list of published components. Use of the k8s.io/kubernetes module or k8s.io/kubernetes/... packages as libraries is not supported.

To start developing K8s

The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.

If you want to build Kubernetes right away, there are two options:

You have a working Go environment.

mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes
cd kubernetes

You have a working Docker environment.

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make quick-release

For the full story, head over to the developer's documentation.


If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

That said, if you have questions, reach out to us one way or another.

Community Meetings

The Calendar lists all the meetings in the Kubernetes community in a single location.


The User Case Studies website has real-world use cases of organizations across industries that are deploying/migrating to Kubernetes.


The Kubernetes project is governed by a framework of principles, values, policies, and processes that help our community and constituents work towards our shared goals.

The Kubernetes Community is the launching point for learning about how we organize ourselves.

The Kubernetes Steering community repo is used by the Kubernetes Steering Committee, which oversees governance of the Kubernetes project.


The Kubernetes Enhancements repo provides information about Kubernetes releases, as well as feature tracking and backlogs.

Download Details: 
Author: kubernetes
Source Code: https://github.com/kubernetes/kubernetes 
License: Apache-2.0 License
#go #kubernetes #containers

Haylie Conn


Docker Containers, The Latest, Greatest Way to Deploy Applications

From app testing to reducing infrastructure costs and beyond, Docker has many great use cases. But developers should remember that, like any technology, Docker has limitations. Which use cases does Docker support, and when should or shouldn't you use Docker as an alternative to VMs or other application deployment techniques?

#docker #containers 

Misael Stark


Learn How to Communicate Between Docker Containers

Let's see how communication between Docker containers works. Containers are a form of operating-system virtualization. A single container might run anything from a small microservice or software process to a larger application. Inside a container are all the necessary executables, binary code, libraries, and configuration files.
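One common way to wire up that communication is a user-defined network, where containers reach each other by name through Docker's built-in DNS. A minimal Docker Compose sketch (service names and images are illustrative):

```yaml
# docker-compose.yml: both services join the same default network,
# so "web" can resolve "db" by its service name.
services:
  web:
    image: nginx:1.25
    depends_on:
      - db
  db:
    image: redis:7
# From inside "web", the database is reachable at host "db" (e.g. db:6379).
```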

#docker #containers 

HI Python


Aiodi: A Container for Dependency Injection in Python

Python Dependency Injection library

aiodi is a container for dependency injection in Python.


Use the package manager pip to install aiodi.

pip install aiodi



from abc import ABC, abstractmethod
from logging import Logger, getLogger, NOTSET, StreamHandler, Formatter
from os import getenv

from aiodi import Container
from typing import Optional, Union

_CONTAINER: Optional[Container] = None

def get_simple_logger(
        name: Optional[str] = None,
        level: Union[str, int] = NOTSET,
        fmt: str = '[%(asctime)s] - %(name)s - %(levelname)s - %(message)s',
) -> Logger:
    logger = getLogger(name)
    logger.setLevel(level)
    handler = StreamHandler()
    handler.setLevel(level)
    formatter = Formatter(fmt)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger

class GreetTo(ABC):
    @abstractmethod
    def __call__(self, who: str) -> None:
        pass

class GreetToWithPrint(GreetTo):
    def __call__(self, who: str) -> None:
        print('Hello ' + who)

class GreetToWithLogger(GreetTo):
    _logger: Logger

    def __init__(self, logger: Logger) -> None:
        self._logger = logger

    def __call__(self, who: str) -> None:
        self._logger.info('Hello ' + who)

def container() -> Container:
    global _CONTAINER
    if _CONTAINER:
        return _CONTAINER
    di = Container({'env': {
        'name': getenv('APP_NAME', 'aiodi'),
        'log_level': getenv('APP_LEVEL', 'INFO'),
    }})
    di.resolve([
        (
            Logger,
            get_simple_logger,
            {
                'name': di.resolve_parameter(lambda di_: di_.get('env.name', typ=str)),
                'level': di.resolve_parameter(lambda di_: di_.get('env.log_level', typ=str)),
            },
        ),
        (GreetTo, GreetToWithLogger),  # -> (GreetTo, GreetToWithLogger, {})
        GreetToWithPrint,  # -> (GreetToWithPrint, GreetToWithPrint, {})
    ])
    di.set('who', 'World!')
    # ...
    _CONTAINER = di
    return di

def main() -> None:
    di = container()

    di.get(Logger).info('Just simple call get with the type')

    for greet_to in di.get(GreetTo, instance_of=True):
        greet_to(di.get('who', typ=str))

if __name__ == '__main__':
    main()


  • Python >= 3.6


Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

Download Details:
Author: ticdenis
Source Code: https://github.com/ticdenis/python-aiodi
License: MIT License

#python #containers 

Elian Harber


Minimize Go Apps Container Image


The image is one thing you must plan for when you containerize your apps. A large image means more data to transfer between your image repository, CI/CD platform, and deployment server, so building a smaller image saves time. Reducing container image size doesn't have to be difficult, especially with Go apps: a Go program compiles to a single binary, so it doesn't need a runtime environment such as Nginx or Node.

In this article, you will learn how to reduce the size of your Go app's container image using Docker. You can also use another builder, such as Buildah, which is used by Podman. In this case, you will reduce your container image's size using a multi-stage build with a distroless image, UPX, and, specifically for Go apps, the ldflags build flag.
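A minimal sketch of that approach: a multi-stage Dockerfile that builds the binary with stripped symbol and debug tables (`-ldflags "-s -w"`), compresses it with UPX, and copies only the result into a distroless base. Paths, package names, and image tags are illustrative:

```dockerfile
# Stage 1: build a static binary; -ldflags "-s -w" strips symbol and DWARF tables
FROM golang:1.21 AS build
RUN apt-get update && apt-get install -y --no-install-recommends upx-ucl
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app .
RUN upx --best /app   # compress the stripped binary further

# Stage 2: copy only the compressed binary into a distroless base image
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because `CGO_ENABLED=0` produces a statically linked binary, the final image needs nothing beyond the binary itself, which is what makes the distroless base viable.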

#golang #containers 
