Kevon Krajcik

1661373540

Docker Pushrm: A Docker CLI Plugin to Update Container Repo Docs

Docker Push Readme

Update the README of your container repo on Dockerhub, Quay or Harbor with a simple Docker command:

$ ls
README.md
$ docker pushrm my-user/hello-world


About

docker-pushrm is a Docker CLI plugin that adds a new docker pushrm (pronounced "push readme") command to Docker.

It pushes the README file from the current working directory to a container registry server, where it appears as the repo description in the web interface.

It currently supports Dockerhub (cloud), Red Hat Quay (cloud and self-hosted/OpenShift) and Harbor v2 (self-hosted).

For most registry types docker-pushrm uses authentication info from the Docker credentials store - so it "just works" for registry servers that you're already logged into with Docker.

(For some other registry types, you'll need to pass an API key via env var or config file).

Usage example

Let's build a container image, push it to Dockerhub and then also push the README to Dockerhub:

$ ls
Dockerfile    README.md
$ docker login
Username: my-user
Password: ********
Login Succeeded
$ docker build -t my-user/hello-world .
$ docker push my-user/hello-world
$ docker pushrm my-user/hello-world

When we now browse to the repo in the Dockerhub web interface, we should find the repo's README updated with the contents of the local README file.

The same works for Harbor version 2 registry servers:

docker pushrm --provider harbor2 demo.goharbor.io/myproject/hello-world

And also for Quay/OpenShift cloud and self-hosted registry servers:

docker pushrm --provider quay quay.io/my-user/hello-world

For Dockerhub it's also possible to set the repo's short description with -s "some description".

If you want different content to appear in the README on the container registry than in the git repo (for GitHub/GitLab), you can create a dedicated README-containers.md, which takes precedence. It's also possible to specify a path to a README file with --file <path>.
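The lookup order described above can be sketched as a small shell function (pick_readme is a hypothetical helper for illustration, not part of docker-pushrm itself): an explicit --file path wins, then README-containers.md, then README.md.

```shell
# Sketch of the documented README precedence (hypothetical helper):
# explicit --file path > README-containers.md > README.md
pick_readme() {
  if [ -n "$1" ]; then
    printf '%s\n' "$1"             # explicit --file <path>
  elif [ -f README-containers.md ]; then
    printf 'README-containers.md\n'
  else
    printf 'README.md\n'
  fi
}
```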

Installation

  • Make sure Docker or Docker Desktop is installed.
  • Download docker-pushrm for your platform from the release page.
  • Copy it to:
    • Windows: c:\Users\<your-username>\.docker\cli-plugins\docker-pushrm.exe
    • Mac + Linux: $HOME/.docker/cli-plugins/docker-pushrm
  • On Mac/Linux, make it executable: chmod +x $HOME/.docker/cli-plugins/docker-pushrm

Now you should be able to run docker pushrm --help.

Running docker-pushrm as a container

There's also a Docker/OCI container image of this tool. See separate docs for how to use it. This is mainly intended for use in CI workflows.

Use with github actions

This tool is also available as a github action here.

Use with GitLab CI/CD

Here's an example for a .gitlab-ci.yml that uses the docker-pushrm container image. (DOCKER_USER and DOCKER_PASS need to be set as project or group variables):

stages:
  - release

pushrm:
  stage: release
  image:
    name: chko/docker-pushrm
    entrypoint: ["/bin/sh", "-c", "/docker-pushrm"]
  variables:
    DOCKER_USER: $DOCKER_USER
    DOCKER_PASS: $DOCKER_PASS
    PUSHRM_SHORT: My short description
    PUSHRM_TARGET: docker.io/$DOCKER_USER/my-repo
    PUSHRM_DEBUG: 1
    PUSHRM_FILE: $CI_PROJECT_DIR/README.md
  script: "/bin/true"

(Note: The above entrypoint/script setup is a workaround for a GitLab limitation. For the same reason the docker-pushrm container images include busybox).

How to log in to container registries

Log in to Dockerhub registry

docker login

Both a password and a Personal Access Token (PAT) should work. When using a PAT, make sure it has sufficient privileges (admin scope).

Log in to Harbor v2 registry

docker login <servername>

Example:

docker login demo.goharbor.io

Log in to Quay registry

If you want to be able to push containers, you need to log in as usual:

  • for Quay cloud: docker login quay.io
  • for self-hosted Quay server or OpenShift: docker login <servername> (example: docker login my-server.com)

In addition, to use docker-pushrm you need to set up an API key:

First, log into the Quay web interface and create an API key:

  • If you don't have an organization, create one (your repos don't need to be under the organization's namespace; this is just to unlock the "applications" settings page).
  • Navigate to the org and open the applications tab.
  • Create a new app and give it a name.
  • Click on the app name and open the generate token tab.
  • Create a token with permission "Read/Write to any accessible repositories".
  • After confirming, you should see the token secret. Write it down in a safe place.

(Refer to the Quay docs for more info)

Then, make the API key available to docker-pushrm. There are two options for that: Either set an environment variable (recommended for CI) or add it to the Docker config file (recommended for Desktop use). (If both are present, the env var takes precedence).

env var for Quay API key

Set an environment variable DOCKER_APIKEY=<apikey> or APIKEY__<SERVERNAME>_<DOMAIN>=<apikey>.

Example for servername quay.io:

export APIKEY__QUAY_IO=my-api-key
docker pushrm quay.io/my-user/my-repo
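Judging from the quay.io example, the variable name seems to be derived by uppercasing the server name and turning dots into underscores. A quick shell sketch of that assumed mapping (inferred from the example above, not an official rule; check the docker-pushrm docs for other servers):

```shell
# Derive the assumed APIKEY__ env var name from a registry server name:
# uppercase letters, dots become underscores (quay.io -> APIKEY__QUAY_IO).
server=quay.io
var="APIKEY__$(printf '%s' "$server" | tr 'a-z.' 'A-Z_')"
echo "$var"   # APIKEY__QUAY_IO
```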

configure Quay API key in Docker config file

In the Docker config file (default: $HOME/.docker/config.json), add a JSON key plugins.docker-pushrm.apikey_<servername> with the API key as string value.

Example for servername quay.io:

{
  ...,
  "plugins" : {
    "docker-pushrm" : {
      "apikey_quay.io" : "my-api-key"
    }
  },
  ...
}

Log in with environment variables (for CI)

Alternatively credentials can be set as environment variables. Environment variables take precedence over the Docker credentials store. Environment variables can be specified with or without a server name. The variant without a server name takes precedence.

This is intended for running docker-pushrm as a standalone tool in a CI environment (no full Docker installation needed).

  • DOCKER_USER and DOCKER_PASS
  • DOCKER_USER__<SERVER>_<DOMAIN> and DOCKER_PASS__<SERVER>_<DOMAIN> (example for server docker.io: DOCKER_USER__DOCKER_IO=my-user and DOCKER_PASS__DOCKER_IO=my-password)

The provider 'quay' needs an additional env var for the API key in form of APIKEY__<SERVERNAME>_<DOMAIN>=<apikey>.

Example:

DOCKER_USER=my-user DOCKER_PASS=mypass docker-pushrm my-user/my-repo

What if I use [podman, img, k3c, buildah, ...] instead of Docker?

You can still use docker-pushrm as a standalone executable.

The only obstacle is that you need to provide it with credentials in the Docker style.

The easiest way to do that is to set up a minimal Docker config file with the registry server logins that you need. (Alternatively, credentials can be passed in environment variables.)

You can either create this config file on a computer with Docker installed (by running docker login and then copying the $HOME/.docker/config.json file), or you can set it up manually. Here's an example:

{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "xxx"
        },
        "https://demo.goharbor.io": {
            "auth": "xxx"
        }
    }
}

The auth value is the base64 encoding of <user>:<passwd> (e.g. myuser:mypasswd).
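The value can be generated on the command line. Use printf rather than echo so that no trailing newline ends up in the encoding:

```shell
# base64 of "<user>:<passwd>" for the "auth" field in config.json.
# printf avoids encoding a trailing newline.
printf '%s' 'myuser:mypasswd' | base64
# -> bXl1c2VyOm15cGFzc3dk
```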

It's also possible to use Docker credential helpers on systems that don't have Docker installed to avoid clear text passwords in the config file. The credential helper needs to be configured in the Docker config file and the credential helper executable needs to be in the PATH. (Check the Docker docs for details).

Can you add support for registry [XY...]?

Please open an issue.

Installation for all users

To install the plugin for all users of a system, copy it to the following path (instead of the user home dir). This requires admin/root privileges.

  • Linux: depending on the distro, either /usr/lib/docker/cli-plugins/docker-pushrm or /usr/libexec/docker/cli-plugins/docker-pushrm
  • Windows: %ProgramData%\Docker\cli-plugins\docker-pushrm.exe
  • Mac: /Applications/Docker.app/Contents/Resources/cli-plugins/docker-pushrm

On Mac/Linux make it executable and readable for all users: chmod a+rx <path>/docker-pushrm

Using env vars instead of cmdline params

All cmdline parameters can also be set as env vars with prefix PUSHRM_.

Cmdline parameters take precedence over env vars. (Except for login env vars, which take precedence over the local credentials store).
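Judging from the examples in the GitLab section (PUSHRM_SHORT, PUSHRM_FILE, PUSHRM_TARGET), the variable name appears to be the long flag name uppercased, with dashes as underscores, behind the PUSHRM_ prefix. A sketch of that assumed mapping:

```shell
# Assumed mapping from a long flag name to its PUSHRM_ env var
# (inferred from PUSHRM_SHORT / PUSHRM_FILE in the examples above;
# check `docker pushrm --help` for the authoritative list).
flag=short
printf 'PUSHRM_%s\n' "$(printf '%s' "$flag" | tr 'a-z-' 'A-Z_')"
# -> PUSHRM_SHORT
```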

This is mainly intended for running this tool in a container in 12-factor-app style.

A list of all supported env vars is here.

Limitations

Problem with Harbor2 OpenID connect logins

This tool currently doesn't work for Harbor2 users that authenticate through a 3rd-party OpenID Connect (OIDC) provider (like Auth0, Keycloak, Okta, dex, etc.). (Local users and LDAP users are not affected and should work.) This limitation is under investigation; contributions are welcome!


All trademarks, logos and website designs belong to their respective owners.


Download Details:

Author: christian-korneck
Source code: https://github.com/christian-korneck/docker-pushrm
License: MIT license
#docker 


