Error pulling image configuration when running ./get-docker-images.sh

When running: [email protected]:/vagrant/bin$ ./get-docker-images.sh

Pulling hyperledger/fabric-couchdb:amd64-0.4.13
Error response from daemon: Get https://registry-1.docker.io/v2/hyperledger/fabric-couchdb/manifests/amd64-0.4.13: read tcp 10.0.2.15:47240->54.175.43.85:443: read: connection reset by peer


I am using a VirtualBox VM with Ubuntu 16.04 on Windows 8.

Does anyone have an idea how to solve this problem?

Content of get-docker-images.sh:

#!/bin/bash

# set the default Docker namespace and tag

DOCKER_NS=hyperledger
ARCH=amd64
VERSION=1.3.0
BASE_DOCKER_TAG=amd64-0.4.13

# set of Hyperledger Fabric images

FABRIC_IMAGES=(fabric-peer fabric-orderer fabric-ccenv fabric-tools)

for image in "${FABRIC_IMAGES[@]}"; do
  echo "Pulling ${DOCKER_NS}/${image}:${ARCH}-${VERSION}"
  docker pull "${DOCKER_NS}/${image}:${ARCH}-${VERSION}"
done

THIRDPARTY_IMAGES=(fabric-kafka fabric-zookeeper fabric-couchdb fabric-baseos)

for image in "${THIRDPARTY_IMAGES[@]}"; do
  echo "Pulling ${DOCKER_NS}/${image}:${BASE_DOCKER_TAG}"
  docker pull "${DOCKER_NS}/${image}:${BASE_DOCKER_TAG}"
done
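Since the "connection reset by peer" failure in the question is a transient network error between the VM and the registry, one common workaround is to wrap each pull in a retry loop. Here is a minimal sketch; the retry helper and its parameters are illustrative additions, not part of the original script:

```shell
# retry: run a command up to <max> times, waiting <delay> seconds between attempts.
# Usage: retry <max_attempts> <delay_seconds> <command...>
retry() {
  local max=$1 delay=$2; shift 2
  local n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "Command failed after $n attempts: $*" >&2
      return 1
    fi
    echo "Attempt $n failed, retrying in ${delay}s..." >&2
    n=$((n + 1))
    sleep "$delay"
  done
}

# Example, using the image from the question:
# retry 5 10 docker pull hyperledger/fabric-couchdb:amd64-0.4.13
```

In the script above, each `docker pull` line could then be replaced by `retry 5 10 docker pull ...`.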

The executed command and its output:

[email protected]:/vagrant/bin$ ./get-docker-images.sh
Pulling hyperledger/fabric-peer:amd64-1.3.0
amd64-1.3.0: Pulling from hyperledger/fabric-peer
Digest: sha256:c521647ccedf6e02a737e20ee66d6957293c8d85c2f272bf7b62fae1e2be81a5
Status: Image is up to date for hyperledger/fabric-peer:amd64-1.3.0
Pulling hyperledger/fabric-orderer:amd64-1.3.0
amd64-1.3.0: Pulling from hyperledger/fabric-orderer
Digest: sha256:510e0baa4d5df084f7e1de8072f2be6f0db766d668a8932b3eef19c3e9d65399
Status: Image is up to date for hyperledger/fabric-orderer:amd64-1.3.0
Pulling hyperledger/fabric-ccenv:amd64-1.3.0
amd64-1.3.0: Pulling from hyperledger/fabric-ccenv
Digest: sha256:ea988663d2af2e392d686524f2d7a7ab70ee4ee783c50792b5bc9745450d776d
Status: Image is up to date for hyperledger/fabric-ccenv:amd64-1.3.0
Pulling hyperledger/fabric-tools:amd64-1.3.0
amd64-1.3.0: Pulling from hyperledger/fabric-tools
Digest: sha256:638a53bba0582adf71c08ba3658b5d05d79f49c44f38344cca7ede10dbab3290
Status: Image is up to date for hyperledger/fabric-tools:amd64-1.3.0
Pulling hyperledger/fabric-kafka:amd64-0.4.13
amd64-0.4.13: Pulling from hyperledger/fabric-kafka
Digest: sha256:892f3ce913ea826d842bbe7e1babecf9194e873168d563c23668866d2fd29600
Status: Image is up to date for hyperledger/fabric-kafka:amd64-0.4.13
Pulling hyperledger/fabric-zookeeper:amd64-0.4.13
amd64-0.4.13: Pulling from hyperledger/fabric-zookeeper
Digest: sha256:f2c0d4a4d73614e34e0161929d7571a72bc379034c704eb170c80b7acde97d92
Status: Image is up to date for hyperledger/fabric-zookeeper:amd64-0.4.13
Pulling hyperledger/fabric-couchdb:amd64-0.4.13
Error response from daemon: Get https://registry-1.docker.io/v2/hyperledger/fabric-couchdb/manifests/amd64-0.4.13: read tcp 10.0.2.15:47024->54.175.43.85:443: read: connection reset by peer


How to Install Docker on Windows 10 Home?


If you’ve ever tried to install Docker for Windows, you’ve probably come to realize that the installer won’t run on Windows 10 Home. Only Windows 10 Pro, Enterprise, or Education support Docker for Windows. Upgrading your Windows license is pricey, and also pointless, since you can still run Linux containers on Windows without relying on Hyper-V, a requirement for Docker for Windows.


If you plan on running Windows Containers, you’ll need a specific version and build of Windows Server. Check out the Windows container version compatibility matrix for details.

99.999% of the time, you only need a Linux container, since it supports software built using open-source and .NET technologies. In addition, Linux containers can run on any distro and on popular CPU architectures, including x86_64, ARM, and IBM architectures.

In this tutorial, I’ll show you how to quickly setup a Linux VM on Windows Home running Docker Engine with the help of Docker Machine. Here’s a list of software you’ll need to build and run Docker containers:

  • Docker Machine: a CLI tool for installing Docker Engine on virtual hosts
  • Docker Engine: runs on top of the Linux Kernel; used for building and running containers
  • Docker Client: a CLI tool for issuing commands to Docker Engine via REST API
  • Docker Compose: a tool for defining and running multi-container applications

I’ll show how to perform the installation in the following environments:

  1. On Windows using Git Bash
  2. On Windows Subsystem for Linux 2 (running Ubuntu 18.04)

First, allow me to explain how the Docker installation will work on Windows.

How it Works

As you probably know, Docker requires a Linux kernel to run Linux containers. For this to work on Windows, you’ll need to set up a Linux virtual machine to run as a guest on Windows 10 Home.

Setting up the Linux VM can be done manually, but the easiest way is to let Docker Machine do this work for you with a single command. This Docker Linux VM can run either on your local system or on a remote server. The Docker client talks to the Docker Engine in the VM over a TLS-secured TCP connection (Docker Machine itself uses SSH to provision and manage the VM). Whenever you create and run images, the actual process happens within the VM, not on your host (Windows).

Let’s dive into the next section to set up the environment needed to install Docker.

Initial Setup

You may or may not have the following applications installed on your system. I’ll assume you don’t. If you do, make sure to upgrade to the latest versions. I’m also assuming you’re running the latest stable version of Windows. At the time of writing, I’m using Windows 10 Home version 1903. Let’s start installing the following:

  1. Install Git Bash for Windows. This will be our primary terminal for running Docker commands.

  2. Install Chocolatey, a package manager for Windows. It will make the work of installing the rest of the programs easier.

  3. Install VirtualBox and its extension pack. Alternatively, if you have finished installing Chocolatey, you can simply execute this command inside an elevated PowerShell terminal:

    C:\ choco install virtualbox
    
    
  4. If you’d like to try running Docker inside the WSL2 environment, you’ll need to set up WSL2 first. You can follow this tutorial for step-by-step instructions.

Docker Engine Setup

Installing Docker Engine is quite simple. First we need to install Docker Machine.

  1. Install Docker Machine by following instructions on this page. Alternatively, you can execute this command inside an elevated PowerShell terminal:

    C:\ choco install docker-machine
    
    
  2. Using Git Bash terminal, use Docker Machine to install Docker Engine. This will download a Linux image containing the Docker Engine and have it run as a VM using VirtualBox. Simply execute the following command:

    $ docker-machine create --driver virtualbox default
    
    
  3. Next, we need to configure which ports are exposed when running Docker containers. Doing this will allow us to access our applications via localhost:<port>. Feel free to add as many as you want. To do this, launch Oracle VM VirtualBox from your start menu, select the default VM in the side menu, then click on Settings > Network > Adapter 1 > Port Forwarding. You should find the ssh forwarding port already set up for you; add a rule for each application port you plan to expose (for example, 1234 and 1235, which are used later in this tutorial).
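If you prefer the command line to the GUI, the same port-forwarding rules can be added with VBoxManage. The rule names and port numbers below are just examples; use modifyvm while the VM is powered off, or controlvm while it is running:

```shell
# Add NAT port-forwarding rules to the "default" VM while it is powered off.
# Rule format: "<name>,<proto>,<host ip>,<host port>,<guest ip>,<guest port>"
VBoxManage modifyvm default --natpf1 "web,tcp,127.0.0.1,1234,,1234"
VBoxManage modifyvm default --natpf1 "hmr,tcp,127.0.0.1,1235,,1235"

# The same can be done on a running VM:
# VBoxManage controlvm default natpf1 "web,tcp,127.0.0.1,1234,,1234"

# Verify the rules
VBoxManage showvminfo default | grep -i "rule"
```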

  4. Next, we need to allow Docker to mount volumes located on your hard drive. By default, you can only mount from the C:\Users directory. To add a different path, go to the Oracle VM VirtualBox GUI, select the default VM, and go to Settings > Shared Folders. Add a new share by clicking the plus symbol and entering the folder path and name. If there’s an option called Permanent, enable it.

  5. If VirtualBox reports an invalid settings warning, simply increase Video Memory under the Display tab in the settings. Video memory is not important in this case, as we’ll run the VM in headless mode.

  6. To start the Linux VM, simply execute this command in Git Bash. The Linux VM will launch. Give it some time for the boot process to complete. It shouldn’t take more than a minute. You’ll need to do this every time you boot your host OS:

    $ docker-machine start default
    
    
  7. Next, we need to set up our Docker environment variables. This is to allow the Docker client and Docker Compose to communicate with the Docker Engine running in the Linux VM, default. You can do this by executing the commands in Git Bash:

    # Print out docker machine instance settings
    $ docker-machine env default
    
    # Set environment variables using Linux 'export' command
    $ eval $(docker-machine env default --shell linux)
    
    

    You’ll need to set the environment variables every time you start a new Git Bash terminal. If you’d like to avoid this, you can copy eval output and save it in your .bashrc file. It should look something like this:

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="C:\Users\Michael Wanyoike\.docker\machine\machines\default"
    export DOCKER_MACHINE_NAME="default"
    export COMPOSE_CONVERT_WINDOWS_PATHS="true"
    
    

    IMPORTANT: for the DOCKER_CERT_PATH, you’ll need to change the Linux file path to a Windows path format. Also note that the IP address assigned to the VM may change each time you start it, in which case the saved value will be stale.
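To avoid a stale hardcoded IP address altogether, you can have .bashrc re-derive the variables on every new terminal instead of pinning them. This is a sketch; it assumes the machine is named default and silently does nothing when the VM is not running:

```shell
# In ~/.bashrc: re-evaluate the Docker environment at shell startup so a
# changed VM IP address is always picked up.
if command -v docker-machine >/dev/null 2>&1; then
  eval "$(docker-machine env default --shell linux 2>/dev/null)"
fi
```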

In the next section, we’ll install Docker Client and Docker Compose.

Docker Tools Setup

For this section, you’ll need to install the following tools using PowerShell in admin mode. These tools are packaged inside the Docker for Windows installer. Since the installer refuses to run on Windows 10 Home, we’ll install these programs individually using Chocolatey:

C:\ choco install docker-cli
C:\ choco install docker-compose

Once the installation process is complete, you can switch back to Git Bash terminal. You can continue using PowerShell, but I prefer Linux syntax to execute commands. Let’s execute the following commands to ensure Docker is running:

# Start Docker VM
$ docker-machine start default

# Confirm Docker VM is running
$ docker-machine ls

# Configure the Docker environment to use the Docker VM
$ eval $(docker-machine env default --shell linux)

# Confirm Docker is connected. Should output Docker VM specs
$ docker info

# Run hello-world docker image. Should output "Hello from Docker"
$ docker run hello-world

If all the above commands run successfully, it means you’ve successfully installed Docker. If you want to try out a more ambitious example, I have a small Node.js application that I’ve configured to run on Docker containers. First, you’ll need to install GNU Make using PowerShell with Admin privileges:

C:\ choco install make

Running this Node.js example will ensure you have no problems with exposed ports and with mounting volumes on the Windows filesystem. First, navigate to a folder that you’ve already mounted in the VirtualBox settings, then execute the following commands:

$ git clone git@github.com:brandiqa/docker-node.git

$ cd docker-node/website

$ make

After the last command, you should see output similar to this:

docker volume create nodemodules
nodemodules
docker-compose -f docker-compose.builder.yml run --rm install
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

audited 9731 packages in 21.405s

docker-compose up
Starting website_dev_1 ... done
Attaching to website_dev_1
dev_1  |
dev_1  | > [email protected] start /usr/src/app
dev_1  | > parcel src/index.html --hmr-port 1235
dev_1  |
dev_1  | Server running at http://localhost:1234

Getting the above output means that volume mounting occurred successfully. Open localhost:1234 to confirm that the website can be accessed; this confirms that you have properly configured the ports. You can edit the source code, for example by changing the h1 title in App.jsx. As soon as you save the file, the browser page should refresh automatically. This means hot module reloading works from a Docker container.

I would like to bring your attention to the docker-compose.yml file in use. Getting hot module reloading to work from a Docker container on Windows requires the following:

  1. When using Parcel, specify the HMR port in your package.json start script:

    parcel src/index.html --hmr-port 1235

  2. In the VM’s Port Forwarding rules, make sure these ports are exposed to the host system:

    • 1234
    • 1235
  3. inotify doesn’t work on vboxsf filesystems, so file changes can’t be detected. The workaround is to set polling for Chokidar via environment variables in docker-compose.yml. Here’s the full file so that you can see how it’s set:

    version: '3'
    services:
      dev:
        image: node:10-jessie-slim
        volumes:
          - nodemodules:/usr/src/app/node_modules
          - ./:/usr/src/app
        working_dir: /usr/src/app
        command: npm start
        ports:
          - 1234:1234
          - 1235:1235
        environment:
          - CHOKIDAR_USEPOLLING=1
    volumes:
      nodemodules:
        external: true
    
    

Now that we have a fully working implementation of Docker on Windows 10 Home, let’s set it up on WSL2 for those who are interested.

Windows Subsystem for Linux 2

Installing Docker on WSL2 is not as straightforward as it seems. Unfortunately, the latest version of Docker Engine can’t run on WSL2. However, there’s an older version, docker-ce=17.09.0~ce-0~ubuntu, that’s capable of running well in WSL2. I won’t be covering that here. Instead, I’ll show you how to access Docker Engine running in the VM we set up earlier from a WSL terminal.

All we have to do is install the Docker client and Docker Compose. Assuming you’re running a WSL2 Ubuntu terminal, execute the following:

  1. Install Docker using the official instructions:

    # Update the apt package list.
    sudo apt-get update -y
    
    # Install Docker's package dependencies.
    sudo apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common
    
    # Download and add Docker's official public PGP key.
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
    # Verify the fingerprint.
    sudo apt-key fingerprint 0EBFCD88
    
    # Add the `stable` channel's Docker upstream repository.
    #
    # If you want to live on the edge, you can change "stable" below to "test" or
    # "nightly". I highly recommend sticking with stable!
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    
    # Update the apt package list (for the new apt repo).
    sudo apt-get update -y
    
    # Install the latest version of Docker CE.
    sudo apt-get install -y docker-ce
    
    # Allow your user to access the Docker CLI without needing root access.
    sudo usermod -aG docker $USER
    
    
  2. Install Docker Compose using this official guide. An alternative is to use PIP, which will simply install the latest stable version:

    # Install Python and PIP.
    sudo apt-get install -y python python-pip
    
    # Install Docker Compose into your user's home directory.
    pip install --user docker-compose
    
    
  3. Fix the Docker mounting issue in WSL terminal by inserting this content in /etc/wsl.conf. Create the file if it doesn’t exist:

    [automount]
    root = /
    options = "metadata"
    
    

    You’ll need to restart your machine for this setting to take effect.

  4. Assuming the Linux Docker VM is running, you’ll need to connect the Docker tools in the WSL environment to it. If you can access docker-machine from the Ubuntu terminal, run the eval command. Otherwise, you can insert the following Docker variables in your .bashrc file. Here is an example of mine:

    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="/c/Users/Michael Wanyoike/.docker/machine/machines/default"
    export DOCKER_MACHINE_NAME="default"
    export COMPOSE_CONVERT_WINDOWS_PATHS="true"
    
    

    You’ll need to restart your terminal or execute source ~/.bashrc for the settings to take effect. Running Docker commands should work properly in WSL without a hitch.

Switching to Linux

We’re now coming to the end of this article. Setting up Docker on Windows 10 Home is a bit of a lengthy process. If you plan to reformat your machine, you’ll have to go through the same process again. It’s worse if your job is to install Docker on multiple machines running Windows 10 Home.

A simpler solution is to switch to Linux for development. You can create a partition and set up dual booting, or use VirtualBox to install and run a full Linux distro inside Windows. Check out the popular distros to see which one you’d like to try. I use Linux Lite because it’s lightweight and based on Ubuntu. With VirtualBox, you can try out as many distros as you wish.

If you’re using a distro based on Ubuntu, you can install Docker easily with these commands:

# Install https://snapcraft.io/ package manager
sudo apt install snapd

# Install Docker Engine, Docker Client and Docker Compose
sudo snap install docker

# Run Docker commands without sudo
sudo usermod -aG docker $USER

You’ll need to log out and then log in for the last command to take effect. After that, you can run any Docker command without issue. You don’t need to worry about issues with mounting or ports. Docker Engine runs as a service in Linux, which by default starts automatically. No need for provisioning a Docker VM. Everything works out of the box without relying on a hack.

Summary

I hope you’ve had smooth sailing installing and running Docker on Windows 10 Home. I believe this technique should work on older versions such as Windows 7. In case you run into a problem, just go through the instructions to see if you missed something. Do note, however, that I haven’t covered every Docker feature. You may encounter a bug or an unsupported feature that requires a workaround, or may have no solution at all. If that’s the case, I’d recommend you just switch to Linux if you want a smoother development experience using Docker.

WordPress in Docker. Part 1: Dockerization


This entry-level guide will tell you why and how to Dockerize your WordPress projects.


Docker usage in Blockchain project


Docker is the single most important tool when developing blockchain applications.


Why?

When you are developing blockchain applications,

1. you are creating a distributed application, locally

2. you need automation to reduce the time spent on error-prone steps involved in deploying and testing

3. you want to test your application with real users as soon as possible!

These factors, and many more, make Docker an invaluable tool in your toolkit. Not only is Docker a tool to ease your development, it also enables and encourages a different way of working — a way of working which lowers technical debt and increases agility (the reaction speed to changing environments).

What is Docker?

Docker is a container platform, much like the containers found on container ships, which enables the creation, shipment, and deployment of small, self-contained software components. These components can then be combined to create a service offering (SaaS), a web or mobile application (app), or a distributed blockchain application (dApp).

The Docker Engine (Community Edition, or CE) is free to use and runs on nearly every platform. Installing it is easy: just download and run the installer found on the Docker docs page. Once it is installed, creating and running Docker containers is just a matter of entering ‘docker run <image>’ from the command line, where <image> is the name of an image published on Docker Hub. A private image can also be run by prepending the image name with the name of the private or local registry.

How can I create my own Docker Image?

After installing Docker and running the ‘hello world’ example, it is time to create your first image and run it. Before you alt-tab out of here: this is easier than it sounds! All you need is a single text file called ‘Dockerfile’. These are the steps to follow:

  1. Create a new folder, called ‘my-first-docker-image’ for example.
  2. Create a file in this folder, called ‘Dockerfile’ (exact naming, no extension)
  3. Open this file, and enter the following 2 lines of text:
FROM python:2
CMD [ "python","-c", "print '\\n'.join(\"%i Byte = %i Bit = largest number: %i\" % (j, j*8, 256**j-1) for j in (1 << i for i in xrange(8)))" ]

4. Save & close the file, then execute docker build -t python-test .

5. After a while (some base images will be downloaded), your image is built and ready to use! Enter the following command to get it up and running: docker run -it --rm python-test

The output should be:

1 Byte = 8 Bit = largest number: 255
2 Byte = 16 Bit = largest number: 65535
4 Byte = 32 Bit = largest number: 4294967295
(...)

Even without any version of Python installed on your local machine (this command does not install Python on the host either), the script is executed and its result shown in your terminal.

Some explanation of the commands used in these steps:

docker = The keyword to execute docker-related commands

docker build = Builds new docker images. You can give them a name (python-test in our example) with the -t option and it requires a path to the Dockerfile ( . in our example)

docker run = Runs docker images. You can use -it to attach an (interactive) terminal to the docker container and --rm to automatically remove the container when it finishes. An identifier for the image to run is required (again python-test in our example).

How do I speed up blockchain application development using Docker?

Now that we have Docker installed and can download, build, and run Docker containers, we are ready for the next step: speeding up blockchain application development using Docker.

First up, it is important to see that there are 3 distinct components that interact in a basic blockchain application, namely:

  • A web interface (we’ll use Vue from https://vuejs.org/)
  • Smart contracts + tests and deployment (we’ll use the Truffle framework from http://truffleframework.com/)
  • An Ethereum node, a test-node for now (we’ll use ganache-cli, also from the Truffle framework)

These 3 components are a perfect fit for independent Docker containers, so let’s get started!

Step 1: NPM init

First, create a new folder for this project. I’ve called it minimal-blockchain-docker, but you’re free to be more creative of course.

Then run npm init in this folder and just go with all the default answers.

Step 2: Creating the UI using Vue

Create a services folder and navigate to it on the command line. Install the Vue CLI (https://github.com/vuejs/vue-cli) if you haven’t done so yet, and initialise the web interface:

npm install -g @vue/cli
# or
yarn global add @vue/cli

vue create ui

The output should look like the following snippet:

(...)
Successfully created project ui.
Get started with the following commands:
$ cd ui
$ yarn serve

You can do this if you want to check that the standard Vue page is hosted correctly and works out of the box. (It should!)

We now have a working UI component. Yay!

Step 3: Create some smart contracts

Let’s get working on the smart contracts now; they’re the heart of our blockchain application!

First, go back to the services folder, and create a smart-contracts subfolder in which we init a basic Truffle project (after installing Truffle, if we haven’t done so yet):

npm install -g truffle

mkdir smart-contracts
cd smart-contracts
truffle unbox metacoin

There we go, we now have a boilerplate smart contract in solidity, ready to compile and run!

We can already compile the contract by executing truffle compile. At the time of writing, this gives some deprecation warnings on the boilerplate code, but it does compile (which is good!).

To actually run the code, we also need a testing environment to run this on… so let’s create one!

Step 4: Install and run Ganache-CLI

This one is a bit different from the other two components: we are just going to run it out of the box (kinda).

What is Ganache-CLI? (https://github.com/trufflesuite/ganache-cli) It’s a test framework on which we can deploy our smart contracts, and which we can use to test our blockchain application. Installing and running it is straightforward:

npm install -g ganache-cli
ganache-cli

So, once this is running, you can then:

  • “Migrate” the smart contracts, and,
  • Start the UI

However, if you change a smart contract, you need to restart ganache-cli, recompile and migrate the Solidity contracts, and restart the UI (if needed). That’s a hassle! Also, if you host this software somewhere or want to run it on a different machine for any reason, you have to go through all these steps again.

We can do better than this!

Step 5: Putting it all together in a single docker-compose file

Docker-compose (https://docs.docker.com/compose/) is a tool for running multi-container Docker applications.

With each component “Dockerized”, we can now put them together in a docker-compose.yml file. In docker-compose files, one defines the different services, their dependencies, and their interactions. For the previously mentioned Docker images, our docker-compose.yml file looks like this:

version: '3'

services:
 # gateway/reverse proxy
 nginx:
  build: ./services/nginx
  restart: always
  depends_on:
   - api
   - ui
   - ganache
  volumes:
   - ./logs:/var/log/nginx
  ports:
   # proxy api + ui
   - "80:9000"
   # proxy ethereum node so you can connect with metamask from browser
   - "8545:9001"

 # starts webpack watch server with hot reload for ui (vue) code
 ui:
  build:
   context: ./services/ui
   dockerfile: Dockerfile.development
  restart: always
  env_file:
   - ./services/ui/.env
  volumes:
   - ./services/ui/src:/app/ui/src/
   - ./logs:/logs

 # api, handles browser requests initiated from ui api:
 api:
  build:
   context: ./services/api
   dockerfile: Dockerfile.development
  restart: always
  env_file:
   - ./services/api/.env
  depends_on:
   - mongo
  volumes:
   - ./services/api/src:/app/api/src/
   - ./logs:/logs

 # smart contracts source, tests and deployment code
 smart-contracts:
  build:
   context: ./services/smart-contracts
   dockerfile: Dockerfile.development
  env_file:
   - ./services/smart-contracts/.env
  depends_on:
   - ganache
  volumes:
   # mount the output contract build files into a host folder
   - ./services/smart-contracts/src/build:/app/smart-contracts/build/
   # mount the output test coverage report folder into a host folder
   - ./services/smart-contracts/coverage-report:/app/smart-contracts/coverage/
   - ./logs:/logs

 # ganache-cli ethereum node
 ganache:
  image: trufflesuite/ganache-cli
  command: "--seed abcd --defaultBalanceEther 100000000"

 # mongodb
 mongo:
  build: ./services/mongodb
  restart: always
  ports:
    # allow access from (only!) localhost to mongo port 27017 so you can use
   # MongoHub or some other app to connect to the mongodb and view its contents
   - "127.0.0.1:27017:27017"
  volumes:
   - ./data/mongo:/data/db
   - ./logs:/var/log/mongodb

Tip: this file is also accessible on our GitHub page as docker-compose.yml

Building and starting these Docker images is as simple as running docker-compose up. Try it!
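The compose file above references a Dockerfile.development for the ui, api, and smart-contracts services without showing one. As a rough illustration of what such a file might contain for the ui service (entirely an assumption; the real files live in the repository):

```dockerfile
# Hypothetical services/ui/Dockerfile.development
FROM node:10

WORKDIR /app/ui

# Install dependencies first so they are cached between builds
COPY package.json yarn.lock ./
RUN yarn install

# The src folder is mounted as a volume by docker-compose, so only the
# dev server needs to be started here
COPY . .
CMD ["yarn", "serve"]
```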

Tip: to hide all the output, you can use the -d flag, after which you can check the separate logs with docker logs xxx where xxx is the name of the instance.

This fixes us having to start all the containers ourselves, but we still need a way to automate the building and deployment of the Solidity smart contracts. For this, we use npm scripts:

(...)
"start": "npm run deploy:smart-contracts && docker-compose up --build nginx api ui ganache mongo",
(...)

Tip: All of the scripts are in the package.json file in our Github repository.

The boilerplate can now be started with a single command: npm run start. Stopping it is just as simple with npm run stop. (yarn start and yarn stop also work.)
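For reference, the scripts section wiring this up might look something like the following. The "start" line is taken from the snippet above; the other two entries are sketches of what the real scripts in the repository's package.json could contain:

```json
{
  "scripts": {
    "deploy:smart-contracts": "docker-compose up -d ganache && docker-compose run --rm smart-contracts truffle migrate --reset",
    "start": "npm run deploy:smart-contracts && docker-compose up --build nginx api ui ganache mongo",
    "stop": "docker-compose down"
  }
}
```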

All changes made to the code will automatically be reflected in the running dApp, thanks to the shared folders.

That’s it, we’ve automated pretty much everything to get this running on your laptop (or any other laptop or VM). So…

End..

Well, the next step is to get this into production and monitor the running dApps.

In the next blog post, we’ll get into how we can run these containers (and the smart contracts) on a Kubernetes cluster!

If you spot any errors, or have any suggestions on improvements: please leave either a comment here, or at the Github page. Also, if you liked this post — don’t forget to clap and share it with all of your programming buddies!
