Write git object file from remote repo clone

I have been wrestling with this problem for hours. My local repo is somehow missing a commit:

> git reflog expire --stale-fix --all
error: refs/tags/12.01.02 does not point to a valid object!
error: Could not read 95eeac3b5f441c9ebbd89508896c572e3eb17205
fatal: Failed to traverse parents of commit 6c24f6ea7c0452e70dea6332c6959dad6c71305f

and

$ git fsck --full
Checking object directories: 100% (256/256), done.
Checking objects: 100% (159800/159800), done.
error: refs/tags/12.01.02: invalid sha1 pointer 95eeac3b5f441c9ebbd89508896c572e3eb17205
error: HEAD: invalid reflog entry 95eeac3b5f441c9ebbd89508896c572e3eb17205

I've been running through questions on fixing the problem. Specifically, I ran into this answer:

The first thing you can try is to restore the missing items from backup. For example, see if you have a backup of the commit stored as .git/objects/98/4c11abfc9c2839b386f29c574d9e03383fa589. If so you can restore it.

I found the commit in the remote on GitHub, so I made a fresh clone in a different directory, but .git/objects is mostly empty. Is there a way to regenerate the object file for just one commit?
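One approach that may work is to unpack the fresh clone's packfile into loose objects, then copy the missing ones into the broken repo. A rough sketch (the pack name is a placeholder; the pack and its .idx must be moved aside first, because git skips objects it can already see in place):

cd fresh-clone
mv .git/objects/pack/pack-<hash>.pack /tmp/
mv .git/objects/pack/pack-<hash>.idx /tmp/
git unpack-objects < /tmp/pack-<hash>.pack
# the commit should now be a loose file, e.g.
# .git/objects/95/eeac3b5f441c9ebbd89508896c572e3eb17205,
# which can then be copied into the broken repository's .git/objects/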

Build Docker Images and Host a Docker Image Repository with GitLab

In this tutorial, you'll learn how to build Docker images and host a Docker image repository with GitLab. You'll set up a new GitLab runner to build Docker images, create a private Docker registry to store them in, and update a Node.js app to be built and tested inside of Docker containers.

Introduction

Containerization is quickly becoming the most accepted method of packaging and deploying applications in cloud environments. The standardization it provides, along with its resource efficiency (when compared to full virtual machines) and flexibility, make it a great enabler of the modern DevOps mindset. Many interesting cloud native deployment, orchestration, and monitoring strategies become possible when your applications and microservices are fully containerized.

Docker containers are by far the most common container type today. Though public Docker image repositories like Docker Hub are full of containerized open source software images that you can docker pull and use today, for private code you'll need to either pay a service to build and store your images, or run your own software to do so.

GitLab Community Edition is a self-hosted software suite that provides Git repository hosting, project tracking, CI/CD services, and a Docker image registry, among other features. In this tutorial we will use GitLab's continuous integration service to build Docker images from an example Node.js app. These images will then be tested and uploaded to our own private Docker registry.

Prerequisites

Before we begin, we need to set up a secure GitLab server, and a GitLab CI runner to execute continuous integration tasks. The sections below will provide links and more details.

A GitLab Server Secured with SSL

To store our source code, run CI/CD tasks, and host the Docker registry, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. Additionally, we'll secure the server with SSL certificates from Let's Encrypt. To do so, you'll need a domain name pointed at the server.

A GitLab CI Runner

Set Up Continuous Integration Pipelines with GitLab CI on Ubuntu 16.04 will give you an overview of GitLab's CI service, and show you how to set up a CI runner to process jobs. We will build on top of the demo app and runner infrastructure created in this tutorial.

Step 1 — Setting Up a Privileged GitLab CI Runner

In the prerequisite GitLab continuous integration tutorial, we set up a GitLab runner using sudo gitlab-runner register and its interactive configuration process. This runner is capable of running builds and tests of software inside of isolated Docker containers.

However, in order to build Docker images, our runner needs full access to a Docker service itself. The recommended way to configure this is to use Docker's official docker-in-docker image to run the jobs. This requires granting the runner a special privileged execution mode, so we'll create a second runner with this mode enabled.

Note: Granting the runner privileged mode basically disables all of the security advantages of using containers. Unfortunately, the other methods of enabling Docker-capable runners also carry similar security implications. Please look at the official GitLab documentation on Docker Build to learn more about the different runner options and which is best for your situation.

Because there are security implications to using a privileged runner, we are going to create a project-specific runner that will only accept Docker jobs on our hello_hapi project (GitLab admins can always manually add this runner to other projects at a later time). From your hello_hapi project page, click Settings at the bottom of the left-hand menu, then click CI/CD in the submenu:

Now click the Expand button next to the Runners settings section:

There will be some information about setting up a Specific Runner, including a registration token. Take note of this token. When we use it to register a new runner, the runner will be locked to this project only.

While we're on this page, click the Disable shared Runners button. We want to make sure our Docker jobs always run on our privileged runner. If a non-privileged shared runner were available, GitLab might choose to use it, which would result in build errors.

Log in to the server that has your current CI runner on it. If you don't have a machine set up with runners already, go back and complete the Installing the GitLab CI Runner Service section of the prerequisite tutorial before proceeding.

Now, run the following command to set up the privileged project-specific runner:

    sudo gitlab-runner register -n \
      --url https://gitlab.example.com/ \
      --registration-token your-token \
      --executor docker \
      --description "docker-builder" \
      --docker-image "docker:latest" \
      --docker-privileged

Output

Registering runner... succeeded                     runner=61SR6BwV
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Be sure to substitute your own information. We set all of our runner options on the command line instead of using the interactive prompts, because the prompts don't allow us to specify --docker-privileged mode.

Your runner is now set up, registered, and running. To verify, switch back to your browser. Click the wrench icon in the main GitLab menu bar, then click Runners in the left-hand menu. Your runners will be listed:

Now that we have a runner capable of building Docker images, let's set up a private Docker registry for it to push images to.

Step 2 — Setting Up GitLab's Docker Registry

Setting up your own Docker registry lets you push and pull images from your own private server, increasing security and reducing the dependencies your workflow has on outside services.

GitLab will set up a private Docker registry with just a few configuration updates. First we'll set up the URL where the registry will reside. Then we will (optionally) configure the registry to use an S3-compatible object storage service to store its data.

SSH into your GitLab server, then open up the GitLab configuration file:

sudo nano /etc/gitlab/gitlab.rb

Scroll down to the Container Registry settings section. We're going to uncomment the registry_external_url line and set it to our GitLab hostname with a port number of 5555:

/etc/gitlab/gitlab.rb

registry_external_url 'https://gitlab.example.com:5555'

Next, add the following two lines to tell the registry where to find our Let's Encrypt certificates:

/etc/gitlab/gitlab.rb

registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"

Save and close the file, then reconfigure GitLab:

sudo gitlab-ctl reconfigure

Output

gitlab Reconfigured!

Update the firewall to allow traffic to the registry port:

sudo ufw allow 5555

Now switch to another machine with Docker installed, and log in to the private Docker registry. If you don’t have Docker on your local development computer, you can use whichever server is set up to run your GitLab CI jobs, as it has Docker installed already:

docker login gitlab.example.com:5555

You will be prompted for your username and password. Use your GitLab credentials to log in.

Output
Login Succeeded 

Success! The registry is set up and working. Currently it will store files on the GitLab server's local filesystem. If you'd like to use an object storage service instead, continue with this section. If not, skip down to Step 3.

To set up an object storage backend for the registry, we need to know the following information about our object storage service:

  • Access Key
  • Secret Key
  • Region (us-east-1, for example) if using Amazon S3, or Region Endpoint if using an S3-compatible service (https://nyc3.digitaloceanspaces.com, for example)
  • Bucket Name

If you're using DigitalOcean Spaces, you can find out how to set up a new Space and get the above information by reading How To Create a DigitalOcean Space and API Key.

When you have your object storage information, open the GitLab configuration file:

sudo nano /etc/gitlab/gitlab.rb

Once again, scroll down to the container registry section. Look for the registry['storage'] block, uncomment it, and update it to the following, again making sure to substitute your own information where appropriate:

/etc/gitlab/gitlab.rb

registry['storage'] = {
  's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
    'region' => 'nyc3',
    'regionendpoint' => 'https://nyc3.digitaloceanspaces.com'
  }
}

If you're using Amazon S3, you only need region and not regionendpoint. If you're using an S3-compatible service like Spaces, you'll need regionendpoint. In this case region doesn't actually configure anything and the value you enter doesn't matter, but it still needs to be present and not blank.
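For comparison, a plain Amazon S3 configuration might look like the following sketch (the region value is an example; the keys and bucket name are placeholders as before):

/etc/gitlab/gitlab.rb

registry['storage'] = {
  's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
    'region' => 'us-east-1'
  }
}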

Save and close the file.

Note: There is currently a bug where the registry will shut down after thirty seconds if your object storage bucket is empty. To avoid this, put a file in your bucket before running the next step. You can remove it later, after the registry has added its own objects.

If you are using DigitalOcean Spaces, you can drag and drop to upload a file using the Control Panel interface.

Reconfigure GitLab one more time:

sudo gitlab-ctl reconfigure

On your other Docker machine, log in to the registry again to make sure all is well:

docker login gitlab.example.com:5555

You should get a Login Succeeded message.

Now that we've got our Docker registry set up, let's update our application's CI configuration to build and test our app, and push Docker images to our private registry.

Step 3 — Updating .gitlab-ci.yml and Building a Docker Image

Note: If you didn't complete the prerequisite article on GitLab CI you'll need to copy over the example repository to your GitLab server. Follow the Copying the Example Repository From GitHub section to do so.

To get our app building in Docker, we need to update the .gitlab-ci.yml file. You can edit this file right in GitLab by clicking on it from the main project page, then clicking the Edit button. Alternately, you could clone the repo to your local machine, edit the file, then git push it back to GitLab. That would look like this:

    git clone git@gitlab.example.com:sammy/hello_hapi.git
    cd hello_hapi
    # edit the file w/ your favorite editor
    git commit -am "updating ci configuration"
    git push

First, delete everything in the file, then paste in the following configuration:

.gitlab-ci.yml

image: docker:latest
services:
- docker:dind

stages:
- build
- test
- release

variables:
  TEST_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN gitlab.example.com:5555

build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE npm test

release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
    - master

Be sure to update the URLs and usernames with your own information, then save with the Commit changes button in GitLab. If you're updating the file outside of GitLab, commit the changes and git push back to GitLab.

This new config file tells GitLab to use the latest docker image (image: docker:latest) and link it to the docker-in-docker service (docker:dind). It then defines build, test, and release stages. The build stage builds the Docker image using the Dockerfile provided in the repo, then uploads it to our Docker image registry. If that succeeds, the test stage will download the image we just built and run the npm test command inside it. If the test stage is successful, the release stage will pull the image, tag it as hello_hapi:latest and push it back to the registry.

Depending on your workflow, you could also add additional test stages, or even deploy stages that push the app to a staging or production environment.
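For instance, a minimal deploy stage might look like the sketch below, appended to the configuration above (deploy.sh is a hypothetical script in your repo, and deploy would also need to be added to the stages list):

deploy:
  stage: deploy
  script:
    - docker pull $RELEASE_IMAGE
    - ./deploy.sh $RELEASE_IMAGE
  only:
    - master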

Updating the configuration file should have triggered a new build. Return to the hello_hapi project in GitLab and click on the CI status indicator for the commit:

On the resulting page you can then click on any of the stages to see their progress:

Eventually, all stages should indicate they were successful by showing green check mark icons. We can find the Docker images that were just built by clicking the Registry item in the left-hand menu:

If you click the little "document" icon next to the image name, it will copy the appropriate docker pull ... command to your clipboard. You can then pull and run your image:

    docker pull gitlab.example.com:5555/sammy/hello_hapi:latest
    docker run -it --rm -p 3000:3000 gitlab.example.com:5555/sammy/hello_hapi:latest

Output

> [email protected] start /usr/src/app
> node app.js

Server running at: http://56fd5df5ddd3:3000

The image has been pulled down from the registry and started in a container. Switch to your browser and connect to the app on port 3000 to test. In this case we're running the container on our local machine, so we can access it via localhost at the following URL:

http://localhost:3000/hello/test

Output

Hello, test!

Success! You can stop the container with CTRL-C. From now on, every time we push new code to the master branch of our repository, we'll automatically build and test a new hello_hapi:latest image.

Conclusion

In this tutorial we set up a new GitLab runner to build Docker images, created a private Docker registry to store them in, and updated a Node.js app to be built and tested inside of Docker containers.

20+ Outstanding Vue.js Open Source Projects

There are more than 20 Vue.js open-source projects in this article. The goal was to make this list as varied as possible.

In this short intro, I won’t go back over the history of Vue.js or statistics on the use of this framework. It is now a matter of fact that Vue is gaining popularity, and the projects listed below are the best evidence of its prevalence.

So here we go!

Prettier

Opinionated code formatter

Website: https://prettier.io

Demo: https://prettier.io/playground/

GitHub: https://github.com/prettier/prettier

GitHub Stars: 32 343

Prettier reprints your code in a consistent style according to a set of rules. Using a code formatter, you don’t have to format code manually or argue about the right coding style anymore. This formatter integrates with most editors (Atom, Emacs, Visual Studio, WebStorm, etc.) and works with many of your favorite languages and tools, such as JavaScript, CSS, HTML, and GraphQL. Last year Prettier also started to run in the browser and added support for .vue files.
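If you want to try it quickly, a typical command-line invocation looks like this (assuming Node.js is installed; the file path is a placeholder):

npx prettier --write src/App.vue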

Image source: https://prettier.io

Vuetify

Material Component Framework

Website: https://vuetifyjs.com/en/

GitHub: https://github.com/vuetifyjs/vuetify

GitHub Stars: 20 614

This framework allows you to customize visual components. It complies with the Google Material Design guidelines and combines all the advantages of Vue.js and Material. What is more, Vuetify is constantly evolving, since it is improved by both communities on GitHub. The framework is compatible with RTL and Vue CLI 3. You can build an interactive and attractive frontend using Vuetify.

Image source: https://vuetifyjs.com/en/

iView

A set of UI components

Website: https://iviewui.com/

GitHub: https://github.com/iview/iview

GitHub Stars: 21 643

Developers of all skill levels can use iView, but you have to be familiar with Single File Components (https://vuejs.org/v2/guide/single-file-components.html). A friendly API and constant fixes and upgrades make it easy to use. You can use separate components (navigation, charts, etc.) or you can use a Starter Kit. Solid documentation is a big plus for iView, and of course it is compatible with the latest Vue.js. Please note that it doesn’t support IE8.

Image source: https://iviewui.com/

Epiboard

A tab page

GitHub: https://github.com/Alexays/Epiboard

GitHub Stars: 124

A tab page gives easy access to RSS feeds, weather, downloads, etc. Epiboard focuses on customizability to provide a personalized experience: you can synchronize your settings across devices, change the look and feel, and add your favorite bookmarks. The project follows the Material Design guidelines. You can find the full list of current cards on the GitHub page.

Image source: https://github.com/Alexays/Epiboard

Light Blue Vue Admin

Vue JS Admin Dashboard Template

Website: https://flatlogic.com/admin-dashboards/light-blue-vue-lite

Demo: https://flatlogic.com/admin-dashboards/light-blue-vue-lite/demo

GitHub: https://github.com/flatlogic/light-blue-vue-admin

GitHub Stars: 28

Light Blue is built with the latest Vue.js and Bootstrap, has detailed documentation, and sports a transparent, modern design. This template is easy to navigate, has user-friendly functions, and offers a variety of UI elements. All the components fit together impeccably and provide a great user experience. Easy customization is another big plus, dramatically cutting development time.

Image source: https://flatlogic.com/admin-dashboards/light-blue-vue-lite

Beep

Account Security Scanner

Website: https://beep.modus.app

GitHub: https://github.com/ModusCreateOrg/beep

GitHub Stars: 110

This security scanner was built with Vue.js and Ionic. It runs security checks and keeps passwords safe. So how does the check work? Beep simply compares your data against the information in leaked-credentials databases. Your passwords are safe with Beep thanks to its use of the SHA-1 algorithm, and the app never stores your login and password as-is.

Image source: https://beep.modus.app

Sing App Vue Dashboard

Vue.JS admin dashboard template

Website: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard

Demo: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard/demo

GitHub: https://github.com/flatlogic/sing-app-vue-dashboard

GitHub Stars: 176

What do you need from an admin template? You definitely need a classic look, awesome typography, and the usual set of components. Sing App fits all these criteria, plus it has a very soft color scheme. The free version of this template has all the necessary features to start your project with minimal work. This elegantly designed dashboard can be useful for most web apps, such as a CMS, a CRM, or a simple website admin panel.

Image source: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard

Vue Storefront

PWA for the eCommerce

Website: https://www.vuestorefront.io

GitHub: https://github.com/DivanteLtd/vue-storefront

GitHub Stars: 5 198

This PWA storefront can connect to almost any eCommerce backend, because it uses a headless architecture; that includes the popular BigCommerce platform, Magento, Shopware, and others. Vue Storefront isn’t easy to learn at once, because it is a complex solution, but it gives you lots of possibilities, and it is always improving thanks to a growing community of professionals. Its advantages include a mobile-first approach, Server-Side Rendering (good for SEO), and an offline mode.

Image source: https://www.vuestorefront.io

Cross-platform GUI client for DynamoDb

GitHub: https://github.com/Arattian/DynamoDb-GUI-Client

GitHub Stars: 178

DynamoDB is a NoSQL database applicable in cases where you have to deal with large amounts of data, or with serverless apps using AWS Lambda. This GUI client gives you remote access and supports several databases at the same time.

Image source: https://github.com/Arattian/DynamoDb-GUI-Client

vueOrgChart

Interactive organization chart

Demo: https://hoogkamer.github.io/vue-org-chart/#/

GitHub: https://github.com/Hoogkamer/vue-org-chart

GitHub Stars: 44

With this solution, no web server, installation, or database is needed. The chart can be edited in Excel or on a web page, and you can easily search for a particular manager or department. There are two options for usage. The first is as a static website, suitable if you want to use vueOrgChart without modification. If you are planning to build your own chart on top of this project, you will have to study the “Build Setup” section.

Image source: https://hoogkamer.github.io/vue-org-chart/#/

Faviator

Favicon generator

Website: https://www.faviator.xyz

Demo: https://www.faviator.xyz/playground

GitHub: https://github.com/faviator/faviator

GitHub Stars: 63

This library helps you create a simple icon. The first step is to pass in a configuration; the second is to choose the format of your icon: JPG, PNG, or SVG. As you can see in the screenshot, you can choose any font from Google Fonts.

Image source: https://www.faviator.xyz

Minimal Notes

Web app for PC or Tablet

Demo: https://vladocar.github.io/Minimal-Notes/

GitHub: https://github.com/vladocar/Minimal-Notes

GitHub Stars: 48

There is not much to say about this app. It is minimalistic, works locally in a browser, stores notes in localStorage, and the file is only 4 KB. It is also available for Mac OS, where the file grows from 4 KB to 0.45 MB, but it is still very lightweight.

Image source: https://vladocar.github.io/Minimal-Notes/

Directus

CMS built with Vue.js

Website: https://directus.io

Demo: https://directus.app/?ref=madewithvuejs.com#/login

GitHub: https://github.com/directus/directus

GitHub Stars: 4 607

Directus is a very lightweight and simple CMS. It has been modularized to give developers the opportunity to customize it in every aspect. The main peculiarity of this CMS is that it stores your data in SQL databases, so the data stays synchronized with every change you make and can be easily customized. It also supports multilingual content.

Image source: https://directus.io

VuePress

Static Site Generator

Website: https://vuepress.vuejs.org

GitHub: https://github.com/vuejs/vuepress

GitHub Stars: 12 964

The creator of Vue.js, Evan You, created this simple site generator. Minimalistic and SEO-friendly, it has multi-language support and easy Google Analytics integration. A VuePress site uses Vue, VueRouter, and webpack. If you have worked with Nuxt or Gatsby, you will notice some similarities; the main difference is that Nuxt was created for developing applications, while VuePress is for building static websites.

Image source: https://vuepress.vuejs.org

Docsify

Documentation site generator

Website: https://docsify.js.org/#/

GitHub: https://github.com/docsifyjs/docsify

GitHub Stars: 10 105

This project has an impressive showcase list. The main peculiarity of this generator lies in the way pages are generated: it simply grabs your Markdown files and displays each one as a page of your site. Other big pluses of this project are full-text search and API plugins. It supports multiple themes and is really lightweight.

Image source: https://docsify.js.org/#/

vue-cli

Standard Tooling for Vue.js Development

Website: https://cli.vuejs.org

GitHub: https://github.com/vuejs/vue-cli

GitHub Stars: 21 263

This well-known tooling was released by the Vue team. Please note that before starting to use it you should install the latest versions of Vue.js, Node.js, NPM, and a code editor. Vue CLI has a GUI tool and instant prototyping. Instant prototyping is a relatively new feature that allows you to develop a single component in isolation, and this component will have all the “Vue powers” of a full Vue.js project.

Image source: https://cli.vuejs.org

SheetJS

Spreadsheet Parser and Writer

Website: https://sheetjs.com/

Demo: https://sheetjs.com/demos

GitHub: https://github.com/SheetJS/js-xlsx

GitHub Stars: 16 264

SheetJS is a JS library that helps you work with data stored in Excel files. For example, you can export a workbook on the browser side or convert any HTML table. In other words, SheetJS doesn’t require a server-side script or, say, AJAX. It is the best solution for front-end manipulation of two-dimensional tables: it can export and parse data, and it runs in a node terminal or on the browser side.

Image source: https://sheetjs.com/

Vue-devtools

Browser devtools extension

GitHub: https://github.com/vuejs/vue-devtools

GitHub Stars: 13 954

Almost any framework provides developers with a suitable devtool, literally an additional panel in the browser that differs greatly from the standard one. You don’t have to install it as a browser extension; there is an option to install it as a standalone application. You can activate it by right-clicking an element and choosing “Inspect Vue component”, then navigate the tree of components. The left menu of this tool will show you the data and the props of the component.

Image source: https://github.com/vuejs/vue-devtools

Handsontable

Data Grid Component

Website: https://handsontable.com

GitHub: https://github.com/handsontable/handsontable

GitHub Stars: 12 049

This component has a spreadsheet look, can be easily modified with plugins, and binds to almost any data source. It supports all the standard operations: read, delete, update, and create. You can also sort and filter your records, include data summaries, and assign a type to a cell. This project has exemplary documentation and was designed to be as customizable as it needs to be.

Image source: https://handsontable.com

Vue webpack boilerplate

Website: http://vuejs-templates.github.io/webpack/

GitHub: https://github.com/vuejs-templates/webpack

GitHub Stars: 9 052

Vue.js provides great templates to help you start the development process with your favorite stack. This boilerplate is a solid foundation for your project. It includes the best project structure and configuration, optimal tools and best development practices. Make sure this template has more or less the same features that you need for your project. Otherwise, it is better to use Vue CLI due to its flexibility.

Image source: http://vuejs-templates.github.io/webpack/

Material design for Vue.js

Website: http://vuematerial.io

GitHub: https://github.com/vuematerial/vue-material

GitHub Stars: 7 984

What is great about this Material Design framework is its truly thorough documentation. The framework is very lightweight, with a full array of components, and is fully in line with the Google Material Design guidelines. The design fits every screen and supports every modern browser.

Image source: http://vuematerial.io

CSSFX

Click-to-copy CSS effects

Website: https://cssfx.dev

GitHub: https://github.com/jolaleye/cssfx

GitHub Stars: 4 569

This project is very simple and does exactly what the description line says: it’s a collection of CSS effects. You can see a preview of each effect, click on it, and a pop-up appears with a code snippet that you can copy.

Image source: https://cssfx.dev

uiGradients

Website: http://uigradients.com/

GitHub: https://github.com/ghosh/uiGradients

GitHub Stars: 4 323

This is a collection of linear gradients which allows you to copy CSS code. The collection is community-contributed, and you can filter gradients by preferred color.

Image source: http://uigradients.com/

Vuestic

Demo: https://vuestic.epicmax.co/#/admin/dashboard

GitHub: https://github.com/epicmaxco/vuestic-admin

GitHub Stars: 5 568

Vuestic is a responsive admin template that is already proving popular on GitHub. Made with Bootstrap 4, this template doesn’t require jQuery. With 36 ready-to-use UI elements and 18 pages, Vuestic offers multiple options for customization. The code is constantly evolving, not only through the efforts of the author but also thanks to the support of the Vue community on GitHub.

Image source: https://vuestic.epicmax.co/#/admin/dashboard

Git and Github: A Beginner’s Guide for Absolute Beginners

If you are a developer and you want to get started with Git and GitHub, then this article is made for you.

What is Git?

Git is free, open-source version control software. It was created by Linus Torvalds in 2005, initially developed so that several developers could work together on the Linux kernel.

This basically means that Git is a content tracker. So Git can be used to store content — and it is mostly used to store code because of the other features it provides.

Real life projects generally have multiple developers working in parallel. So they need a version control system like Git to make sure that there are no code conflicts between them.

Also, the requirements in such projects change often. So a version control system allows developers to revert and go back to an older version of their code.

The branch system in Git allows developers to work individually on a task (For example: One branch -> One task OR One branch -> One developer). Basically think of Git as a small software application that controls your code base, if you’re a developer.
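As a minimal illustration of the one branch -> one task idea (the branch name is just an example):

git checkout -b login-page    # create a task branch and switch to it
# ...edit and commit on this branch...
git checkout master           # switch back once the task is done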

Git Repositories

If we want to start using Git, we need to know where to host our repositories.

A repository (or “Repo” for short) is a project that contains multiple files. In our case a repository will contain code-based files.

There are two ways you can host your repositories. One is online (on the cloud) and the second is offline (self-installed on your server).

There are three popular Git hosting services: GitHub (owned by Microsoft), GitLab (owned by GitLab) and BitBucket. We’ll use GitHub as our hosting service.

Before using Git we should know why we need it

Git makes it easy to contribute to open source projects

Nearly every open-source project uses GitHub to manage their projects. Using GitHub is free if your project is open source, and it includes a wiki and issue tracker that makes it easy to include more in-depth documentation and get feedback about your project.

If you want to contribute, you just fork (get a copy of) a project, make your changes, and then send the project a pull request using GitHub's web interface. This pull request is your way of telling the project you're ready for them to review your changes.

Documentation

By using GitHub, you make it easier to get excellent documentation. Their help section and guides have articles for nearly any topic related to Git that you can think of.

Integration options

GitHub can integrate with common platforms such as Amazon and Google Cloud, with services such as Code Climate to track your feedback, and can highlight syntax in over 200 different programming languages.

Track changes in your code across versions

When multiple people collaborate on a project, it’s hard to keep track of revisions — who changed what, when, and where those files are stored.

GitHub takes care of this problem by keeping track of all the changes that have been pushed to the repository.

Much like using Microsoft Word or Google Drive, you can have a version history of your code so that previous versions are not lost with every iteration. It’s easy to come back to the previous version and contribute your work.

Showcase your work

Are you a developer who wishes to attract recruiters? GitHub is the best tool you can rely on for this.

Today, when searching for new recruits for their projects, most companies look at GitHub profiles. If your profile is available, you will have a higher chance of being recruited even if you are not from a great university or college.

Now we’ll learn how to use Git & GitHub

GitHub account creation

To create your account, you need to go to GitHub's website and fill out the registration form.

Git installation

Now we need to install Git's tools on our computer. We’ll use CLI to communicate with GitHub.

For Ubuntu:

  1. First, update your packages.

sudo apt update

  2. Next, install Git with apt-get.

sudo apt-get install git

  3. Finally, verify that Git is installed correctly.

git --version

  4. Run the following commands with your information to set a default username and email for when you save your work.

git config --global user.name "MV Thanoshan"
git config --global user.email "[email protected]"

Working with GitHub projects

We’ll work with GitHub projects in two ways.

Type 1: Create the repository, clone it to your PC, and work on it. (Recommended)

Type 1 involves creating a totally fresh repository on GitHub, cloning it to our computer, working on our project, and pushing it back.

Create a new repository by clicking the “new repository” button on the GitHub web page.

Pick a name for your first repository, add a small description, check the ‘Initialize this repository with a README’ box, and click on the “Create repository” button.

Well done! Your first GitHub repository is created.

Your first mission is to get a copy of the repository on your computer. To do that, you need to “clone” the repository on your computer.

To clone a repository means that you're taking a repository that’s on the server and cloning it to your computer – just like downloading it. On the repository page, you need to get the “HTTPS” address.

Once you have the address of the repository, you need to use your terminal. Use the following command on your terminal. When you’re ready you can enter this:

git clone [HTTPS ADDRESS]

This command will make a local copy of the repository hosted at the given address.

Now, your repository is on your computer. You need to move in it with the following command.

cd [NAME OF REPOSITORY]

As you can see in the above picture, my repository name is “My-GitHub-Project” and this command made me go to that specific directory.

NOTE: When you clone, Git will create a repository on your computer. If you want, you can access your project through your computer’s user interface instead of using the above ‘cd’ command in the terminal.

Now, in that folder we can create files, work on them, and save them locally. To save them in a remote place like GitHub, we have to do a process called a “commit”. To do this, get back to your terminal. If you closed it, use the ‘cd’ command as I previously stated.

cd [NAME OF REPOSITORY]

Now, in the terminal, you’re in your repository directory. There are 4 steps in a commit: ‘status’ , ‘add’ , ‘commit’ and ‘push’. All the following steps must be performed within your project. Let's go through them one by one.

  1. “status”: The first thing you need to do is to check the files you have modified. To do this, you can type the following command to make a list of changes appear.
git status

2. “add”: With the help of the change list, you can add all files you want to upload with the following command,

git add [FILENAME] [FILENAME] [...]

In our case, we’ll add a simple HTML file.

git add sample.html

3. “commit”: Now that we have added the files of our choice, we need to write a message to explain what we have done. This message may be useful later if we want to check the change history. Here is an example of what we can put in our case.

git commit -m "Added sample HTML file that contains basic syntax"

4. “push”: Now we can put our work on GitHub. To do that, we have to ‘push’ our files to the remote. The remote is a duplicate instance of our repository that lives somewhere else, on a remote server. To do this, we must know the remote’s name (usually the remote is named origin). To figure out that name, type the following command.

git remote

As you can see in the above image, it says that our remote’s name is origin. Now we can safely ‘push’ our work by the following command.

git push origin master

Now, if we go to our repository on the GitHub web page, we can see the sample.html file that we’ve pushed to remote — GitHub!

NOTE: Sometimes when you’re using Git commands in the terminal, it can lead you to the VIM text editor (a CLI based text-editor). So to get rid of it, you have to type

:q

and ENTER.

Pulling is the act of receiving from GitHub.

Pushing is the act of sending to GitHub.
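For example, to receive your teammates’ latest commits from GitHub into your local copy, you would run the following (assuming the remote is named origin, as above):

git pull origin master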

Type 2: Work on your project locally then create the repository on GitHub and push it to remote.

Type 2 lets you turn an existing folder on your computer into a fresh repository and send it to GitHub. In a lot of cases you might have already made something on your computer that you want to suddenly turn into a repository on GitHub.

I will explain this to you with a Survey form web project that I made earlier that wasn’t added to GitHub.

As I already mentioned, when executing any Git commands, we have to make sure that we are in the correct directory in the terminal.

By default, any directory on our computer is not a Git repository – but we can turn it into a Git repository by executing the following command in the terminal.

git init

After converting our directory to a Git repository, the first thing we need to do is to check the files we have by using the following command.

git status

So there are two files in that directory that we need to “add” to our Repo.

git add [FILENAME] [FILENAME] [...]

NOTE: To “add” all of the files in our Repository we can use the following command:

git add .

After the staging area (the add process) is complete, we can check whether the files were successfully added by executing git status again.

If those particular files are in green like the below picture, you’ve done your work!

Then we have to “commit” with a description in it.

git commit -m "Adding web Survey form"

If my repository started on GitHub and I brought it down to my computer, a remote is already going to be attached to it (Type 1). But if I’m starting my repository on my computer, it doesn’t have a remote associated with it, so I need to add that remote (Type 2).

So to add that remote, we have to go to GitHub first. Create a new repository and name it whatever you want to store it in GitHub. Then click the “Create repository” button.

NOTE: In Type 2, Please don’t initialize the repository with a README file when creating a new repository on the GitHub web page.

After clicking the “Create repository” button you’ll find the below image as a web page.

Copy the HTTPS address. Now we’ll create the remote for our repository.

git remote add origin [HTTPS ADDRESS]

After executing this command, we can check whether we have successfully added the remote or not by the following command

git remote

And if it outputs “origin” you’ve added the remote to your project.

NOTE: Just remember we can state any name for the remote by changing the name “origin”. For example:

git remote add [REMOTE NAME] [HTTPS ADDRESS]

Now, we can push our project to GitHub without any problems!

git push origin master

After completing these steps one by one, if you go to GitHub you can find your repository with the files!

Conclusion

Thank you everyone for reading. I just explained the basics of Git and GitHub. I strongly encourage you all to read more related articles on Git and GitHub. I hope this article helped you.

Happy Coding!

Kubernetes Tutorial: How to deploy Gitea using the Google Kubernetes Engine

In this tutorial we will go over how to deploy Gitea, an open-source Git hosting service, using the Google Kubernetes Engine.

Originally published by Daniel Sanche at https://medium.com

If you’ve read "An Introduction to Kubernetes", you should have a good foundational understanding of the basic pieces that make up Kubernetes. If you’re anything like me, however, you won’t fully understand a concept until you get hands-on with it.

There’s nothing too special about Gitea specifically, but going through the process of deploying an arbitrary open source application to the cloud will give us some practical hands-on experience with using Kubernetes. Plus, at the end you will be left with a great self-hosted service you can use to host your future projects!

Setting Up a Cluster

kubectl and gcloud

The most important tool you use when setting up a Kubernetes environment is the kubectl command. This command allows you to interact with the Kubernetes API. It is used to create, update, and delete Kubernetes resources like pods, deployments, and load balancers.

There is a catch, however: kubectl can’t be used to directly provision the nodes or clusters your pods are run on. This is because Kubernetes was designed to be platform agnostic. Kubernetes doesn’t know or care where it is running, so there is no built in way for it to communicate with your chosen cloud provider to rent nodes on your behalf. Because we are using Google Kubernetes Engine for this tutorial, we will need to use the gcloud command for these tasks.

In brief, gcloud is used to provision the resources listed under “Hardware”, and kubectl is used to manage the resources listed under “Software”

This tutorial assumes you already have kubectl and gcloud installed on your system. If you’re starting completely fresh, you will first want to check out the first part of the Google Kubernetes Engine Quickstart to sign up for a GCP account, set up a project, enable billing, and install the command line tools.

Once you have your environment ready to go, you can create a cluster by running the following commands:

# create the cluster
# by default, 3 standard nodes are created for our cluster
gcloud container clusters create my-cluster --zone us-west1-a

# get the credentials so we can manage it locally through kubectl
# creating a cluster can take a few minutes to complete
gcloud container clusters get-credentials my-cluster \
     --zone us-west1-a

We now have a provisioned cluster made up of three n1-standard-1 nodes.
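To confirm that kubectl can reach the new cluster, you can list its nodes; each of the three VMs should report a Ready status:

kubectl get nodes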

Along with the gcloud command, you can manage your resources through the Google Cloud Console page. After running the previous commands, you should see your cluster appear under the GKE section. You should also see a list of the VMs provisioned as your nodes under the GCE section. Note that although the GCE UI allows you to delete the VMs from this page, they are being managed by your cluster, which will re-create them when it notices they are missing. When you are finished with this tutorial and want to permanently remove the VMs, you can remove everything at once by deleting the cluster itself.


Deploying An App

YAML: Declarative Infrastructure

Now that our cluster is live, it’s time to put it to work. There are two ways to add resources to Kubernetes: interactively through the command line (for example, with kubectl create), and declaratively, by defining resources in YAML files.

While interactive deployment is great for experimenting, YAML is the way to go when you want to build something maintainable. By writing all of your Kubernetes resources into YAML files, you can record the entire state of your cluster in a set of easily maintainable files, which can be version-controlled and managed like any other part of your system. In this way, all the instructions needed to host your service can be saved right alongside the code itself.

Adding a Pod

To show a basic example of what a Kubernetes YAML file looks like, let’s add a pod to our cluster. Create a new file called gitea.yaml and fill it with the following text:

apiVersion: v1
kind: Pod
metadata:
  name: gitea-pod
spec:
  containers:
  - name: gitea-container
    image: gitea/gitea:1.4

This pod is fairly basic. Line 2 declares that the type of resource we are creating is a pod; line 1 says that this resource is defined in v1 of the Kubernetes API. Lines 3–8 describe the properties of our pod. In this case, the pod is unoriginally named “gitea-pod”, and it contains a single container we’re calling “gitea-container”.

Line 8 is the most interesting part. This line defines which container image we want to run; in this case, the image tagged 1.4 in the gitea/gitea repository. Kubernetes will tell the built-in container runtime to find the requested container image, and pull it down into the pod. Because the default container runtime is Docker, it will find the gitea repository hosted on Dockerhub, and pull down the requested image.

Now that we have the YAML written out, we apply it to our cluster:

kubectl apply -f gitea.yaml

This command will cause Kubernetes to read our YAML file, and update any resources in our cluster accordingly. To see the newly created pod in action, you can run kubectl get pods. You should see the pod running.

$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
gitea-pod   1/1       Running   0          9m

Gitea is now running in a pod on the cluster

If you want even more information, you can view the standard output of the container with the following command:

$ kubectl logs -f gitea-pod
Generating /data/ssh/ssh_host_ed25519_key...

Feb 13 21:22:00 syslogd started: BusyBox v1.27.2

Generating /data/ssh/ssh_host_rsa_key...

Generating /data/ssh/ssh_host_dsa_key...

Generating /data/ssh/ssh_host_ecdsa_key...

/etc/ssh/sshd_config line 32: Deprecated option UsePrivilegeSeparation

Feb 13 21:22:01 sshd[12]: Server listening on :: port 22.

Feb 13 21:22:01 sshd[12]: Server listening on 0.0.0.0 port 22.

2018/02/13 21:22:01 [T] AppPath: /app/gitea/gitea

2018/02/13 21:22:01 [T] AppWorkPath: /app/gitea

2018/02/13 21:22:01 [T] Custom path: /data/gitea

2018/02/13 21:22:01 [T] Log path: /data/gitea/log

2018/02/13 21:22:01 [I] Gitea v1.4.0+rc1-1-gf61ef28 built with: bindata, sqlite

2018/02/13 21:22:01 [I] Log Mode: Console(Info)

2018/02/13 21:22:01 [I] XORM Log Mode: Console(Info)

2018/02/13 21:22:01 [I] Cache Service Enabled

2018/02/13 21:22:01 [I] Session Service Enabled

2018/02/13 21:22:01 [I] SQLite3 Supported

2018/02/13 21:22:01 [I] Run Mode: Development

2018/02/13 21:22:01 Serving [::]:3000 with pid 14

2018/02/13 21:22:01 [I] Listen: http://0.0.0.0:3000

As you can see, there is now a server running inside the container on our cluster! Unfortunately, we won’t be able to access it until we start opening up ingress channels (coming in a future post).

Deployment

As explained in An Introduction to Kubernetes, pods aren’t typically run directly in Kubernetes. Instead, we should define a deployment to manage our pods.

First, let’s delete the pod we already have running:

kubectl delete -f gitea.yaml

This command removes all resources defined in the YAML file from the cluster. We can now modify our YAML file to look like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea-container
        image: gitea/gitea:1.4

This one looks a bit more complicated than the pod we made earlier. That’s because we are really defining two different objects here: the deployment itself (lines 1–9), and the template of the pod it is managing (lines 10–17).

Line 6 is the most important part of our deployment. It defines the number of copies of the pods we want running. In this example, we are only requesting one copy, because Gitea wasn’t designed with multiple pods in mind.

There is one other new concept introduced here: labels and selectors. Labels are simply user-defined key-value pairs associated with Kubernetes resources. Selectors are used to retrieve the resources that match a given label query. In this example, line 13 assigns the label “app=gitea” to all pods created by this deployment. Now, if the deployment ever needs to retrieve the list of all pods that it created (to make sure they are all healthy, for example), it will use the selector defined on lines 8–9. In this way, the deployment can always keep track of which pods it manages by searching for which ones have been assigned the “app=gitea” label.

For the most part, labels are user-defined. In the example above, “app” doesn’t mean anything special to Kubernetes, it is just a way that we may find useful to organize our system. Having said that, there are certain labels that are automatically applied by Kubernetes, containing information about the system.
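You can run the same kind of label query yourself. For example, the following asks for every pod carrying the label defined above:

kubectl get pods -l app=gitea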

Now that we have created our new YAML file, we can re-apply it to our cluster:

kubectl apply -f gitea.yaml

Now, our pod is managed by a deployment

Now, if we run kubectl get pods, we can see our new pod running, as specified in our deployment:

$ kubectl get pods
NAME                               READY    STATUS    RESTARTS
gitea-deployment-8944989b8-5kmn2   0/1      Running   0

We can see information about the deployment itself:

$ kubectl get deployments
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gitea-deployment  1        1        1           1          4m

To test that everything’s working, try deleting the pod with kubectl delete pod <pod_name>. You should quickly see a new one pop back up in its place. That’s the magic of deployments!

You may have noticed that the new pod has a weird, partially randomly generated name. That’s because pods are now created in bulk by the deployment and are meant to be ephemeral. When wrapped in a deployment, pods should be thought of as cattle rather than pets.

Thanks for reading

Why Git and Git-LFS is not enough to solve the Machine Learning Reproducibility crisis

Originally published by David Herron at towardsdatascience.com

Some claim the machine learning field is in a crisis due to software tooling that’s insufficient to ensure repeatable processes. The crisis is about difficulty in reproducing results such as machine learning models. The crisis could be solved with better software tools for machine learning practitioners.

The reproducibility issue is so important that the annual NeurIPS conference plans to make this a major topic of discussion at NeurIPS 2019. The “Call for Papers” announcement has more information https://medium.com/@NeurIPSConf/call-for-papers-689294418f43

The so-called crisis is because of the difficulty in replicating the work of co-workers or fellow scientists, threatening their ability to build on each other’s work, share it with clients, or deploy production services. Since machine learning, and other forms of artificial intelligence software, are so widely used across both academic and corporate research, replicability or reproducibility is a critical problem.

We might think this can be solved with typical software engineering tools, since machine learning development is similar to regular software engineering. In both cases we generate some sort of compiled software asset for execution on computer hardware hoping to get accurate results. Why can’t we tap into the rich tradition of software tools, and best practices for software quality, to build repeatable processes for machine learning teams?

Unfortunately traditional software engineering tools do not fit well with the needs of machine learning researchers.

A key issue is the training data. Often this is a large amount of data, such as images, videos, or texts, that is fed into machine learning tools to train an ML model. Often the training data is not under any kind of source control mechanism, if only because systems like Git do not deal well with large data files, and source control management systems designed to generate deltas for text files do not deal well with changes to large binary files. Any experienced software engineer will tell you that a team without source control will be in a state of barely managed chaos. Changes won’t always be recorded, and team members might forget what was done.

At the end of the day, that means a model trained against the training data cannot be replicated, because the training data set will have changed in unknowable ways. If there is no software system to remember the state of the data set on any given day, then what mechanism is there to remember what happened when?

Git-LFS is your solution, right?

The first response might be to simply use Git-LFS (Git Large File Storage) because it, as the name implies, deals with large files while building on Git. The pitch is that Git-LFS “replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.” One can just imagine a harried machine learning team saying “sounds great, let’s go for it”. It handles multi-gigabyte files, speeds up checkout from remote repositories, and uses the same comfortable workflow. That sure ticks a lot of boxes, doesn’t it?
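The basic workflow is indeed comfortable. Getting started typically looks something like this (the tracked pattern and file names are just examples):

git lfs install                # install the Git-LFS hooks in this repository
git lfs track "*.tfrecord"     # store matching files as LFS pointers
git add .gitattributes         # the tracking rule itself is version-controlled
git add data/train.tfrecord
git commit -m "Add training data via Git-LFS"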

Not so fast, didn’t your manager instruct you to evaluate carefully before jumping in with both feet? Another life lesson to recall is to look both ways before crossing the street.

The first thing your evaluation should turn up is that Git-LFS requires an LFS server, and that server is not available through every Git hosting service. The big three (Github, Gitlab and Atlassian) all support Git-LFS, but maybe you have a DIY bone in your body. Instead of using a 3rd party Git hosting service, you might prefer to host your own Git service. Gogs, for example, is a competent Git service you can easily run on your own hardware, but it does not have built-in support for Git-LFS.

Depending on your data, this next one could be a killer: Git-LFS lets you store files of at most 2 GB. That is a GitHub limitation rather than a Git-LFS limitation, but all Git-LFS implementations seem to come with limitations of one sort or another; GitLab and Atlassian both publish their own lists of Git-LFS limitations. Consider the 2 GB limit on GitHub: one of the use cases in the Git-LFS pitch is storing video files, yet isn’t it common for videos to be far larger than 2 GB? If so, Git-LFS on GitHub is probably unsuitable for machine learning datasets.

It’s not just the 2 GB file size limit. GitHub places such a tight limit on the free tier of Git-LFS usage that one must purchase a data plan covering both data and bandwidth usage.

A related issue is bandwidth: with a hosted Git-LFS solution, your training data sits on a remote server and must be downloaded over the Internet. The time it takes to download training data is a serious user-experience problem.

Another issue is the ease of placing data files on a cloud storage system (AWS, GCP, etc.), as is often required when running cloud-based AI software. This is not supported: the main Git-LFS offerings from the big three Git services store your LFS files on their own servers. There is a DIY Git-LFS server that stores files on AWS S3, at https://github.com/meltingice/git-lfs-s3, but setting up a custom Git-LFS server of course requires additional work. And what if you need the files on GCP rather than AWS infrastructure? Is there a Git-LFS server that stores data on the cloud storage platform of your choice? Is there a Git-LFS server that works over a simple SSH server? In other words, Git-LFS limits your choices of where the data is stored.

Does using Git-LFS solve the so-called Machine Learning Reproducibility Crisis?

With Git-LFS your team has better control over the data, because it is now version controlled. Does that mean the problem is solved?

Earlier we said the “key issue is the training data”, but that was a lie. Sort of. Yes, keeping the data under version control is a big improvement. But is the lack of version control of the data files the entire problem? No.

What determines the results of training a model or other activities? The determining factors include the following, and perhaps more:

  • Training data — the image database or whatever data source is used in training the model
  • The scripts used in training the model
  • The libraries used by the training scripts
  • The scripts used in processing data
  • The libraries or other tools used in processing data
  • The operating system and CPU/GPU hardware
  • Production system code
  • Libraries used by production system code

Obviously the result of training a model depends on a variety of conditions. With so many variables in play it is hard to be precise, but the general problem is a lack of what is now called Configuration Management. Software engineers have come to recognize the importance of being able to specify the precise system configuration used in deploying systems.
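Even partial steps help. Pinning the exact library versions used in training, for instance, is one slice of configuration management any team can adopt today. A minimal sketch using Python’s standard packaging tools:

$ pip freeze > requirements.txt    # record the exact versions of every installed library
$ git add requirements.txt
$ git commit -m "Pin training dependencies"
# later, on another machine:
$ pip install -r requirements.txt  # recreate the same library environment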

Solutions to machine learning reproducibility

Humans are an inventive lot, and there are many possible solutions to this “crisis”.

Environments like RStudio or Jupyter Notebook offer a kind of interactive notebook document which can be configured to execute data science or machine learning workflows. This is useful for documenting machine learning work and for specifying which scripts and libraries are used. But these systems do not offer a solution for managing data sets.

Likewise, Makefiles and similar workflow scripting tools offer a method to repeatedly execute a series of commands, deciding what to re-run based on file-system timestamps. These tools offer no solution for data management either.

At the other end of the scale are companies like Domino Data Labs or C3 IoT, which offer hosted platforms for data science and machine learning. Both package together an offering built upon a wide swath of data science tools. In some cases, like C3 IoT, users code in a proprietary language and store their data in a proprietary data store. A one-stop-shopping service can be enticing, but will it offer the needed flexibility?

In the rest of this article we’ll discuss DVC. It was designed to closely match Git functionality, leveraging the familiarity most of us have with Git, while adding features that make it work well for both workflow and data management in the machine learning context.

DVC (https://dvc.org) takes on, and solves, a larger slice of the machine learning reproducibility problem than Git-LFS or the other potential solutions just mentioned. It does this by managing the code (scripts and programs) alongside large data files, in a hybrid between DVC and a source code management (SCM) system like Git. In addition, DVC manages the workflow required for processing files used in machine learning experiments. The data files and the commands to execute are described in DVC files, which we’ll learn about in the following sections. Finally, with DVC it is easy to store data on many storage systems, from local disk, to an SSH server, to cloud systems (S3, GCP, etc.). Data managed by DVC can easily be shared with others via such a storage system.

Image courtesy dvc.org

DVC uses a command structure similar to Git’s. As we see here, just as git push and git pull are used for sharing code and configuration with collaborators, dvc push and dvc pull are used for sharing data. All of this is covered in more detail in the coming sections; if you want to skip straight to learning about DVC, see the tutorial at https://dvc.org/doc/tutorial.
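As a taste of that familiarity, a minimal DVC session might look like the following sketch (the file name is illustrative, and the exact paths to git add can vary by DVC version):

$ dvc init                        # initialize DVC inside an existing Git repository
$ dvc add data/data.xml           # put a data file under DVC management
$ git add data/data.xml.dvc .gitignore
$ git commit -m "Track data with DVC"
$ dvc push                        # upload data to remote storage (configured later)
$ dvc pull                        # fetch that data in another clone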

DVC remembers precisely which files were used at what point of time

At the core of DVC is a data store (the DVC cache) optimized for storing and versioning large files. The team chooses which files to store in the SCM (like Git) and which to store in DVC. Files managed by DVC are stored such that DVC can maintain multiple versions of each file, using file-system links to quickly change which version of each file is in use.

Conceptually the SCM (like Git) and DVC both have repositories holding multiple versions of each file. One can check out “version N” and the corresponding files will appear in the working directory, then later check out “version N+1” and the files will change around to match.

Image courtesy dvc.org

On the DVC side, this is handled in the DVC cache. Files stored in the cache are indexed by a checksum (MD5 hash) of their content. As the individual files managed by DVC change, their checksums change, and corresponding new cache entries are created. The cache holds every instance of each file.
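Concretely, the cache lives under .dvc/cache, and each file is stored at a path derived from its checksum (the first two hex digits become a directory name). A sketch of the layout, reusing the checksum for data/data.xml that appears in the example DVC file below:

$ md5sum data/data.xml
a304afb96060aad90176268345e10355  data/data.xml
$ ls .dvc/cache/a3/
04afb96060aad90176268345e10355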

For efficiency, DVC uses one of several linking methods (depending on file-system support) to insert files into the workspace without copying them. This way DVC can quickly update the working directory when requested.
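Which linking method is used is configurable. A hedged sketch using DVC’s cache.type setting (check the DVC documentation for the values your version supports):

$ dvc config cache.type reflink,hardlink,symlink,copy   # try reflinks first, then fall back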

DVC uses what are called “DVC files” to describe both the data files and the workflow steps. Each workspace has multiple DVC files, each describing one or more data files with their corresponding checksums, and each describing a command to execute in the workflow.

cmd: python src/prepare.py data/data.xml
deps:
- md5: b4801c88a83f3bf5024c19a942993a48
  path: src/prepare.py
- md5: a304afb96060aad90176268345e10355
  path: data/data.xml
md5: c3a73109be6c186b9d72e714bcedaddb
outs:
- cache: true
  md5: 6836f797f3924fb46fcfd6b9f6aa6416.dir
  metric: false
  path: data/prepared
wdir: .

This example DVC file comes from the DVC Getting Started example (https://github.com/iterative/example-get-started) and shows the initial step of a workflow. We’ll talk more about workflows in the next section. For now, note that this command has two dependencies, src/prepare.py and data/data.xml, and an output data directory named data/prepared. Everything has an MD5 hash; as these files change, the MD5 hashes change and new instances of the changed data files are stored in the DVC cache.

DVC files are checked into the SCM-managed (Git) repository. As commits are made to the SCM repository, each DVC file is updated (if appropriate) with new checksums for each file. Therefore with DVC one can recreate exactly the data set present at each commit, and the team can exactly recreate each development step of the project.

DVC files are roughly similar to the “pointer” files used in Git-LFS.
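For comparison, a Git-LFS pointer file is just a few lines of text naming the content by its hash. The hash and size here are invented for illustration:

version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345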

The DVC team recommends using an SCM tag or branch for each experiment. Accessing the data files, code, and configuration appropriate to an experiment is then as simple as switching branches. The SCM updates the code and configuration files, and DVC updates the data files, automatically.
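In practice this is a two-command affair (a sketch; the tag name is hypothetical):

$ git checkout experiment-12   # restores the code, configuration, and DVC files
$ dvc checkout                 # restores the matching data files from the DVC cache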

This means there is no more scratching your head trying to remember which data files were used for what experiment. DVC tracks all that for you.

DVC remembers the exact sequence of commands used at what point of time

The DVC files remember not only the files used in a particular execution stage, but the command that is executed in that stage.

Reproducing a machine learning result requires not only the precise same data files, but also the same processing steps and the same code and configuration. Consider a typical step in creating a model: splitting sample data into sets used by later steps. You might have a Python script, prepare.py, to perform that split, and input data in an XML file named data/data.xml.

$ dvc run -d data/data.xml -d src/prepare.py \
            -o data/prepared \
            python src/prepare.py

This is how we use DVC to record that processing step. The DVC “run” command creates a DVC file based on the command-line options.

The -d option defines a dependency; in this case we see an input file in XML format and a Python script. The -o option records an output; here an output data directory is listed. Finally, the command to execute is a Python script. Hence we have input data, code and configuration, and output data, all dutifully recorded in the resulting DVC file, which corresponds to the DVC file shown in the previous section.

If prepare.py changes from one commit to the next, the SCM automatically tracks the change. Likewise, any change to data.xml results in a new instance in the DVC cache, which DVC automatically tracks. The resulting data directory is tracked by DVC as well, if its contents change.
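To see which stages DVC considers out of date, there is a status command. A sketch, with output along these lines (the stage file name follows the Getting Started example; exact formatting varies by DVC version):

$ dvc status
prepare.dvc:
    changed deps:
        modified:  src/prepare.py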

A DVC file can also simply refer to a file, like so:

md5: 99775a801a1553aae41358eafc2759a9
outs:
- cache: true
  md5: ce68b98d82545628782c66192c96f2d2
  metric: false
  path: data/Posts.xml.zip
  persist: false
wdir: ..

This results from the “dvc add file” command, which is used when you simply have a data file that is not the result of another command. For example, https://dvc.org/doc/tutorial/define-ml-pipeline shows the following commands, which produce the DVC file immediately above:

$ wget -P data https://dvc.org/s3/so/100K/Posts.xml.zip
$ dvc add data/Posts.xml.zip

The file Posts.xml.zip is then the data source for a sequence of steps shown in the tutorial that derive information from this data.

Take a step back and recognize that these are individual steps in a larger workflow, or what DVC calls a pipeline. With “dvc add” and “dvc run” you can string together several stages, each created with a “dvc run” command and each described by a DVC file. For a complete working example, see https://github.com/iterative/example-get-started and https://dvc.org/doc/tutorial.

This means each working directory will have several DVC files, one for each stage of the pipeline used in that project. DVC scans the DVC files to build a Directed Acyclic Graph (DAG) of the commands required to reproduce the output(s) of the pipeline. Each stage is like a mini-Makefile, in that DVC executes the command only if the dependencies have changed. It differs in that DVC does not consider only file-system timestamps, as Make does, but whether the file content has changed, as determined by comparing the checksum recorded in the DVC file against the current state of the file.
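Reproducing the pipeline is then one command. A sketch, assuming the final stage is described by a file named evaluate.dvc (the name follows the Getting Started example; DVC versions of this era take a stage file as the target):

$ dvc repro evaluate.dvc   # re-runs only the stages whose dependencies changed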

The bottom line: there is no more scratching your head trying to remember which version of which script was used for each experiment. DVC tracks all of that for you.

Image courtesy dvc.org

DVC makes it easy to share data and code between team members

A machine learning researcher is probably working with colleagues, and needs to share data, code, and configuration. Or the researcher may need to deploy data to remote systems, for example to run software on a cloud computing system (AWS, GCP, etc.), which often means uploading data to the corresponding cloud storage service (S3, GCP, etc.).

The code and configuration side of a DVC workspace is stored in the SCM (like Git). Using normal SCM commands (like “git clone”) one can easily share it with colleagues. But how about sharing the data with colleagues?

DVC has the concept of remote storage. A DVC workspace can push data to, or pull data from, remote storage. The remote storage pool can exist on any of the cloud storage platforms (S3, GCP, etc) as well as an SSH server.

Therefore, to share code, configuration, and data with a colleague, you first define a remote storage pool. The configuration file holding the remote storage definition is tracked by the SCM. You next push the SCM repository to a shared server, which carries the DVC configuration file with it. When your colleague clones the repository, they can immediately pull the data from the remote storage.
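Put together, the sharing workflow might look like the following sketch (the bucket name and repository URL are placeholders):

# on your machine
$ dvc remote add -d storage s3://my-bucket/dvc-storage   # define the default remote
$ git add .dvc/config                                    # the remote definition lives here
$ git commit -m "Configure DVC remote storage"
$ dvc push                                               # upload the data to the remote
$ git push origin master

# on your colleague's machine
$ git clone https://example.com/team/project.git
$ cd project
$ dvc pull                                               # fetch the data described by the DVC files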

This means your colleagues no longer have to scratch their head wondering how to run your code. They can easily replicate the exact steps, and the exact data, used to produce the results.

Image courtesy dvc.org

Conclusion

The key to repeatable results is good practice: keeping proper versions of not only the data but also the code and configuration files, and automating the processing steps. Successful projects sometimes require collaboration with colleagues, which is made easier through cloud storage systems. Some jobs require AI software running on cloud computing platforms, which in turn requires data files to be stored on cloud storage platforms.

With DVC, a machine learning research team can ensure their data, configuration, and code are in sync with each other. It is an easy-to-use system that efficiently manages shared data repositories alongside an SCM system (like Git) holding the configuration and code.