Royce Reinger

DVC: Data Version Control, Git for Data & Models

Data Version Control, or DVC, is a command-line tool and VS Code extension that helps you develop reproducible machine learning projects:

  1. Version your data and models. Store them in your cloud storage but keep their version info in your Git repo.
  2. Iterate fast with lightweight pipelines. When you make changes, only run the steps impacted by those changes.
  3. Track experiments in your local Git repo (no servers needed).
  4. Compare any data, code, parameters, model, or performance plots.
  5. Share experiments and automatically reproduce anyone's experiment.

Quick start

Please read our Command Reference for a complete list of commands.

A common CLI workflow includes:

Track data

$ git add train.py params.yaml

$ dvc add images/
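
As a rough sketch (the file name here simply follows the example above), dvc add writes a small metafile that Git tracks in place of the data itself, so a typical follow-up is to commit that pointer:

$ git add images.dvc .gitignore    # pointer file and ignore rule created by dvc add

$ git commit -m 'Track raw images with DVC'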

Connect code and data

$ dvc stage add -n featurize -d images/ -o features/ python featurize.py

$ dvc stage add -n train -d features/ -d train.py -o model.p -M metrics.json python train.py
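
Both dvc stage add calls above record the pipeline in dvc.yaml, so the whole graph can be (re)run in order with a single command. A minimal sketch:

$ dvc repro    # runs featurize and train, caching their outputs

$ git add dvc.yaml dvc.lock

$ git commit -m 'Define featurize/train pipeline'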

Make changes and experiment

$ dvc exp run -n exp-baseline

$ vi train.py

$ dvc exp run -n exp-code-change

Compare and select experiments

$ dvc exp show

$ dvc exp apply exp-baseline
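
Experiments can also be compared pairwise from the command line. A small sketch, reusing the experiment names from the runs above:

$ dvc exp diff exp-baseline exp-code-change    # show changed params and metrics between the two runs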

Share code

$ git add .

$ git commit -m 'The baseline model'

$ git push

Share data and ML models

$ dvc remote add myremote -d s3://mybucket/image_cnn

$ dvc push
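
On another machine, a collaborator gets both halves back with Git plus DVC. A minimal sketch (the repository URL is just a placeholder), assuming the default remote is configured in the repo:

$ git clone https://github.com/example/project.git && cd project

$ dvc pull    # downloads the data and models referenced by the current commit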

How DVC works

We encourage you to read our Get Started docs to better understand what DVC does and how it can fit your scenarios.

The closest analogies to describe the main DVC features are these:

  1. Git for data: Store and share data artifacts (like Git-LFS but without a server) and models, connecting them with a Git repository. Data management meets GitOps!
  2. Makefiles for ML: Describe how data or model artifacts are built from other data and code in a standard format. Now you can version your data pipelines with Git.
  3. Local experiment tracking: Turn your machine into an ML experiment management platform, and collaborate with others using existing Git hosting (GitHub, GitLab, etc.).

Git is employed as usual to store and version code (including DVC meta-files as placeholders for data). DVC stores data and model files seamlessly in a cache outside of Git, while preserving almost the same user experience as if they were in the repo. To share and back up the data cache, DVC supports multiple remote storage platforms - any cloud (S3, Azure, Google Cloud, etc.) or on-premise network storage (via SSH, for example).
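
In practice this means that switching data versions is a two-step operation: Git moves the small metafiles, and DVC syncs the workspace from its cache. A minimal sketch, assuming a tag v1.0 exists in the repo:

$ git checkout v1.0    # restores code and the .dvc / dvc.lock metafiles

$ dvc checkout    # updates data and model files in the workspace to match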


DVC pipelines (computational graphs) connect code and data together. They specify all the steps required to produce a model: input dependencies (including code and data), the commands to run, and the output information to be saved.
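
Two commands are handy for inspecting such a graph; for example:

$ dvc dag    # print the stage dependency graph

$ dvc status    # report which stages are out of date with their dependencies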

Last but not least, DVC Experiment Versioning lets you prepare and run a large number of experiments. Their results can be filtered and compared based on hyperparameters and metrics, and visualized with multiple plots.
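
A short sketch of that comparison workflow (plot definitions are assumed to exist in dvc.yaml):

$ dvc exp show    # tabulate experiments with their hyperparameters and metrics

$ dvc plots diff    # render plots comparing the workspace against HEAD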

Visual Studio Code Extension


To use DVC as a GUI right from your VS Code IDE, install the DVC Extension from the Marketplace. It currently features experiment tracking and data management, and more features (data pipeline support, etc.) are coming soon!

DVC Extension for VS Code

Note: You'll have to install core DVC on your system separately (as detailed below). The Extension will guide you if needed.

Installation

There are several ways to install DVC: in VS Code; using snap, choco, brew, conda, pip; or with an OS-specific package. Full instructions are available here.

Snapcraft (Linux)


snap install dvc --classic

This corresponds to the latest tagged release. Add --beta for the latest tagged release candidate, or --edge for the latest main version.

Chocolatey (Windows)


choco install dvc

Brew (macOS)


brew install dvc

Anaconda (Any platform)

conda install -c conda-forge mamba # installs much faster than conda
mamba install -c conda-forge dvc

Depending on the remote storage type you plan to use to keep and share your data, you might need to install optional dependencies: dvc-s3, dvc-azure, dvc-gdrive, dvc-gs, dvc-oss, dvc-ssh.
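
For example, to add Amazon S3 support in the same environment (package name as published on conda-forge):

mamba install -c conda-forge dvc-s3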

PyPI (Python)


pip install dvc

Depending on the remote storage type you plan to use to keep and share your data, you might need to specify one of the optional dependencies: s3, gs, azure, oss, ssh (or all to include them all). The command should look like this: pip install 'dvc[s3]' (in this case AWS S3 dependencies such as boto3 will be installed automatically).
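
Several extras can also be combined in a single install; for example:

pip install 'dvc[s3,gs]'  # Amazon S3 plus Google Cloud Storage support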

To install the development version, run:

pip install git+https://github.com/iterative/dvc

Package (Platform-specific)


Self-contained packages for Linux, Windows, and macOS are available. The latest version of the packages can be found on the GitHub releases page.

Ubuntu / Debian (deb)

sudo wget https://dvc.org/deb/dvc.list -O /etc/apt/sources.list.d/dvc.list
wget -qO - https://dvc.org/deb/iterative.asc | sudo apt-key add -
sudo apt update
sudo apt install dvc

Fedora / CentOS (rpm)

sudo wget https://dvc.org/rpm/dvc.repo -O /etc/yum.repos.d/dvc.repo
sudo rpm --import https://dvc.org/rpm/iterative.asc
sudo yum update
sudo yum install dvc

Download Details:

Author: iterative
Source Code: https://github.com/iterative/dvc 
License: Apache-2.0 license

