Facebook launched PyTorch 1.0 with integrations for Google Cloud, AWS, and Azure Machine Learning. In this example, I assume that you’re already familiar with Scikit-learn, Pandas, NumPy, and SciPy; those packages are prerequisites for this tutorial.

What is PyTorch?

It’s a Python-based scientific computing package targeted at two sets of audiences:

  • A replacement for NumPy that can use the power of GPUs
  • A deep learning research platform that provides maximum flexibility and speed
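
To make the NumPy comparison concrete, here is a minimal sketch; the device line is optional and simply falls back to the CPU if no CUDA GPU is available:

import numpy as np
import torch

# A NumPy array and a PyTorch tensor built from it (they share the same memory)
a = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(a)

# Move the tensor to a GPU if one is available, otherwise stay on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
t = t.to(device)

print(t * 2)  # element-wise operations work just like in NumPy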

First, we need to cover a few basic concepts that may throw you off balance if you don’t grasp them well enough before going full force on modeling.

In Deep Learning, we see tensors everywhere. Well, Google’s framework is called TensorFlow for a reason! What is a tensor, anyway?

Tensor

In NumPy, you may have an array that has three dimensions, right? That is, technically speaking, a tensor.

A scalar (a single number) has zero dimensions, a vector has one dimension, a matrix has two dimensions and a tensor has three or more dimensions. That’s it!

But, to keep things simple, it is commonplace to call vectors and matrices tensors as well — so, from now on, everything is either a scalar or a tensor.
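
A quick sketch with torch.tensor makes the dimension count easy to verify:

import torch

scalar = torch.tensor(3.14)                # 0 dimensions
vector = torch.tensor([1.0, 2.0, 3.0])     # 1 dimension
matrix = torch.tensor([[1.0, 2.0],
                       [3.0, 4.0]])        # 2 dimensions
tensor3d = torch.rand(2, 3, 4)             # 3 dimensions

for t in (scalar, vector, matrix, tensor3d):
    print(t.dim(), tuple(t.shape))
# prints: 0 (), then 1 (3,), then 2 (2, 2), then 3 (2, 3, 4)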

Imports and Dataset

For this simple example we’ll use only a couple of libraries:

  • Pandas: for data loading and manipulation
  • Scikit-learn: for train-test split
  • Matplotlib: for data visualization
  • PyTorch: for model training

Here are the imports if you just want to copy/paste:
import torch
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
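
If you want to make sure your install works, printing the version string is a quick optional sanity check:

print(torch.__version__)  # any reasonably recent PyTorch version works for this example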

As for the dataset, we’ll use the Beer Recipes dataset, which can be found at this URL: https://www.kaggle.com/jtrofe/beer-recipes

Prepare the folder and files, and download the dataset from Kaggle (the kaggle command-line tool expects an API token in ~/.kaggle/kaggle.json):

This is a dataset of 75,000 homebrewed beers covering 176 different styles. Beer records are user-reported and are classified into one of those 176 styles. The recipes go into as much or as little detail as the user provided, but there are at least 5 useful columns where data was entered for each: Original Gravity, Final Gravity, ABV, IBU, and Color.

We’ll use the Linux terminal:

Remove directories and files

! rm -r input/
! mkdir input/
! cd input/

Show directory

! ls

Download Dataset

! kaggle datasets download -d jtrofe/beer-recipes

Unzip Dataset

! unzip beer-recipes.zip

Move zip file

!mv beer-recipes.zip input/beer.zip

Move CSV files

!mv recipeData.csv input/recipeData.csv
!mv styleData.csv input/styleData.csv

Show folder

! ls input/
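
With the files in place, a quick pandas check confirms the columns mentioned earlier. This is just a sketch: the column names ('OG', 'FG', 'ABV', 'IBU', 'Color', 'Style') and the latin-1 encoding are assumptions based on the Kaggle page, so adjust them if your copy of the CSV differs.

# Load the recipe data and peek at the five numeric columns
df = pd.read_csv("input/recipeData.csv", encoding="latin-1")

cols = ["OG", "FG", "ABV", "IBU", "Color"]
print(df.shape)                           # number of recipes and columns
print(df[cols].describe())                # summary statistics for the five columns
print(df["Style"].nunique(), "styles")    # number of distinct beer styles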

Post-ETL

From here on, we’ll work with a cleaned version of the dataset.
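
As a minimal sketch of what that cleaning could look like (assumed steps, not the exact ETL; 'StyleID' is assumed to be the numeric label column in the Kaggle CSV), we drop rows with missing values in the five feature columns and split the data with the train_test_split helper imported earlier:

# Assumed cleaning steps: keep the five numeric features plus the label,
# drop rows with missing values, then split into train and test sets
features = ["OG", "FG", "ABV", "IBU", "Color"]
clean = df.dropna(subset=features + ["StyleID"])

X = clean[features].values
y = clean["StyleID"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# PyTorch models expect tensors, so convert the NumPy arrays
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.long)
y_test = torch.tensor(y_test, dtype=torch.long)

print(X_train.shape, X_test.shape)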
