PyTorch Lightning is one of the hottest AI libraries of 2020, and it makes AI research scalable and fast to iterate on. But if you use PyTorch Lightning, you’ll need to do hyperparameter tuning.

Proper hyperparameter tuning can make the difference between a good training run and a failing one. A well-chosen learning rate or layer size can dramatically boost the accuracy of your model.

In this blog post, we’ll demonstrate how to use Ray Tune, an industry standard for hyperparameter tuning, with PyTorch Lightning. Ray Tune is part of Ray, a library for scaling Python.


It is available as a PyPI package and can be installed like this:

pip install "ray[tune]"

To use Ray Tune with PyTorch Lightning, we only need to add a few lines of code.

Getting started with Ray Tune + PTL!

To run the code in this blog post, be sure to first run:

pip install "ray[tune]" 
pip install "pytorch-lightning>=1.0" 
pip install "pytorch-lightning-bolts>=0.2.5"

The example below was tested on ray==1.0.1, pytorch-lightning==1.0.2, and pytorch-lightning-bolts==0.2.5. See the full example here.

Let’s first start with some imports:

import os

import torch
from torch.nn import functional as F
import pytorch_lightning as pl
from pl_bolts.datamodules import MNISTDataModule
from ray.tune.integration.pytorch_lightning import TuneReportCallback

After imports, there are three easy steps.

  1. Create your LightningModule
  2. Create a function that calls Trainer.fit with the Tune callback
  3. Use tune.run to execute your hyperparameter search.

