Google Cloud Platform offers a wide range of GPU options to choose from. If you are doing image processing in your ML models, you should definitely consider running them on GPUs for quicker results. Based on my experience, I’ll be sharing some results that will hopefully help you pick the right one and get started. I’ll be focusing on the actual performance I got out of the GPUs in my ML models, as opposed to the hardware details/architecture of these GPUs.
Here are the GPU options available and how much they cost as of July 16th, 2020:
NVIDIA® Tesla® T4: $255.50/month
NVIDIA® Tesla® K80: $328.50/month
NVIDIA® Tesla® P4: $438.00/month
NVIDIA® Tesla® P100: $1,065.80/month
NVIDIA® Tesla® V100: $1,810.40/month
More information here: https://cloud.google.com/compute/docs/gpus
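If you want to try one of these out, attaching a GPU to a Compute Engine VM is a single flag on instance creation. Here's a minimal sketch; the instance name, zone, machine type, and image family below are illustrative placeholders (check the docs above for which zones actually offer each GPU type):

```shell
# Create an n1-standard VM with a single Tesla T4 attached.
# GPU instances must use --maintenance-policy=TERMINATE since they
# cannot live-migrate during host maintenance.
gcloud compute instances create my-t4-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=common-cu110 \
    --image-project=deeplearning-platform-release
```

Swap `nvidia-tesla-t4` for `nvidia-tesla-k80`, `nvidia-tesla-p4`, `nvidia-tesla-p100`, or `nvidia-tesla-v100` to try the other options.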
As you can see, the P100 and V100 are by far the most expensive options. If you only need one briefly to finish some processing quickly, that’s probably fine, but for anything longer-running the cost adds up fast.
The Tesla T4 is easily my first choice when I’m running heavy image-processing models. It’s cheap and very efficient! The cool thing is that at this price, you can afford to keep it running long enough to earn sustained use discounts.
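To get a feel for what sustained use discounts mean for the T4, here’s a rough sketch of GCP-style incremental discounting. The tier multipliers below are assumptions based on GCP’s published N1 schedule (full rate for the first quarter of the month, then 80%, 60%, and 40% for each subsequent quarter), not an official billing calculation:

```python
def sustained_use_cost(base_monthly_rate, fraction_of_month_used):
    """Cost of running a resource for a fraction of the month, with an
    incremental discount applied to each usage quartile (illustrative)."""
    tiers = [  # (quartile upper bound, multiplier on the base rate)
        (0.25, 1.0),
        (0.50, 0.8),
        (0.75, 0.6),
        (1.00, 0.4),
    ]
    cost, lower = 0.0, 0.0
    for upper, mult in tiers:
        if fraction_of_month_used <= lower:
            break
        used_in_tier = min(fraction_of_month_used, upper) - lower
        cost += base_monthly_rate * used_in_tier * mult
        lower = upper
    return cost

# Tesla T4 at the list price above, running the full month:
print(round(sustained_use_cost(255.50, 1.0), 2))  # → 178.85 (70% of list)
```

Under these assumed tiers, a T4 running 24/7 for the whole month effectively costs about $178.85 instead of $255.50.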
Launch dates:
Tesla T4 — September 2018
Tesla K80 — November 2014
I would highly recommend the Tesla T4 over the K80, as its performance is pretty much the same or even better. Here’s a link to my TensorBoard results from running a deep Q-learning model on a Tesla T4 GPU that learned to play Atari’s Breakout in 2.5 days (max score achieved: 406) — https://tensorboard.dev/experiment/1yrW70A0QHertoJDVZk1ig/#scalars&tagFilter=rew
The K80 and P100 are not worth it, given that the Tesla T4 can give you similar performance at a cheaper price. My deep Q-learning models running on TensorFlow (2.x) took about the same time on the T4, K80, and P100 — there wasn’t much difference between them.
#gpu-computing #nvidia #machine-learning #tensorflow #google-cloud-platform