Earlier this year (2020), I decided to move from data science fully into the engineering side of machine learning. I wanted a more efficient and scalable way to deploy machine learning models, to decouple my models from my app, and to version them properly.

Conventionally, after training a model, I import it into my Flask app and run inference whenever the model's API endpoint is called. I do use Docker to package the app and deploy it to Google Cloud or another platform, but I suspect there is more to it than that.
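To make the conventional pattern concrete, here is a minimal sketch of what such a Flask app looks like: the model is loaded once at startup and inference runs inside the request handler. The stub `load_model` function and the `/predict` route are illustrative placeholders, not code from the article; in practice the model would come from something like `joblib.load("model.pkl")`.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def load_model():
    # Placeholder for loading a real trained model from disk,
    # e.g. joblib.load("model.pkl") or tf.keras.models.load_model(...).
    return lambda features: sum(features)  # toy "predictor" for illustration

# Loaded once at import time -- this is exactly the coupling between
# the model and the app that the article sets out to remove.
model = load_model()

@app.route("/predict", methods=["POST"])
def predict():
    # Inference happens inline, inside the web request handler.
    features = request.get_json()["features"]
    return jsonify({"prediction": model(features)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Because the model lives inside the app process, every model update means rebuilding and redeploying the whole service, which is precisely what tools like TensorFlow Serving and Kubernetes help avoid.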


I started diving deep into TensorFlow Serving, TensorFlow Extended (TFX), and Kubeflow (Kubernetes made easier for machine learning projects). Along the way, I discovered that I needed to know more (maybe just a little) about the parts of Kubernetes involved in deploying, orchestrating, and scaling machine learning apps.

That journey and curiosity led to this article. So if you are like me, ready to up your game and add one more tool on the way to becoming a unicorn data scientist, as described by Elle O’Brien in this article, then this article is for you.

“…so hard, the rare data scientist who can also develop quality software and play engineer is called a unicorn!”- Elle O’Brien

This article also follows a project-based approach, so you can port the ideas and code shown here directly into your own machine learning project.

#machine learning tools #kubernetes #docker

Kubernetes vs Docker - What You Should Know as a Machine Learning Engineer