From Jupyter Notebooks to production-ready Machine Learning APIs, with just one line of code

Having a trained model served as a scalable API is the end goal of every Machine Learning project. One could argue that monitoring comes after, but the Champagne flows when you have that endpoint ready.

In this story, we use Kale, Kubeflow’s superfood for Data Scientists, to deploy an ML model; all it takes is one line of code!

More than half of the models trained by Data Scientists today never make it into production. Sometimes, the challenges are organizational, but most of the time, technical obstacles seem insurmountable. Either way, a model that is not in production can’t provide business impact.

So, how can a Data Scientist take control over the whole process, from Notebook to serving, without needing to rely on an army of ML engineers and infrastructure architects?

This story demonstrates how we can deploy a Machine Learning model as a scalable API, leaving the orchestration to Kubernetes and the heavy lifting to KFServing. To this end, we use Kale, Kubeflow’s superfood for Data Scientists; all it takes is one line of code!

Learning Rate is a newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news and articles. Subscribe here!

