More machine learning models need to reach production faster, in a repeatable and consistent manner, and with the right governance.

In March 2018 at the Strata Data Conference, IBM VP Dinesh Nirmal described a common refrain in enterprise machine learning: “The story of enterprise Machine Learning — It took me 3 weeks to develop the model. It’s been >11 months, and it’s still not deployed.”

Echoing this sentiment, Forrester states in their 2020 report, “A top complaint of data science, application development and delivery (AD&D) teams, and, increasingly, line-of-business leaders is the challenge in deploying, monitoring, and governing machine learning models in production. Manual handoffs, frantic monitoring, and loose governance prevent organizations from deploying more AI use cases.”

MLOps and Kubeflow Pipelines speed deployment

To solve this problem, data scientists, data engineers, and DevOps practitioners worked together to give the discipline the rigor of engineering rather than science. The MLOps and DataOps domains grew out of this need, and data and machine learning pipelines became the primary vehicle for driving both.

Kubeflow became a leading solution for these MLOps needs. It is an end-to-end machine learning platform focused on distributed training, hyperparameter optimization, production model serving and management, and machine learning pipelines with metadata and lineage tracking.
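
To make the pipeline idea concrete, here is a minimal sketch of a pipeline definition using the Kubeflow Pipelines (KFP) v1 Python SDK. The training step, pipeline name, parameter, and base image are placeholders chosen for illustration, not taken from any particular deployment.

```python
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def train(learning_rate: float) -> str:
    """Placeholder training step; a real component would fit and persist a model."""
    print(f"Training with learning_rate={learning_rate}")
    return "model-v1"


# Wrap the Python function as a reusable, containerized pipeline component
train_op = create_component_from_func(train, base_image="python:3.9")


@dsl.pipeline(
    name="demo-training-pipeline",
    description="Minimal single-step training pipeline",
)
def demo_pipeline(learning_rate: float = 0.01):
    train_op(learning_rate)


if __name__ == "__main__":
    # Compile the pipeline into a workflow spec that can be uploaded to
    # a Kubeflow Pipelines instance, where each run is tracked with
    # metadata and lineage
    kfp.compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```

Compiling the pipeline produces a workflow spec that the Kubeflow Pipelines service can schedule, monitor, and track; with the kfp-tekton compiler, the same pipeline definition can instead be compiled to Tekton resources.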
