Managing the hardware, drivers, libraries, and packages that make up a machine learning (ML) development environment can be hard.

In this talk, I will show how Docker can simplify setting up a local ML development environment, and how Kubernetes and Kubeflow can scale that standardised environment into web-based Jupyter environments for a large number of users, served from public cloud providers as well as on-premise clusters.
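To make the Docker half concrete, here is a minimal sketch of a Dockerfile for a standardised ML development image. The base image, the pinned package versions, the port, and the `ml-dev` tag used in the commands afterwards are illustrative assumptions, not something prescribed by the talk.

```dockerfile
# Minimal sketch of a reproducible ML development image (illustrative versions).
FROM python:3.10-slim

# Pin the notebook server and core ML libraries so every machine gets the same stack.
RUN pip install --no-cache-dir \
    jupyterlab==4.1.* \
    numpy==1.26.* \
    pandas==2.2.* \
    scikit-learn==1.4.*

# Work out of a mounted project directory and expose the notebook server.
WORKDIR /workspace
EXPOSE 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
```

Building it with `docker build -t ml-dev .` and running `docker run -p 8888:8888 -v "$PWD":/workspace ml-dev` gives every team member the same Jupyter environment regardless of their host machine; Kubeflow's notebook servers apply the same idea at cluster scale by launching a pod from a shared image like this one for each user.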
