Learn how to deliver AI for Big Data using Dash & Databricks in this recorded webinar with Peter Kim of Plotly and Prasad Kona of Databricks.

We’re delighted to announce that Plotly and Databricks are partnering to bring cloud-distributed Artificial Intelligence (AI) and Machine Learning (ML) to a vastly wider audience of business users. By integrating the Plotly Dash front-end with the Databricks backend, we offer a seamless process for transforming AI and ML models into production-ready, dynamic, interactive web applications. This partnership empowers Python developers to quickly and easily build Dash apps that connect to a Databricks Spark cluster. The direct integration, **databricks-dash**, is distributed by Plotly and available with **Plotly’s Dash Enterprise**.

Plotly’s Dash is a Python framework that enables developers to build interactive, data-rich analytical web apps in pure Python, with no JavaScript required. Traditional “full-stack” app development is done in teams: some members specialize in backend/server technologies like Python, some in front-end technologies like React, and some in data science. Dash provides a tightly integrated backend and front-end, written entirely in Python. This means that data science teams producing models, visualizations, and complex analyses no longer need to rely on backend specialists to expose these models to the front-end via APIs, or on front-end specialists to build user interfaces that connect to those APIs. If you’re interested in Dash’s architecture, please see our “Dash is React for Python” article.
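To make this concrete, here is a minimal sketch of a Dash app: the layout and the interactive callback are written in plain Python, and Dash renders the React front-end for you. The dataset comes from `px.data.tips()`, a small sample bundled with Plotly Express; the component IDs and the histogram are just illustrative choices, not part of any Databricks integration.

```python
# A minimal Dash app: a dropdown drives a chart through a Python callback.
from dash import Dash, dcc, html, Input, Output
import plotly.express as px

app = Dash(__name__)

app.layout = html.Div([
    html.H2("Restaurant bills by day"),
    dcc.Dropdown(
        id="day",
        options=[{"label": d, "value": d} for d in ["Thur", "Fri", "Sat", "Sun"]],
        value="Sat",
    ),
    dcc.Graph(id="chart"),
])

# The callback re-renders the figure whenever the dropdown value changes.
@app.callback(Output("chart", "figure"), Input("day", "value"))
def update_chart(day):
    df = px.data.tips()  # sample dataset bundled with Plotly Express
    return px.histogram(df[df["day"] == day], x="total_bill")

if __name__ == "__main__":
    app.run(debug=True)  # on older Dash versions: app.run_server(debug=True)
```

Running the script starts a local development server; opening it in a browser shows the dropdown and chart, with every interaction handled by the Python callback rather than hand-written JavaScript.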

Databricks’ unified platform for data and AI rests on top of Apache Spark, a distributed, general-purpose cluster-computing framework originally developed by the Databricks founders. Given sufficient hardware and network capacity, Apache Spark scales horizontally by design thanks to its distributed architecture. Spark offers a rich collection of APIs, the MLlib machine-learning library, and integration with popular Python scientific libraries such as pandas and scikit-learn. The Databricks Data Science Workspace provides managed, optimized, and secure Spark clusters, enabling developers and data scientists to focus on building and optimizing models rather than on infrastructure concerns such as speed, reliability, and fault tolerance. Databricks also abstracts away many manual administrative duties (such as creating a cluster, auto-scaling hardware, and managing users) and simplifies development by letting users work in IPython-like notebooks.
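As a rough illustration of the kind of workload such a cluster runs, here is a minimal PySpark MLlib sketch. In a Databricks notebook a SparkSession named `spark` is already provided, so the `getOrCreate()` call simply reuses it (and builds a local session anywhere else); the toy data is invented for this example.

```python
# A minimal MLlib sketch: fit a linear regression on a Spark DataFrame.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# On Databricks this reuses the notebook's existing session.
spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

# Toy data, invented for the example: two features and a target.
df = spark.createDataFrame(
    [(1.0, 1.0, 3.1), (2.0, 3.0, 6.0), (3.0, 2.0, 7.2), (4.0, 5.0, 11.9)],
    ["x1", "x2", "y"],
)

# MLlib estimators expect the features packed into a single vector column.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(df)

model = LinearRegression(featuresCol="features", labelCol="y").fit(train)
print(model.coefficients, model.intercept)
```

The same code runs unchanged whether the cluster is a single laptop or hundreds of Databricks nodes, which is the point: Spark's distribution is handled beneath the DataFrame and MLlib APIs.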


Dash is an ideal front-end for your Databricks Spark backend.