“Of the cast of characters mentioned … the only ones every business with data needs are decision-makers and analysts”
— Cassie Kozyrkov (Data Science’s Most Misunderstood Hero)
Analysts and “citizen data scientists” are often the forgotten heroes of every organization. They tend to have a wide range of responsibilities spanning business domain knowledge, data extraction & analysis, predictive analytics & machine learning, and reporting & communication to stakeholders. Piece of cake, right?
And as data has grown in size, many of these practitioners have had to learn parts of big data frameworks and infrastructure management. This increased scope of work is not sustainable, and it directly impacts the most important steps of the workflow: data exploration & experimentation. The result can be rudimentary reports, less accurate predictive models & forecasts, and less innovative insights & ideas.
Am I really suggesting yet another big data framework/service? Don’t we already have Hive, Impala, Presto, Spark, Beam, BigQuery, Athena, and the list goes on? Don’t get me wrong. For teams running data platforms for a large organization, one or more of these frameworks/services is essential for managing hundreds of batch and streaming jobs, a vast ecosystem of data sources, and production pipelines.
My focus here, however, is the data analyst who wants a flexible and scalable solution with minimal code changes to accelerate their existing workflows. Before thinking about multi-node clusters and new frameworks, you’d be surprised how much can be done with your existing code on one machine with some help from GPUs. Using myself as a guinea pig, I wanted to explore a workflow with the following constraints:
These constraints led me to RAPIDS with the help of the new & powerful Nvidia A100 GPU.
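To make the “minimal code changes” point concrete, here is a sketch of the kind of analyst workflow involved. cuDF (the RAPIDS DataFrame library) mirrors much of the pandas API, so in many cases the GPU path is reached by swapping the import; the data and column names below are hypothetical, and the cuDF swap assumes a machine with RAPIDS and a compatible GPU installed.

```python
# A typical analyst snippet: filter, group, aggregate.
# On a GPU machine with RAPIDS installed, swapping the line below for
# `import cudf as pd` is often the only change needed, since cuDF
# mirrors much of the pandas API. (Illustrative data, not from the post.)
import pandas as pd  # -> import cudf as pd for the GPU path

df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "sales": [100.0, 250.0, 175.0, 90.0],
})

# Keep rows above a sales threshold, then average sales per region.
summary = (
    df[df["sales"] > 95]
    .groupby("region")["sales"]
    .mean()
)
print(summary)
```

The same pattern extends to `read_csv`, joins, and most common DataFrame operations, which is what makes this approach attractive before reaching for a cluster.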