5 Apache Spark Best Practices For Data Science

Why move to Spark?

Although we all talk about Big Data, it usually takes some time in your career until you encounter it. For me at Wix.com it came quicker than I expected: having well over 160M users generates a lot of data, and with that comes the need to scale our data processes.

While there are other options out there (Dask, for example), we decided to go with Spark for two main reasons: (1) it’s the current state of the art and widely used for Big Data, and (2) we had the infrastructure needed for Spark in place.

How to write in PySpark for pandas people

Chances are you’re familiar with pandas, and when I say familiar I mean fluent: your mother tongue :)

The headline of the following talk says it all: Data Wrangling with PySpark for Data Scientists Who Know Pandas, and it’s a great one.
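To make that mapping concrete, here is a minimal sketch of how a familiar pandas aggregation translates to PySpark, assuming a local Spark session; the tiny DataFrame and its column names are made up for illustration:

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pandas-to-pyspark").getOrCreate()

# pandas: eager, single-machine, in-memory
pdf = pd.DataFrame({"user": ["a", "a", "b"], "amount": [1.0, 2.0, 3.0]})
pandas_result = pdf.groupby("user")["amount"].mean()

# PySpark: the same aggregation, built lazily and run across the cluster
sdf = spark.createDataFrame(pdf)
spark_result = sdf.groupBy("user").agg(F.mean("amount").alias("mean_amount"))
spark_result.show()  # show() is an action; it triggers the actual computation
```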

This is a very good time to note that simply getting the syntax right is only a starting point. For a successful PySpark project you need a lot more: you need to understand how Spark works.

It’s hard to get Spark to work properly, but when it works — it works great!

Spark in a nutshell

I will only go knee-deep here, but for a more extensive explanation I recommend visiting the following article and reading its MapReduce section: The Hitchhikers guide to handle Big Data using Spark.

The concept we want to understand here is Horizontal Scaling.

It’s easier to start with **Vertical Scaling**. If we have pandas code that works great but then the data becomes too big for it, we can potentially move to a stronger machine with more memory and hope it manages. This means we still have one machine handling the entire data at the same time: we scaled vertically.

If instead we decided to use MapReduce, splitting the data into chunks and letting different machines handle each chunk, we’re scaling horizontally.

5 Spark Best Practices

These are the 5 Spark best practices that helped me reduce runtime by 10x and scale our project.

1 - Start small — Sample the data

If we want to make big data work, we first want to see that we’re heading in the right direction using a small chunk of data. In my project I sampled 10% of the data and made sure the pipelines worked properly. This allowed me to use the SQL section in the Spark UI and see the numbers grow through the entire flow, while not waiting too long for the process to run.

From my experience, if you reach your desired runtime with the small sample, you can usually scale up rather easily.
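A minimal sketch of that workflow, assuming a Spark session and a hypothetical input path:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sampled-run").getOrCreate()
df = spark.read.parquet("/data/events")  # hypothetical input path

# Develop against ~10% of the rows; the seed keeps runs reproducible.
sample = df.sample(fraction=0.1, seed=42)

# Run the pipeline on `sample`, check the SQL tab in the Spark UI,
# and only switch back to the full `df` once the numbers look right.
```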

2 - Understand the basics — Tasks, partitions, cores

This is probably the single most important thing to understand when working with Spark:

1 Partition makes for 1 Task that runs on 1 Core

You always have to be aware of the number of partitions you have: follow the number of tasks in each stage and match them with the correct number of cores in your Spark connection. A few tips and rules of thumb to help you do this (all of them require testing with your case; see the sketch after this list):

  • The ratio between tasks and cores should be around 2–4 tasks for each core.
  • The size of each partition should be about 200MB–400MB; this depends on the memory of each worker, so tune it to your needs.
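Here is a minimal sketch of checking those numbers, reusing the hypothetical session and path from the earlier snippet; the 3x multiplier is just a starting point inside the 2–4 range:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-check").getOrCreate()
df = spark.read.parquet("/data/events")  # hypothetical input path

num_cores = spark.sparkContext.defaultParallelism  # cores Spark can use
num_partitions = df.rdd.getNumPartitions()
print(f"{num_partitions} partitions over {num_cores} cores")

# Aim for roughly 2-4 tasks per core; repartition if we're far off.
df = df.repartition(num_cores * 3)
```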

3 - Debugging Spark

Spark works with lazy evaluation, which means it waits until an action is called before executing the graph of computation instructions. Examples of actions are show() and count().

This makes it very hard to understand where the bugs, or the places that need optimization, are in our code. One practice I found helpful was splitting the code into sections using df.cache() and then calling df.count() to force Spark to compute the df at each section.
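A minimal sketch of that practice, with a made-up input path, transformations, and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("debug-sections").getOrCreate()
df = spark.read.parquet("/data/events")  # hypothetical input path

# Section 1: filtering
active = df.filter(F.col("status") == "active").cache()
active.count()  # action: forces Spark to compute this section now

# Section 2: aggregation
per_user = active.groupBy("user_id").agg(F.count("*").alias("n_events")).cache()
per_user.count()  # forces computation of the aggregation section
```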

Now, using the Spark UI, you can look at the computation of each section and spot the problems. It’s important to note that using this practice without the sampling we mentioned in (1) will probably create a very long runtime that is hard to debug.

4 - Finding and solving skewness

Let’s start by defining skewness. As we mentioned, our data is divided into partitions, and as transformations are applied, the size of each partition is likely to change. This can create a wide variation in size between partitions, which means we have skewness in our data.

Finding the skewness can be done by looking at the stage details in the Spark UI and checking for a significant difference between the max and median task durations:

[Figure: task-duration summary from the Spark UI stage details. The big variance (Median=3s, Max=7.5min) might suggest a skewness in the data.]

This means that we have a few tasks that were significantly slower than the others.

Why is this bad? It might cause other stages to wait for these few tasks and leave cores idle, doing nothing.

Preferably, if you know where the skewness is coming from, you can address it directly and change the partitioning. If you have no idea where it comes from, or no option to solve it directly, try the following:

Adjusting the ratio between the tasks and cores

As we mentioned, by having more tasks than cores we hope that while the longer tasks are running, the other cores will stay busy with the remaining tasks. Although this is true, the ratio mentioned earlier (2-4:1) can’t really address such a big variance in task duration. We can try to increase the ratio to 10:1 and see if it helps, but there could be other downsides to this approach.
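A minimal sketch of bumping the ratio, reusing the assumptions from the earlier snippets:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("skew-ratio").getOrCreate()
df = spark.read.parquet("/data/events")  # hypothetical input path

# Raise the task-to-core ratio to ~10:1 so the short tasks can keep cores
# busy while the few long, skewed tasks finish.
num_cores = spark.sparkContext.defaultParallelism
df = df.repartition(num_cores * 10)
```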
