Cleaning, Analyzing, and Visualizing Survey Data in Python

A tutorial using pandas, matplotlib, and seaborn to produce digestible insights from dirty data

If you work in data at a D2C startup, there’s a good chance you will be asked to look at survey data at least once. And since SurveyMonkey is one of the most popular survey platforms out there, there’s a good chance it’ll be SurveyMonkey data.

The way SurveyMonkey exports data is not necessarily ready for analysis right out of the box, but it’s pretty close. Here I’ll demonstrate a few examples of questions you might want to ask of your survey data, and how to extract those answers quickly. We’ll even write a few functions to make our lives easier when plotting future questions.

We’ll be using pandas, matplotlib, and seaborn to make sense of our data. I used Mockaroo to generate this data; specifically, for the survey question fields, I used “Custom List” and entered in the appropriate fields. You could achieve the same effect by using random.choice in the random module, but I found it easier to let Mockaroo create the whole thing for me. I then tweaked the data in Excel so that it mirrored the structure of a SurveyMonkey export.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

survey_data = pd.read_csv('MOCK_DATA.csv')

Your first reaction to this might be “Ugh. It’s horrible.” I mean, the column names didn’t read in properly, there are a ton of NaNs, and instead of numerical representations like 0/1 or 1/2/3/4/5 we have the actual text answers in each cell… And should we actually be reading this in with a MultiIndex?

But don’t worry, it’s not as bad as you might think. And we’re going to ignore MultiIndexes in this post. (Nobody really likes working with them anyway.) The team needs those insights ASAP — so we’ll come up with some hacky solutions.

First order of business: we’ve been asked to find how the answers to these questions vary by age group. But age is just a raw number; we don’t have a column for age groups! Well, luckily for us, we can pretty easily define a function to create one.

def age_group(age):
    """Creates an age bucket for each participant using the age variable.
    Meant to be used on a DataFrame with .apply()."""

    # Convert to an int, in case the data is read in as an "object" (aka string)
    age = int(age)

    if age < 30:
        bucket = '<30'

    # Age 30 to 39 ('range' excludes the upper bound)
    if age in range(30, 40):
        bucket = '30-39'
    if age in range(40, 50):
        bucket = '40-49'
    if age in range(50, 60):
        bucket = '50-59'

    if age >= 60:
        bucket = '60+'

    return bucket

But if we try to run it like this, we’ll get an error! That’s because we have that first row, and its value for age is the word “age” instead of a number. Since the first step is to convert each age to an int, this will fail.

We need to remove that row from the DataFrame, but it’ll be useful for us later when we rename columns, so we’ll save it as a separate variable.

# Save it as headers; later we can access it via slices, like a list
headers = survey_data.loc[0]

# .drop() defaults to axis=0, which refers to dropping items row-wise
survey_data = survey_data.drop(0)

You will notice that, with the header row removed, we’ve now lost some information when looking at the survey data by itself. Ideally, you will have a list of the questions and their options that were asked in the survey, provided to you by whoever wants the analysis. If not, you should keep a separate way to reference this info, in a document or note that you can look at while working.

OK, now let’s apply the age_group function to get our age_group column.

survey_data['age_group'] = survey_data['What is your age?'].apply(age_group)


Great. Next, let’s subset the data to focus on just the first question. How do the answers to this first question vary by age group?

# Subset the columns from where the question "What was the most..." is asked,
# through to all the available answers. Easiest to use .iloc for this
survey_data.iloc[:5, 3:7]

# Next, assign it to a separate variable corresponding to your question
important_consideration = survey_data.iloc[:, 3:7]

Great. We have the answers in a variable now. But when we go to plot this data, it’s not going to look very good, because of the misnamed columns. Let’s write up a quick function to make renaming the columns simple:

def rename_columns(df, new_names_list):
    """Takes a DataFrame that needs to be renamed and a list of the new
    column names, and returns the renamed DataFrame. Make sure the
    number of columns in the df matches the list length exactly,
    or the function will not work as intended."""

    rename_dict = dict(zip(df.columns, new_names_list))
    df = df.rename(mapper=rename_dict, axis=1)

    return df

Remember headers from earlier? We can use it to create our new_names_list for renaming.


# It's already an array, so we can pass it right in;
# renaming it first is just for readability
ic_col_names = headers[3:7].values

important_consideration = rename_columns(important_consideration, ic_col_names)

Now tack on age_group from the original DataFrame so we can use .groupby. (You could also use pd.concat, but I find this easier.)

important_consideration['age_group'] = survey_data['age_group']


Isn’t that so much nicer to look at? Don’t worry, we’re almost to the part where we get some insights.

consideration_grouped = important_consideration.groupby('age_group').agg('count')


Notice how groupby and other aggregation functions ignore NaNs automatically. That makes our lives significantly easier.

Let’s say we also don’t really care about analyzing under-30 customers right now, so we’ll plot only the other age groups.

# Drop the under-30s, then plot counts for the remaining groups
consideration_grouped.drop('<30').plot(kind='bar',
                                       figsize=(10, 10),
                                       title='Most Important Consideration By Age Group')

OK, this is all well and good, but the 60+ group has more people in it than the other groups, and so it’s hard to make a fair comparison. What do we do? We can plot each age group in a separate plot, and then compare the distributions.

“But wait,” you might think. “I don’t really want to write the code for 4 different plots.”

Well of course not! Who has time for that? Let’s write another function to do it for us.

def plot_counts_by_age_group(groupby_count_obj, age_group, ax=None):
    """Takes a count-aggregated groupby object, an age group, and an
    (optional) AxesSubplot, and draws a barplot for that group."""

    sort_order = groupby_count_obj.loc[age_group].sort_index().index

    sns.barplot(y=groupby_count_obj.loc[age_group].index,
                x=groupby_count_obj.loc[age_group].values,
                order=sort_order,
                palette='rocket', edgecolor='black',
                ax=ax
                ).set_title("Age {}".format(age_group))

I believe it was Jenny Bryan, in her wonderful talk “Code Smells and Feels,” who first tipped me off to the following:

If you find yourself copying and pasting code and just changing a few values, you really ought to just write a function.

This has been a great guide for me in deciding when it is and isn’t worth it to write a function for something. A rule of thumb I like to use is that if I would be copying and pasting more than 3 times, I write a function.

There are also benefits other than convenience to this approach, such as that it:

  • reduces the possibility for error (when copying and pasting, it’s easy to accidentally forget to change a value)
  • makes for more readable code
  • builds up your personal toolbox of functions
  • forces you to think at a higher level of abstraction

(All of which improve your programming skills and make the people who need to read your code happier!)

# Setup for the 2x2 subplot grid.
# Note we don't want to share the x-axis, since we have counts
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8, 6), sharey=True)

# ax.flatten() avoids having to explicitly reference a subplot index in ax.
# Use index[:-1] because we're not plotting the under-30s
for subplot, age_group in zip(ax.flatten(), list(consideration_grouped.index)[:-1]):
    plot_counts_by_age_group(consideration_grouped, age_group, ax=subplot)


This is, of course, generated data from a uniform distribution, and we would thus not expect to see any significant differences between groups. Hopefully your own survey data will be more interesting.

Next, let’s address another format of question. In this one, we need to see how interested each age group is in a given benefit. Happily, these questions are actually easier to deal with than the former type. Let’s take a look:

benefits = survey_data.iloc[:, 7:]

And look: since age_group was added as the last column of survey_data, this slice already includes it and we won’t have to add it separately.

ben_col_names = headers[7:].values

benefits = rename_columns(benefits, ben_col_names)


Cool. Now we have the subsetted data, but this time we can’t just aggregate it by count like we did with the other question. There, the NaNs were excluded, leaving the true count for each response; here, a count would just give us the total number of responses for each age group overall:


This is definitely not what we want! The point of the question is to understand how interested the different age groups are, and we need to preserve that information. All this tells us is how many people in each age group responded to the question.

So what do we do? One way to go would be to re-encode these responses numerically. But what if we want to preserve the relationship on an even more granular level? If we encode numerically, we can take the median and average of each age group’s level of interest. But what if what we’re really interested in is the specific percentage of people per age group who chose each interest level? It’d be easier to convey that info in a barplot, with the text preserved.

That’s what we’re going to do next. And — you guessed it — it’s time to write another function.

order = ['Not Interested at all', 'Somewhat uninterested',
         'Neutral', 'Somewhat interested', 'Very interested']

def plot_benefit_question(df, col_name, age_group, order=order,
                          palette='Spectral', ax=None):
    """Takes a relevant DataFrame, the name of the column (benefit) we want info on,
    and an age group, and returns a plot of the answers to that benefit question."""

    reduced_df = df[[col_name, 'age_group']]

    # Gets the relative frequencies (percentages) for this age group only
    data_to_plot = reduced_df[reduced_df['age_group'] == age_group][col_name].value_counts(normalize=True)

    sns.barplot(y=data_to_plot.index,
                x=data_to_plot.values,
                order=order,
                ax=ax,
                palette=palette,
                edgecolor='black'
                ).set_title('Age {}: {}'.format(age_group, col_name))

Quick note to new learners: Most people won’t say this explicitly, but let me be clear on how visualizations are often made. Generally speaking, it is a highly iterative process. Even the most experienced data scientists don’t just write up a plot with all of these specifications off the top of their head.

Generally, you start with .plot(kind='bar'), or similar depending on the plot you want, and then you change size, color maps, get the groups properly sorted using order=, specify whether the labels should be rotated, and set x- or y-axis labels invisible, and more, depending on what you think is best for whoever will be using the visualizations.

So don’t be intimidated by the long blocks of code you see when people are making plots. They’re usually created over a span of minutes while testing out different specifications, not by writing perfect code from scratch in one go.

Now we can plot another 2x2 for each benefit broken out by age group. But we’d have to do that for all 4 benefits! Again: who has time for that? Instead, we’ll loop over each benefit, and each age group within each benefit, using a couple of for loops. But if you’re interested, I’d challenge you to refactor this into a function if you happen to have many questions that are formatted like this.

# Exclude age_group from the list of benefits
all_benefits = list(benefits.columns[:-1])

# Exclude under-30s
buckets_except_under30 = [group for group in benefits['age_group'].unique()
                          if group != '<30']

for benefit in all_benefits:

    fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8, 6),
                           sharey=True, sharex=True)

    for a, age_group in zip(ax.flatten(), buckets_except_under30):
        plot_benefit_question(benefits, benefit,
                              age_group=age_group, ax=a)
        # Keeps x-axis tick labels for each group of plots
        a.xaxis.set_tick_params(which='both', labelbottom=True)
        # Suppresses displaying the question along the y-axis
        a.set_ylabel('')


Success! And if we wanted to export each individual set of plots, we would simply add the line plt.savefig('{}_interest_by_age.png'.format(benefit)), and matplotlib would automatically save a beautifully sharp rendering of each set of plots.
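For example, tacked onto the end of the outer loop from above:

for benefit in all_benefits:

    fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8, 6),
                           sharey=True, sharex=True)

    for a, age_group in zip(ax.flatten(), buckets_except_under30):
        plot_benefit_question(benefits, benefit,
                              age_group=age_group, ax=a)
        a.xaxis.set_tick_params(which='both', labelbottom=True)
        a.set_ylabel('')

    # Saves one sharp image per set of plots, named after the benefit
    plt.savefig('{}_interest_by_age.png'.format(benefit))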

This makes it especially easy for folks on other teams to use your findings; you can simply export them to a plots folder, and people can browse the images and be able to drag and drop them right into a PowerPoint presentation or other report.

These could use a tad more padding, so if I were to do this again, I would increase the allowed height for the figure slightly.

Let’s do one more example: numerically encoding the benefits, as we mentioned earlier. Then we can generate a heatmap of the correlations between interest in different benefits.

def encode_interest(interest):
    """Takes a string indicating interest and encodes it to an ordinal
    (numerical) variable."""

    if interest == 'Not Interested at all':
        x = 1
    if interest == 'Somewhat uninterested':
        x = 2
    if interest == 'Neutral':
        x = 3
    if interest == 'Somewhat interested':
        x = 4
    if interest == 'Very interested':
        x = 5

    return x

benefits_encoded = benefits.iloc[:, :-1].copy()

# Map the ordinal variable
for column in benefits.iloc[:, :-1].columns:
    benefits_encoded[column] = benefits[column].map(encode_interest)


And lastly, we’ll generate the correlation matrix and plot the correlations.

# Use Spearman instead of the default Pearson, since these
# are ordinal variables!
corr_matrix = benefits_encoded.corr(method='spearman')

fig, ax = plt.subplots(figsize=(8, 6))

# vmin and vmax control the range of the colormap
sns.heatmap(corr_matrix, cmap='RdBu', annot=True, fmt='.2f',
            vmin=-1, vmax=1)

plt.title("Correlations Between Desired Benefits")

# Add tight_layout to ensure the labels don't get cut off
plt.tight_layout()


Again, since the data is randomly generated, we would expect there to be little to no correlation, and that is indeed what we find. (It is funny to note that SQL tutorials are slightly negatively correlated with drag-and-drop features, which is actually what we might expect to see in real data!)

Let’s do one last type of plot, one that’s closely related to the heatmap: the clustermap. Clustermaps make correlations especially informative in analyzing survey responses, because they use hierarchical clustering to (in this case) group benefits together by how closely related they are. So instead of eyeballing the heatmap for which individual benefits are positively or negatively associated, which can get a little crazy when you have 10+ benefits, the plot will be segmented into clusters, which is a little easier to look at.

You can also easily change the linkage type used in the calculation, if you’re familiar with the mathematical details of hierarchical clustering. Some of the available options are ‘single’, ‘average’, and ‘ward’ — I won’t get into the details, but ‘ward’ is generally a safe bet when starting out.

sns.clustermap(corr_matrix, method='ward', cmap='RdBu', annot=True,
               vmin=-1, vmax=1, figsize=(14, 12))

plt.title("Correlations Between Desired Benefits")

Long labels often require a little tweaking, so I’d recommend renaming your benefits to shorter names prior to using a clustermap.

A quick assessment of this shows that the clustering algorithm believes drag-and-drop features and ready-made formulas cluster together, while custom dashboard templates and SQL tutorials form another cluster. Since the correlations are so weak, you can see that the “height” at which the benefits link together to form a cluster is very tall. (This means you should probably not base any business decisions on this finding!) Hopefully the example is illustrative despite the weak relationships.

I hope you enjoyed this quick tutorial about working with survey data and writing functions to quickly generate visualizations of your findings! If you think you know an even more efficient way of doing things, feel free to let me know in the comments — this is just what I came up with when I needed to produce insights on individual questions as quickly as possible.

Originally published by Charlene Chambliss.


Data Science with Python explained

An overview of using Python for data science including Numpy, Scipy, pandas, Scikit-Learn, XGBoost, TensorFlow and Keras.

So you’ve heard of data science and you’ve heard of Python.

You want to explore both but have no idea where to start — data science is pretty complicated, after all.

Don’t worry — Python is one of the easiest programming languages to learn. And thanks to the hard work of thousands of open source contributors, you can do data science, too.

If you look at the contents of this article, you may think there’s a lot to master, but this article has been designed to gently increase the difficulty as we go along.

One article obviously can’t teach you everything you need to know about data science with Python, but once you’ve followed along you’ll know exactly where to look to take the next steps in your data science journey.

Table of contents:

  • Why Python?
  • Installing Python
  • Using Python for Data Science
  • Numeric computation in Python
  • Statistical analysis in Python
  • Data manipulation in Python
  • Working with databases in Python
  • Data engineering in Python
  • Big data engineering in Python
  • Further statistics in Python
  • Machine learning in Python
  • Deep learning in Python
  • Data science APIs in Python
  • Applications in Python
  • Summary

Why Python?

Python, as a language, has a lot of features that make it an excellent choice for data science projects.

It’s easy to learn, simple to install (in fact, if you use a Mac you probably already have it installed), and it has a lot of extensions that make it great for doing data science.

Just because Python is easy to learn doesn’t mean it’s a toy programming language — huge companies like Google use Python for their data science projects, too. They even contribute packages back to the community, so you can use the same tools in your projects!

You can use Python to do way more than just data science — you can write helpful scripts, build APIs, build websites, and much much more. Learning it for data science means you can easily pick up all these other things as well.

Things to note

There are a few important things to note about Python.

Right now, there are two versions of Python that are in common use. They are versions 2 and 3.

Most tutorials, and the rest of this article, will assume that you’re using the latest version of Python 3. It’s just good to be aware that sometimes you can come across books or articles that use Python 2.

The difference between the versions isn’t huge, but sometimes copying and pasting version 2 code when you’re running version 3 won’t work — you’ll have to do some light editing.

The second important thing to note is that Python really cares about whitespace (that’s spaces and return characters). If you put whitespace in the wrong place, your programme will very likely throw an error.

There are tools out there to help you avoid doing this, but with practice you’ll get the hang of it.

If you’ve come from programming in other languages, Python might feel like a bit of a relief: there’s no need to manage memory and the community is very supportive.

If Python is your first programming language you’ve made an excellent choice. I really hope you enjoy your time using it to build awesome things.

Installing Python

The best way to install Python for data science is to use the Anaconda distribution (you’ll notice a fair amount of snake-related words in the community).

It has everything you need to get started using Python for data science including a lot of the packages that we’ll be covering in the article.

If you click on Products -> Distribution and scroll down, you’ll see installers available for Mac, Windows and Linux.

Even if you have Python available on your Mac already, you should consider installing the Anaconda distribution as it makes installing other packages easier.

If you prefer to do things yourself, you can go to the official Python website and download an installer there.

Package Managers

Packages are pieces of Python code that aren’t a part of the language but are really helpful for doing certain tasks. We’ll be talking a lot about packages throughout this article so it’s important that we’re set up to use them.

Because the packages are just pieces of Python code, we could copy and paste the code and put it somewhere the Python interpreter (the thing that runs your code) can find it.

But that’s a hassle — it means that you’ll have to copy and paste stuff every time you start a new project or if the package gets updated.

To sidestep all of that, we’ll instead use a package manager.

If you chose to use the Anaconda distribution, congratulations — you already have a package manager installed. If you didn’t, I’d recommend installing pip.

No matter which one you choose, you’ll be able to use commands at the terminal (or command prompt) to install and update packages easily.

Using Python for Data Science

Now that you’ve got Python installed, you’re ready to start doing data science.

But how do you start?

Because Python caters to so many different requirements (web developers, data analysts, data scientists) there are lots of different ways to work with the language.

Python is an interpreted language which means that you don’t have to compile your code into an executable file, you can just pass text documents containing code to the interpreter!

Let’s take a quick look at the different ways you can interact with the Python interpreter.

In the terminal

If you open up the terminal (or command prompt) and type the word ‘python’, you’ll start a shell session. You can type any valid Python commands in there and they’ll work just like you’d expect.

This can be a good way to quickly debug something but working in a terminal is difficult over the course of even a small project.

Using a text editor

If you write a series of Python commands in a text file and save it with a .py extension, you can navigate to the file using the terminal and run the programme by typing python followed by the file name (for example, python my_script.py).

This is essentially the same as typing the commands one-by-one into the terminal, it’s just much easier to fix mistakes and change what your program does.

In an IDE

An IDE is a professional-grade piece of software that helps you manage software projects.

One of the benefits of an IDE is that you can use debugging features which tell you where you’ve made a mistake before you try to run your programme.

Some IDEs come with project templates (for specific tasks) that you can use to set your project out according to best practices.

Jupyter Notebooks

None of these ways are the best for doing data science with Python — that particular honour belongs to Jupyter notebooks.

Jupyter notebooks give you the capability to run your code one ‘block’ at a time, meaning that you can see the output before you decide what to do next — that’s really crucial in data science projects where we often need to see charts before taking the next step.

If you’re using Anaconda, you’ll already have Jupyter lab installed. To start it you’ll just need to type ‘jupyter lab’ into the terminal.

If you’re using pip, you’ll have to install Jupyter lab with the command ‘pip install jupyterlab’.

Numeric Computation in Python

It probably won’t surprise you to learn that data science is mostly about numbers.

The NumPy package includes lots of helpful functions for performing the kind of mathematical operations you’ll need to do data science work.

It comes installed as part of the Anaconda distribution, and installing it with pip is just as easy as installing Jupyter notebooks (‘pip install numpy’).

The most common mathematical operations we’ll need to do in data science are things like matrix multiplication, computing the dot product of vectors, changing the data types of arrays and creating the arrays in the first place!

Here’s how you can make a list into a NumPy array:
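A minimal sketch, with made-up values:

import numpy as np

# An ordinary Python list...
my_list = [1, 2, 3, 4]

# ...becomes a NumPy array via np.array
my_array = np.array(my_list)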

Here’s how you can do array multiplication and calculate dot products in NumPy:
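Again a sketch with toy values; note that * is element-wise, while np.dot computes the dot product:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Element-wise multiplication
print(a * b)         # [ 4 10 18]

# Dot product of the two vectors
print(np.dot(a, b))  # 32 (also available as a @ b)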

And here’s how you can do matrix multiplication in NumPy:
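For example, with two small 2x2 matrices:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Matrix multiplication (not element-wise)
print(A @ B)            # [[19 22], [43 50]]
print(np.matmul(A, B))  # same result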

Statistics in Python

With mathematics out of the way, we must move forward to statistics.

The Scipy package contains a module (a subsection of a package’s code) specifically for statistics.

You can import it (make its functions available in your programme) into your notebook using the command ‘from scipy import stats’.

This package contains everything you’ll need to calculate statistical measurements on your data, perform statistical tests, calculate correlations, summarise your data and investigate various probability distributions.

Here’s how to quickly access summary statistics (minimum, maximum, mean, variance, skew, and kurtosis) of an array using Scipy:
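For instance, on a small made-up array:

import numpy as np
from scipy import stats

data = np.array([2, 4, 4, 4, 5, 5, 7, 9])

# Returns the number of observations, min/max, mean, variance, skew, and kurtosis
print(stats.describe(data))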

Data Manipulation with Python

Data scientists have to spend an unfortunate amount of time cleaning and wrangling data. Luckily, the Pandas package helps us do this with code rather than by hand.

The most common tasks that I use Pandas for are reading data from CSV files and databases.

It also has a powerful syntax for combining different datasets together (datasets are called DataFrames in Pandas) and performing data manipulation.

You can see the first few rows of a DataFrame using the .head method:
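For example, with a hypothetical sales.csv file:

import pandas as pd

df = pd.read_csv('sales.csv')

# Show the first five rows of the DataFrame
print(df.head())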

You can select just one column using square brackets:
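Continuing the sketch, assuming the file has a price column:

# Select a single column by name
prices = df['price']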

And you can create new columns by combining others:
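And, assuming the file also has a quantity column:

# Create a new column by combining two existing ones
df['revenue'] = df['price'] * df['quantity']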

Working with Databases in Python

In order to use the pandas read_sql method, you’ll have to establish a connection to a database.

The most bulletproof method of connecting to a database is by using the SQLAlchemy package for Python.
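A minimal sketch, assuming a local SQLite database called example.db with an orders table:

import pandas as pd
from sqlalchemy import create_engine

# Create a connection to the (hypothetical) database
engine = create_engine('sqlite:///example.db')

# Hand the connection to pandas along with a query
orders = pd.read_sql('SELECT * FROM orders', engine)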

Because SQL is a language of its own and connecting to a database depends on which database you’re using, I’ll leave you to read the documentation if you’re interested in learning more.

Data Engineering in Python

Sometimes we’d prefer to do some calculations on our data before they arrive in our projects as a Pandas DataFrame.

If you’re working with databases or scraping data from the web (and storing it somewhere), this process of moving data and transforming it is called ETL (Extract, transform, load).

You extract the data from one place, do some transformations to it (summarise the data by adding it up, finding the mean, changing data types, and so on) and then load it to a place where you can access it.

There’s a really cool tool called Airflow which is very good at helping you manage ETL workflows. Even better, it’s written in Python.

It was developed by Airbnb when they had to move incredible amounts of data around, you can find out more about it here.

Big Data Engineering in Python

Sometimes ETL processes can be really slow. If you have billions of rows of data (or if they’re a strange data type like text), you can recruit lots of different computers to work on the transformation separately and pull everything back together at the last second.

This architecture pattern is called MapReduce and it was made popular by Hadoop.

Nowadays, lots of people use Spark to do this kind of data transformation / retrieval work and there’s a Python interface to Spark called (surprise, surprise) PySpark.

Both the MapReduce architecture and Spark are very complex tools, so I’m not going to go into detail here. Just know that they exist and that if you find yourself dealing with a very slow ETL process, PySpark might help. Here’s a link to the official site.

Further Statistics in Python

We already know that we can run statistical tests, calculate descriptive statistics, p-values, and things like skew and kurtosis using the stats module from Scipy, but what else can Python do with statistics?

One particular package that I think you should know about is the lifelines package.

Using the lifelines package, you can calculate a variety of functions from a subfield of statistics called survival analysis.

Survival analysis has a lot of applications. I’ve used it to predict churn (when a customer will cancel a subscription) and when a retail store might be burglarised.

These are totally different to the applications the creators of the package imagined it would be used for (survival analysis is traditionally a medical statistics tool). But that just shows how many different ways there are to frame data science problems!

The documentation for the package is really good, check it out here.

Machine Learning in Python

Now this is a major topic — machine learning is taking the world by storm and is a crucial part of a data scientist’s work.

Simply put, machine learning is a set of techniques that allows a computer to map input data to output data. There are a few instances where this isn’t the case but they’re in the minority and it’s generally helpful to think of ML this way.

There are two really good machine learning packages for Python, let’s talk about them both.


Scikit-Learn

Most of the time you spend doing machine learning in Python will be spent using the Scikit-Learn package (sometimes abbreviated sklearn).

This package implements a whole heap of machine learning algorithms and exposes them all through a consistent syntax. This makes it really easy for data scientists to take full advantage of every algorithm.

The general framework for using Scikit-Learn goes something like this –

You split your dataset into train and test datasets:
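For example, using the bundled iris dataset so the sketch is self-contained:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Features X and labels y
X, y = load_iris(return_X_y=True)

# Hold out 20% of the rows for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)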

Then you instantiate and train a model:
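Continuing the sketch, with logistic regression as an example model:

from sklearn.linear_model import LogisticRegression

# Instantiate the model, then fit it to the training data
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)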

And then you use the metrics module to test how well your model works:
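For instance, scoring the model above with accuracy:

from sklearn.metrics import accuracy_score

# Compare the model's predictions against the held-out labels
predictions = model.predict(X_test)
print(accuracy_score(y_test, predictions))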


XGBoost

The second package that is commonly used for machine learning in Python is XGBoost.

Where Scikit-Learn implements a whole range of algorithms, XGBoost only implements a single one — gradient boosted decision trees.

This package (and algorithm) has become very popular recently due to its success at Kaggle competitions (online data science competitions that anyone can participate in).

Training the model works in much the same way as a Scikit-Learn algorithm.
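A minimal sketch, reusing the train/test split from above (and assuming the xgboost package is installed):

from xgboost import XGBClassifier

# Same fit/predict pattern as Scikit-Learn
model = XGBClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)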

Deep Learning in Python

The machine learning algorithms available in Scikit-Learn are sufficient for nearly any problem. That being said, sometimes you need to use the most advanced thing available.

Deep neural networks have skyrocketed in popularity due to the fact that systems using them have outperformed nearly every other class of algorithm.

There’s a problem though — it’s very hard to say what a neural net is doing and why it’s making the decisions that it is. Because of this, their use in finance, medicine, the law and related professions isn’t widely endorsed.

The two major classes of neural network are convolutional neural networks (which are used to classify images and complete a host of other tasks in computer vision) and recurrent neural nets (which are used to understand and generate text).

Exploring how neural nets work is outside the scope of this article, but just know that the packages you’ll need to look for if you want to do this kind of work are TensorFlow (a Google contibution!) and Keras.

Keras is essentially a wrapper for TensorFlow that makes it easier to work with.

Data Science APIs in Python

Once you’ve trained a model, you’d like to be able to access predictions from it in other software. The way you do this is by creating an API.

An API allows your model to receive data one row at a time from an external source and return a prediction.

Because Python is a general purpose programming language that can also be used to create web services, it’s easy to use Python to serve your model via API.

If you need to build an API you should look into pickle and Flask. Pickle allows you to save trained models on your hard-drive so that you can use them later. And Flask is the simplest way to create web services.
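Saving and reloading a trained model with pickle looks something like this (model being any fitted estimator):

import pickle

# Save the trained model to disk...
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# ...and load it back later, e.g. inside a Flask view
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)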

Web Applications in Python

Finally, if you’d like to build a full-featured web application around your data science project, you should use the Django framework.

Django is immensely popular in the web development community and was used to build the first version of Instagram and Pinterest (among many others).


And with that we’ve concluded our whirlwind tour of data science with Python.

We’ve covered everything you’d need to learn to become a full-fledged data scientist. If it still seems intimidating, you should know that nobody knows all of this stuff and that even the best of us still Google the basics from time to time.

Learn Data Science | How to Learn Data Science for Free

In this post, I have described a learning path and free online courses and tutorials that will enable you to learn data science for free.

The average cost of obtaining a master’s degree at traditional bricks and mortar institutions will set you back anywhere between $30,000 and $120,000. Even online data science degree programs don’t come cheap, costing a minimum of $9,000. So what do you do if you want to learn data science but can’t afford to pay this?

I trained into a career as a data scientist without taking any formal education in the subject. In this article, I am going to share with you my own personal curriculum for learning data science if you can’t or don’t want to pay thousands of dollars for more formal study.

The curriculum will consist of 3 main parts: technical skills, theory, and practical experience. I will include links to free resources for every element of the learning path and will also be including some links to additional ‘low cost’ options. So if you want to spend a little money to accelerate your learning you can add these resources to the curriculum. I will include the estimated costs for each of these.

Technical skills

The first part of the curriculum will focus on technical skills. I recommend learning these first so that you can take a practical-first approach, rather than, say, learning the mathematical theory first. Python is by far the most widely used programming language for data science. In the Kaggle Machine Learning and Data Science survey carried out in 2018, 83% of respondents said that they used Python on a daily basis. I would, therefore, recommend focusing on this language but also spending a little time on other languages such as R.

Python Fundamentals

Before you can start to use Python for data science you need a basic grasp of the fundamentals behind the language. So you will want to take a Python introductory course. There are lots of free ones out there, but I like the Codecademy ones best as they include hands-on in-browser coding throughout.

I would suggest taking the introductory course to learn Python. This covers basic syntax, functions, control flow, loops, modules and classes.

Data analysis with Python

Next, you will want to get a good understanding of using Python for data analysis. There are a number of good resources for this.

To start with, I suggest taking at least the free parts of the data analyst learning path on Dataquest. Dataquest offers complete learning paths for data analyst, data scientist and data engineer. Quite a lot of the content, particularly on the data analyst path, is available for free. If you do have some money to put towards learning then I strongly suggest putting it towards paying for a few months of the premium subscription. I took this course and it provided a fantastic grounding in the fundamentals of data science. It took me 6 months to complete the data scientist path. The price varies from $24.50 to $49 per month depending on whether you pay annually or not. It is better value to purchase the annual subscription if you can afford it.

The Dataquest platform

Python for machine learning

If you have chosen to pay for the full data science course on Dataquest then you will have a good grasp of the fundamentals of machine learning with Python. If not, there are plenty of other free resources. To start with, I would focus on scikit-learn, which is by far the most commonly used Python library for machine learning.

When I was learning, I was lucky enough to attend a two-day workshop run by Andreas Mueller, one of the core developers of scikit-learn. He has, however, published all the material from this course, and others, on this Github repo. These consist of slides, course notes and notebooks that you can work through. I would definitely recommend working through this material.

Then I would suggest taking some of the tutorials in the scikit-learn documentation. After that, I would suggest building some practical machine learning applications and learning the theory behind how the models work — which I will cover a bit later on.


SQL

SQL is a vital skill to learn if you want to become a data scientist, as one of the fundamental processes in data modelling is extracting data in the first place. This will more often than not involve running SQL queries against a database. Again, if you haven’t opted to take the full Dataquest course, here are a few free resources to learn this skill.

Codecademy has a free introduction to SQL course. Again, this is very practical, with in-browser coding all the way through. If you also want to learn about cloud-based database querying then Google Cloud BigQuery is very accessible. There is a free tier so you can try queries for free, an extensive range of public datasets to try, and very good documentation.

Codecademy SQL course


R

To be a well-rounded data scientist it is a good idea to diversify a little from just Python. I would, therefore, suggest also taking an introductory course in R. Codecademy have an introductory course on their free plan. It is probably worth noting here that, similar to Dataquest, Codecademy also offers a complete data science learning plan as part of their pro account (this costs from $15.99 to $31.99 per month depending on how many months you pay for up front). I personally found the Dataquest course to be much more comprehensive, but this may work out a little cheaper if you are looking to follow a learning path on a single platform.

Software engineering

It is a good idea to get a grasp of software engineering skills and best practices. This will help your code to be more readable and extensible both for yourself and others. Additionally, when you start to put models into production you will need to be able to write good quality well-tested code and work with tools like version control.

There are two great free resources for this. Python Like You Mean It covers things like the PEP8 style guide and documentation, and also covers object-oriented programming really well.

The scikit-learn contribution guidelines, although written to facilitate contributions to the library, actually cover the best practices really well. This covers topics such as Github, unit testing and debugging and is all written in the context of a data science application.

Deep learning

For a comprehensive introduction to deep learning, I don’t think that you can get any better than the totally free and totally ad-free fast.ai. This course includes an introduction to machine learning, practical deep learning, computational linear algebra and a code-first introduction to natural language processing. All their courses have a practical-first approach and I highly recommend them.

The fast.ai platform


Theory

Whilst you are learning the technical elements of the curriculum, you will encounter some of the theory behind the code you are implementing. I recommend that you learn the theoretical elements alongside the practical. The way that I do this is to learn enough code to implement a technique (let’s take KMeans as an example); once I have something working, I will then look deeper into concepts such as inertia. Again, the scikit-learn documentation contains all the mathematical concepts behind the algorithms.
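To make that concrete, a minimal KMeans sketch on random data, with the inertia available on the fitted model:

import numpy as np
from sklearn.cluster import KMeans

# Some random two-dimensional points
X = np.random.rand(100, 2)

# Fit KMeans, then inspect the inertia (within-cluster sum of squares)
kmeans = KMeans(n_clusters=3, n_init=10).fit(X)
print(kmeans.inertia_)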

In this section, I will introduce the key foundational elements of theory that you should learn alongside the more practical elements.

Khan Academy covers almost all the concepts I have listed below for free. You can tailor the subjects you would like to study when you sign up, and you then have a nice tailored curriculum for this part of the learning path. Checking all of the relevant boxes will give you an overview of most of the elements listed below.



Calculus

Calculus is defined by Wikipedia as “the mathematical study of continuous change.” In other words, calculus can find patterns between functions; for example, in the case of derivatives, it can help you to understand how a function changes over time.

Many machine learning algorithms utilise calculus to optimise the performance of models. If you have studied even a little machine learning, you will probably have heard of gradient descent. It works by iteratively adjusting the parameter values of a model to find the values that minimise the cost function. Gradient descent is a good example of how calculus is used in machine learning.
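As a toy illustration, minimising f(x) = x**2 by repeatedly stepping against its derivative 2*x:

# Start somewhere away from the minimum
x = 10.0
learning_rate = 0.1

for _ in range(100):
    gradient = 2 * x           # derivative of x**2
    x = x - learning_rate * gradient

print(x)  # ends up very close to 0, the true minimum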

What you need to know:


Derivatives

  • Geometric definition
  • Calculating the derivative of a function
  • Nonlinear functions

Chain rule

  • Composite functions
  • Composite function derivatives
  • Multiple functions


  • Partial derivatives
  • Directional derivatives
  • Integrals

Linear Algebra

Many popular machine learning methods, including XGBoost, use matrices to store inputs and process data. Matrices, alongside vector spaces and linear equations, form the mathematical branch known as Linear Algebra. In order to understand how many machine learning methods work, it is essential to get a good understanding of this field.

What you need to learn:

Vectors and spaces

  • Vectors
  • Linear combinations
  • Linear dependence and independence
  • Vector dot and cross products

Matrix transformations

  • Functions and linear transformations
  • Matrix multiplication
  • Inverse functions
  • Transpose of a matrix


Statistics and probability

Here is a list of the key concepts you need to know:

Descriptive/Summary statistics

  • How to summarise a sample of data
  • Different types of distributions
  • Skewness, kurtosis, central tendency (e.g. mean, median, mode)
  • Measures of dependence, and relationships between variables such as correlation and covariance

Experiment design

  • Hypothesis testing
  • Sampling
  • Significance tests
  • Randomness
  • Probability
  • Confidence intervals and two-sample inference

Machine learning

  • Inference about slope
  • Linear and non-linear regression
  • Classification

Practical experience

The third section of the curriculum is all about practice. In order to truly master the concepts above you will need to use the skills in some projects that ideally closely resemble a real-world application. By doing this you will encounter problems to work through such as missing and erroneous data and develop a deep level of expertise in the subject. In this last section, I will list some good places you can get this practical experience from for free.

“With deliberate practice, however, the goal is not just to reach your potential but to build it, to make things possible that were not possible before. This requires challenging homeostasis — getting out of your comfort zone — and forcing your brain or your body to adapt.”, Anders Ericsson, Peak: Secrets from the New Science of Expertise

Kaggle, et al

Machine learning competitions are a good place to get practice with building machine learning models. They give access to a wide range of data sets, each with a specific problem to solve, and have a leaderboard. The leaderboard is a good way to benchmark how good you actually are at developing models, and where you may need to improve further.

In addition to Kaggle, there are other platforms for machine learning competitions including Analytics Vidhya and DrivenData.

Driven data competitions page

UCI Machine Learning Repository

The UCI machine learning repository is a large source of publicly available data sets. You can use these data sets to put together your own data projects; this could include data analysis and machine learning models, or you could even try building a deployed model with a web front end. It is a good idea to store your projects somewhere public, such as Github, as this can create a portfolio showcasing your skills to use for future job applications.

UCI repository

Contributions to open source

One other option to consider is contributing to open source projects. There are many Python libraries that rely on the community to maintain them, and there are often hackathons held at meetups and conferences where even beginners can join in. Attending one of these events would certainly give you some practical experience and an environment where you can learn from others whilst giving something back at the same time. NumFOCUS is a good example of a project like this.

In this post, I have described a learning path and free online courses and tutorials that will enable you to learn data science for free. Showcasing what you are able to do in the form of a portfolio is a great tool for future job applications in lieu of formal qualifications and certificates. I really believe that education should be accessible to everyone and, certainly, for data science at least, the internet provides that opportunity. In addition to the resources listed here, I have previously published a recommended reading list for learning data science available here. These are also all freely available online and are a great way to complement the more practical resources covered above.

Thanks for reading!

Python For Data Analysis | Build a Data Analysis Library from Scratch | Learn Python in 2019


Immerse yourself in a long, comprehensive project that teaches advanced Python concepts to build an entire library

You’ll learn

  • How to build a Python library similar to pandas
  • How to complete a large, comprehensive project
  • Test-driven development with pytest
  • Environment creation
  • Advanced Python topics such as special methods and property decorators
  • A fully-functioning library that you can use to do data analysis