Data Analysis is the process of exploring, investigating, and gathering insights from data using statistical measures and visualizations.
The objective of data analysis is to develop an understanding of data by uncovering trends, relationships, and patterns.
Data analysis is both a science and an art. On the one hand, it requires that you know statistics, visualization techniques, and data analysis tools like Numpy, Pandas, and Seaborn.
On the other hand, it requires that you ask interesting questions to guide the investigation, and then interpret the numbers and figures to generate useful insights.
This tutorial on data analysis covers the following topics:
The original article can be found at https://www.freecodecamp.org
You can follow along with the tutorial and run the code here: https://jovian.ai/aakashns/python-numerical-computing-with-numpy
This section covers the following topics:
The “data” in Data Analysis typically refers to numerical data, like stock prices, sales figures, sensor measurements, sports scores, database tables, and so on.
The Numpy library provides specialized data structures, functions, and other tools for numerical computing in Python. Let’s work through an example to see why and how to use Numpy to work with numerical data.
Suppose we want to use climate data like the temperature, rainfall, and humidity to determine if a region is well suited for growing apples.
A simple approach to do this would be to formulate the relationship between the annual yield of apples (tons per hectare) and the climatic conditions like the average temperature (in degrees Fahrenheit), rainfall (in millimeters), and average relative humidity (in percentage) as a linear equation.
yield_of_apples = w1 * temperature + w2 * rainfall + w3 * humidity
We’re expressing the yield of apples as a weighted sum of the temperature, rainfall, and humidity.
This equation is an approximation, since the actual relationship may not necessarily be linear, and there may be other factors involved. But a simple linear model like this often works well in practice.
Based on some statistical analysis of historical data, we might come up with reasonable values for the weights w1, w2, and w3. Here’s an example set of values:
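For instance (made-up values, chosen only for illustration):

```python
w1, w2, w3 = 0.3, 0.2, 0.5
```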
Given some climate data for a region, we can now predict the yield of apples. Here’s some sample data:
To begin, we can define some variables to record climate data for a region.
We can now substitute these variables into the linear equation to predict the yield of apples.
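A minimal sketch, using made-up climate numbers for a hypothetical region called Kanto:

```python
# Made-up climate data for a region (illustrative only)
kanto_temp = 73
kanto_rainfall = 67
kanto_humidity = 43

# Substitute into the linear model to predict the yield
kanto_yield_apples = kanto_temp * w1 + kanto_rainfall * w2 + kanto_humidity * w3
print(f"The expected yield of apples in Kanto region is {kanto_yield_apples} tons per hectare")
```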
To make it slightly easier to perform the above computation for multiple regions, we can represent the climate data for each region as a vector, that is, a list of numbers.
The three numbers in each vector represent the temperature, rainfall, and humidity data, respectively.
We can also represent the set of weights used in the formula as a vector.
We can now write a function crop_yield to calculate the yield of apples (or any other crop) given the climate data and the respective weights.

The calculation performed by the crop_yield function (element-wise multiplication of two vectors and taking a sum of the results) is also called the dot product. Learn more about dot products on Khan Academy.
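Here’s one way such a function might look, with each region’s data and the weights kept in plain Python lists:

```python
kanto = [73, 67, 43]     # temperature, rainfall, humidity
weights = [w1, w2, w3]

def crop_yield(region, weights):
    # Multiply element-wise and sum the results: the dot product
    result = 0
    for x, w in zip(region, weights):
        result += x * w
    return result

crop_yield(kanto, weights)  # 56.8
```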
The Numpy library provides a built-in function to compute the dot product of two vectors. However, we must first convert the lists into Numpy arrays.
Let’s install the Numpy library using the pip package manager.

We can now compute the dot product of the two vectors using the np.dot function.
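A minimal sketch (assuming `pip install numpy` has been run):

```python
import numpy as np

kanto = np.array([73, 67, 43])
weights = np.array([w1, w2, w3])

np.dot(kanto, weights)  # 56.8
```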
Numpy arrays offer the following benefits over Python lists for operating on numerical data:

- Ease of use: You can write small, concise, and intuitive mathematical expressions like (kanto * weights).sum() rather than using loops and custom functions like crop_yield.
- Performance: Numpy operations are implemented internally in C, which makes them much faster than Python statements and loops that are interpreted at runtime.

Here’s a comparison of dot products performed using Python loops vs. Numpy arrays on two vectors with a million elements each.
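Here’s one way to set up such a comparison; the exact speedup depends on your machine, and in a notebook you could also use the %%time cell magic instead of time.perf_counter:

```python
import time
import numpy as np

arr1 = list(range(1000000))
arr2 = list(range(1000000, 2000000))

arr1_np = np.array(arr1)
arr2_np = np.array(arr2)

# Dot product with a plain Python loop
start = time.perf_counter()
result = 0
for x1, x2 in zip(arr1, arr2):
    result += x1 * x2
print("Python loop:", time.perf_counter() - start, "seconds")

# Dot product with Numpy
start = time.perf_counter()
result = np.dot(arr1_np, arr2_np)
print("np.dot:     ", time.perf_counter() - start, "seconds")
```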
As you can see, using np.dot is about 100 times faster than using a for loop. This makes Numpy especially useful while working with really large datasets with tens of thousands or millions of data points.
We can now go one step further and represent the climate data for all the regions using a single 2-dimensional Numpy array.
If you’ve taken a linear algebra class in high school, you may recognize the above 2-d array as a matrix with five rows and three columns. Each row represents one region, and the columns represent temperature, rainfall, and humidity, respectively.
Numpy arrays can have any number of dimensions and different lengths along each dimension. We can inspect the length along each dimension using the .shape property of an array.
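A sketch with made-up data for five regions:

```python
import numpy as np

climate_data = np.array([[73, 67, 43],
                         [91, 88, 64],
                         [87, 134, 58],
                         [102, 43, 37],
                         [69, 96, 70]])

climate_data.shape  # (5, 3)
```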
(Image source: Elegant Scipy)
We can now compute the predicted yields of apples in all the regions, using a single matrix multiplication between climate_data (a 5x3 matrix) and weights (a vector of length 3). Here’s what it looks like visually:
You can learn about matrices and matrix multiplication by watching the first 3-4 videos of this YouTube playlist.
We can use the np.matmul function or the @ operator to perform matrix multiplication.
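For example, reusing climate_data and weights from above:

```python
weights = np.array([0.3, 0.2, 0.5])

np.matmul(climate_data, weights)   # or, equivalently:
climate_data @ weights
# array([56.8, 76.9, 81.9, 57.7, 74.9])
```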
Numpy also provides helper functions for reading from and writing to files. Let’s download a file climate.txt, which contains 10,000 climate measurements (temperature, rainfall, and humidity) in the following format:
temperature,rainfall,humidity
25.00,76.00,99.00
39.00,65.00,70.00
59.00,45.00,77.00
84.00,63.00,38.00
66.00,50.00,52.00
41.00,94.00,77.00
91.00,57.00,96.00
49.00,96.00,99.00
67.00,20.00,28.00
...
This format of storing data is known as comma-separated values or CSV.
CSVs: A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. A CSV file typically stores tabular data (numbers and text) in plain text, in which case each line will have the same number of fields. (Wikipedia)
To read this file into a numpy array, we can use the genfromtxt function.
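A sketch, assuming climate.txt has been downloaded to the working directory:

```python
import numpy as np

climate_data = np.genfromtxt('climate.txt', delimiter=',', skip_header=1)
climate_data.shape  # (10000, 3)
```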
There are a couple of subtleties in the concatenation step, where we compute the yields and add them back to the climate data as a new column:

- Since we want to add new columns, we pass the argument axis=1 to np.concatenate. The axis argument specifies the dimension for concatenation.
- The arrays must have the same number of dimensions and the same length along each dimension, except the one being concatenated. We therefore use the np.reshape function to change the shape of yields from (10000,) to (10000, 1).

Here’s a visual explanation of np.concatenate along axis=1 (can you guess what axis=0 results in?):
(Image source: w3resource.com)
The best way to understand what a Numpy function does is to experiment with it and read the documentation to learn about its arguments and return values. Use the cells below to experiment with np.concatenate and np.reshape.
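For instance, computing the yields and attaching them to climate_data as a fourth column:

```python
weights = np.array([0.3, 0.2, 0.5])
yields = climate_data @ weights            # shape (10000,)

# Reshape yields into a column vector, then concatenate along axis=1
climate_results = np.concatenate(
    (climate_data, yields.reshape(10000, 1)), axis=1)
climate_results.shape  # (10000, 4)
```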
Let’s write the final results from our computation above back to a file using the np.savetxt function.
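A sketch; the header text and number format are illustrative choices:

```python
np.savetxt('climate_results.txt',
           climate_results,
           fmt='%.2f',
           delimiter=',',
           header='temperature,rainfall,humidity,yield_apples',
           comments='')
```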
Numpy provides hundreds of functions for performing operations on arrays. Here are some commonly used functions:
- Mathematics: np.sum, np.exp, np.round, arithmetic operators
- Array manipulation: np.reshape, np.stack, np.concatenate, np.split
- Linear algebra: np.matmul, np.dot, np.transpose, np.linalg.eigvals
- Statistics: np.mean, np.median, np.std, np.max
So how do you find the function you need? The easiest way to find the right function for a specific operation or use-case is to do a web search. For instance, searching for “How to join numpy arrays” leads to this tutorial on array concatenation.
You can find a full list of array functions here.
Numpy arrays support arithmetic operators like +, -, *, etc. You can perform an arithmetic operation with a single number (also called a scalar) or with another array of the same shape.
Operators make it easy to write mathematical expressions with multi-dimensional arrays.
Numpy arrays also support broadcasting, allowing arithmetic operations between two arrays with different numbers of dimensions but compatible shapes. Let’s look at an example to see how it works.
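Here’s a sketch using the array names arr2 and arr4 referenced in the explanation below:

```python
arr2 = np.array([[1, 2, 3, 4],
                 [5, 6, 7, 8],
                 [9, 1, 2, 3]])   # shape (3, 4)

arr4 = np.array([4, 5, 6, 7])    # shape (4,)

arr2 + arr4
# array([[ 5,  7,  9, 11],
#        [ 9, 11, 13, 15],
#        [13,  6,  8, 10]])
```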
When the expression arr2 + arr4 is evaluated, arr4 (which has the shape (4,)) is replicated three times to match the shape (3, 4) of arr2. Numpy performs the replication without actually creating three copies of the smaller array, which improves performance and uses less memory.
(Image source: Python Data Science Handbook)
Broadcasting only works if one of the arrays can be replicated to match the other array’s shape.
In the above example, even if arr5 is replicated three times, it will not match the shape of arr2. So arr2 + arr5 cannot be evaluated successfully. Learn more about broadcasting here.
Numpy arrays also support comparison operations like ==, !=, >, and so on. The result is an array of booleans.
Array comparison is frequently used to count the number of equal elements in two arrays using the sum method. Remember that True evaluates to 1 and False evaluates to 0 when you use booleans in arithmetic operations.
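A small illustration:

```python
arr1 = np.array([[1, 2, 3], [3, 4, 5]])
arr2 = np.array([[2, 2, 3], [1, 2, 5]])

arr1 == arr2
# array([[False,  True,  True],
#        [False, False,  True]])

(arr1 == arr2).sum()  # 3
```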
Numpy extends Python’s list indexing notation using [] to multiple dimensions in an intuitive fashion. You can provide a comma-separated list of indices or ranges to select a specific element or a subarray (also called a slice) from a Numpy array.
The notation and its results can seem confusing at first, so take your time to experiment and become comfortable with it.
Use the cells below to try out some examples of array indexing and slicing, with different combinations of indices and ranges. Here are some more examples demonstrated visually:
(Image source: Scipy Lectures)
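Some examples to try, using a hypothetical 3x4 array arr3:

```python
arr3 = np.array([[11, 12, 13, 14],
                 [15, 16, 17, 18],
                 [19, 11, 12, 13]])

arr3[1, 2]       # a single element: 17
arr3[1:, 1:3]    # a subarray (slice): rows 1-2, columns 1-2
arr3[1]          # an entire row
```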
Numpy also provides some handy functions to create arrays of desired shapes with fixed or random values. Check out the official documentation or use the help function to learn more.
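A few examples of such functions:

```python
np.zeros((3, 2))          # all zeros
np.ones((2, 2, 3))        # all ones
np.eye(3)                 # identity matrix
np.random.rand(5)         # uniform random values in [0, 1)
np.random.randn(2, 3)     # samples from a standard normal distribution
np.arange(10, 90, 3)      # equally spaced values with a fixed step
np.linspace(3, 27, 9)     # a fixed number of equally spaced values
```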
Try the following exercises to become familiar with Numpy arrays and practice your skills:
With this, we complete our discussion of numerical computing with Numpy. We’ve covered the following topics in this part of the tutorial:
Check out the following resources for learning more about Numpy:
Try answering the following questions to test your understanding of the topics covered in this notebook:
- What is the purpose of the numpy module?
- How do you install numpy?
- What is the @ operator used for in Numpy?
- What is the purpose of the axis argument of np.concatenate?
- What is the purpose of the np.reshape function?
- What is the difference between np.random.rand and np.random.randn? Illustrate with examples.
- What is the difference between np.arange and np.linspace? Illustrate with examples.

You are ready to move on to the next section of this tutorial.
Follow along and run the code here: https://jovian.ai/aakashns/python-pandas-data-analysis.
This section covers the following topics:
Pandas is a popular Python library used for working with tabular data (similar to the data stored in a spreadsheet). It provides helper functions to read data from various file formats like CSV, Excel spreadsheets, HTML tables, JSON, SQL, and more.
Let’s download a file italy-covid-daywise.txt which contains day-wise Covid-19 data for Italy in the following format:
date,new_cases,new_deaths,new_tests
2020-04-21,2256.0,454.0,28095.0
2020-04-22,2729.0,534.0,44248.0
2020-04-23,3370.0,437.0,37083.0
2020-04-24,2646.0,464.0,95273.0
2020-04-25,3021.0,420.0,38676.0
2020-04-26,2357.0,415.0,24113.0
2020-04-27,2324.0,260.0,26678.0
2020-04-28,1739.0,333.0,37554.0
...
This format of storing data is known as comma-separated values or CSV. Here’s a reminder in case you need a definition of what the CSV format is:
CSVs: A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. A CSV file typically stores tabular data (numbers and text) in plain text, in which case each line will have the same number of fields. (Wikipedia)
We’ll download this file using the urlretrieve function from the urllib.request module.
Data from the file is read and stored in a DataFrame object – one of the core data structures in Pandas for storing and working with tabular data. We typically use the _df suffix in the variable names for dataframes.
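A sketch; the URL below is a placeholder, substitute the actual address where the CSV file is hosted:

```python
from urllib.request import urlretrieve
import pandas as pd

# Placeholder URL - replace with the dataset's real address
urlretrieve('https://example.com/italy-covid-daywise.csv',
            'italy-covid-daywise.csv')

covid_df = pd.read_csv('italy-covid-daywise.csv')
```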
Here’s what we can tell by looking at the dataframe:
Keep in mind that these are officially reported numbers. The actual number of cases and deaths may be higher, as not all cases are diagnosed.
We can view some basic information about the data frame using the .info method.

It appears that each column contains values of a specific data type. You can view statistical information for numerical columns (mean, standard deviation, minimum/maximum values, and the number of non-empty values) using the .describe method.
Here’s a summary of the functions and methods we’ve looked at so far:

- pd.read_csv – Read data from a CSV file into a Pandas DataFrame object
- .info() – View basic information about rows, columns, and data types
- .describe() – View statistical information about numeric columns
- .columns – Get the list of column names
- .shape – Get the number of rows and columns as a tuple

The first thing you might want to do is retrieve data from this data frame, like the counts for a specific day or the list of values in a particular column.
To do this, you should understand the internal representation of data in a data frame. Conceptually, you can think of a dataframe as a dictionary of lists: keys are column names, and values are lists/arrays containing data for the respective columns.
Representing data in the above format has a few benefits:
With the dictionary of lists analogy in mind, you can now guess how to retrieve data from a data frame. For example, we can get a list of values from a specific column using the [] indexing notation.
Each column is represented using a data structure called Series, which is essentially a numpy array with some extra methods and properties.
Instead of using the indexing notation [], Pandas also allows accessing columns as properties of the dataframe using the . notation. However, this method only works for columns whose names do not contain spaces or special characters.
Further, you can also pass a list of columns within the indexing notation [] to access a subset of the data frame with just the given columns.
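Putting these access patterns together:

```python
covid_df['new_cases']                        # a single column, as a Series
covid_df.new_cases                           # equivalent dot notation
cases_df = covid_df[['date', 'new_cases']]   # a subset of columns
```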
The new data frame cases_df is simply a “view” of the original data frame covid_df. Both point to the same data in the computer’s memory. Changing any values inside one of them will also change the respective values in the other.
Sharing data between data frames makes data manipulation in Pandas blazing fast. You needn’t worry about the overhead of copying thousands or millions of rows every time you want to create a new data frame by operating on an existing one.
Sometimes you might need a full copy of the data frame, in which case you can use the copy method.
The data within covid_df_copy is completely separate from covid_df, and changing values inside one of them will not affect the other.
To access a specific row of data, Pandas provides the .loc method.
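For example:

```python
covid_df.loc[243]   # the row at index 243, returned as a Series
```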
Notice above that while the first few values in the new_cases and new_deaths columns are 0, the corresponding values within the new_tests column are NaN. That is because the CSV file does not contain any data for the new_tests column for specific dates (you can verify this by looking into the file). These values may be missing or unknown.
The distinction between 0 and NaN is subtle but important. In this dataset, NaN indicates that daily test numbers were not reported on specific dates. Italy started reporting daily tests on Apr 19, 2020. They’d already conducted 935,310 tests before Apr 19.
We can find the first index that doesn’t contain a NaN value using a column’s first_valid_index method.
Let’s look at a few rows before and after this index to verify that the values change from NaN to actual numbers. We can do this by passing a range to loc.
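A sketch; the sample call in the last line illustrates the random-sample observation that follows (each statement would go in its own cell):

```python
first_idx = covid_df.new_tests.first_valid_index()
covid_df.loc[first_idx - 3 : first_idx + 3]

covid_df.sample(10)   # a random sample of 10 rows
```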
Notice that even though we have taken a random sample, each row’s original index is preserved. This is a useful property of data frames.
Here’s a summary of the functions and methods we looked at in this section:

- covid_df['new_cases'] – Retrieving columns as a Series using the column name
- new_cases[243] – Retrieving values from a Series using an index
- covid_df.at[243, 'new_cases'] – Retrieving a single value from a data frame
- covid_df.copy() – Creating a deep copy of a data frame
- covid_df.loc[243] – Retrieving a row or range of rows of data from the data frame
- head, tail, and sample – Retrieving multiple rows of data from the data frame
- covid_df.new_tests.first_valid_index – Finding the first non-empty index in a series

Let’s try to answer some questions about our data.
Q: What is the total number of reported cases and deaths related to Covid-19 in Italy?

Similar to Numpy arrays, a Pandas series supports the sum method to answer these questions.
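For example:

```python
total_cases = covid_df.new_cases.sum()
total_deaths = covid_df.new_deaths.sum()
print('The number of reported cases is {} and the number of reported deaths is {}.'
      .format(int(total_cases), int(total_deaths)))
```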
Q: What is the overall death rate (ratio of reported deaths to reported cases)?
Q: What is the overall number of tests conducted? A total of 935,310 tests were conducted before daily test numbers were reported.
Q: What fraction of tests returned a positive result?
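Sketches for these three questions, continuing from the cells above; the 935,310 figure is the pre-reporting test count mentioned earlier:

```python
death_rate = covid_df.new_deaths.sum() / covid_df.new_cases.sum()

initial_tests = 935310   # tests conducted before daily reporting began
total_tests = initial_tests + covid_df.new_tests.sum()

positive_rate = total_cases / total_tests
```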
Try asking and answering some more questions about the data.
Let’s say we only want to look at the days which had more than 1,000 reported cases. We can use a boolean expression to check which rows satisfy this criterion.
The boolean expression returns a series containing True and False boolean values. You can use this series to select a subset of rows from the original dataframe, corresponding to the True values in the series.
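For instance:

```python
high_new_cases = covid_df.new_cases > 1000   # a boolean Series
covid_df[high_new_cases]                     # only the rows where the series is True
```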
The data frame contains 72 rows, but only the first and last five rows are displayed by default with Jupyter for brevity. We can change some display options to view all the rows.
We can also formulate more complex queries that involve multiple columns. As an example, let’s try to determine the days when the ratio of cases reported to tests conducted is higher than the overall positive_rate.
However, keep in mind that sometimes it takes a few days to get the results for a test, so we can’t compare the number of new cases with the number of tests conducted on the same day. Any inference based on this positive_rate column is likely to be incorrect.
It’s essential to watch out for such subtle relationships that are often not conveyed within the CSV file and require some external context. It’s always a good idea to read through the documentation provided with the dataset or ask for more information.
For now, let’s remove the positive_rate column using the drop method.
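A sketch showing both adding the column and removing it again:

```python
covid_df['positive_rate'] = covid_df.new_cases / covid_df.new_tests
covid_df.drop(columns=['positive_rate'], inplace=True)
```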
Can you figure out the purpose of the inplace argument?
You can also sort the rows by a specific column using .sort_values. Let’s sort to identify the days with the highest number of cases, then chain it with the head method to list just the first ten results.
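For example:

```python
covid_df.sort_values('new_cases', ascending=False).head(10)
```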
It looks like the last two weeks of March had the highest number of daily cases. Let’s compare this to the days where the highest number of deaths were recorded.
It appears that daily deaths hit a peak just about a week after the peak in daily new cases.
Let’s also look at the days with the smallest number of cases. We might expect to see the first few days of the year on this list.
It seems that the count of new cases on Jun 20, 2020, was -148, a negative number! That’s not something we might have expected, but it’s the nature of real-world data. It could be a data entry error, or the government may have issued a correction to account for miscounting in the past.
Can you dig through news articles online and figure out why the number was negative?
Let’s look at some days before and after Jun 20, 2020.
For now, let’s assume this was indeed a data entry error. We can use one of the following approaches for dealing with the missing or faulty value:

- Replace it with 0.
- Replace it with the average of the entire column.
- Replace it with the average of the values on the previous and next dates.
- Discard the row entirely.

Which approach you pick requires some context about the data and the problem. In this case, since we are dealing with data ordered by date, we can go ahead with the third approach.
You can use the .at method to modify a specific value within the dataframe.
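A sketch, assuming Jun 20, 2020 sits at index 172 (as the summary below suggests); we replace the faulty value with the average of the previous and next days:

```python
covid_df.at[172, 'new_cases'] = (covid_df.at[171, 'new_cases'] +
                                 covid_df.at[173, 'new_cases']) / 2
```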
Here’s a summary of the functions and methods we looked at in this section:

- covid_df.new_cases.sum() – Computing the sum of values in a column or series
- covid_df[covid_df.new_cases > 1000] – Querying a subset of rows satisfying the chosen criteria using boolean expressions
- df['pos_rate'] = df.new_cases/df.new_tests – Adding new columns by combining data from existing columns
- covid_df.drop('positive_rate') – Removing one or more columns from the data frame
- sort_values – Sorting the rows of a data frame using column values
- covid_df.at[172, 'new_cases'] = ... – Replacing a value within the data frame

While we’ve looked at overall numbers for the cases, tests, positive rate, and more, it would also be useful to study these numbers on a month-by-month basis.
The date column might come in handy here, as Pandas provides many utilities for working with dates.
The data type of date is currently object, so Pandas does not know that this column is a date. We can convert it into a datetime column using the pd.to_datetime method.
You can see that it now has the datatype datetime64. We can now extract different parts of the date into separate columns, using the DatetimeIndex class (view docs).
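A sketch:

```python
covid_df['date'] = pd.to_datetime(covid_df.date)

covid_df['year'] = pd.DatetimeIndex(covid_df.date).year
covid_df['month'] = pd.DatetimeIndex(covid_df.date).month
covid_df['day'] = pd.DatetimeIndex(covid_df.date).day
covid_df['weekday'] = pd.DatetimeIndex(covid_df.date).weekday
```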
Let’s check the overall metrics for May. We can query the rows for May, choose a subset of columns, and use the sum method to aggregate each selected column’s values.
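A sketch; the second snippet looks at the average number of cases on each day of the week, which is what the observation below refers to:

```python
# Query the rows for May, select a subset of columns, and aggregate
covid_df_may = covid_df[covid_df.month == 5]
covid_df_may[['new_cases', 'new_deaths', 'new_tests']].sum()

# Average new cases for each day of the week (Monday = 0, Sunday = 6)
covid_df.groupby('weekday').new_cases.mean()
```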
It seems like more cases were reported on Sundays compared to other days.
Try asking and answering some more date-related questions about the data.
As a next step, we might want to summarize the day-wise data and create a new dataframe with month-wise data. We can use the groupby function to create a group for each month, select the columns we wish to aggregate, and aggregate them using the sum method.
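For example:

```python
covid_month_df = covid_df.groupby('month')[['new_cases', 'new_deaths', 'new_tests']].sum()
```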
The result is a new data frame that uses the unique values from the column passed to groupby as the index. Grouping and aggregation is a powerful method for progressively summarizing data into smaller data frames.
Instead of aggregating by sum, you can also aggregate by other measures like mean. Let’s compute the average number of daily new cases, deaths, and tests for each month.
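For instance:

```python
covid_month_mean_df = covid_df.groupby('month')[['new_cases', 'new_deaths', 'new_tests']].mean()
```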
Apart from grouping, another form of aggregation is the running or cumulative sum of cases, tests, or deaths up to each row’s date. We can use the cumsum method to compute the cumulative sum of a column as a new series.
Let’s add three new columns: total_cases, total_deaths, and total_tests.
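Continuing the sketch above, where initial_tests held the 935,310 tests conducted before daily reporting began:

```python
covid_df['total_cases'] = covid_df.new_cases.cumsum()
covid_df['total_deaths'] = covid_df.new_deaths.cumsum()
covid_df['total_tests'] = covid_df.new_tests.cumsum() + initial_tests
```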
Notice how the NaN values in the total_tests column remain unaffected.
To determine other metrics like test per million, cases per million, and so on, we require some more information about the country, namely its population.
Let’s download another file, locations.csv, that contains health-related information for many countries, including Italy.
We can merge this data into our existing data frame by adding more columns. However, to merge two data frames, we need at least one common column. Let’s insert a location column in the covid_df dataframe with all values set to "Italy".
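A sketch, assuming locations.csv has been downloaded and read into a dataframe named locations_df (a hypothetical name) with a location column:

```python
locations_df = pd.read_csv('locations.csv')

covid_df['location'] = 'Italy'
merged_df = covid_df.merge(locations_df, on='location')
```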
The location data for Italy is appended to each row within covid_df. If the covid_df data frame contained data for multiple locations, then the respective country’s location data would be appended for each row.
We can now calculate metrics like cases per million, deaths per million, and tests per million.
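A sketch, assuming the merged location data provides a population column:

```python
merged_df['cases_per_million'] = merged_df.total_cases * 1e6 / merged_df.population
merged_df['deaths_per_million'] = merged_df.total_deaths * 1e6 / merged_df.population
merged_df['tests_per_million'] = merged_df.total_tests * 1e6 / merged_df.population
```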
After completing your analysis and adding new columns, you should write the results back to a file. Otherwise, the data will be lost when the Jupyter notebook shuts down.
Before writing to file, let’s first create a data frame containing just the columns we wish to record.
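For instance (the column selection is an illustrative choice):

```python
result_df = merged_df[['date', 'new_cases', 'total_cases', 'new_deaths',
                       'total_deaths', 'new_tests', 'total_tests']]
result_df.to_csv('results.csv', index=None)
```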
We generally use a library like matplotlib or seaborn to plot graphs within a Jupyter notebook. However, Pandas dataframes and series provide a handy .plot method for quick and easy plotting.
Let’s plot a line graph showing how the number of daily cases varies over time.
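For example:

```python
covid_df.new_cases.plot();
```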
While this plot shows the overall trend, it’s hard to tell where the peak occurred, as there are no dates on the X-axis. We can use the date column as the index for the data frame to address this issue.
Notice that the index of a data frame doesn’t have to be numeric. Using the date as the index also allows us to get the data for a specific date using .loc.
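A sketch:

```python
covid_df.set_index('date', inplace=True)

covid_df.loc['2020-09-01']    # look up a row by date
covid_df.new_cases.plot();    # dates now appear on the X-axis
```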
Let’s plot the new cases and new deaths per day as line graphs.
We can also compare the total cases vs. total deaths.
Let’s see how the death rate and positive testing rates vary over time.
Finally, let’s plot some month-wise data using a bar chart to visualize the trend at a higher level.
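Sketches for each of these plots; each snippet would normally go in its own notebook cell, and covid_month_df comes from the groupby example above:

```python
# Daily new cases and deaths
covid_df.new_cases.plot()
covid_df.new_deaths.plot()
plt.legend(['New cases', 'New deaths']);

# Cumulative totals
covid_df.total_cases.plot()
covid_df.total_deaths.plot()
plt.legend(['Total cases', 'Total deaths']);

# Death rate and positive testing rate over time (in percent)
(covid_df.total_deaths / covid_df.total_cases * 100).plot(label='Death rate')
(covid_df.total_cases / covid_df.total_tests * 100).plot(label='Positive rate')
plt.legend();

# Month-wise new cases as a bar chart
covid_month_df.new_cases.plot(kind='bar');
```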
Try the following exercises to become familiar with Pandas dataframes and practice your skills:
We’ve covered the following topics in this tutorial:
Check out the following resources to learn more about Pandas:
Try answering the following questions to test your understanding of the topics covered in this notebook:
- What is the purpose of the pandas module?
- How do you install the pandas module?
- What is the purpose of the describe method of a dataframe?
- How are the info and describe dataframe methods different?
- What is a Series? How is it different from a Numpy array?
- What does a NaN value in a Pandas dataframe represent?
- How is NaN different from 0?
- What is the difference between df.loc and df.at?
- What is the difference between DataFrame and Series objects?
- What is the result of the expression df[df.new_cases > 100]?
- What is the purpose of the inplace argument in dataframe methods?
- What is the purpose of the datetime data type?
- What are the benefits of using the datetime data type instead of object?
- What is the purpose of the groupby method of a dataframe? Illustrate with an example.
- What are some measures you can use to aggregate groups created using groupby?

You are ready to move on to the next section of the tutorial.
Notebook link: https://jovian.ai/aakashns/python-matplotlib-data-visualization
Data visualization is the graphic representation of data. It involves producing images that communicate relationships among the represented data to viewers.
Visualizing data is an essential part of data analysis and machine learning. We’ll use Python libraries Matplotlib and Seaborn to learn and apply some popular data visualization techniques. We’ll use the words chart, plot, and graph interchangeably in this tutorial.
To begin, let’s install and import the libraries. We’ll use the matplotlib.pyplot module for basic plots like line and bar charts. It is often imported with the alias plt. We’ll use the seaborn module for more advanced plots. It is commonly imported with the alias sns.
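For example:

```python
import matplotlib.pyplot as plt
import seaborn as sns

%matplotlib inline
```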
Notice that we also include the special command %matplotlib inline to ensure that our plots are shown and embedded within the Jupyter notebook itself. Without this command, plots may sometimes show up in pop-up windows.
The line chart is one of the simplest and most widely used data visualization techniques. A line chart displays information as a series of data points or markers connected by straight lines.
You can customize the shape, size, color, and other aesthetic elements of the lines and markers for better visual clarity.
Here’s a Python list showing the yield of apples (tons per hectare) over six years in an imaginary country called Kanto.
We can visualize how the yield of apples changes over time using a line chart. To draw a line chart, we can use the plt.plot function.
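A sketch, with made-up yields for the fictional Kanto region:

```python
yield_apples = [0.895, 0.91, 0.919, 0.926, 0.929, 0.931]

plt.plot(yield_apples)
```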
Calling the plt.plot function draws the line chart as expected. It also returns a list of the plots drawn, [<matplotlib.lines.Line2D at 0x7ff70aa20760>], shown within the output. We can include a semicolon (;) at the end of the last statement in the cell to avoid showing the output and display just the graph.
Let’s enhance this plot step-by-step to make it more informative and beautiful.
The X-axis of the plot currently shows list element indices 0 to 5. The plot would be more informative if we could display the year for which we’re plotting the data. We can do this by passing two arguments to plt.plot.
We can add labels to the axes to show what each axis represents using the plt.xlabel and plt.ylabel methods.
You can invoke the plt.plot function once for each line to plot multiple lines in the same graph. Let’s compare the yields of apples vs. oranges in Kanto.
To differentiate between multiple lines, we can include a legend within the graph using the plt.legend function. We can also set a title for the chart using the plt.title function.
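Bringing these pieces together (the yields are made-up numbers):

```python
years = range(2000, 2006)
apples = [0.895, 0.91, 0.919, 0.926, 0.929, 0.931]
oranges = [0.962, 0.941, 0.930, 0.923, 0.918, 0.908]

plt.plot(years, apples)
plt.plot(years, oranges)

plt.xlabel('Year')
plt.ylabel('Yield (tons per hectare)')
plt.title("Crop Yields in Kanto")
plt.legend(['Apples', 'Oranges']);
```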
We can also show markers for the data points on each line using the marker argument of plt.plot.
Matplotlib provides many different markers like a circle, cross, square, diamond, and more. You can find the full list of marker types here: https://matplotlib.org/3.1.1/api/markers_api.html .
The plt.plot function supports many arguments for styling lines and markers:

- color or c – Set the color of the line (supported colors)
- linestyle or ls – Choose between a solid or dashed line
- linewidth or lw – Set the width of a line
- markersize or ms – Set the size of markers
- markeredgecolor or mec – Set the edge color for markers
- markeredgewidth or mew – Set the edge width for markers
- markerfacecolor or mfc – Set the fill color for markers
- alpha – Set the opacity of the plot

Check out the documentation for plt.plot to learn more: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html#matplotlib.pyplot.plot .
The fmt argument provides a shorthand for specifying the marker shape, line style, and line color. You can provide it as the third argument to plt.plot.
fmt = '[marker][line][color]'
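For example, reusing the lists from above:

```python
plt.plot(years, apples, 's-b')     # square markers, solid line, blue
plt.plot(years, oranges, 'o--r')   # circle markers, dashed line, red
plt.legend(['Apples', 'Oranges']);
```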
You can use the plt.figure function to change the size of the figure.
An easy way to make your charts look beautiful is to use some default styles from the Seaborn library. You can apply them globally using the sns.set_style function. You can see a full list of predefined styles here: https://seaborn.pydata.org/generated/seaborn.set_style.html .
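For example:

```python
sns.set_style("whitegrid")   # other options include "darkgrid", "dark", "white", and "ticks"
```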
In a scatter plot, the values of 2 variables are plotted as points on a 2-dimensional grid. Additionally, you can also use a third variable to determine the size or color of the points. Let’s try out an example.
The Iris flower dataset provides sample measurements of sepals and petals for three species of flowers. The Iris dataset is included with the Seaborn library and you can load it as a Pandas data frame.
The output is not very informative, as there are too many combinations of the two properties within the dataset. There doesn’t seem to be a simple relationship between them.
We can use a scatter plot to visualize how sepal length and sepal width vary using the scatterplot function from the seaborn module (imported as sns).
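For example:

```python
flowers_df = sns.load_dataset("iris")

sns.scatterplot(x=flowers_df.sepal_length, y=flowers_df.sepal_width);
```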
Notice how the points in the above plot seem to form distinct clusters with some outliers. We can color the dots using the flower species as a hue. We can also make the points larger using the s argument.
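For instance:

```python
sns.scatterplot(x=flowers_df.sepal_length,
                y=flowers_df.sepal_width,
                hue=flowers_df.species,
                s=100);
```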
Adding hues makes the plot more informative. We can immediately tell that Setosa irises have a smaller sepal length but higher sepal widths. In contrast, the opposite is true for Virginica irises.
Since Seaborn uses Matplotlib’s plotting functions internally, we can use functions like plt.figure and plt.title to modify the figure.
Seaborn has built-in support for Pandas data frames. Instead of passing each column as a series, you can provide column names and use the data argument to specify a data frame.
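For example:

```python
sns.scatterplot(x='sepal_length', y='sepal_width', hue='species',
                s=100, data=flowers_df);
```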
A histogram represents the distribution of a variable by creating bins (intervals) along the range of values and showing vertical bars to indicate the number of observations in each bin.
For example, let’s visualize the distribution of values of sepal width in the Iris dataset. We can use the plt.hist function to create a histogram.
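For instance:

```python
plt.title("Distribution of Sepal Width")
plt.hist(flowers_df.sepal_width);
```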
We can immediately see that the sepal widths lie in the range 2.0 - 4.5, and around 35 values are in the range 2.9 - 3.1, which seems to be the most populous bin.
We can control the number of bins or the size of each one using the bins argument.
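A sketch; each call would normally go in its own cell (assumes numpy is imported as np):

```python
import numpy as np

# A fixed number of bins
plt.hist(flowers_df.sepal_width, bins=5);

# Bins of size 0.25 between 2 and 5
plt.hist(flowers_df.sepal_width, bins=np.arange(2, 5, 0.25));
```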
Similar to line charts, we can draw multiple histograms in a single chart. We can reduce each histogram’s opacity so that one histogram’s bars don’t hide the others’.
Let’s draw separate histograms for each species of flowers.
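A sketch, filtering the dataframe by species and lowering the opacity with alpha:

```python
setosa_df = flowers_df[flowers_df.species == 'setosa']
versicolor_df = flowers_df[flowers_df.species == 'versicolor']
virginica_df = flowers_df[flowers_df.species == 'virginica']

plt.hist(setosa_df.sepal_width, alpha=0.4, bins=np.arange(2, 5, 0.25))
plt.hist(versicolor_df.sepal_width, alpha=0.4, bins=np.arange(2, 5, 0.25))
plt.hist(virginica_df.sepal_width, alpha=0.4, bins=np.arange(2, 5, 0.25))
plt.legend(['Setosa', 'Versicolor', 'Virginica']);
```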
Bar charts are quite similar to line charts in that they show a sequence of values; however, a bar is shown for each value rather than points connected by lines. We can use the plt.bar function to draw a bar chart.
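For example, reusing the made-up years and oranges lists from the line-chart example:

```python
plt.bar(years, oranges);
```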
Let’s look at another sample dataset included with Seaborn, called tips. The dataset contains information about the sex, time of day, total bill, and tip amount for customers visiting a restaurant over a week.
We might want to draw a bar chart to visualize how the average bill amount varies across different days of the week. One way to do this would be to compute the day-wise averages and then use plt.bar (try it as an exercise). However, since this is a very common use case, the Seaborn library provides a barplot function which can automatically compute averages.
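For example:

```python
tips_df = sns.load_dataset("tips")

sns.barplot(x='day', y='total_bill', data=tips_df);
```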
The lines cutting each bar represent the amount of variation in the values. For instance, it seems like the variation in the total bill is relatively high on Fridays and low on Saturdays.
We can also specify a hue argument to compare bar plots side-by-side based on a third feature, for example, sex.
A heatmap is used to visualize 2-dimensional data like a matrix or a table using colors. The best way to understand it is by looking at an example.
We’ll use another sample dataset from Seaborn, called flights, to visualize monthly passenger footfall at an airport over 12 years.

flights_df is a matrix with one row for each month and one column for each year. The values show the number of passengers (in thousands) that visited the airport in a specific month of a year. We can use the sns.heatmap function to visualize the footfall at the airport.
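A sketch; note that the flights dataset loads in long form and needs to be pivoted into a month-by-year matrix first:

```python
flights_df = sns.load_dataset("flights").pivot(
    index="month", columns="year", values="passengers")

plt.title("No. of Passengers (1000s)")
sns.heatmap(flights_df);
```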
The brighter colors indicate a higher footfall at the airport. By looking at the graph, we can infer two things:
We can also display the actual values in each block by specifying annot=True and using the cmap argument to change the color palette.
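For example:

```python
plt.title("No. of Passengers (1000s)")
sns.heatmap(flights_df, fmt="d", annot=True, cmap='Blues');
```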
We can also use Matplotlib to display images. Let’s download an image from the internet.
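A sketch using the PIL (Pillow) library; the image URL is a placeholder, substitute any real image address:

```python
from urllib.request import urlretrieve
from PIL import Image

# Placeholder URL - replace with an actual image address
urlretrieve('https://example.com/chart.jpg', 'chart.jpg')

img = Image.open('chart.jpg')
plt.imshow(img)
plt.axis('off');
```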
Matplotlib and Seaborn also support plotting multiple charts in a grid using plt.subplots, which returns a set of axes for plotting.
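A sketch, reusing data from the earlier examples:

```python
fig, axes = plt.subplots(1, 2, figsize=(12, 4))

axes[0].plot(years, apples, 'o--b')
axes[0].set_title('Apple Yields')

axes[1].hist(flowers_df.sepal_width)
axes[1].set_title('Sepal Widths');
```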
Here’s a single grid showing the different types of charts we’ve covered in this tutorial.
See this page for a full list of supported functions: https://matplotlib.org/3.3.1/api/axes_api.html#the-axes-class .
Seaborn also provides a helper function sns.pairplot to automatically plot several different charts for pairs of features within a dataframe.
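For example:

```python
sns.pairplot(flowers_df, hue='species');
```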
We have covered the following topics in this tutorial: creating and customizing line charts, scatter plots, histograms, bar charts, and heatmaps, displaying images with plt.imshow, and plotting multiple charts in a grid using plt.subplots.
In this tutorial we’ve covered some of the fundamental concepts and popular techniques for data visualization using Matplotlib and Seaborn. Data visualization is a vast field and we’ve barely scratched the surface here. Check out these references to learn and discover more:
Try answering the following questions to test your understanding of the topics covered in this notebook:
- What is the purpose of the %matplotlib inline command?
- What is the purpose of the fmt argument to plt.plot?
- How do you plot multiple lines on the same chart using plt.plot?
- What is the purpose of the sns.scatterplot function?
- What is the difference between plt.bar and sns.barplot?
- What does the pivot method of a Pandas dataframe do?
- What is the purpose of the PIL module in Python?
- What is the purpose of the plt.subplots function?

Congratulations on making it to the end of this tutorial! You can now apply these skills to analyze real-world datasets from sources like Kaggle.
If you’re pursuing a career in data science and machine learning, consider joining the Zero to Data Science Bootcamp by Jovian. It’s a 20-week part-time program where you’ll complete 7 courses, 12 coding assignments, and 4 real-world projects. You will also receive 6 months of career support to help you find your first data science job.
#python #numpy #pandas #matplotlib #seaborn