In this short tutorial, I would like to walk through the use of Python Pandas to analyze a CSV log file for offload analysis.
Background
Python Pandas is a library that brings data-science capabilities to Python. It provides data structures like DataFrames, which let you model a data set like an in-memory database and give you query-like capabilities over it.
Use Case
Suppose we have a URL report taken from either the Akamai Edge server logs or the Akamai Portal report. In this case, I am using the Akamai Portal report. In this workflow, I am trying to find the top URLs that have a volume offload of less than 50%. I've attached the full code at the end and will walk through it line by line. Here are the column names within the CSV file for reference.
Offloaded Hits,Origin Hits,Origin OK Volume (MB),Origin Error Volume (MB)
The first step is to import the Pandas library. In almost all references, this library is imported as pd; we'll follow the same convention.
import pandas as pd
The next step is to read the whole CSV file into a DataFrame. Note that this function to read CSV data also has options to skip leading rows, skip trailing rows, handle missing values, and a lot more. I am not using these options for now.
urls_df = pd.read_csv('urls_report.csv')
Pandas automatically detects the right data types for the columns, so the URL is treated as a string and all the other values are treated as floating-point numbers.
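To see this type inference in action without the actual report file, here is a minimal sketch that reads an inline CSV sample (the column names here are assumptions based on the columns used later in the code):

```python
import io
import pandas as pd

# A small inline sample shaped like the report; the values are made up.
sample = io.StringIO(
    "URL,OK Volume,Origin OK Volume (MB)\n"
    "/index.html,120.5,30.2\n"
    "/app.js,10.0,40.0\n"
)
df = pd.read_csv(sample)

# The URL column is inferred as object (string); the rest as float64.
print(df.dtypes)
```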
The default URL report does not have a column for Offload by Volume. So we need to compute this new column.
urls_df['Volume Offload'] = (urls_df['OK Volume']*100) / (urls_df['OK Volume'] + urls_df['Origin OK Volume (MB)'])
We are using the columns named OK Volume and Origin OK Volume (MB) to arrive at the offload percentage.
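As a quick sanity check of the formula with made-up numbers (not taken from any real report): if the edge served 75 MB and the origin served 25 MB, the offload is 75%.

```python
# Hypothetical values, not from the actual report:
ok_volume = 75.0          # MB served from the edge ('OK Volume')
origin_ok_volume = 25.0   # MB served from origin ('Origin OK Volume (MB)')

volume_offload = ok_volume * 100 / (ok_volume + origin_ok_volume)
print(volume_offload)  # 75.0
```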
At this point, we have the entire data set with the offload percentage computed. Since we are interested in URLs that have a low offload, we add two filters: one to keep only rows with a non-zero OK Volume and a Volume Offload below 50%, and one to exclude URL patterns we want to ignore.
low_offload_urls = urls_df[(urls_df['OK Volume'] > 0) & (urls_df['Volume Offload'] < 50.0)]
low_offload_urls = low_offload_urls[(~low_offload_urls.URL.str.contains("somepattern.net")) & (~low_offload_urls.URL.str.contains("statefulapis"))]
At this point, we have the right set of URLs, but they are unsorted. We want the rows sorted so that URLs with the most volume and the least offload come first. We can achieve this with the sort_values method.
low_offload_urls.sort_values(by=['OK Volume','Volume Offload'], inplace=True, ascending=[False, True])
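Note that ascending takes actual booleans, one per sort column: False puts the largest OK Volume first, True puts the lowest Volume Offload first among ties. A minimal sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({
    'URL': ['/a', '/b', '/c'],
    'OK Volume': [500.0, 500.0, 100.0],
    'Volume Offload': [40.0, 10.0, 20.0],
})

# Highest volume first; within equal volumes, lowest offload first.
df.sort_values(by=['OK Volume', 'Volume Offload'],
               ascending=[False, True], inplace=True)
print(df['URL'].tolist())  # ['/b', '/a', '/c']
```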
For simplicity, I am just listing the URLs. We can export the result to CSV or Excel as well.
First, we project the URL column (i.e., extract just that one column) from the DataFrame. We can then list the URLs with a simple for loop, since the projection yields a Series we can iterate over.
for each_url in low_offload_urls['URL']:
    print(each_url)
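The CSV export mentioned above is a one-liner; a sketch, with an arbitrary output file name and made-up rows:

```python
import pandas as pd

low_offload_urls = pd.DataFrame({
    'URL': ['/big-file.bin', '/api/data'],
    'Volume Offload': [12.5, 30.0],
})

# Write without the index column; to_excel works the same way
# (it requires the optional openpyxl package).
low_offload_urls.to_csv('low_offload_urls.csv', index=False)
```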
I hope you found this useful and get inspired to pick up Pandas for your analytics as well!
References
I was able to pick up Pandas after going through an excellent course on Coursera titled Introduction to Data Science in Python. During this course, I realized that Pandas has excellent documentation.
Full Code
import pandas as pd
urls_df = pd.read_csv('urls_report.csv')
# compute the offload percentage for each URL
urls_df['Volume Offload'] = (urls_df['OK Volume']*100) / (urls_df['OK Volume'] + urls_df['Origin OK Volume (MB)'])
low_offload_urls = urls_df[(urls_df['OK Volume'] > 0) & (urls_df['Volume Offload']<50.0)]
low_offload_urls = low_offload_urls[(~low_offload_urls.URL.str.contains("somepattern.net")) & (~low_offload_urls.URL.str.contains("statefulapis")) ]
low_offload_urls.sort_values(by=['OK Volume','Volume Offload'], inplace=True, ascending=[False, True])
for each_url in low_offload_urls['URL']:
    print(each_url)
We’ll learn how to do data analysis with Python and make pivot tables with Pandas.
One of the first posts on my blog was about pivot tables. I'd created a library to pivot tables in my PHP scripts. The library is not very beautiful (it throws a lot of warnings), but it works. These days I'm playing with data analysis in Python, and I'm using Pandas. The purpose of this post is something that I like a lot: learning by doing. So I want to redo the same operations from that post from eight years ago, but now with Pandas. Let's start.
I'll start with the same data source that I used almost ten years ago: one simple set of records with clicks and number of users.
I create a DataFrame with this data:
import numpy as np
import pandas as pd
data = pd.DataFrame([
{'host': 1, 'country': 'fr', 'year': 2010, 'month': 1, 'clicks': 123, 'users': 4},
{'host': 1, 'country': 'fr', 'year': 2010, 'month': 2, 'clicks': 134, 'users': 5},
{'host': 1, 'country': 'fr', 'year': 2010, 'month': 3, 'clicks': 341, 'users': 2},
{'host': 1, 'country': 'es', 'year': 2010, 'month': 1, 'clicks': 113, 'users': 4},
{'host': 1, 'country': 'es', 'year': 2010, 'month': 2, 'clicks': 234, 'users': 5},
{'host': 1, 'country': 'es', 'year': 2010, 'month': 3, 'clicks': 421, 'users': 2},
{'host': 1, 'country': 'es', 'year': 2010, 'month': 4, 'clicks': 22, 'users': 3},
{'host': 2, 'country': 'es', 'year': 2010, 'month': 1, 'clicks': 111, 'users': 2},
{'host': 2, 'country': 'es', 'year': 2010, 'month': 2, 'clicks': 2, 'users': 4},
{'host': 3, 'country': 'es', 'year': 2010, 'month': 3, 'clicks': 34, 'users': 2},
{'host': 3, 'country': 'es', 'year': 2010, 'month': 4, 'clicks': 1, 'users': 1}
])

 
Now we want to do a simple pivot operation. We want to pivot on the host:
pd.pivot_table(data,
index=['host'],
values=['users', 'clicks'],
columns=['year', 'month'],
fill_value=''
)
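To make the result concrete, here is a self-contained sketch using a small subset of the rows above; the pivot produces one row per host and a MultiIndex of (value, year, month) columns:

```python
import pandas as pd

# A tiny subset of the data above, enough to see the pivot shape.
mini = pd.DataFrame([
    {'host': 1, 'country': 'fr', 'year': 2010, 'month': 1, 'clicks': 123, 'users': 4},
    {'host': 1, 'country': 'fr', 'year': 2010, 'month': 2, 'clicks': 134, 'users': 5},
    {'host': 2, 'country': 'es', 'year': 2010, 'month': 1, 'clicks': 111, 'users': 2},
])

pivoted = pd.pivot_table(mini,
                         index=['host'],
                         values=['users', 'clicks'],
                         columns=['year', 'month'],
                         fill_value='')

# Rows are hosts; missing (host, month) combinations show as ''.
print(pivoted)
```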
We can add totals:
pd.pivot_table(data,
index=['host'],
values=['users', 'clicks'],
columns=['year', 'month'],
fill_value='',
aggfunc=np.sum,
margins=True,
margins_name='Total'
)

 
We can also pivot on more than one column; for example, host and country:
pd.pivot_table(data,
index=['host', 'country'],
values=['users', 'clicks'],
columns=['year', 'month'],
fill_value=''
)
and also with totals
pd.pivot_table(data,
index=['host', 'country'],
values=['users', 'clicks'],
columns=['year', 'month'],
aggfunc=np.sum,
fill_value='',
margins=True,
margins_name='Total'
)
We can also group the DataFrame and calculate subtotals:
data.groupby(['host', 'country'])[['clicks', 'users']].sum()
data.groupby(['host', 'country'])[['clicks', 'users']].mean()
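Note that the column selection after groupby should be a list (selecting with a tuple is rejected by recent pandas versions). A self-contained sketch of the sum version, which collapses the months into one row per (host, country) pair:

```python
import pandas as pd

# A few rows in the same shape as the data above.
data = pd.DataFrame([
    {'host': 1, 'country': 'fr', 'clicks': 123, 'users': 4},
    {'host': 1, 'country': 'fr', 'clicks': 134, 'users': 5},
    {'host': 2, 'country': 'es', 'clicks': 111, 'users': 2},
])

# One row per (host, country), summing clicks and users across months.
subtotals = data.groupby(['host', 'country'])[['clicks', 'users']].sum()
print(subtotals)
```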
And, finally, we can mix totals and subtotals.
out = data.groupby('host').apply(lambda sub: sub.pivot_table(
index=['host', 'country'],
values=['users', 'clicks'],
columns=['year', 'month'],
aggfunc=np.sum,
margins=True,
margins_name='SubTotal',
))
# compute Max/Min/Total from the data rows only, excluding the per-group
# SubTotal rows; otherwise each appended row would feed into the next one
body = out[out.index.get_level_values(1) != 'SubTotal']
out.loc[('', 'Max', '')] = body.max()
out.loc[('', 'Min', '')] = body.min()
out.loc[('', 'Total', '')] = body.sum()
out.index = out.index.droplevel(0)
out.fillna('', inplace=True)



And that’s all! I’ve got a lot to learn yet about data analysis, but Pandas will definitely be a good friend of mine.
You can see the Jupyter notebook on my GitHub account.
Thanks for reading ❤
Pandas and Plots for Data Analysis
Introducing the Anaconda Python distribution and JupyterLab IDE
Data types
Loops and list comprehensions
Loading and using packages
Introduction to the pandas package
Importing data from CSV, Excel and SQL databases
Data types in pandas (numerical, categorical, binary, boolean)
Creating numerical summaries
Exploring data grouped by a set of variables
Exploratory statistical graphics using the seaborn package
Estimating basic statistics like mean, median, standard deviation and quantiles
Basic probability distributions (normal/Gaussian, binomial, Poisson, exponential, Chi-squared), including generating random numbers and finding critical values
How pandas creates dummy variables from categorical variables
Linear & logistic regression and the formula interface
Creating publication-quality graphics
Best practices for data analyses