Using Python Pandas for Log Analysis

In this short tutorial, I would like to walk through the use of Python Pandas to analyze a CSV log file for offload analysis.

Background

Python Pandas is a library that adds data science capabilities to Python. It provides data structures such as DataFrames, which let you model your data like an in-memory database and run query-like operations over it.
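
For instance, here is a minimal, made-up sketch of that query-like capability; the data below is invented purely for illustration:

import pandas as pd

# A tiny, hypothetical DataFrame to illustrate query-like filtering
df = pd.DataFrame({'URL': ['/images/a.png', '/api/cart'], 'Hits': [1200, 3]})
print(df[df['Hits'] > 100])  # behaves like: SELECT * WHERE Hits > 100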

Use Case

Suppose we have a URL report taken from either the Akamai Edge server logs or the Akamai Portal report. In this case, I am using the Akamai Portal report. In this workflow, I am trying to find the top URLs that have a volume offload of less than 50%. I’ve attached the code at the end. I am going to walk through the code line-by-line. Here are the column names within the CSV file for reference.

Offloaded Hits,Origin Hits,Origin OK Volume (MB),Origin Error Volume (MB)

Initialize the Library

The first step is to initialize the Pandas library. In almost all the references, this library is imported as pd. We’ll follow the same convention.

import pandas as pd

Read the CSV as a DataFrame

The next step is to read the whole CSV file into a DataFrame. Note that the function for reading CSV data also has options to skip leading rows, skip trailing rows, handle missing values, and a lot more. I am not using these options for now.

urls_df = pd.read_csv('urls_report.csv')
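
For reference, here is a hedged sketch of what those options look like; the row counts and the '-' missing-value marker are assumptions for illustration, not properties of the actual report:

# Hypothetical variant: skip 2 leading rows and 1 trailing row, treat '-' as missing
urls_df = pd.read_csv('urls_report.csv', skiprows=2, skipfooter=1,
                      na_values=['-'], engine='python')  # skipfooter requires the python engine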

Pandas automatically detects the right data types for the columns. So the URL is treated as a string and all the other values are considered floating point values.
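
You can confirm what Pandas inferred by checking the dtypes:

print(urls_df.dtypes)  # URL shows up as object (string); the numeric columns as int64/float64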

Compute Volume Offload

The default URL report does not have a column for Offload by Volume. So we need to compute this new column.

urls_df['Volume Offload'] = (urls_df['OK Volume']*100) / (urls_df['OK Volume'] + urls_df['Origin OK Volume (MB)'])

We are using the columns named OK Volume and Origin OK Volume (MB) to arrive at the offload percentage.

Filter the Data

At this point, we have the entire data set with the offload percentage computed. Since we are interested in URLs that have a low offload, we add two filters:

  • Keep only the rows that have a volume offload of less than 50% and at least some traffic (we don’t want rows with zero traffic).
  • Remove some known patterns. These are based on the customer context but essentially indicate URLs that can never be cached. Both filters are shown in the snippet just after this list.
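
Putting both filters together (the pattern strings here come from the sample code at the end and merely stand in for customer-specific patterns):

low_offload_urls = urls_df[(urls_df['OK Volume'] > 0) & (urls_df['Volume Offload'] < 50.0)]
low_offload_urls = low_offload_urls[(~low_offload_urls.URL.str.contains("some-pattern.net")) & (~low_offload_urls.URL.str.contains("stateful-apis"))]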

Sort Data

At this point, we have the right set of URLs, but they are unsorted. We need the rows sorted so that the URLs with the most volume and least offload come first. We can achieve this column-based sorting using sort_values.

low_offload_urls.sort_values(by=['OK Volume','Volume Offload'], ascending=[False, True], inplace=True)

For simplicity, I am just listing the URLs. We can export the result to CSV or Excel as well.

First, we project the URL column (i.e., extract just that one column) from the DataFrame. We then list the URLs with a simple for loop, as the projection results in a Series we can iterate over.

for each_url in low_offload_urls['URL']:
    print(each_url)
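
If you prefer a file over console output, a minimal sketch (the output filenames here are arbitrary, and to_excel needs an Excel writer such as openpyxl installed):

low_offload_urls.to_csv('low_offload_urls.csv', index=False)
low_offload_urls.to_excel('low_offload_urls.xlsx', index=False)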

I hope you found this useful and get inspired to pick up Pandas for your analytics as well!

References

I was able to pick up Pandas after going through an excellent course on Coursera titled Introduction to Data Science in Python. During this course, I realized that Pandas has excellent documentation.


Full Code

import pandas as pd

urls_df = pd.read_csv('urls_report.csv')

# compute the offload percentage as a new column
urls_df['Volume Offload'] = (urls_df['OK Volume']*100) / (urls_df['OK Volume'] + urls_df['Origin OK Volume (MB)'])

low_offload_urls = urls_df[(urls_df['OK Volume'] > 0) & (urls_df['Volume Offload']<50.0)]
low_offload_urls = low_offload_urls[(~low_offload_urls.URL.str.contains("some-pattern.net")) & (~low_offload_urls.URL.str.contains("stateful-apis")) ]

low_offload_urls.sort_values(by=['OK Volume','Volume Offload'], ascending=[False, True], inplace=True)  # most volume first, least offload first

for each_url in low_offload_urls['URL']:
    print(each_url)
