If you are an R user, chances are that you have already been using the
<a href="https://cran.r-project.org/web/packages/data.table/data.table.pdf" target="_blank">data.table</a> package, an extension of R's
<a href="https://www.rdocumentation.org/packages/base/versions/3.6.0/topics/data.frame" target="_blank">data.frame</a>. It is also the go-to package for R users when it comes to the fast aggregation of large data (up to 100GB in RAM).
data.table is a versatile, high-performance package valued for its ease of use, convenience and speed. It is well known in the R community, with over 400k downloads per month and almost 650 CRAN and Bioconductor packages using it (source).
So, what is in it for Python users? Well, the good news is that there is also a Python counterpart to the
data.table package, called
datatable, which has a clear focus on big data support, high performance, both in-memory and out-of-memory datasets, and multi-threaded algorithms. In a way, it can be considered data.table's younger sibling.
Modern machine learning applications need to process a humongous amount of data and generate multiple features, which is necessary in order to build models with greater accuracy. Python's
datatable module was created to address this issue. It is a toolkit for performing big data (up to 100GB) operations on a single-node machine, at the maximum possible speed. The development of datatable is sponsored by H2O.ai, and its first user was Driverless AI.
This toolkit resembles pandas very closely but is more focused on speed and big data support. Python's
datatable also strives for a good user experience, helpful error messages, and a powerful API. In this article, we shall see how to use datatable and how it scores over pandas when it comes to large datasets.
On macOS, datatable can be easily installed with pip:
pip install datatable
On Linux, installation is achieved with a binary distribution as follows:
# If you have Python 3.5
pip install https://s3.amazonaws.com/h2o-release/datatable/stable/datatable-0.8.0/datatable-0.8.0-cp35-cp35m-linux_x86_64.whl

# If you have Python 3.6
pip install https://s3.amazonaws.com/h2o-release/datatable/stable/datatable-0.8.0/datatable-0.8.0-cp36-cp36m-linux_x86_64.whl
Currently, datatable does not work on Windows, but work is underway to add Windows support as well.
For more information see Build instructions.
The code for this article can be accessed from the associated GitHub repository or can be viewed on my binder by clicking the image below.
The dataset being used has been taken from Kaggle and belongs to the Lending Club Loan Data dataset. It consists of complete loan data for all loans issued from 2007 through 2015, including the current loan status (Current, Late, Fully Paid, etc.) and the latest payment information. The file contains 2.26 million rows and 145 columns. The data size is ideal for demonstrating the capabilities of the datatable library.
# Importing necessary libraries
import numpy as np
import pandas as pd
import datatable as dt
Let's load the data into a
Frame object. The fundamental unit of analysis in datatable is a
Frame. It is the same notion as a pandas DataFrame or SQL table: data arranged in a two-dimensional array with rows and columns.
%%time
datatable_df = dt.fread("data.csv")
____________________________________________________________________
CPU times: user 30 s, sys: 3.39 s, total: 33.4 s
Wall time: 23.6 s
The fread() function above is both powerful and extremely fast. It can automatically detect and parse parameters for the majority of text files, load data from .zip archives or URLs, read Excel files, and much more. Additionally, the datatable parser is multi-threaded, which contributes substantially to its speed.
Now, let us calculate the time taken by pandas to read the same file.
%%time
pandas_df = pd.read_csv("data.csv")
___________________________________________________________
CPU times: user 47.5 s, sys: 12.1 s, total: 59.6 s
Wall time: 1min 4s
The results show that datatable clearly outperforms pandas when reading large datasets. Whereas pandas takes more than a minute, datatable only takes seconds for the same file.
A datatable Frame can be converted into a numpy array or a pandas dataframe as follows:

numpy_df = datatable_df.to_numpy()
pandas_df = datatable_df.to_pandas()

Let's convert our existing frame into a pandas dataframe object and compare the time taken.
%%time
datatable_pandas = datatable_df.to_pandas()
___________________________________________________________________
CPU times: user 17.1 s, sys: 4 s, total: 21.1 s
Wall time: 21.4 s
It appears that reading a file as a datatable Frame and then converting it to a pandas dataframe takes less time than reading it directly with pandas. Thus, it might be a good idea to import a large data file through datatable and then convert it to a pandas dataframe.
type(datatable_pandas) ___________________________________________________________________ pandas.core.frame.DataFrame
Let's look at some basic properties of a datatable Frame, which are similar to their pandas counterparts:
print(datatable_df.shape)       # (nrows, ncols)
print(datatable_df.names[:5])   # top 5 column names
print(datatable_df.stypes[:5])  # column types (top 5)
______________________________________________________________
(2260668, 145)
('id', 'member_id', 'loan_amnt', 'funded_amnt', 'funded_amnt_inv')
(stype.bool8, stype.bool8, stype.int32, stype.int32, stype.float64)
We can also use the head command to output the top 'n' rows.
A glimpse of the first 10 rows of the datatable frame
The colour signifies the datatype: red denotes string, green denotes int, and blue stands for float.
Calculating summary statistics in pandas is a memory-consuming process, but not anymore with datatable. We can compute the following per-column summary statistics using datatable:
datatable_df.sum()
datatable_df.nunique()
datatable_df.sd()
datatable_df.max()
datatable_df.mode()
datatable_df.min()
datatable_df.nmodal()
datatable_df.mean()
Let’s calculate the mean of the columns using both datatable and pandas to measure the time difference.
%%time
datatable_df.mean()
_______________________________________________________________
CPU times: user 5.11 s, sys: 51.8 ms, total: 5.16 s
Wall time: 1.43 s
pandas_df.mean()
__________________________________________________________________
Throws memory error.
The above command cannot be completed in pandas, as it throws a memory error.
Datatable Frames, like dataframes, are columnar data structures. In datatable, the primary vehicle for all data manipulation operations is the square-bracket notation, inspired by traditional matrix indexing but with more functionality.
datatable’s square-bracket notation
The same DT[i, j] notation is used in mathematics when indexing matrices, in C/C++, in R, in pandas, in numpy, etc. Let's see how we can perform common data manipulation tasks using datatable:
The following code selects all rows and the funded_amnt column from the dataset.
Here is how we can select the first 5 rows and 3 columns.
Sorting the frame by a particular column can be accomplished with datatable as follows:
%%time
datatable_df.sort('funded_amnt_inv')
_________________________________________________________________
CPU times: user 534 ms, sys: 67.9 ms, total: 602 ms
Wall time: 179 ms
%%time
pandas_df.sort_values(by='funded_amnt_inv')
___________________________________________________________________
CPU times: user 8.76 s, sys: 2.87 s, total: 11.6 s
Wall time: 12.4 s
Notice the substantial time difference between datatable and pandas.
Here is how we can delete the column named member_id:
del datatable_df[:, 'member_id']
Just like pandas, datatable also has groupby functionality. Let's see how we can get the sum of the funded_amnt column grouped by grade.
%%time
for i in range(100):
    datatable_df[:, dt.sum(dt.f.funded_amnt), dt.by(dt.f.grade)]
____________________________________________________________________
CPU times: user 6.41 s, sys: 1.34 s, total: 7.76 s
Wall time: 2.42 s
%%time
for i in range(100):
    pandas_df.groupby("grade")["funded_amnt"].sum()
____________________________________________________________________
CPU times: user 12.9 s, sys: 859 ms, total: 13.7 s
Wall time: 13.9 s
What does .f stand for?
f stands for
frame proxy, and provides a simple way to refer to the Frame that we are currently operating on. In the case of our example,
dt.f simply stands for datatable_df.
The syntax for filtering rows is pretty similar to that of groupby. Let us filter those rows for which the value of loan_amnt is greater than a given threshold.
It is also possible to write the Frame's contents to a csv file so that the data can be used in the future.
For more data manipulation functions, refer to the documentation page.
The datatable module definitely speeds up execution compared to default pandas, which is a boon when working on large datasets. However, datatable still lags behind pandas in terms of functionality. But since datatable is under active development, we might see some major additions to the library in the future.
#python #data-science #pandas