Noah Saunders

Thinking in Pandas: Python Data Analysis the Right Way

Are you using the Python library Pandas the right way? Do you wonder about getting better performance, or how to optimize your data for analysis? What does normalization mean? This week on the show we have Hannah Stepanek to discuss her new book “Thinking in Pandas”.

The inspiration behind Hannah’s book came out of her talk at PyCon US 2019 titled “Thinking Like a Panda: Everything You Need to Know to Use Pandas the Right Way.” We discuss several core concepts covered in the book. She shares techniques for getting more performance when working with your data in Pandas. We also talk about her recent PyCon US 2020 online presentation about databases and migration.

Topics:

  • 00:00:00 – Introduction
  • 00:01:36 – Working for New Relic
  • 00:03:14 – Thinking in Pandas book release
  • 00:03:27 – Who is the intended reader?
  • 00:05:27 – What is the underlying tech for Pandas?
  • 00:09:04 – Why you shouldn’t use apply
  • 00:13:00 – When you have to use apply
  • 00:16:06 – Normalizing your data
  • 00:17:05 – Do you have a preferred format for a dataframe?
  • 00:18:17 – More on multi-index dataframes
  • 00:24:50 – Creating NumPy types
  • 00:28:30 – Loading in your data
  • 00:30:33 – Video Course Spotlight
  • 00:31:41 – Pivoting data
  • 00:34:34 – Considering outside libraries and performance
  • 00:35:41 – What topic were you eager to share in the book?
  • 00:37:52 – What resources did you use to learn pandas?
  • 00:40:53 – PyCon 2020 talk about databases and migration
  • 00:45:34 – Delving into migration and Alembic
  • 00:53:15 – Speaking opportunities
  • 00:56:13 – What are you excited about in the world of Python?
  • 00:57:32 – What do you want to learn next?
  • 00:58:49 – Do you read source code to learn?
  • 01:00:16 – Is there a particularly well-written library?
  • 01:01:28 – Final Thanks

#python #pandas #data-analysis

Paula Hall

3 Python Pandas Tricks for Efficient Data Analysis

Explained with examples.

Pandas is one of the predominant data analysis tools, and it is highly appreciated among data scientists. It provides numerous flexible and versatile functions to perform efficient data analysis.

In this article, we will go over 3 pandas tricks that I think will make you a happier pandas user. It is better to explain these tricks with some examples, so we start by creating a data frame to work on.

The data frame contains daily sales quantities of 3 different stores. We first create a period of 10 days using the date_range function of pandas.

import numpy as np
import pandas as pd

days = pd.date_range("2020-01-01", periods=10, freq="D")

The days variable will be used as a column. We also need a sales quantity column, which can be generated with NumPy’s randint function. Then, we create a data frame combining these columns for the 3 stores.
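That construction step isn’t shown above; here is a minimal sketch, assuming store labels "A", "B", "C" and an arbitrary sales range of 50–200:

# Hypothetical layout: one row per store per day.
df = pd.DataFrame({
    "date": np.tile(days, 3),                       # repeat the 10 days for each store
    "store": np.repeat(["A", "B", "C"], 10),        # assumed store labels
    "sales": np.random.randint(50, 200, size=30),   # arbitrary sales range
})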

#machine-learning #data-science #python #python pandas tricks #efficient data analysis #python pandas tricks for efficient data analysis

iOS App Dev

Your Data Architecture: Simple Best Practices for Your Data Strategy

If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.

In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.

#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition

Tia Gottlieb

An introduction to exploratory data analysis in Python

Many a time, I have seen beginners in data science skip exploratory data analysis (EDA) and jump straight into building a hypothesis function or model. In my opinion, this should not be the case. We should first perform an EDA, as it will connect us with the dataset at an emotional level and, yes, of course, will help in building a good hypothesis function.

EDA is a very crucial step. It gives us a glimpse of what our data set is all about, its uniqueness, its anomalies and finally it summarizes the main characteristics of the dataset for us. In this post, I will share a very basic guide for performing EDA.

**Step 1: Import your data set** and have a good look at the data.

In order to perform EDA, we will require the following python packages.

Packages to import:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from collections import defaultdict
%matplotlib inline

Once we have imported the packages successfully, we will move on to importing our dataset. You must be aware of the read_csv() function from pandas for reading csv files.

Import the dataset:

For the purpose of this tutorial, I have used the Loan Prediction dataset from Analytics Vidhya. If you wish to code along, here is the link.
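The import step itself isn’t shown in the original; here is a minimal sketch, assuming the training data is saved as train.csv (the file name is hypothetical):

# Hypothetical file name; point this at wherever you saved the dataset.
Train = pd.read_csv("train.csv")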

The dataset has been successfully imported. Let’s have a look at the Train dataset.

Train.head()

Fig 1: Overview of the Train dataset

#data-science #python #pandas #data-analysis #data-visualization #data analysis

Arvel Parker

Basic Data Types in Python | Python Web Development For Beginners

At the end of 2019, Python is one of the fastest-growing programming languages. More than 10% of developers have opted for Python development.

In the programming world, data types play an important role. Each variable is stored as a particular data type and is responsible for various functions. Python has two kinds of objects: mutable and immutable.

Table of Contents

I. Mutable objects

II. Immutable objects

III. Built-in data types in Python

Mutable objects

Objects whose size, value, or sequence of elements can be modified after creation are called mutable objects.

Mutable data types are list, dict, set, and bytearray.

Immutable objects

Objects whose size, value, or sequence of elements cannot be modified after creation are called immutable objects.

Immutable data types are int, float, complex, str, tuple, bytes, and frozenset.
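For example, a list can be changed in place while a tuple cannot:

# A list is mutable: items can be reassigned in place.
nums = [1, 2, 3]
nums[0] = 10      # works; nums is now [10, 2, 3]

# A tuple is immutable: the same operation fails.
pair = (1, 2, 3)
pair[0] = 10      # raises TypeError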

id() and type() are used to find the identity and data type of an object:

a = 25 + 85j
type(a)
# Output: <class 'complex'>

b = {1: 10, 2: "Pinky"}
id(b)
# Output: 238989244168

Built-in data types in Python

a = str("Hello python world")                             # str
b = int(18)                                               # int
c = float(20482.5)                                        # float
d = complex(5 + 85j)                                      # complex
e = list(("python", "fast", "growing", "in", 2018))       # list
f = tuple(("python", "easy", "learning"))                 # tuple
g = range(10)                                             # range
h = dict(name="Vidu", age=36)                             # dict
i = set(("python", "fast", "growing", "in", 2018))        # set
j = frozenset(("python", "fast", "growing", "in", 2018))  # frozenset
k = bool(18)                                              # bool
l = bytes(8)                                              # bytes
m = bytearray(8)                                          # bytearray
n = memoryview(bytes(18))                                 # memoryview

Numbers (int, float, complex)

Numbers are stored in numeric types. When a number is assigned to a variable, Python creates a number object.

# signed integer
age = 18
print(age)
# Output: 18

Python supports 3 types of numeric data.

int (signed integers like 20, 2, 225, etc.)

float (float is used to store floating-point numbers like 9.8, 3.1444, 89.52, etc.)

complex (complex numbers like 8.94j, 4.0 + 7.3j, etc.)

A complex number contains an ordered pair, i.e., a + ib, where a and b denote the real and imaginary parts respectively.
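For example, using values like the ones above:

x = 20            # int
y = 3.1444        # float
z = 4.0 + 7.3j    # complex
print(z.real, z.imag)
# Output: 4.0 7.3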

String

A string is a sequence of characters enclosed in quotation marks. In Python, strings can be defined with single, double, or triple quotes.

# String handling

'Hello Python'      # single (') quoted string
"Hello Python"      # double (") quoted string
"""Hello Python"""
'''Hello Python'''  # triple (''' or """) quoted string

In Python, string handling is a straightforward task, and Python provides various built-in functions and operators for working with strings.

The operator "+" is used to concatenate strings and "*" is used to repeat a string.

"Hello " + "python"
# Output: 'Hello python'

"python " * 2
# Output: 'python python '

#python web development #data types in python #list of all python data types #python data types #python datatypes #python types #python variable type

Tia Gottlieb

Master Pandas’ Groupby for Efficient Data Summarizing And Analysis

Learn to group data and summarize it in several different ways: using aggregate functions, transforming, filtering, mapping, applying a function to the DataFrame, and visualizing, all with groupby.

Source: Unsplash, by Ilona Froehlich

Groupby is a very popular function in pandas. It is very good at summarizing, transforming, filtering, and a few other essential data analysis tasks. In this article, I will explain the application of the groupby function in detail, with examples.

Dataset

For this article, I will use the ‘Students Performance’ dataset from Kaggle. Please feel free to download the dataset from here:

rashida048/Datasets (github.com/rashida048/Datasets)

Here I am importing the necessary packages and the dataset:

import pandas as pd
import numpy as np
df = pd.read_csv('StudentsPerformance.csv')
df.head()

How Does Groupby Work?

The groupby function splits the dataset based on criteria that you define. Here I am showing the process behind the groupby function; it will give you an idea of how much work we would have to do if we did not have groupby. I will make a new, smaller dataset of only two columns to demonstrate in this section. The columns are ‘gender’ and ‘reading score’.

test = df[['gender', 'reading score']]
test.head()

Let’s find out the average reading score gender-wise.

First, we need to split the dataset based on gender. Generate the data for females only.

female = test['gender'] == 'female'
test[female].head()

In the same way, generate the data for the males,

male = test['gender'] == 'male'
test[male].head()

Using the female and male datasets above, calculate the mean reading score for females and males respectively:

fe_avg = test[female]['reading score'].mean()
male_avg = test[male]['reading score'].mean()
print(fe_avg, male_avg)

The mean reading score of females is 72.608 and the mean reading score for males is 65.473. Now, make a DataFrame for the mean reading score of females and males.

df_reading = pd.DataFrame({'Gender': ['female', 'male'], 'reading score': [fe_avg, male_avg]})

Now, let’s solve the same problem with the groupby function. Splitting the data based on gender and applying the ‘mean’ on it with just one simple line of code:

test.groupby('gender').mean()

This small piece of code gives the same result.

Groups in Groupby

I will use the original dataset ‘df’ now. Make groups of ‘race/ethnicity’.

race = df.groupby('race/ethnicity')
print(race)

Output: <pandas.core.groupby.generic.DataFrameGroupBy object at 0x0000023339DCE940>

It returns an object. Now check the datatype of ‘race’.

type(race)

Output: pandas.core.groupby.generic.DataFrameGroupBy

So, we generated a DataFrameGroupBy object. Calling groups on this DataFrameGroupBy object will return the indices of each group.

race.groups

# Here is the output:
{'group A': Int64Index([  3,  13,  14,  25,  46,  61,  62,  72,  77,  82,  88, 112, 129,
             143, 150, 151, 170, 228, 250, 296, 300, 305, 327, 356, 365, 368,
             378, 379, 384, 395, 401, 402, 423, 428, 433, 442, 444, 464, 467,
             468, 483, 489, 490, 506, 511, 539, 546, 571, 575, 576, 586, 589,
             591, 597, 614, 623, 635, 651, 653, 688, 697, 702, 705, 731, 741,
             769, 778, 805, 810, 811, 816, 820, 830, 832, 837, 851, 892, 902,
             911, 936, 943, 960, 966, 972, 974, 983, 985, 988, 994],
            dtype='int64'),
 'group B': Int64Index([  0,   2,   5,   6,   7,   9,  12,  17,  21,  26,
             ...
             919, 923, 944, 946, 948, 969, 976, 980, 982, 991],
            dtype='int64', length=190),
 'group C': Int64Index([  1,   4,  10,  15,  16,  18,  19,  23,  27,  28,
             ...
             963, 967, 971, 975, 977, 979, 984, 986, 996, 997],
            dtype='int64', length=319),
 'group D': Int64Index([  8,  11,  20,  22,  24,  29,  30,  33,  36,  37,
             ...
             965, 970, 973, 978, 981, 989, 992, 993, 998, 999],
            dtype='int64', length=262),
 'group E': Int64Index([ 32,  34,  35,  44,  50,  51,  56,  60,  76,  79,
             ...
             937, 949, 950, 952, 955, 962, 968, 987, 990, 995],
            dtype='int64', length=140)}

Have a look at the output above. The groupby function splits the data into subgroups, and you can now see the indices of each subgroup. That’s great, but the indices alone are not enough; we need to see the real data of each group. The function get_group helps with that.

race.get_group('group B')

I am showing only part of the results here; the original output is much bigger.

Find the size of each group

Calling size on the ‘race’ object will give the size of each group:

race.size()

Loop over each group

You can loop over the groups. Here is an example:

for name, group in race:
    print(name, 'has', group.shape[0], 'data')

Grouping by multiple variables

In all the examples above, we only grouped by one variable. But grouping by multiple variables is also possible. Here I am grouping by ‘race/ethnicity’ and ‘gender’. This should return the number of records for each race/ethnicity group, segregated by gender.

df.groupby(['gender', 'race/ethnicity']).size()

This example aggregates the data using ‘size’. There are other aggregate functions as well. Here is the list of all the aggregate functions:

  • sum()
  • mean()
  • size()
  • count()
  • std()
  • var()
  • sem()
  • min()
  • median()

Please try them out. Just use any of these aggregate functions in place of ‘size’ in the example above, as shown below.
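For instance, replacing ‘size’ with ‘mean’ (restricted to the ‘reading score’ column so the result stays numeric) gives the average reading score per group:

df.groupby(['gender', 'race/ethnicity'])['reading score'].mean()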

Using multiple aggregate functions

Just as we can use groupby on multiple variables, using multiple aggregate functions is also possible. This next example will group by ‘race/ethnicity’ and aggregate using the ‘max’ and ‘min’ functions.
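The code for that example is not included above; here is a minimal sketch of what it might look like, applied to the ‘reading score’ column (the column choice is an assumption):

# Aggregate one column with several functions at once.
df.groupby('race/ethnicity')['reading score'].agg(['max', 'min'])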

#data-science #pandas #data-analysis #towards-data-science #python #data analysis