1607703840
As a data scientist, I recently took the chance to leave my R tidyverse comfort zone and embarked on an adventure to explore the unknown depths of Python, NumPy, scikit-learn and pandas.
To document my exploration of the Python data science ecosystem, I started this series of stories. The purpose is, firstly, to create an online reference page for myself and, secondly, to help fellow R data scientists who want to embark on a similar journey and explore the possibilities of Python, or simply to support other people getting started with data analysis in Python.
In this story we dive right into our first adventure and go all the way down to the bottom of the North Atlantic Ocean to explore the Titanic dataset with Python. If you want to tag along and practice your data science skills on the same dataset, please visit Kaggle and the Titanic Challenge, which, by the way, is a great place to start your data science career.
#analysis #data-science #python #visualization
1621635960
Exploratory Data Analysis (EDA) is a very common and important practice followed by all data scientists. It is the process of looking at tables and tables of data from different angles in order to understand them fully. Gaining a good understanding of the data helps us to clean and summarize it, which then brings out insights and trends that were otherwise unclear.
EDA has no hard set of rules to follow, unlike ‘data analysis’, for example. People who are new to the field tend to confuse the two terms, which are similar but differ in their purpose. Unlike EDA, data analysis is more inclined towards applying probability and statistical methods to reveal facts and relationships among different variables.
Coming back: there is no right or wrong way to perform EDA. It varies from person to person; however, there are some major guidelines that are commonly followed, which are listed below.
We will look at how some of these are implemented using the well-known ‘Home Credit Default Risk’ dataset available on Kaggle here. The data contains information about each loan applicant at the time of applying for the loan. It covers two types of scenarios:
clients with payment difficulties, i.e. those who had a late payment on at least one of the first Y instalments of the loan in our sample, and all other cases.
For the purposes of this article, we’ll only be working with the application data files.
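As a quick illustration of a typical first step, here is a minimal sketch; the file name application_train.csv and the particular checks shown are assumptions based on the standard Kaggle download, not code from the original article.
import pandas as pd
# assumed file name from the Kaggle 'Home Credit Default Risk' download
app_train = pd.read_csv('application_train.csv')
# first look: shape, column dtypes and the most incomplete columns
print(app_train.shape)
print(app_train.dtypes.value_counts())
print(app_train.isnull().sum().sort_values(ascending=False).head(10))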
#data science #data analysis #data analysis in python #exploratory data analysis in python
1620466520
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
1603270800
Exploratory Data Analysis (EDA) is one of the most important aspects of every data science or data analysis problem. It gives us a greater understanding of our data and can unravel hidden insights that aren’t obvious at first glance. The first article I wrote on Medium was also on performing EDA, in R; you can check it out here. This post focuses on graphical EDA in Python using matplotlib, regression lines and even a motion chart!
The dataset we are using for this article can be obtained from Gapminder, drilling down into _Population_, _Gender Equality in Education_ and _Income_.
The _Population_ data contains yearly data regarding the estimated resident population, grouped by countries around the world, between 1800 and 2018.
The _Gender Equality in Education_ data contains yearly data between 1970 and 2015 on the ratio of females to males in school among 25 to 34 year olds, covering primary, secondary and tertiary education across different countries.
The _Income_ data contains yearly data on income per person, adjusted for differences in purchasing power (in international dollars), across different countries around the world for the period between 1800 and 2018.
Let’s first plot the population data over time, focusing mainly on three countries: Singapore, the United States and China. We will use the matplotlib library to plot 3 different line charts on the same figure.
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
## read in data
population = pd.read_csv('./population.csv')
## plot for the 3 countries
plt.plot(population.Year,population.Singapore,label="Singapore")
plt.plot(population.Year,population.China,label="China")
plt.plot(population.Year,population["United States"],label="United States")
## add legends, labels and title
plt.legend(loc='best')
plt.xlabel('Year')
plt.ylabel('Population')
plt.title('Population Growth over time')
plt.show()
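As a taste of the regression line mentioned earlier, the following sketch fits and overlays a simple linear trend with numpy.polyfit. The file name income.csv and its layout (a Year column plus one column per country) are assumptions, not code from the original article.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
## read in the income data (assumed layout: 'Year' plus one column per country)
income = pd.read_csv('./income.csv')
## fit a straight line (degree-1 polynomial) to income over time
slope, intercept = np.polyfit(income.Year, income.Singapore, 1)
## scatter the raw data and overlay the fitted trend
plt.scatter(income.Year, income.Singapore, s=10, label='Income per person')
plt.plot(income.Year, slope * income.Year + intercept, color='red', label='Linear trend')
plt.xlabel('Year')
plt.ylabel('Income per person (international dollars)')
plt.title('Income over time with a fitted regression line')
plt.legend(loc='best')
plt.show()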
#exploratory-data-analysis #data-analysis #data-science #data-visualization #python
1625001660
EDA is a way to understand what the data is all about. It is very important, as it helps us to understand outliers and the relationships between features in the data with the help of graphs and plots.
EDA is a time-consuming process, as we need to build visualizations of different features using libraries like Matplotlib, seaborn, etc.
There is a way to automate this process with a single line of code using the Pandas Visual Analysis library.
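A minimal sketch of that one-liner, assuming the pandas-visual-analysis package is installed (pip install pandas-visual-analysis) and the code runs inside a Jupyter notebook; train.csv is just a placeholder dataset name.
import pandas as pd
from pandas_visual_analysis import VisualAnalysis
# load any tabular dataset into a DataFrame (placeholder file name)
df = pd.read_csv('train.csv')
# a single call renders the interactive analysis interface in the notebook
VisualAnalysis(df)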
Let’s understand the different sections in the user interface:
#data-analysis #machine-learning #data-visualization #data-science #data analysis #exploratory data analysis
1593156510
At the end of 2019, Python was one of the fastest-growing programming languages, with more than 10% of developers having opted for Python development.
In the programming world, data types play an important role. Each variable is stored as a particular data type and is responsible for various functions. Python has two kinds of objects: mutable and immutable objects.
Objects whose size, declared value, or sequence of elements can be modified after creation are called mutable objects.
Mutable data types are: list, dict, set and bytearray.
Objects whose size, declared value, or sequence of elements cannot be modified after creation are called immutable objects.
Immutable data types are: int, float, complex, str, tuple, bytes and frozenset.
The id() and type() functions are used to find the identity and data type of an object.
a = 25 + 85j
type(a)
# Output: <class 'complex'>
b = {1: 10, 2: "Pinky"}
id(b)
# Output: 238989244168
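A short sketch of what mutability means in practice (the variable names here are arbitrary examples, not from the original): mutating a list in place keeps the object’s identity, while “changing” an immutable string actually creates a new object.
# mutable: the list is modified in place, so its id stays the same
nums = [1, 2, 3]
print(id(nums))
nums.append(4)
print(id(nums))       # same id as before
# immutable: concatenation produces a brand-new string object
greeting = "Hello"
print(id(greeting))
greeting = greeting + " python"
print(id(greeting))   # different id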
a = str("Hello python world")                             # str
b = int(18)                                               # int
c = float(20482.5)                                        # float
d = complex(5 + 85j)                                      # complex
e = list(("python", "fast", "growing", "in", 2018))       # list
f = tuple(("python", "easy", "learning"))                 # tuple
g = range(10)                                             # range
h = dict(name="Vidu", age=36)                             # dict
i = set(("python", "fast", "growing", "in", 2018))        # set
j = frozenset(("python", "fast", "growing", "in", 2018))  # frozenset
k = bool(18)                                              # bool
l = bytes(8)                                              # bytes
m = bytearray(8)                                          # bytearray
n = memoryview(bytes(18))                                 # memoryview
Numbers are stored in numeric types. When a number is assigned to a variable, Python creates a number object.
# signed integer
age = 18
print(age)
# Output: 18
Python supports 3 types of numeric data.
int (signed integers like 20, 2, 225, etc.)
float (float is used to store floating-point numbers like 9.8, 3.1444, 89.52, etc.)
complex (complex numbers like 8.94j, 4.0 + 7.3j, etc.)
A complex number contains an ordered pair, i.e., a + ib, where a and b denote the real and imaginary parts respectively.
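For example, the real and imaginary parts can be read back from a complex object directly (a small illustrative sketch):
z = 4.0 + 7.3j
print(type(z))     # <class 'complex'>
print(z.real)      # 4.0
print(z.imag)      # 7.3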
A string can be represented as a sequence of characters in quotation marks. In Python, strings can be defined using single, double, or triple quotes.
# String handling
'Hello Python'          # single (') quoted string
"Hello Python"          # double (") quoted string
"""Hello Python"""
'''Hello Python'''      # triple (''' or """) quoted string
In Python, string handling is a straightforward task, and Python provides various built-in functions and operators for working with strings.
The operator “+” is used to concatenate strings and “*” is used to repeat the string.
"Hello " + "python"
# Output: 'Hello python'
"python " * 2
# Output: 'python python '
#python web development #data types in python #list of all python data types #python data types #python datatypes #python types #python variable type