
# Building A Logistic Regression in Python

Originally published by Susan Li at https://towardsdatascience.com

Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X.
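Concretely, the model passes a linear combination of the features through the sigmoid (logistic) function to turn a real-valued score into a probability. A minimal sketch of that mapping:

```python
import numpy as np

def sigmoid(z):
    # Squash a real-valued score into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# A score of 0 sits exactly on the decision boundary: P(Y=1) = 0.5
print(sigmoid(0.0))  # 0.5
```

Large positive scores map to probabilities near 1, large negative scores to probabilities near 0.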

## Logistic Regression Assumptions

• Binary logistic regression requires the dependent variable to be binary.
• For a binary regression, the factor level 1 of the dependent variable should represent the desired outcome.
• Only the meaningful variables should be included.
• The independent variables should be independent of each other. That is, the model should have little or no multicollinearity.
• The independent variables are linearly related to the log odds.
• Logistic regression requires quite large sample sizes.

Keeping the above assumptions in mind, let’s look at our dataset.

## Data

The dataset comes from the UCI Machine Learning repository, and it is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict whether the client will subscribe (1/0) to a term deposit (variable y). The dataset can be downloaded from here.

```
import pandas as pd
import numpy as np
from sklearn import preprocessing
import matplotlib.pyplot as plt
plt.rc("font", size=14)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
sns.set(style="white")
sns.set(style="whitegrid", color_codes=True)
```

The dataset provides the bank customers’ information. It includes 41,188 records and 21 fields.
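The imports above do not load the data itself. A minimal loading sketch, assuming the UCI CSV has been saved locally as `banking.csv` (a hypothetical filename; adjust to wherever you downloaded the file):

```python
import os
import pandas as pd

path = 'banking.csv'  # hypothetical local filename for the UCI download
if os.path.exists(path):
    data = pd.read_csv(path, header=0).dropna()
    print(data.shape)  # the article reports 41,188 records and 21 fields
```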

Figure 1

Input variables

1. age (numeric)
2. job : type of job (categorical: “admin”, “blue-collar”, “entrepreneur”, “housemaid”, “management”, “retired”, “self-employed”, “services”, “student”, “technician”, “unemployed”, “unknown”)
3. marital : marital status (categorical: “divorced”, “married”, “single”, “unknown”)
4. education (categorical: “basic.4y”, “basic.6y”, “basic.9y”, “high.school”, “illiterate”, “professional.course”, “university.degree”, “unknown”)
5. default: has credit in default? (categorical: “no”, “yes”, “unknown”)
6. housing: has housing loan? (categorical: “no”, “yes”, “unknown”)
7. loan: has personal loan? (categorical: “no”, “yes”, “unknown”)
8. contact: contact communication type (categorical: “cellular”, “telephone”)
9. month: last contact month of year (categorical: “jan”, “feb”, “mar”, …, “nov”, “dec”)
10. day_of_week: last contact day of the week (categorical: “mon”, “tue”, “wed”, “thu”, “fri”)
11. duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y=’no’). The duration is not known before a call is performed, also, after the end of the call, y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model
12. campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
13. pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
14. previous: number of contacts performed before this campaign and for this client (numeric)
15. poutcome: outcome of the previous marketing campaign (categorical: “failure”, “nonexistent”, “success”)
16. emp.var.rate: employment variation rate — (numeric)
17. cons.price.idx: consumer price index — (numeric)
18. cons.conf.idx: consumer confidence index — (numeric)
19. euribor3m: euribor 3 month rate — (numeric)
20. nr.employed: number of employees — (numeric)

Predict variable (desired target):

y — has the client subscribed to a term deposit? (binary: “1” means “Yes”, “0” means “No”)

The education column of the dataset has many categories, and we need to reduce them for better modelling. The education column has the following categories:

Figure 2

Let us group “basic.4y”, “basic.6y” and “basic.9y” together and call them “Basic”.

```
data['education'] = np.where(data['education'] == 'basic.9y', 'Basic', data['education'])
data['education'] = np.where(data['education'] == 'basic.6y', 'Basic', data['education'])
data['education'] = np.where(data['education'] == 'basic.4y', 'Basic', data['education'])
```

After grouping, these are the education categories:

Figure 3

## Data exploration

Figure 4

```
count_no_sub = len(data[data['y']==0])
count_sub = len(data[data['y']==1])
pct_of_no_sub = count_no_sub/(count_no_sub+count_sub)
print("percentage of no subscription is", pct_of_no_sub*100)
pct_of_sub = count_sub/(count_no_sub+count_sub)
print("percentage of subscription", pct_of_sub*100)
```

percentage of no subscription is 88.73458288821988

percentage of subscription 11.265417111780131

Our classes are imbalanced, and the ratio of no-subscription to subscription instances is 89:11. Before we go ahead to balance the classes, let’s do some more exploration.

Figure 5

Observations:

• The average age of customers who bought the term deposit is higher than that of the customers who didn’t.
• The pdays (days since the customer was last contacted) is understandably lower for the customers who bought it. The lower the pdays, the better the memory of the last call and hence the better chances of a sale.
• Surprisingly, campaigns (number of contacts or calls made during the current campaign) are lower for customers who bought the term deposit.

We can calculate categorical means for other categorical variables such as education and marital status to get a more detailed sense of our data.
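Such categorical means come from a pandas `groupby`; a small sketch with toy values standing in for the bank data (the numbers below are illustrative, not from the real dataset):

```python
import pandas as pd

# Toy frame standing in for the bank data (hypothetical values)
data = pd.DataFrame({
    'y': [0, 0, 1, 1],
    'age': [30, 35, 50, 55],
    'education': ['Basic', 'high.school', 'Basic', 'university.degree'],
})

# Average of each numeric column per outcome class
print(data.groupby('y').mean(numeric_only=True))

# Subscription rate per education level
print(data.groupby('education')['y'].mean())
```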

Figure 6

Figure 7

## Visualizations

```
%matplotlib inline
pd.crosstab(data.job,data.y).plot(kind='bar')
plt.title('Purchase Frequency for Job Title')
plt.xlabel('Job')
plt.ylabel('Frequency of Purchase')
plt.savefig('purchase_fre_job')
```

Figure 8

The frequency of purchase of the deposit depends a great deal on the job title. Thus, the job title can be a good predictor of the outcome variable.

```
table = pd.crosstab(data.marital, data.y)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart of Marital Status vs Purchase')
plt.xlabel('Marital Status')
plt.ylabel('Proportion of Customers')
plt.savefig('mariral_vs_pur_stack')
```

Figure 9

The marital status does not seem a strong predictor for the outcome variable.

```
table = pd.crosstab(data.education, data.y)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart of Education vs Purchase')
plt.xlabel('Education')
plt.ylabel('Proportion of Customers')
plt.savefig('edu_vs_pur_stack')
```

Figure 10

Education seems a good predictor of the outcome variable.

```
pd.crosstab(data.day_of_week,data.y).plot(kind='bar')
plt.title('Purchase Frequency for Day of Week')
plt.xlabel('Day of Week')
plt.ylabel('Frequency of Purchase')
plt.savefig('pur_dayofweek_bar')
```

Figure 11

Day of week may not be a good predictor of the outcome.

```
pd.crosstab(data.month,data.y).plot(kind='bar')
plt.title('Purchase Frequency for Month')
plt.xlabel('Month')
plt.ylabel('Frequency of Purchase')
plt.savefig('pur_fre_month_bar')
```

Figure 12

Month might be a good predictor of the outcome variable.

```
data.age.hist()
plt.title('Histogram of Age')
plt.xlabel('Age')
plt.ylabel('Frequency')
plt.savefig('hist_age')
```

Figure 13

Most of the customers of the bank in this dataset are in the age range of 30–40.

```
pd.crosstab(data.poutcome,data.y).plot(kind='bar')
plt.title('Purchase Frequency for Poutcome')
plt.xlabel('Poutcome')
plt.ylabel('Frequency of Purchase')
plt.savefig('pur_fre_pout_bar')
```

Figure 14

Poutcome seems to be a good predictor of the outcome variable.

## Create dummy variables

That is, variables that take only two values, zero and one.

```
cat_vars = ['job','marital','education','default','housing','loan','contact','month','day_of_week','poutcome']
for var in cat_vars:
    cat_list = pd.get_dummies(data[var], prefix=var)
    data = data.join(cat_list)

data_vars = data.columns.values.tolist()
to_keep = [i for i in data_vars if i not in cat_vars]
```

Our final data columns will be:

```
data_final = data[to_keep]
data_final.columns.values
```

Figure 15

## Over-sampling using SMOTE

With our training data created, I’ll up-sample the minority class (subscription) using the SMOTE algorithm (Synthetic Minority Oversampling Technique). At a high level, SMOTE:

1. Creates synthetic samples from the minority class instead of creating copies.
2. Randomly chooses one of the k nearest neighbours of a minority-class observation and uses it to create a similar, but randomly tweaked, new observation.
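The interpolation in step 2 can be sketched with NumPy. This is a toy illustration of the idea, not the imbalanced-learn internals:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, 2.0])          # a minority-class sample (toy values)
neighbor = np.array([3.0, 6.0])   # one of its k nearest minority-class neighbours

# The synthetic point lies somewhere on the segment between the two
gap = rng.random()                # random fraction in [0, 1)
synthetic = x + gap * (neighbor - x)
print(synthetic)
```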

We are going to implement SMOTE in Python.

```
X = data_final.loc[:, data_final.columns != 'y']
y = data_final.loc[:, data_final.columns == 'y']

from imblearn.over_sampling import SMOTE

os = SMOTE(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
columns = X_train.columns

os_data_X, os_data_y = os.fit_resample(X_train, y_train)  # fit_sample() in older imbalanced-learn
os_data_X = pd.DataFrame(data=os_data_X, columns=columns)
os_data_y = pd.DataFrame(data=os_data_y, columns=['y'])

# Check the numbers in our data
print("length of oversampled data is ", len(os_data_X))
print("Number of no subscription in oversampled data", len(os_data_y[os_data_y['y']==0]))
print("Number of subscription", len(os_data_y[os_data_y['y']==1]))
print("Proportion of no subscription data in oversampled data is ", len(os_data_y[os_data_y['y']==0])/len(os_data_X))
print("Proportion of subscription data in oversampled data is ", len(os_data_y[os_data_y['y']==1])/len(os_data_X))
```

Figure 16

Now we have perfectly balanced data! You may have noticed that I over-sampled only the training data: by oversampling only on the training data, none of the information in the test data is used to create synthetic observations, so no information bleeds from the test data into model training.

## Recursive Feature Elimination

Recursive Feature Elimination (RFE) is based on the idea to repeatedly construct a model and choose either the best or worst performing feature, setting the feature aside and then repeating the process with the rest of the features. This process is applied until all features in the dataset are exhausted. The goal of RFE is to select features by recursively considering smaller and smaller sets of features.

```
data_final_vars = data_final.columns.values.tolist()
y = ['y']
X = [i for i in data_final_vars if i not in y]

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
rfe = RFE(logreg, n_features_to_select=20)  # the second argument was positional in older scikit-learn
rfe = rfe.fit(os_data_X, os_data_y.values.ravel())
print(rfe.support_)
print(rfe.ranking_)
```

Figure 16

The RFE has helped us select the following features: “euribor3m”, “job_blue-collar”, “job_housemaid”, “marital_unknown”, “education_illiterate”, “default_no”, “default_unknown”, “contact_cellular”, “contact_telephone”, “month_apr”, “month_aug”, “month_dec”, “month_jul”, “month_jun”, “month_mar”, “month_may”, “month_nov”, “month_oct”, “poutcome_failure”, “poutcome_success”.

```
cols = ['euribor3m', 'job_blue-collar', 'job_housemaid', 'marital_unknown', 'education_illiterate', 'default_no', 'default_unknown',
        'contact_cellular', 'contact_telephone', 'month_apr', 'month_aug', 'month_dec', 'month_jul', 'month_jun', 'month_mar',
        'month_may', 'month_nov', 'month_oct', 'poutcome_failure', 'poutcome_success']
X = os_data_X[cols]
y = os_data_y['y']
```

## Implementing the model

```
import statsmodels.api as sm

logit_model = sm.Logit(y, X)
result = logit_model.fit()
print(result.summary2())
```

Figure 17

The p-values for most of the variables are smaller than 0.05; four variables have p-values above 0.05, so we will remove them.

```
cols = ['euribor3m', 'job_blue-collar', 'job_housemaid', 'marital_unknown', 'education_illiterate',
        'month_apr', 'month_aug', 'month_dec', 'month_jul', 'month_jun', 'month_mar',
        'month_may', 'month_nov', 'month_oct', 'poutcome_failure', 'poutcome_success']
X = os_data_X[cols]
y = os_data_y['y']

logit_model = sm.Logit(y, X)
result = logit_model.fit()
print(result.summary2())
```

Figure 18

## Logistic Regression Model Fitting

```
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
```

Figure 19

Predicting the test set results and calculating the accuracy

```
y_pred = logreg.predict(X_test)
print('Accuracy of logistic regression classifier on test set: {:.2f}'.format(logreg.score(X_test, y_test)))
```

Accuracy of logistic regression classifier on test set: 0.74

## Confusion Matrix

```
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)  # avoid shadowing the imported function
print(cm)
```

[[6124 1542]

[2505 5170]]

The result is telling us that we have 6124+5170 correct predictions and 2505+1542 incorrect predictions.
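We can double-check the reported 0.74 accuracy directly from these counts:

```python
# Counts taken from the confusion matrix above
tn, fp = 6124, 1542
fn, tp = 2505, 5170

accuracy = (tn + tp) / (tn + fp + fn + tp)
print(round(accuracy, 2))  # 0.74
```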

## Compute precision, recall, F-measure and support

To quote from Scikit Learn:

The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier to not label a sample as positive if it is negative.

The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.

The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall, where an F-beta score reaches its best value at 1 and worst score at 0.

The F-beta score weights the recall more than the precision by a factor of beta. beta = 1.0 means recall and precision are equally important.

The support is the number of occurrences of each class in y_test.
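Applying these definitions to the confusion-matrix counts reported earlier (tp = 5170, fp = 1542, fn = 2505) gives the positive-class metrics:

```python
tp, fp, fn = 5170, 1542, 2505

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # F-beta with beta = 1

print(round(precision, 2))  # 0.77
print(round(recall, 2))     # 0.67
print(round(f1, 2))         # 0.72
```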

```
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
```

Figure 20

Interpretation: averaged over the whole test set, both precision and recall come out at about 0.74. In other words, roughly 74% of the term deposits the model promoted were ones the customers wanted, and roughly 74% of the term deposits the customers wanted were promoted by the model.

## ROC Curve

```
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve

logit_roc_auc = roc_auc_score(y_test, logreg.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, logreg.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()
```

Figure 21

The receiver operating characteristic (ROC) curve is another common tool used with binary classifiers. The dotted line represents the ROC curve of a purely random classifier; a good classifier stays as far away from that line as possible (toward the top-left corner).

The Jupyter notebook used to make this post is available here. I would be pleased to receive feedback or questions on any of the above.

If you liked this post, share it with all of your programming buddies!

#python #machine-learning


## Lambda, Map, Filter functions in python

Welcome to my blog! In this article, we will learn about Python's lambda function, map function, and filter function.

Lambda function in Python: a lambda is a one-line anonymous function. It takes any number of arguments but can only have one expression. The Python lambda syntax is:

Syntax: x = lambda arguments : expression

Now I will show you some Python lambda function examples:
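Here are a few such examples, showing a lambda on its own and lambdas combined with map and filter:

```python
# A one-line anonymous function assigned to a name
square = lambda x: x ** 2
print(square(4))  # 16

# map() applies a function to every item of an iterable
nums = [1, 2, 3, 4]
print(list(map(lambda x: x * 2, nums)))  # [2, 4, 6, 8]

# filter() keeps only the items for which the function returns True
print(list(filter(lambda x: x % 2 == 0, nums)))  # [2, 4]
```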



## Why use Python for Software Development

Few programming languages are as versatile as Python. It enables building cutting-edge applications with ease, and developers are still exploring the full potential of end-to-end Python development services in various areas.

By areas, we mean FinTech, HealthTech, InsureTech, Cybersecurity, and more. These are New Economy sectors, and Python has the ability to serve every one of them. Most of them require massive computational capability, and Python's code is dynamic and powerful, equipped to handle heavy traffic and substantial algorithmic loads.

Software development is multidimensional today. Enterprise software needs intelligent applications with AI and ML capabilities, while consumer-facing applications need data analysis to deliver a better client experience. Netflix, Trello, and Amazon are real examples of such applications, and Python helps build them with ease.

## 5 Reasons to Utilize Python for Programming Web Apps

Python can do so many things that developers can't find enough reasons to admire it. Python application development isn't restricted to web and enterprise applications; it is exceptionally adaptable and superb for a wide range of uses.

Robust frameworks

Python is known for its tools and frameworks; there's a framework for everything. Django is helpful for building web applications, enterprise applications, scientific applications, and numerical computing. Flask is another web development framework, a lightweight one with minimal dependencies.

Web2Py, CherryPy, and Falcon offer strong capabilities for customizing Python development services. Most of them are open-source frameworks that allow rapid development.

Simple to read and write

Python has a streamlined syntax, one that reads much like English. New Python developers can easily tell where they stand in the development process, and the ease of writing allows quick application building.

The motivation behind building Python, as stated by its creator Guido van Rossum, was to enable even beginner engineers to understand the language. The simple coding style also lets developers make quick changes without getting lost in unnecessary detail.

Utilized by the best

Alright, Python isn't just another programming language. It must have something going for it, which is why industry giants use it, and for varied purposes at that. Developers at Google use Python to build system administration tools, a parallel data pusher, code review, testing and QA, and much more. Netflix uses Python for its recommendation algorithm and media player.

Massive community support

Python has a steadily growing community that offers enormous support, from beginners to experts. There are plenty of tutorials, documentation, and guides available for Python web development.

Today, many universities start with Python, adding to the number of people in the community. Python developers often collaborate on projects and help each other with algorithmic, functional, and application problem-solving.

Progressive applications

Python is the biggest contributor to data science, machine learning, and artificial intelligence at any enterprise software development company. Its use cases in cutting-edge applications are the most compelling reason for its success; Python is the second most popular tool after R for data analytics.

The ease of organizing, managing, and visualizing data through dedicated libraries makes it ideal for data-driven applications. TensorFlow for neural networks and OpenCV for computer vision are two of Python's most popular use cases in machine learning.

### Summary

Considering the advances in programming and technology, Python is a yes for a diverse range of applications. Game development, web application development, GUI development, ML and AI development, enterprise and consumer applications: every one of them uses Python to its full potential.

The disadvantages of Python web development are often overlooked by developers and organizations because of the advantages it provides. They prioritize quality over speed and performance over errors. That is why it makes sense to use Python for building the applications of the future.

#python development services #python development company #python app development #python development #python in web development #python software development


## Python Tricks Every Developer Should Know

Python is awesome. It's one of the easiest languages, with simple and intuitive syntax. But wait, have you ever thought there might be ways to write your Python code even more simply?

In this tutorial, you’re going to learn a variety of Python tricks that you can use to write your Python code in a more readable and efficient way like a pro.

### Let’s get started

Swapping values in Python

Instead of creating a temporary variable to hold one value while swapping, you can do this instead:

```
>>> FirstName = "kalebu"
>>> LastName = "Jordan"
>>> FirstName, LastName = LastName, FirstName
>>> print(FirstName, LastName)
Jordan kalebu
```



## How to Remove all Duplicate Files on your Drive via Python

Today you’re going to learn how to use Python programming in a way that can ultimately save a lot of space on your drive by removing all the duplicates.

### Intro

In many situations you may find yourself having duplicate files on your disk, but when it comes to tracking and checking them manually, it can be tedious.

Here's a solution

Instead of searching through your disk by hand to see if there is a duplicate, you can automate the process with code: write a program that recursively walks through the disk and removes all the duplicates it finds. That's what this article is about.

But how do we do it?

If we were to read each whole file and then compare it to every other file recursively through the given directory, it would take a very long time. So how do we do it?

The answer is hashing. With hashing we can generate a string of letters and numbers that acts as the identity of a given file, and if we find any other file with the same identity, we delete it.

There's a variety of hashing algorithms out there, such as:

• md5
• sha1
• sha224, sha256, sha384 and sha512
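A sketch of the hashing idea using sha256 from Python's standard hashlib: fingerprint each file, then group paths that share a fingerprint. The helper names here are illustrative, and this version only reports duplicates rather than deleting them:

```python
import hashlib
import os

def file_hash(path, algo='sha256', chunk_size=8192):
    # Read in chunks so large files don't have to fit in memory
    h = hashlib.new(algo)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root):
    # Map each hash to all the paths that share it
    seen = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            seen.setdefault(file_hash(path), []).append(path)
    # Keep only hashes with more than one path: those are duplicates
    return {h: paths for h, paths in seen.items() if len(paths) > 1}
```

To actually remove duplicates, you would keep the first path in each group and `os.remove()` the rest.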



## How To Compare Tesla and Ford Company By Using Magic Methods in Python

Magic methods are special methods that give us the ability to hook into built-in syntactic features such as ‘<’, ‘>’, ‘==’, ‘+’, etc.

You may have worked with such methods without knowing they were magic methods. Magic methods can be identified by their names, which start and end with `__`, like `__init__`, `__call__`, and `__str__`. These methods are also called dunder methods, because their names start and end with a Double UNDERscore (dunder).

Now there are a number of such special methods, which you might have come across too, in Python. We will just be taking an example of a few of them to understand how they work and how we can use them.

### 1. `__init__`

```
class AnyClass:
    def __init__(self):
        print("Init called on its own")

obj = AnyClass()
```

The first example is `__init__`, and as the name suggests, it is used for initializing objects. The init method is called on its own; that is, whenever an object is created for the class, the init method is invoked automatically.

The output of the above code will be given below. Note how we did not call the init method and it got invoked as we created an object for class AnyClass.

```
Init called on its own
```

Let's move to another example: `__add__` gives us the ability to hook into the built-in `+` operator. Let's see how.

```
class AnyClass:
    def __init__(self, var):
        self.some_var = var