Feature Scaling: Why Do We Go for Feature Scaling?

What is Feature Scaling?
Feature scaling is applied to a dataset to bring all the different types of data to a single format. It is done on the independent variables.
Some algorithms use Euclidean distance to compute the target. If the data varies in magnitude and units, the distance between the independent variables will be dominated by the features with larger values. So we bring the data into a form where the independent variables look similar and do not vary much in magnitude.

#standardscalar #scaling-pandas #minmaxscalar #feature-scaling

Fannie Zemlak


What's new in Go 1.15

Go 1.15 was announced on 11 Aug 2020. Highlighted updates and features include substantial improvements to the Go linker, improved allocation for small objects at high core counts, X.509 CommonName deprecation, GOPROXY support for skipping proxies that return errors, a new embedded tzdata package, several core library improvements, and more.

As Go promises backward compatibility, after upgrading to the latest Go 1.15 version, almost all existing Go applications and programs continue to compile and run as they did on older versions.

#go #golang #go 1.15 #go features #go improvement #go package #go new features

Nat Kutch


Feature Transformation and Scaling Techniques


  1. Understand the need for feature transformation and scaling techniques
  2. Get to know different feature transformation and scaling techniques, including:
  • MinMax Scaler
  • Standard Scaler
  • Power Transformer Scaler
  • Unit Vector Scaler/Normalizer


In my machine learning journey, more often than not, I have found that feature preprocessing is a more effective technique in improving my evaluation metric than any other step, like choosing a model algorithm, hyperparameter tuning, etc.

Feature preprocessing is one of the most crucial steps in building a Machine learning model. Too few features and your model won’t have much to learn from. Too many features and we might be feeding unnecessary information to the model. Not only this, but the values in each of the features need to be considered as well.

We know that there are some set rules of dealing with categorical data, as in, encoding them in different ways. However, a large chunk of the process involves dealing with continuous variables. There are various methods of dealing with continuous variables. Some of them include converting them to a normal distribution or converting them to categorical variables, etc.
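As a quick illustration of those two options, here is a minimal sketch (the ages and bin edges are invented for the example): a log transform to pull a skewed continuous variable toward a more normal shape, and binning with pandas to turn it into a categorical variable.

```python
import numpy as np
import pandas as pd

# Hypothetical skewed continuous variable
ages = pd.Series([22, 25, 31, 38, 45, 52, 70])

# Option 1: log transform (log1p handles zeros safely)
log_ages = np.log1p(ages)

# Option 2: convert to categorical bins
age_groups = pd.cut(ages, bins=[0, 30, 50, 100],
                    labels=["young", "middle", "senior"])
print(age_groups.tolist())
```

Either form can then be fed to a model in place of the raw values.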


There are a couple of go-to techniques I always use regardless of the model I am using, or whether it is a classification task or regression task, or even an unsupervised learning model. These techniques are:

  • Feature Transformation and
  • Feature Scaling.

To get started with Data Science and Machine Learning, check out our course — Applied Machine Learning — Beginner to Professional.

Table of Contents

  1. Why do we need Feature Transformation and Scaling?
  2. MinMax Scaler
  3. Standard Scaler
  4. MaxAbsScaler
  5. Robust Scaler
  6. Quantile Transformer Scaler
  7. Log Transformation
  8. Power Transformer Scaler
  9. Unit Vector Scaler/Normalizer

Why do we need Feature Transformation and Scaling?

Oftentimes, we have datasets in which different columns have different units — like one column can be in kilograms, while another can be in centimeters. Furthermore, we can have columns like income, which can range from 20,000 to 100,000 and beyond, while an age column ranges from 0 to 100 (at most). Thus, income is about 1,000 times larger than age.

But how can we be sure that the model treats both these variables equally? When we feed these features to the model as is, there is every chance that income will influence the result more due to its larger values. But this doesn't necessarily mean it is more important as a predictor. So, to give importance to both age and income, we need feature scaling.
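To see this numerically, here is a small sketch (the two hypothetical people and their values are invented for illustration) comparing the Euclidean distance between two points before and after scaling:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two hypothetical people: (income, age)
X = np.array([[20000, 25],
              [100000, 60]])

# Unscaled: the income gap dominates the distance almost entirely
raw_dist = np.linalg.norm(X[0] - X[1])

# Scaled to [0, 1]: both features contribute comparably
X_scaled = MinMaxScaler().fit_transform(X)
scaled_dist = np.linalg.norm(X_scaled[0] - X_scaled[1])

print(raw_dist, scaled_dist)
```

Before scaling, the distance is roughly 80,000 — essentially the income difference alone; after scaling, both features carry equal weight.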

In most examples of machine learning models, you would have observed either the Standard Scaler or MinMax Scaler. However, the powerful sklearn library offers many other scaling techniques and feature transformations as well, which we can leverage depending on the data we are dealing with. So, what are you waiting for?

Let us explore them one by one with Python code.

We will work with a simple dataframe:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt 
%matplotlib inline
df = pd.DataFrame({'Income': [15000, 1800, 120000, 10000],
                   'Age': [25, 18, 42, 51],
                   'Department': ['HR', 'Legal', 'Marketing', 'Management']})

Before directly applying any feature transformation or scaling technique, we need to deal with the categorical column Department, because we cannot scale non-numeric values.

For that, we first create a copy of our dataframe, store the numerical feature names in a list, and extract their values:

df_scaled = df.copy()
col_names = ['Income', 'Age']
features = df_scaled[col_names]

We will execute this snippet before using a new scaler every time.

MinMax Scaler

The MinMax scaler is one of the simplest scalers to understand. It just scales all the data between 0 and 1. The formula for calculating the scaled value is:

x_scaled = (x - x_min) / (x_max - x_min)

Thus, a point to note is that it does so for every feature separately. Though (0, 1) is the default range, we can define our range of max and min values as well. How to implement the MinMax scaler?

1 — We will first need to import it

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()

2 — Apply it on only the values of the features:

df_scaled[col_names] = scaler.fit_transform(features.values)

What do the scaled values look like?


You can see how the values were scaled: the minimum value in each column became 0, and the maximum value became 1, with the other values in between. However, suppose we don't want the income or age to have values like 0. Let us take the range to be (5, 10).

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(5, 10))

df_scaled[col_names] = scaler.fit_transform(features.values)

Now each column's minimum maps to 5 and its maximum to 10, with the other values in between.


Amazing, right? The min-max scaler lets you set the range in which you want the variables to be.

Standard Scaler

Just like the MinMax Scaler, the Standard Scaler is another popular scaler that is very easy to understand and implement.

For each feature, the Standard Scaler scales the values so that the mean is 0 and the standard deviation (and hence the variance) is 1.

x_scaled = (x - mean) / std_dev

However, the Standard Scaler works best when the distribution of the variable is approximately normal. Thus, if the variables are not normally distributed, we can

  1. either choose a different scaler
  2. or first convert the variables to a normal distribution and then apply this scaler

Implementing the Standard Scaler is very similar to implementing a MinMax scaler. Just like before, we will first import StandardScaler and then use it to transform our variable.
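Following the same pattern as the MinMax example, a minimal self-contained sketch might look like this (same toy dataframe, numeric columns only):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'Income': [15000, 1800, 120000, 10000],
                   'Age': [25, 18, 42, 51]})

scaler = StandardScaler()
df_std = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)

# Each column now has mean ~0 and (population) standard deviation 1
print(df_std.mean().round(6))
print(df_std.std(ddof=0).round(6))
```

Note that sklearn uses the population standard deviation (ddof=0), which is why the check below uses ddof=0 rather than pandas' default.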

#feature-engineering #feature-scaling #scikit-learn #deep learning

Zander Herzog


Secure HTTPS servers in Go

In this article, we are going to look at some of the basic APIs of the http package to create and initialize HTTPS servers in Go.


In the “Simple Hello World Server” lesson, we learned about net/http package, how to create routes and how [ServeMux](https://golang.org/pkg/net/http/#ServeMux) works. In the “Running multiple HTTP servers” lesson, we learned about [Server](https://golang.org/pkg/net/http/#Server) structure and how to run multiple HTTP servers concurrently.

In this lesson, we are going to create an HTTPS server using both Go's standard server configuration and a custom configuration (using the [Server](https://golang.org/pkg/net/http/#Server) structure). But before that, we need to know what HTTPS really is.

HTTPS is a big topic of discussion in itself. Hence, while writing this lesson, I published a separate article just on how HTTPS works. I advise you to read that article first before continuing with this one. In it, I've also described the encryption paradigm and the SSL certificate generation process.

If we recall the simplest HTTP server example from previous lessons, we only need the http.[ListenAndServe](https://golang.org/pkg/net/http/#ListenAndServe) function to start an HTTP server and http.[HandleFunc](https://golang.org/pkg/net/http/#HandleFunc) to register a response handler for a particular endpoint.


When we run the command go run server.go, it starts an HTTP server on port 9000. By visiting the http://localhost:9000 URL in a browser, you will be able to see a Hello World! message on the screen.
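Since the original code listing is not reproduced here, a minimal sketch of the server described in the text might look like this (the fetchHello self-request and the sync.Once guard are additions to make the example self-checking; the original simply starts the server and blocks):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

var startOnce sync.Once

// helloHandler writes the response described in the text.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello World!")
}

// startServer registers the handler and starts the listener exactly once.
func startServer() {
	startOnce.Do(func() {
		http.HandleFunc("/", helloHandler)
		// A nil handler makes Go fall back to DefaultServeMux.
		go http.ListenAndServe(":9000", nil)
		time.Sleep(100 * time.Millisecond) // crude wait for the listener
	})
}

// fetchHello requests "/" once and returns the response body.
func fetchHello() string {
	startServer()
	resp, err := http.Get("http://localhost:9000")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(fetchHello())
}
```

In a real server you would just call http.ListenAndServe directly in main and let it block.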


As we know, the nil argument to the ListenAndServe() call tells Go to use the [DefaultServeMux](https://golang.org/pkg/net/http/#DefaultServeMux) response multiplexer, which is the default instance of the ServeMux structure provided globally by Go. The HandleFunc() call adds a response handler for a specific route on the multiplexer instance.

The http.ListenAndServe() call uses Go's standard HTTP server configuration; however, as we saw in the previous lesson, we can customize a server using the [Server](https://golang.org/pkg/net/http/#Server) structure type.

To start an HTTPS server, all we need to do is call the ListenAndServeTLS method with some configuration. Just like the ListenAndServe method, this method is available on both the http package and the Server structure.

The http.[ListenAndServeTLS](https://golang.org/pkg/net/http/#ListenAndServeTLS) function uses Go's standard server implementation; however, both a [Server](https://golang.org/pkg/net/http/#Server) instance and the Server.[ListenAndServeTLS](https://golang.org/pkg/net/http/#Server.ListenAndServeTLS) method can be configured for our needs.
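As a sketch of serving over TLS: a runnable example can't ship real certificate files, so this uses httptest.NewTLSServer, which generates a throwaway self-signed certificate. The production call shape, with placeholder file names, is shown in the comment.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// tlsBody spins up a throwaway HTTPS server and returns the body served
// at "/". In production you would instead call, with your own certificate
// files (the file names here are placeholders):
//
//	http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux)
func tlsBody() string {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello over TLS!")
	})

	// NewTLSServer generates a self-signed certificate for us, so the
	// example runs without cert.pem/key.pem on disk.
	ts := httptest.NewTLSServer(mux)
	defer ts.Close()

	// ts.Client() returns a client that trusts the generated certificate.
	resp, err := ts.Client().Get(ts.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(tlsBody())
}
```

The request goes over HTTPS end to end; only the certificate provisioning differs from a production ListenAndServeTLS setup.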

#go-programming-language #go #golang-tutorial #go-programming #golang