Principal Component Analysis (PCA) can be a difficult topic for beginners in machine learning. Here, I will try my best to explain intuitively what it is and how the algorithm does what it does. This post assumes only very basic knowledge of linear algebra, such as matrix multiplication and vectors.

What is PCA?

PCA is a dimensionality-reduction technique used to compress large datasets with many features into smaller datasets with fewer features, while retaining as much information about the original dataset as possible.

A perfect example would be:

[Figure: a dataset with five features reduced to two]

Notice that the original dataset had five features, which could be reduced to two. These two features _generalize_ the five on the left.
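To make this concrete, here is a minimal sketch of that reduction in NumPy. The five features (size, rooms, a price proxy, a noise column, and a near-duplicate of rooms) are my own illustrative stand-ins, not taken from the article's figure:

```python
import numpy as np

# Hypothetical 5-feature housing dataset (rows = houses); the feature
# choices are illustrative, not from the article.
rng = np.random.default_rng(0)
size = rng.uniform(50, 200, 100)                     # m^2
rooms = size / 30 + rng.normal(0, 0.5, 100)          # correlated with size
X = np.column_stack([
    size,
    rooms,
    size * 10 + rng.normal(0, 50, 100),              # price proxy
    rng.normal(0, 1, 100),                           # pure-noise feature
    rooms + rng.normal(0, 0.2, 100),                 # near-duplicate feature
])

# PCA: center the data, eigendecompose the covariance matrix,
# and project onto the two eigenvectors with the largest eigenvalues.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending order
top2 = eigvecs[:, ::-1][:, :2]                       # two largest components
X_reduced = Xc @ top2

print(X.shape, "->", X_reduced.shape)                # (100, 5) -> (100, 2)
```

The two columns of `X_reduced` play the role of the two "generalized" features on the right of the figure.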

Visualizing the idea of PCA.

To picture what's happening, let's continue with the housing example. A 2-dimensional plot showing the correlation between the size of a house and its number of rooms can be compressed into a single feature, as shown below:

[Figure: houses plotted as size vs. number of rooms, with a line fitted through them]

If we project the houses on the black line, we would get something like this:

[Figure: the houses projected onto the black line, with blue lines marking each point's projection distance]

So we need to minimize that projection error (the total magnitude of the blue lines) in order to retain the maximum information.
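The idea that the first principal component is the line with the smallest projection error can be checked numerically. This is a sketch with made-up size/rooms data; the comparison direction `[0, 1]` is an arbitrary alternative line, chosen just to show the contrast:

```python
import numpy as np

# Toy 2-D data: house size vs. number of rooms (illustrative values).
rng = np.random.default_rng(1)
size = rng.uniform(50, 200, 50)
rooms = size / 30 + rng.normal(0, 0.4, 50)
X = np.column_stack([size, rooms])
Xc = X - X.mean(axis=0)                       # center the data

def projection_error(Xc, direction):
    """Sum of squared distances from each point to the line through
    the centroid along `direction` (the 'blue lines', squared)."""
    d = direction / np.linalg.norm(direction)
    projected = (Xc @ d)[:, None] * d         # points projected onto the line
    return np.sum((Xc - projected) ** 2)

# First principal component: eigenvector of the covariance matrix
# with the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1 = eigvecs[:, -1]

print(projection_error(Xc, pc1))                    # smallest possible
print(projection_error(Xc, np.array([0.0, 1.0])))   # much larger
```

Minimizing the projection error and maximizing the variance of the projected points are two views of the same optimization, which is why PCA picks this direction.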

Prerequisites to understanding the PCA Algorithm.

I will explain some concepts intuitively in order for you to understand the algorithm better.


The mean of a dataset is its point of equilibrium. Imagine a rod on which balls are placed at various distances x from the wall:


Summing the distances of the balls from the wall and dividing by the number of balls gives the point of equilibrium: the point where a pivot would balance the rod.
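In code, with a few illustrative ball positions of my own choosing, the balance-point property is easy to verify: at the mean, the deviations on either side cancel exactly.

```python
import numpy as np

# Balls on a rod, at distances x from the wall (illustrative values).
x = np.array([1.0, 2.0, 4.0, 9.0])

# Sum of distances divided by the number of balls: the pivot point.
pivot = x.sum() / len(x)        # 4.0, same as x.mean()

# At the pivot the deviations cancel, so the rod balances.
print(np.sum(x - pivot))        # 0.0
```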

#principal-component #dimensionality-reduction #machine-learning #data-science #python #data-analysis

Intuitive Explanation for Principal Component Analysis (PCA)