Understanding Dimensionality Reduction in ML

ML (Machine Learning) algorithms are developed and tested on a collection of input variables known as a feature set. Reducing the number of input variables in the feature set often improves the performance of a particular ML model/algorithm and makes it easier to train.

For example, suppose you have a dataset with a large number of columns, or an array of points in 3-D space. In such cases, you can reduce the dimensions of your dataset by applying dimensionality reduction techniques. PCA (Principal Component Analysis) is one of the most widely used dimensionality reduction techniques among ML developers/testers. Let us dive deeper into understanding PCA in machine learning.

Principal Component Analysis

PCA is an unsupervised statistical technique used to reduce the dimensions of a dataset. ML models with many input variables (high dimensionality) tend to perform poorly and are harder to train. PCA helps identify relationships among different variables and combine them into a smaller set of new variables. PCA rests on a few assumptions that should hold for the technique to work well, which also helps developers maintain a standard.

PCA involves transforming the variables in the dataset into a new set of variables called PCs (Principal Components). The number of principal components is equal to the number of original variables in the given dataset.

The first principal component (PC1) captures the maximum variation present in the original variables, and the captured variation decreases with each subsequent component. The final PC captures the least variation, so you can discard the low-variance components and thereby reduce the dimensions of your feature set.
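As a quick illustration of this ordering (a minimal sketch, assuming scikit-learn and NumPy are available, with a small randomly generated dataset standing in for real data):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical dataset: 200 samples with 5 correlated features
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(200, 3))])

# Keep all components so we can inspect how much variance each one captures
pca = PCA(n_components=X.shape[1])
pca.fit(X)

# explained_variance_ratio_ is sorted in decreasing order: PC1 captures the most
print(pca.explained_variance_ratio_)
```

The printed ratios shrink from the first component to the last, which is exactly the ordering that lets you drop the trailing components with little loss of information.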

Assumptions in PCA

There are a few assumptions in PCA that should hold for this dimensionality reduction technique to work accurately. The assumptions in PCA are:

• There must be linearity in the dataset, i.e. the variables combine in a linear manner to form the dataset and exhibit relationships among themselves.

• PCA assumes that the principal components with high variance carry the important signal, while the PCs with lower variance can be disregarded as noise. PCA originated from the framework of the Pearson correlation coefficient, where it was first assumed that only the axes with high variance would be turned into principal components.

• All variables should be measured at the same (ratio) level of measurement. A commonly preferred norm is at least 150 observations in the sample set, with a ratio of at least five observations per variable (5:1).

• Extreme values that deviate from the other data points in the dataset, also called outliers, should be few. A large number of outliers usually indicates experimental errors and will degrade your ML model/algorithm.

• The feature set must be correlated; the reduced feature set obtained after applying PCA will then represent the original dataset effectively, but with fewer dimensions. A quick way to check this assumption is sketched below.
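The following is a minimal sketch of such a check, assuming NumPy and a numeric feature matrix with samples in rows (the function name and the 0.3 threshold are illustrative choices, not part of PCA itself):

```python
import numpy as np

def correlation_check(X: np.ndarray, threshold: float = 0.3) -> bool:
    """Return True if at least one pair of features is noticeably correlated."""
    corr = np.corrcoef(X, rowvar=False)                 # feature-by-feature correlation matrix
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)] # ignore the diagonal of ones
    return bool(np.any(np.abs(off_diag) >= threshold))

# Example usage with a hypothetical feature matrix
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4))
X[:, 3] = X[:, 0] * 2 + 0.1 * rng.normal(size=150)      # make one feature depend on another
print(correlation_check(X))                             # True: PCA can exploit this redundancy
```

If every off-diagonal correlation is near zero, PCA has little redundancy to compress and the reduced feature set will not represent the data much more compactly.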

Steps for Applying PCA

The steps for applying PCA to the feature set of any ML model/algorithm are as follows:

• Normalisation of the data is necessary before applying PCA, because unscaled variables distort the relative comparison within the dataset. For example, if we have a column of numbers in a 2-D dataset, the mean of that column is subtracted from every value to centre the data; the same normalisation can be applied to a 3-D dataset. A minimal sketch of this step is shown below.
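Here is one way this step might look (a minimal sketch, assuming NumPy and a feature matrix with samples in rows; it also divides by the standard deviation, a common extra step when features sit on very different scales):

```python
import numpy as np

def normalise(X: np.ndarray) -> np.ndarray:
    """Centre each column on zero mean and scale it to unit variance."""
    mean = X.mean(axis=0)        # per-feature mean
    std = X.std(axis=0)          # per-feature standard deviation
    std[std == 0] = 1.0          # avoid division by zero for constant columns
    return (X - mean) / std

# Example usage with a hypothetical 2-D dataset (rows = samples, columns = features)
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
X_scaled = normalise(X)
print(X_scaled.mean(axis=0))     # approximately zero for every column
```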

• Once you have normalised the dataset, compute the covariance between every pair of dimensions and arrange the results in a covariance matrix. The off-diagonal elements of the covariance matrix represent the covariance between each pair of variables, and the diagonal elements represent the variance of each variable/dimension.

A covariance matrix constructed for any dataset is always symmetric. The covariance matrix represents the relationships in the data, and from it you can easily work out how much variance each principal component captures. A sketch of this step follows.
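A minimal sketch of this step, assuming NumPy and the small centred dataset from the previous sketch:

```python
import numpy as np

# Small hypothetical dataset, centred and scaled as in the previous sketch
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

# Covariance matrix: rowvar=False tells NumPy that each column is a variable
C = np.cov(X_scaled, rowvar=False)

print(np.allclose(C, C.T))   # True: the covariance matrix is symmetric
print(np.diag(C))            # diagonal entries are the per-variable variances
```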

• You then have to find the eigenvalues of the covariance matrix, which represent the amount of variability in the data along orthogonal axes, and the eigenvectors of the covariance matrix, which represent the directions along which the maximum variance in the data occurs.

Suppose λ is an eigenvalue of the covariance matrix 'C'. Then λ must satisfy the characteristic equation det(C − λI) = 0, where 'I' is an identity matrix of the same dimension as 'C'. You should check that the covariance matrix is a symmetric square matrix, because only then are the eigenvalues guaranteed to be real and the eigenvectors orthogonal.
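In practice the eigenvalues and eigenvectors are computed numerically rather than by solving the characteristic equation by hand. A minimal sketch, assuming NumPy and the covariance matrix built in the previous sketch:

```python
import numpy as np

# Covariance matrix of the small hypothetical dataset used in the earlier sketches
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
C = np.cov(X_scaled, rowvar=False)

# eigh is designed for symmetric matrices such as a covariance matrix; it returns
# real eigenvalues in ascending order and orthonormal eigenvectors as columns
eigenvalues, eigenvectors = np.linalg.eigh(C)

print(eigenvalues)     # variance explained along each principal axis
print(eigenvectors)    # each column is a direction (eigenvector)
```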

• Arrange the eigenvalues in descending order and select the largest ones. You can choose how many eigenvalues you want to keep. You will lose some information by ignoring the smaller eigenvalues, but those small values do not contribute much to the final result.

The number of selected eigenvalues becomes the number of dimensions of your updated feature set. We also form a feature vector, which is a matrix whose columns are the eigenvectors corresponding to the chosen eigenvalues, as sketched below.
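A minimal sketch of this selection step, assuming NumPy, the eigen-decomposition from the previous sketch, and a hypothetical choice of k = 1 components:

```python
import numpy as np

# Eigen-decomposition of the covariance matrix from the earlier sketches
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
C = np.cov(X_scaled, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(C)

# Sort eigenvalues (and matching eigenvectors) in descending order
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Keep the top k components; the feature vector holds their eigenvectors as columns
k = 1
feature_vector = eigenvectors[:, :k]
print(eigenvalues[:k] / eigenvalues.sum())   # fraction of variance retained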

• Using the feature vector, we compute the principal components of the dataset under analysis. We multiply the transpose of the feature vector by the transpose of the scaled matrix (the normalised version of the data) to obtain a matrix containing the principal components, as sketched after this step.

We will notice that the largest eigenvalue captures most of the structure in the data, and the remaining ones provide comparatively little information about the dataset. This shows that we are not losing much information when reducing the dimensions of the dataset; we are simply representing it more effectively.
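Putting the projection together (a minimal sketch, assuming NumPy and the feature vector from the previous sketch):

```python
import numpy as np

# Scaled data and top-k eigenvectors from the earlier sketches
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(X_scaled, rowvar=False))
order = np.argsort(eigenvalues)[::-1]
feature_vector = eigenvectors[:, order][:, :1]         # keep only PC1

# Principal components: transpose(feature vector) x transpose(scaled data)
principal_components = feature_vector.T @ X_scaled.T   # shape (k, n_samples)

print(principal_components.T)                          # one row per sample, one column per PC
```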

Implementing these steps reduces the dimensions of any dataset through PCA.
