Nat Grady

Interactive PCA Explorer: Shiny App for Exploring A PCA

Interactive PCA Explorer

This Shiny application takes a CSV file of clean data, lets you inspect the data and compute a Principal Components Analysis (PCA), and returns several diagnostic plots and tables. The plots include a tableplot, a correlation matrix, a scree plot, and a biplot of principal components.

You can choose which columns to include in the PCA, and which column to use as a grouping variable. You can choose whether to center and/or scale the data. You can also choose which PCs to include on the biplot.

The biplot of PCs is interactive, so you can click on points or select points and inspect the details of those points in a table.

How to run or install

There are two ways to run/install this app.

First, you can run it on your computer like so:

library(shiny)
runGitHub("interactive_pca_explorer", "benmarwick")

Second, you can clone this repo to have the code on your computer, and run the app from there, like so:

# First clone the repository with git. If you have cloned it into
# ~/interactive_pca_explorer, change your working directory to that folder,
# then use runApp() to start the app.
library(shiny)
setwd("~/interactive_pca_explorer") # change to match where you cloned this repo
runApp() # runs the app

This app depends on several R packages (ggplot2, DT, GGally, psych, Hmisc, MASS, tabplot). The app will check to see if you have them installed, and if you don't, it will try to download and install them for you.

How to use

Start on the first (left-most) tab to upload your CSV file, then click on each tab, in order from left to right, to see the results.


Here's what it looks like. Here we have input a CSV file that contains the iris data (included with this app).

Then we can see some simple descriptions of the data, and the raw data at the bottom of the page.

Below we see how we can choose the variables to explore in a correlation matrix. We also have a table that summarizes the correlations and gives p-values.

Below we have a few popular diagnostic tests that many people like to run before doing a PCA. They're not very informative and can be skipped, but people coming from SPSS may feel more comfortable if they can see them here too.

Below are the options for computing the PCA. We can choose which columns to include, along with a few details of how the PCA is computed. The app uses the prcomp function to compute the PCA.

Here are the classic PCA plots. First is the scree plot summarizing how important the first few PCs are. Second is the interactive PC biplot. You can see that I've used my mouse to draw a rectangle around a few of the points in the biplot (this is called 'brushing') and in the table below we can see the details of those points in the selected area. We can choose which column to use for grouping (this only affects the colouring of the plot, it doesn't change the PCA results), and we can choose which PCs to show on the plot.

Finally we have some of the raw output from the PCA.

Feedback, contributing, etc.

Please open an issue if you find something that doesn't work as expected. Note that this project is released with a Guide to Contributing and a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Download Details:

Author: Benmarwick
Source Code: 
License: MIT license

#r #interactive #pca 

A Conceptual Description of Principal Component Analysis (PCA)

Principal component analysis, or PCA, is a statistical procedure that allows you to summarize the information content in large data tables by means of a smaller set of “summary indices” that can be more easily visualized and analyzed. The underlying data can be measurements describing properties of production samples, chemical compounds or reactions, process time points of a continuous process, batches from a batch process, biological individuals or trials of a DOE-protocol, for example.

A conceptual description of principal component analysis, including:
- variance and covariance
- eigenvectors and eigenvalues
- applications


#pca #machine-learning 

How to do PCA in Python for Beginners

You asked for it, you got it! Now I walk you through how to do PCA in Python, step-by-step. It’s not too bad, and I’ll show you how to generate test data, do the analysis, draw fancy graphs and interpret the results. If you want to download the code, here’s the link to the StatQuest GitHub:

#python #pca

How to do PCA in R

We’ve talked about the theory behind PCA - Now we talk about how to do it in practice using R. If you want to copy and paste the code I use in this video, it’s right here in the StatQuest GitHub:…


#pca #r

Ray Patel


Principal Component Analysis in Dimensionality Reduction with Python

Reducing high dimension features to low dimension features

In this article, we will discuss feature-reduction methods that address the over-fitting problems that arise with a large number of features. When high-dimensional data is fit to a model, the model can confuse features that carry similar information. The goal is to find the main features/components that have the greatest impact on the target variable and that capture the maximum variance. For example, converting a 2-dimensional feature space to a 1-dimensional one makes computation faster.

Why reduce the dimensions?

  • Training on high-dimensional data needs more computation power and time.
  • Visualization is not possible with high-dimensional data.
  • More dimensions mean more storage space is needed.
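To make the 2-D to 1-D reduction described above concrete, here is a minimal NumPy sketch on made-up data (the toy dataset and variable names are illustrative assumptions, not from the article):

```python
import numpy as np

# Toy data: two highly correlated features, one observation per row.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=100)])

# Centre the data, then take the leading eigenvector of the covariance matrix.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1 = eigvecs[:, -1]  # eigenvector with the largest eigenvalue

# Project the 2-D data onto 1-D along that direction.
X1d = Xc @ pc1
print(X1d.shape)  # (100,)
```

Because the second feature is almost a linear function of the first, the leading eigenvalue carries nearly all the variance, so the 1-D projection loses very little information.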

#python #artificial-intelligence #machine-learning #technology #dimensionality-reduction #pca

PCA in Machine Learning: Assumptions, Steps to Apply & Applications

Understanding the Dimensionality Reduction in ML

ML (Machine Learning) algorithms are developed and tested against a set of input variables, called a feature set. Developers often need to reduce the number of input variables in their feature set to increase the performance of a particular ML model/algorithm.

For example, suppose you have a dataset with numerous columns, or you have an array of points in a 3-D space. In that case, you can reduce the dimensions of your dataset by applying dimensionality reduction techniques in ML. PCA (Principal Component Analysis) is one of the widely used dimensionality reduction techniques by ML developers/testers. Let us dive deeper into understanding PCA in machine learning.

Principal Component Analysis

PCA is an unsupervised statistical technique that is used to reduce the dimensions of a dataset. ML models with many input variables, i.e. higher dimensionality, tend to perform poorly as the input dataset grows. PCA helps identify relationships among different variables and then combine them. PCA rests on some assumptions which, when followed, help developers maintain a standard.

PCA involves transforming the variables in the dataset into a new set of variables called PCs (principal components). The number of principal components equals the number of original variables in the given dataset.

The first principal component (PC1) captures the maximum share of the variation present in the original variables, and this share decreases as we move to lower-ranked components. The final PC captures the least variation, which is what allows you to discard low-variance components and reduce the dimensions of your feature set.
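This decreasing-variance property can be seen directly in a small NumPy sketch on synthetic data (the dataset and variable names are illustrative assumptions):

```python
import numpy as np

# Synthetic dataset: 200 observations of 4 correlated features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))
Xc = X - X.mean(axis=0)

# Eigenvalues of the covariance matrix, sorted in decreasing order,
# give each principal component's share of the total variance.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
ratios = eigvals / eigvals.sum()
print(ratios)  # PC1's share is largest; each later PC explains less
```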

Assumptions in PCA

There are some assumptions in PCA which should be checked, as they underpin the accurate functioning of this dimensionality reduction technique in ML. The assumptions in PCA are:

• There must be linearity in the data set, i.e. the variables combine in a linear manner to form the dataset. The variables exhibit relationships among themselves.

• PCA assumes that the principal components with high variance deserve attention, while the PCs with lower variance are disregarded as noise. PCA originated in the framework of the Pearson correlation coefficient, where it was first assumed that only the axes with high variance would be turned into principal components.

• All variables should be measured at the same level, ideally a ratio level of measurement. A commonly cited rule of thumb is at least 150 observations in the sample set, with a ratio of at least 5:1 observations per variable.

• Extreme values that deviate from the other data points in the dataset, also called outliers, should be few. A large number of outliers may represent experimental errors and will degrade your ML model/algorithm.

• The feature set must be correlated and the reduced feature set after applying PCA will represent the original data set but in an effective way with fewer dimensions.

Steps for Applying PCA

The steps for applying PCA on any ML model/algorithm are as follows:

• Normalisation of the data is necessary before applying PCA, because unscaled data can distort relative comparisons within the dataset. For example, for each column of a 2-D dataset, the mean of that column is subtracted from all of its values to centre the data. The same normalisation can be done in a 3-D dataset too.

• Once you have normalised the dataset, find the covariance among different dimensions and put them in a covariance matrix. The off-diagonal elements in the covariance matrix will represent the covariance among each pair of variables and the diagonal elements will represent the variances of each variable/dimension.

A covariance matrix constructed for any dataset will always be symmetric. A covariance matrix will represent the relationship in data, and you can understand the amount of variance in each principal component easily.

• You have to find the eigenvalues of the covariance matrix, which quantify the variability of the data along each orthogonal axis. You will also have to find the eigenvectors of the covariance matrix, which give the directions in which the maximum variance in the data occurs.

Suppose ‘C’ is your covariance matrix and ‘λ’ is one of its eigenvalues. Then λ must satisfy the characteristic equation det(λI − C) = 0, where ‘I’ is an identity matrix of the same dimension as ‘C’. You should check that the covariance matrix is symmetric (and therefore square), because only then is the calculation of real eigenvalues guaranteed.

• Arrange the eigenvalues in descending order and select the largest ones. You can choose how many eigenvalues you want to proceed with. You will lose some information by ignoring the smaller eigenvalues, but those minute values will not have much impact on the final result.

The number of selected eigenvalues becomes the dimensionality of your updated feature set. We also form a feature vector: a matrix whose columns are the eigenvectors corresponding to the chosen eigenvalues.

• Using the feature vector, we find the principal components of the dataset under analysis. We multiply the transpose of the feature vector with the transpose of the scaled matrix (a scaled version of data after normalisation) to obtain a matrix containing principal components.

We will notice that the components with the highest eigenvalues describe the data well, while the remaining ones do not provide much information about the dataset. In this sense we lose little when reducing the dimensions of the dataset; we are simply representing it more compactly.

These methods are implemented to finally reduce the dimensions of any dataset in PCA.
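The steps above can be sketched end-to-end in Python. This is a didactic NumPy sketch of the procedure as described, not a production implementation; the function name and test data are illustrative assumptions:

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the covariance matrix: a didactic
    sketch of the steps above, not a production implementation."""
    # 1. Normalise: subtract each column's mean.
    Xc = X - X.mean(axis=0)
    # 2. Covariance matrix: variances on the diagonal, covariances off it.
    C = np.cov(Xc, rowvar=False)
    # 3. Eigenvalues and eigenvectors of the symmetric covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(C)
    # 4. Sort descending and keep the eigenvectors of the largest eigenvalues
    #    (the "feature vector" of the text).
    order = np.argsort(eigvals)[::-1][:n_components]
    feature_vector = eigvecs[:, order]
    # 5. Project the centred data onto the chosen eigenvectors.
    return Xc @ feature_vector

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 5))
scores = pca(X, n_components=2)
print(scores.shape)  # (150, 2)
```

The columns of the result are the principal component scores; by construction they are mutually uncorrelated.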

#machine-learning #pca #pca-in-machine-learning

Elton Bogan


Essential Guide to Clustering In Unsupervised Learning

Data in itself has no value, it actually finds its expression when it is processed right, for the right purpose using the right tools.

So when it comes to understanding the data, it becomes extremely important that we not only extract the obvious insights but also identify the hidden patterns which may not be easy to find through exploratory data analysis alone. To make intelligent predictions, identify patterns, and make effective recommendations, our data needs to be segregated into meaningful clusters.

In this stream of machine learning we do not rely on a labeled dataset with a target variable already defined. Instead, we cluster the data into groups and try to make predictions about their behavior. This is called unsupervised learning.

Unsupervised learning complements supervised machine learning in making our models robust and reliable. So today we will look into unsupervised learning techniques and go into the details of:

  1. What is Unsupervised Learning?
  2. Types Of Unsupervised Learning
  3. Understanding clustering & its types
  4. Hands-on on K-Means & hierarchical clustering

Let’s start this journey of learning by understanding unsupervised learning.

#machine-learning #pca #data-science #unsupervised-learning #technology #ml #artificial-intelligence #ai

Dimensionality Reduction with Principal Component Analysis

With the availability of high-performance CPUs and GPUs, it is possible to solve almost every regression, classification, clustering, and related problem using machine learning and deep learning models. However, various issues still cause performance bottlenecks while developing such models. A large number of features in the dataset is one of the major factors that affects both the training time and the accuracy of machine learning models.
The Curse of Dimensionality
In machine learning, “dimensionality” simply refers to the number of features (i.e. input variables) in your dataset.
While the performance of a machine learning model may improve as we add features/dimensions, at some point further additions lead to performance degradation: when the number of features is very large relative to the number of observations in your dataset, several linear algorithms struggle to train efficient models. This is called the “Curse of Dimensionality”.

Dimensionality reduction is a set of techniques for shrinking the size of data while preserving the most important information, thereby mitigating the curse of dimensionality. It plays an important role in the performance of classification and clustering problems.
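One common way to decide how far to reduce is to keep the smallest number of principal components that retain a chosen share of the total variance. Here is a minimal NumPy sketch, assuming synthetic data and a 95% threshold (both illustrative choices, not from the article):

```python
import numpy as np

# Synthetic dataset: 300 observations of 10 correlated features.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))
Xc = X - X.mean(axis=0)

# Cumulative share of variance explained by the top components.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
cumulative = np.cumsum(eigvals) / eigvals.sum()

# Smallest k whose top-k components retain at least 95% of the variance.
k = int(np.searchsorted(cumulative, 0.95)) + 1
print(k, "components retain", round(float(cumulative[k - 1]), 3), "of the variance")
```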

#2020-may-tutorials #overviews #dimensionality-reduction #numpy #pca #python
