
# The Mathematics behind Principal Component Analysis

## Introduction

The idea behind Principal Component Analysis (PCA) is to _reduce the dimensionality_ of a data set. Many features or columns in a dataset contribute little to prediction and carry little information about the data, yet they make the model slower to compute and predict. With PCA, we transform the data into a new set of features that are uncorrelated and ordered by importance.

To understand the mathematics behind PCA, we need some knowledge of the terms we will use to reduce dimensionality.

• Mean
• Variance
• Covariance
• Linear Transformation
• Eigenvalue
• Eigenvector

### Mean

The mean is the most basic quantity in statistics. It tells us where the measurements are centered.

Suppose a column A holds the weights of 10 students:

{ a1, a2, a3, …, a10 }

μA = (1/n)(a1 + … + a10)

### Variance

Variance tells us how spread out the measurements are.

We take each measurement's difference from the mean, square it, and average the squared differences; this average gives us the variance.

Var(A) = (1/n)[(a1 − μA)² + … + (an − μA)²]
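As a quick check, the two formulas above can be computed directly in plain Python. The weights below are made-up example values, not data from this article:

```python
# Hypothetical weights of 10 students (illustrative example data)
weights = [55, 62, 70, 48, 66, 59, 73, 51, 64, 58]

n = len(weights)

# Mean: sum of all measurements divided by their count
mean = sum(weights) / n

# Variance: average of the squared differences from the mean
variance = sum((a - mean) ** 2 for a in weights) / n

print(mean)      # 60.6  — where the measurements are centered
print(variance)  # 57.64 — how spread out they are
```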

#data analysis


## Principal Component Analysis

PCA is mainly used to reduce the dimensionality of the feature space while preserving interpretability and minimizing information loss. It achieves this by creating new, uncorrelated variables that maximize variance and bring out strong patterns in the data. PCA remains useful even for data with three or more dimensions.

Steps

1. Plot the data on the x- and y-axes.

2. Find the average (the midpoint) and center the data at the origin using the formula

x = x − µ

where µ = mean of x
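The centering step above can be sketched with NumPy; the array values here are purely illustrative:

```python
import numpy as np

# Illustrative 2-D data: rows are samples, columns are the x and y features
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Step 2: subtract the per-column mean so the data is centered at the origin
mu = X.mean(axis=0)
X_centered = X - mu

# After centering, each column's mean is (numerically) zero
print(X_centered.mean(axis=0))
```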

#dimensionality-reduction #apc #machine-learning #principle-component-ana #data-science #data analysis


## Streamline Your Data Analysis With Automated Business Analysis

#### Enhance marketing and ROI

#big data #latest news #data analysis #streamline your data analysis #automated business analysis #streamline your data analysis with automated business analysis


## Static Code Analysis: What It Is and How to Use It

Static code analysis refers to the technique of approximating the runtime behavior of a program. In other words, it is the process of predicting the output of a program without actually executing it.

Lately, however, the term “Static Code Analysis” is more commonly used to refer to one of the applications of this technique rather than the technique itself — program comprehension — understanding the program and detecting issues in it (anything from syntax errors to type mismatches, performance hogs, likely bugs, security loopholes, etc.). This is the usage we’ll be referring to throughout this post.

“The refinement of techniques for the prompt discovery of error serves as well as any other as a hallmark of what we mean by science.”

• J. Robert Oppenheimer

### Outline

We cover a lot of ground in this post. The aim is to build an understanding of static code analysis and to equip you with the basic theory, and the right tools so that you can write analyzers on your own.

We start our journey with laying down the essential parts of the pipeline which a compiler follows to understand what a piece of code does. We learn where to tap points in this pipeline to plug in our analyzers and extract meaningful information. In the latter half, we get our feet wet, and write four such static analyzers, completely from scratch, in Python.

Note that although the ideas here are discussed in light of Python, static code analyzers across all programming languages are carved out along similar lines. We chose Python because of the availability of an easy-to-use `ast` module, and the wide adoption of the language itself.
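As a taste of what such an analyzer looks like, here is a minimal sketch using the `ast` module. The rule it checks — flagging bare `except:` clauses — is just an illustrative choice, not one of the four analyzers built in this post:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Walk the syntax tree and return line numbers of bare `except:`
    clauses, which silently swallow every exception."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

code = """
try:
    risky()
except:
    pass
"""

print(find_bare_excepts(code))  # line numbers of offending clauses
```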

### How does it all work?

Before a computer can finally “understand” and execute a piece of code, it goes through a series of complicated transformations:

As you can see in the diagram (go ahead, zoom it!), the static analyzers feed on the output of these stages. To be able to better understand the static analysis techniques, let’s look at each of these steps in some more detail:

### Scanning

The first thing that a compiler does when trying to understand a piece of code is to break it down into smaller chunks, also known as tokens. Tokens are akin to what words are in a language.

A token might consist of a single character, like `(`, a literal (like an integer or string, e.g., `7`, `'Bob'`, etc.), or a reserved keyword of that language (e.g., `def` in Python). Characters which do not contribute to the semantics of a program, like trailing whitespace, comments, etc., are often discarded by the scanner.

Python provides the `tokenize` module in its standard library to let you play around with tokens:

```python
import io
import tokenize

code = b"color = input('Enter your favourite color: ')"

for token in tokenize.tokenize(io.BytesIO(code).readline):
    print(token)
```

```
TokenInfo(type=62 (ENCODING),  string='utf-8')
TokenInfo(type=1  (NAME),      string='color')
TokenInfo(type=54 (OP),        string='=')
TokenInfo(type=1  (NAME),      string='input')
TokenInfo(type=54 (OP),        string='(')
TokenInfo(type=3  (STRING),    string="'Enter your favourite color: '")
TokenInfo(type=54 (OP),        string=')')
TokenInfo(type=4  (NEWLINE),   string='')
TokenInfo(type=0  (ENDMARKER), string='')
```

(Note that for the sake of readability, I’ve omitted a few columns from the result above — metadata like starting index, ending index, a copy of the line on which a token occurs, etc.)

#code quality #code review #static analysis #static code analysis #code analysis #static analysis tools #code review tips #static code analyzer #static code analysis tool #static analyzer



## Getting started with Time Series using Pandas

### An introductory guide on getting started with the Time Series Analysis in Python

Time series analysis is the backbone of many companies, since most businesses work by analyzing their past data to inform their future decisions. Analyzing such data can be tricky, but Python, as a programming language, can help deal with it. Python has both built-in tools and external libraries, making the whole analysis process seamless and easy. Python’s Pandas library is frequently used to import, manage, and analyze datasets in various formats. In this article, we’ll use it to analyze stock prices and perform some basic time-series operations.
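As a flavor of what such operations look like, here is a minimal Pandas sketch: loading a small price series with a date index, resampling it, and computing returns. The prices are made-up values, not real stock data:

```python
import pandas as pd

# Made-up daily closing prices indexed by date (illustrative, not real data)
prices = pd.Series(
    [101.0, 102.5, 101.8, 103.2, 104.0, 103.5],
    index=pd.date_range("2021-06-01", periods=6, freq="D"),
)

# Two basic time-series operations: resample to weekly means,
# and compute day-over-day percentage change
weekly_mean = prices.resample("W").mean()
daily_returns = prices.pct_change()

print(weekly_mean)
print(daily_returns.round(4))
```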

#data-analysis #time-series-analysis #exploratory-data-analysis #stock-market-analysis #financial-analysis #getting started with time series using pandas