1604217420

Evaluation Metrics for Regression Analysis

Terms to know

These terms will come up, and it’s good to get familiar with them if you aren’t already:

- Regression analysis — a set of statistical processes for estimating a continuous dependent variable given a number of independent variables
- Variance — a measurement of the spread between numbers in a data set
- ŷ — the estimated value of y
- ȳ — the mean value of y

“Goodness of fit”

Goodness of fit is typically a term used to describe how well a dataset aligns with a certain statistical distribution. Here, we’re going to think of it as a way of describing how well our model is fitted to our data.

If we think about our regression model in terms of the imaginary “best-fit” line it produces, then it makes sense that we would want to know how well this line matches our data. This goodness of fit can be quantified in a variety of ways, but the R² score and the adjusted R² score are two of the most common methods for describing how well our model is capturing the variance in our target data.

R²

R² — also called the coefficient of determination — is a statistical measure representing the amount of variance in a dependent variable that is captured by your model’s predictions. Essentially, it is a measure of how well your model is fitted to the data. The score has a maximum of 1, with values closest to 1 being best (a value of 1 means our model completely explains the dependent variable); it can fall below 0 when the model fits worse than simply guessing the mean.

R² uses a sort of “baseline model” as a marker to compare our regression results against. This baseline model simply predicts the mean every time, regardless of the data. After fitting the regression model, the predictions of our baseline (mean-guessing) model are compared to the predictions of our newly fitted model in terms of squared errors.
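That comparison can be written out directly. Below is a quick sketch (with made-up numbers) that computes R² by hand from the squared errors of the mean-guessing baseline and checks it against Scikit-Learn’s `r2_score`:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0, 11.0])   # actual target values (illustrative)
y_pred = np.array([2.8, 5.3, 7.1, 8.6, 11.2])   # model predictions (illustrative)

# Baseline model: always predict the mean of y, regardless of the data
baseline_sse = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
model_sse = np.sum((y_true - y_pred) ** 2)            # residual sum of squares

# R² = 1 - (model squared error / baseline squared error)
r2_manual = 1 - model_sse / baseline_sse
print(round(r2_manual, 4), round(r2_score(y_true, y_pred), 4))
```

Note how a model whose squared error exceeds the baseline’s would push this value below zero, which is why R² has no lower bound in general.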

#machine-learning #data-science #statistics #ai #deep-learning

1657356960

Oyente

An Analysis Tool for Smart Contracts

This repository is currently maintained by Xiao Liang Yu (@yxliang01). If you encounter any bugs or usage issues, please feel free to create an issue on our issue tracker.

Quick Start

A container with the required dependencies pre-configured can be found here. The image is, however, outdated; we are working on pushing the latest image to Docker Hub for your convenience. If you experience any issue with this image, please try building a new Docker image from this codebase before opening an issue.

To open the container, install docker and run:

```
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
```

To evaluate the greeter contract inside the container, run:

```
cd /oyente/oyente && python oyente.py -s greeter.sol
```

and you are done!

Note - If you need the version of Oyente referred to in the paper, run the container from here

To run the web interface, execute `docker run -w /oyente/web -p 3000:3000 oyente:latest ./bin/rails server`

Custom Docker image build

```
docker build -t oyente .
docker run -it -p 3000:3000 -e "OYENTE=/oyente/oyente" oyente:latest
```

Open a web browser to `http://localhost:3000` for the graphical interface.

Installation

Create and activate a Python virtualenv:

```
python -m virtualenv env
source env/bin/activate
```

Install Oyente via pip:

```
$ pip2 install oyente
```

Dependencies:

The following dependencies require a Linux system. macOS instructions are forthcoming.

Full installation

Install the following dependencies

solc

```
$ sudo add-apt-repository ppa:ethereum/ethereum
$ sudo apt-get update
$ sudo apt-get install solc
```

evm from go-ethereum

Install evm from the PPA if you're using Ubuntu.

z3 Theorem Prover version 4.5.0.

Install z3 with its Python bindings (run from the z3 source directory):

```
$ python scripts/mk_make.py --python
$ cd build
$ make
$ sudo make install
```

Requests library

```
pip install requests
```

web3 library

```
pip install web3
```

Evaluating Ethereum Contracts

```
# evaluate a local solidity contract
python oyente.py -s <contract filename>

# evaluate a local solidity contract with option -a to verify assertions in the contract
python oyente.py -a -s <contract filename>

# evaluate a local evm contract
python oyente.py -s <contract filename> -b

# evaluate a remote contract
python oyente.py -ru https://gist.githubusercontent.com/loiluu/d0eb34d473e421df12b38c12a7423a61/raw/2415b3fb782f5d286777e0bcebc57812ce3786da/puzzle.sol
```

And that's it! Run `python oyente.py --help` for a list of options.

Paper

The accompanying paper explaining the bugs detected by the tool can be found here.

Miscellaneous Utilities

A collection of the utilities developed for the paper lives in `misc_utils`. Use them at your own risk - they were mostly written as disposable scripts.

1. `generate-graphs.py` - Contains a number of functions to get statistics from contracts.
2. `get_source.py` - The `get_contract_code` function can be used to retrieve contract source from EtherScan.
3. `transaction_scrape.py` - Contains functions to retrieve up-to-date transaction information for a particular contract.

Benchmarks

Note: This is an improved version of the tool used for the paper, so the benchmark results are not directly comparable to those reported there.

To run the benchmarks, it is best to use the Docker container, as it includes the necessary blockchain snapshot. In the container, run `batch_run.py` after activating the virtualenv. Results are written to `results.json` once the benchmark completes.

The benchmarks take a long time and a lot of RAM on anything but the largest of clusters, so beware.

Some analytics, such as the number of contracts tested and the number of contracts analysed, are collected when running this benchmark.

Contributing

Check out our contribution guide and the code structure here.

```
$ sudo apt-get install software-properties-common
$ sudo apt-get update
$ sudo apt-get install ethereum
```

Author: enzymefinance
Source Code: https://github.com/enzymefinance/oyente

#blockchain #smartcontract #ethereum

1596969720

Evaluation Metrics for Regression Problems

Hi, today we are going to study the evaluation metrics for regression problems. Evaluation metrics are very important, as they tell us how accurate our model is.

Before we proceed to the evaluation techniques, it is important to gain some intuition.

In the above image, we can see that we have plotted a linear curve, but the fit is not perfect, as some points lie above the line and some lie below it.

So, how accurate is our model?

The evaluation metrics aim to answer this question. Now, without wasting time, let’s jump in and look at the evaluation techniques.

There are 5 evaluation techniques:

1. M.A.E (Mean Absolute Error)

2. M.S.E (Mean Squared Error)

3. R.M.S.E (Root Mean Squared Error)

4. R.M.S.L.E (Root Mean Squared Log Error)

5. R-Squared

Now, let’s discuss these techniques one by one.

M.A.E (Mean Absolute Error)

It is the simplest and most widely used evaluation technique. It is simply the mean of the absolute differences between the actual and predicted values.

Below is the mathematical formula of the Mean Absolute Error:

MAE = (1/n) · Σᵢ |yᵢ − ŷᵢ|

Scikit-Learn is a great library, as it has almost all the built-in functions that we need on our Data Science journey.

Below is the code to implement Mean Absolute Error:

```
from sklearn.metrics import mean_absolute_error

mean_absolute_error(y_true, y_pred)
```

Here, ‘y_true’ holds the true target values and ‘y_pred’ the predicted target values.
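To sanity-check the formula against the library, MAE can also be computed by hand; here is a small sketch with made-up values:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # actual target values (illustrative)
y_pred = np.array([2.5, 0.0, 2.0, 8.0])   # predicted target values (illustrative)

# MAE = mean of |y_true - y_pred|; absolute values stop errors cancelling out
mae_manual = np.mean(np.abs(y_true - y_pred))
print(mae_manual)                           # 0.5
print(mean_absolute_error(y_true, y_pred))  # 0.5
```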

#artificial-intelligence #evaluation-metric #machine-learning #regression #statistics #deep learning

1603022085

Revisiting Regression Analysis

In Supervised Learning, we mostly deal with two types of variables: numerical variables and categorical variables. Regression deals with numerical variables, while classification deals with categorical variables.

Regression is one of the most popular statistical techniques used for Predictive Modelling and Data Mining in the world of Data Science. Basically,

Regression Analysis is a technique used for determining the relationship between two or more variables of interest.

However, generally only 2–3 of the 10+ types of regression are used in practice, with Linear Regression and Logistic Regression being the most widely used. So today we’re going to explore the following 4 types of regression analysis techniques:

• Simple Linear Regression
• Ridge Regression
• Lasso Regression
• ElasticNet Regression

We will be observing their applications as well as the difference among them on the go while working on Student’s Score Prediction dataset. Let’s get started.

1. Linear Regression

It is the simplest form of regression. As the name suggests, if the variables of interest share a linear relationship, then the Linear Regression algorithm is applicable to them. If there is a single independent variable (here, Hours), it is a Simple Linear Regression. If there is more than one independent variable, it is a Multiple Linear Regression. The mathematical equation that approximates the linear relationship between the independent (predictor) variable X and the dependent (criterion) variable Y is:

Y = β0 + β1 * X

where β0 and β1 are the intercept and slope respectively, which are also known as parameters or model coefficients.
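The four techniques listed above can be compared side by side. Below is a minimal sketch using Scikit-Learn, with made-up hours/scores values standing in for the Student’s Score Prediction dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet

# Hypothetical stand-in for the Student's Score Prediction data:
# hours studied (single feature) vs. exam score
hours = np.array([[1.5], [3.2], [4.8], [5.5], [7.4], [8.9]])
scores = np.array([20, 35, 48, 55, 72, 88])

models = {
    "Linear": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),        # L2 penalty shrinks coefficients
    "Lasso": Lasso(alpha=0.1),        # L1 penalty can zero coefficients out
    "ElasticNet": ElasticNet(alpha=0.1, l1_ratio=0.5),  # a mix of both penalties
}
for name, model in models.items():
    model.fit(hours, scores)
    print(name, round(float(model.coef_[0]), 3))
```

Ridge, Lasso, and ElasticNet differ from plain Linear Regression only in the penalty added to the loss, so on a tiny, well-behaved dataset like this all four recover a similar slope; the differences show up with many correlated features.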

#data-science #regression-analysis #elastic-net #ridge-regression #lasso-regression

1594102200

Introduction

In this article, we will analyse a business problem with linear regression in a step-by-step manner and try to interpret the statistical terms at each step to understand its inner workings. Although the linear regression algorithm is simple, proper analysis requires interpreting the statistical results.

First, we will take a look at simple linear regression and then extend the problem to multiple linear regression.

For easy understanding, follow the python notebook side by side.

What is Linear Regression?

Regression is a statistical approach to finding the relationship between variables. Linear Regression assumes a linear relationship between the variables. Depending on the number of input variables, the regression problem is classified into

1. Simple linear regression

2. Multiple linear regression

Let’s consider there is a company and it has to improve the sales of the product. The company spends money on different advertising media such as TV, radio, and newspaper to increase the sales of its products. The company records the money spent on each advertising media (in thousands of dollars) and the number of units of product sold (in thousands of units).

Now we have to help the company to find out the most effective way to spend money on advertising media to improve sales for the next year with a less advertising budget.

Simple Linear Regression

Simple linear regression is an approach for predicting a quantitative response Y based on a single predictor variable X.

The relationship takes the form Y = β0 + β1 * X, the equation of a straight line having slope β1 and intercept β0.

Let’s start the regression analysis for given advertisement data with simple linear regression. Initially, we will consider the simple linear regression model for the sales and money spent on TV advertising media.

Then the mathematical equation becomes 𝑆𝑎𝑙𝑒𝑠 = 𝛽0 + 𝛽1 * 𝑇𝑉.

Step 1: Estimating the coefficients: (Let’s find the coefficients)

Now, to estimate sales for a given advertising budget, we have to know the values of β1 and β0. For the best estimate, the difference between the predicted sales and the actual sales (called the residual) should be minimal.

As a residual may be negative or positive, summing raw residuals can lead to terms cancelling out, reducing the net effect and giving a non-optimal estimate of the coefficients. To overcome this, we use the residual sum of squares (RSS): RSS = Σᵢ (yᵢ − ŷᵢ)².

With a simple calculation, we can find the values of β0 and β1 that minimize the RSS.

With the statsmodels library in Python, we can find the coefficients:

Table 1: Simple regression of sales on TV

Values for β0 and β1 are 7.03 and 0.047 respectively. Then the relation becomes, Sales = 7.03 + 0.047 * TV.

This means if we spend an additional 1000 dollars on TV advertising media it increases the sales of products by 47 units.

This gives us how strongly the TV advertising media associated with the sales.

Step 2: Assessing the Accuracy of the Coefficient Estimates (How accurate are these coefficients?)

Why are the coefficients not perfect estimates?

The true relationship may not be perfectly linear, so there is an error that can be reduced by using a more complex model such as the polynomial regression model. These types of errors are called reducible errors.

On the other hand, errors may be introduced by measurement errors and environmental conditions (for example, the office being closed for a week due to heavy rain, which affects the sales). These types of errors are called irreducible errors.

#linear-regression #machine-learning #basics #regression-analysis #data-science #data analysis

1623856080