Macey Kling

How To Deter Adversarial Attacks In Computer Vision Models

While computer vision has become one of the most widely used technologies across the globe, computer vision models are not immune to threats. One of the reasons for this is the underlying lack of robustness of the models. Indrajit Kar, Principal Solution Architect at Accenture, gave a talk at CVDC 2020 on how to make AI more resilient to attacks.

As Kar shared, AI has become a new target for attackers, and instances of manipulation and adversarial attacks have increased dramatically over the last few years. Organisations ranging from large companies such as Google and Tesla to startups are affected by adversarial attacks.

“While we celebrate advancements in AI, deep neural networks (DNNs)—the algorithms intrinsic to much of AI—have recently been proven to be at risk from attack through seemingly benign inputs. It is possible to fool DNNs by making subtle alterations to input data that often either remain undetected or are overlooked if presented to a human,” he said.

Types Of Adversarial Attacks

Alterations to images that are too small to be noticed by humans can cause DNNs to misinterpret the image content. As many AI systems take their input from external sources (voice recognition devices or social media uploads, for example), this ability to be tricked by adversarial input opens a new, often intriguing, security threat. This has prompted the cybersecurity community to come together and address these gaps in computer vision and machine learning.
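As an illustration of how small such alterations can be, below is a minimal sketch of the widely cited Fast Gradient Sign Method (FGSM). This is a generic example rather than anything from Kar's talk; the model, image, and label tensors are placeholders for any PyTorch image classifier.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Fast Gradient Sign Method: nudge every pixel a tiny step in the
    # direction that increases the classification loss.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # keep pixel values valid; the change is usually invisible to a human
    return adversarial.clamp(0, 1).detach()

On a vulnerable classifier, comparing model(image).argmax(dim=1) with model(adversarial).argmax(dim=1) will often show the prediction flipping even though the two images look identical.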

#developers corner #adversarial attacks #computer vision #computer vision adversarial attack

How to Trick Computer Vision Models

With the advent of neural networks, machine learning has gained immense popularity, and companies in just about every industry have started to apply some form of this vast technology to increase efficiency, improve throughput, or enhance customer experiences.

Artificial intelligence as a field has seen major breakthroughs in many areas within the past decade. With so many industries jumping towards automation and trying to apply AI to enhance customer experiences, it has started to make a bigger impact on our day-to-day lives. Now that these methods are used on such a large and varied scale, it has come to light that they bring their own problems.

This article asks an important question: are the machine learning models we use intrinsically flawed?

#neural-networks #heartbeat #adversarial-attack #machine-learning #computer-vision

Are Computer Vision Models Vulnerable to Weight Poisoning Attacks?

Introduction

In a recent article, “Weight Poisoning Attacks on Pre-trained Models” (Kurita et al., 2020), the authors explore the possibility of influencing the predictions of a freshly trained Natural Language Processing (NLP) model by tweaking the pre-trained weights re-used in its training. While they also propose defenses against such attacks, the very existence of such backdoors poses questions for any production AI system trained using pre-trained weights. This result is especially interesting if it also transfers to the context of Computer Vision (CV), where the use of pre-trained weights is widespread.
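Whether or not such backdoors transfer to CV, the exposure comes from how routinely pre-trained weights are loaded unchanged. The sketch below is a generic, hypothetical transfer-learning setup using torchvision (not code from the paper, and the checkpoint path is a placeholder); it shows how re-used weights flow directly into a downstream model.

import torch
from torchvision import models

# Load re-used weights: either a library checkpoint or an untrusted third-party file.
model = models.resnet18(pretrained=True)
# state_dict = torch.load("third_party_checkpoint.pth")  # hypothetical external source
# model.load_state_dict(state_dict)

# Freeze the backbone and fine-tune only a new head, as is common in CV.
# Any backdoor hidden in the frozen, re-used layers survives this fine-tuning untouched.
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new 2-class task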

#overviews #adversarial #ai #computer vision #machine learning #nlp

Royce Reinger

Vision: Datasets, Transforms and Models Specific to Computer Vision

Torchvision

The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
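A minimal sketch of how these three pieces fit together is shown below. The dataset root path and batch size are arbitrary choices for illustration, and newer torchvision versions prefer the weights= argument over pretrained=True.

import torch
from torchvision import datasets, transforms, models

# common image transformations
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# a popular dataset, downloaded on first use
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# a model architecture with pre-trained weights
model = models.resnet18(pretrained=True)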

Installation

We recommend Anaconda as the Python package management system. Please refer to pytorch.org for details of the PyTorch (torch) installation. The following table shows the corresponding torchvision versions and supported Python versions.

torch | torchvision | python
main / nightly | main / nightly | >=3.7, <=3.10
1.13.0 | 0.14.0 | >=3.7, <=3.10
1.12.0 | 0.13.0 | >=3.7, <=3.10
1.11.0 | 0.12.0 | >=3.7, <=3.10
1.10.2 | 0.11.3 | >=3.6, <=3.9
1.10.1 | 0.11.2 | >=3.6, <=3.9
1.10.0 | 0.11.1 | >=3.6, <=3.9
1.9.1 | 0.10.1 | >=3.6, <=3.9
1.9.0 | 0.10.0 | >=3.6, <=3.9
1.8.2 | 0.9.2 | >=3.6, <=3.9
1.8.1 | 0.9.1 | >=3.6, <=3.9
1.8.0 | 0.9.0 | >=3.6, <=3.9
1.7.1 | 0.8.2 | >=3.6, <=3.9
1.7.0 | 0.8.1 | >=3.6, <=3.8
1.7.0 | 0.8.0 | >=3.6, <=3.8
1.6.0 | 0.7.0 | >=3.6, <=3.8
1.5.1 | 0.6.1 | >=3.5, <=3.8
1.5.0 | 0.6.0 | >=3.5, <=3.8
1.4.0 | 0.5.0 | ==2.7, >=3.5, <=3.8
1.3.1 | 0.4.2 | ==2.7, >=3.5, <=3.7
1.3.0 | 0.4.1 | ==2.7, >=3.5, <=3.7
1.2.0 | 0.4.0 | ==2.7, >=3.5, <=3.7
1.1.0 | 0.3.0 | ==2.7, >=3.5, <=3.7
<=1.0.1 | 0.2.2 | ==2.7, >=3.5, <=3.7

Anaconda:

conda install torchvision -c pytorch

pip:

pip install torchvision

From source:

python setup.py install
# or, for OSX
# MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

We don't officially support building from source using pip, but if you do, you'll need to use the --no-build-isolation flag. In case building TorchVision from source fails, install the nightly version of PyTorch following the linked guide on the contributing page and retry the install.

By default, GPU support is built if CUDA is found and torch.cuda.is_available() is true. It's possible to force building GPU support by setting the FORCE_CUDA=1 environment variable, which is useful when building a Docker image.

Image Backend

Torchvision currently supports the following image backends:

  • Pillow (default)
  • Pillow-SIMD - a much faster drop-in replacement for Pillow with SIMD. If installed, it will be used as the default.
  • accimage - if installed can be activated by calling torchvision.set_image_backend('accimage')
  • libpng - can be installed via conda conda install libpng or any of the package managers for debian-based and RHEL-based Linux distributions.
  • libjpeg - can be installed via conda conda install jpeg or any of the package managers for debian-based and RHEL-based Linux distributions. libjpeg-turbo can be used as well.

Notes: libpng and libjpeg must be available at compilation time for the corresponding image decoders to be available. Make sure they are available in the standard library locations; otherwise, add the include and library paths to the environment variables TORCHVISION_INCLUDE and TORCHVISION_LIBRARY, respectively.
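The active image backend can be inspected or switched at runtime. A small sketch (accimage must be installed for the commented line to succeed):

import torchvision

print(torchvision.get_image_backend())      # 'PIL' unless another backend was selected
# torchvision.set_image_backend('accimage') # opt into accimage if it is installed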

Video Backend

Torchvision currently supports the following video backends:

  • pyav (default) - Pythonic binding for ffmpeg libraries.
  • video_reader - This needs ffmpeg to be installed and torchvision to be built from source. There shouldn't be any conflicting version of ffmpeg installed. Currently, this is only supported on Linux.
conda install -c conda-forge ffmpeg
python setup.py install

Using the models on C++

TorchVision provides an example project for how to use the models on C++ using JIT Script.

Installation From source:

mkdir build
cd build
# Add -DWITH_CUDA=on support for the CUDA if needed
cmake ..
make
make install

Once installed, the library can be accessed in cmake (after properly configuring CMAKE_PREFIX_PATH) via the TorchVision::TorchVision target:

find_package(TorchVision REQUIRED)
target_link_libraries(my-target PUBLIC TorchVision::TorchVision)

The TorchVision package will also automatically look for the Torch package and add it as a dependency to my-target, so make sure that it is also available to cmake via the CMAKE_PREFIX_PATH.

For an example setup, take a look at examples/cpp/hello_world.

Python linking is disabled by default when compiling TorchVision with CMake; this allows you to run models without any Python dependency. In some special cases where TorchVision's operators are used from Python code, you may need to link to Python. This can be done by passing -DUSE_PYTHON=on to CMake.

TorchVision Operators

In order to get the torchvision operators registered with torch (e.g. for the JIT), all you need to do is to ensure that you #include <torchvision/vision.h> in your project.

Documentation

You can find the API documentation on the pytorch website: https://pytorch.org/vision/stable/index.html

Contributing

See the CONTRIBUTING file for how to help out.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Pre-trained Model License

The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

More specifically, SWAG models are released under the CC-BY-NC 4.0 license. See SWAG LICENSE for additional details.

Download Details:

Author: Pytorch
Source Code: https://github.com/pytorch/vision 
License: BSD-3-Clause license

#machinelearning #computer #vision #dataset 

How to Predict Housing Prices with Linear Regression?


The final objective is to estimate the price of a house in a Boston suburb. The data was provided by the Boston Standard Metropolitan Statistical Area in 1970. To examine and modify the data, we will use several techniques such as data pre-processing and feature engineering. After that, we'll apply a statistical model, a regression model, to predict and monitor the real estate market.

Project Outline:

  • EDA
  • Feature Engineering
  • Pick and Train a Model
  • Interpret
  • Conclusion

EDA

Before fitting a statistical model, EDA is a good step to go through in order to:

  • Recognize the data set
  • Check to see if any information is missing.
  • Find some outliers.
  • To get more out of the data, add, alter, or eliminate some features.

Importing the Libraries

# Import the libraries

# Dataframe/Numerical libraries
import pandas as pd
import numpy as np

# Data visualization
import plotly.express as px
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

# Machine learning model
from sklearn.linear_model import LinearRegression

Reading the Dataset with Pandas

#Reading the data
path='./housing.csv'
# The file has no header row; pass the column names so columns can be referenced by name later.
col_names=['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','MEDV']
housing_df=pd.read_csv(path,header=None,delim_whitespace=True,names=col_names)

  | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV
0 | 0.00632 | 18.0 | 2.31 | 0 | 0.538 | 6.575 | 65.2 | 4.0900 | 1 | 296.0 | 15.3 | 396.90 | 4.98 | 24.0
1 | 0.02731 | 0.0 | 7.07 | 0 | 0.469 | 6.421 | 78.9 | 4.9671 | 2 | 242.0 | 17.8 | 396.90 | 9.14 | 21.6
2 | 0.02729 | 0.0 | 7.07 | 0 | 0.469 | 7.185 | 61.1 | 4.9671 | 2 | 242.0 | 17.8 | 392.83 | 4.03 | 34.7
3 | 0.03237 | 0.0 | 2.18 | 0 | 0.458 | 6.998 | 45.8 | 6.0622 | 3 | 222.0 | 18.7 | 394.63 | 2.94 | 33.4
4 | 0.06905 | 0.0 | 2.18 | 0 | 0.458 | 7.147 | 54.2 | 6.0622 | 3 | 222.0 | 18.7 | 396.90 | 5.33 | 36.2
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
501 | 0.06263 | 0.0 | 11.93 | 0 | 0.573 | 6.593 | 69.1 | 2.4786 | 1 | 273.0 | 21.0 | 391.99 | 9.67 | 22.4
502 | 0.04527 | 0.0 | 11.93 | 0 | 0.573 | 6.120 | 76.7 | 2.2875 | 1 | 273.0 | 21.0 | 396.90 | 9.08 | 20.6
503 | 0.06076 | 0.0 | 11.93 | 0 | 0.573 | 6.976 | 91.0 | 2.1675 | 1 | 273.0 | 21.0 | 396.90 | 5.64 | 23.9
504 | 0.10959 | 0.0 | 11.93 | 0 | 0.573 | 6.794 | 89.3 | 2.3889 | 1 | 273.0 | 21.0 | 393.45 | 6.48 | 22.0
505 | 0.04741 | 0.0 | 11.93 | 0 | 0.573 | 6.030 | 80.8 | 2.5050 | 1 | 273.0 | 21.0 | 396.90 | 7.88 | 11.9

Have a Look at the Columns

CRIM: It refers to a town's per capita crime rate.

ZN: It is the proportion of residential land zoned for lots over 25,000 square feet.

INDUS: The proportion of non-retail business acres per town.

CHAS: CHAS denotes whether or not the tract bounds the Charles River (1 if it does, 0 otherwise).

NOX: The NOX stands for nitric oxides concentration (parts per 10 million).

RM: The average number of rooms per home is referred to as RM.

AGE: The proportion of owner-occupied units built before 1940 is referred to as AGE.

DIS: Weighted distances to five Boston employment centres.

RAD: Index of accessibility to radial highways.

TAX: The TAX column denotes the full-value property tax rate per $10,000.

B: B = 1000(Bk - 0.63)², where Bk is the proportion of Black residents in each town.

PTRATIO: It refers to the student-to-teacher ratio in each town.

LSTAT: It refers to the percentage of the population with lower socioeconomic status.

MEDV: It refers to the median value of owner-occupied homes in $1000s.

Data Preprocessing

# Check if there are any missing values.
housing_df.isna().sum()

CRIM       0
ZN         0
INDUS      0
CHAS       0
NOX        0
RM         0
AGE        0
DIS        0
RAD        0
TAX        0
PTRATIO    0
B          0
LSTAT      0
MEDV       0
dtype: int64

No missing values are found

We examine our data's mean, standard deviation, and percentiles.

housing_df.describe()

Graph Data

      | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV
count | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000
mean  | 3.613524 | 11.363636 | 11.136779 | 0.069170 | 0.554695 | 6.284634 | 68.574901 | 3.795043 | 9.549407 | 408.237154 | 18.455534 | 356.674032 | 12.653063 | 22.532806
std   | 8.601545 | 23.322453 | 6.860353 | 0.253994 | 0.115878 | 0.702617 | 28.148861 | 2.105710 | 8.707259 | 168.537116 | 2.164946 | 91.294864 | 7.141062 | 9.197104
min   | 0.006320 | 0.000000 | 0.460000 | 0.000000 | 0.385000 | 3.561000 | 2.900000 | 1.129600 | 1.000000 | 187.000000 | 12.600000 | 0.320000 | 1.730000 | 5.000000
25%   | 0.082045 | 0.000000 | 5.190000 | 0.000000 | 0.449000 | 5.885500 | 45.025000 | 2.100175 | 4.000000 | 279.000000 | 17.400000 | 375.377500 | 6.950000 | 17.025000
50%   | 0.256510 | 0.000000 | 9.690000 | 0.000000 | 0.538000 | 6.208500 | 77.500000 | 3.207450 | 5.000000 | 330.000000 | 19.050000 | 391.440000 | 11.360000 | 21.200000
75%   | 3.677083 | 12.500000 | 18.100000 | 0.000000 | 0.624000 | 6.623500 | 94.075000 | 5.188425 | 24.000000 | 666.000000 | 20.200000 | 396.225000 | 16.955000 | 25.000000
max   | 88.976200 | 100.000000 | 27.740000 | 1.000000 | 0.871000 | 8.780000 | 100.000000 | 12.126500 | 24.000000 | 711.000000 | 22.000000 | 396.900000 | 37.970000 | 50.000000

At first glance, the crime, residential land, industry, nitric oxides and 'B' columns appear to have multiple outliers, because their minimum and maximum values are so far apart. In the AGE column, the mean and Q2 (the 50th percentile) do not match.

We might double-check it by examining the distribution of each column.
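A minimal sketch of how those distributions could be inspected with pandas and matplotlib (the bin count and figure size are arbitrary choices):

import matplotlib.pyplot as plt

# one histogram per numeric column of the dataframe
housing_df.hist(bins=30, figsize=(16, 10))
plt.tight_layout()
plt.show()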

Inferences

  1. The rate of crime is rather low. The majority of values are in the range 0 to 25, with a few very large values and many values at zero.
  2. The majority of residential land is zoned for lots smaller than 25,000 square feet. Land zoned for lots larger than 25,000 square feet represents a small portion of the dataset.
  3. The percentage of non-retail commercial acres is mostly split between two ranges: 0-13 and 13-23.
  4. The majority of the properties are not bordered by the river; only a tiny portion of the data is.
  5. The nitric oxide concentration mostly falls between 0.3 and 0.7, with a little bump towards 0.8. Values in the range 0.1-1 are acceptable to keep.
  6. The number of rooms tends to cluster around the average.
  7. With time, the proportion of owner-occupied units rises.
  8. The weighted distance to the five employment centres is skewed towards smaller values. It could indicate that individuals choose to live close to high-employment areas.
  9. People choose to live in places with limited access to radial highways (0-10). There is also a group of outliers at the top of the range.
  10. The majority of dwelling taxes are in the range of $200-450 (per $10,000 of property value), with large outliers around $700.
  11. The percentage of people with lower socioeconomic status tends to cluster around the median; the majority of persons are of lower social standing.

Removing all outliers would underfit the model, making it overly generic. Keeping all outliers would cause the model to overfit and become excessively accurate on the training data: it would learn the data's noise.

The approach is to find a happy medium that prevents the model from becoming overly precise, so that it still generalises well when faced with a new set of data.

Because there is a large gap in the TAX column around 600, we'll keep only the values below 600.

new_df=housing_df[housing_df['TAX']<600]

Looking at the Distribution

[Figure: distributions of each column after removing the TAX outliers]

The overall distribution, particularly the TAX, PTRATIO, and RAD, has improved slightly.

Correlation

[Figure: correlation heatmap of the features]

Perfect correlation is denoted by the lightest values. Medium correlation between columns is represented by the reds, while negative correlation is represented by the black cells.

With a value of 0.89, we can see that 'MEDV', the median price we wish to predict, is most strongly correlated with the number of rooms 'RM'. It is followed by the residential land 'ZN' with a value of 0.32 and the proportion of Black residents 'B' with a value of 0.19.

The metrics that are most strongly correlated with price will be plotted.

[Figure: scatter plots of the features most correlated with price]
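A sketch of how these correlations and plots could be produced (the colormap, the plotted columns, and the figure size are arbitrary choices for illustration):

import matplotlib.pyplot as plt
import seaborn as sns

corr = new_df.corr()
# features ranked by their correlation with the target
print(corr['MEDV'].sort_values(ascending=False))

# heatmap of the full correlation matrix
plt.figure(figsize=(12, 8))
sns.heatmap(corr, annot=True)
plt.show()

# scatter plots of a few features against the price
for col in ['RM', 'ZN', 'B']:
    new_df.plot.scatter(x=col, y='MEDV')
plt.show()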

Feature Engineering

Feature Scaling

Feature scaling aids gradient descent by ensuring that all features are on the same scale. It makes locating the local optimum much easier.

Mean standardisation is one strategy to employ. It replaces each value x with (x - mean)/std, so that the feature has a mean of roughly zero and unit standard deviation.

def standard(X):
    '''Standard makes the feature 'X' have a zero mean'''
    mu=np.mean(X)    # mean
    std=np.std(X)    # standard deviation
    sta=(X-mu)/std   # mean normalization
    return mu,std,sta

mu,std,sta=standard(X)
X=sta
X

  | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT
0 | -0.609129 | 0.092792 | -1.019125 | -0.280976 | 0.258670 | 0.279135 | 0.162095 | -0.167660 | -2.105767 | -0.235130 | -1.136863 | 0.401318 | -0.933659
1 | -0.575698 | -0.598153 | -0.225291 | -0.280976 | -0.423795 | 0.049252 | 0.648266 | 0.250975 | -1.496334 | -1.032339 | -0.004175 | 0.401318 | -0.219350
2 | -0.575730 | -0.598153 | -0.225291 | -0.280976 | -0.423795 | 1.189708 | 0.016599 | 0.250975 | -1.496334 | -1.032339 | -0.004175 | 0.298315 | -1.096782
3 | -0.567639 | -0.598153 | -1.040806 | -0.280976 | -0.532594 | 0.910565 | -0.526350 | 0.773661 | -0.886900 | -1.327601 | 0.403593 | 0.343869 | -1.283945
4 | -0.509220 | -0.598153 | -1.040806 | -0.280976 | -0.532594 | 1.132984 | -0.228261 | 0.773661 | -0.886900 | -1.327601 | 0.403593 | 0.401318 | -0.873561
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
501 | -0.519445 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.306004 | 0.300494 | -0.936773 | -2.105767 | -0.574682 | 1.445666 | 0.277056 | -0.128344
502 | -0.547094 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | -0.400063 | 0.570195 | -1.027984 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.229652
503 | -0.522423 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.877725 | 1.077657 | -1.085260 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.820331
504 | -0.444652 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.606046 | 1.017329 | -0.979587 | -2.105767 | -0.574682 | 1.445666 | 0.314006 | -0.676095
505 | -0.543685 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | -0.534410 | 0.715691 | -0.924173 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.435703

Choose and Train the Model

For the sake of the project, we'll apply linear regression.

Typically, we run numerous models and select the best one based on a particular criterion.

In machine learning terms, linear regression is a type of supervised learning model in which the response variable is continuous.

Form of Linear Regression

y = θ₀ + θ₁x₁ + θ₂x₂ + θ₃x₃ + ... + θₙxₙ

y is the target you will be predicting

θ are the coefficients (θ₀ is the intercept)

x are the input features

We will use scikit-learn (sklearn) to develop and train the model.

#Import the libraries to train the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

We will use the train/test approach: the model learns from one part of the data and is evaluated on predictions made for the other part.

# Split the data into train and test sets
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.4)

#Create and Train the model
model=LinearRegression().fit(X_train,y_train)

#Generate prediction
predictions_test=model.predict(X_test)

#Inspect the learned parameters
coefficient= model.coef_
intercept=model.intercept_
print(coefficient,intercept)

[7.22218258] 24.66379606613584

In this example, the model learns a hypothesis of roughly the following form (the exact numbers vary with the random train/test split):

Price = 24.85 + 7.18 * Rooms

It is interpreted as follows:

For a given house, each additional room is associated with a 7.18-unit increase in the predicted price.

As a side note, this is an association, not a cause!

Interpretation

You will need a metric to determine whether the hypothesis was right. The RMSE approach will be used.

Root Mean Square Error (RMSE) is defined as the square root of the mean of the squared errors, where the error is the difference between the true and predicted values: RMSE = sqrt(mean((y_pred - y_true)²)). It's popular because it is expressed in y-units, which in our scenario is the median price of a home.

def rmse(predict,actual):
    return np.sqrt(np.mean(np.square(predict - actual)))

# Split the Data into train and test set
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.4)

#Create and Train the model
model=LinearRegression().fit(X_train,y_train)

#Generate prediction
predictions_test=model.predict(X_test)

#Compute loss to evaluate the model
coefficient= model.coef_
intercept=model.intercept_
print(coefficient,intercept)
loss=rmse(predictions_test,y_test)
print('loss: ',loss)
print(model.score(X_test,y_test)) #accuracy

[7.43327725] 24.912055881970886
loss:  3.9673165450580714
0.7552661033654667

Loss will be 3.96.

The y-units are the median value of owner-occupied homes in thousands of dollars, so a loss of 3.96 means the predictions are off by roughly 3,960 dollars.

While learning the model you will see a high variance each time you split the data: the coefficient and intercept will vary. This is because the train/test approach places randomly chosen rows in either the train or the test set, so the fitted hypothesis changes each time the dataset is split.

This problem can be solved using a technique called cross-validation.
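As a sketch of how cross-validation could be applied here (assuming X and y as prepared above, and a scikit-learn version that supports the 'neg_root_mean_squared_error' scorer):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation: every row is used for both fitting and validation,
# so the reported error no longer depends on a single random split.
scores = cross_val_score(LinearRegression(), X, y, cv=5,
                         scoring='neg_root_mean_squared_error')
print(-scores.mean())  # average RMSE across the folds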

Improving the Model

With 'Forward Selection', we'll iterate through the features to help us choose how many characteristics to include in our model.

Forward Selection

  1. Choose the most appropriate variable (in our case, the one with the highest correlation).
  2. Add the next best variable to the model.
  3. Repeat until some predetermined stopping condition is met.

We'll use a random state of 1 so that each iteration yields the same outcome.

cols=[]
los=[]
los_train=[]
scor=[]
i=0
while i < len(high_corr_var):
    cols.append(high_corr_var[i])

    # Select input variables
    X=new_df[cols]

    # mean normalization
    mu,std,sta=standard(X)
    X=sta

    # Split the data into training and testing
    X_train,X_test,y_train,y_test= train_test_split(X,y,random_state=1)

    # fit the model to the training set
    lnreg=LinearRegression().fit(X_train,y_train)

    # make predictions on the training set
    prediction_train=lnreg.predict(X_train)

    # make predictions on the testing set
    prediction=lnreg.predict(X_test)

    # compute the loss on the test and train sets
    loss=rmse(prediction,y_test)
    loss_train=rmse(prediction_train,y_train)
    los_train.append(loss_train)
    los.append(loss)

    # compute the score
    score=lnreg.score(X_test,y_test)
    scor.append(score)

    i+=1

With a smaller collection of variables we have a bigger 'loss', and the model will over-generalise. With a large number of variables we get a smaller 'loss', but if the model grows too precise it may not generalise well to new data.

In order for our model to generalise well to another set of data, we might use 6 or 7 features. The features are considered in descending order of how strongly they correlate with the price.

high_corr_var
['RM', 'ZN', 'B', 'CHAS', 'RAD', 'DIS', 'CRIM', 'NOX', 'AGE', 'TAX', 'INDUS', 'PTRATIO', 'LSTAT']

'RM' has a strong positive correlation with price, while 'LSTAT' has a negative correlation with price.

# Create a list of feature names
feature_cols=['RM','ZN','B','CHAS','RAD','CRIM','DIS','NOX']

#Select input variables
X=new_df[feature_cols]

# Split the data into training and testing sets
X_train,X_test,y_train,y_test= train_test_split(X,y, random_state=1)

# feature engineering
mu,std,sta=standard(X)
X=sta

# fit the model to the training data
lnreg=LinearRegression().fit(X_train,y_train)

# make prediction on the testing set
prediction=lnreg.predict(X_test)

# compute the loss
loss=rmse(prediction,y_test)
print('loss: ',loss)
lnreg.score(X_test,y_test)

loss:  3.212659865936143
0.8582338376696363

The test set yielded a loss of 3.21 and an R² score of roughly 0.86.

Other factors, such as alpha (the rate at which the model learns when trained with gradient descent), could still be tweaked to improve our model. Alternatively, we could return to the preprocessing section and work on improving the feature distributions.

For more details regarding scraping real estate data, you can contact Scraping Intelligence today.

https://www.websitescraper.com/how-to-predict-housing-prices-with-linear-regression.php