George more


Does the McAfee Scanner Slow Down Your Computer?

According to many reports, users have noticed that their computers slowed down after installing an antivirus application or security suite.

**Now let's find out why McAfee can make a computer sluggish.**
McAfee antivirus software has many functions and features, but this wide variety of options can cause problems on some computers, especially older ones. McAfee antivirus renewal lets the software scan your computer and update automatically, but these operations take up RAM and bandwidth, leaving less power for your own work.

McAfee may also slow down your computer if the machine isn't capable of running it; if you have an older machine, it may not meet the system requirements.

**You can use roughly the same steps to troubleshoot almost any antivirus program:**

  • Try the simple tweaks that fix the lag first.
  • Make sure the configuration is done properly and the product is up to date.
  • Update your system and check whether that helps.
  • Below, we look at the issue in more detail and try to repair it using quick and easy steps.

Why is my computer slow after installing McAfee Antivirus?

If you're a Windows or Mac user, you probably know about McAfee Antivirus, which was formerly among the most popular antivirus programs. However, as we know, everything has a price, and so does McAfee.

McAfee often stutters and lags the PC because it uses a large share of the system's resources all the time to make sure Windows stays safe; this takes time, and eventually the computer starts to slow down.

Installing an antivirus program may still be necessary, since in many cases it protects your PC from malware and viruses while you browse the internet or connect a USB stick that was previously plugged into an infected computer. Many people have found their computers slower after installing McAfee. Below we provide a few hints so that you can make your computer somewhat quicker while still keeping it protected.

First, let's see exactly what McAfee does that makes the computer run slowly: McAfee constantly shields the C drive, checks each file in the temporary folder, and then examines every file that tries to access the registry or other driver files.

It does this to confirm those files are not damaged, but in the process it claims a share of the computer's capacity and leaves fewer resources for running everything else. This is how the computer begins to hang.

Step 1 - Change the setting from constant protection to constant monitoring

  1. Open the McAfee program.
  2. On the Protection tab, change the option from constant protection to constant monitoring.
  3. This keeps the program under surveillance, but when we browse untrusted websites we need to switch back to full protection so McAfee can keep us safe.
  4. We also have to scan manually after adding a new disk to the computer, so we need to be very mindful when using McAfee in constant-monitoring mode.

Step 2 - Avoid using McAfee's system tools

  1. Avoid using McAfee's program tools.
  2. Type "msconfig" in the search bar.

**Uncheck the boxes associated with the McAfee entries registered next to the item.**
You can disable McAfee's ongoing scanning and instead deliberately run daily scans; over the long term this prevents it from interfering with our work, and we can keep working without the computer slowing down.

#how to renew mcafee using product key #how can i activate my mcafee antivirus


Shubham Ankit



How to Automate Excel with Python | Python Excel Tutorial (OpenPyXL)

How to Automate Excel with Python

In this article, we will show how to use Python to automate Excel. A useful Python library for this is openpyxl, which we will use for the automation.


Openpyxl is a Python library that is used to read from an Excel file or write to an Excel file. Data scientists use Openpyxl for data analysis, data copying, data mining, drawing charts, styling sheets, adding formulas, and more.

Workbook: A spreadsheet is represented as a workbook in openpyxl. A workbook consists of one or more sheets.

Sheet: A sheet is a single page composed of cells for organizing data.

Cell: The intersection of a row and a column is called a cell. Usually represented by A1, B5, etc.

Row: A row is a horizontal line represented by a number (1,2, etc.).

Column: A column is a vertical line represented by a capital letter (A, B, etc.).

Openpyxl can be installed using the pip command and it is recommended to install it in a virtual environment.

pip install openpyxl


We start by creating a new spreadsheet, which is called a workbook in Openpyxl. We import the workbook module from Openpyxl and use the function Workbook() which creates a new workbook.

from openpyxl import Workbook

#creates a new workbook
wb = Workbook()

#gets the first active worksheet
ws =

#creating new worksheets by using the create_sheet method
ws1 = wb.create_sheet("sheet1", 0) #inserts at first position
ws2 = wb.create_sheet("sheet2") #inserts at last position
ws3 = wb.create_sheet("sheet3", -1) #inserts at penultimate position

#Renaming the sheet
ws.title = "Example"

#save the workbook"example.xlsx")


We load the file using the function load_workbook(), which takes the filename as an argument. The file must be saved in the same working directory.

import openpyxl

#loading a workbook
wb = openpyxl.load_workbook("example.xlsx")

#getting sheet names
wb.sheetnames        # ['sheet1', 'Sheet', 'sheet3', 'sheet2']

#getting a particular sheet
sheet1 = wb["sheet2"]

#getting sheet title
sheet1.title         # 'sheet2'

#getting the active sheet
sheetactive =
sheetactive.title    # 'sheet1'




#get a cell from the sheet
sheet1["A1"]           # <Cell 'Sheet1'.A1>

#get the cell value
ws["A1"].value         # 'Segment'

#accessing a cell using row and column and assigning a value
d = ws.cell(row = 4, column = 2, value = 10)




#looping through each row and column
for x in range(1, 5):
    for y in range(1, 5):
        print(x, y, ws.cell(row = x, column = y).value)

#getting the highest row number
ws.max_row

#getting the highest column number
ws.max_column

There are two functions for iterating through rows and columns.

iter_rows() returns the rows and iter_cols() returns the columns. Keyword arguments such as min_row, max_row, min_col, and max_col can be passed to set the boundaries for any iteration.


#iterating rows
for row in ws.iter_rows(min_row = 2, max_col = 3, max_row = 3):
    for cell in row:
        print(cell)

# <Cell 'Sheet1'.A2>
# <Cell 'Sheet1'.B2>
# <Cell 'Sheet1'.C2>
# <Cell 'Sheet1'.A3>
# <Cell 'Sheet1'.B3>
# <Cell 'Sheet1'.C3>

#iterating columns
for col in ws.iter_cols(min_row = 2, max_col = 3, max_row = 3):
    for cell in col:
        print(cell)

# <Cell 'Sheet1'.A2>
# <Cell 'Sheet1'.A3>
# <Cell 'Sheet1'.B2>
# <Cell 'Sheet1'.B3>
# <Cell 'Sheet1'.C2>
# <Cell 'Sheet1'.C3>

To get all the rows of the worksheet we use the method worksheet.rows and to get all the columns of the worksheet we use the method worksheet.columns. Similarly, to iterate only through the values we use the method worksheet.values.


#iterating only through the values
for row in ws.values:
    for value in row:
        print(value)
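
To round this out, here is a minimal sketch (reusing the same ws worksheet loaded above) of the worksheet.rows and worksheet.columns accessors mentioned earlier:

#worksheet.rows yields one tuple of cells per row
for row in ws.rows:
    print(row)

#worksheet.columns yields one tuple of cells per column
for col in ws.columns:
    print(col)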



Writing to a workbook can be done in many ways such as adding a formula, adding charts, images, updating cell values, inserting rows and columns, etc… We will discuss each of these with an example.




import openpyxl

#creates a new workbook
wb = openpyxl.Workbook()

#saving the workbook"new.xlsx")




#creating a new sheet
ws1 = wb.create_sheet(title = "sheet 2")

#creating a new sheet at index 0
ws2 = wb.create_sheet(index = 0, title = "sheet 0")

#checking the sheet names
wb.sheetnames      # ['sheet 0', 'Sheet', 'sheet 2']

#deleting a sheet
del wb['sheet 0']

#checking sheetnames
wb.sheetnames      # ['Sheet', 'sheet 2']




#checking the sheet value
print(ws['B2'].value)

#adding value to cell
ws['B2'] = 367

#checking value
print(ws['B2'].value)




We often require formulas to be included in our Excel datasheet. We can easily add formulas using the Openpyxl module just like you add values to a cell.

For example:

import openpyxl
from openpyxl import Workbook

wb = openpyxl.load_workbook("new1.xlsx")
ws = wb['Sheet']

ws['A9'] = '=SUM(A2:A8)'"new2.xlsx")

The above program will add the formula (=SUM(A2:A8)) in cell A9. The result will be as below.




Two or more cells can be merged to a rectangular area using the method merge_cells(), and similarly, they can be unmerged using the method unmerge_cells().

For example:
Merge cells

#merge cells B2 to C9
ws.merge_cells('B2:C9')
ws['B2'] = "Merged cells"

Adding the above code to the previous example will merge cells as below.




#unmerge cells B2 to C9
ws.unmerge_cells('B2:C9')

The above code will unmerge cells from B2 to C9.


To insert an image we import the image function from the module openpyxl.drawing.image. We then load our image and add it to the cell as shown in the below example.


import openpyxl
from openpyxl import Workbook
from openpyxl.drawing.image import Image

wb = openpyxl.load_workbook("new1.xlsx")
ws = wb['Sheet']

#loading the image (should be in the same folder)
img = Image('logo.png')
ws['A1'] = "Adding image"

#adjusting size
img.height = 130
img.width = 200

#adding img to cell A3
ws.add_image(img, 'A3')"new2.xlsx")




Charts are essential to show a visualization of data. We can create charts from Excel data using the Openpyxl module chart. Different forms of charts such as line charts, bar charts, 3D line charts, etc., can be created. We need to create a reference that contains the data to be used for the chart, which is nothing but a selection of cells (rows and columns). I am using sample data to create a 3D bar chart in the below example:


import openpyxl
from openpyxl import Workbook
from openpyxl.chart import BarChart3D, Reference, series

wb = openpyxl.load_workbook("example.xlsx")
ws =

values = Reference(ws, min_col = 3, min_row = 2, max_col = 3, max_row = 40)
chart = BarChart3D()
#attach the selected data to the chart
chart.add_data(values)
ws.add_chart(chart, "E3")"MyChart.xlsx")


How to Automate Excel with Python with Video Tutorial

Welcome to another video! In this video, We will cover how we can use python to automate Excel. I'll be going over everything from creating workbooks to accessing individual cells and stylizing cells. There is a ton of things that you can do with Excel but I'll just be covering the core/base things in OpenPyXl.

⭐️ Timestamps ⭐️
00:00 | Introduction
02:14 | Installing openpyxl
03:19 | Testing Installation
04:25 | Loading an Existing Workbook
06:46 | Accessing Worksheets
07:37 | Accessing Cell Values
08:58 | Saving Workbooks
09:52 | Creating, Listing and Changing Sheets
11:50 | Creating a New Workbook
12:39 | Adding/Appending Rows
14:26 | Accessing Multiple Cells
20:46 | Merging Cells
22:27 | Inserting and Deleting Rows
23:35 | Inserting and Deleting Columns
24:48 | Copying and Moving Cells
26:06 | Practical Example, Formulas & Cell Styling

📄 Resources 📄
OpenPyXL Docs: 
Code Written in This Tutorial: 



Monty Boehm


Twitter.jl: Julia Package to Access Twitter API


A Julia package for interacting with the Twitter API.

Twitter.jl is a Julia package to work with the Twitter API v1.1. Currently, only the REST API methods are supported; streaming API endpoints aren't implemented at this time.

All functions have required arguments for the parameters required by Twitter, plus an options keyword argument that takes a Dict{String, String} of optional parameters as described in the Twitter API documentation. Most function calls will return either a Dict or an Array <: TwitterType. Bad requests will return the response code from the API (403, 404, etc).

DataFrame methods are defined for functions returning composite types: Tweets, Places, Lists, and Users.


Before you can make use of this package, you must create an application on Twitter's Developer Platform.

Once your application is approved, you can access your dashboard/portal to grab your authentication credentials from the "Details" tab of the application.

Note that you will also want to ensure that your App has Read / Write OAuth access in order to post tweets. You can find out more about this on Stack Overflow.


To install this package, enter ] on the REPL to bring up Julia's package manager. Then add the package:

julia> ]
(v1.7) pkg> add Twitter

Tip: Press Ctrl+C to return to the julia> prompt.


To run Twitter.jl, enter the following command in your Julia REPL

julia> using Twitter

Then a global variable has to be declared with the twitterauth function. This function holds the consumer_key (API Key), consumer_secret (API Key Secret), oauth_token (Access Token), and oauth_secret (Access Token Secret), respectively.

twitterauth("6nOtpXmf...", # API Key
            "sES5Zlj096S...", # API Key Secret
            "98689850-Hj...", # Access Token
            "UroqCVpWKIt...") # Access Token Secret
  • Ensure you put your credentials in an env file to avoid pushing your secrets to the public 🙀.

Note: This package does not currently support OAuth authentication.

Code examples

See runtests.jl for example function calls.

using Twitter, Test
using JSON, OAuth

# set debugging


mentions_timeline_default = get_mentions_timeline()
tw = mentions_timeline_default[1]
tw_df = DataFrame(mentions_timeline_default)
@test 0 <= length(mentions_timeline_default) <= 20
@test typeof(mentions_timeline_default) == Vector{Tweets}
@test typeof(tw) == Tweets
@test size(tw_df)[2] == 30

user_timeline_default = get_user_timeline(screen_name = "randyzwitch")
@test typeof(user_timeline_default) == Vector{Tweets}

home_timeline_default = get_home_timeline()
@test typeof(home_timeline_default) == Vector{Tweets}

get_tweet_by_id = get_single_tweet_id(id = "434685122671939584")
@test typeof(get_tweet_by_id) == Tweets

duke_tweets = get_search_tweets(q = "#Duke", count = 200)
@test typeof(duke_tweets) <: Dict

#test sending/deleting direct messages
#commenting out because Twitter API changed. Come back to fix
# send_dm = post_direct_messages_send(text = "Testing from Julia, this might disappear later $(time())", screen_name = "randyzwitch")
# get_single_dm = get_direct_messages_show(id =
# destroy = post_direct_messages_destroy(id =
# @test typeof(send_dm) == Tweets
# @test typeof(get_single_dm) == Tweets
# @test typeof(destroy) == Tweets

#creating/destroying friendships
add_friend = post_friendships_create(screen_name = "kyrieirving")

unfollow = post_friendships_destroy(screen_name = "kyrieirving")
unfollow_df = DataFrame(unfollow)
@test typeof(add_friend) == Users
@test typeof(unfollow) == Users
@test size(unfollow_df)[2] == 40

# create a cursor for follower ids
follow_cursor_test = get_followers_ids(screen_name = "twitter", count = 10_000)
@test length(follow_cursor_test["ids"]) == 10_000

# create a cursor for friend ids - use barackobama because he follows a lot of accounts!
friend_cursor_test = get_friends_ids(screen_name = "BarackObama", count = 10_000)
@test length(friend_cursor_test["ids"]) == 10_000

# create a test for home timelines
home_t = get_home_timeline(count = 2)
@test length(home_t) > 1

# TEST of cursoring functionality on user timelines
user_t = get_user_timeline(screen_name = "stefanjwojcik", count = 400)
@test length(user_t) == 400
# get the minimum ID of the tweets returned (the earliest)
minid = minimum( for x in user_t);

# now iterate until you hit that tweet: should return 399
# WARNING: current versions of julia cannot use keywords in macros? read here:
# eventually replace since_id = minid
tweets_since = get_user_timeline(screen_name = "stefanjwojcik", count = 400, since_id = 1001808621053898752, include_rts=1)

@test length(tweets_since)>=399

# testing get_mentions_timeline
mentions = get_mentions_timeline(screen_name = "stefanjwojcik", count = 300) 
@test length(mentions) >= 50 #sometimes API doesn't return number requested (twitter API specifies count is the max returned, may be much lower)
@test Tweets<:typeof(mentions[1])

# testing retweets_of_me
my_rts = get_retweets_of_me(count = 300)
@test Tweets<:typeof(my_rts[1])

Want to contribute?

Contributions are welcome! Kindly refer to the contribution guidelines.


Author: Randyzwitch
Source Code: 
License: View license

#julia #api #twitter 

How to Predict Housing Prices with Linear Regression?


The final objective is to estimate the cost of a certain house in a Boston suburb, using information provided by the Boston Standard Metropolitan Statistical Area in 1970. To examine and modify the data, we will use several techniques such as data pre-processing and feature engineering. After that, we'll apply a statistical model, linear regression, to anticipate and monitor the real estate market.

Project Outline:

  • EDA
  • Feature Engineering
  • Pick and Train a Model
  • Interpret
  • Conclusion


Before using a statistical model, the EDA is a good step to go through in order to:

  • Recognize the data set
  • Check to see if any information is missing.
  • Find some outliers.
  • To get more out of the data, add, alter, or eliminate some features.

Importing the Libraries


# Import the libraries

#Dataframe/Numerical libraries
import pandas as pd
import numpy as np

#Data visualization
import as px
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

#Machine learning model
from sklearn.linear_model import LinearRegression

Reading the Dataset with Pandas

#Reading the data
path = './housing.csv'
housing_df = pd.read_csv(path, header=None, delim_whitespace=True)
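
Because the file is read with header=None, the columns come in unnamed. The original snippet does not show this step, but an assignment along the following lines (using the column names described in the next section) is what makes the later name-based access such as housing_df['TAX'] work:

#the raw file has no header row, so name the columns (names described below)
housing_df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
                      'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']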


Have a Look at the Columns

Crime: It refers to a town's per capita crime rate.

ZN: It is the proportion of residential land zoned for lots over 25,000 square feet.

Indus: The amount of non-retail business lands per town is referred to as the indus.

CHAS: CHAS denotes whether or not the tract is bordered by the river.

NOX: The NOX column stands for the nitric oxides concentration (parts per 10 million).

RM: The average number of rooms per home is referred to as RM.

AGE: The percentage of owner-occupied housing built before 1940 is referred to as AGE.

DIS: The weighted distances to five Boston employment centers are referred to as DIS.

RAD: Accessibility to radial highways index

TAX: The TAX columns denote the rate of full-value property taxes per $10,000 dollars.

B: B = 1000(Bk − 0.63)² is the outcome of the equation, where Bk is the proportion of Black residents in each town.

PTRATIO: It refers to the student-to-teacher ratio in each community.

LSTAT: It refers to the population's lower socioeconomic status.

MEDV: It refers to the 1000-dollar median value of owner-occupied residences.

Data Preprocessing

# Check if there are any missing values
housing_df.isna().sum()

CRIM       0
ZN         0
INDUS      0
CHAS       0
NOX        0
RM         0
AGE        0
DIS        0
RAD        0
TAX        0
PTRATIO    0
B          0
LSTAT      0
MEDV       0
dtype: int64

No missing values are found

We examine our data's mean, standard deviation, and percentiles.
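
The article doesn't show the call, but the usual one-liner for those summary statistics, assuming the housing_df loaded above, is:

#count, mean, std, min, 25%/50%/75% percentiles and max for every numeric column
housing_df.describe()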


Graph Data


The crime, area, sector, nitric oxide, and 'B' columns appear to have multiple outliers at first glance because their minimum and maximum values are so far apart. In the AGE column, the mean and Q2 (the 50th percentile) do not match.

We might double-check it by examining the distribution of each column.
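
One way to run that check, sketched here with the pandas/matplotlib imports from above (the exact plotting code isn't shown in the original), is to draw a histogram of every column:

#histogram per column to inspect each distribution for skew and outliers
housing_df.hist(bins=30, figsize=(15, 10))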


  1. The rate of crime is rather low; the majority of values are in the range of 0 to 25, apart from a few very large values and many values of zero.
  2. The majority of residential land is zoned for less than 25,000 square feet. Land zones larger than 25,000 square feet represent a small portion of the dataset.
  3. The percentage of non-retail commercial acres is mostly split between two ranges: 0-13 and 13-23.
  4. Only a tiny portion of the properties are bordered by the river; the majority are not.
  5. The nitric oxide content mostly lies between 0.3 and 0.7, with a little bump towards 0.8. Values in the range of 0.1-1 are permissible.
  6. The number of rooms tends to cluster around the average.
  7. With time, the proportion of owner-occupied units rises.
  8. Most properties have a small weighted distance to the five employment centers, and the count falls off as the distance grows. It could indicate that individuals choose to live close to high-employment areas.
  9. People choose to live in places with limited access to radial highways (0-10). There is also an outlier group at a much higher value.
  10. The majority of dwelling taxes are in the range of 200-450 (per $10,000 of full property value), with large outliers around 700.
  11. The percentage of people with lower status tends to cluster around the median. The majority of persons are of lower social standing.

Removing all outliers would make the model overly generic and underfit it; keeping all outliers causes the model to overfit and become excessively accurate, because it learns the data's noise.

The approach is to establish a happy medium that prevents the model from becoming overly precise while still generalising well when faced with a new set of data.

We'll keep numbers below 600 because there's a huge anomaly in the TAX column around 600.
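
A minimal sketch of that filter, assuming the housing_df from above and using new_df as the name of the filtered frame (the name the later snippets rely on):

#keep only the rows whose full-value property tax rate is below 600
new_df = housing_df[housing_df['TAX'] < 600]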


Looking at the Distribution


The overall distribution, particularly the TAX, PTRATIO, and RAD, has improved slightly.



In the correlation map, perfect correlation is denoted by the lightest values, medium correlation between columns is represented by the reds, and negative correlation is represented by the black cells.

With a value of 0.89, we can see that 'MEDV', the median price we wish to predict, is substantially correlated with the number of rooms 'RM'. It is followed by the residential land 'ZN' with a value of 0.32 and the proportion 'B' with a value of 0.19.

The metrics that are most connected with price will be plotted.
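
As a sketch of that plot (the original code isn't shown; the column picks simply follow the correlations quoted above), a few scatter plots against the target do the job:

#scatter the features most correlated with the target MEDV
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, col in zip(axes, ['RM', 'ZN', 'B']):
    ax.scatter(new_df[col], new_df['MEDV'], alpha=0.5)
    ax.set_xlabel(col)
    ax.set_ylabel('MEDV')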


Feature Engineering

Feature Scaling

Gradient descent is aided by feature scaling, which ensures that all features are on the same scale. It makes locating the local optimum much easier.

Mean standardization is one strategy to employ. It substitutes (target-mean) for the target to ensure that the feature has a mean of nearly zero.

def standard(X):
    '''Standard makes the feature 'X' have a zero mean'''
    mu = np.mean(X)        #mean
    std = np.std(X)        #standard deviation
    sta = (X - mu) / std   #mean normalization
    return mu, std, sta

mu, std, sta = standard(X)
X = sta
X


Choose and Train the Model

For the sake of the project, we'll apply linear regression.

Typically, we run numerous models and select the best one based on a particular criterion.

Linear regression is a sort of supervised learning model in which the response is continuous, as it relates to machine learning.

Form of Linear Regression

y = θ1 + θX (simple form), or y = θ1 + X1θ2 + X2θ3 + X3θ4 (with several features)

y is the target you will be predicting

θ is the coefficient

x is the input

We will use scikit-learn (sklearn) to develop and train the model.

#Import the libraries to train the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

The train/test approach lets us learn from one part of the data and make predictions on another, held-out set.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

#Create and Train the model
model = LinearRegression().fit(X_train, y_train)

#Generate prediction
predictions_test = model.predict(X_test)

#Compute loss to evaluate the model
coefficient = model.coef_
intercept = model.intercept_
print(coefficient, intercept)

[7.22218258] 24.66379606613584

In this example, the learned model corresponds to the hypothesis below:

Price= 24.85 + 7.18* Room

It is interpreted as:

For a given house, each additional room is associated with a 7.18-unit increase in the predicted price.

As a side note, this is an association, not a cause!


We need a metric to determine whether the hypothesis is right. The RMSE approach will be used.

Root Mean Square Error (RMSE) is defined as the square root of the mean of the squared errors, where the error is the difference between the true and predicted values. It's popular because it can be expressed in y-units, which in our scenario is the median price of a home.

def rmse(predict, actual):
    return np.sqrt(np.mean(np.square(predict - actual)))

# Split the data into train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

#Create and Train the model
model = LinearRegression().fit(X_train, y_train)

#Generate prediction
predictions_test = model.predict(X_test)

#Compute loss to evaluate the model
coefficient = model.coef_
intercept = model.intercept_
print(coefficient, intercept)

loss = rmse(predictions_test, y_test)
print('loss: ', loss)
print(model.score(X_test, y_test))  #accuracy

[7.43327725] 24.912055881970886
loss:  3.9673165450580714
0.7552661033654667

Loss will be 3.96.

Since the y-units are the median value of owner-occupied homes in thousands of dollars, a loss of 3.96 means the predictions are off by roughly 3,960 dollars.

While training the model you will see high variance when you divide the data: the coefficient and intercept will vary. This is because the train/test approach places a random subset of the data in either the train or the test set, so the hypothesis changes each time the dataset is divided.

This problem can be solved using a technique called cross-validation.
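
A quick sketch of what that looks like with scikit-learn (this call isn't in the original article, and the choice of 5 folds is just an assumption):

from sklearn.model_selection import cross_val_score

#fit and score the model on 5 different train/test splits, then average
scores = cross_val_score(LinearRegression(), X, y, cv=5)
print(scores.mean())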

Improvisation in the Model

With 'Forward Selection,' we'll iterate through each parameter to help us choose which features to include in our model.

Forward Selection

  1. Choose the most appropriate variable (in our case, based on high correlation).
  2. Add the next best variable to the model.
  3. Repeat until some predetermined condition is met.

We'll use a random state of 1 so that each iteration yields the same outcome.

cols = []
los = []
los_train = []
scor = []

i = 0
while i < len(high_corr_var):
    cols.append(high_corr_var[i])

    # Select input variables
    X = new_df[cols]

    #mean normalization
    mu, std, sta = standard(X)
    X = sta

    # Split the data into training and testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    #fit the model to the training set
    lnreg = LinearRegression().fit(X_train, y_train)

    #make prediction on the training set
    prediction_train = lnreg.predict(X_train)

    #make prediction on the testing set
    prediction = lnreg.predict(X_test)

    #compute the loss on train and test
    loss = rmse(prediction, y_test)
    loss_train = rmse(prediction_train, y_train)
    los_train.append(loss_train)
    los.append(loss)

    #compute the score
    score = lnreg.score(X_test, y_test)
    scor.append(score)

    i += 1

With a smaller collection of variables we have a bigger 'loss', and the model will over-generalize. With a large number of variables the 'loss' is reduced, but if the model grows too precise it may not generalize well to new data.

In order for our model to generalize well to another set of data, we might use 6 or 7 features. The features are chosen in descending order of how strongly they correlate with price.

high_corr_var ['RM', 'ZN', 'B', 'CHAS', 'RAD', 'DIS', 'CRIM', 'NOX', 'AGE', 'TAX', 'INDUS', 'PTRATIO', 'LSTAT']

'RM' has a strong positive price correlation, while 'LSTAT' has a negative price correlation.

# Create a list of feature names
feature_cols = ['RM', 'ZN', 'B', 'CHAS', 'RAD', 'CRIM', 'DIS', 'NOX']

#Select input variables
X = new_df[feature_cols]

# feature engineering (mean normalization), done before splitting so the model trains on scaled data
mu, std, sta = standard(X)
X = sta

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# fit the model to the training data
lnreg = LinearRegression().fit(X_train, y_train)

# make prediction on the testing set
prediction = lnreg.predict(X_test)

# compute the loss
loss = rmse(prediction, y_test)
print('loss: ', loss)
lnreg.score(X_test, y_test)

loss:  3.212659865936143
0.8582338376696363

The test set yielded a loss of 3.21 and an accuracy of 85%.

Other factors, such as alpha, the learning rate at which our model learns, could still be tweaked to improve the model. Alternatively, return to the preprocessing section and work on improving the parameter distributions.


How to Get Current URL in Laravel

In this small post we will see how to get the current URL in Laravel. If you want to get the current page URL in Laravel, you can use several methods such as current(), full(), request(), and url().

Here I will give you examples of getting the current page URL in Laravel. In these examples I have used both the helper and the request object, so let's start with the example of how to get the current URL (and its ID) in Laravel.

Read More : How to Get Current URL in Laravel

Read More : Laravel Signature Pad Example

#how to get current url in laravel #laravel get current url #get current page url in laravel #find current url in laravel #get full url in laravel #how to get current url id in laravel


Thurman Mills


Cloud Computing Vs Grid Computing

The similarity between cloud computing and grid computing is uncanny. The underlying concepts that make the two inherently different are actually so similar to one another that they create a lot of confusion. Both cloud and grid computing aim to provide similar services to a large user base by sharing resources among an enormous pool of clients.

Both of these technologies are network-based and capable of multitasking, which allows users of either service to run multiple applications at the same time. You are also not limited in the kind of applications you can use; you are free to choose any number of applications to accomplish whatever tasks you want. Learn more about cloud computing applications.

#cloud computing #cloud computing vs grid computing #grid computing #cloud