SISA Assembler
This is an assembler for the SISA assembly language, written in Rust!
Execute the following commands:
git clone https://github.com/rdvdev2/sisa-assembler.git
cd sisa-assembler
cargo install --path .
If you haven't done it yet, add .cargo/bin to your $PATH:
# At the end of your .bashrc / .zshrc
export PATH=$PATH:$HOME/.cargo/bin
Run sas -h to see the program help:
The SISA assembler by rdvdev2<me@rdvdev2.com>
Usage: sas [OPTIONS]
Recognized options:
-i, --input FILE Uses FILE as input (source.S by default)
-o, --output FILE Uses FILE as output (out.bin by default)
--text-section-start ADDRESS Places the .text section in ADDRESS (0x0000 by default)
--data-section-start ADDRESS Places the .data section in ADDRESS (right after .text by default)
--auto-align-words Automatically aligns words to multiples of 2 (disabled by default)
--auto-align-sections Automatically aligns sections to multiples of 2 (disabled by default)
-h, --help Shows this help message
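For example, to assemble a hypothetical program.S into program.bin with the .data section placed at 0x8000, you could run the following (the file names are illustrative; the flags are the ones listed in the help above):

sas -i program.S -o program.bin --data-section-start 0x8000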
The language (as well as the ISA) is defined by the documentation of the IC subject of the Computer Engineering course of the UPC. I won't include the specification here as I'm not sure about its licensing. I wrote the assembler following this specification as closely as possible, but be aware that this is a personal project and as such the implementation may not be perfect. Report any issues you find!
Here are some notes for IC students who may use this assembler in their study:
- --auto-align-words and --auto-align-sections aren't part of the official specification; use .even instead.
- The assembler places the .data section immediately after the .text section by default. Ensure that this is the desired behaviour before assembling. If it isn't, check the program help to relocate the sections.
- You can write .byte 0xFFFF and the assembler will interpret it as .byte -1, effectively translating a word into a byte. This is probably not desirable when writing programs for your assignments, and you should avoid taking advantage of this feature.
- NOP may not be accepted in your assignments. However, you shouldn't need it, because it just does nothing. If you do use it, take note that it can be codified using any invalid opcode; in the case of this assembler, NOP is always codified as 0xFFFF.

This is a (somewhat loose) roadmap of the project. Take it with a grain of salt; I may not implement everything in the list!
- --enforce-ic-compliance flag

Author: rdvdev2
Source Code: https://github.com/rdvdev2/sisa-assembler
License: LGPL-3.0 license
Serde
*Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.*
You may be looking for:
#[derive(Serialize, Deserialize)]
Add the following to your Cargo.toml, then run the example below:
[dependencies]
# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }
# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize, Debug)]
struct Point {
x: i32,
y: i32,
}
fn main() {
let point = Point { x: 1, y: 2 };
// Convert the Point to a JSON string.
let serialized = serde_json::to_string(&point).unwrap();
// Prints serialized = {"x":1,"y":2}
println!("serialized = {}", serialized);
// Convert the JSON string back to a Point.
let deserialized: Point = serde_json::from_str(&serialized).unwrap();
// Prints deserialized = Point { x: 1, y: 2 }
println!("deserialized = {:?}", deserialized);
}
Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.
Download Details:
Author: serde-rs
Source Code: https://github.com/serde-rs/serde
License: MIT or Apache-2.0
Serde JSON
Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.
[dependencies]
serde_json = "1.0"
You may be looking for:
#[derive(Serialize, Deserialize)]
JSON is a ubiquitous open-standard format that uses human-readable text to transmit data objects consisting of key-value pairs.
{
"name": "John Doe",
"age": 43,
"address": {
"street": "10 Downing Street",
"city": "London"
},
"phones": [
"+44 1234567",
"+44 2345678"
]
}
There are three common ways that you might find yourself needing to work with JSON data in Rust:
- As text data: an unprocessed string of JSON data that you receive from an HTTP endpoint, read from a file, or prepare to send to a remote server.
- As an untyped or loosely typed representation: maybe you want to check that some JSON data is valid before passing it on, but without knowing the structure of what it contains, or you want to do basic manipulations like inserting a key in a particular spot.
- As a strongly typed Rust data structure: when you expect all or most of your data to conform to a particular structure and want to get real work done without JSON's loosey-goosey nature tripping you up.
Serde JSON provides efficient, flexible, safe ways of converting data between each of these representations.
Any valid JSON data can be manipulated in the following recursive enum representation. This data structure is serde_json::Value.
enum Value {
Null,
Bool(bool),
Number(Number),
String(String),
Array(Vec<Value>),
Object(Map<String, Value>),
}
A string of JSON data can be parsed into a serde_json::Value by the serde_json::from_str function. There is also from_slice for parsing from a byte slice &[u8] and from_reader for parsing from any io::Read like a File or a TCP stream.
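For instance, parsing straight from a file might look like this (a minimal sketch; data.json is a hypothetical path, and any other io::Read such as a TcpStream works the same way):

use std::fs::File;
use std::io::BufReader;
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open the file and wrap it in a buffered reader for efficiency.
    let file = File::open("data.json")?;
    let reader = BufReader::new(file);
    // from_reader parses JSON from any io::Read.
    let v: Value = serde_json::from_reader(reader)?;
    println!("{}", v);
    Ok(())
}

The untyped example below parses from an in-memory string with from_str: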
use serde_json::{Result, Value};
fn untyped_example() -> Result<()> {
// Some JSON input data as a &str. Maybe this comes from the user.
let data = r#"
{
"name": "John Doe",
"age": 43,
"phones": [
"+44 1234567",
"+44 2345678"
]
}"#;
// Parse the string of data into serde_json::Value.
let v: Value = serde_json::from_str(data)?;
// Access parts of the data by indexing with square brackets.
println!("Please call {} at the number {}", v["name"], v["phones"][0]);
Ok(())
}
The result of square bracket indexing like v["name"] is a borrow of the data at that index, so the type is &Value. A JSON map can be indexed with string keys, while a JSON array can be indexed with integer keys. If the type of the data is not right for the type with which it is being indexed, or if a map does not contain the key being indexed, or if the index into a vector is out of bounds, the returned element is Value::Null.
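For instance (a small sketch illustrating the fallback to Value::Null):

use serde_json::Value;

fn main() {
    let v: Value = serde_json::from_str(r#"{"name":"John Doe"}"#).unwrap();
    // A missing key yields Value::Null rather than panicking.
    assert!(v["address"].is_null());
    // So does indexing a JSON string with an integer.
    assert!(v["name"][0].is_null());
}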
When a Value is printed, it is printed as a JSON string. So in the code above, the output looks like Please call "John Doe" at the number "+44 1234567". The quotation marks appear because v["name"] is a &Value containing a JSON string and its JSON representation is "John Doe". Printing as a plain string without quotation marks involves converting from a JSON string to a Rust string with as_str() or avoiding the use of Value as described in the following section.
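For example (a minimal sketch of the as_str() conversion):

use serde_json::Value;

fn main() {
    let v: Value = serde_json::from_str(r#"{"name":"John Doe"}"#).unwrap();
    // as_str() returns Option<&str>: Some(..) only if the Value is a JSON string.
    if let Some(name) = v["name"].as_str() {
        println!("Please call {}", name); // prints without quotation marks
    }
}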
The Value representation is sufficient for very basic tasks but can be tedious to work with for anything more significant. Error handling is verbose to implement correctly; for example, imagine trying to detect the presence of unrecognized fields in the input data. The compiler is powerless to help you when you make a mistake; for example, imagine typoing v["name"] as v["nmae"] in one of the dozens of places it is used in your code.
Serde provides a powerful way of mapping JSON data into Rust data structures largely automatically.
use serde::{Deserialize, Serialize};
use serde_json::Result;
#[derive(Serialize, Deserialize)]
struct Person {
name: String,
age: u8,
phones: Vec<String>,
}
fn typed_example() -> Result<()> {
// Some JSON input data as a &str. Maybe this comes from the user.
let data = r#"
{
"name": "John Doe",
"age": 43,
"phones": [
"+44 1234567",
"+44 2345678"
]
}"#;
// Parse the string of data into a Person object. This is exactly the
// same function as the one that produced serde_json::Value above, but
// now we are asking it for a Person as output.
let p: Person = serde_json::from_str(data)?;
// Do things just like with any other Rust data structure.
println!("Please call {} at the number {}", p.name, p.phones[0]);
Ok(())
}
This is the same serde_json::from_str function as before, but this time we assign the return value to a variable of type Person so Serde will automatically interpret the input data as a Person and produce informative error messages if the layout does not conform to what a Person is expected to look like.
Any type that implements Serde's Deserialize trait can be deserialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Deserialize)].
Once we have p of type Person, our IDE and the Rust compiler can help us use it correctly like they do for any other Rust code. The IDE can autocomplete field names to prevent typos, which was impossible in the serde_json::Value representation. And the Rust compiler can check that when we write p.phones[0], then p.phones is guaranteed to be a Vec<String>, so indexing into it makes sense and produces a String.
The necessary setup for using Serde's derive macros is explained on the Using derive page of the Serde site.
Serde JSON provides a json! macro to build serde_json::Value objects with very natural JSON syntax.
use serde_json::json;
fn main() {
// The type of `john` is `serde_json::Value`
let john = json!({
"name": "John Doe",
"age": 43,
"phones": [
"+44 1234567",
"+44 2345678"
]
});
println!("first phone number: {}", john["phones"][0]);
// Convert to a string of JSON and print it out
println!("{}", john.to_string());
}
The Value::to_string() function converts a serde_json::Value into a String of JSON text.
One neat thing about the json! macro is that variables and expressions can be interpolated directly into the JSON value as you are building it. Serde will check at compile time that the value you are interpolating is able to be represented as JSON.
let full_name = "John Doe";
let age_last_year = 42;
// The type of `john` is `serde_json::Value`
let john = json!({
"name": full_name,
"age": age_last_year + 1,
"phones": [
format!("+44 {}", random_phone())
]
});
This is amazingly convenient, but we have the problem we had before with Value: the IDE and Rust compiler cannot help us if we get it wrong. Serde JSON provides a better way of serializing strongly-typed data structures into JSON text.
A data structure can be converted to a JSON string by serde_json::to_string. There is also serde_json::to_vec which serializes to a Vec<u8> and serde_json::to_writer which serializes to any io::Write such as a File or a TCP stream.
use serde::{Deserialize, Serialize};
use serde_json::Result;
#[derive(Serialize, Deserialize)]
struct Address {
street: String,
city: String,
}
fn print_an_address() -> Result<()> {
// Some data structure.
let address = Address {
street: "10 Downing Street".to_owned(),
city: "London".to_owned(),
};
// Serialize it to a JSON string.
let j = serde_json::to_string(&address)?;
// Print, write to a file, or send to an HTTP server.
println!("{}", j);
Ok(())
}
Any type that implements Serde's Serialize trait can be serialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Serialize)].
It is fast. You should expect in the ballpark of 500 to 1000 megabytes per second deserialization and 600 to 900 megabytes per second serialization, depending on the characteristics of your data. This is competitive with the fastest C and C++ JSON libraries or even 30% faster for many use cases. Benchmarks live in the serde-rs/json-benchmark repo.
Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.
As long as there is a memory allocator, it is possible to use serde_json without the rest of the Rust standard library. This is supported on Rust 1.36+. Disable the default "std" feature and enable the "alloc" feature:
[dependencies]
serde_json = { version = "1.0", default-features = false, features = ["alloc"] }
For JSON support in Serde without a memory allocator, please see the serde-json-core crate.
Running Rust in the Browser with Web Assembly
I’ve recently been working on a Rust course for the Qvault app. In order to write a more engaging course, I want students to be able to write and execute code right in the browser. As I’ve learned from my previous posts on this topic, the easiest way to sandbox code execution on a server is to not execute code on a server. Enter Web Assembly, stage left.
For those of you who don’t care about how it works, and just want to give it a try, check out the demo: Rust WASM Playground.
The architecture is fairly simple: the browser ships the Rust source to the server, the server compiles it to Web Assembly, and the compiled binary is sent back to the browser, where it runs sandboxed.
Writing code and shipping it to the server hopefully needs no explanation: it’s a simple text editor coupled with the fetch API. The first interesting thing we do is compile the code on the server.
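For reference, the client side might be as simple as this sketch in TypeScript (the /api/compile endpoint and the editor object are hypothetical; the post doesn't show this code):

async function compileOnServer(code: string): Promise<ArrayBuffer> {
  // POST the source code to the server as JSON.
  const resp = await fetch("/api/compile", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code }),
  });
  // The server responds with the compiled WASM binary.
  return resp.arrayBuffer();
}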
Qvault’s server is written in Go. I have a simple HTTP handler with the following signature:
func (cfg config) compileRustHandler(w http.ResponseWriter, r *http.Request)
At the start of the function we unmarshal the code which was provided in a JSON body:
type parameters struct {
Code string
}
decoder := json.NewDecoder(r.Body)
params := parameters{}
err := decoder.Decode(&params)
if err != nil {
respondWithError(w, 500, "Couldn't decode parameters")
return
}
Next, we create a temporary folder on disk that we’ll use as a “scratch pad” to create a Rust project.
usr, err := user.Current()
if err != nil {
respondWithError(w, 500, "Couldn't get system user")
return
}
workingDir := filepath.Join(usr.HomeDir, ".wasm", uuid.New().String())
err = os.MkdirAll(workingDir, os.ModePerm)
if err != nil {
respondWithError(w, 500, "Couldn't create directory for compilation")
return
}
defer func() {
err = os.RemoveAll(workingDir)
if err != nil {
respondWithError(w, 500, "Couldn't clean up code from compilation")
return
}
}()
As you can see, we create the project under the .wasm/uuid path in the home directory. We also defer an os.RemoveAll call that will delete this folder when we are done handling this request.
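The excerpt stops here, but the next step would be invoking the compiler inside the scratch pad. A plausible sketch using os/exec (not the author's actual code; the target triple and flags are assumptions):

// Hypothetical continuation: compile the project to Web Assembly.
cmd := exec.Command("cargo", "build", "--release", "--target", "wasm32-unknown-unknown")
cmd.Dir = workingDir
if output, err := cmd.CombinedOutput(); err != nil {
	respondWithError(w, 500, string(output))
	return
}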
How to Predict Housing Prices with Linear Regression
The final objective is to estimate the price of a house in a Boston suburb, using data provided by the Boston Standard Metropolitan Statistical Area in 1970. To examine and prepare the data, we will use several techniques such as data pre-processing and feature engineering. After that, we'll fit a statistical model, linear regression, to predict housing prices.
Project Outline:
Before fitting a statistical model, exploratory data analysis (EDA) is a good step to go through in order to understand the data's distributions, check for missing values, spot outliers, and identify the features most correlated with the target.
# Import the libraries

# Dataframe/numerical libraries
import pandas as pd
import numpy as np

# Data visualization
import plotly.express as px
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

# Machine learning model
from sklearn.linear_model import LinearRegression
# Reading the data
path = './housing.csv'
housing_df = pd.read_csv(path, header=None, delim_whitespace=True)
 | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 0.00632 | 18.0 | 2.31 | 0 | 0.538 | 6.575 | 65.2 | 4.0900 | 1 | 296.0 | 15.3 | 396.90 | 4.98 | 24.0 |
1 | 0.02731 | 0.0 | 7.07 | 0 | 0.469 | 6.421 | 78.9 | 4.9671 | 2 | 242.0 | 17.8 | 396.90 | 9.14 | 21.6 |
2 | 0.02729 | 0.0 | 7.07 | 0 | 0.469 | 7.185 | 61.1 | 4.9671 | 2 | 242.0 | 17.8 | 392.83 | 4.03 | 34.7 |
3 | 0.03237 | 0.0 | 2.18 | 0 | 0.458 | 6.998 | 45.8 | 6.0622 | 3 | 222.0 | 18.7 | 394.63 | 2.94 | 33.4 |
4 | 0.06905 | 0.0 | 2.18 | 0 | 0.458 | 7.147 | 54.2 | 6.0622 | 3 | 222.0 | 18.7 | 396.90 | 5.33 | 36.2 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
501 | 0.06263 | 0.0 | 11.93 | 0 | 0.573 | 6.593 | 69.1 | 2.4786 | 1 | 273.0 | 21.0 | 391.99 | 9.67 | 22.4 |
502 | 0.04527 | 0.0 | 11.93 | 0 | 0.573 | 6.120 | 76.7 | 2.2875 | 1 | 273.0 | 21.0 | 396.90 | 9.08 | 20.6 |
503 | 0.06076 | 0.0 | 11.93 | 0 | 0.573 | 6.976 | 91.0 | 2.1675 | 1 | 273.0 | 21.0 | 396.90 | 5.64 | 23.9 |
504 | 0.10959 | 0.0 | 11.93 | 0 | 0.573 | 6.794 | 89.3 | 2.3889 | 1 | 273.0 | 21.0 | 393.45 | 6.48 | 22.0 |
505 | 0.04741 | 0.0 | 11.93 | 0 | 0.573 | 6.030 | 80.8 | 2.5050 | 1 | 273.0 | 21.0 | 396.90 | 7.88 | 11.9 |
CRIM: per capita crime rate by town.
ZN: proportion of residential land zoned for lots over 25,000 sq. ft.
INDUS: proportion of non-retail business acres per town.
CHAS: Charles River dummy variable (1 if the tract bounds the river, 0 otherwise).
NOX: nitric oxides concentration (parts per 10 million).
RM: average number of rooms per dwelling.
AGE: proportion of owner-occupied units built prior to 1940.
DIS: weighted distances to five Boston employment centres.
RAD: index of accessibility to radial highways.
TAX: full-value property-tax rate per $10,000.
PTRATIO: pupil-teacher ratio by town.
B: 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town.
LSTAT: percentage of the population with lower socioeconomic status.
MEDV: median value of owner-occupied homes, in thousands of dollars.
# Check if there are any missing values.
housing_df.isna().sum()

CRIM       0
ZN         0
INDUS      0
CHAS       0
NOX        0
RM         0
AGE        0
DIS        0
RAD        0
TAX        0
PTRATIO    0
B          0
LSTAT      0
MEDV       0
dtype: int64

No missing values are found.
We examine our data's mean, standard deviation, and percentiles.
housing_df.describe()
 | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
count | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 |
mean | 3.613524 | 11.363636 | 11.136779 | 0.069170 | 0.554695 | 6.284634 | 68.574901 | 3.795043 | 9.549407 | 408.237154 | 18.455534 | 356.674032 | 12.653063 | 22.532806 |
std | 8.601545 | 23.322453 | 6.860353 | 0.253994 | 0.115878 | 0.702617 | 28.148861 | 2.105710 | 8.707259 | 168.537116 | 2.164946 | 91.294864 | 7.141062 | 9.197104 |
min | 0.006320 | 0.000000 | 0.460000 | 0.000000 | 0.385000 | 3.561000 | 2.900000 | 1.129600 | 1.000000 | 187.000000 | 12.600000 | 0.320000 | 1.730000 | 5.000000 |
25% | 0.082045 | 0.000000 | 5.190000 | 0.000000 | 0.449000 | 5.885500 | 45.025000 | 2.100175 | 4.000000 | 279.000000 | 17.400000 | 375.377500 | 6.950000 | 17.025000 |
50% | 0.256510 | 0.000000 | 9.690000 | 0.000000 | 0.538000 | 6.208500 | 77.500000 | 3.207450 | 5.000000 | 330.000000 | 19.050000 | 391.440000 | 11.360000 | 21.200000 |
75% | 3.677083 | 12.500000 | 18.100000 | 0.000000 | 0.624000 | 6.623500 | 94.075000 | 5.188425 | 24.000000 | 666.000000 | 20.200000 | 396.225000 | 16.955000 | 25.000000 |
max | 88.976200 | 100.000000 | 27.740000 | 1.000000 | 0.871000 | 8.780000 | 100.000000 | 12.126500 | 24.000000 | 711.000000 | 22.000000 | 396.900000 | 37.970000 | 50.000000 |
At first glance, CRIM, ZN, INDUS, NOX and B appear to have multiple outliers, because their minimum and maximum values are so far apart. In the AGE column, the mean and the median (50th percentile) do not match.
We might double-check it by examining the distribution of each column.
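A minimal sketch of that check, using the matplotlib import from above:

# Plot the distribution of every column to eyeball skew and outliers.
housing_df.hist(bins=50, figsize=(14, 10))
plt.tight_layout()
plt.show()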
Removing every outlier would leave the model overly generic and underfit; keeping every outlier would let the model learn the data's noise and overfit, making it deceptively accurate on the training data. The approach is to find a happy medium that prevents the model from becoming overly precise, so that it still generalises well when faced with a new set of data.
There is a large gap in the TAX column around 600, so we'll keep only the rows with values below 600.
new_df=housing_df[housing_df['TAX']<600]
The overall distribution, particularly the TAX, PTRATIO, and RAD, has improved slightly.
In the correlation heatmap, the lightest values denote perfect correlation, reds represent medium correlation between columns, and blacks represent negative correlation. With a value of 0.89, 'MEDV', the median price we wish to predict, is most strongly correlated with the number of rooms 'RM', followed by the residential land 'ZN' at 0.32 and 'B' at 0.19.
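A sketch of how such a heatmap can be produced with the seaborn import from above (the exact colormap is an assumption):

# Correlation matrix of the filtered data, rendered as an annotated heatmap.
corr = new_df.corr()
plt.figure(figsize=(12, 10))
sns.heatmap(corr, annot=True, cmap='RdBu_r')
plt.show()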
Next, the features most correlated with price will be plotted.
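For example (a minimal sketch; the three features are the ones named above):

# Scatter plots of the features most correlated with the target MEDV.
for col in ['RM', 'ZN', 'B']:
    new_df.plot(kind='scatter', x=col, y='MEDV', alpha=0.5)
    plt.show()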
Feature scaling helps gradient descent by ensuring that all features are on the same scale, which makes locating the optimum much easier. One strategy is standardization: replace each feature with (feature - mean) / standard deviation, so that it ends up with a mean of (nearly) zero and unit variance.
def standard(X):
    '''Standard makes the feature 'X' have a zero mean'''
    mu = np.mean(X)        # mean
    std = np.std(X)        # standard deviation
    sta = (X - mu) / std   # mean normalization
    return mu, std, sta

mu, std, sta = standard(X)
X = sta
X
 | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | -0.609129 | 0.092792 | -1.019125 | -0.280976 | 0.258670 | 0.279135 | 0.162095 | -0.167660 | -2.105767 | -0.235130 | -1.136863 | 0.401318 | -0.933659 |
1 | -0.575698 | -0.598153 | -0.225291 | -0.280976 | -0.423795 | 0.049252 | 0.648266 | 0.250975 | -1.496334 | -1.032339 | -0.004175 | 0.401318 | -0.219350 |
2 | -0.575730 | -0.598153 | -0.225291 | -0.280976 | -0.423795 | 1.189708 | 0.016599 | 0.250975 | -1.496334 | -1.032339 | -0.004175 | 0.298315 | -1.096782 |
3 | -0.567639 | -0.598153 | -1.040806 | -0.280976 | -0.532594 | 0.910565 | -0.526350 | 0.773661 | -0.886900 | -1.327601 | 0.403593 | 0.343869 | -1.283945 |
4 | -0.509220 | -0.598153 | -1.040806 | -0.280976 | -0.532594 | 1.132984 | -0.228261 | 0.773661 | -0.886900 | -1.327601 | 0.403593 | 0.401318 | -0.873561 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
501 | -0.519445 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.306004 | 0.300494 | -0.936773 | -2.105767 | -0.574682 | 1.445666 | 0.277056 | -0.128344 |
502 | -0.547094 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | -0.400063 | 0.570195 | -1.027984 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.229652 |
503 | -0.522423 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.877725 | 1.077657 | -1.085260 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.820331 |
504 | -0.444652 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | 0.606046 | 1.017329 | -0.979587 | -2.105767 | -0.574682 | 1.445666 | 0.314006 | -0.676095 |
505 | -0.543685 | -0.598153 | 0.585220 | -0.280976 | 0.604848 | -0.534410 | 0.715691 | -0.924173 | -2.105767 | -0.574682 | 1.445666 | 0.401318 | -0.435703 |
For the sake of the project, we'll apply linear regression. Typically, we run numerous models and select the best one based on a particular criterion. Linear regression is a type of supervised learning model in which the response is continuous.
Form of Linear Regression:

y = θ0 + θ1*x1 + θ2*x2 + ... + θn*xn

where y is the target you will be predicting, the θ are the coefficients, and the x are the inputs.
We will use scikit-learn to develop and train the model.
# Import the libraries to train the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
The train/test approach lets us learn from one part of the data and make predictions on another, held-out part.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

# Create and train the model
model = LinearRegression().fit(X_train, y_train)

# Generate predictions
predictions_test = model.predict(X_test)

# Inspect the fitted parameters
coefficient = model.coef_
intercept = model.intercept_
print(coefficient, intercept)

[7.22218258] 24.66379606613584
In this example, the fitted model corresponds to the hypothesis:

Price ≈ 24.66 + 7.22 * Rooms

It is interpreted as: for a given house, each additional room is associated with an increase of about 7.22 units in the predicted price. As a side note, this is an association, not a cause!
You will need a metric to determine whether the hypothesis is any good. We will use the Root Mean Square Error (RMSE): the square root of the mean of the squared errors, where the error is the difference between the true and predicted values. It's popular because it is expressed in the units of y, which in our scenario is the median price of a home.
def rmse(predict, actual):
    return np.sqrt(np.mean(np.square(predict - actual)))

# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

# Create and train the model
model = LinearRegression().fit(X_train, y_train)

# Generate predictions
predictions_test = model.predict(X_test)

# Compute the loss to evaluate the model
coefficient = model.coef_
intercept = model.intercept_
print(coefficient, intercept)
loss = rmse(predictions_test, y_test)
print('loss: ', loss)
print(model.score(X_test, y_test))  # accuracy (R^2)

[7.43327725] 24.912055881970886
loss:  3.9673165450580714
0.7552661033654667

The loss is about 3.96.
Since y is the median value of owner-occupied homes in units of $1000, a loss of 3.96 means the predictions are off by roughly $3,960.
When you repeat the split, the learned coefficient and intercept vary noticeably: the train/test approach places randomly chosen rows in either the train or the test set, so the fitted hypothesis changes each time the dataset is divided.
This problem can be solved using a technique called cross-validation.
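A minimal sketch of what that could look like here (5 folds is an arbitrary choice):

from sklearn.model_selection import cross_val_score

# Cross-validation fits the model on several splits and averages the error,
# smoothing out the variance of any single train/test split.
scores = cross_val_score(LinearRegression(), X, y,
                         scoring='neg_root_mean_squared_error', cv=5)
print('RMSE per fold:', -scores)
print('mean RMSE:', -scores.mean())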
With forward selection, we'll iterate through the candidate features one at a time to help us choose how many characteristics to include in the model. We'll use a random state of 1 so that each iteration yields the same outcome.
cols = []
los = []
los_train = []
scor = []

i = 0
while i < len(high_corr_var):
    cols.append(high_corr_var[i])

    # Select input variables
    X = new_df[cols]

    # Mean normalization
    mu, std, sta = standard(X)
    X = sta

    # Split the data into training and testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Fit the model to the training set
    lnreg = LinearRegression().fit(X_train, y_train)

    # Make predictions on the training set
    prediction_train = lnreg.predict(X_train)

    # Make predictions on the testing set
    prediction = lnreg.predict(X_test)

    # Compute the loss on the train and test sets
    loss = rmse(prediction, y_test)
    loss_train = rmse(prediction_train, y_train)
    los_train.append(loss_train)
    los.append(loss)

    # Compute the score
    score = lnreg.score(X_test, y_test)
    scor.append(score)

    i += 1
With a small set of features, the loss is large and the model over-generalises; with many features the loss shrinks, but a model that grows too precise may not generalise well to new data. For the model to generalise well to another set of data, we might use 6 or 7 features, chosen in descending order of how strongly they correlate with price.
high_corr_var
['RM', 'ZN', 'B', 'CHAS', 'RAD', 'DIS', 'CRIM', 'NOX', 'AGE', 'TAX', 'INDUS', 'PTRATIO', 'LSTAT']
'RM' has a strong positive correlation with price, while 'LSTAT' has a negative one.
# Create a list of feature names
feature_cols = ['RM', 'ZN', 'B', 'CHAS', 'RAD', 'CRIM', 'DIS', 'NOX']

# Select input variables
X = new_df[feature_cols]

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Feature engineering
mu, std, sta = standard(X)
X = sta

# Fit the model to the training data
lnreg = LinearRegression().fit(X_train, y_train)

# Make predictions on the testing set
prediction = lnreg.predict(X_test)

# Compute the loss
loss = rmse(prediction, y_test)
print('loss: ', loss)
lnreg.score(X_test, y_test)

loss:  3.212659865936143
0.8582338376696363
The test set yields a loss of 3.21 and an R^2 score of about 0.86.
Other settings could still be tweaked to improve the model, such as the regularisation strength alpha in regularised variants of linear regression, or the learning rate in gradient-descent-based training. Alternatively, return to the preprocessing section and work on improving the feature distributions.
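For instance, a hedged sketch with Ridge (alpha=1.0 is just scikit-learn's default, not a tuned value):

from sklearn.linear_model import Ridge

# Ridge adds L2 regularisation controlled by alpha; tuning it can reduce overfitting.
ridge = Ridge(alpha=1.0).fit(X_train, y_train)
print('ridge R^2:', ridge.score(X_test, y_test))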
Source: https://www.websitescraper.com/how-to-predict-housing-prices-with-linear-regression.php