Feature selection and error analysis while working with spatial data

Have you ever spent hours experimenting with model architecture and parameter tuning, only to find out that the algorithm was missing some crucial detail? Find out how to conduct efficient feature selection and error analysis while working with spatial data.


Error analysis is one of the key parts of training any ML model. Despite its importance, I have found myself spending far too many hours experimenting with model architecture and hyperparameter tuning before investigating the errors made by the algorithm. Efficient error analysis requires combining good knowledge of the input data and the algorithms with domain knowledge about the problem we are trying to solve.

Working with spatial data makes structuring error analysis easier as you can extract a lot of insights just by mapping your errors. In this article, I will focus on benchmarking Real Estate prices in Warsaw using Random Forest Regression.

The key data source consists of around 25k property sale offers from Warsaw with nearly 100 features. During project development, this data was enriched with additional data sources after the need for extra location information was identified during error analysis. I will demonstrate the key stages of this error-analysis-driven development process.

The full code and data sources are available on GitHub: https://github.com/Jan-Majewski/Project_Portfolio/blob/master/03_Real_Estate_pricing_in_Warsaw/03_03_Feature_selection_and_error_analysis.ipynb

Access code with nbViewer for full interactivity: https://nbviewer.jupyter.org/github/Jan-Majewski/Project_Portfolio/blob/eb4bb8be0cf79cac979d9411b69d5150270550d5/03_Real_Estate_pricing_in_Warsaw/03_03_Feature_selection_and_error_analysis.ipynb

1. Introduction

The data used can be downloaded from GitHub

import pandas as pd

df = pd.read_excel(r"https://raw.githubusercontent.com/Jan-Majewski/Project_Portfolio/master/03_Real_Estate_pricing_in_Warsaw/Warsaw_RE_data.xlsx")

After initial data transformation and basic EDA, both described in detail in the linked notebook, we end up with a DataFrame called ml_data with 25240 rows and 89 columns.

ml_data.columns

[Image: Property characteristic features available in the base input data]

Going through nearly 100 features might seem difficult at first, but we quickly realize that apart from 5 numerical features, such as Area, building year, and the numbers of rooms and floors, the remaining features are one-hot columns created from categorical data.
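To make this concrete, here is a minimal sketch of how such one-hot columns are typically produced with pd.get_dummies; the column names and values below are illustrative, not taken from the actual dataset:

import pandas as pd

# Hypothetical raw property data with one categorical column
raw = pd.DataFrame({
    "Area": [54.0, 38.5],
    "build_year": [2008, 1974],
    "building_material": ["brick", "concrete_slab"],
})

# get_dummies expands each categorical column into one 0/1 column per category
one_hot = pd.get_dummies(raw, columns=["building_material"])
print(one_hot.columns.tolist())
# ['Area', 'build_year', 'building_material_brick', 'building_material_concrete_slab']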

We can see that the basic dataset consists only of property-level characteristics, similar to the BostonHousing or CaliforniaHousing datasets. The initial data lacks a detailed description of the building's location, which in reality is the key price driver.

2. Feature selection

The first challenge is how to select the most important features to make training the regression model easier and to avoid overfitting. Scikit-learn provides a handy utility, SelectKBest, to aid us in feature selection. As we are facing a regression problem, I chose the f_regression scoring function.

SelectKBest allows us to find the top features, which carry the most information about the target variable y, which in our case is unit_price, expressed as price per unit of area.
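Before scoring, we need the feature matrix X and the target y. They are prepared in the linked notebook; below is only a minimal sketch, assuming the target column in ml_data is named unit_price and all remaining columns are used as candidate features:

# Assumption: ml_data holds the unit_price target alongside the candidate features
y = ml_data["unit_price"]
X = ml_data.drop(columns=["unit_price"])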

from sklearn.feature_selection import SelectKBest, f_regression

bestfeatures = SelectKBest(score_func=f_regression, k="all")
fit = bestfeatures.fit(X, y)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X.columns)
# Let's combine the outputs into one DataFrame for readability
featureScores = pd.concat([dfcolumns, dfscores], axis=1)
featureScores.columns = ['Feature', 'Score']  # named 'Feature' so the query below works
featureScores.nlargest(20, 'Score')  # top 20 features by f_regression score

[Image: Top 20 features from the base data input]

After several rounds of iteration, I decided to use only features with an f_regression score over 200. With the initial data, 33 features pass this threshold.

top_features = featureScores.query("Score > 200").Feature.unique()
top_features

[Image: Features used in the initial model]

3. Building the initial model

As feature selection and error analysis are the key focus of this article, I chose the Random Forest Regressor: it combines good performance with the ability to analyze feature importances. I will be using one model with the same hyperparameters across all iterations to keep the results comparable.

To avoid overfitting, I set a few regularization hyperparameters. They could probably be tuned to achieve slightly better results, but hyperparameter tuning is material for another article. Please find the model setup below:

[Image: Random Forest Regressor hyperparameters]
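The exact values are shown in the figure above; the snippet below is only an illustrative sketch of a regularized setup, and the specific hyperparameter values as well as the train/test split parameters are assumptions rather than the ones used in the original model:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative regularized Random Forest; the actual values come from the figure above
rf_model = RandomForestRegressor(
    n_estimators=500,      # assumed number of trees
    max_depth=12,          # assumed depth limit to reduce overfitting
    min_samples_leaf=5,    # assumed minimum leaf size to smooth predictions
    max_features="sqrt",   # assumed feature subsampling to decorrelate trees
    random_state=42,
    n_jobs=-1,
)

X_train, X_test, y_train, y_test = train_test_split(
    X[top_features], y, test_size=0.2, random_state=42
)
rf_model.fit(X_train, y_train)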

3.1 Investigating model performance and feature importance

As the key goal of my model is to accurately benchmark real estate prices, I aim to get as many properties as possible close to the benchmark. I will analyze model performance as the share of properties in the test set for which the absolute percentage error of the model's forecast falls within the 5%, 10%, and 25% boundaries.
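A minimal sketch of how this share-within-threshold metric can be computed, assuming the rf_model, X_test, and y_test objects from the illustrative setup above:

import numpy as np

# Absolute percentage error for each property in the test set
y_pred = rf_model.predict(X_test)
ape = np.abs(y_pred - y_test) / y_test

# Share of properties whose forecast falls within each error boundary
for threshold in (0.05, 0.10, 0.25):
    share = (ape <= threshold).mean()
    print(f"Within {threshold:.0%}: {share:.1%} of test-set properties")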

#real-estate #error-analysis #spatial-analysis #data-visualization #feature-selection #data analysis
