Easy access to enormous amounts of data and high computing power have made it possible to design complex machine learning algorithms. As model complexity increases, so does the amount of data required to train it.

Data is not the only factor in a model's performance. Complex models have many hyperparameters that need to be correctly adjusted or tuned to get the most out of them.

For instance, the performance of XGBoost and LightGBM depends heavily on hyperparameter tuning. Implementing these algorithms without carefully adjusting the hyperparameters would be like driving a Ferrari at 50 mph.

In this post, we will experiment with how the performance of LightGBM changes with different hyperparameter values. The focus is on the parameters that help models generalize and thus reduce the risk of overfitting; a quick sketch of these parameters follows the imports.

Let’s start with importing the libraries.

import pandas as pd                                    # data loading and manipulation
from sklearn.model_selection import train_test_split  # hold-out split for evaluation
import lightgbm as lgb                                 # gradient boosting framework
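
Before loading the data, here is a quick sketch of the LightGBM parameters most relevant to overfitting. The values shown are illustrative starting points, not tuned settings:

# Parameters that constrain model complexity in LightGBM (illustrative values)
params = {
    "num_leaves": 31,         # fewer leaves -> simpler trees
    "max_depth": -1,          # cap tree depth (-1 means no limit)
    "min_data_in_leaf": 20,   # larger values prevent overly specific leaves
    "feature_fraction": 0.8,  # use a random subset of features per tree
    "bagging_fraction": 0.8,  # train each tree on a random subset of rows
    "bagging_freq": 1,        # re-sample the rows at every iteration
    "lambda_l1": 0.0,         # L1 regularization on leaf weights
    "lambda_l2": 0.0,         # L2 regularization on leaf weights
}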

The dataset contains 60,000 observations, 99 numerical features, and a target variable.
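
Since the dataset is not named, the snippet below is a minimal sketch: the file name data.csv and the column name target are placeholders, and a binary classification objective is assumed; adjust both to match your data. We hold out 20% of the observations as a test set and train a baseline model with the parameters above:

# Load the data (file and column names are placeholders for the unnamed dataset)
df = pd.read_csv("data.csv")
X = df.drop("target", axis=1)  # the 99 numerical features
y = df["target"]

# Hold out 20% of the observations to measure generalization
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Wrap the splits in LightGBM's Dataset objects and train a baseline booster
train_set = lgb.Dataset(X_train, label=y_train)
test_set = lgb.Dataset(X_test, label=y_test, reference=train_set)

booster = lgb.train(
    {**params, "objective": "binary", "metric": "binary_logloss"},  # assumed binary target
    train_set,
    num_boost_round=500,
    valid_sets=[train_set, test_set],
)

Watching how the gap between training and validation loss shrinks or widens as the parameters above change is exactly the kind of experiment the rest of the post runs.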

