In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.

The traditional way of performing hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive searching through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.

Hyperparameter tuning is an essential procedure in supervised learning.

By applying hyperparameter tuning, you can judge how well your model performs with different parameter settings of the classifier.
When fitting a linear regression, we are simply choosing the parameters of the model that fit the data best. In ridge and lasso regression, for example, we have to choose a value for alpha before fitting the model.
Analogously, before fitting and predicting with K-nearest neighbors, we need to choose n_neighbors.
Such parameters, which need to be specified before fitting a model, are called hyperparameters.
Hyperparameters cannot be learned by fitting the model.
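
For instance (a minimal sketch, not code from the video), such hyperparameters are passed to the estimator's constructor before fit() is ever called:

    # Hyperparameters are set when the estimator is created, not learned during fit().
    from sklearn.linear_model import Ridge
    from sklearn.neighbors import KNeighborsClassifier

    ridge = Ridge(alpha=0.1)                   # alpha chosen before fitting
    knn = KNeighborsClassifier(n_neighbors=5)  # n_neighbors chosen before fitting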

The scheme for how hyperparameter tuning works is simple (a short code sketch follows the list):

  • Try a bunch of different hyperparameter values
  • Fit all of them separately
  • See how well each performs
  • Choose the best performing one
  • It is essential to use cross-validation, as using a train/test split alone would risk overfitting the hyperparameters to the test set.
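
A minimal sketch of this scheme, assuming the built-in iris dataset and illustrative n_neighbors values (not taken from the video):

    # Try several hyperparameter values, score each with cross-validation, keep the best.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    scores = {}
    for k in [1, 3, 5, 7, 9]:                      # a bunch of different hyperparameter values
        knn = KNeighborsClassifier(n_neighbors=k)  # fit each of them separately...
        scores[k] = cross_val_score(knn, X, y, cv=5).mean()  # ...and see how well each performs

    best_k = max(scores, key=scores.get)           # choose the best performing one
    print(best_k, scores[best_k])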

We want to have already split off a test set in order to report how well our model can be expected to perform on a dataset that it has never seen before.
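
For example (a minimal sketch, again using the iris dataset as a stand-in):

    # Hold out a test set before any tuning; report final performance on it only.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)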

The basic idea is as follows: we choose a grid of possible values we want to try for each hyperparameter.
For example, if we had two hyperparameters, C and alpha, the grid of values to test could look like the sketch below.
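
The values here are purely illustrative, not taken from the original:

    # A hypothetical grid of candidate values for two hyperparameters.
    param_grid = {
        "C": [0.1, 1, 10, 100],
        "alpha": [0.0001, 0.001, 0.01, 0.1],
    }
    # Every (C, alpha) pair in the Cartesian product of these lists is one point in the grid.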

We then perform k-fold cross-validation for each point in the grid, that is, for each choice of hyperparameter or combination of hyperparameters.

We then choose for our model the hyperparameters that performed the best!
This is called a grid search, and in scikit-learn we implement it using the class GridSearchCV.
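
A minimal GridSearchCV sketch, using a decision tree classifier on the built-in iris dataset; the grid values are illustrative assumptions, not taken from the video:

    # Exhaustively evaluate every grid point with 5-fold cross-validation.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    param_grid = {
        "max_depth": [2, 3, 5, 10],
        "min_samples_leaf": [1, 2, 5],
    }
    grid = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
    grid.fit(X, y)

    print(grid.best_params_, grid.best_score_)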

GridSearchCV can be computationally expensive, especially if you are searching over a large hyperparameter space and dealing with multiple hyperparameters. A solution to this is to use RandomizedSearchCV, in which not all hyperparameter values are tried out. Instead, a fixed number of hyperparameter settings is sampled from specified probability distributions.
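
A corresponding RandomizedSearchCV sketch; the dataset, distributions, and n_iter below are illustrative assumptions, not from the video:

    # Sample a fixed number of hyperparameter settings instead of trying every grid point.
    from scipy.stats import randint
    from sklearn.datasets import load_iris
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    param_dist = {
        "max_depth": randint(2, 20),         # integers sampled from [2, 20)
        "min_samples_leaf": randint(1, 10),  # integers sampled from [1, 10)
    }
    search = RandomizedSearchCV(
        DecisionTreeClassifier(random_state=42),
        param_dist,
        n_iter=10,   # only 10 settings are sampled and evaluated
        cv=5,
        random_state=42,
    )
    search.fit(X, y)

    print(search.best_params_, search.best_score_)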

The contents of the video:
#01 Theory part: 0:03
#02 Coding: standard way: 2:30
#03 Coding: implementing Cross Validation and Hyperparameter tuning to the model: 4:43
Results: 12:40

Subscribe: https://www.youtube.com/c/VytautasBielinskas/featured

#python #machine-learning

Hyperparameter Tuning and Cross Validation to Decision Tree classifier (Machine learning by Python)