Parameters and Hyperparameters:

Parameters:

Probabilistic models are defined by unknown quantities called parameters. These are adjusted by an optimization technique so that the model fits the patterns in the training sample as well as possible. Put simply, parameters are estimated by the algorithm, and the user has little or no control over them.

In a simple linear regression, the model parameters are the betas (β).

[Figure: the multiple linear regression equation]

Source: https://pt.slideshare.net/vitor_vasconcelos/regresso-linear-mltipla
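The idea above can be sketched in code. This is a minimal illustration, assuming scikit-learn and made-up data: we generate points from a known line and let the algorithm estimate the betas on its own.

```python
# A minimal sketch: the algorithm estimates the betas of a simple
# linear regression; the user never sets them directly.
# (The data below is synthetic, generated for illustration only.)
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))              # one explanatory variable
y = 2.0 + 3.0 * X[:, 0] + rng.normal(0, 1, 100)    # true betas: 2.0 and 3.0

model = LinearRegression().fit(X, y)

# The fitted parameters are recovered by optimization (least squares),
# and should land close to the true values used to generate the data.
print("beta_0 (intercept):", model.intercept_)
print("beta_1 (slope):", model.coef_[0])
```

Note that nowhere do we pass the betas in: they come out of the fitting procedure, which is exactly what distinguishes parameters from hyperparameters.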

BAM!!! : In statistics jargon, parameters are defined as population characteristics. So, strictly speaking, when we talk about the corresponding quantities computed from a sample, we use the term estimates (produced by estimators). This makes little difference in this context, but it is worth noting.

Hyperparameters:

Hyperparameters are settings that control how an algorithm learns. Their values influence the model's parameters, since changing the hyperparameters changes the way the model learns. This set of values affects the performance, stability, and interpretability of a model.

Each algorithm has its own grid of hyperparameters, which can be tuned to the business problem at hand.
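A hyperparameter grid like this can be searched automatically. The sketch below assumes a random forest and scikit-learn's GridSearchCV; the grid values and the synthetic dataset are illustrative, not a recommendation.

```python
# A minimal sketch of tuning a hyperparameter grid with cross-validation.
# The grid values below are arbitrary examples, not recommended defaults.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# Unlike parameters, these values are chosen by the user (or by a search).
param_grid = {
    "n_estimators": [50, 100],   # number of trees in the forest
    "max_depth": [3, None],      # maximum depth of each tree
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,  # 3-fold cross-validation for each combination
)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
```

Each combination in the grid is evaluated by cross-validation, and the combination with the best average score is kept, which is the basic loop behind most hyperparameter tuning.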

#hyperparameter-tuning #machine-learning #statistics #deep learning
