You’ve created a deep learning model in Keras and prepared the data, and now you are wondering which loss function to choose for your problem.

We’ll get to that in a second but first what is a loss function?

In deep learning, the loss is the quantity whose gradients with respect to the model weights are computed via backpropagation; those gradients are then used to update the weights. The loss is calculated and the network is updated at every iteration, and training continues until further updates no longer improve the desired evaluation metric.
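The loop above can be sketched in a few lines of NumPy. This is a toy illustration (not Keras internals): a single linear weight is fit by gradient descent on a mean-squared-error loss, showing how the loss drives the weight updates.

```python
import numpy as np

# Toy data: one input feature, true weight is 3.0
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x

w = 0.0    # model weight, initialized at zero
lr = 0.1   # learning rate

for _ in range(50):
    pred = w * x
    loss = np.mean((pred - y) ** 2)      # compute the loss (MSE)
    grad = np.mean(2 * (pred - y) * x)   # gradient of the loss w.r.t. w
    w -= lr * grad                       # update the weight

# After training, w has converged close to the true weight 3.0
```

In a real framework the gradient is obtained by automatic differentiation rather than written by hand, but the structure of the loop is the same.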

So while you typically keep the same evaluation metric, such as F1 score or AUC, on the validation set throughout (long parts of) a machine learning project, the loss can be changed, adjusted, and modified to get the best performance on that metric.

You can think about the loss function just like you think about the model architecture or the optimizer, and it is worth putting some thought into choosing it. In this piece we’ll look at:

  • **loss functions available in Keras** and how to use them,
  • how you can define your own **custom loss function** in Keras,
  • how to add **sample weighting** to create observation-sensitive losses,
  • how to avoid NaNs in the loss,
  • **how you can monitor the loss function** via plotting and callbacks.

Let’s get into it!


Keras Loss Functions: Everything You Need To Know