Custom metrics for Keras/TensorFlow

Recently, I published an article about binary classification metrics, which you can check here. It gives a brief explanation of the most traditional metrics and introduces less famous ones like NPV, specificity, and MCC. If you don’t know some of these metrics, take a look at the article; it’s only a 7-minute read, and I’m sure it will be useful for you.

In this article, I share the implementation of these metrics for deep learning frameworks: recall, precision, specificity, negative predictive value (NPV), F1-score, and Matthews correlation coefficient (MCC). You can use them with both Keras and TensorFlow v1/v2.

The Code

Here’s the complete code for all metrics:

import tensorflow as tf
from keras import backend as K  # for tf.keras, use: from tensorflow.keras import backend as K

def recall(y_true, y_pred):
    # Recall = TP / (TP + FN)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())

def precision(y_true, y_pred):
    # Precision = TP / (TP + FP)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())

def specificity(y_true, y_pred):
    # Specificity (true negative rate) = TN / (TN + FP)
    tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
    fp = K.sum(K.round(K.clip((1 - y_true) * y_pred, 0, 1)))
    return tn / (tn + fp + K.epsilon())

def negative_predictive_value(y_true, y_pred):
    # NPV = TN / (TN + FN)
    tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
    fn = K.sum(K.round(K.clip(y_true * (1 - y_pred), 0, 1)))
    return tn / (tn + fn + K.epsilon())

def f1(y_true, y_pred):
    # F1 is the harmonic mean of precision and recall
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))

def fbeta(y_true, y_pred, beta=2):
    # F-beta weights recall beta times as much as precision,
    # computed per sample (axis=1) and then averaged
    y_pred = K.clip(y_pred, 0, 1)

    tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)), axis=1)
    fp = K.sum(K.round(K.clip(y_pred - y_true, 0, 1)), axis=1)
    fn = K.sum(K.round(K.clip(y_true - y_pred, 0, 1)), axis=1)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    num = (1 + beta ** 2) * (p * r)
    den = beta ** 2 * p + r + K.epsilon()
    return K.mean(num / den)

def matthews_correlation_coefficient(y_true, y_pred):
    # MCC uses all four confusion-matrix cells and ranges from -1 to +1
    tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
    fp = K.sum(K.round(K.clip((1 - y_true) * y_pred, 0, 1)))
    fn = K.sum(K.round(K.clip(y_true * (1 - y_pred), 0, 1)))

    num = tp * tn - fp * fn
    den = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    return num / K.sqrt(den + K.epsilon())

def equal_error_rate(y_true, y_pred):
    # EER: raise the decision threshold in steps of 0.001 until the
    # false positive rate drops below the false negative rate.
    # tf.count_nonzero was renamed tf.math.count_nonzero in newer TF versions.
    n_imp = tf.math.count_nonzero(tf.equal(y_true, 0), dtype=tf.float32) + tf.constant(K.epsilon())
    n_gen = tf.math.count_nonzero(tf.equal(y_true, 1), dtype=tf.float32) + tf.constant(K.epsilon())

    scores_imp = tf.boolean_mask(y_pred, tf.equal(y_true, 0))
    scores_gen = tf.boolean_mask(y_pred, tf.equal(y_true, 1))

    loop_vars = (tf.constant(0.0), tf.constant(1.0), tf.constant(0.0))
    cond = lambda t, fpr, fnr: tf.greater_equal(fpr, fnr)
    body = lambda t, fpr, fnr: (
        t + 0.001,
        tf.divide(tf.math.count_nonzero(tf.greater_equal(scores_imp, t), dtype=tf.float32), n_imp),
        tf.divide(tf.math.count_nonzero(tf.less(scores_gen, t), dtype=tf.float32), n_gen)
    )
    t, fpr, fnr = tf.while_loop(cond, body, loop_vars, back_prop=False)

    return (fpr + fnr) / 2
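As a quick sanity check, you can evaluate the metrics directly on constant tensors. This is a minimal sketch assuming TF 2.x eager execution and the tf.keras backend; the `recall` definition is repeated so the snippet is self-contained:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# Same recall definition as above, repeated so this snippet runs on its own
def recall(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())

y_true = tf.constant([1.0, 1.0, 0.0, 0.0])
y_pred = tf.constant([0.9, 0.2, 0.1, 0.8])  # recovers 1 of the 2 true positives

print(float(recall(y_true, y_pred)))  # ≈ 0.5
```

In training you would simply pass the functions to `model.compile(..., metrics=[recall, precision, f1])`, and Keras reports them per batch and per epoch.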

Almost all the metrics in the code are described in the article previously mentioned. Therefore, you can find a detailed explanation there.

#keras #deep-learning #metrics #classification #tensorflow


Keras vs. Tensorflow - Difference Between Tensorflow and Keras

Keras and TensorFlow are two very popular deep learning frameworks, and the ones most widely used by deep learning practitioners. Both have large community support, and together they account for a major share of deep learning production systems.

Which framework is better for us then?

This blog focuses on Keras vs. TensorFlow. There are some differences between the two that will help you choose between them, and we will give you better insight into both frameworks.

What is Keras?

Keras is a high-level API built on top of a backend engine, which may be TensorFlow, Theano, or CNTK. It lets you build neural networks without worrying about the backend implementation of tensors and optimization methods.

Fast prototyping allows for more experiments: using Keras, developers can turn their algorithms into results in less time. It provides an abstraction over lower-level computations.

Major Applications of Keras

  • Keras performs smoothly on both CPU and GPU.
  • Keras provides modularity, flexibility, and extensibility, and is well suited to innovation and research.
  • The Pythonic nature of Keras makes it easy to explore and debug the code.

What is Tensorflow?

TensorFlow is a framework designed by Google for the deep learning developer community. Its aim is to make deep learning applications accessible to everyone. It is an open-source library available on GitHub and one of the most popular libraries for experimenting with deep learning. TensorFlow owes its popularity to the ease of building and deploying neural network models.

Its major area of focus is numerical computation, and it was built with processing power in mind, so TensorFlow applications can run on almost any kind of computer.

Major applications of Tensorflow

  • From mobiles to embedded devices and distributed servers, TensorFlow runs on all platforms.
  • Enterprises use TensorFlow to solve real-world and real-time problems such as image analysis, robotics, data generation, and NLP.
  • Developers use TensorFlow to implement language translation tools and skin cancer detection.
  • Major projects using TensorFlow include Google Translate, video detection, and image recognition.

#keras tutorials #keras vs tensorflow #keras #tensorflow



Logging TensorFlow(Keras) metrics to Azure ML Studio in realtime

A real-time approach using a custom Keras callback.

Training a TensorFlow/Keras model on Azure’s Machine Learning Studio can save a lot of time, especially if you don’t have your own GPU or your dataset is large. It seems that there should be an easy way to track your training metrics in Azure ML Studio’s dashboard. Well, there is! It just requires a short custom Keras callback.
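The idea can be sketched as follows. This is a minimal sketch, not the article's exact callback: it assumes the `Run.log(name, value)` API from the azureml-core SDK, injected here as a plain `log_fn` so the class can be exercised offline:

```python
import tensorflow as tf

class LogToAzure(tf.keras.callbacks.Callback):
    """Sends each epoch's metrics to Azure ML Studio.

    log_fn(name, value) would typically be Run.get_context().log
    from the azureml-core SDK (an assumption; not shown here)."""

    def __init__(self, log_fn):
        super().__init__()
        self.log_fn = log_fn

    def on_epoch_end(self, epoch, logs=None):
        # `logs` holds the metric values Keras computed for this epoch
        for name, value in (logs or {}).items():
            self.log_fn(name, float(value))
```

Passing an instance via `model.fit(..., callbacks=[LogToAzure(run.log)])` would then push every metric (loss, accuracy, any custom metric) to the dashboard at the end of each epoch.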

If you are new to training TensorFlow models on Azure, take a look at my article “Train on Cloud GPUs with Azure Machine Learning SDK for Python.” It starts from the beginning and implements an entire training workflow from scratch. This post, however, assumes you know the basics and will only focus on the tools needed to log your metrics to Azure.

There is a working code example that demonstrates the tools in this article in the examples folder of the GitHub repository for this project. The callback itself is in the log_to_azure.py file.

#python #azure #tensorflow #keras #azure-machine-learning #logging tensorflow(keras) metrics to azure ml studio in realtime

Keras Tutorial - Ultimate Guide to Deep Learning - DataFlair

Welcome to DataFlair’s Keras Tutorial. This tutorial will introduce you to everything you need to know to get started with Keras. You will discover the characteristics, features, and various other properties of Keras. This article also explains the different neural network layers and the pre-trained models available in Keras. You will see how Keras makes it easier to experiment with new neural network architectures, and how it empowers you to implement new ideas quickly and efficiently.

Keras Tutorial

Introduction to Keras

Keras is an open-source deep learning framework developed in Python. Developers favor Keras because it is user-friendly, modular, and extensible, and it enables fast experimentation with neural networks.

Keras is a high-level API and uses TensorFlow, Theano, or CNTK as its backend. It provides a very clean and easy way to create deep learning models.

Characteristics of Keras

Keras has the following characteristics:

  • It is simple to use and consistent. Since we describe models in Python, the code is compact, easy to write, and easy to debug.
  • Keras follows the principle of minimal structure: it tries to minimize the user actions required for common use cases.
  • Keras allows us to use multiple backends, provides GPU support on CUDA, and allows us to train models on multiple GPUs.
  • It offers a consistent API that provides useful feedback when an error occurs.
  • Using Keras, you can customize the functionality of your code to a great extent. Even small customizations make a big difference because these functionalities are deeply integrated with the low-level backend.

Benefits of using Keras

The major benefits of using Keras over other deep learning frameworks are:

  • The simple API structure of Keras is designed for both new developers and experts.
  • The Keras interface is very user-friendly and well optimized for general use cases.
  • In Keras, you can write custom blocks to extend it.
  • Keras is the second most popular deep learning framework after TensorFlow.
  • TensorFlow also provides a Keras implementation via its tf.keras module, so you can access all the functionality of Keras within TensorFlow through tf.keras.

Keras Installation

Before installing Keras, you should have one of its backends installed; we recommend TensorFlow. Install TensorFlow and Keras using the pip Python package installer.

Starting with Keras

The basic data structure of Keras is the model, which defines how to organize layers. The simplest type of model is the Sequential model, a linear stack of layers. For more flexible architectures, Keras provides the Functional API, which allows models with multiple inputs and multiple outputs.

Keras Sequential model

Keras Functional API

The Functional API allows you to define more complex models, such as multi-input and multi-output networks.
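For illustration, here is a minimal sketch of the same tiny network built both ways (the layer sizes are arbitrary, chosen just for this example):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sequential model: a linear stack of layers
seq_model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Functional API: wire tensors explicitly; this style also supports
# multiple inputs/outputs and shared layers
inputs = tf.keras.Input(shape=(8,))
x = layers.Dense(32, activation="relu")(inputs)
outputs = layers.Dense(1, activation="sigmoid")(x)
func_model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

Both models accept 8-feature inputs and produce a single sigmoid output; the Sequential form is shorter, while the Functional form generalizes to non-linear topologies.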

#keras tutorials #introduction to keras #keras models #keras tutorial #layers in keras #why learn keras

Custom Layer in TensorFlow using Keras API | Custom Dense Layer in TensorFlow Keras | Deep Learning

In this video, we will learn how to create custom layers in TensorFlow using the Keras API. For this tutorial, we are going to create a custom Dense layer by extending the tf.keras.layers.Layer class. For a fair comparison, we first build a simple MNIST classifier using the regular Dense layer from the Keras API, and then replace it with the CustomDense layer.
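The subclassing pattern looks roughly like this. This is a sketch of such a layer, not the code from the video (see the linked repository for the author's version):

```python
import tensorflow as tf

class CustomDense(tf.keras.layers.Layer):
    """A from-scratch Dense layer: y = activation(x @ w + b)."""

    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # Weights are created lazily, once the input size is known
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(
            shape=(self.units,), initializer="zeros", trainable=True)

    def call(self, inputs):
        return self.activation(tf.matmul(inputs, self.w) + self.b)

layer = CustomDense(4, activation="relu")
out = layer(tf.ones((2, 3)))  # output has shape (2, 4)
```

Because it subclasses `tf.keras.layers.Layer`, such a layer can be dropped into a Sequential or Functional model exactly like the built-in `Dense`.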

CODE: https://github.com/nikhilroxtomar/Custom-Layer-in-TensorFlow-using-Keras-API

Subscribe: https://www.youtube.com/channel/UClkqp31PHke-f8b8mjiiY-Q

#tensorflow #keras