Drug Discovery with Graph Neural Networks: Learn How to Predict Toxicity with GNNs Using DeepChem — a Deep Learning Library for Life Sciences.
In this article, we will cover another crucial factor that determines whether a drug can pass safety tests — toxicity. In fact, toxicity accounts for 30% of rejected drug candidates, making it one of the most important factors to consider during the drug development stage [1]. Machine learning proves very beneficial here, as it can filter out toxic drug candidates in the early stages of the drug discovery process.
I will assume that you’ve read my previous article, which explains some of the topics and terms I will be using here :) Let’s get started!
The feature engineering part is pretty much the same as in part 1 of the series. To convert a molecular structure into an input for GNNs, we can either create molecular fingerprints or feed it into a graph neural network using an adjacency matrix and feature vectors. These features can be generated automatically by external software such as RDKit or DeepChem, so we don’t have to worry much about it.
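For a concrete (if minimal) illustration, the snippet below sketches both options. It assumes a recent DeepChem version whose featurizers accept SMILES strings directly, and the example molecules are arbitrary.

```python
import deepchem as dc

# Arbitrary example molecules as SMILES strings: ethanol, benzene, aspirin.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]

# Option 1: molecular fingerprints (fixed-length bit vectors).
fp_featurizer = dc.feat.CircularFingerprint(size=1024)
fingerprints = fp_featurizer.featurize(smiles)
print(fingerprints.shape)  # (3, 1024)

# Option 2: graph representation (per-atom feature vectors plus connectivity),
# suitable as input to a graph neural network.
graph_featurizer = dc.feat.ConvMolFeaturizer()
graphs = graph_featurizer.featurize(smiles)
print(graphs[0].get_atom_features().shape)  # (num_atoms, num_atom_features)
```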
The biggest difference is in the machine learning task itself. Toxicity prediction is a classification task, in contrast to solubility prediction, which is a regression task, as we might recall from the previous article. There are many different toxicity effects, such as carcinogenicity, respiratory toxicity, and irritation/corrosion [2]. This makes it a slightly more complicated challenge to work with, as we may also have to cope with imbalanced classes.
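To make the imbalance point concrete, here is a small sketch (not taken from the original article) of how it is commonly handled in DeepChem: per-task sample reweighting plus a ROC-AUC metric averaged over tasks. The toy arrays are purely illustrative, and it assumes a recent DeepChem version where `BalancingTransformer` takes the dataset directly.

```python
import numpy as np
import deepchem as dc

# Toy multi-task labels: 2 tasks, heavily skewed toward the negative class.
X = np.random.rand(100, 16)
y = np.zeros((100, 2))
y[:5, 0] = 1   # only 5 positives for task 0
y[:10, 1] = 1  # only 10 positives for task 1
dataset = dc.data.NumpyDataset(X, y)

# Reweight samples so positives and negatives contribute equally per task.
transformer = dc.trans.BalancingTransformer(dataset=dataset)
dataset = transformer.transform(dataset)

# ROC-AUC averaged over tasks is far more informative than plain accuracy
# when one class dominates.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
```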
Fortunately, toxicity datasets are often considerably bigger than their solubility counterparts. For example, the Tox21 dataset has ~12k training samples, whereas the Delaney dataset used for solubility prediction has only ~3k. This makes neural network architectures a more promising approach, as they can capture more hidden information.
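To tie this together, below is a hedged end-to-end sketch using DeepChem’s MoleculeNet loader for Tox21 and a multi-task graph convolutional classifier. The number of epochs is arbitrary, and the loader is assumed to apply its default transformers (including sample balancing) for you.

```python
import numpy as np
import deepchem as dc

# Load Tox21 with graph-convolution featurization; returns 12 toxicity tasks
# plus train/valid/test splits and the transformers applied to them.
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer="GraphConv")
train_dataset, valid_dataset, test_dataset = datasets

# Multi-task graph convolutional model with one classification head per assay.
model = dc.models.GraphConvModel(n_tasks=len(tasks), mode="classification")
model.fit(train_dataset, nb_epoch=50)  # epoch count chosen for illustration

# Evaluate with ROC-AUC averaged across the 12 tasks.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print("train:", model.evaluate(train_dataset, [metric], transformers))
print("test: ", model.evaluate(test_dataset, [metric], transformers))
```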