Vaughn Sauer

Discovering Symbolic Models From Deep Learning With Inductive Biases

In machine learning, the aim is to create algorithms that learn to predict a target output from examples. To achieve this, the learning algorithm is fed training examples that demonstrate the intended relation between input and output values. The learner, a model in this case, is then expected to approximate the correct output even for examples it has not seen during training. Without additional assumptions, this problem cannot be solved, since an unseen situation might have an arbitrary output value. The necessary assumptions about the nature of the target function are subsumed under the term inductive bias. Inductive bias, sometimes called learning bias, is the set of assumptions a learning algorithm uses to predict outputs for inputs it has not yet encountered.
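To make the idea concrete, here is a minimal sketch (not from the original article; the data, degrees, and test point are illustrative assumptions) that fits the same handful of points under two different assumptions about the target function. The stronger, linear bias is what lets the model extrapolate sensibly to an input it has never seen.

```python
# Minimal illustration of inductive bias: same training data, two assumptions.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 5)
y_train = 2 * x_train + 1 + rng.normal(scale=0.05, size=5)  # noisy y = 2x + 1

# Strong inductive bias: assume the target is linear (degree-1 polynomial).
linear_fit = np.polyfit(x_train, y_train, deg=1)

# Weak inductive bias: a degree-4 polynomial matches the samples exactly
# but carries no assumption about behaviour between or beyond them.
flexible_fit = np.polyfit(x_train, y_train, deg=4)

x_unseen = 2.0  # outside the training range
print("true value    :", 2 * x_unseen + 1)
print("linear bias   :", np.polyval(linear_fit, x_unseen))
print("flexible model:", np.polyval(flexible_fit, x_unseen))
```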

Algebraic expressions are usually compact, offer explicit interpretations, and generalize well. However, finding such expressions is a difficult task. Symbolic regression is one option: a supervised machine learning technique that assembles analytic functions to model a given dataset. It is traditionally driven by genetic algorithms, which are essentially brute-force search procedures that scale exponentially with the number of input variables and operators. Deep learning methods, on the other hand, allow efficient training of complex models on high-dimensional datasets, but these learned models are typically black boxes and difficult to interpret.
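As a quick illustration of genetic-programming-based symbolic regression, the sketch below uses gplearn's SymbolicRegressor to rediscover a simple hidden law from data. The target function, operator set, and hyperparameters are assumptions chosen for the example, not settings taken from the paper, and any other symbolic regression library (e.g. PySR) could be substituted.

```python
# Symbolic regression with genetic programming (assumes gplearn is installed).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 - 2 * X[:, 1] + 0.5          # hidden "law" to rediscover

est = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul"),
    parsimony_coefficient=0.01,                # penalise overly long expressions
    random_state=0,
)
est.fit(X, y)
print(est._program)                            # best symbolic expression found
```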

About the Symbolic Models Framework with Inductive Biases

The symbolic models framework is a general approach that leverages the advantages of both traditional deep learning and symbolic regression. Graph Networks (GNs or GNNs) are a natural example, as they have strong and well-motivated inductive biases that suit complex, explainable problems. The idea is to train a Graph Network whose internal parts operate on reduced-size (low-dimensional) representations, and then apply symbolic regression to fit each of those learned parts. The resulting symbolic expressions can be joined together, giving rise to an overall algebraic equation equivalent to the trained Graph Network. The framework has been applied to problems such as rediscovering force laws, rediscovering Hamiltonians, and a real-world astrophysical challenge, demonstrating drastic improvements in generalization while distilling plausible analytical expressions. Not only does it recover the injected closed-form physical laws in the Newtonian and Hamiltonian examples, it also derives a new, interpretable closed-form analytical expression that can be useful in astrophysics.
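To see what "operating on reduced-size representations" can look like, here is a plain-PyTorch sketch of a graph-network layer whose per-edge messages are squeezed to a few dimensions, roughly matching the dimensionality of a force vector. This is an illustrative assumption about the architecture, not the paper's exact code; the class name, layer sizes, and the choice of msg_dim are hypothetical. It is this bottleneck that symbolic regression would later target.

```python
# A graph-network layer with a low-dimensional message bottleneck (PyTorch).
import torch
import torch.nn as nn


class BottleneckGNLayer(nn.Module):
    def __init__(self, node_dim: int, hidden: int = 128, msg_dim: int = 3):
        super().__init__()
        # Edge model: (sender, receiver) features -> small message vector.
        self.edge_model = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, msg_dim),
        )
        # Node model: (node features, aggregated messages) -> updated node state.
        self.node_model = nn.Sequential(
            nn.Linear(node_dim + msg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, node_dim),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, node_dim]; edge_index: [2, num_edges] as (sender, receiver).
        senders, receivers = edge_index
        messages = self.edge_model(torch.cat([x[senders], x[receivers]], dim=-1))
        # Sum the messages arriving at each receiver node.
        agg = torch.zeros(x.size(0), messages.size(-1), device=x.device)
        agg.index_add_(0, receivers, messages)
        return self.node_model(torch.cat([x, agg], dim=-1))


# Toy usage: 4 particles, fully connected graph without self-edges.
x = torch.randn(4, 6)                          # e.g. position, velocity, mass, charge
pairs = [(i, j) for i in range(4) for j in range(4) if i != j]
edge_index = torch.tensor(pairs, dtype=torch.long).t()
layer = BottleneckGNLayer(node_dim=6)
print(layer(x, edge_index).shape)              # torch.Size([4, 6])
```

Because each message is only a handful of numbers, the learned edge function can later be treated as a small regression problem in its own right and replaced by a closed-form expression.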

Getting Started with Creating a Deep Learning Model With Inductive Bias

This demonstration will try to predict the dynamics of a simple particle system using a Graph Neural Network. We will also induce low dimensionality in the network's internal messages for clearer interpretation, predict the motion of newly introduced particles, and extract the learned model into a symbolic equation. The following implementation is inspired by the official demo of the symbolic models framework, whose GitHub repository can be explored here.
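As a starting point, the sketch below generates the kind of trajectory data such a demo can train on: a small 2-D spring system in which every pair of particles exerts a Hooke-type force. The number of particles, time step, and rest length are illustrative assumptions rather than the official demo's exact settings.

```python
# Toy data generator: 2-D particles connected by unit-rest-length springs.
import numpy as np

N_PARTICLES, DIM, STEPS, DT = 4, 2, 500, 0.01
rng = np.random.default_rng(0)

pos = rng.uniform(-1, 1, size=(N_PARTICLES, DIM))
vel = np.zeros((N_PARTICLES, DIM))
trajectory = []

for _ in range(STEPS):
    # Pairwise spring force on particle i from j: -(|r_ij| - 1) * unit(r_ij).
    diff = pos[:, None, :] - pos[None, :, :]              # [N, N, DIM]
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)   # [N, N, 1]
    dist = dist + np.eye(N_PARTICLES)[:, :, None]         # avoid divide-by-zero on the diagonal
    force = -((dist - 1.0) * diff / dist).sum(axis=1)     # net force per particle
    vel += DT * force                                      # unit masses
    pos += DT * vel
    trajectory.append(np.concatenate([pos, vel], axis=-1).copy())

data = np.stack(trajectory)                                # [STEPS, N, 2*DIM]
print(data.shape)                                          # (500, 4, 4)
```

Each time step gives per-particle positions and velocities that would serve as node features, with the spring law playing the role of the hidden pairwise interaction the trained Graph Network, and ultimately the extracted symbolic equation, should recover.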
