Using Fact-Based Modelling to Kickstart One-Shot Learning

In an earlier article, Fact-Based AI — Improving on a Knowledge Graph, I set out a vision for Fact-Based Modelling’s future in AI and gave some background knowledge to digest. Here we get straight to the facts.

It has become apparent within AI research that Machine Learning (ML), Deep Learning (DL), and Neural Networks in general are not mechanisms that lend themselves readily to “one-shot learning”.

Neural Networks, in general, must be trained on large sets of training data, resulting in a mechanism that, when given live data resembling the training set, provides a probabilistic result.

Train a suitable neural network on a set of images of pandas, then show it a new image of a panda, and the network will give a probability for what it believes the new image to be; hopefully that prediction is ‘a panda’.
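
To make the probabilistic nature of that prediction concrete, here is a minimal sketch in Python (using NumPy). The class names and output scores are hypothetical, standing in for whatever a trained network would actually produce for a new image:

```python
import numpy as np

# Minimal sketch: a trained classifier does not answer "panda" outright;
# it emits a score per class, which a softmax turns into probabilities.
# The class names and scores below are purely illustrative.
class_names = ["panda", "red panda", "raccoon"]
scores = np.array([4.1, 1.3, 0.2])   # hypothetical network outputs for one image

probabilities = np.exp(scores) / np.sum(np.exp(scores))
prediction = class_names[int(np.argmax(probabilities))]

for name, p in zip(class_names, probabilities):
    print(f"{name}: {p:.2f}")
print("Prediction:", prediction)      # hopefully 'panda' (about 0.93 here)
```

The network never asserts “this is a panda” as a fact; it only grades the possibilities, which is the behaviour the rest of this article contrasts with one-shot, rule-like learning.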

The less training data provided to the neural network, no matter how optimised its inner structure, the less chance that it will provide a favourable result. That is, you cannot reliably provide a suitable neural network with a picture of just one panda and expect it to recognise a different picture of a panda. In general, it cannot learn a concept, or set of rules, in one shot.

This contrasts sharply with human learning, in which a person can readily grasp and apply knowledge after being introduced to a problem space only once. This holds across a vast range of problem spaces.

One-shot learning is a recognised problem for Machine Learning and Deep Learning, and experts in the field are working on ways to overcome that problem. It seems unlikely to me that ML/DL strategies will get to Artificial General Intelligence (AGI) without some form of one-shot learning.

A problem within ML/DL approaches is that data, by way of accumulated weightings, is passed through a network of nodes where little explicit logic is applied to the data at each node; rather, statistics-based math is applied to the weighted values the node receives. The result of that statistical math over the input data is output graded by probabilities, providing a probabilistic logic over the input data.
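
As a rough illustration of that node-level arithmetic, here is a minimal sketch only; the weights, inputs, and two-class output below are made up for illustration and do not come from any particular model or framework:

```python
import numpy as np

# Sketch of the per-node math described above: each node receives weighted
# values, accumulates them, and applies a statistical/nonlinear transform.
# No explicit symbolic logic is applied to the data itself.

def node(inputs, weights, bias):
    # Weighted sum of the incoming values, squashed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

x = np.array([0.7, 0.1, 0.4])                    # incoming data values
hidden = np.array([node(x, np.array([0.5, -1.2, 0.8]), 0.1),
                   node(x, np.array([-0.3, 0.9, 0.4]), -0.2)])

# Output layer: the accumulated weightings are graded into probabilities.
logits = hidden @ np.array([[1.5, -0.5], [-1.0, 1.0]])
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs)                                      # one probability per class
```

Every step is arithmetic over weightings; the “reasoning” is entirely statistical, which is why the output is a probability distribution rather than a learned rule.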

