Data is everything. Especially in deep learning, the amount, type, and quality of data are the most important factors. Sometimes the labeled data we have is not enough, or the problem domain does not lend itself to large amounts of data, as in few-shot learning. In those situations, the algorithms become the deciding factor. In deep metric learning (DML), the loss functions we use matter most. What we try to do in DML is learn a set of features that distinguishes different image samples from each other while matching similar ones to each other.
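To make this concrete, here is a minimal sketch of one such loss, a triplet loss in PyTorch. The tiny embedding network, the input sizes, and the margin value are illustrative assumptions, not the setup used in my project.

```python
# Minimal sketch of a deep metric learning objective: a triplet loss
# pulls an anchor and a positive (same class) together in embedding
# space while pushing a negative (different class) at least a margin away.
import torch
import torch.nn as nn

embed = nn.Sequential(            # toy embedding network (assumption)
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
)
triplet_loss = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(8, 3, 32, 32)    # images of some class
positive = torch.randn(8, 3, 32, 32)  # same class as the anchor
negative = torch.randn(8, 3, 32, 32)  # a different class

loss = triplet_loss(embed(anchor), embed(positive), embed(negative))
loss.backward()  # gradients shape the embedding space
```

The key idea is that the network is never asked to name a class; it is only asked to place similar samples close together and dissimilar ones far apart.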

At work, I recently started a project where we need to leverage deep metric learning methods. We needed to extract meaningful features from images so that we could classify thousands of image samples accurately. While DML is the problem domain for the system we are building, I began thinking about other methods that can extract meaningful features from images. I looked back at the human brain.

I believe that the human brain is neither a classification model nor an autoencoder. The human brain is a DML system where each object, each scene, and each input is represented by a set of embeddings. But when we are babies, does someone come and show each object to us and say

