An autoencoder provides a means of describing melt pool images with fewer parameters; essentially, it is a form of data compression. However, as illustrated in Part 2, the latent vectors encoded by the autoencoder are tightly packed and form clusters, so the latent space is neither smooth nor continuous. Severe overfitting can occur in the latent space: two latent vectors that lie near each other may look very different when reconstructed. This happens because the loss function contains no regularisation term to control how the data should be compressed.
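To make this concrete, here is a minimal sketch of a vanilla autoencoder in PyTorch. The 784-dimensional flattened input, 2-D latent space, and layer sizes are illustrative assumptions, not the article's actual model; the point to notice is that the loss is reconstruction error alone, with nothing constraining the latent vectors:

```python
import torch
import torch.nn as nn

# A minimal sketch, assuming 28x28 grayscale images flattened to
# 784 values and a 2-D latent space (hypothetical dimensions).
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),          # latent vector z
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

# The loss is pure reconstruction error -- there is no term that
# controls how z is distributed, so nothing stops the encoder from
# scattering the latent vectors into tight, disjoint clusters.
model = Autoencoder()
x = torch.rand(16, 784)        # a dummy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)
```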

Put another way, the autoencoder compresses data for the sake of compression, so it does not necessarily preserve the structure of the data in the latent space.

This is an issue if we want to have both:

  1. Fewer parameters (achievable with an autoencoder), and
  2. High-quality data compression (an autoencoder is not optimised for this purpose; see the sketch after this list).
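For contrast, a variational autoencoder adds a KL-divergence regulariser to the same reconstruction loss, which is exactly the term the plain autoencoder lacks. This is a sketch reusing the illustrative dimensions from the block above, not the article's implementation:

```python
import torch
import torch.nn as nn

# A minimal sketch of the VAE objective, assuming the same hypothetical
# 784-dim inputs and 2-D latent space as the autoencoder above.
class VAEEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)        # mean of q(z|x)
        self.log_var = nn.Linear(128, latent_dim)   # log-variance of q(z|x)

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.log_var(h)

decoder = nn.Sequential(
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),
)

encoder = VAEEncoder()
x = torch.rand(16, 784)
mu, log_var = encoder(x)

# Reparameterisation trick: sample z while keeping gradients.
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

recon = nn.functional.mse_loss(decoder(z), x, reduction="sum")
# The KL term is the regulariser the plain autoencoder lacks: it pulls
# q(z|x) toward a standard normal, keeping the latent space smooth
# and continuous rather than clustered.
kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
loss = recon + kl
```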

