Self-supervised representation learning methods aim to learn useful and general representations from large amounts of unlabeled data, which can reduce the sample complexity of downstream supervised learning. These methods have been widely applied to domains such as computer vision (Oord et al., 2018; Hjelm et al., 2018; Chen et al., 2020; Grill et al., 2020), natural language processing (Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020), and speech processing (Schneider et al., 2019; Pascual et al., 2019b; Chung & Glass, 2020; Wang et al., 2020; Baevski et al., 2020). For sequence data, representation learning can force the model to recover the underlying dynamics from the raw inputs, so that the learnt representations remove irrelevant variability, embed rich context information, and become predictive of future states. The effectiveness of the representations depends on the self-supervised task, which injects inductive bias into learning, and the design of self-supervision has become an active research area.
The authors propose Deep Autoencoding Predictive Components (DAPC), a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. They encourage this latent structure by maximizing an estimate of the predictive information of the latent feature sequence, i.e., the mutual information between the past and future windows at each time step.
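To make the objective concrete, here is a minimal NumPy sketch of one common way to estimate predictive information: treat the latent sequence as stationary and Gaussian, stack consecutive past/future windows, and compute I(past; future) from log-determinants of their covariances. The function name and the toy AR(1) check are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def gaussian_predictive_information(z, T):
    """Estimate I(past; future) between consecutive length-T windows of the
    latent sequence z (shape: time x dim), under a stationary Gaussian
    approximation: PI = 0.5 * (logdet S_past + logdet S_future - logdet S_joint).
    (Illustrative sketch, not the authors' exact estimator.)"""
    N, d = z.shape
    # Stack every length-2T window into a flat vector of size 2*T*d.
    windows = np.stack([z[t:t + 2 * T].ravel() for t in range(N - 2 * T + 1)])
    cov = np.cov(windows, rowvar=False)   # joint covariance of (past, future)
    k = T * d
    s_past = cov[:k, :k]                  # covariance of the past window
    s_future = cov[k:, k:]               # covariance of the future window
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return 0.5 * (logdet(s_past) + logdet(s_future) - logdet(cov))

# Toy check: an AR(1) latent sequence is predictable, so its estimated PI
# should be clearly positive; i.i.d. noise should give PI near zero.
rng = np.random.default_rng(0)
n = 5000
ar = np.zeros((n, 2))
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + rng.normal(size=2)
noise = rng.normal(size=(n, 2))
print(gaussian_predictive_information(ar, T=3))     # substantially > 0
print(gaussian_predictive_information(noise, T=3))  # close to 0
```

Maximizing such a quantity over the encoder's parameters rewards latent sequences whose future windows are well predicted by their past, which is the structure DAPC aims for.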
#deep-learning #nlp #machine-learning #audio #computer-vision