
Learnt dynamics generalizes across tasks, datasets, and populations.

Usman Mahmood (Georgia State University), M. M. Rahman (Georgia State University), +3 authors, Sergey M. Plis (Georgia State University)
Abstract
Differentiating multivariate dynamic signals is a difficult learning problem, as the feature space may be large yet often only a few training examples are available. Traditional approaches to this problem either proceed from handcrafted features or require large datasets to combat the m >> n problem. In this paper, we show that the source of the problem---signal dynamics---can be used to our advantage and noticeably improve classification performance on a range of discrimination tasks when training data is scarce. We demonstrate that self-supervised pre-training guided by signal dynamics produces an embedding that generalizes across tasks, datasets, data collection sites, and data distributions. We perform an extensive evaluation of this approach on simulated data, a keyword detection task, and a range of functional neuroimaging datasets, where we show that a single embedding learnt on healthy subjects generalizes across a number of disorders, age groups, and datasets.
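To make the abstract's central idea concrete, the sketch below shows one plausible form of "self-supervised pre-training guided by signal dynamics": an encoder is trained on unlabeled multivariate windows with a contrastive (InfoNCE-style) objective that ties together temporally adjacent windows of the same recording, and the resulting embedding is later reused for a small supervised task. This is an illustrative assumption based on the mutual-information-maximization line of work the paper builds on, not the authors' exact architecture; the channel count, window length, network, and loss below are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's exact method): contrastive
# pre-training of a window encoder on unlabeled multivariate signals.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowEncoder(nn.Module):
    """1-D CNN mapping a (channels, time) window to a fixed-size embedding."""
    def __init__(self, n_channels=53, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(128, emb_dim)

    def forward(self, x):            # x: (batch, channels, time)
        h = self.net(x).squeeze(-1)  # (batch, 128)
        return self.proj(h)          # (batch, emb_dim)

def info_nce(anchors, positives, temperature=0.1):
    """Each anchor must match its own positive against all others in the batch."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(a.size(0))         # diagonal is the correct match
    return F.cross_entropy(logits, targets)

encoder = WindowEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Stand-in data: two temporally adjacent windows from the same recordings
# act as anchor/positive pairs; other recordings in the batch are negatives.
win_a = torch.randn(32, 53, 140)
win_b = torch.randn(32, 53, 140)

loss = info_nce(encoder(win_a), encoder(win_b))
opt.zero_grad()
loss.backward()
opt.step()
```

In the transfer setting the abstract describes, the pre-trained encoder would then be frozen or lightly fine-tuned, with only a small classification head trained on the scarce labeled examples for each downstream task.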