Rushil Anirudh
Lawrence Livermore National Laboratory
43 Publications
6 H-index
195 Citations
Publications (48), newest first
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-Index: 10
#2 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Valerio Pascucci (UofU: University of Utah), H-Index: 45
(16 authors in total)
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpre...
#1 Bogdan Kustowski (LLNL: Lawrence Livermore National Laboratory)
#2 Jim Gaffney (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
(6 authors in total)
We adopt a technique, known in the machine learning community as transfer learning, to reduce the bias of computer simulations using very sparse experimental data. Unlike Bayesian calibration, which is commonly used to estimate the simulation bias, the transfer learning approach discussed in this article involves calculating an artificial neural network surrogate model of the simulations. Assuming that the simulation code correctly predicts the trends in the experimental data but it is subjec...
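As a rough illustration of the transfer-learning recipe sketched in the abstract above, the snippet below pretrains a small neural-network surrogate on plentiful simulation data and then fine-tunes only its output layer on a handful of experimental points. This is a minimal sketch under assumed shapes and synthetic data, not the authors' code or model.

```python
# Minimal sketch of surrogate transfer learning: pretrain on plentiful
# simulation data, then fine-tune only the last layer on sparse experiments.
# Layer sizes and the synthetic data below are illustrative placeholders.
import torch
import torch.nn as nn

def make_surrogate(n_inputs=5, n_outputs=3):
    return nn.Sequential(
        nn.Linear(n_inputs, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_outputs),
    )

def train(model, x, y, params, epochs, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Stage 1: pretrain on abundant simulation data (random stand-ins here).
x_sim, y_sim = torch.randn(10000, 5), torch.randn(10000, 3)
surrogate = make_surrogate()
train(surrogate, x_sim, y_sim, surrogate.parameters(), epochs=200)

# Stage 2: freeze the feature layers and retrain only the output layer on a
# handful of experimental shots, correcting the bias while keeping the
# trends learned from the simulations.
x_exp, y_exp = torch.randn(10, 5), torch.randn(10, 3)
for p in surrogate[:-1].parameters():
    p.requires_grad = False
train(surrogate, x_exp, y_exp, surrogate[-1].parameters(), epochs=500)
```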
#1 Sam Ade Jacobs (LLNL: Lawrence Livermore National Laboratory), H-Index: 2
#2 Jim Gaffney (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
(14 authors in total)
Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process. We present a novel tournament method to train traditional as well as generative adversarial networks built on LBANN, a scalable deep learning framework optimized for HPC systems. LBANN combines multiple levels of parallelism and exploits some of the world's largest supercomputers. We demonstrate our framework by crea...
1 Citation
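The tournament idea described above can be pictured, very loosely, with a serial toy version: several models train independently, and after each round the weaker half of the population is overwritten with copies of the stronger half. This is an assumed, simplified reading of the selection scheme; it is not the LBANN implementation and omits all of the HPC parallelism and the GAN-specific details.

```python
# Toy, serial illustration of tournament-style population training.
# Models, data, and the number of rounds are placeholders.
import copy
import torch
import torch.nn as nn

x_train, y_train = torch.randn(512, 8), torch.randn(512, 1)
x_val, y_val = torch.randn(128, 8), torch.randn(128, 1)

population = [nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
              for _ in range(4)]
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in population]
loss_fn = nn.MSELoss()

for _ in range(10):
    # Independent training phase for every member of the population.
    for model, opt in zip(population, optims):
        for _ in range(50):
            opt.zero_grad()
            loss_fn(model(x_train), y_train).backward()
            opt.step()
    # Tournament phase: rank by validation loss, clone winners over losers.
    with torch.no_grad():
        scores = [loss_fn(m(x_val), y_val).item() for m in population]
    order = sorted(range(len(population)), key=lambda i: scores[i])
    half = len(population) // 2
    for loser, winner in zip(order[half:], order[:half]):
        population[loser].load_state_dict(
            copy.deepcopy(population[winner].state_dict()))
```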
May 1, 2019 in ICASSP (International Conference on Acoustics, Speech, and Signal Processing)
#1 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
#2 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-Index: 14
Using predictive models to identify patterns that can act as biomarkers for different neuropathological conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that cha...
4 Citations
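One way to picture the contextual-graph idea from the abstract above is a toy label-propagation example in which edge weights combine imaging-feature similarity with agreement on a meta feature such as acquisition site. The data, the weighting rule, and the propagation scheme below are illustrative assumptions, not the paper's method.

```python
# Toy sketch: classify subjects on a graph whose edges mix feature
# similarity with meta-feature agreement, then propagate known labels.
import numpy as np

rng = np.random.default_rng(0)
n = 100
features = rng.normal(size=(n, 20))      # stand-in imaging features
site = rng.integers(0, 3, size=n)        # stand-in meta feature (e.g., site)
labels = rng.integers(0, 2, size=n).astype(float)
known = rng.random(n) < 0.3              # only 30% of labels are observed

# Edge weights: Gaussian kernel on features, boosted when meta data match.
d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
W *= 1.0 + 0.5 * (site[:, None] == site[None, :])
np.fill_diagonal(W, 0.0)

# Simple label propagation: repeatedly average neighbor labels,
# clamping the observed labels after every step.
P = W / W.sum(axis=1, keepdims=True)
y = np.where(known, labels, 0.5)
for _ in range(50):
    y = P @ y
    y[known] = labels[known]

pred = (y > 0.5).astype(int)
print("predicted positives among unlabeled subjects:", pred[~known].sum())
```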
May 1, 2019 in ICASSP (International Conference on Acoustics, Speech, and Signal Processing)
#1 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-Index: 14
#2 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
(4 authors in total)
Unsupervised dimension selection is an important problem that seeks to reduce the dimensionality of data while preserving its most useful characteristics. While dimensionality reduction methods are commonly used to construct low-dimensional embeddings, they produce feature spaces that are hard to interpret. Further, in applications such as sensor design, one needs to perform the reduction directly in the input domain, instead of constructing transformed spaces. Consequently, dimension selection (DS) aims to...
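To make the distinction between dimension selection and generic dimensionality reduction concrete, here is a small greedy sketch that keeps original input dimensions (rather than transformed features) whose pairwise distances best track distances in the full space. The selection criterion and the greedy search are placeholder choices, not the algorithm proposed in the paper.

```python
# Greedy dimension selection sketch: retain raw input dimensions whose
# pairwise distances correlate best with distances in the full space.
import numpy as np

def pairwise_sq_dists(X):
    return ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

def greedy_dimension_selection(X, k):
    n, d = X.shape
    target = pairwise_sq_dists(X).ravel()
    selected = []
    for _ in range(k):
        best_dim, best_score = None, -np.inf
        for j in range(d):
            if j in selected:
                continue
            cand = pairwise_sq_dists(X[:, selected + [j]]).ravel()
            score = np.corrcoef(cand, target)[0, 1]
            if score > best_score:
                best_dim, best_score = j, score
        selected.append(best_dim)
    return selected

X = np.random.default_rng(1).normal(size=(80, 12))
print(greedy_dimension_selection(X, k=4))   # indices of retained raw inputs
```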
#1 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Timo Bremer, H-Index: 4
(4 authors in total)
In the past few years, generative models like Generative Adversarial Networks (GANs) have dramatically advanced our ability to represent and parameterize high-dimensional, non-linear image manifolds. As a result, they have been widely adopted across a variety of applications, ranging from challenging inverse problems like image completion, to being used as a prior in problems such as anomaly detection and adversarial defense. A recurring theme in many of these applications is the notion of proje...
1 Citation
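The projection mentioned in the abstract is commonly posed as optimizing a latent code so that the generator's output matches a target image. The sketch below shows that generic formulation with an untrained placeholder generator; it is not tied to the specific projection scheme studied in the paper.

```python
# Generic GAN-inversion sketch: find z such that G(z) approximates a target.
# G is an untrained stand-in here; in practice it is a pretrained generator.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

target = torch.rand(1, img_dim) * 2 - 1        # stand-in image in [-1, 1]
z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    loss = ((G(z) - target) ** 2).mean()       # distance to the manifold
    loss.backward()
    opt.step()

projection = G(z).detach()                     # the point on the GAN manifold
```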
#1 Hyojin Kim (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
#2 Rushil Anirudh, H-Index: 6
Last: Kyle M. Champley, H-Index: 7
(4 authors in total)
Reconstruction of few-view x-ray Computed Tomography (CT) data is a highly ill-posed problem. Few-view acquisition is often used in applications that require a low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT. Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely deteriorated by artifacts and noise, especially when the number of x-ray projections is very low. This paper presents a deep network-driven approach to address extreme few-...
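A common deep-learning recipe for this setting (not necessarily the architecture used in the paper) trains a CNN to map artifact-heavy few-view reconstructions to reference full-view images. The sketch below uses random tensors as stand-ins for such paired reconstructions.

```python
# Sketch: a small CNN learns the residual between few-view and full-view
# reconstructions; the tensors below are random placeholders for real pairs.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

few_view = torch.randn(8, 1, 64, 64)    # artifact-heavy reconstructions
full_view = torch.randn(8, 1, 64, 64)   # matching reference images

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    residual = denoiser(few_view)                    # predicted artifact map
    loss = ((few_view - residual - full_view) ** 2).mean()
    loss.backward()
    opt.step()

cleaned = few_view - denoiser(few_view)              # corrected reconstruction
```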
#1 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Brian Spears, H-Index: 24
(4 authors in total)
Neural networks have become very popular in surrogate modeling because of their ability to characterize arbitrary, high-dimensional functions in a data-driven fashion. This paper advocates for the training of surrogates that are consistent with the physical manifold, i.e., predictions are always physically meaningful, and are cyclically consistent, i.e., the predictions of the surrogate, when passed through an independently trained inverse model, give back the original input parameters. ...
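The cyclic-consistency idea in the abstract can be sketched as a two-stage training loop: fit an inverse model, freeze it, and then train the forward surrogate with an extra penalty that its predictions map back to the original input parameters. The network sizes, data, and loss weighting below are assumptions for illustration only.

```python
# Sketch of cyclically consistent surrogate training under simplifying
# assumptions: frozen inverse model, placeholder data, fixed penalty weight.
import torch
import torch.nn as nn

x = torch.randn(2000, 5)                       # input parameters
y = torch.randn(2000, 3)                       # simulated observables

inverse_model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 5))
surrogate = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 3))
mse = nn.MSELoss()

# Step 1: independently train the inverse model y -> x, then freeze it.
opt_inv = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
for _ in range(300):
    opt_inv.zero_grad()
    mse(inverse_model(y), x).backward()
    opt_inv.step()
for p in inverse_model.parameters():
    p.requires_grad = False

# Step 2: train the forward surrogate with a cyclic-consistency penalty that
# encourages inverse_model(surrogate(x)) to recover x.
lam = 0.1
opt_fwd = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(300):
    opt_fwd.zero_grad()
    pred = surrogate(x)
    loss = mse(pred, y) + lam * mse(inverse_model(pred), x)
    loss.backward()
    opt_fwd.step()
```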