Shusen Liu
Lawrence Livermore National Laboratory
28 Publications
10 H-index
344 Citations
Publications (29)
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-Index: 10
#2 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Valerio Pascucci (UofU: University of Utah), H-Index: 45
(16 authors)
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpre...
#1 Sam Ade Jacobs (LLNL: Lawrence Livermore National Laboratory), H-Index: 2
#2 Jim Gaffney (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
(14 authors)
Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process. We present a novel tournament method to train traditional as well as generative adversarial networks built on LBANN, a scalable deep learning framework optimized for HPC systems. LBANN combines multiple levels of parallelism and exploits some of the world's largest supercomputers. We demonstrate our framework by crea...
1 Citation
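The excerpt does not spell out how the tournament works, so the following is only a minimal sketch of the general idea under stated assumptions: independently trained candidate models are paired off, scored, and the winners advance to the next round of training. All names here (tournament_round, train_step, score) are hypothetical and are not LBANN's API.

```python
import random
from typing import Callable, Dict, List

Model = Dict  # placeholder for whatever object holds a candidate's state

def tournament_round(candidates: List[Model],
                     score: Callable[[Model], float]) -> List[Model]:
    """Pair candidates at random and keep the better model of each pair:
    one elimination round of a training tournament."""
    random.shuffle(candidates)
    winners = [a if score(a) >= score(b) else b
               for a, b in zip(candidates[::2], candidates[1::2])]
    if len(candidates) % 2 == 1:
        winners.append(candidates[-1])  # odd candidate out gets a bye
    return winners

def tournament_train(candidates: List[Model],
                     train_step: Callable[[Model], None],
                     score: Callable[[Model], float],
                     rounds: int = 3) -> Model:
    """Alternate independent training with elimination rounds,
    returning the best surviving model."""
    for _ in range(rounds):
        for c in candidates:
            train_step(c)
        if len(candidates) > 1:
            candidates = tournament_round(candidates, score)
    return max(candidates, key=score)
```

In an HPC setting like the one described, each candidate would presumably train in parallel on its own set of nodes, with only the periodic scoring and selection requiring coordination.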
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-Index: 10
#2 R. Anirudh (LLNL: Lawrence Livermore National Laboratory)
Last: P.-T. Bremer (LLNL: Lawrence Livermore National Laboratory)
(4 authors)
#1 Shusen Liu, H-Index: 10
#2 Rushil Anirudh, H-Index: 6
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
(4 authors)
We present function preserving projections (FPP), a scalable linear projection technique for discovering interpretable relationships in high-dimensional data. Conventional dimension reduction methods aim to maximally preserve the global and/or local geometric structure of a dataset. However, in practice one is often more interested in determining how one or multiple user-selected response function(s) can be explained by the data. To intuitively connect the responses to the data, FPP constructs 2...
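The abstract does not include FPP's actual objective, so the sketch below is only a crude least-squares stand-in for the stated goal: a 2D linear projection whose axes relate the data to a user-selected response. The function name and the regression-based construction are assumptions, not the paper's method.

```python
import numpy as np

def fpp_like_projection(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return a (d, 2) orthonormal basis whose 2D projection tries to
    'explain' the response y; a rough stand-in for FPP's optimization.
    X: (n, d) data matrix, y: (n,) user-selected response values."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # Axis 1: least-squares direction relating the data to the response.
    w1, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    u1 = w1 / np.linalg.norm(w1)
    # Axis 2: fit what the first axis missed, then orthogonalize (Gram-Schmidt).
    w2, *_ = np.linalg.lstsq(Xc, yc - Xc @ w1, rcond=None)
    w2 -= (w2 @ u1) * u1
    u2 = w2 / np.linalg.norm(w2)
    return np.stack([u1, u2], axis=1)

# Usage: coords = (X - X.mean(axis=0)) @ fpp_like_projection(X, y)
# Coloring coords by y should reveal how the response varies over the data.
```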
#1 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-Index: 6
Last: Brian Spears, H-Index: 24
(5 authors)
There is significant interest in using modern neural networks for scientific applications due to their effectiveness in modeling highly complex, non-linear problems in a data-driven fashion. However, a common challenge is to verify the scientific plausibility or validity of outputs predicted by a neural network. This work advocates the use of known scientific constraints as a lens into evaluating, exploring, and understanding such predictions for the problem of inertial confinement fusion.
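As a toy illustration of using known constraints as such a lens, the sketch below scores a batch of predictions against a set of plausibility checks. The constraints shown (non-negative yield, energy budget) are invented placeholders; the paper's actual ICF constraints are not given in this excerpt.

```python
from typing import Callable, Dict, List

# Hypothetical physics checks standing in for the paper's ICF constraints.
CONSTRAINTS: Dict[str, Callable[[dict], bool]] = {
    "non_negative_yield": lambda p: p["yield"] >= 0.0,
    "energy_budget":      lambda p: p["energy_out"] <= p["energy_in"],
}

def constraint_report(predictions: List[dict]) -> Dict[str, float]:
    """Fraction of predictions violating each constraint: one simple
    lens on the scientific plausibility of a surrogate model."""
    n = max(len(predictions), 1)
    return {name: sum(not ok(p) for p in predictions) / n
            for name, ok in CONSTRAINTS.items()}

preds = [{"yield": 1.2, "energy_in": 5.0, "energy_out": 4.0},
         {"yield": -0.1, "energy_in": 5.0, "energy_out": 6.0}]
print(constraint_report(preds))
# {'non_negative_yield': 0.5, 'energy_budget': 0.5}
```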
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-Index: 10
#2 Zhimin Li, H-Index: 1
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
(6 authors)
With the recent advances in deep learning, neural network models have achieved state-of-the-art performance for many linguistic tasks in natural language processing. However, this rapid progress also brings enormous challenges. The opaque nature of a neural network model leads to hard-to-debug systems and difficult-to-interpret mechanisms. Here, we introduce a visualization system that, through a tight yet flexible integration between visualization elements and the underlying model, allows a us...
3 Citations
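The system itself is interactive, but the model-side hook such a tool needs is simple: expose per-token internal states for plotting. The PyTorch sketch below is an assumed, minimal example of that integration point, not the paper's implementation.

```python
import torch
import torch.nn as nn

class InspectableEncoder(nn.Module):
    """Tiny recurrent encoder that returns per-token hidden states,
    the raw material a model-visualization front end would display."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        states, _ = self.rnn(self.embed(token_ids))
        return states  # (batch, seq_len, dim): one vector per token

# A front end would project `states` to 2D (e.g., with PCA) and link
# each point back to its token for the kind of inspection described.
tokens = torch.randint(0, 1000, (1, 7))
print(InspectableEncoder()(tokens).shape)  # torch.Size([1, 7, 64])
```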
Oct 31, 2018 in EMNLP (Empirical Methods in Natural Language Processing)
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-Index: 10
#2 Tao Li (UofU: University of Utah), H-Index: 2
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
(6 authors)
6 Citations
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-Index: 10
#2 Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-Index: 26
Last: Valerio Pascucci, H-Index: 45
(7 authors)
Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distrib...
18 Citations
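To make the common practice the abstract refers to concrete, here is a minimal t-SNE example over word vectors. The vectors are random stand-ins; in practice they would come from a trained embedding model such as word2vec or GloVe.

```python
import numpy as np
from sklearn.manifold import TSNE

words = ["king", "queen", "man", "woman", "paris", "france"]
vectors = np.random.default_rng(0).normal(size=(len(words), 300))

# Reduce the 300-dimensional embedding space to 2D for plotting.
coords = TSNE(n_components=2, perplexity=2.0,
              random_state=0).fit_transform(vectors)
for word, (x, y) in zip(words, coords):
    print(f"{word:>8}: ({x:8.2f}, {y:8.2f})")
```

With real embeddings, nearby points in such a plot are often read as semantically related words, which is exactly the kind of inference whose reliability this line of work examines.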