Peer-Timo Bremer
Lawrence Livermore National Laboratory
202 Publications
26 H-index
2,739 Citations
Publications (206)
Newest
#1 Shusen Liu (LLNL: Lawrence Livermore National Laboratory), H-index: 10
#2 Rushil Anirudh (LLNL: Lawrence Livermore National Laboratory), H-index: 6
Last: Valerio Pascucci (UofU: University of Utah), H-index: 45
(16 authors in total)
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpre...
Topological approaches to data analysis can answer complex questions about the number, connectivity, and scale of intrinsic features in scalar data. However, the global nature of many topological structures makes their computation challenging at scale, and thus often limits the size of data that can be processed. One key quality to achieving scalability and performance on modern architectures is data locality, i.e., a process operates on data that resides in a nearby memory system, avoiding freq...
#1 Steve Petruzza (UofU: University of Utah), H-index: 3
#2 Giorgio Scorzelli (UofU: University of Utah), H-index: 10
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-index: 26
(11 authors in total)
#1 Francesco Di Natale (LLNL: Lawrence Livermore National Laboratory), H-index: 1
#2 Harsh Bhatia (LLNL: Lawrence Livermore National Laboratory), H-index: 7
Last: Helgi I. Ingólfsson (LLNL: Lawrence Livermore National Laboratory), H-index: 22
(25 authors in total)
Computational models can define the functional dynamics of complex systems in exceptional detail. However, many modeling studies face seemingly incommensurate requirements: to gain meaningful insights into some phenomena requires models with high resolution (microscopic) detail that must nevertheless evolve over large (macroscopic) length- and time-scales. Multiscale modeling has become increasingly important to bridge this gap. Executing complex multiscale models on current petascale computers ...
4 Citations
Advances in simulation methodologies, code efficiency, and computing power have enabled larger, longer, and more-complicated biological membrane simulations. The resulting membranes can be highly complex and have curved geometries that greatly deviate from a simple planar state. Studying these membranes requires appropriate characterization of geometric and topological properties of the membrane surface before any local lipid properties, such as areas and curvatures, can be computed. We present ...
1 Citation
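The membrane work above rests on computing local geometric quantities on a curved, triangulated surface. As a rough illustration only (not the method of the paper), the sketch below estimates per-vertex area and discrete Gaussian curvature via the standard angle-deficit formula, assuming the surface is supplied as a vertex array `verts` and a triangle index array `faces`.

```python
# Minimal sketch (not the paper's method): per-vertex area and discrete
# Gaussian curvature via angle deficit on a triangulated surface mesh.
# Assumes `verts` is a (V, 3) float array and `faces` is a (F, 3) index array.
import numpy as np

def vertex_area_and_gaussian_curvature(verts, faces):
    V = len(verts)
    area = np.zeros(V)          # one-third of each incident triangle's area
    angle_sum = np.zeros(V)     # total interior angle around each vertex

    for i, j, k in faces:
        p, q, r = verts[i], verts[j], verts[k]
        # Triangle area from the cross product.
        tri_area = 0.5 * np.linalg.norm(np.cross(q - p, r - p))
        for v in (i, j, k):
            area[v] += tri_area / 3.0
        # Interior angle at each corner of the triangle.
        for v, a, b in ((i, q - p, r - p), (j, p - q, r - q), (k, p - r, q - r)):
            cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angle_sum[v] += np.arccos(np.clip(cos_t, -1.0, 1.0))

    # Angle-deficit (Gauss-Bonnet) estimate of Gaussian curvature,
    # valid for interior vertices of a closed surface.
    K = (2.0 * np.pi - angle_sum) / np.maximum(area, 1e-12)
    return area, K
```

On a closed unit-sphere mesh the estimated curvature should cluster near 1; on the interior of a flat patch, near 0.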
#2 Bindya Venkatesh, H-index: 1
Last: Peer-Timo Bremer, H-index: 26
(4 authors in total)
With rapid adoption of deep learning in high-regret applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties. While identifying all sources that account for the stochasticity of learned models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. In general, we require confidence intervals to be well-calibrated, reflect the true ...
1 Citation
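For the calibration requirement mentioned in the abstract above, a minimal sanity check is to compare nominal and empirical coverage of prediction intervals on held-out data. The sketch below is a generic illustration, not the paper's calibration method; the Gaussian intervals and the `empirical_coverage` helper are assumptions made for the toy example.

```python
# Minimal sketch of interval calibration (not the paper's approach): an
# interval predictor is well-calibrated at level alpha if roughly (1 - alpha)
# of held-out targets fall inside the predicted intervals.
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of targets that land inside [lower, upper]."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return np.mean((y_true >= lower) & (y_true <= upper))

# Toy example with assumed Gaussian predictive intervals.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)                  # held-out targets
mu, sigma = np.zeros(1000), np.ones(1000)  # predicted mean / spread
z = 1.96                                   # nominal 95% interval half-width
cov = empirical_coverage(y, mu - z * sigma, mu + z * sigma)
print(f"nominal 95%, empirical {cov:.1%}")  # well-calibrated if close to 95%
```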
#1 Sam Ade Jacobs (LLNL: Lawrence Livermore National Laboratory), H-index: 2
#2 Jim Gaffney (LLNL: Lawrence Livermore National Laboratory), H-index: 6
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-index: 26
(14 authors in total)
Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process. We present a novel tournament method to train traditional as well as generative adversarial networks built on LBANN, a scalable deep learning framework optimized for HPC systems. LBANN combines multiple levels of parallelism and exploits some of the world's largest supercomputers. We demonstrate our framework by crea...
1 Citation
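To make the tournament idea concrete, here is a heavily simplified, generic sketch of tournament-style training: candidates train independently, are ranked on a validation score each round, and losers are replaced by perturbed copies of the winners. This is not LBANN's API or the paper's exact scheme; the toy one-parameter "model" and the `train_step`, `score`, and `perturb` helpers are placeholders standing in for real distributed training and evaluation.

```python
# Generic sketch of tournament-style training (not LBANN's API). The toy
# "model" is a single parameter fit to a target, standing in for a network.
import copy
import random

TARGET = 3.0  # toy ground truth the models try to reach

def train_step(model, lr=0.1):
    # Noisy gradient step on (w - TARGET)^2, mimicking stochastic training.
    grad = 2.0 * (model["w"] - TARGET) + random.gauss(0.0, 0.5)
    model["w"] -= lr * grad

def score(model):
    return -(model["w"] - TARGET) ** 2  # higher is better (negative loss)

def perturb(model):
    model["w"] += random.gauss(0.0, 0.2)  # jitter the copied winner
    return model

def tournament(pop_size=8, rounds=5, steps=20):
    models = [{"w": random.uniform(-5, 5)} for _ in range(pop_size)]
    for _ in range(rounds):
        for m in models:                      # candidates train independently
            for _ in range(steps):
                train_step(m)
        models.sort(key=score, reverse=True)  # rank on the validation score
        survivors = models[: pop_size // 2]   # winners advance
        refill = [perturb(copy.deepcopy(random.choice(survivors)))
                  for _ in range(pop_size - len(survivors))]
        models = survivors + refill           # losers replaced by perturbed winners
    return max(models, key=score)

print(tournament())
```

In the paper's setting each candidate would be a full network trained on its own partition of compute nodes, with the tournament rounds coordinating which models survive.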
#1 Andrew G. Stephen, H-index: 26
#2 Animesh Agarwal (LANL: Los Alamos National Laboratory), H-index: 8
Last: Constance Agamasu, H-index: 1
(32 authors in total)
Driver mutations in KRAS occur in almost 30% of human tumors, primarily in pancreatic, colorectal and lung tumors. These mutations result in increased cell proliferation and survival predominantly mediated through the MAPK signaling pathway. MAPK signal transduction is initiated by the interaction of RAF kinase with active RAS at the plasma membrane. The precise molecular details of this process are currently unknown. The Frederick National Laboratory for Cancer Research has partnered with the D...