Bhavya Kailkhura
Lawrence Livermore National Laboratory
61 Publications
12 H-index
372 Citations
Publications (61), sorted newest first
#1 Chengxi Li (THU: Tsinghua University)
#2 Gang Li (THU: Tsinghua University), H-index: 18
Last: Pramod K. Varshney (SU: Syracuse University), H-index: 65
(4 authors)
The problem of detecting a high-dimensional signal based on compressive measurements in the presence of an eavesdropper (Eve) is studied in this paper. We assume that a large number of sensors collaborate to detect the presence of sparse signals while the Eve has access to all the information transmitted by the sensors to the fusion center (FC). A strategy to ensure secrecy that has been used in the literature is the injection of artificial noise to the raw observations of some of the nodes. How...
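The noise-injection strategy mentioned in the abstract can be sketched in a toy form. This is an illustrative Gaussian setup, not the paper's exact scheme: the network size, corruption rate, noise level, and energy-detector statistics below are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, dim = 20, 50                  # hypothetical network size

# Sparse high-dimensional signal observed through compressive measurements.
signal = np.zeros(dim)
signal[rng.choice(dim, 5, replace=False)] = 1.0
A = rng.standard_normal((n_sensors, dim)) / np.sqrt(dim)
y = A @ signal + 0.1 * rng.standard_normal(n_sensors)

# Secrecy strategy: a random subset of nodes injects artificial noise before
# transmitting. The fusion center (FC) knows which nodes are corrupted and
# ignores them; the eavesdropper (Eve) sees all transmissions and cannot.
corrupted = rng.random(n_sensors) < 0.3
transmitted = y + np.where(corrupted, 5.0 * rng.standard_normal(n_sensors), 0.0)

fc_stat = np.mean(transmitted[~corrupted] ** 2)  # energy detector at the FC
eve_stat = np.mean(transmitted ** 2)             # Eve's corrupted statistic
```

The point of the construction is the information asymmetry: only the FC can condition on which transmissions carry artificial noise.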
In this work, we propose an introspection technique for deep neural networks that relies on a generative model to instigate salient editing of the input image for model interpretation. Such modification provides the fundamental interventional operation that allows us to obtain answers to counterfactual inquiries, i.e., what meaningful change can be made to the input image in order to alter the prediction. We demonstrate how to reveal interesting properties of the given classifiers by utilizing t...
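The counterfactual inquiry described above can be illustrated in miniature. This sketch replaces the deep network and generative model with a hand-set logistic classifier and edits the input directly; the weights, learning rate, and step count are all hypothetical.

```python
import numpy as np

# Toy "classifier": a logistic model on 2-D inputs stands in for the DNN,
# and the edit is applied in input space rather than via a generative model.
w, b = np.array([1.5, -2.0]), 0.25

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Counterfactual inquiry: what change to x flips the prediction to class 1?
x = np.array([-1.0, 0.5])           # initially predicted class 0
delta = np.zeros_like(x)
lr = 0.5
for _ in range(200):
    p = predict(x + delta)
    grad = (p - 1.0) * w            # gradient of the loss toward class 1
    delta -= lr * grad              # edit the input along that direction
counterfactual = x + delta
```

In the paper's setting the edit is constrained to be semantically meaningful by the generative model; here `delta` is unconstrained, which is the simplification.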
#1 Swatantra Kafle (SU: Syracuse University), H-index: 1
#2 Vipul Gupta (University of California, Berkeley), H-index: 4
Last: Pramod K. Varshney (SU: Syracuse University), H-index: 65
(5 authors)
In this paper, we study the problem of joint sparse support recovery with 1-bit quantized compressive measurements in a distributed sensor network. Multiple nodes in the network are assumed to observe sparse signals having the same but unknown sparse support. Each node quantizes its measurement vector element-wise to 1 bit. First, we consider that all the quantized measurements are available at a central fusion center. We derive performance bounds for sparsity pattern recovery using 1-bit quantized ...
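The measurement model in the abstract can be set up in a few lines. This is only the sign-quantization setup plus a naive correlation-based support score at the fusion center; the problem sizes are hypothetical and the recovery rule is not the paper's algorithm or bounds.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, m, dim, k = 10, 40, 30, 3

# All nodes observe sparse signals sharing one unknown support.
support = np.sort(rng.choice(dim, k, replace=False))
signals = np.zeros((n_nodes, dim))
signals[:, support] = rng.standard_normal((n_nodes, k)) + 3.0

A = rng.standard_normal((m, dim)) / np.sqrt(m)   # shared measurement matrix
y = signals @ A.T                                # real-valued measurements
bits = np.sign(y)                                # element-wise 1-bit quantization

# Fusion center: aggregate correlations of the sign measurements with the
# columns of A and keep the k highest-scoring indices as the support estimate.
scores = np.abs(bits @ A).sum(axis=0)
estimate = np.sort(np.argsort(scores)[-k:])
```

Only the signs of the measurements reach the fusion center, which is what makes the support-recovery bounds in the paper non-trivial.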
1 Citation
#1 Ankur Mallick, H-index: 2
Last: T. Yong-Jin Han, H-index: 1
(5 authors)
Gaussian Processes (GPs) with an appropriate kernel are known to provide accurate predictions and uncertainty estimates even with very small amounts of labeled data. However, GPs are generally unable to learn a good representation that can encode intricate structures in high dimensional data. The representation power of GPs depends heavily on kernel functions used to quantify the similarity between data points. Traditional GP kernels are not very effective at capturing similarity between high di...
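The claim that GPs give useful uncertainty estimates from very little data can be checked with a minimal exact-GP sketch. This uses a plain RBF kernel on 1-D toy data, i.e. exactly the kind of traditional kernel the abstract says is limited in high dimensions; the data and length-scale are arbitrary.

```python
import numpy as np

def rbf(x1, x2, ls=1.0):
    # squared-exponential (RBF) kernel on 1-D inputs
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Tiny labeled set: exact GP regression still yields calibrated uncertainty.
X = np.array([-2.0, 0.0, 1.5])
y = np.sin(X)
Xs = np.linspace(-3.0, 3.0, 7)          # query points

K = rbf(X, X) + 1e-6 * np.eye(len(X))   # jitter for numerical stability
Ks = rbf(Xs, X)

mean = Ks @ np.linalg.solve(K, y)
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior standard deviation collapses at the training points and grows away from them; the paper's concern is that `rbf` on raw pixels or molecules is a poor similarity measure, motivating learned representations inside the kernel.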
#1 Bhavya Kailkhura (LLNL: Lawrence Livermore National Laboratory), H-index: 12
#2 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-index: 14
Last: Peer-Timo Bremer (LLNL: Lawrence Livermore National Laboratory), H-index: 26
(4 authors)
This paper provides a general framework to study the effect of sampling properties of training data on the generalization error of the learned machine learning (ML) models. Specifically, we propose a new spectral analysis of the generalization error, expressed in terms of the power spectra of the sampling pattern and the function involved. The framework is built in the Euclidean space using Fourier analysis and establishes a connection between some high dimensional geometric objects and optimal ...
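The power spectrum of a sampling pattern mentioned in the abstract is straightforward to compute. The sketch below is not the paper's analysis, just a periodogram comparing two illustrative 1-D patterns; the sample size, frequency range, and jittered/random contrast are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, res = 64, 256                     # sample size and number of frequencies

# Two sampling patterns on [0, 1): jittered-grid vs. uniform random.
jittered = (np.arange(n) + rng.random(n)) / n
uniform = rng.random(n)

def power_spectrum(pts, res):
    # periodogram of a point set: |sum_j exp(-2*pi*i*k*x_j)|^2 / n
    k = np.arange(1, res + 1)
    F = np.exp(-2j * np.pi * k[:, None] * pts[None, :]).sum(axis=1)
    return np.abs(F) ** 2 / len(pts)

Pj = power_spectrum(jittered, res)
Pu = power_spectrum(uniform, res)
```

Jittered sampling suppresses low-frequency power relative to i.i.d. random sampling, the kind of spectral property such frameworks connect to lower error when learning smooth functions.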
2019, in ICCV (International Conference on Computer Vision)
#1 Pu Zhao (NU: Northeastern University), H-index: 4
#2 Sijia Liu (IBM), H-index: 11
Last: Xue Lin (NU: Northeastern University), H-index: 19
(7 authors)
1 Citation
Dec 3, 2018 in NeurIPS (Neural Information Processing Systems)
#1 Sijia Liu (IBM), H-index: 11
#2 Bhavya Kailkhura (LLNL: Lawrence Livermore National Laboratory), H-index: 12
Last: Lisa Amini (IBM), H-index: 13
(6 authors)
As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying. This paper addresses these challenges by presenting: a) a comprehensive theoretical analysis of variance reduced zeroth-order (ZO) optimization, b) a novel variance reduced ZO algorithm, called ZO-SVRG, and c) an experimental evaluation of our approach in the context of two compelling applications, black-box chemical material classi...
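The basic ingredient of zeroth-order methods like the ZO-SVRG algorithm above is a gradient estimate built purely from function queries. The sketch below shows the standard two-point random-direction estimator and plain ZO gradient descent, not ZO-SVRG itself (there is no variance-reduction step); the objective, step size, smoothing radius, and direction count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    # black-box objective: the optimizer may only query function values
    return 0.5 * np.sum(x ** 2)

def zo_grad(f, x, mu=1e-3, n_dir=20):
    # two-point random-direction gradient estimator, the basic ZO ingredient
    g = np.zeros_like(x)
    for _ in range(n_dir):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dir

x = np.full(10, 5.0)                 # start far from the optimum at 0
for _ in range(300):
    x -= 0.05 * zo_grad(f, x)        # plain ZO gradient descent
```

The estimator's variance is what ZO-SVRG targets: averaging more directions reduces it at the cost of more queries, and variance reduction aims to get faster convergence without that extra query cost.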
19 Citations
#1 Thomas A. Hogan, H-index: 1
#2 Bhavya Kailkhura, H-index: 12
1 Citation
#2 Bhavya Kailkhura, H-index: 12
Last: Peer-Timo Bremer, H-index: 26
(4 authors)
Sampling one or more effective solutions from large search spaces is a recurring idea in computer vision, and sequential optimization has become the prevalent solution. Typical examples are hyper-parameter optimization in deep learning and sample mining in predictive modeling tasks. Existing solutions attempt to trade off between global exploration and local exploitation, wherein the initial exploratory sample is critical to their success. While discrepancy-based samples have become the ...
#1 Gowtham Muniraju, H-index: 3
#2 Bhavya Kailkhura, H-index: 12
Last: Andreas Spanias, H-index: 28
(6 authors)
Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning, and sequential optimization has become a popular solution. Typical examples include data summarization, sample mining for predictive modeling and hyper-parameter optimization. Existing solutions attempt to adaptively trade off between global exploration and local exploitation, wherein the initial exploratory sample is critical to their success. While discrepancy-based samples have become th...
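Both abstracts above contrast discrepancy-based initial samples with i.i.d. random ones. A minimal sketch of that contrast: a 2-D Halton low-discrepancy design versus uniform random points, compared with a crude space-covering measure. The sequence choice, sample size, and grid-based gap metric are illustrative assumptions, not the papers' constructions.

```python
import numpy as np

def halton(n, base):
    # radical-inverse (Halton) low-discrepancy sequence in 1-D
    seq = np.zeros(n)
    for i in range(n):
        f, r, x = 1.0, 0.0, i + 1
        while x > 0:
            f /= base
            r += f * (x % base)
            x //= base
        seq[i] = r
    return seq

n = 64
rng = np.random.default_rng(5)
low_disc = np.column_stack([halton(n, 2), halton(n, 3)])  # space-filling design
random_pts = rng.random((n, 2))                           # i.i.d. baseline

def max_gap(pts, grid=16):
    # crude covering measure: farthest grid point from its nearest sample
    g = np.linspace(0.0, 1.0, grid)
    G = np.array(np.meshgrid(g, g)).reshape(2, -1).T
    d = np.linalg.norm(G[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1).max()
```

A space-filling initial sample leaves no large unexplored regions, which is why the choice of the exploratory sample matters so much for the subsequent exploit phase of sequential optimization.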