Gowtham Muniraju

Arizona State University

Ergodic process · Wireless sensor network · Mathematics · Computer science · Upper and lower bounds

10 Publications

3 H-index

20 Citations


Publications (13)


#1 Gowtham Muniraju (H-Index: 3)

#2 Bhavya Kailkhura (H-Index: 12)

Last: Andreas Spanias (H-Index: 28)

(6 authors in total)

Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning (ML), and sequential optimization has become a popular solution. Typical examples include data summarization, sample mining for predictive modeling, and hyperparameter optimization. Existing solutions attempt to adaptively trade off between global exploration and local exploitation, in which the initial exploratory sample is critical to their success. While discrepancy-based samples have bec...

#1 Gowtham Muniraju (ASU: Arizona State University), H-Index: 3

#2 Cihan Tepedelenlioglu (ASU: Arizona State University), H-Index: 26

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 28

(3 authors in total)

A distributed algorithm to compute the spectral radius of the graph in the presence of additive channel noise is proposed. The spectral radius of the graph is the eigenvalue with the largest magnitude of the adjacency matrix, and is a useful characterization of the network graph. Conventionally, centralized methods are used to compute the spectral radius, which involves eigenvalue decomposition of the adjacency matrix of the underlying graph. We devise an algorithm to reach consensus on the spec...
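The abstract contrasts the proposed distributed scheme with the conventional centralized approach via eigenvalue decomposition. As a point of reference only (this is not the paper's distributed algorithm, and the function name is illustrative), the centralized spectral radius can be sketched with power iteration on the adjacency matrix:

```python
import numpy as np

def spectral_radius_power_iteration(A, iters=200, seed=0):
    """Estimate the spectral radius (largest-magnitude eigenvalue)
    of adjacency matrix A by power iteration."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    # Rayleigh quotient of the converged vector approximates
    # the dominant eigenvalue
    return float(x @ A @ x)

# Triangle graph K3: eigenvalues are {2, -1, -1}, spectral radius 2
A = np.ones((3, 3)) - np.eye(3)
```

Power iteration assumes a unique dominant eigenvalue; for bipartite graphs, where both +λmax and -λmax occur, one would iterate on a shifted matrix such as A + cI instead.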

#1 Blaine Ayotte (Clarkson University), H-Index: 1

#2 Justin Au-Yeung (Clarkson University)

Last: Cihan Tepedelenlioglu (ASU: Arizona State University), H-Index: 26

(8 authors in total)

In this innovative practice work-in-progress paper, we discuss novel methods to teach machine learning concepts to undergraduate students. Teaching machine learning involves introducing students to complex concepts in statistics, linear algebra, and optimization. In order for students to better grasp concepts in machine learning, we provide them with hands-on exercises. These types of immersive experiences will expose students to the different stages of the practical uses of machine learning. Th...

#1 Gowtham Muniraju (ASU: Arizona State University), H-Index: 3

#2 Cihan Tepedelenlioglu (ASU: Arizona State University), H-Index: 26

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 28

(3 authors in total)

A novel distributed algorithm for estimating the maximum of the node initial state values in a network, in the presence of additive communication noise, is proposed. Conventionally, the maximum is estimated locally at each node by updating the node state value with the largest received measurement in every iteration. However, due to the additive channel noise, the estimate of the maximum at each node drifts at each iteration, and this results in nodes diverging from the true max value. Max-plus a...
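For context, the conventional noiseless max-consensus update that the abstract describes as the baseline (not the paper's noise-robust algorithm) can be sketched as follows; the adjacency-list encoding of the graph is an assumption of this sketch:

```python
def max_consensus(x0, neighbors, iters):
    """Noiseless max-consensus: every node replaces its state with the
    maximum over its own and its neighbors' states. With additive
    channel noise, this naive rule acquires a positive drift, which is
    the problem the paper addresses."""
    x = list(map(float, x0))
    for _ in range(iters):
        # synchronous update: all nodes read the previous iteration's states
        x = [max([x[i]] + [x[j] for j in neighbors[i]])
             for i in range(len(x))]
    return x

# Path graph 0-1-2-3: the maximum value propagates to all nodes
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On a connected graph, the true maximum reaches every node within a number of iterations equal to the graph diameter.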

#1 Xue Zhang (H-Index: 7)

#2 Cihan Tepedelenlioglu (H-Index: 26)

Last: Gowtham Muniraju (H-Index: 3)

(5 authors in total)

In this paper, localization using narrowband communication signals is considered in the presence of fading channels with time-of-arrival measurements. When narrowband signals are used for localization, due to existing hardware constraints, fading channels play a crucial role in localization accuracy. In a location estimation formulation, the Cramér-Rao lower bound for localization error is derived under different assumptions on fading coefficients. For the same level of localization accuracy, t...
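As background, the Cramér-Rao lower bound for position from time-of-arrival (range) measurements has a standard closed form in the simplest case of i.i.d. Gaussian range noise and known (non-fading) channels, via the Fisher information matrix. The paper's contribution is the analogous bound under fading assumptions, which this sketch does not include:

```python
import numpy as np

def toa_position_crlb(target, anchors, sigma):
    """CRLB on 2-D position MSE from ToA range measurements with i.i.d.
    Gaussian noise (std sigma): FIM = (1/sigma^2) * sum_i u_i u_i^T,
    where u_i is the unit vector from anchor i to the target.
    The bound on total position MSE is trace(FIM^{-1})."""
    target = np.asarray(target, float)
    fim = np.zeros((2, 2))
    for a in anchors:
        d = target - np.asarray(a, float)
        u = d / np.linalg.norm(d)          # bearing from anchor to target
        fim += np.outer(u, u) / sigma**2
    return float(np.trace(np.linalg.inv(fim)))

# Four anchors surrounding the target give an isotropic (direction-
# independent) bound
anchors = [(1, 0), (0, 1), (-1, 0), (0, -1)]
```

The bound depends only on the bearings to the anchors, not their distances, which is why anchor geometry dominates ToA accuracy in this idealized setting.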

#1 Gowtham Muniraju (ASU: Arizona State University), H-Index: 3

#2 Cihan Tepedelenlioglu (ASU: Arizona State University), H-Index: 26

Last: Mahesh K. Banavar (Clarkson University), H-Index: 14

(5 authors in total)

A distributed consensus algorithm for estimating the maximum of the node initial state values in a network is analyzed in the presence of communication noise. Conventionally, the maximum is estimated by updating the node state value with the largest received measurements in every iteration at each node. However, due to additive channel noise, the estimate of the maximum at each node has a positive drift at each iteration, and this results in nodes diverging from the true max val...

#1 Gowtham Muniraju (H-Index: 3)

#2 Bhavya Kailkhura (H-Index: 12)

Last: Peer-Timo Bremer (H-Index: 26)

(4 authors in total)

A common challenge in machine learning and related fields is the need to efficiently explore high-dimensional parameter spaces using small numbers of samples. Typical examples are hyper-parameter optimization in deep learning and sample mining in predictive modeling tasks. All such problems trade off exploration, which samples the space without knowledge of the target function, against exploitation, where information from previous evaluations is used in an adaptive feedback loop. Much of the recent f...
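Exploration without knowledge of the target function is typically seeded with a space-filling design. A minimal Latin-hypercube sketch, one common choice and not necessarily the sampling studied in this work, looks like:

```python
import numpy as np

def latin_hypercube(n, d, seed=0):
    """Latin hypercube sample of n points in [0,1)^d: each dimension is
    split into n equal strata, and every stratum contains exactly one
    point, giving good one-dimensional coverage for an initial
    exploratory design."""
    rng = np.random.default_rng(seed)
    # one jittered point per stratum, then an independent shuffle per axis
    pts = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(pts[:, j])
    return pts
```

Each column is a random permutation of the n strata, so marginal coverage is uniform by construction even for very small n.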

#1 Gowtham Muniraju (H-Index: 3)

#2 Bhavya Kailkhura (H-Index: 12)

Last: Andreas Spanias (H-Index: 28)

(6 authors in total)

Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning, and sequential optimization has become a popular solution. Typical examples include data summarization, sample mining for predictive modeling, and hyper-parameter optimization. Existing solutions attempt to adaptively trade off between global exploration and local exploitation, wherein the initial exploratory sample is critical to their success. While discrepancy-based samples have become th...


#1 Abhinav Dixit (ASU: Arizona State University), H-Index: 1

#2 Uday Shankar Shanthamallu (ASU: Arizona State University), H-Index: 2

Last: Huan Song (H-Index: 4)

(12 authors in total)
