Andreas Spanias

Arizona State University

Signal processing · Pattern recognition · Mathematics · Computer science · Digital signal processing

487 Publications

29 H-index

5,372 Citations

Publications 461

#1 Vivek Sivaraman Narayanaswamy (ASU: Arizona State University), H-Index: 2

#2 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-Index: 15

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(4 authors in total)

State-of-the-art under-determined audio source separation systems rely on supervised end-to-end training of carefully tailored neural network architectures operating either in the time or the spectral domain. However, these methods are severely challenged by requiring access to expensive source-level labeled data and by being specific to a given set of sources and the mixing process, which demands complete re-training when those assumptions change. This strongly emphasizes the need for unsupe...

May 1, 2020 in ICASSP (International Conference on Acoustics, Speech, and Signal Processing)

#1 Uday Shankar Shanthamallu (ASU: Arizona State University), H-Index: 3

#2 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-Index: 15

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(3 authors in total)

#1 Gowtham Muniraju, H-Index: 3

#2 Bhavya Kailkhura, H-Index: 14

Last: Andreas Spanias, H-Index: 29

(6 authors in total)

Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning (ML), and sequential optimization has become a popular solution. Typical examples include data summarization, sample mining for predictive modeling, and hyperparameter optimization. Existing solutions attempt to adaptively trade off between global exploration and local exploitation, in which the initial exploratory sample is critical to their success. While discrepancy-based samples have bec...
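The abstract above mentions discrepancy-based initial samples for the exploratory phase of sequential optimization. As an illustrative aside (not the paper's method), a common way to build such a low-discrepancy initial design is the Halton sequence; the sketch below, with hypothetical function names, generates a 2-D Halton design in the unit square.

```python
# Hypothetical sketch: a 2-D Halton low-discrepancy sequence, one common
# choice of "discrepancy-based" initial design for sequential optimization.
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def halton_design(n, bases=(2, 3)):
    """First n points of a Halton sequence in [0, 1)^len(bases)."""
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]

points = halton_design(5)
# The first point is (0.5, 1/3); successive points fill the square evenly.
```

Unlike uniform random sampling, consecutive Halton points avoid clustering, which is why such designs are popular for seeding global exploration.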

#1 Uday Shankar Shanthamallu (ASU: Arizona State University), H-Index: 3

#2 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-Index: 15

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(3 authors in total)

Machine learning models that can exploit the inherent structure in data have gained prominence. In particular, there is a surge in deep learning solutions for graph-structured data, due to their widespread applicability in several fields. Graph attention networks (GAT), a recent addition to the broad class of feature learning models in graphs, utilize the attention mechanism to efficiently learn continuous vector representations for semi-supervised learning problems. In this paper, we perform a ...

#1 Gowtham Muniraju, H-Index: 3

#2 Cihan Tepedelenlioglu, H-Index: 26

Last: Andreas Spanias, H-Index: 29

(3 authors in total)

#1 Uday Shankar Shanthamallu (ASU: Arizona State University), H-Index: 3

#2 Jayaraman J. Thiagarajan (LLNL: Lawrence Livermore National Laboratory), H-Index: 15

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(4 authors in total)

Modern data analysis pipelines are becoming increasingly complex due to the presence of multiview information sources. While graphs are effective in modeling complex relationships, in many scenarios, a single graph is rarely sufficient to succinctly represent all interactions, and hence, multilayered graphs have become popular. Though this leads to richer representations, extending solutions from the single-graph case is not straightforward. Consequently, there is a strong need for novel solutio...

#1 Gowtham Muniraju (ASU: Arizona State University), H-Index: 3

#2 Cihan Tepedelenlioglu (ASU: Arizona State University), H-Index: 26

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(3 authors in total)

A novel distributed algorithm for estimating the maximum of the node initial state values in a network, in the presence of additive communication noise, is proposed. Conventionally, the maximum is estimated locally at each node by updating the node state value with the largest received measurement in every iteration. However, due to the additive channel noise, the estimate of the maximum at each node drifts at each iteration, and this results in nodes diverging from the true maximum value. Max-plus a...
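The conventional update the abstract describes, each node replacing its state with the largest value received from its neighbors, can be sketched as follows for the noiseless case (hypothetical function names; the paper's contribution concerns the noisy setting, which this baseline does not handle).

```python
# Minimal sketch of conventional max-consensus on a noiseless network:
# each node repeatedly replaces its state with the largest value among
# itself and its neighbors. With additive channel noise, this same update
# drifts upward every iteration -- the failure mode the abstract describes.
def max_consensus(adjacency, init, iters):
    state = list(init)
    for _ in range(iters):
        state = [
            max([state[i]] + [state[j] for j in nbrs])
            for i, nbrs in enumerate(adjacency)
        ]
    return state

# Path graph 0-1-2-3; the true max (7.0) propagates to all nodes in 3 iterations.
adj = [[1], [0, 2], [1, 3], [2]]
print(max_consensus(adj, [3.0, 7.0, 1.0, 2.0], 3))  # [7.0, 7.0, 7.0, 7.0]
```

On a connected graph, the number of iterations needed equals the graph diameter, since the maximum spreads one hop per round.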

#1 Sameeksha Katoch, H-Index: 4

#2 Kowshik Thopalli, H-Index: 2

Last: Andreas Spanias, H-Index: 29

(5 authors in total)

Exploiting known semantic relationships between fine-grained tasks is critical to the success of recent model agnostic approaches. These approaches often rely on meta-optimization to make a model robust to systematic task or domain shifts. However, in practice, the performance of these methods can suffer, when there are no coherent semantic relationships between the tasks (or domains). We present Invenio, a structured meta-learning algorithm to infer semantic similarities between a given set of ...

#1 Gowtham Muniraju (ASU: Arizona State University), H-Index: 3

#2 Cihan Tepedelenlioglu (ASU: Arizona State University), H-Index: 26

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(3 authors in total)

A distributed algorithm to compute the spectral radius of the graph in the presence of additive channel noise is proposed. The spectral radius of the graph is the eigenvalue with the largest magnitude of the adjacency matrix, and is a useful characterization of the network graph. Conventionally, centralized methods are used to compute the spectral radius, which involves eigenvalue decomposition of the adjacency matrix of the underlying graph. We devise an algorithm to reach consensus on the spec...
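The centralized baseline the abstract refers to, computing the spectral radius via eigenvalue analysis of the adjacency matrix, can be sketched with power iteration (hypothetical names; this is only the centralized computation the paper's distributed, noise-tolerant algorithm replaces).

```python
# Centralized sketch: power iteration on the adjacency matrix to approximate
# the spectral radius (largest-magnitude eigenvalue). The paper instead
# computes this quantity distributedly under additive channel noise.
def power_iteration(A, iters=100):
    n = len(A)
    x = [1.0] * n          # arbitrary nonzero starting vector
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)   # infinity-norm eigenvalue estimate
        x = [v / lam for v in y]       # renormalize to avoid overflow
    return lam

# Adjacency matrix of the complete graph K3; its spectral radius is 2.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(power_iteration(A), 6))  # 2.0
```

For a connected, non-bipartite graph the iteration converges to the Perron eigenvalue; the estimate here uses the infinity norm of the iterate.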

#1 Kristen Jaskie (ASU: Arizona State University), H-Index: 1

#2 Charles Elkan (UCSD: University of California, San Diego), H-Index: 44

Last: Andreas Spanias (ASU: Arizona State University), H-Index: 29

(3 authors in total)

The positive and unlabeled (PU) learning problem is a semi-supervised binary classification problem. In PU learning, only an unknown percentage of positive samples are labeled, while the remaining samples, both positive and negative, are unlabeled. We wish to learn a decision boundary that separates the positive and negative data distributions. In this paper, we build on an existing popular probabilistic positive unlabeled learning algorithm and introduce a new modified logistic regression learner with a...
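The probabilistic PU algorithm the abstract builds on is commonly associated with the Elkan-Noto correction: a classifier g trained to predict the label indicator s satisfies p(y=1|x) = g(x)/c, where c = p(s=1|y=1) is estimated as the mean score g assigns to the known positives. The sketch below illustrates only that correction step, with illustrative (not learned) scores and hypothetical function names.

```python
# Hedged sketch of the Elkan-Noto style PU correction. A "nontraditional"
# classifier g predicts p(s=1|x) (labeled vs. unlabeled); under the
# selected-completely-at-random assumption, p(y=1|x) = g(x) / c, where
# c = p(s=1|y=1) is the label frequency among true positives.
def estimate_c(scores_on_labeled_positives):
    """Estimate c as the mean classifier score over known positives."""
    return sum(scores_on_labeled_positives) / len(scores_on_labeled_positives)

def posterior_positive(g_x, c):
    """Convert p(s=1|x) into p(y=1|x), clipped to a valid probability."""
    return min(1.0, g_x / c)

c = estimate_c([0.8, 0.7, 0.9])              # illustrative scores; c ~ 0.8
print(round(posterior_positive(0.4, c), 3))  # 0.5
```

The correction scales every score up by 1/c, reflecting that an unlabeled sample may still be positive; the paper's modified logistic regression learner refines how g itself is fit.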
