
LDA via L1-PCA of Whitened Data

Published on Jan 1, 2020 in IEEE Transactions on Signal Processing · DOI: 10.1109/TSP.2019.2955860
Rubén Martín-Clemente (University of Seville), estimated H-index: 6
Vicente Zarzoso (CNRS: Centre national de la recherche scientifique), estimated H-index: 20
Abstract
Principal component analysis (PCA) and Fisher's linear discriminant analysis (LDA) are widespread techniques in data analysis and pattern recognition. Recently, the L1 norm has been proposed as an alternative to the classical L2-norm criterion in PCA, drawing considerable research interest on account of its increased robustness to outliers. The present work proves that, combined with a whitening preprocessing step, L1-PCA can perform LDA in an unsupervised manner, i.e., sparing the need for labelled data. A rigorous proof is given in the case of data drawn from a mixture of Gaussians. A number of numerical experiments on synthetic as well as real data confirm the theoretical findings.
  • References (34)
  • Citations (0)
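
To make the abstract's claim concrete, here is a minimal sketch of the pipeline on synthetic two-class data (my own toy setup in Python/NumPy, not the authors' code): whiten the samples, then extract the first L1 principal component with a greedy sign-search heuristic. Under the paper's Gaussian-mixture assumption, the resulting direction approximates the Fisher LDA direction without ever touching the labels; the labels are used below only to verify the separation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian mixture (assumed setup, for illustration only).
N, D = 500, 2
labels = rng.random(N) < 0.5
means = np.where(labels[:, None], 3.0, -3.0) * np.array([1.0, 0.5])
X = rng.standard_normal((N, D)) + means          # N x D samples

# Step 1: whitening -- zero mean, identity covariance.
Xc = X - X.mean(axis=0)
evals, E = np.linalg.eigh(np.cov(Xc.T))
Z = (Xc @ E) / np.sqrt(evals)                    # whitened samples

# Step 2: first L1 principal component of Z, i.e. argmax_{||w||=1} ||Z w||_1.
# Uses the sign-vector identity w = Z^T b / ||Z^T b||_2 with b in {-1,+1}^N
# and a simple greedy bit-flip search (a heuristic, not the optimal algorithm).
b = rng.choice([-1.0, 1.0], size=N)
cur, improved = np.linalg.norm(Z.T @ b), True
while improved:
    improved = False
    for n in range(N):
        b[n] = -b[n]                             # tentatively flip one sign
        val = np.linalg.norm(Z.T @ b)
        if val > cur + 1e-12:
            cur, improved = val, True            # keep the flip
        else:
            b[n] = -b[n]                         # revert
w = Z.T @ b
w /= np.linalg.norm(w)                           # unsupervised discriminant direction

# Labels are used only here, to check that the projection separates the classes.
proj = Z @ w
print("projected class means:", proj[labels].mean(), proj[~labels].mean())
```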
References (34)
#1 Nicholas Tsagkarakis (SUNY: State University of New York System), H-Index: 4
#2 Panos P. Markopoulos (RIT: Rochester Institute of Technology), H-Index: 9
Last: Dimitris A. Pados (FAU: Florida Atlantic University), H-Index: 27
(4 authors)
L1-norm Principal-Component Analysis (L1-PCA) of real-valued data has attracted significant research interest over the past decade. L1-PCA of complex-valued data remains to date unexplored despite the many possible applications (in communication systems, for example). In this paper, we establish theoretical and algorithmic foundations of L1-PCA of complex-valued data matrices. Specifically, we first show that, in contrast to the real-valued case for which an optimal polynomial-cost algorithm was...
11 Citations
#1 Rubén Martín-Clemente (University of Seville), H-Index: 6
#2 Vicente Zarzoso (University of Nice Sophia Antipolis), H-Index: 20
7 Citations
#1 Sheng Wang (Nanjing University of Science and Technology), H-Index: 3
#2 Jianfeng Lu (Nanjing University of Science and Technology), H-Index: 7
Last: Jing-Yu Yang (Nanjing University of Science and Technology), H-Index: 49
(5 authors)
When facing high-dimensional data, dimension reduction is necessary before classification. Among dimension reduction methods, linear discriminant analysis (LDA) is a popular one that has been widely used. LDA aims to maximize the ratio of the between-class scatter to the total data scatter in the projected space, and it requires the label of each data point. However, in real applications labelled data are scarce and unlabelled data abound, so LDA is hard to apply in such cases. In this...
26 Citations
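
For comparison with the unsupervised approach, the supervised Fisher criterion this reference builds on has a closed form in the two-class case: the discriminant direction is w ∝ Sw⁻¹(μ1 − μ0). A minimal NumPy sketch (my own illustration, not code from the reference):

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA direction: w ~ Sw^{-1} (mu1 - mu0).

    X: N x D data matrix; y: boolean label array of length N.
    """
    X0, X1 = X[~y], X[y]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the two classes' centered scatter matrices.
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)
```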
Jun 1, 2016 in CVPR (Computer Vision and Pattern Recognition)
#1 Kaiming He (Microsoft), H-Index: 57
#2 Xiangyu Zhang (Xi'an Jiaotong University), H-Index: 25
Last: Jian Sun (Microsoft), H-Index: 88
(4 authors)
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the...
14k Citations
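
The residual reformulation described here amounts to adding an identity shortcut around a stack of layers, so the block learns F(x) and outputs F(x) + x. A bare-bones NumPy sketch of that idea (an illustration of the concept, not the paper's architecture):

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = F(x) + x: the layers learn the residual F rather than the full map.

    x: batch of activations (N x d); W1, W2: d x d weight matrices (shapes
    assumed so the identity shortcut needs no projection).
    """
    F = np.maximum(x @ W1, 0.0) @ W2   # two-layer residual branch with ReLU
    return F + x                       # identity shortcut
```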
#1 Ying Liu (UB: University at Buffalo), H-Index: 6
#2 Dimitris A. Pados (UB: University at Buffalo), H-Index: 27
We consider the problem of foreground and background extraction from compressed-sensed (CS) surveillance videos that are captured by a static CS camera. We propose, for the first time in the literature, a principal component analysis (PCA) approach that computes the low-rank subspace of the background scene directly in the CS domain. Rather than computing the conventional L2-norm-based principal components, which are simply the dominant left singular vectors of the CS-domain data matrix, we com...
35 Citations
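
The stated fact that conventional L2 principal components are just the dominant left singular vectors of the data matrix is easy to verify numerically (a generic check, with a random stand-in for the CS-domain data):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 100))               # stand-in D x N data matrix
Y -= Y.mean(axis=1, keepdims=True)               # center the rows

U, s, _ = np.linalg.svd(Y, full_matrices=False)  # left singular vectors
evals, evecs = np.linalg.eigh(Y @ Y.T)           # eigenvectors of the scatter matrix

# The leading eigenvector of Y Y^T equals the first left singular vector
# (up to sign), which is exactly the first L2 principal component.
print(np.allclose(np.abs(evecs[:, -1]), np.abs(U[:, 0])))  # True
```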
Jun 1, 2015 in SPAWC (International Workshop on Signal Processing Advances in Wireless Communications)
#1 Nicholas Tsagkarakis (SUNY: State University of New York System), H-Index: 4
#2 Panos P. Markopoulos (SUNY: State University of New York System), H-Index: 1
Last: Dimitris A. Pados (SUNY: State University of New York System), H-Index: 27
(3 authors)
In the light of recent developments in optimal real L1-norm principal-component analysis (PCA), we provide the first algorithm in the literature to carry out L1-PCA of complex-valued data. Then, we use this algorithm to develop a novel subspace-based direction-of-arrival (DoA) estimation method that is resistant to faulty measurements or jamming. As demonstrated by numerical experiments, the proposed algorithm is as effective as state-of-the-art L2-norm methods in clean-data environments a...
11 Citations
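
As background for the subspace-based DoA estimation mentioned here, a standard L2 pipeline estimates the signal subspace from the snapshot covariance and scans a MUSIC pseudospectrum; the reference's contribution is making the subspace step L1-robust. A conventional MUSIC sketch under assumed parameters (half-wavelength uniform linear array, two sources):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 8, 200, 2                     # sensors, snapshots, sources (assumed)
theta_true = np.deg2rad([-20.0, 35.0])  # ground-truth DoAs (assumed)

def steering(theta, M):
    # Steering vectors of a half-wavelength uniform linear array: M x len(theta).
    return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

A = steering(theta_true, M)
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Signal/noise subspace split from the sample covariance (the L2 step that an
# L1-norm subspace estimate would replace for robustness to faulty snapshots).
R = X @ X.conj().T / N
_, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvecs[:, : M - K]                # noise subspace

# MUSIC pseudospectrum: peaks at the directions of arrival.
grid = np.deg2rad(np.linspace(-90, 90, 721))
p = 1.0 / np.linalg.norm(En.conj().T @ steering(grid, M), axis=0) ** 2
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
top = peaks[np.argsort(p[peaks])[-K:]]
print("estimated DoAs (deg):", np.sort(np.rad2deg(grid[top])))
```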
#1 Matthew Johnson (RIT: Rochester Institute of Technology), H-Index: 2
#2 Andreas Savakis (RIT: Rochester Institute of Technology), H-Index: 18
Face recognition using eigenfaces is a popular technique based on principal component analysis (PCA). However, its performance suffers from the presence of outliers due to occlusions and noise often encountered in unconstrained settings. We address this problem by utilizing L1-eigenfaces for robust face recognition. We introduce an effective approach for L1-eigenfaces based on combining fast computation of L1-PCA with a greedy search technique. Experimental results demonstrate that L1-ei...
7 Citations
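
A greedy search of the kind mentioned here can be sketched for the first L1 principal component using the same sign-vector identity as the pipeline sketch after the abstract above: w = Xb/‖Xb‖₂ with b ∈ {−1,+1}ᴺ, flipping one sign at a time while ‖Xb‖₂ improves (a generic heuristic in the spirit of the reference, not its exact algorithm):

```python
import numpy as np

def l1_pc_greedy(X, n_restarts=10, seed=0):
    """Greedy sign-flipping heuristic for the first L1 principal component.

    X is D x N (e.g. vectorized face images as columns). Maximizing
    ||X^T w||_1 over unit vectors w reduces to maximizing ||X b||_2 over
    sign vectors b, with w = X b / ||X b||_2.
    """
    _, N = X.shape
    rng = np.random.default_rng(seed)
    best_b, best_val = None, -np.inf
    for _ in range(n_restarts):
        b = rng.choice([-1.0, 1.0], size=N)
        cur, improved = np.linalg.norm(X @ b), True
        while improved:
            improved = False
            for n in range(N):
                b[n] = -b[n]                  # tentative single-sign flip
                val = np.linalg.norm(X @ b)
                if val > cur + 1e-12:
                    cur, improved = val, True
                else:
                    b[n] = -b[n]              # revert
        if cur > best_val:
            best_val, best_b = cur, b.copy()
    w = X @ best_b
    return w / np.linalg.norm(w)
```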
#1 Panos P. Markopoulos (SUNY: State University of New York System), H-Index: 9
#2 George N. Karystinos (TUC: Technical University of Crete), H-Index: 14
Last: Dimitris A. Pados (SUNY: State University of New York System), H-Index: 1
(3 authors)
We describe ways to define and calculate L1-norm signal subspaces that are less sensitive to outlying data than L2-calculated subspaces. We start with the computation of the L1 maximum-projection principal component of a data matrix containing N signal samples of dimension D. We show that while the general problem is formally NP-hard in asymptotically large N, D, the case of engineering interest of fixed dimension D and asymptotically large sample size N is not. In particular, for the case ...
85 Citations
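
The reference's starting point, computing the single L1 maximum-projection principal component exactly, can be brute-forced for tiny sample sizes by enumerating all 2^N sign vectors (a naive illustration only; the paper's contribution is polynomial-cost algorithms for fixed dimension D):

```python
import numpy as np
from itertools import product

def l1_pc_exhaustive(X):
    """Exact first L1 principal component of a D x N matrix, found by
    enumerating all 2^N sign vectors b and keeping the maximizer of
    ||X b||_2. Exponential in N: feasible only for very small N.
    """
    _, N = X.shape
    best_b, best_val = None, -np.inf
    for signs in product((-1.0, 1.0), repeat=N):
        b = np.asarray(signs)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)
```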
Jan 1, 2014 in ICML (International Conference on Machine Learning)
#1 Feiping Nie (UTA: University of Texas at Arlington), H-Index: 57
#2 Jianjun Yuan (UTA: University of Texas at Arlington), H-Index: 1
Last: Heng Huang (UTA: University of Texas at Arlington), H-Index: 41
(3 authors)
Principal Component Analysis (PCA) is the most widely used unsupervised dimensionality reduction approach. In recent research, several robust PCA algorithms were presented to enhance the robustness of the PCA model. However, the existing robust PCA methods incorrectly center the data using the L2-norm distance to calculate the mean, which is actually not the optimal mean, due to the L1 norm used in the objective functions. In this paper, we propose novel robust PCA objective functions with removing o...
108 Citations
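
The centering issue raised here is easy to see in one dimension: the sample mean minimizes the sum of squared deviations, while the minimizer of the sum of absolute deviations is the median, so an L1 objective paired with L2-mean centering is mismatched (a toy illustration, not the authors' estimator):

```python
import numpy as np

x = np.array([0.9, 1.0, 1.1, 1.2, 50.0])            # one gross outlier
print("L2-optimal center (mean)  :", x.mean())      # dragged toward the outlier
print("L1-optimal center (median):", np.median(x))  # stays with the bulk
```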
#1 Gonzalo Mateos (UMN: University of Minnesota), H-Index: 24
#2 Georgios B. Giannakis (UMN: University of Minnesota), H-Index: 117
Principal component analysis (PCA) is widely used for dimensionality reduction, with well-documented merits in various applications involving high-dimensional data, including computer vision, preference measurement, and bioinformatics. In this context, the fresh look advocated here permeates benefits from variable selection and compressive sampling to robustify PCA against outliers. A least-trimmed squares estimator of a low-rank bilinear factor analysis model is shown to be closely related to that o...
71 Citations
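
The least-trimmed squares idea referenced here can be sketched generically: alternately fit a low-rank subspace and refit using only the samples with the smallest residuals, so gross outliers drop out of the fit (a schematic trimmed-PCA loop under my own assumptions, not the authors' bilinear estimator):

```python
import numpy as np

def trimmed_pca(X, r=1, keep=0.8, n_iter=10):
    """Schematic trimmed PCA: X is D x N and assumed centered. Repeatedly fit
    a rank-r basis to the `keep` fraction of columns with smallest residuals."""
    _, N = X.shape
    h = int(keep * N)
    idx = np.arange(N)                                 # start from all samples
    for _ in range(n_iter):
        U, _, _ = np.linalg.svd(X[:, idx], full_matrices=False)
        B = U[:, :r]                                   # current subspace basis
        res = np.linalg.norm(X - B @ (B.T @ X), axis=0)
        idx = np.argsort(res)[:h]                      # keep best-fitting columns
    return B
```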
Cited By (0)