
On the Convergence of a Bayesian Algorithm for Joint Dictionary Learning and Sparse Recovery

Published on Jan 1, 2020 in IEEE Transactions on Signal Processing (impact factor 5.23)
DOI: 10.1109/TSP.2019.2954526
Geethu Joseph (SU: Syracuse University) · Estimated H-index: 1
Chandra R. Murthy (IISc: Indian Institute of Science) · Estimated H-index: 16
Abstract
Dictionary learning (DL) is a well-researched problem, where the goal is to learn a dictionary from a finite set of noisy training signals, such that the training data admits a sparse representation over the dictionary. While several solutions are available in the literature, relatively little is known about their convergence and optimality properties. In this paper, we make progress on this problem by analyzing a Bayesian algorithm for DL. Specifically, we cast the DL problem into the sparse Bayesian learning (SBL) framework by imposing a hierarchical Gaussian prior on the sparse vectors. This allows us to simultaneously learn the dictionary as well as the parameters of the prior on the sparse vectors using the expectation-maximization algorithm. The dictionary update step turns out to be a non-convex optimization problem, and we present two solutions, namely, an alternating minimization (AM) procedure and an Armijo line search (ALS) method. We analytically show that the ALS procedure is globally convergent, and establish the stability of the solution by characterizing its limit points. Further, we prove the convergence and stability of the overall DL-SBL algorithm, and show that the minima of the cost function of the overall algorithm are achieved at sparse solutions. As a concrete example, we consider the application of the SBL-based DL algorithm to image denoising, and demonstrate the efficacy of the algorithm relative to existing DL algorithms.
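The EM recipe in the abstract is compact enough to sketch in code. Below is a minimal, hypothetical NumPy sketch of how a DL-SBL-style iteration can be organized, assuming Gaussian noise with known variance sigma2 and a zero-mean Gaussian prior with variance gamma_i on each sparse coefficient. Note that the paper's constrained dictionary update (solved there by AM or Armijo line search) is replaced here by an unconstrained least-squares step plus column renormalization, so this illustrates the structure rather than reproducing the authors' exact algorithm.

```python
import numpy as np

def dl_sbl_sketch(Y, K, n_iter=50, sigma2=1e-2, eps=1e-8, seed=0):
    """EM-style sketch: the E-step computes the Gaussian posterior of the
    sparse codes; the M-step re-estimates the prior variances gamma and
    the dictionary A."""
    rng = np.random.default_rng(seed)
    m, N = Y.shape
    A = rng.standard_normal((m, K))
    A /= np.linalg.norm(A, axis=0)               # unit-norm columns
    gamma = np.ones(K)                           # prior variances of the codes

    for _ in range(n_iter):
        # E-step: posterior N(mu_n, Sigma) of each code x_n given y_n.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        Mu = Sigma @ A.T @ Y / sigma2            # K x N posterior means

        # M-step for the prior: gamma_i = average posterior second moment.
        gamma = np.maximum(np.mean(Mu**2, axis=1) + np.diag(Sigma), eps)

        # M-step for the dictionary: argmin_A sum_n E||y_n - A x_n||^2,
        # then renormalize columns (simplification of the paper's AM/ALS step).
        A = (Y @ Mu.T) @ np.linalg.inv(Mu @ Mu.T + N * Sigma)
        A /= np.linalg.norm(A, axis=0) + eps

    return A, Mu, gamma
```

Hyperparameters gamma_i that collapse toward eps correspond to atoms the model stops using, which is the mechanism by which the SBL prior yields sparse representations.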
References (42)
#1 Huikang Liu (CUHK: The Chinese University of Hong Kong) · H-Index: 5
#2 Anthony Man-Cho So (CUHK: The Chinese University of Hong Kong) · H-Index: 23
Last: Weijie Wu (CUHK: The Chinese University of Hong Kong) · H-Index: 3
(3 authors)
The problem of optimizing a quadratic form over an orthogonality constraint (QP-OC for short) is one of the most fundamental matrix optimization problems and arises in many applications. In this paper, we characterize the growth behavior of the objective function around the critical points of the QP-OC problem and demonstrate how such characterization can be used to obtain strong convergence rate results for iterative methods that exploit the manifold structure of the orthogonality constraint (i...
12 Citations
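For concreteness, the QP-OC problem this abstract refers to is usually written as follows (notation ours, assuming a symmetric matrix A; the paper's exact formulation may differ):

```latex
\min_{X \in \mathbb{R}^{n \times k}} \ \operatorname{tr}\!\left(X^{\top} A X\right)
\quad \text{subject to} \quad X^{\top} X = I_k .
```

Critical points on this feasible set (the Stiefel manifold) satisfy A X = X (X^{\top} A X), and the growth behavior of the objective around them is what yields convergence rates for manifold-based iterative methods.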
#1 Xiao-bo Li (SWPU: Southwest Petroleum University) · H-Index: 1
#2 Nan-jing Huang (Sichuan University) · H-Index: 27
Last: Jen-Chih Yao (PRC: China Medical University (PRC)) · H-Index: 50
(4 authors)
In this paper, we propose a descent method with a new inexact line search for unconstrained optimization problems on Riemannian manifolds. The global convergence of the proposed method is established under appropriate assumptions. We further analyze the convergence rates of the proposed descent method, namely its R-linear, superlinear, and quadratic convergence rates.
1 Citation
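In the flat (Euclidean) case, inexact line searches of this kind reduce to the familiar Armijo backtracking rule, which is also the ALS ingredient of the DL-SBL paper above. A minimal generic sketch (ours, not the Riemannian variant analyzed in this reference):

```python
import numpy as np

def armijo_step(f, grad_f, x, beta=0.5, c=1e-4, t0=1.0, max_backtracks=50):
    """Shrink the step until the sufficient-decrease (Armijo) condition
    f(x - t*g) <= f(x) - c*t*||g||^2 holds along the negative gradient."""
    g = grad_f(x)
    fx = f(x)
    t = t0
    for _ in range(max_backtracks):
        if f(x - t * g) <= fx - c * t * np.dot(g, g):
            return x - t * g, t
        t *= beta
    return x, 0.0  # no acceptable step found

# toy usage: a few Armijo steps on f(x) = ||x||^2 / 2
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x
x = np.array([3.0, -4.0])
for _ in range(5):
    x, _ = armijo_step(f, grad_f, x)
```

On a manifold, the update x - t*g is replaced by a retraction of the Riemannian gradient step, but the sufficient-decrease test is unchanged.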
This paper addresses the problem of learning dictionaries for multimodal datasets, i.e. datasets collected from multiple data sources. We present an algorithm called multimodal sparse Bayesian dictionary learning (MSBDL). MSBDL leverages information from all available data modalities through a joint sparsity constraint. The underlying framework offers a considerable amount of flexibility to practitioners and addresses many of the shortcomings of existing multimodal dictionary learning approaches...
#1 Juan G. Serra (UGR: University of Granada) · H-Index: 2
#2 Matteo Testa (Polytechnic University of Turin) · H-Index: 3
Last: Aggelos K. Katsaggelos (NU: Northwestern University) · H-Index: 65
(4 authors)
Recent work in signal processing in general and image processing in particular deals with sparse representation related problems. Two such problems are of paramount importance: an overriding need for designing a well-suited overcomplete dictionary containing a redundant set of atoms, i.e., basis signals, and how to find a sparse representation of a given signal with respect to the chosen dictionary. Dictionary learning techniques, among which we find the popular K-singular value decomposition algo...
4 Citations
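The second of the two problems the abstract names, sparse coding over a fixed dictionary D, is commonly formalized as (notation ours):

```latex
\hat{x} \in \arg\min_{x} \ \|x\|_{0}
\quad \text{subject to} \quad \|y - D x\|_{2} \le \varepsilon ,
```

where the l0 "norm" counts nonzero entries and ε budgets the representation error; dictionary learning wraps an outer optimization over D around this inner problem.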
Mar 1, 2017 in ICASSP (International Conference on Acoustics, Speech, and Signal Processing)
#1 Igor Fedorov (UCSD: University of California, San Diego) · H-Index: 4
#2 Bhaskar D. Rao (UCSD: University of California, San Diego) · H-Index: 59
Last: Truong Q. Nguyen (UCSD: University of California, San Diego) · H-Index: 4
(3 authors)
In this paper, we present a novel multimodal sparse dictionary learning algorithm based on a hierarchical sparse Bayesian framework. The framework allows for enforcing joint sparsity across dictionaries without restricting the actual entries to be equal. We show that the proposed method is able to learn dictionaries of higher quality than existing approaches. We validate our claims with extensive experiments on synthetic data as well as real-world data.
2 Citations
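One way to read "joint sparsity without equal entries" is through shared hyperparameters: if the coefficient vectors of the M modalities draw their variances from a common set of γ_i, the same coefficients are switched off in every modality while their values remain free. A sketch in our notation (not necessarily the paper's exact model):

```latex
x_{i}^{(m)} \mid \gamma_{i} \sim \mathcal{N}\!\left(0, \gamma_{i}\right),
\quad m = 1, \dots, M ;
\qquad \gamma_{i} \to 0 \ \Longrightarrow\ x_{i}^{(m)} = 0 \ \text{for all } m .
```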
#1 Linxiao Yang (University of Electronic Science and Technology of China) · H-Index: 7
#2 Jun Fang (University of Electronic Science and Technology of China) · H-Index: 24
Last: Hongbin Li (Stevens Institute of Technology) · H-Index: 49
(4 authors)
We consider a dictionary learning problem aimed at designing a dictionary such that the signals admit a sparse or an approximate sparse representation over the learnt dictionary. The problem finds a variety of applications including image denoising, feature extraction, etc. In this paper, we propose a new hierarchical Bayesian model for dictionary learning, in which a Gaussian-inverse Gamma hierarchical prior is used to promote the sparsity of the representation. Suitable non-informative priors ...
8 Citations
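As a sketch of why such a hierarchy promotes sparsity (standard SBL reasoning, in our notation): an inverse-Gamma prior on each Gaussian variance integrates out to a heavy-tailed Student-t marginal on the coefficient,

```latex
x_{i} \mid \gamma_{i} \sim \mathcal{N}(0, \gamma_{i}), \quad
\gamma_{i} \sim \text{Inv-Gamma}(a, b)
\ \Longrightarrow\
p(x_{i}) = \int \mathcal{N}(x_{i} \mid 0, \gamma_{i})\, p(\gamma_{i})\, \mathrm{d}\gamma_{i}
\ \propto\ \left(1 + \frac{x_{i}^{2}}{2b}\right)^{-(a + 1/2)} ,
```

which concentrates probability mass near zero while keeping heavy tails, the characteristic sparsity-inducing shape.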
#1 Bin Gao · H-Index: 2
#2 Xin Liu · H-Index: 9
Last: Ya-xiang Yuan · H-Index: 31
(4 authors)
In this paper, we prove that the global version of the Łojasiewicz gradient inequality holds for the quadratic sphere-constrained optimization problem with exponent θ = 3/4. An example from Ting Kei Pong shows that θ = 3/4 is tight. This is the first Łojasiewicz gradient inequality established for a sphere-constrained optimization problem with a linear term.
4 Citations
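In our notation (taking f(x) = xᵀAx + 2bᵀx restricted to the unit sphere, which may differ in detail from the paper's setup), the inequality in question reads:

```latex
\left| f(x) - f(\bar{x}) \right|^{3/4} \ \le\ c \left\| \operatorname{grad} f(x) \right\|
\qquad \text{for all } x \in \mathbb{S}^{n-1},
```

where x̄ is a critical point, grad is the Riemannian gradient on the sphere, and c > 0 is a constant; "global" means the bound holds on the whole sphere rather than only near x̄. Inequalities of this type convert directly into convergence-rate guarantees for gradient methods.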
Abstract: In this work we show that iterative thresholding and K-means (ITKM) algorithms can recover a generating dictionary with K atoms from noisy S-sparse signals up to an error ε̃, as long as the initialisation is within a convergence radius, that is, up to a log K factor inversely proportional to the dynamic range of the signals, and the sample size is proportional to K log K ε̃⁻². The results are valid for arbitrary target errors if the sparsity level is of the order of the square ro...
20 Citations
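The thresholding-plus-means structure of ITKM is simple enough to sketch. Below is a simplified, hypothetical NumPy rendering of one iteration: threshold each signal's correlations to select S atoms, then re-estimate every atom as a normalized signed mean of the signals that selected it. (Published ITKM/ITKrM variants refine this with residual means; this sketch keeps only the skeleton.)

```python
import numpy as np

def itkm_step(D, Y, S):
    """One simplified ITKM-style update of dictionary D (m x K) from
    signals Y (m x N), assuming S-sparse representations."""
    corr = D.T @ Y                                        # K x N correlations
    D_new = np.zeros_like(D)
    for n in range(Y.shape[1]):
        support = np.argsort(-np.abs(corr[:, n]))[:S]     # thresholding step
        for k in support:
            D_new[:, k] += np.sign(corr[k, n]) * Y[:, n]  # signed-means step
    norms = np.linalg.norm(D_new, axis=0)
    used = norms > 0
    D_new[:, used] /= norms[used]                         # renormalize updated atoms
    D_new[:, ~used] = D[:, ~used]                         # keep atoms nobody selected
    return D_new
```

The convergence radius and sample-size conditions quoted in the abstract are the conditions under which iterating such a map contracts toward the generating dictionary.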
#1 Jason T. Parker (AFRL: Air Force Research Laboratory) · H-Index: 11
#2 Philip Schniter (OSU: Ohio State University) · H-Index: 40
Last: Volkan Cevher (EPFL: École Polytechnique Fédérale de Lausanne) · H-Index: 33
(3 authors)
In this paper, we extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case. In Part I of this two-part paper, we derived our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, and proposed an adaptive damping mechanism that aids convergence under finite problem ...
35 Citations
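The generalized-bilinear setting can be summarized as follows (our notation): both factors of a matrix product are unknown and are observed only through a componentwise likelihood,

```latex
Z = A X, \qquad y_{ij} \sim p\left(y_{ij} \mid z_{ij}\right),
```

with A and X inferred jointly; dictionary learning from noisy training signals is the special case y_{ij} = z_{ij} + w_{ij} with Gaussian w_{ij}.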
May 1, 2014 in ICASSP (International Conference on Acoustics, Speech, and Signal Processing)
#1 Meisam Razaviyayn (UMN: University of Minnesota) · H-Index: 24
#2 Hung-Wei Tseng (UMN: University of Minnesota) · H-Index: 4
Last: Zhi-Quan Luo (UMN: University of Minnesota) · H-Index: 71
(3 authors)
In this paper we consider the dictionary learning problem for sparse representation. We first show that this problem is NP-hard and then propose an efficient dictionary learning scheme to solve several practical formulations of this problem. Unlike many existing algorithms in the literature, such as K-SVD, our proposed dictionary learning scheme is theoretically guaranteed to converge to the set of stationary points under certain mild assumptions. For the image denoising application, the perform...
14 Citations
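A common way to write the dictionary learning problem whose hardness is at issue (the paper's exact formulation may differ) is the sparsity-constrained factorization:

```latex
\min_{D,\, X} \ \left\| Y - D X \right\|_{F}^{2}
\quad \text{subject to} \quad
\left\| x_{n} \right\|_{0} \le s \ \ \forall n,
\qquad \left\| d_{k} \right\|_{2} = 1 \ \ \forall k .
```

NP-hardness of this joint problem is why convergence to stationary points, as claimed above, is the realistic guarantee; global optimality is out of reach in general.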
Cited By (0)