Ming Yan

Michigan State University

Mathematical optimization, Psychology, Cognitive psychology, Mathematics, Computer science

66 Publications

15 H-index

954 Citations

Publications 59

#1 Sulaiman A. Alghunaim (UCLA: University of California, Los Angeles), H-Index: 6

#2 Ming Yan (MSU: Michigan State University), H-Index: 15

Last: Ali H. Sayed (EPFL: École Polytechnique Fédérale de Lausanne), H-Index: 74

view all 3 authors...

This work studies multi-agent sharing optimization problems in which the objective is the sum of smooth local functions plus a convex (possibly nonsmooth) function that couples all agents. This scenario arises in many machine learning and engineering applications, such as regression over distributed features and resource allocation. We reformulate this problem into an equivalent saddle-point problem, which is amenable to decentralized solutions. We then propose a proximal primal-dual algori...
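For background on the reformulation the abstract mentions: writing the coupling function through its convex conjugate g* turns the sharing problem into a saddle-point problem. This is the standard identity, with illustrative notation, not necessarily the paper's exact formulation:

```latex
\min_{x_1,\dots,x_n} \; \sum_{i=1}^{n} f_i(x_i) + g\Big(\sum_{i=1}^{n} A_i x_i\Big)
\;=\;
\min_{x_1,\dots,x_n} \; \max_{y} \; \sum_{i=1}^{n} \big( f_i(x_i) + \langle y,\, A_i x_i \rangle \big) - g^*(y)
```

On the right-hand side the primal variables decouple across agents, so each x_i can be updated locally, which is what makes the saddle-point form amenable to decentralized algorithms.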

Dec 8, 2019 in NeurIPS (Neural Information Processing Systems)

#1 He Lyu (MSU: Michigan State University)

#2 Ningyu Sha (MSU: Michigan State University)

Last: Rongrong Wang (MSU: Michigan State University), H-Index: 8

view all 6 authors...

This paper extends robust principal component analysis (RPCA) to nonlinear manifolds. Suppose the observed data matrix is the sum of a sparse component and a component drawn from a low-dimensional manifold. Is it possible to separate them using ideas similar to RPCA? Is there any benefit in treating the manifold as a whole rather than treating each local region independently? We answer both questions affirmatively by proposing and analyzing an optimization framework that separa...
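The manifold extension itself is not spelled out in this snippet. As a baseline, classical RPCA (principal component pursuit) separates a low-rank part L and a sparse part S via ADMM; a minimal NumPy sketch follows. This is the standard algorithm, not the paper's manifold variant, and `mu` and the iteration count are illustrative choices:

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise soft-thresholding: prox of tau * ||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: prox of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(M, lam=None, mu=1.0, n_iter=200):
    # Classical principal component pursuit via ADMM:
    #   min ||L||_* + lam * ||S||_1   s.t.  L + S = M.
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # standard default weight
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```

The low-rank model for the non-sparse component is what the paper replaces with a manifold model.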

#1 Xiaorui Liu (MSU: Michigan State University), H-Index: 4

Last: Ming Yan (MSU: Michigan State University), H-Index: 15

view all 4 authors...

Large-scale machine learning models are often trained by parallel stochastic gradient descent algorithms. However, the communication cost of gradient aggregation and model synchronization between the master and worker nodes becomes the major obstacle to efficient learning as the number of workers and the dimension of the model increase. In this paper, we propose DORE, a DOuble REsidual compression stochastic gradient descent algorithm, to reduce over 95% of the overall communication such tha...
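The "double residual" idea compresses messages in both directions while carrying the compression error forward. Below is a generic error-feedback (residual) compressor in NumPy, a simplified single-direction sketch with an illustrative top-k compressor, not the exact DORE update:

```python
import numpy as np

def topk_compress(v, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

class ResidualCompressor:
    # Error-feedback compression: the part of the gradient lost to
    # compression is stored locally and added back next round, so no
    # gradient mass is permanently discarded.
    def __init__(self, dim, k):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad):
        corrected = grad + self.residual   # add back previous error
        sent = topk_compress(corrected, self.k)
        self.residual = corrected - sent   # store new compression error
        return sent
```

The invariant is that everything sent so far plus the stored residual equals the sum of the raw gradients, which is what keeps compressed training from silently dropping signal.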

#1 Ningyu Sha (MSU: Michigan State University), H-Index: 1

#2 Ming Yan (MSU: Michigan State University), H-Index: 15

Last: Youzuo Lin (LANL: Los Alamos National Laboratory), H-Index: 10

view all 3 authors...

Sep 20, 2019 in ICIP (International Conference on Image Processing)

#1 Jun Liu (Northeast Normal University), H-Index: 1

#2 Ming Yan (MSU: Michigan State University)

Last: Tieyong Zeng (CUHK: The Chinese University of Hong Kong), H-Index: 1

view all 4 authors...

Image smoothing is an important topic in image processing, and the L0 gradient minimization method is one of the most popular approaches. However, it suffers from the staircase effect and an over-sharpening issue, which severely degrade the quality of the smoothed image. To overcome these issues, we use not only the L0 gradient term for finding edges, but also a surface-area-based term for smoothing the inside of each re...
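For reference, the L0 gradient minimization model this abstract builds on (popularized by Xu et al.'s "Image Smoothing via L0 Gradient Minimization") seeks a smoothed image S close to the input I while penalizing the count of non-zero gradients:

```latex
\min_{S} \; \sum_{p} (S_p - I_p)^2 + \lambda \, C(S),
\qquad
C(S) = \#\{\, p : |\partial_x S_p| + |\partial_y S_p| \neq 0 \,\}
```

The counting term C(S) flattens regions aggressively, which is the source of the staircase and over-sharpening artifacts that the added surface-area term is meant to counteract.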

#1 Jun Liu (Northeast Normal University), H-Index: 1

#2 Ming Yan (MSU: Michigan State University), H-Index: 15

Last: Tieyong Zeng (CUHK: The Chinese University of Hong Kong), H-Index: 1

view all 3 authors...

Blind image deblurring is ill-posed because infinitely many pairs of latent image and blur kernel can explain the same blurred image. To obtain a stable and reasonable deblurred image, proper prior knowledge of the latent image and the blur kernel is required. Unlike recent works built on statistical observations of the difference between the blurred image and the clean one, our method is based on a surface-aware strategy derived from intrinsic geometrical considerations. This approach facilitates the blur kernel es...

#1 Xiaolin Huang (SJTU: Shanghai Jiao Tong University), H-Index: 16

#2 Haiyan Yang (SJTU: Shanghai Jiao Tong University), H-Index: 1

Last: Ming Yan (MSU: Michigan State University), H-Index: 15

view all 7 authors...

When a measurement falls outside the quantization or measurable range, it becomes saturated and cannot be used in conventional signal recovery methods. Aiming at acquiring information from both noisy saturated and regular measurements, in this paper we propose a new signal recovery method called mixed one-bit compressive sensing (M1bit-CS) and develop an efficient algorithm in the framework of the alternating direction method of multipliers (ADMM). Numerical experiments on one-dimensional symmetric si...
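The specific splitting used for mixed one-bit measurements is in the paper; for background, the generic scaled-form ADMM iteration for min f(x) + g(z) subject to Ax + Bz = c (the framework named in the abstract) reads:

```latex
x^{k+1} = \operatorname*{arg\,min}_{x} \; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|_2^2 \\
z^{k+1} = \operatorname*{arg\,min}_{z} \; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|_2^2 \\
u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c
```

Here u is the scaled dual variable and rho > 0 the penalty parameter; the method alternates the two partial minimizations with a dual ascent step.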

A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates

This paper proposes a novel proximal-gradient algorithm for a decentralized optimization problem with a composite objective containing smooth and nonsmooth terms. Specifically, the smooth and nonsmooth terms are dealt with by gradient and proximal updates, respectively. The proposed algorithm is closely related to a previous algorithm, PG-EXTRA (W. Shi, Q. Ling, G. Wu, and W. Yin, “A proximal gradient algorithm for decentralized composite optimization,” IEEE Trans. Signal Process., vol. 63, no. ...
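To illustrate the gradient/proximal split the abstract describes, here is the centralized prototype of a proximal-gradient iteration on a composite objective, using the LASSO as the concrete instance. This is an illustrative single-machine sketch; the paper's contribution is the decentralized version with network-independent step sizes:

```python
import numpy as np

def soft_threshold(x, tau):
    # Prox of tau * ||.||_1: shrink each entry toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, alpha, n_iter=500):
    # Proximal-gradient iteration for the composite problem
    #   min_x  0.5 * ||Ax - b||^2 + lam * ||x||_1:
    # a gradient step on the smooth term, then the prox (here a
    # soft-threshold) of the nonsmooth term. Requires the step size
    # alpha < 1 / ||A^T A|| for convergence.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # gradient step (smooth part)
        x = soft_threshold(x - alpha * grad, alpha * lam)  # prox step (nonsmooth part)
    return x
```

In the decentralized setting each agent runs the same two-step update on its local data, with an extra mixing step over the network to reach consensus.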

Decentralized algorithms solve multi-agent problems over a connected network, where the information can only be exchanged with accessible neighbors. Though there exist several decentralized optimization algorithms, there are still gaps in convergence conditions and rates between decentralized algorithms and centralized ones. In this paper, we fill some gaps by considering two decentralized consensus algorithms: EXTRA and NIDS. Both algorithms converge linearly with strongly convex functions. We ...
