Ming Yu
University of Chicago
Topics: Machine learning, Matrix (mathematics), Information cascade, Mathematics, Computer science
11 Publications
3 H-index
33 Citations
Publications (14)
Ming Yu, Varun Gupta, Mladen Kolar (3 authors)
Dec 8, 2019 in NeurIPS (Neural Information Processing Systems)
Ming Yu (University of Chicago), Zhuoran Yang (Princeton University), ..., Zhaoran Wang (Northwestern University) (4 authors)
We study the safe reinforcement learning problem with nonlinear function approximation, where policy optimization is formulated as a constrained optimization problem with both the objective and the constraint being nonconvex functions. For such a problem, we construct a sequence of surrogate convex constrained optimization problems by replacing the nonconvex functions locally with convex quadratic functions obtained from policy gradient estimators. We prove that the solutions to these surrogate ...
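The approach sketched in this abstract is a form of successive convex approximation: at each iterate, the nonconvex objective and constraint are replaced by local convex quadratic models built from gradient estimates, and the resulting convex subproblem is solved. Below is a minimal, generic sketch of that idea, not the paper's algorithm; the functions `f` and `c` and the finite-difference gradients are toy stand-ins for the return and safety cost, whose gradients would in practice come from policy-gradient estimators.

```python
# Minimal sketch of successive convex approximation for a constrained,
# nonconvex problem (illustrative only, not the paper's exact method).
import numpy as np
from scipy.optimize import minimize

def quad_model(val, grad, center, tau):
    """Convex quadratic model of a nonconvex function around `center`."""
    return lambda x: val + grad @ (x - center) + 0.5 * tau * np.sum((x - center) ** 2)

def sca_step(theta, f, grad_f, c, grad_c, tau=1.0):
    """Solve one surrogate subproblem:  min f_hat(x)  s.t.  c_hat(x) <= 0."""
    f_hat = quad_model(f(theta), grad_f(theta), theta, tau)
    c_hat = quad_model(c(theta), grad_c(theta), theta, tau)
    res = minimize(f_hat, theta, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: -c_hat(x)}])
    return res.x

def num_grad(fn, h=1e-5):
    """Finite-difference gradient, standing in for a policy-gradient estimator."""
    return lambda th: np.array([(fn(th + h * e) - fn(th - h * e)) / (2 * h)
                                for e in np.eye(len(th))])

f = lambda th: np.cos(th[0]) + th[1] ** 2        # toy nonconvex objective
c = lambda th: th[0] ** 2 + th[1] ** 2 - 1.0     # toy constraint, c(th) <= 0

theta = np.array([0.9, 0.3])
for _ in range(20):                              # iterate the convex surrogates
    theta = sca_step(theta, f, num_grad(f), c, num_grad(c))
print(theta)
```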
Ming Yu, Varun Gupta, Mladen Kolar (3 authors)
In typical high dimensional statistical inference problems, confidence intervals and hypothesis tests are performed for a low dimensional subset of model parameters under the assumption that the parameters of interest are unconstrained. However, in many problems, there are natural constraints on model parameters and one is interested in whether the parameters are on the boundary of the constraint or not, e.g., non-negativity constraints for transmission rates in network diffusion. In this paper, ...
3 Citations
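The boundary question raised in this abstract can be illustrated with a toy one-sided test: for a rate constrained to be non-negative, asking whether the parameter sits on the boundary amounts to testing H0: theta = 0 against H1: theta > 0. The sketch below is only that illustration (the estimate and standard error are hypothetical inputs), not the inferential procedure proposed in the paper.

```python
# Toy illustration of "is the parameter on the boundary of a non-negativity
# constraint?" as a one-sided z-test (not the paper's procedure).
from scipy.stats import norm

def boundary_test(theta_hat, se, alpha=0.05):
    """One-sided test of H0: theta = 0 (boundary) vs H1: theta > 0."""
    z = theta_hat / se
    p_value = 1.0 - norm.cdf(z)            # small p-value => evidence off the boundary
    return {"z": z, "p_value": p_value, "on_boundary": p_value >= alpha}

print(boundary_test(theta_hat=0.04, se=0.03))  # hypothetical estimate and SE
```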
Ming Yu (University of Chicago), ..., Aurelie C. Lozano (IBM) (4 authors)
We consider multi-response and multi-task regression models, where the parameter matrix to be estimated is expected to have an unknown grouping structure. The groupings can be along tasks, or features, or both, the last one indicating a bi-cluster or "checkerboard" structure. Discovering this grouping structure along with parameter inference makes sense in several applications, such as multi-response Genome-Wide Association Studies (GWAS). By inferring this additional structure we can obtain val...
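As a rough illustration of the checkerboard setup in this abstract, one can (1) estimate a tasks-by-features coefficient matrix with separate ridge regressions and (2) look for a bi-cluster structure in that matrix with spectral biclustering. This two-step pipeline on simulated data is only a sketch; the paper's point is to infer the grouping jointly with estimation, which this does not do, and the cluster counts below are arbitrary choices.

```python
# Two-step sketch: per-task ridge fits, then a checkerboard grouping of the
# coefficient matrix via spectral biclustering (not the paper's joint method).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import SpectralBiclustering

rng = np.random.default_rng(0)
n, p, k = 200, 30, 8                      # samples, features, tasks
X = rng.normal(size=(n, p))
B_true = np.kron(rng.normal(size=(2, 3)), np.ones((k // 2, p // 3)))  # checkerboard
Y = X @ B_true.T + 0.1 * rng.normal(size=(n, k))

# Tasks-by-features coefficient matrix from separate ridge regressions.
B_hat = np.vstack([Ridge(alpha=1.0).fit(X, Y[:, j]).coef_ for j in range(k)])

model = SpectralBiclustering(n_clusters=(2, 3), random_state=0).fit(B_hat)
print("task groups:   ", model.row_labels_)
print("feature groups:", model.column_labels_)
```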
Ming Yu, Varun Gupta, Mladen Kolar (3 authors)
We consider the problem of estimating the latent structure of a social network based on observed information diffusion events, or cascades. Here, for a given cascade, we only observe the times of infection for infected nodes but not the source of the infection. Most of the existing work on this problem has focused on estimating a diffusion matrix without any structural assumptions on it. In this paper, we propose a novel model based on the intuition that information is more likely to...
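For context, a standard way to estimate a diffusion matrix from infection times alone is to maximize an exponential-hazard cascade likelihood (in the spirit of NetRate-style estimators). The sketch below implements that generic likelihood plus a non-negativity projection step; it is not the structured model proposed in this abstract, and the cascade data shown are hypothetical.

```python
# Generic exponential-hazard cascade likelihood (NetRate-style baseline), not
# the structured model proposed above.  Each cascade is {node: infection_time};
# alpha[i, j] >= 0 is the transmission rate from i to j; T is the observation
# horizon, after which uninfected nodes only contribute survival terms.
import numpy as np

def neg_log_lik(alpha, cascades, n_nodes, T):
    nll = 0.0
    for times in cascades:
        infected = sorted(times, key=times.get)
        for j in range(n_nodes):
            if j in times:
                parents = [i for i in infected if times[i] < times[j]]
                if not parents:                     # the cascade seed
                    continue
                rates = np.array([alpha[i, j] for i in parents])
                gaps = np.array([times[j] - times[i] for i in parents])
                nll += rates @ gaps                 # -log survival up to t_j
                nll -= np.log(rates.sum() + 1e-12)  # -log hazard at infection
            else:
                for i in infected:                  # node j survived until T
                    nll += alpha[i, j] * (T - times[i])
    return nll

def project_nonneg_step(alpha, grad, lr=1e-2):
    """Projected-gradient update keeping transmission rates non-negative."""
    return np.maximum(alpha - lr * grad, 0.0)

cascades = [{0: 0.0, 2: 1.3}, {1: 0.0, 0: 0.7, 2: 2.1}]  # hypothetical cascades
print(neg_log_lik(np.full((3, 3), 0.5), cascades, n_nodes=3, T=5.0))
```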
Ming Yu, Varun Gupta, Mladen Kolar (3 authors)
Probabilistic graphical models provide a flexible yet parsimonious framework for modeling dependencies among nodes in networks. There is a vast literature on parameter estimation and consistent model selection for graphical models. However, in many of the applications, scientists are also interested in quantifying the uncertainty associated with the estimated parameters and selected models, which current literature has not addressed thoroughly. In this paper, we propose a novel estimator for sta...
2 Citations
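One common route to the kind of uncertainty quantification this abstract asks for, in the Gaussian graphical model case, is the de-sparsified (de-biased) graphical lasso, which turns the regularized precision-matrix estimate into an asymptotically normal, entrywise estimator. The sketch below applies that standard recipe to simulated data; it is offered as background only and is not necessarily the estimator proposed in the paper.

```python
# De-sparsified graphical lasso: entrywise confidence interval for one
# precision-matrix entry (standard recipe, not necessarily the paper's method).
import numpy as np
from scipy.stats import norm
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.normal(size=(n, p))
X[:, 1] += 0.6 * X[:, 0]                       # plant one conditional dependence

S = np.cov(X, rowvar=False)                    # sample covariance
Theta = GraphicalLasso(alpha=0.05).fit(X).precision_

Theta_d = 2 * Theta - Theta @ S @ Theta        # de-sparsified (de-biased) estimate
i, j = 0, 1
se = np.sqrt((Theta[i, i] * Theta[j, j] + Theta[i, j] ** 2) / n)
z = norm.ppf(0.975)
print(f"Theta[{i},{j}] ~ {Theta_d[i, j]:.3f} +/- {z * se:.3f}")
```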
Dec 3, 2018 in NeurIPS (Neural Information Processing Systems)
Ming Yu (University of Chicago), Zhuoran Yang (Princeton University), ..., Zhaoran Wang (Princeton University) (5 authors)
The success of machine learning methods heavily relies on having an appropriate representation for the data at hand. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about data. However, recently there has been a surge in approaches that learn how to encode the data automatically in a low-dimensional space. Exponential family embedding provides a probabilistic framework for learning low-dimensional representations for various...
3 Citations
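To make the exponential family embedding framework in this abstract concrete: in the Gaussian case, each observation is modeled through a natural parameter given by the item's embedding dotted with the sum of its context embeddings, which reduces to a squared-error objective in the embeddings. The sketch below only evaluates that objective on hypothetical toy data; it is not the model studied in the paper.

```python
# Gaussian exponential-family-embedding objective on toy data (illustrative
# sketch, not the paper's model): eta_i = rho_i . sum_{j in context(i)} alpha_j.
import numpy as np

def gaussian_efe_loss(x, contexts, rho, alpha):
    """x: (n,) observations; contexts: list of index arrays; rho, alpha: (n, d)."""
    eta = np.array([rho[i] @ alpha[contexts[i]].sum(axis=0) for i in range(len(x))])
    return 0.5 * np.sum((x - eta) ** 2)   # Gaussian negative log-likelihood, up to constants

# Hypothetical toy data: 5 items on a sequence, context = neighbouring items.
rng = np.random.default_rng(2)
x = rng.normal(size=5)
contexts = [np.array([1]), np.array([0, 2]), np.array([1, 3]),
            np.array([2, 4]), np.array([3])]
rho = rng.normal(scale=0.1, size=(5, 2))
alpha = rng.normal(scale=0.1, size=(5, 2))
print(gaussian_efe_loss(x, contexts, rho, alpha))
```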
Ming Yu, Varun Gupta, Mladen Kolar (3 authors)
Traditional work on community detection from observations of information cascades assumes that a single adjacency matrix parametrizes all the observed cascades. However, in reality the connection structure usually does not stay the same across cascades. For example, different people have different topics of interest; therefore, the connection structure would depend on the information/topic content of the cascade. In this paper we consider the case where we observe a sequence of noisy adjacency mat...
1 Citation
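As a point of comparison for this abstract, the simplest baseline when several noisy adjacency matrices are believed to share one community structure is to average them and run spectral clustering on the result; the setting described above is precisely the harder case where the structure varies across cascades. The sketch below implements only that averaging baseline on simulated data.

```python
# Averaging baseline: pool noisy adjacency matrices and spectrally cluster the
# result (a simple reference point, not the paper's method).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)
n, blocks = 30, 2
labels_true = np.repeat(np.arange(blocks), n // blocks)
P = np.where(labels_true[:, None] == labels_true[None, :], 0.6, 0.1)

noisy_adjacencies = [(rng.random((n, n)) < P).astype(float) for _ in range(10)]
A_bar = sum(noisy_adjacencies) / len(noisy_adjacencies)
A_bar = (A_bar + A_bar.T) / 2                    # symmetrize the average

labels_hat = SpectralClustering(n_clusters=blocks, affinity="precomputed",
                                random_state=0).fit_predict(A_bar)
print(labels_hat)
```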