Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization

Volume: 67, Issue: 9, Pages: 2357 - 2370
Published: May 1, 2019
Abstract
Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either impose assumptions on the underlying data distribution (e.g., that it is Gaussian) or require the network size and/or training size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in...
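To make the setting of the abstract concrete, the following is a minimal sketch (not the paper's algorithm) of the object being studied: a two-layer ReLU network trained by plain gradient descent on the hinge loss over synthetic linearly separable data. All sizes, the fixed-second-layer simplification, the learning rate, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data: +/-1 labels from a ground-truth halfspace w_star.
n, d, k = 200, 5, 10                  # samples, input dim, hidden width (assumed)
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)               # separable by construction

W = rng.normal(size=(k, d)) * 0.1     # hidden-layer weights
v = rng.choice([-1.0, 1.0], size=k)   # fixed second layer (a common simplification)

lr = 0.1
for _ in range(1000):
    H = np.maximum(X @ W.T, 0.0)      # ReLU hidden activations, shape (n, k)
    f = H @ v                         # network outputs f(x) = v^T relu(W x)
    margin = y * f
    # Mean hinge loss L = mean(max(0, 1 - y * f)); dL/df_i = -y_i/n on violators.
    coef = np.where(margin < 1.0, -y, 0.0) / n   # (n,)
    mask = (X @ W.T > 0.0)            # ReLU derivative, (n, k)
    grad_W = ((mask * v) * coef[:, None]).T @ X  # (k, d)
    W -= lr * grad_W

H = np.maximum(X @ W.T, 0.0)
acc = float(np.mean(np.sign(H @ v) == y))        # training accuracy
```

On separable data this vanilla gradient-descent sketch typically drives the training error close to zero; the paper's contribution concerns when and why such training is optimal and generalizes, which this toy loop does not establish.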