A Hitchhiker’s Guide On Distributed Training Of Deep Neural Networks

Volume: 137, Pages: 65 - 76
Published: Mar 1, 2020
Abstract
Deep learning has led to tremendous advancements in the field of Artificial Intelligence. One caveat, however, is the substantial amount of compute needed to train these deep learning models. Training a model on a benchmark dataset like ImageNet on a single machine with a modern GPU can take up to a week, while distributing training across multiple machines has been observed to bring this time down drastically. Recent work has brought down ImageNet training time...
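The speedup the abstract alludes to typically comes from synchronous data parallelism: each worker computes gradients on its own shard of the data, the gradients are averaged across workers (an all-reduce), and every worker applies the same update. A minimal pure-Python sketch of that averaging step, with no real networking and purely illustrative gradient values:

```python
def allreduce_mean(worker_grads):
    """Average per-parameter gradients across workers (simulated all-reduce)."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers for i in range(n_params)]

def sgd_step(params, grads, lr=0.1):
    """One synchronous SGD update using the averaged gradients."""
    return [p - lr * g for p, g in zip(params, grads)]

# Hypothetical per-worker gradients from three data shards.
worker_grads = [[0.2, -0.4], [0.4, 0.0], [0.0, -0.2]]
avg = allreduce_mean(worker_grads)
params = sgd_step([1.0, 1.0], avg)
```

In a real system the averaging is done by a collective communication primitive (e.g. ring all-reduce) rather than a Python loop, but the arithmetic is the same: all workers end the step with identical parameters.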