Adversarial Examples on Object Recognition

Volume: 53, Issue: 3, Pages: 1 - 38
Published: Jun 12, 2020
Abstract
Deep neural networks are at the forefront of machine learning research. However, despite achieving impressive performance on complex tasks, they can be very sensitive: Small perturbations of inputs can be sufficient to induce incorrect behavior. Such perturbations, called adversarial examples, are intentionally designed to test the network’s sensitivity to distribution drifts. Given their surprisingly small size, a wide body of literature...
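The abstract's "small perturbations of inputs" are typically crafted by following the gradient of the loss with respect to the input. As a minimal sketch (not the paper's own method), the fast gradient sign method (FGSM) applied to a tiny logistic-regression stand-in for a network illustrates the idea; the model, weights, and epsilon here are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Nudge x by eps in the direction that increases the cross-entropy loss.

    For logistic regression, d(loss)/dz = p - y and z = w @ x + b,
    so the input gradient is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # illustrative "network" weights
b = 0.0
x = rng.normal(size=8)          # a clean input
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # take the model's label as ground truth

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
print(np.max(np.abs(x_adv - x)))                  # perturbation bounded by eps
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b)) # confidence before vs. after
```

The perturbation is bounded in the max norm by `eps`, yet it strictly increases the loss on the true label, which is the sense in which adversarial examples are "surprisingly small".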