
Efficient Randomized Defense against Adversarial Attacks in Deep Convolutional Neural Networks

Published on May 1, 2019 in ICASSP (International Conference on Acoustics, Speech, and Signal Processing)
· DOI :10.1109/ICASSP.2019.8683348
Fatemeh Sheikholeslami (UMN: University of Minnesota),
Swayambhoo Jain (UMN: University of Minnesota),
Georgios B. Giannakis (UMN: University of Minnesota)
Despite their well-documented learning capabilities in clean environments, deep convolutional neural networks (CNNs) are extremely fragile in adversarial settings, where carefully crafted perturbations created by an attacker can easily disrupt the task at hand. Numerous methods have been proposed for designing effective attacks, while the design of effective defense schemes remains an open problem. This work leverages randomization-based defense schemes to introduce a sampling mechanism for strong and efficient defense. To this end, sampling is performed over the matricized mid-layer data in the neural network, and the sampling probabilities are systematically obtained via variance minimization. The proposed defense only requires adding sampling blocks to the network in the inference phase, with no extra overhead in training. In addition, it can be applied to any pre-trained network without altering the weights. Numerical tests corroborate the improved defense against various attack schemes in comparison with state-of-the-art randomized defenses.
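The abstract describes sampling over matricized mid-layer activations with probabilities chosen by variance minimization. As a rough illustration only (the paper's exact scheme is not reproduced here), the sketch below uses the standard variance-minimizing importance-sampling rule for matrix column sampling, where each column is drawn with probability proportional to its squared norm and rescaled so the result is an unbiased estimate of the original activation matrix; the function name and interface are hypothetical.

```python
import numpy as np

def randomized_sampling_block(X, k, rng=None):
    """Hypothetical inference-time sampling block (illustrative sketch).

    X : (d, n) matricized mid-layer activations, e.g. rows = channels,
        columns = spatial positions.
    k : number of column draws to keep.

    Columns are drawn with probability proportional to their squared
    norms (the variance-minimizing importance-sampling choice for
    column subset selection) and rescaled by 1 / (k * p_j), so that
    the expectation of the returned matrix equals X.
    """
    rng = np.random.default_rng() if rng is None else rng
    col_norms = np.sum(X ** 2, axis=0)
    total = col_norms.sum()
    if total == 0.0:
        return X.copy()          # nothing to sample from
    p = col_norms / total        # variance-minimizing probabilities
    idx = rng.choice(X.shape[1], size=k, replace=True, p=p)
    X_sampled = np.zeros_like(X)
    # Accumulate rescaled draws so that E[X_sampled] == X.
    for j in idx:
        X_sampled[:, j] += X[:, j] / (k * p[j])
    return X_sampled
```

Because the block is stochastic and applied only at inference, it could be inserted after any layer of a pre-trained network without retraining, consistent with the paper's stated deployment model.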