
Research and statistical analysis of attacks on neural networks in technical vision tasks

Authors: Kapitonova L.I., Ushakova A.A., Shalna N.A., Storozheva A.A.
Published in issue: #2(31)/2019
DOI: 10.18698/2541-8009-2019-2-443


Category: Informatics, Computer Engineering and Control | Chapter: Methods and Systems of Information Protection, Information Security

Keywords: neural network, dataset, adversarial attack, protection against attacks, neural network vulnerabilities, gradient method, pixel attacks, security administrator
Published: 15.02.2019

Different types of attacks on neural networks in technical vision tasks are considered, compared and classified. Attacks from the class of “adversarial images”, such as gradient-based attacks and pixel attacks, are analyzed. Statistics on the use of publicly available datasets for training neural networks are examined, and from these statistics the dependence of the probability of a successful attack on the choice of publicly available dataset is derived. The most effective methods of protection against the various types of attacks on neural networks are identified and analyzed.
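The two attack families named in the abstract differ mainly in what the attacker is assumed to know. Gradient-based attacks [5, 6] use the model's own loss gradient to craft a small perturbation of the input image, while the one-pixel attack of Su et al. [8] is a black-box method that searches, via differential evolution, for a single pixel whose modification changes the prediction. Below is a minimal sketch of a gradient-sign perturbation in the spirit of the first family, assuming a PyTorch image classifier; the model, the [0, 1] pixel range and the epsilon value are illustrative assumptions, not details taken from the article.

# Minimal sketch of a gradient-based adversarial perturbation (FGSM-style).
# The classifier, its weights and the input batch are placeholders; only the
# perturbation logic itself is shown.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of `image` shifted along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back to the
    # assumed valid pixel range [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier` is any trained image classifier, and
# `x`, `y` are a batch of normalized images with their true class indices.
# x_adv = fgsm_attack(classifier, x, y, epsilon=0.03)
# print(classifier(x_adv).argmax(dim=1))   # often differs from y

A perturbation of this kind requires access to the model's gradients, which is why defenses such as adversarial training or defensive distillation [12] aim either to make the model robust to small input changes or to dampen the gradient signal an attacker can exploit.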


References

[1] Komashinskiy V.I., Smirnov D.A. Neyronnye seti i ikh primenenie v sistemakh upravleniya i svyazi [Neural networks and their application in control and communication systems]. Moscow, Goryachaya liniya-Telekom Publ., 2002 (in Russ.).

[2] Kruglov V.V., Borisov V.V. Iskusstvennye neyronnye seti. Teoriya i praktika [Artificial neural networks. Theory and practice]. Moscow, Goryachaya liniya-Telekom Publ., 2001 (in Russ.).

[3] Rutkovskaya D., Pilin’skiy M., Rutkovskiy L. Neyronnye seti, geneticheskie algoritmy i nechetkie sistemy [Neural networks, genetic algorithms and fuzzy systems]. Moscow, Goryachaya liniya-Telekom Publ., 2006 (in Russ.).

[4] Galushkin A.I. Teoriya neyronnykh setey [Neural networks theory]. Moscow, IPRZhR Publ., 2000 (in Russ.).

[5] Gomes J. Adversarial attacks and defences for convolutional neural networks. medium.com: website. URL: https://medium.com/onfido-tech/adversarial-attacks-and-defences-for-convolutional-neural-networks-66915ece52e7 (accessed: 03.06.2018).

[6] Nguyen A., Yosinski J., Clune J. Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. Proc. IEEE Conf. CVPR, 2015. DOI: 10.1109/CVPR.2015.7298640. URL: https://ieeexplore.ieee.org/document/7298640

[7] Chan-Hon-Tong A. On the simplicity to produce falsified deep learning results. hal.archives-ouvertes.fr: website. URL: https://hal.archives-ouvertes.fr/hal-01676691v1 (accessed: 29.06.2018).

[8] Su J., Vargas D.V., Sakurai K. One pixel attack for fooling deep neural networks. arxiv.org: website. URL: https://arxiv.org/pdf/1710.08864.pdf (accessed: 07.07.2018).

[9] Moosavi-Dezfooli S.M., Fawzi O., Fawzi A., et al. Universal adversarial perturbations. arxiv.org: website. URL: https://arxiv.org/pdf/1610.08401.pdf (accessed: 28.07.2018).

[10] Huang S., Papernot N., Goodfellow I., et al. Adversarial attacks on neural network policies. arxiv.org: website. URL: https://arxiv.org/abs/1702.02284 (accessed: 28.07.2018).

[11] Papernot N., McDaniel P., Goodfellow I., et al. Practical black-box attacks against machine learning. arxiv.org: website. URL: https://arxiv.org/abs/1602.02697 (accessed: 28.07.2018).

[12] Papernot N., McDaniel P. Extending defensive distillation. arxiv.org: website. URL: https://arxiv.org/abs/1705.05264 (accessed: 28.07.2018).