
DOI: 10.14489/vkit.2020.02.pp.032-038


Sakulin S. A., Alfimtsev A. N., Loktev D. A., Kovalenko A. O., Devyatkov V. V.
USAGE OF ADVERSARIAL EXAMPLES TO PROTECT A HUMAN IMAGE FROM BEING DETECTED BY RECOGNITION SYSTEMS BASED ON DEEP NEURAL NETWORKS
(pp. 32-38)

Abstract. Recently, human recognition systems based on deep machine learning, in particular on deep neural networks, have become widespread. In this regard, research into protection against recognition by such systems has become relevant. This article proposes a method for designing a specially selected type of camouflage, applied to clothing, that protects a person both from recognition by a human observer and from a deep neural network recognition system. This type of camouflage is constructed on the basis of adversarial examples generated by a deep neural network. The article describes experiments on protecting a person from recognition by the Faster-RCNN (Region-based Convolutional Neural Network) Inception V2 and Faster-RCNN ResNet101 systems. The camouflage is considered at two levels: the macro level, which assesses the combination of the camouflage and the background, and the micro level, which analyzes the relationships between the properties of individual camouflage regions and those of adjacent regions, subject to constraints on their continuity, smoothness, closure, and asymmetry. The dependence of the camouflage characteristics on the conditions under which the object is observed and on the environment is also considered: the transparency of the atmosphere, the pixel intensities of the sky at the horizon and of the background, the contrast between the background and the camouflaged object, and the distance to the object. As an example of a possible attack, a “black box” attack is considered, in which the generated adversarial examples are first tested on a target recognition system without knowledge of the internal structure of that system. The results of these experiments showed the high efficiency of the proposed method in the virtual world, where there is access to every pixel of the image supplied to the system's input. In the real world the results are less impressive, which can be explained by the distortion of colors when printing on fabric, as well as by the insufficient spatial resolution of such printing.

Keywords: human detection; deep learning; adversarial examples; deep neural networks.
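To make the attack setting concrete, below is a minimal illustrative sketch of a gradient-based attack that suppresses “person” detections in a single image. It is not the authors' camouflage-construction method: it assumes PyTorch with torchvision's pretrained Faster R-CNN (ResNet-50 backbone) as a stand-in for the detectors named above, and the helper suppress_person is a hypothetical name introduced here for illustration.

import torch
import torchvision

# COCO label map used by torchvision detection models: class 1 is "person".
COCO_PERSON = 1

# Illustrative sketch only, not the paper's method: a stand-in detector
# with a ResNet-50 backbone instead of the paper's ResNet101 / Inception V2.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only need gradients w.r.t. the image

def suppress_person(image, steps=10, eps=8 / 255, alpha=2 / 255):
    """Iteratively nudge pixels to lower the detector's person scores
    (an iterative FGSM-style attack, bounded by an L-infinity budget eps)."""
    x = image.clone()
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        pred = model([x])[0]                  # dict with boxes, labels, scores
        person = pred["labels"] == COCO_PERSON
        if not person.any():                  # nothing left to suppress
            break
        loss = pred["scores"][person].sum()   # total person confidence
        loss.backward()
        with torch.no_grad():
            x = x - alpha * x.grad.sign()             # descend on confidence
            x = image + (x - image).clamp(-eps, eps)  # stay within the budget
            x = x.clamp(0.0, 1.0)                     # keep a valid image
    return x.detach()

# Usage: img is a float tensor of shape (3, H, W) with values in [0, 1];
# adv = suppress_person(img) is a visually similar image on which the
# detector's person detections are weakened or removed.

In the paper's terms this corresponds to the virtual-world case, where every input pixel can be perturbed directly; transferring such perturbations onto printed fabric is exactly where the abstract reports weaker results.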

S. A. Sakulin, A. N. Alfimtsev, D. A. Loktev, A. O. Kovalenko, V. V. Devyatkov (Bauman Moscow State Technical University, Moscow, Russia)


References

1. Zalesskiy B. A., Kravchonok A. I. (2019). Tracking and recognition of moving objects based on their cluster representation. Informatika, 2(02), pp. 68 – 78. [in Russian language]
2. Nguyen D. T., Li W., Ogunbona P. O. (2016). Human Detection from Images and Videos: a Survey. Pattern Recognition, Vol. 51, pp. 148 – 175.
3. Akhtar N., Mian A. (2018). Threat of Adversarial Attacks on Deep Learning in Computer Vision: a Survey. IEEE Access, Vol. 6, pp. 14410 – 14430. doi: 10.1109/ACCESS.2018.2807385
4. Yamada T., Gohshi S., Echizen I. (2013). Privacy Visor: Method for Preventing Face Image Detection by Using Differences in Human and Device Sensitivity. International Conference on Communications and Multimedia Security, Vol. 8099, pp. 152 – 161. Berlin, Heidelberg: Springer.
5. Sharif M. et al. (2016). Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528 – 1540. ACM.
6. Papernot N., McDaniel P., Goodfellow I. (2016). Transferability in Machine Learning: from Phenomena to Black-Box Attacks Using Adversarial Samples. arXiv preprint arXiv:1605.07277.
7. Zheng Y. et al. (2018). Detection of People With Camouflage Pattern Via Dense Deconvolution Network. IEEE Signal Processing Letters, Vol. 26, (1), pp. 29 – 33.
8. Hu W. T. et al. (2017). Design of Enhanced Camouflage Pattern Painting and Camouflage Effect Experiment. Proceedings of the 2016 International Conference on Advanced Materials and Energy Sustainability (AMES’2016), pp. 576 – 581.
9. Papernot N. et al. (2017). Practical Black-Box Attacks Against Machine Learning. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506 – 519. ACM.
10. Su J., Vargas D. V., Sakurai K. (2019). One Pixel Attack for Fooling Deep Neural Networks. IEEE Transactions on Evolutionary Computation. arXiv:1710.08864v6 [cs.LG].
11. Mopuri K. R. et al. (2018). NAG: Network for Adversary Generation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 742 – 751.
12. Kurakin A., Goodfellow I., Bengio S. (2016). Adversarial Examples in the Physical World. arXiv preprint arXiv:1607.02533.
13. Goodfellow I. et al. (2014). Generative Adversarial Nets. Advances in Neural Information Processing Systems, pp. 2672 – 2680.
14. Kovalev V. A., Kozlovskiy S. A., Kalinovskiy A. A. (2018). Generation of artificial x-ray images of the chest using generative adversarial neural networks. Informatika, Vol. 15, (2), pp. 7 – 16. [in Russian language]
15. Hitawala S. (2018). Comparative Study on Generative Adversarial Networks. arXiv preprint arXiv:1801.04271.
16. Radford A., Metz L., Chintala S. (2015). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434.
17. Mirza M., Osindero S. (2014). Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784.


This article is available in electronic format (PDF).

The cost of a single article is 350 rubles (including 18 % VAT). Within a few days after you place an order, an invoice and a receipt for bank payment will be sent to the e-mail address you specified.

Once the payment is received in the publisher's bank account, an electronic copy of the article will be sent to you by e-mail.

To order the article, copy its DOI:

10.14489/vkit.2020.02.pp.032-038

and fill out the form.

By submitting the form, you consent to the processing of your personal data.

 
