Academic Journal

The impact of adversarial attacks on a computer vision model's perception of images

Bibliographic Details
Title: The impact of adversarial attacks on a computer vision model's perception of images
Authors: R. R. Bolozovskii, A. B. Levina, K. S. Krasov
Source: Научно-технический вестник информационных технологий, механики и оптики (Scientific and Technical Journal of Information Technologies, Mechanics and Optics), Vol 25, Iss 4, Pp 694-702 (2025)
Publisher Information: ITMO University, 2025.
Publication Year: 2025
Collection: LCC:Information technology
Subject Terms: adversarial attacks, computer vision, resnet50, image clustering, knn, hnsw, Information technology, T58.5-58.64
Description: Advances in computer vision have produced powerful models capable of accurately recognizing and interpreting visual information across many domains. However, these models are increasingly vulnerable to adversarial attacks – deliberate manipulations of input data designed to mislead a machine-learning model into producing incorrect recognition results. This article presents the results of an investigation into the impact of various types of adversarial attacks on the ResNet50 model in image classification and clustering tasks. The attacks studied are the Fast Gradient Sign Method, the Basic Iterative Method, Projected Gradient Descent, Carlini & Wagner, Elastic-Net Attacks to Deep Neural Networks, Expectation Over Transformation Projected Gradient Descent, and jitter-based attacks. The Gradient-Weighted Class Activation Mapping (Grad-CAM) method was used to visualize the model's attention regions, and the t-SNE algorithm was applied to visualize clusters in the feature space. Robustness was assessed via the attack success rate using the k-Nearest Neighbors and Hierarchical Navigable Small World algorithms with different similarity metrics. Significant differences were identified in how the attacks affect the model's internal representations and regions of focus. Iterative attack methods are shown to cause substantial changes in the feature space and to strongly distort Grad-CAM visualizations, whereas simpler attacks have less impact. Most clustering configurations proved highly sensitive to perturbations; among the studied approaches, the inner-product metric showed the greatest stability. The results indicate that the robustness of the model depends on the attack parameters and the choice of similarity metric, which manifests in how cluster structures form. The observed feature-space redistributions under targeted attacks suggest avenues for further optimizing clustering algorithms to enhance the resilience of computer-vision systems.
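
The abstract contrasts the one-step Fast Gradient Sign Method with the iterative attacks (BIM, PGD) it reports as more damaging. A minimal PyTorch sketch of FGSM against a pretrained ResNet50 is given below; the epsilon value, the random stand-in image, and the label are illustrative placeholders, not settings from the paper.

```python
import torch
import torchvision.models as models

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x by epsilon * sign(grad of loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step; iterative methods (BIM/PGD) repeat this
    # with a small step size and project back into an epsilon-ball.
    x_adv = x + epsilon * x.grad.sign()
    # Clamping to [0, 1] assumes unnormalized pixel values; a real
    # pipeline would account for the model's input normalization.
    return x_adv.clamp(0, 1).detach()

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in image batch, not paper data
y = torch.tensor([0])            # stand-in label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by epsilon
```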
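
The paper uses Grad-CAM to visualize the model's attention regions. A compact sketch of the standard Grad-CAM computation on ResNet50 follows; hooking layer4 (the last convolutional stage) is the conventional choice and an assumption here, since the record does not state which layer the authors used.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
acts, grads = {}, {}

# Capture layer4 activations on the forward pass and their gradients
# on the backward pass (layer choice is an assumption, see above).
def fwd_hook(_, __, output):
    acts["value"] = output.detach()

def bwd_hook(_, grad_in, grad_out):
    grads["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)  # stand-in input image
score = model(x)[0].max()       # logit of the top predicted class
score.backward()

# Grad-CAM: weight each channel map by its average gradient, sum, ReLU.
weights = grads["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["value"]).sum(dim=1))
cam = cam / cam.max()           # normalized coarse heatmap (here 7x7)
print(cam.shape)
```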
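
Attack success rate is assessed with kNN and HNSW search under different similarity metrics, with the inner product reported as the most stable. The sketch below illustrates one way to compute such a rate using the hnswlib library (an assumption; the record does not name an implementation). The feature arrays and labels are random stand-ins for ResNet50 embeddings of clean and attacked images.

```python
import numpy as np
import hnswlib

# Hypothetical embeddings and labels; shapes are illustrative only.
clean_feats = np.random.rand(1000, 2048).astype(np.float32)
labels = np.random.randint(0, 10, size=1000)
adv_feats = np.random.rand(200, 2048).astype(np.float32)
adv_labels = np.random.randint(0, 10, size=200)

def attack_success_rate(space):
    """Fraction of attacked images whose nearest clean neighbor
    (under the given similarity metric) carries a different label."""
    index = hnswlib.Index(space=space, dim=clean_feats.shape[1])
    index.init_index(max_elements=len(clean_feats),
                     ef_construction=200, M=16)
    index.add_items(clean_feats, np.arange(len(clean_feats)))
    index.set_ef(50)
    nn_ids, _ = index.knn_query(adv_feats, k=1)
    return float(np.mean(labels[nn_ids[:, 0]] != adv_labels))

# The abstract reports the inner-product ('ip') metric as the most stable.
for space in ("l2", "cosine", "ip"):
    print(space, attack_success_rate(space))
```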
Document Type: article
File Description: electronic resource
Language: English; Russian
ISSN: 2226-1494; 2500-0373
Relation: https://ntv.elpub.ru/jour/article/view/494; https://doaj.org/toc/2226-1494; https://doaj.org/toc/2500-0373
DOI: 10.17586/2226-1494-2025-25-4-694-702
Access URL: https://doaj.org/article/b1e9b679dfe74c178885acf90a9ba40d
Accession Number: edsdoj.b1e9b679dfe74c178885acf90a9ba40d
Database: Directory of Open Access Journals