Academic Journal

Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)

Bibliographic record details
Title: Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)
Authors: D. N. Biryukov, A. S. Dudkin
Source: Научно-технический вестник информационных технологий, механики и оптики (Scientific and Technical Journal of Information Technologies, Mechanics and Optics), Vol 25, Iss 3, Pp 373-386 (2025)
Publisher: ITMO University, 2025.
Publication year: 2025
Collection: LCC:Information technology
Subject terms: artificial intelligence, neural networks, deep learning, "black box", explainability, interpretability, XAI, Information technology, T58.5-58.64
Description: The issues of trust in decisions made by intelligent systems are becoming increasingly relevant. A systematic review of Explainable Artificial Intelligence (XAI) methods and tools is presented, aimed at bridging the gap between the complexity of neural networks and the need for results that end users can interpret. A theoretical analysis is carried out of the differences between explainability and interpretability in the context of artificial intelligence, as well as of their role in ensuring the security of decisions made by intelligent systems. It is shown that explainability implies the ability of a system to generate justifications understandable to humans, whereas interpretability focuses on the passive clarity of internal mechanisms. A classification of XAI methods is proposed based on their approach (ante hoc versus post hoc analysis) and the scale of explanations (local versus global). Popular tools, such as Local Interpretable Model-Agnostic Explanations (LIME), Shapley values, and Integrated Gradients, are considered, with an assessment of their strengths and the limits of their applicability. Practical recommendations are given on the choice of methods for various fields and scenarios. The architecture of an intelligent system, based on the V.K. Finn model and adapted to modern requirements for the "transparency" of decisions, is discussed; its key components are the information environment, the problem solver, and the intelligent interface. The trade-off between model accuracy and explainability is considered: transparent models ("glass boxes", for example, decision trees) are inferior in performance to deep neural networks but provide greater confidence in decision-making. Examples of methods and software packages for explaining and interpreting machine learning data and models are provided. It is shown that the further development of XAI is associated with the integration of neuro-symbolic approaches that combine deep learning capabilities with logical interpretability.
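
For context, a minimal sketch follows, not drawn from the article, of the post hoc, local explanation style the abstract attributes to Shapley-value tools. The dataset, model, and use of the open-source shap package are illustrative assumptions.

# A minimal sketch (not from the article) of a post hoc, local explanation,
# the category the review assigns to Shapley-value methods. Assumes the
# shap and scikit-learn packages are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an opaque "black box" model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles:
# each value attributes part of one prediction to one input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# One attribution vector per sample: a local explanation of individual
# decisions, as opposed to a global description of the whole model.
print(shap_values)

In the classification proposed by the authors, this is a post hoc method (applied to an already trained model) that produces local explanations (one attribution per individual prediction), in contrast to ante hoc "glass box" models such as decision trees, which are interpretable by construction.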
Document type: article
File description: electronic resource
Language: English
Russian
ISSN: 2226-1494
2500-0373
Relation: https://ntv.elpub.ru/jour/article/view/461; https://doaj.org/toc/2226-1494; https://doaj.org/toc/2500-0373
DOI: 10.17586/2226-1494-2025-25-3-373-386
Access link: https://doaj.org/article/1c7a5a2a2bc6417bae0374b8a9a6df6e
Accession number: edsdoj.1c7a5a2a2bc6417bae0374b8a9a6df6e
Database: Directory of Open Access Journals