Joint Class and Domain Continual Learning for Decentralized Federated Processes

Bibliographic record details
Title: Joint Class and Domain Continual Learning for Decentralized Federated Processes
Authors: Chiara Lanza, Tinne Tuytelaars, Marco Miozzo, Eduard Angelats, Paolo Dini
Source: UPCommons. Portal del coneixement obert de la UPC
Universitat Politècnica de Catalunya (UPC)
IEEE Access, Vol. 13, pp. 56982-56993 (2025)
Publisher: Institute of Electrical and Electronics Engineers (IEEE), 2025.
Publication year: 2025
Subject terms: Continual learning, Class incremental, Domain incremental, Federated learning, Decentralized learning, Distributed learning, Knowledge distillation, STRATEGIES, Science & Technology, Technology, Engineering, Computer Science, Information Systems, Computer Science, Telecommunications, Engineering, Electrical & Electronic, Electrical engineering. Electronics. Nuclear engineering, TK1-9971, 08 Information and Computing Sciences, 09 Engineering, 10 Technology, 40 Engineering, 46 Information and computing sciences, Àrees temàtiques de la UPC::Informàtica::Arquitectura de computadors, Àrees temàtiques de la UPC::Ensenyament i aprenentatge::TIC's aplicades a l'educació
Description: Continual learning (CL) is attracting attention as a way to learn in dynamic and non-stationary environments, typical of many real applications such as those related to the Internet of Things (IoT). So far, learning in IoT scenarios has almost exclusively been studied in a centralized fashion, but CL can be used in decentralized settings involving a multitude of nodes that learn, adapt, and generalize from their streams of data, even with limited communication and computation capabilities. In this work, we formulate a joint Class and Domain Continual Learning problem, where local data at the nodes might include multiple classes belonging to different domains. In our solution, we adopt knowledge distillation (KD) to share knowledge extracted from local models trained with the data at each node and to incrementally generate a global model, while preserving privacy and avoiding centralization bottlenecks. Thanks to KD, the learning process can be performed without any prior information about the model architecture, i.e., every local model can be based on a different architecture, which can also differ from the global one. The learning process is also agnostic to the distribution of domains and classes across the nodes. With these features, our distributed learning solution may assist in privacy preservation and save communication resources, which are fundamental challenges in IoT scenarios. Compared against two baseline losses based on state-of-the-art CL solutions on different datasets (MNIST, USPS, DomainNet), our proposal outperforms the baselines in accuracy, with gains ranging from 15.9% up to 95.4%, by limiting catastrophic forgetting.
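As context for the KD mechanism the abstract centers on, the sketch below shows a standard soft-label knowledge distillation loss of the kind such a KD-based aggregation could build on. It is a minimal illustration assuming PyTorch, not the authors' implementation; the names distillation_loss, student_logits, teacher_logits, global_model, local_model, and the temperature T are hypothetical.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Knowledge distillation: KL divergence between the
    # temperature-softened output distributions of a teacher
    # (e.g., a node's local model) and a student (e.g., the
    # incrementally built global model). Only model outputs are
    # exchanged, so architectures may differ and raw data stays local.
    student_log_probs = F.log_softmax(student_logits / T, dim=1)
    teacher_probs = F.softmax(teacher_logits / T, dim=1)
    # The T^2 rescaling keeps gradient magnitudes comparable to a
    # plain cross-entropy term when the two losses are combined.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (T * T)

# Hypothetical usage on a batch x of shared inputs:
#   loss = distillation_loss(global_model(x), local_model(x).detach())

Because only output distributions are compared, this kind of loss works across heterogeneous local architectures, which is the property the abstract highlights.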
This work was supported in part by Spanish Project through Ministerio de Ciencia, Innovación y Universidades (MCIN)/Agencia Estatal de Investigación (AEI)/10.13039/50110001103 under Grant PID2020-113832RB-C22(ORIGIN), and in part by European Union Horizon 2020 Research and Innovation Program through PCI2021-122043-2A/AEI/10.13039/501100011033 under Grant 953775 (GREENEDGE) and Grant CHIST-ERA-20-SICT-004 (SONATA).
Document type: Article (Journal)
File description: application/pdf
ISSN: 2169-3536
DOI: 10.1109/access.2025.3555750
DOI: 10.5281/zenodo.14833259
DOI: 10.5281/zenodo.12532156
DOI: 10.5281/zenodo.12532157
Access link: https://doaj.org/article/5d2697745f6446569e812ec3ca5842ad
Rights: CC BY
Accession number: edsair.doi.dedup.....bce840217ad742dacbfbf35d992d2356
Database: OpenAIRE