2025-02-26 United States, New York University (NYU)
<Related information>
- https://engineering.nyu.edu/news/self-driving-cars-learn-share-road-knowledge-through-digital-word-mouth
- https://arxiv.org/abs/2408.14001
Decentralized Federated Learning with Model Caching on Mobile Agents
Xiaoyu Wang, Guojun Xiong, Houwei Cao, Jian Li, Yong Liu
arXiv last revised 4 Feb 2025 (this version, v2)
DOI: https://doi.org/10.48550/arXiv.2408.14001
Abstract
Federated Learning (FL) trains a shared model using data and computation power on distributed agents coordinated by a central server. Decentralized FL (DFL) utilizes local model exchange and aggregation between agents to reduce the communication and computation overheads on the central server. However, when agents are mobile, the communication opportunity between agents can be sporadic, largely hindering the convergence and accuracy of DFL. In this paper, we propose Cached Decentralized Federated Learning (Cached-DFL) to investigate delay-tolerant model spreading and aggregation enabled by model caching on mobile agents. Each agent stores not only its own model, but also models of agents encountered in the recent past. When two agents meet, they exchange their own models as well as the cached models. Local model aggregation utilizes all models stored in the cache. We theoretically analyze the convergence of Cached-DFL, explicitly taking into account the model staleness introduced by caching. We design and compare different model caching algorithms for different DFL and mobility scenarios. We conduct detailed case studies in a vehicular network to systematically investigate the interplay between agent mobility, cache staleness, and model convergence. In our experiments, Cached-DFL converges quickly, and significantly outperforms DFL without caching.
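The abstract's core mechanism — each agent caches recently encountered models, swaps its own and cached models on every encounter, and aggregates over everything in its cache — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the class and method names are hypothetical, models are plain lists of floats, simple averaging stands in for the paper's aggregation rule, and staleness is bounded by evicting the entry with the oldest logical timestamp.

```python
class Agent:
    """Mobile agent for a Cached-DFL-style protocol (illustrative sketch).

    Each agent stores its own model plus models of agents it met recently,
    tagged with a logical timestamp so staleness can be controlled by
    evicting the oldest cached entries.
    """

    def __init__(self, agent_id, model, cache_size=3):
        self.id = agent_id
        self.model = model          # own model: a list of floats (toy stand-in)
        self.model_time = 0         # logical timestamp of the own model
        self.cache = {}             # other agent id -> (timestamp, model)
        self.cache_size = cache_size

    def local_update(self, gradient, lr=0.1):
        # One toy SGD step on the agent's own model.
        self.model = [w - lr * g for w, g in zip(self.model, gradient)]
        self.model_time += 1

    def _insert(self, other_id, timestamp, model):
        # Keep only the freshest copy per agent; evict the stalest when full.
        old = self.cache.get(other_id)
        if old is not None and old[0] >= timestamp:
            return
        self.cache[other_id] = (timestamp, model)
        while len(self.cache) > self.cache_size:
            stalest = min(self.cache, key=lambda k: self.cache[k][0])
            del self.cache[stalest]

    def meet(self, other):
        # On an encounter, both agents exchange their own models *and*
        # everything currently cached (delay-tolerant model spreading).
        mine = [(self.id, self.model_time, self.model)] + \
               [(k, t, m) for k, (t, m) in self.cache.items()]
        theirs = [(other.id, other.model_time, other.model)] + \
                 [(k, t, m) for k, (t, m) in other.cache.items()]
        for k, t, m in theirs:
            if k != self.id:
                self._insert(k, t, m)
        for k, t, m in mine:
            if k != other.id:
                other._insert(k, t, m)

    def aggregate(self):
        # Local aggregation over the own model and all cached models
        # (uniform averaging here; the paper's rule may differ).
        models = [self.model] + [m for (_, m) in self.cache.values()]
        self.model = [sum(ws) / len(models) for ws in zip(*models)]
```

For example, after two agents meet, each holds the other's model in its cache and aggregation averages them, so sporadic encounters still propagate model information through third parties carrying cached copies.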