Artificial neurons organize themselves


2025-03-28 Max Planck Society (MPG), Federal Republic of Germany

Image: Artificial neurons organize themselves (© Andreas Schneider, MPI-DS)

Researchers at the Max Planck Institute and the University of Göttingen have developed a new type of artificial neuron, the "infomorphic neuron", that behaves more like a biological nerve cell. Modeled on the pyramidal cells of the cerebral cortex, these neurons learn autonomously by coordinating with neighboring neurons, without any external teacher signal. Learning goals grounded in information theory drive them to reduce informational redundancy and to specialize. Because they offer better interpretability and energy efficiency than conventional artificial neural networks, they are seen as a promising new learning approach for AI.

<Related Information>

A General Framework for Interpretable Neural Learning based on Local Information-Theoretic Goal Functions

Abdullah Makkeh, Marcel Graetz, Andreas C. Schneider, David A. Ehrlich, Viola Priesemann, Michael Wibral
arXiv, last revised 26 Mar 2025 (this version, v3)
DOI:https://doi.org/10.48550/arXiv.2306.02149

Abstract

Despite the impressive performance of biological and artificial networks, an intuitive understanding of how their local learning dynamics contribute to network-level task solutions remains a challenge to date. Efforts to bring learning to a more local scale indeed lead to valuable insights; however, a general constructive approach to describe local learning goals that is both interpretable and adaptable across diverse tasks is still missing. We have previously formulated a local information processing goal that is highly adaptable and interpretable for a model neuron with compartmental structure. Building on recent advances in Partial Information Decomposition (PID), we here derive a corresponding parametric local learning rule, which allows us to introduce ‘infomorphic’ neural networks. We demonstrate the versatility of these networks to perform tasks from supervised, unsupervised and memory learning. By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.
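
The PID framework the abstract refers to can be made concrete with a small example. The following is a minimal sketch, assuming discrete variables and the classic Williams-Beer I_min redundancy measure; the paper builds on a different, more recent PID measure, so this illustrates the structure of the decomposition rather than the exact quantity the authors optimize. All names here are illustrative, not from the authors' code.

import numpy as np

def mutual_information(p_xy):
    # I(X;Y) in bits for a joint distribution array p_xy[x, y].
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

def specific_information(p_sy, y):
    # I(S; Y=y): how much source S tells us about the specific outcome y.
    p_y = p_sy.sum(axis=0)[y]
    p_s = p_sy.sum(axis=1)
    total = 0.0
    for s in range(p_sy.shape[0]):
        if p_sy[s, y] > 0:
            total += (p_sy[s, y] / p_y) * np.log2(p_sy[s, y] / (p_s[s] * p_y))
    return total

def pid_atoms(p_x1x2y):
    # Decompose I(Y; X1, X2) into (unique1, unique2, redundant, synergistic).
    p_x1y = p_x1x2y.sum(axis=1)    # marginal joint of (X1, Y)
    p_x2y = p_x1x2y.sum(axis=0)    # marginal joint of (X2, Y)
    p_y = p_x1x2y.sum(axis=(0, 1))
    # Williams-Beer redundancy: expected minimum specific information.
    red = sum(p_y[y] * min(specific_information(p_x1y, y),
                           specific_information(p_x2y, y))
              for y in range(len(p_y)) if p_y[y] > 0)
    i1 = mutual_information(p_x1y)
    i2 = mutual_information(p_x2y)
    n1, n2, ny = p_x1x2y.shape
    i_joint = mutual_information(p_x1x2y.reshape(n1 * n2, ny))
    unq1, unq2 = i1 - red, i2 - red
    syn = i_joint - unq1 - unq2 - red
    return unq1, unq2, red, syn

# Example: Y = XOR(X1, X2) with uniform inputs. Neither input alone carries
# any information about Y, so all of the joint information is synergistic.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25
print(pid_atoms(p))  # approximately (0.0, 0.0, 0.0, 1.0)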


What should a neuron aim for? Designing local objective functions based on information theory

Andreas C. Schneider, Valentin Neuhaus, David A. Ehrlich, Abdullah Makkeh, Alexander S. Ecker, Viola Priesemann, Michael Wibral
arXiv, last revised 21 Jan 2025

Abstract

In modern deep neural networks, the learning dynamics of individual neurons are often obscure, as the networks are trained via global optimization. Conversely, biological systems build on self-organized, local learning, achieving robustness and efficiency with limited global information. We here show how self-organization between individual artificial neurons can be achieved by designing abstract bio-inspired local learning goals. These goals are parameterized using a recent extension of information theory, Partial Information Decomposition (PID), which decomposes the information that a set of information sources holds about an outcome into unique, redundant and synergistic contributions. Our framework enables neurons to locally shape the integration of information from various input classes, i.e., feedforward, feedback, and lateral, by selecting which of the three inputs should contribute uniquely, redundantly or synergistically to the output. This selection is expressed as a weighted sum of PID terms, which, for a given problem, can be directly derived from intuitive reasoning or via numerical optimization, offering a window into understanding task-relevant local information processing. Achieving neuron-level interpretability while enabling strong performance using local learning, our work advances a principled information-theoretic foundation for local learning strategies.
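
To make the "weighted sum of PID terms" idea concrete, here is a hedged sketch that reuses pid_atoms() and the numpy import from the sketch above. The two sources stand in for two of the paper's three input classes (say, feedforward and lateral), and the gamma weights are hypothetical examples, not values taken from the paper.

def local_goal(p_x1x2y, gammas):
    # G = g_unq1*Unq1 + g_unq2*Unq2 + g_red*Red + g_syn*Syn
    return float(np.dot(gammas, pid_atoms(p_x1x2y)))

# A neuron rewarded for combining its two input classes synergistically
# while being penalized for redundancy between them (hypothetical weights):
gammas = [0.0, 0.0, -1.0, 1.0]    # (unique1, unique2, redundant, synergistic)
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25     # Y = XOR(X1, X2)
print(local_goal(p, gammas))          # 1.0: the output is purely synergistic

In the paper, such a goal is maximized by a local learning rule rather than evaluated on a fixed distribution; this snippet only shows how a choice of weights over PID atoms expresses what a neuron should "aim for".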
