2023-10-25 New York University (NYU)
◆MLC focuses on improving neural networks' compositional generalization through practice on compositional tasks. Its effectiveness was tested in experiments matched to the human tasks, where it performed as well as or better than humans. The study suggests that this approach can overcome the difficulty large language models have with compositional generalization and help improve the capabilities of machine-learning models.
<Related information>
- https://www.nyu.edu/about/news-publications/news/2023/october/can-ai-grasp-related-concepts-after-learning-only-one-.html
- https://www.nature.com/articles/s41586-023-06668-3
Human-like systematic generalization through a meta-learning neural network
Brenden M. Lake & Marco Baroni
Nature, published 25 October 2023
DOI: https://doi.org/10.1038/s41586-023-06668-3
Abstract
The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. Fodor and Pylyshyn [1] famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here we successfully address Fodor and Pylyshyn's challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. To compare humans and machines, we conducted human behavioural experiments using an instruction learning paradigm. After considering seven different models, we found that, in contrast to perfectly systematic but rigid probabilistic symbolic models, and perfectly flexible but unsystematic neural networks, only MLC achieves both the systematicity and flexibility needed for human-like generalization. MLC also advances the compositional skills of machine learning systems in several systematic generalization benchmarks. Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison.
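The "dynamic stream of compositional tasks" at the heart of MLC can be pictured as a generator of episodes, each pairing a few study examples (a freshly sampled word-to-meaning mapping) with a query that must be answered by composing those meanings. The sketch below is a toy illustration only, assuming a made-up vocabulary and a single "twice" composition rule; the learner here is a hand-written rule inducer standing in for the transformer the paper actually trains on such episodes.

```python
import random

# Toy sketch of an MLC-style training episode (illustrative assumption:
# vocabulary, the "twice" function word, and the learner are all invented
# here; the paper meta-trains a standard transformer on such episodes).
PRIMS = ["dax", "wif", "lug", "zup"]
COLORS = ["RED", "GREEN", "BLUE", "YELLOW"]

def sample_episode(rng):
    # Each episode draws a fresh word->color assignment, so a learner
    # must induce the mapping from the study examples rather than
    # memorize it across episodes.
    mapping = dict(zip(PRIMS, rng.sample(COLORS, len(PRIMS))))
    study = [(word, [mapping[word]]) for word in PRIMS]
    # The query applies a compositional rule ("twice" = repeat once more)
    # to one primitive, probing systematic generalization.
    word = rng.choice(PRIMS)
    query = (f"{word} twice", [mapping[word], mapping[word]])
    return study, query

def rule_based_learner(study, query_input):
    # Stand-in for the neural learner: read off word meanings from the
    # study set, then apply the "twice" rule compositionally.
    lexicon = {inp: out[0] for inp, out in study}
    word, modifier = query_input.split()
    assert modifier == "twice"
    return [lexicon[word]] * 2

rng = random.Random(0)
study, (q_in, q_out) = sample_episode(rng)
prediction = rule_based_learner(study, q_in)
print(prediction == q_out)  # True when the learner generalizes correctly
```

In the actual method, thousands of such episodes, each with a different underlying grammar, form the optimization signal: the network is rewarded for behaving systematically across ever-changing mappings, which is what the abstract means by being "optimized for compositional skills."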