Researchers uncover similarities between human and AI learning

2025-09-04 Brown University

A Brown University study has shown that AI, like humans, can combine two modes of learning: flexible learning (quickly grasping a task from only a few examples) and incremental learning (mastery through repetition). When the researchers trained AI on a large number of tasks using metalearning, it acquired the flexibility to recognize combinations it had never been trained on (e.g., a blue giraffe), a process that parallels how humans learn. The study also revealed a trade-off: harder tasks are more readily retained in memory, while easier tasks boost flexibility but leave weaker long-term traces. The finding supports the view that human learning and the internal mechanisms of AI share common principles, with significant implications both for developing more intuitive, reliable AI and for research in human cognitive science.

<Related information>

Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning

Jacob Russin, Ellie Pavlick, and Michael J. Frank
Proceedings of the National Academy of Sciences, Published: August 28, 2025
DOI: https://doi.org/10.1073/pnas.2510270122

Significance

The neural networks dominating AI in recent years have achieved a remarkable level of behavioral flexibility, in part due to their capacity to learn new tasks from only a few examples. These in-context learning abilities are analogous to human inference and have different properties than the usual in-weight learning. Here, we show that when both are present in a single network, their dynamic interplay can tie together several key learning phenomena observed in humans, including curriculum effects, compositional generalization, and a trade-off between flexibility and retention. Our findings highlight important computational principles that may be shared between human and artificial intelligence.

Abstract

Human learning embodies a striking duality: Sometimes, we can rapidly infer and compose logical rules, benefiting from structured curricula (e.g., in formal education), while other times, we rely on an incremental approach or trial-and-error, learning better from curricula that are randomly interleaved. Influential psychological theories explain this seemingly conflicting behavioral evidence by positing two qualitatively different learning systems—one for rapid, rule-based inferences (e.g., in working memory) and another for slow, incremental adaptation (e.g., in long-term and procedural memory). It remains unclear how to reconcile such theories with neural networks, which learn via incremental weight updates and are thus a natural model for the latter, but are not obviously compatible with the former. However, recent evidence suggests that metalearning neural networks and large language models are capable of in-context learning (ICL)—the ability to flexibly infer the structure of a new task from a few examples. In contrast to standard in-weight learning (IWL), which is analogous to synaptic change, ICL is more naturally linked to activation-based dynamics thought to underlie working memory in humans. Here, we show that the interplay between ICL and IWL naturally ties together a broad range of learning phenomena observed in humans, including curriculum effects on category-learning tasks, compositionality, and a trade-off between flexibility and retention in brain and behavior. Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties that can coexist with their native IWL, thus offering an integrative perspective on dual-process theories of human cognition.
