2025-11-19 The University of Tokyo

World's first quantitative comparison of how humans and generative AI process information about dance
<Related information>
- https://www.u-tokyo.ac.jp/focus/ja/press/z0105_00010.html
- https://www.u-tokyo.ac.jp/content/400274324.pdf
- https://www.nature.com/articles/s41467-025-65039-w
Cross-modal deep generative models reveal the cortical representation of dancing
Yu Takagi, Daichi Shimizu, Mina Wakabayashi, Ryu Ohata & Hiroshi Imamizu
Nature Communications, published 18 November 2025
DOI: https://doi.org/10.1038/s41467-025-65039-w
Abstract
Dance is an ancient, holistic art form practiced worldwide throughout human history. Although it offers a window into cognition, emotion, and cross-modal processing, fine-grained quantitative accounts of how its diverse information is represented in the brain remain rare. Here, we relate features from a cross-modal deep generative model of dance to functional magnetic resonance imaging responses recorded while participants watched naturalistic dance clips. We demonstrate that cross-modal features explain dance-evoked brain activity better than low-level motion and audio features. Using encoding models as in silico simulators, we quantify how dances that elicit different emotions yield distinct neural patterns. While expert dancers' brain activity is more broadly explained by dance features than that of novices, experts exhibit greater individual variability. Our approach links cross-modal representations from generative models to naturalistic neuroimaging, clarifying how motion, music, and expertise jointly shape aesthetic and emotional experience.
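As a rough illustration of the encoding-model approach the abstract describes, the sketch below fits a voxel-wise ridge regression from stimulus features to fMRI responses and scores prediction accuracy on held-out data. This is not the authors' actual pipeline: the feature matrices, array shapes, and regularization grid are all hypothetical stand-ins, and here random arrays take the place of real generative-model features and BOLD data.

```python
# Minimal sketch of a voxel-wise encoding model, assuming hypothetical
# feature/response arrays; this is NOT the paper's actual pipeline.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 1200, 300, 512, 1000  # hypothetical sizes

# X: stimulus features per fMRI volume (e.g., one row per TR, here random
# stand-ins for cross-modal generative-model features); Y: BOLD responses.
X_train = rng.standard_normal((n_train, n_feat))
Y_train = rng.standard_normal((n_train, n_vox))
X_test = rng.standard_normal((n_test, n_feat))
Y_test = rng.standard_normal((n_test, n_vox))

# Fit one linear model per voxel; RidgeCV accepts multi-output targets and
# picks the regularization strength by cross-validation on the training set.
model = RidgeCV(alphas=np.logspace(-2, 4, 7))
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Score each voxel by the correlation between predicted and measured
# held-out responses, a standard encoding-model accuracy metric.
scores = np.array([pearsonr(Y_test[:, v], Y_pred[:, v])[0]
                   for v in range(n_vox)])
print(f"mean held-out prediction r = {scores.mean():.3f}")
```

With real data, per-voxel accuracy maps like `scores` are what let such models be used as "in silico simulators": one can feed in features of hypothetical stimuli (e.g., dances rated as eliciting different emotions) and compare the predicted response patterns.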


