Anything-in-anything-out: a new modular AI model


2024-02-26 École Polytechnique Fédérale de Lausanne (EPFL)

◆ Unlike large language models (LLMs), MultiModN can learn from diverse inputs such as text, images, video, and audio. It is composed of multiple small modules that are selected according to the information available, so it can accept any combination of inputs and output any set of predictions (a rough sketch of this data flow follows below).
◆ This lets it adapt to real-world data and make predictions without inheriting bias from which inputs happen to be missing. It has been applied to a range of practical tasks such as supporting medical diagnosis, and it is particularly useful in resource-constrained settings.
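As a rough illustration of this modular design, here is a minimal PyTorch sketch. It is not the authors' implementation: the class names, modality set, and all dimensions are invented for the example. A shared latent state is updated in sequence by one small encoder per available modality, missing modalities are simply skipped, and one decoder per task reads a prediction out of the current state.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a MultiModN-style pipeline: per-modality encoder
# modules update a shared state vector in sequence; missing modalities are
# simply skipped, and task-specific decoders can read the state at any step.

STATE_DIM = 32  # size of the shared latent state (made up for this sketch)

class ModalityEncoder(nn.Module):
    """Updates the shared state using one modality's features."""
    def __init__(self, input_dim: int):
        super().__init__()
        self.net = nn.Linear(STATE_DIM + input_dim, STATE_DIM)

    def forward(self, state: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(torch.cat([state, x], dim=-1)))

class TaskDecoder(nn.Module):
    """Reads a prediction for one task out of the current state."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.head = nn.Linear(STATE_DIM, num_classes)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.head(state)

# One encoder per modality, one decoder per task (dimensions are invented).
encoders = {"text": ModalityEncoder(300), "image": ModalityEncoder(512)}
decoders = {"diagnosis": TaskDecoder(2), "severity": TaskDecoder(3)}

def predict(inputs: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    state = torch.zeros(1, STATE_DIM)   # shared latent state
    for name, encoder in encoders.items():
        if name in inputs:              # skip modalities that are missing
            state = encoder(state, inputs[name])
    return {task: dec(state) for task, dec in decoders.items()}

# Any combination of inputs works; here the image is missing.
print(predict({"text": torch.randn(1, 300)}))
```

In the actual system the modules are trained jointly across tasks; this sketch only shows how any combination of inputs can flow through a sequence of small modules to any set of outputs.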

<Related information>

MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks

Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley
arXiv, last revised: 6 Nov 2023
DOI:https://doi.org/10.48550/arXiv.2309.14118


Abstract

Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space. Multimodal (MM) models aim to extract the synergistic predictive potential of multiple data types to create a shared feature space with aligned semantic meaning across inputs of drastically varying sizes (i.e. images, text, sound). Most current MM architectures fuse these representations in parallel, which not only limits their interpretability but also creates a dependency on modality availability. We present MultiModN, a multimodal, modular network that fuses latent representations in a sequence of any number, combination, or type of modality while providing granular real-time predictive feedback on any number or combination of predictive tasks. MultiModN’s composable pipeline is interpretable-by-design, as well as innately multi-task and robust to the fundamental issue of biased missingness. We perform four experiments on several benchmark MM datasets across 10 real-world tasks (predicting medical diagnoses, academic performance, and weather), and show that MultiModN’s sequential MM fusion does not compromise performance compared with a baseline of parallel fusion. By simulating the challenging bias of missing not-at-random (MNAR), this work shows that, contrary to MultiModN, parallel fusion baselines erroneously learn MNAR and suffer catastrophic failure when faced with different patterns of MNAR at inference. To the best of our knowledge, this is the first inherently MNAR-resistant approach to MM modeling. In conclusion, MultiModN provides granular insights, robustness, and flexibility without compromising performance.
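To make the MNAR point above concrete, here is a small sketch contrasting the two fusion styles. It is hypothetical and not taken from the paper or its codebase; all dimensions are invented. A parallel-fusion baseline concatenates all modality features and typically zero-imputes the missing ones, so the missingness pattern itself enters the input and can be learned as a spurious signal; a sequential pipeline skips the absent modality and never exposes that pattern.

```python
import torch
import torch.nn as nn

# A minimal illustration (not the paper's code) of the MNAR failure mode.
# Dimensions and layer choices are made up for the example.

torch.manual_seed(0)
text = torch.randn(1, 300)   # text features are present
# image features are missing at inference

# Parallel fusion: the zero block encodes *that* the image is missing, which
# is exactly the pattern a baseline can erroneously learn under biased (MNAR)
# training and then misuse when missingness differs at inference.
parallel_head = nn.Linear(300 + 512, 2)
parallel_logits = parallel_head(torch.cat([text, torch.zeros(1, 512)], dim=-1))

# Sequential fusion: the shared state is updated only by modalities that are
# present, so the features carry no indicator of what was missing.
state = torch.zeros(1, 32)
text_encoder = nn.Linear(300 + 32, 32)
state = torch.tanh(text_encoder(torch.cat([state, text], dim=-1)))
sequential_head = nn.Linear(32, 2)
sequential_logits = sequential_head(state)

print(parallel_logits, sequential_logits)
```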
