2026-03-09 Hokkaido University

Example of a visual comparison of attention
<Related information>
- https://www.hokudai.ac.jp/news/2026/03/post-2211.html
- https://www.hokudai.ac.jp/news/pdf/260309_pr2.pdf
- https://iclr.cc/virtual/2026/poster/10010847
ASMIL: Attention-stabilized multiple instance learning for whole-slide imaging
Linfeng Ye · Shayan Mohajer Hamidi · Zhixiang Chi · Guang Li · Mert Pilanci · Takahiro Ogawa · Miki Haseyama · Konstantinos Plataniotis
The Fourteenth International Conference on Learning Representations (ICLR 2026, an international conference on artificial intelligence and machine learning), April 23 (Thu) – April 27 (Mon), 2026
Abstract
Attention-based multiple instance learning (MIL) has emerged as a powerful framework for whole slide image (WSI) diagnosis, leveraging attention to aggregate instance-level features into bag-level predictions. Despite this success, we find that such methods exhibit a new failure mode: unstable attention dynamics. Across four representative attention-based MIL methods and two public WSI datasets, we observe that attention distributions oscillate across epochs rather than converging to a consistent pattern, degrading performance. This instability adds to two previously reported challenges: overfitting and over-concentrated attention distributions. To simultaneously overcome these three limitations, we introduce attention-stabilized multiple instance learning (ASMIL), a novel unified framework. ASMIL uses an anchor model to stabilize attention, replaces softmax with a normalized sigmoid function in the anchor to prevent over-concentration, and applies token random dropping to mitigate overfitting. Extensive experiments demonstrate that ASMIL achieves up to a 6.49% F1 score improvement over state-of-the-art methods. Moreover, integrating the anchor model and normalized sigmoid into existing attention-based MIL methods consistently boosts their performance, with F1 score gains up to 10.73%. All code and data are publicly available at https://anonymous.4open.science/r/ASMIL-5018/.
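The abstract names two of ASMIL's mechanisms that are easy to illustrate in isolation: replacing softmax with a normalized sigmoid to prevent over-concentrated attention, and randomly dropping instance tokens to mitigate overfitting. The sketch below is not the authors' implementation; the function names and the interpretation of "normalized sigmoid" (element-wise sigmoid followed by renormalization to sum to 1) are assumptions made for illustration only.

```python
import numpy as np

def normalized_sigmoid_attention(scores):
    """Assumed form of 'normalized sigmoid' attention: apply sigmoid
    element-wise, then renormalize so the weights sum to 1. Because each
    raw weight is bounded in (0, 1) before normalization, a single large
    score cannot absorb almost all of the attention mass."""
    s = 1.0 / (1.0 + np.exp(-scores))
    return s / s.sum()

def token_random_drop(instances, drop_rate, rng):
    """Randomly drop a fraction of instance tokens during training
    (a dropout-like regularizer over instances, as a rough sketch of
    the token random dropping described in the abstract)."""
    keep = rng.random(len(instances)) >= drop_rate
    if not keep.any():                      # always keep at least one token
        keep[rng.integers(len(instances))] = True
    return instances[keep]

# Toy bag of 5 instance scores, one much larger than the rest.
scores = np.array([8.0, 0.5, 0.3, 0.2, 0.1])
softmax_w = np.exp(scores) / np.exp(scores).sum()
norm_sig_w = normalized_sigmoid_attention(scores)

# Softmax puts nearly all mass on the first instance, while the
# normalized sigmoid spreads the weight far more evenly.
print(round(softmax_w[0], 3), round(norm_sig_w[0], 3))

rng = np.random.default_rng(0)
print(len(token_random_drop(np.arange(10), 0.5, rng)))
```

On this toy bag, softmax assigns the dominant instance over 99% of the attention, whereas the normalized sigmoid caps it near a third, which is the over-concentration behavior the paper targets.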

