AI breakthrough helps astronomers spot cosmic events with just a handful of examples

2025-10-08 University of Oxford

A research team from the University of Oxford and Google Cloud has developed a method that uses the general-purpose AI model Gemini to classify changes in the sky, such as exploding stars and black holes, with 93% accuracy from only 15 example images and brief instructions. Unlike conventional black-box AI, the model can explain its classification decisions in plain English, improving transparency. It also assesses its own uncertainty and defers doubtful cases to humans, a form of human-AI collaboration that raised accuracy to 96.7%. Because the approach adapts to new instruments with minimal data, it is expected to help democratize scientific discovery and serve as a foundation for autonomous research-assistant AI.
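The article does not include code, but the workflow described above (a short natural-language instruction, roughly 15 labelled example cutouts, a candidate image triplet, and a self-reported confidence used to decide when to hand a case to a human) can be sketched as follows. This is a minimal illustration assuming the google-generativeai Python client; the model name, prompt wording, file layout and 0.8 deferral threshold are placeholders, not the authors' actual pipeline.

```python
# Minimal sketch of the few-shot classification-with-deferral idea described above.
# Assumptions: google-generativeai client, illustrative model name, toy file layout.
import json
import re
from pathlib import Path

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")          # assumed: key supplied by the user
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

INSTRUCTIONS = (
    "You vet optical transient candidates. Each candidate is shown as New, "
    "Reference and Difference cutouts. Reply with JSON containing the keys "
    '"label" (REAL or BOGUS), "explanation" (the image features you used) '
    'and "confidence" (a number between 0 and 1).'
)

def build_prompt(example_dirs, candidate_dir):
    """Assemble the instruction, ~15 labelled examples and the unlabelled candidate."""
    parts = [INSTRUCTIONS]
    for d in example_dirs:  # each example folder holds label.txt plus three cutouts
        label = (Path(d) / "label.txt").read_text().strip()
        parts.append(f"Example labelled {label}:")
        parts += [Image.open(Path(d) / n) for n in ("new.png", "ref.png", "diff.png")]
    parts.append("Candidate to classify:")
    parts += [Image.open(Path(candidate_dir) / n) for n in ("new.png", "ref.png", "diff.png")]
    return parts

def classify(example_dirs, candidate_dir, defer_below=0.8):
    """Classify one candidate; defer to a human when self-reported confidence is low."""
    response = model.generate_content(build_prompt(example_dirs, candidate_dir))
    match = re.search(r"\{.*\}", response.text, re.DOTALL)  # tolerate extra prose
    try:
        result = json.loads(match.group()) if match else {}
    except json.JSONDecodeError:
        result = {}
    if result.get("confidence", 0.0) < defer_below:
        return {"decision": "DEFER_TO_HUMAN", "raw": response.text}
    return {"decision": result.get("label"), "explanation": result.get("explanation")}
```

In this sketch the deferral rule is what lifts accuracy in the human-collaboration setting: low-confidence candidates are routed to a person rather than forced into a label.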

Examples of input transient images (New, Reference, and Difference) with Gemini’s corresponding classification outputs, detailed explanations, and interest scores. Credit: Stoppa & Bulmus et al., Nature Astronomy (2025).

<Related Information>

Textual interpretation of transient image classifications from large language models

Fiorenzo Stoppa, Turan Bulmus, Steven Bloemen, Stephen J. Smartt, Paul J. Groot, Paul Vreeswijk & Ken W. Smith
Nature Astronomy, Published: 08 October 2025
DOI: https://doi.org/10.1038/s41550-025-02670-z

Abstract

Modern astronomical surveys deliver immense volumes of transient detections, yet distinguishing real astrophysical signals (for example, explosive events) from bogus imaging artefacts remains a challenge. Convolutional neural networks are effectively used for real versus bogus classification; however, their reliance on opaque latent representations hinders interpretability. Here we show that large language models (LLMs) can approach the performance level of a convolutional neural network on three optical transient survey datasets (Pan-STARRS, MeerLICHT and ATLAS) while simultaneously producing direct, human-readable descriptions for every candidate. Using only 15 examples and concise instructions, Google’s LLM, Gemini, achieves a 93% average accuracy across datasets that span a range of resolution and pixel scales. We also show that a second LLM can assess the coherence of the output of the first model, enabling iterative refinement by identifying problematic cases. This framework allows users to define the desired classification behaviour through natural language and examples, bypassing traditional training pipelines. Furthermore, by generating textual descriptions of observed features, LLMs enable users to query classifications as if navigating an annotated catalogue, rather than deciphering abstract latent spaces. As next-generation telescopes and surveys further increase the amount of data available, LLM-based classification could help bridge the gap between automated detection and transparent, human-level understanding.
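The two-model arrangement mentioned in the abstract, where a second LLM checks whether the first model's explanation is coherent with its label so that problematic cases can be revisited, could look roughly like the hedged sketch below. The prompt wording, model name and COHERENT/INCOHERENT convention are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of the coherence check: a second model reads the first model's
# label and explanation and flags inconsistent cases for re-examination.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
reviewer = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

REVIEW_PROMPT = (
    "Below is a transient classification (REAL or BOGUS) and the explanation "
    "given for it. Reply COHERENT if the explanation supports the label, "
    "otherwise reply INCOHERENT and briefly state the inconsistency.\n\n"
    "Label: {label}\nExplanation: {explanation}"
)

def needs_refinement(label: str, explanation: str) -> bool:
    """True when the reviewer judges the explanation inconsistent with the label."""
    response = reviewer.generate_content(
        REVIEW_PROMPT.format(label=label, explanation=explanation)
    )
    return "INCOHERENT" in response.text.upper()

# Candidates flagged here would be re-prompted or sent to a human vetter.
if needs_refinement("REAL", "A sharp dipole residual typical of poor subtraction."):
    print("Flagged for refinement")
```

Because the reviewer only sees text, this kind of check is cheap to run over every candidate and fits the iterative-refinement loop the abstract describes.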
