2025-10-08 University of Oxford

Figure: Examples of input transient images (New, Reference, and Difference) with Gemini's corresponding classification outputs, detailed explanations, and interest scores. Credit: Stoppa & Bulmus et al., Nature Astronomy (2025).
<Related information>
- https://www.ox.ac.uk/news/2025-10-08-ai-breakthrough-helps-astronomers-spot-cosmic-events-just-handful-examples
- https://www.nature.com/articles/s41550-025-02670-z
Textual interpretation of transient image classifications from large language models
Fiorenzo Stoppa, Turan Bulmus, Steven Bloemen, Stephen J. Smartt, Paul J. Groot, Paul Vreeswijk & Ken W. Smith
Nature Astronomy, Published: 08 October 2025
DOI: https://doi.org/10.1038/s41550-025-02670-z
Abstract
Modern astronomical surveys deliver immense volumes of transient detections, yet distinguishing real astrophysical signals (for example, explosive events) from bogus imaging artefacts remains a challenge. Convolutional neural networks are effectively used for real versus bogus classification; however, their reliance on opaque latent representations hinders interpretability. Here we show that large language models (LLMs) can approach the performance level of a convolutional neural network on three optical transient survey datasets (Pan-STARRS, MeerLICHT and ATLAS) while simultaneously producing direct, human-readable descriptions for every candidate. Using only 15 examples and concise instructions, Google’s LLM, Gemini, achieves a 93% average accuracy across datasets that span a range of resolution and pixel scales. We also show that a second LLM can assess the coherence of the output of the first model, enabling iterative refinement by identifying problematic cases. This framework allows users to define the desired classification behaviour through natural language and examples, bypassing traditional training pipelines. Furthermore, by generating textual descriptions of observed features, LLMs enable users to query classifications as if navigating an annotated catalogue, rather than deciphering abstract latent spaces. As next-generation telescopes and surveys further increase the amount of data available, LLM-based classification could help bridge the gap between automated detection and transparent, human-level understanding.
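To make the workflow described in the abstract more concrete, below is a minimal Python sketch of the general idea: an image triplet (New, Reference, Difference) is sent to Gemini together with a short instruction and a handful of labelled examples, and a second LLM pass checks whether the returned explanation is coherent with its own label. This is an illustrative sketch only, assuming the google-generativeai client; the model id, prompt wording, file paths, JSON output format, and coherence-check phrasing are assumptions and not the authors' exact setup, which in the paper uses 15 examples per dataset.

# Sketch of few-shot "real vs bogus" transient classification with an LLM,
# loosely following the workflow described in the abstract.
# All concrete names below (model id, prompt text, paths, output format)
# are illustrative assumptions, not the authors' exact configuration.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")          # placeholder credential
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model id

INSTRUCTIONS = (
    "You classify optical transient candidates from image triplets "
    "(New, Reference, Difference). Answer with JSON: "
    '{"label": "real" or "bogus", "explanation": "...", "interest_score": 0-10}.'
)

def triplet(path_prefix):
    """Load the New/Reference/Difference cutouts for one candidate."""
    return [Image.open(f"{path_prefix}_{name}.png") for name in ("new", "ref", "diff")]

# A couple of labelled examples stand in here for the paper's 15 few-shot examples.
few_shot = []
for prefix, label in [("examples/cand01", "real"), ("examples/cand02", "bogus")]:
    few_shot.append(f"Example (label: {label}):")
    few_shot.extend(triplet(prefix))

def classify(candidate_prefix):
    """Ask the LLM for a label, a human-readable explanation, and an interest score."""
    prompt = [INSTRUCTIONS, *few_shot, "Candidate to classify:", *triplet(candidate_prefix)]
    return model.generate_content(prompt).text

def check_coherence(classification_text):
    """Second LLM pass: flag answers whose explanation contradicts their own label."""
    question = (
        "Does the explanation below support its own label? "
        "Reply 'coherent' or 'incoherent', with one sentence of reasoning.\n\n"
        + classification_text
    )
    return model.generate_content(question).text

if __name__ == "__main__":
    answer = classify("candidates/cand_0001")
    print(answer)
    print(check_coherence(answer))

Because the classification behaviour is defined entirely by the natural-language instruction and the labelled examples, changing the desired behaviour amounts to editing the prompt rather than retraining a network, which is the point the authors emphasise about bypassing traditional training pipelines.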


