2025-08-13 Argonne National Laboratory (ANL)

Researchers from the University of Michigan are using Argonne supercomputers to develop foundation models that accelerate molecular design and the discovery of new battery materials. (Image by Anoushka Bhutani, University of Michigan.)
<Related Information>
- https://www.anl.gov/article/building-ai-foundation-models-to-accelerate-the-discovery-of-new-battery-materials
- https://arxiv.org/abs/2409.15370
Tokenization for Molecular Foundation Models
Alexius Wadell, Anoushka Bhutani, Venkatasubramanian Viswanathan
arXiv last revised 8 Jul 2025 (this version, v3)
DOI: https://doi.org/10.48550/arXiv.2409.15370
Abstract
Text-based foundation models have become an important part of scientific discovery, with molecular foundation models accelerating advancements in materials science and molecular design. However, existing models are constrained by closed-vocabulary tokenizers that capture only a fraction of molecular space. In this work, we systematically evaluate 34 tokenizers, including 19 chemistry-specific ones, and reveal significant gaps in their coverage of the SMILES molecular representation. To assess the impact of tokenizer choice, we introduce n-gram language models as a low-cost proxy and validate their effectiveness by pretraining and finetuning 18 RoBERTa-style encoders for molecular property prediction. To overcome the limitations of existing tokenizers, we propose two new tokenizers, Smirk and Smirk-GPE, with full coverage of the OpenSMILES specification. The proposed tokenizers systematically integrate nuclear, electronic, and geometric degrees of freedom, facilitating applications in pharmacology, agriculture, biology, and energy storage. Our results highlight the need for open-vocabulary modeling and chemically diverse benchmarks in cheminformatics.
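
The coverage gap the abstract describes is easy to see in miniature. The sketch below is an illustrative toy, not the paper's Smirk or Smirk-GPE tokenizers: it builds a regex-based, closed-vocabulary SMILES tokenizer in the style of common atom-level chemistry tokenizers, with a regex, vocabulary, and example molecules that are all assumptions for illustration, and shows valid bracket atoms and isotope labels collapsing to an unknown token.

```python
import re

# Atom-level SMILES regex in the style of common chemistry tokenizers:
# bracket atoms first, then two-digit ring-bond labels, two-letter halogens,
# organic-subset atoms, bonds/branches, and ring digits.
SMILES_TOKEN_RE = re.compile(
    r"(\[[^\]]+\]|%[0-9]{2}|Br|Cl|[BCNOPSFIbcnops]|[=#$/\\().+\-@]|[0-9])"
)

# Deliberately small closed vocabulary, standing in for a released
# tokenizer whose vocabulary was frozen on a narrow training corpus.
VOCAB = {"C", "c", "N", "O", "F", "Cl", "Br", "(", ")", "=", "#", "1", "2"}

def tokenize(smiles: str) -> list[str]:
    """Tokenize SMILES; anything outside the closed vocabulary becomes <UNK>."""
    return [t if t in VOCAB else "<UNK>" for t in SMILES_TOKEN_RE.findall(smiles)]

# Toluene tokenizes cleanly; a selenide and an isotope-labeled methane both
# lose their chemistry to <UNK>, the kind of coverage gap the paper measures.
for s in ["Cc1ccccc1", "C[Se]C", "[13CH4]"]:
    print(f"{s:10} -> {tokenize(s)}")
```

An open-vocabulary tokenizer in the spirit of the proposed ones would instead decompose the bracket atom into element, charge, and isotope components, so that no valid OpenSMILES string maps to an unknown token.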

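The n-gram proxy can likewise be sketched in a few lines. The code below is a toy with add-alpha smoothing over character-level tokens; the corpus, smoothing, and scoring are assumptions, and the paper's actual proxy setup may differ. It trains a bigram model on a tokenized corpus and scores held-out strings by perplexity, the kind of cheap signal that lets many tokenizers be compared without pretraining a transformer for each.

```python
import math
from collections import Counter

def train_bigram(corpus: list[list[str]]) -> tuple[Counter, Counter]:
    """Count context (unigram) and bigram frequencies over tokenized strings."""
    unigrams, bigrams = Counter(), Counter()
    for toks in corpus:
        padded = ["<s>"] + toks + ["</s>"]
        unigrams.update(padded[:-1])           # contexts for bigram denominators
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def perplexity(toks: list[str], unigrams: Counter, bigrams: Counter,
               alpha: float = 1.0) -> float:
    """Add-alpha smoothed bigram perplexity; lower means the tokenization
    produces sequences this crude model finds more predictable."""
    v = len(unigrams) + 1                       # +1 leaves mass for unseen tokens
    padded = ["<s>"] + toks + ["</s>"]
    log_prob = sum(
        math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * v))
        for a, b in zip(padded, padded[1:])
    )
    return math.exp(-log_prob / (len(padded) - 1))

# Character-level tokenization as the simplest baseline "tokenizer".
corpus = [list(s) for s in ["CCO", "CC(=O)O", "c1ccccc1", "CC(C)O", "CCN"]]
uni, bi = train_bigram(corpus)
print(perplexity(list("CCC"), uni, bi))    # in-distribution: low perplexity
print(perplexity(list("[Se]"), uni, bi))   # unfamiliar tokens: high perplexity
```

Per the abstract, the authors validate this kind of proxy against 18 pretrained RoBERTa-style encoders, so a shortcut like this is only trustworthy once its rankings are shown to track the expensive models.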

