2025-06-12 Argonne National Laboratory (ANL)
A high-energy collision probes the internal structure of subatomic particles, depicted as a neural-network-like web of quantum connections. This graphic highlights how physicists use AI/ML to map the quark-gluon structure inside particles and search for new physics beyond the standard model. (Image by Brandon Kriesten/Argonne National Laboratory.)
<Related information>
- https://www.anl.gov/article/decoding-the-fundamental-forces-of-the-universe
- https://journals.aps.org/prd/abstract/10.1103/PhysRevD.111.014028
- https://link.springer.com/article/10.1007/JHEP11(2024)007
Learning PDFs through interpretable latent representations in Mellin space
Brandon Kriesten and T. J. Hobbs
Physical Review D Published: 29 January 2025
DOI: https://doi.org/10.1103/PhysRevD.111.014028
Abstract
Representing the parton distribution functions (PDFs) of the proton and other hadrons through flexible, high-fidelity parametrizations has been a long-standing goal of particle physics phenomenology. This is particularly true since the chosen parametrization methodology can play an influential role in the ultimate PDF uncertainties as extracted in QCD global analyses; these, in turn, are often determinative of the reach of experiments at the LHC and other facilities to nonstandard physics, including at large x, where parametrization effects can be significant. In this study, we explore a series of encoder-decoder machine-learning (ML) models with various neural-network topologies as efficient means of reconstructing PDFs from meaningful information stored in an interpretable latent space. Given recent effort to pioneer synergies between QCD analyses and lattice-gauge calculations, we formulate a latent representation based on the behavior of PDFs in Mellin space, i.e., their integrated moments, and test the ability of various models to decode PDFs from this information faithfully. We introduce a numerical package, PDFdecoder, which implements several encoder-decoder models to reconstruct PDFs with high fidelity and use this end-to-end tool to explore how such neural-network-based models might connect PDF parametrizations to underlying properties like their Mellin moments. We additionally dissect patterns of learned correlations between encoded Mellin moments and reconstructed PDFs that suggest opportunities for further improvements to ML-based approaches to PDF parametrizations and uncertainty quantification.
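The encoder-decoder idea described in the abstract can be illustrated with a short, hypothetical PyTorch sketch (this is not the authors' PDFdecoder package): the encoder compresses a PDF sampled on an x grid into a small latent vector, the decoder reconstructs the PDF, and a moment-matching loss pushes the latent coordinates toward the low-order Mellin moments M_n = ∫ x^(n-1) f(x) dx. Layer sizes, the grid, the toy input, and the loss weighting are all illustrative assumptions.

```python
# Minimal sketch (NOT the authors' PDFdecoder package): an encoder-decoder whose
# latent vector is trained to track low-order Mellin moments of a PDF sampled
# on an x grid. All sizes and inputs below are illustrative placeholders.
import torch
import torch.nn as nn

N_X, N_MOMENTS = 128, 4                         # x-grid points, latent dimension
x = torch.linspace(1e-3, 1.0, N_X)              # x grid (avoid x = 0)

def mellin_moments(f, x, n_max=N_MOMENTS):
    """Trapezoid-rule estimate of M_n = int x^(n-1) f(x) dx for n = 1..n_max."""
    dx = x[1:] - x[:-1]
    moments = []
    for n in range(1, n_max + 1):
        g = x ** (n - 1) * f                    # integrand on the grid
        moments.append(torch.sum(0.5 * (g[..., 1:] + g[..., :-1]) * dx, dim=-1))
    return torch.stack(moments, dim=-1)

class PDFAutoencoder(nn.Module):
    """Encode a discretized PDF into a moment-like latent space and decode it back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_X, 64), nn.ReLU(),
                                     nn.Linear(64, N_MOMENTS))
        self.decoder = nn.Sequential(nn.Linear(N_MOMENTS, 64), nn.ReLU(),
                                     nn.Linear(64, N_X), nn.Softplus())

    def forward(self, f):
        z = self.encoder(f)                     # interpretable latent vector
        return z, self.decoder(z)               # reconstructed PDF on the x grid

model = PDFAutoencoder()
f_batch = torch.exp(-5.0 * x).repeat(8, 1)      # toy stand-in for PDF replicas
z, f_rec = model(f_batch)
loss = nn.functional.mse_loss(f_rec, f_batch) \
     + nn.functional.mse_loss(z, mellin_moments(f_batch, x))
loss.backward()                                 # reconstruction + moment-matching losses
```

Tying the latent coordinates to Mellin moments in this way is what makes the latent space interpretable: each latent component corresponds to an integrated property of the PDF that can also be compared against lattice-QCD calculations.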
Explainable AI classification for parton density theory
Brandon Kriesten, Jonathan Gomprecht & T. J. Hobbs
Journal of High Energy Physics Published: 5 November 2024
DOI: https://doi.org/10.1007/JHEP11(2024)007
Abstract
Quantitatively connecting properties of parton distribution functions (PDFs, or parton densities) to the theoretical assumptions made within the QCD analyses which produce them has been a longstanding problem in HEP phenomenology. To confront this challenge, we introduce an ML-based explainability framework, XAI4PDF, to classify PDFs by parton flavor or underlying theoretical model using ResNet-like neural networks (NNs). By leveraging the differentiable nature of ResNet models, this approach deploys guided backpropagation to dissect relevant features of fitted PDFs, identifying x-dependent signatures of PDFs important to the ML model classifications. By applying our framework, we are able to sort PDFs according to the analysis which produced them while constructing quantitative, human-readable maps locating the x regions most affected by the internal theory assumptions going into each analysis. This technique expands the toolkit available to PDF analysis and adjacent particle phenomenology while pointing to promising generalizations.
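The classification-plus-attribution pipeline sketched in the abstract can likewise be illustrated with a hypothetical PyTorch example (this is not the authors' XAI4PDF code): a small ResNet-style network classifies PDFs sampled on an x grid, and guided backpropagation, implemented by clamping ReLU gradients to be positive, produces an x-dependent importance map for the predicted class. The layer sizes, number of classes, and random input are placeholder assumptions.

```python
# Minimal sketch (NOT the authors' XAI4PDF code): ResNet-style classification of
# PDFs on an x grid, with guided backpropagation giving x-dependent attributions.
import torch
import torch.nn as nn

N_X, N_CLASSES = 128, 4                          # x-grid points; e.g. parton flavors

class ResBlock(nn.Module):
    """Linear -> ReLU -> Linear with a skip connection, as in ResNet."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.act = nn.ReLU()

    def forward(self, h):
        return self.act(h + self.net(h))

class PDFClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Linear(N_X, 64), nn.ReLU())
        self.blocks = nn.Sequential(ResBlock(64), ResBlock(64))
        self.head = nn.Linear(64, N_CLASSES)

    def forward(self, f):
        return self.head(self.blocks(self.stem(f)))

def guided_relu_hook(module, grad_input, grad_output):
    # Guided backprop: let only positive gradients flow back through ReLUs.
    return (torch.clamp(grad_input[0], min=0.0),)

model = PDFClassifier().eval()
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_full_backward_hook(guided_relu_hook)

f = torch.rand(1, N_X, requires_grad=True)       # toy stand-in for a fitted PDF
score = model(f)[0].max()                        # score of the predicted class
score.backward()
attribution = f.grad.squeeze()                   # x-dependent importance map
```

In this toy setting, large entries of the attribution vector flag the x regions that most influenced the classifier's decision, which is the kind of human-readable, x-dependent map the abstract describes for distinguishing the theory assumptions behind different PDF analyses.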