Hardware Vulnerability Allows Attackers to Hack AI Training Data

2025-10-08 North Carolina State University (NC State)

Researchers at North Carolina State University have discovered GATEBLEED, the first known vulnerability that leaks an AI system's training data through physical hardware. The attack abuses power gating, the power-management mechanism of the AI accelerator (AMX) on Intel Xeon CPUs, allowing an attacker to determine what data an AI model was trained on. Because it slips past conventional malware detection and encryption, it has serious implications for privacy protection. A complete fix would require redesigning the CPU; in the short term, the only available defenses come at the cost of reduced performance.
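The channel itself is simple to picture: an AMX matrix multiply issued while the accelerator is power-gated pays a wake-up latency, while one issued while the unit is still warm does not. The minimal Python sketch below illustrates that warm-versus-cold timing comparison; it assumes a 4th-generation Xeon with AMX and a PyTorch build that dispatches bf16 matmuls to it, and the matrix sizes, idle delay, and variable names are illustrative assumptions, not the paper's parameters.

import time
import torch

def timed_matmul(a, b):
    # Return the wall-clock cost of one matmul in nanoseconds.
    t0 = time.perf_counter_ns()
    torch.matmul(a, b)
    return time.perf_counter_ns() - t0

# bf16 inputs so the CPU backend can route the matmul to AMX (hardware permitting).
a = torch.randn(64, 64, dtype=torch.bfloat16)
b = torch.randn(64, 64, dtype=torch.bfloat16)

# Warm case: back-to-back matmuls keep the accelerator powered.
timed_matmul(a, b)
warm_ns = timed_matmul(a, b)

# Cold case: idle long enough for aggressive power gating to power the unit
# down, so the next matmul pays the wake-up penalty. The 10 ms delay is a guess.
time.sleep(0.01)
cold_ns = timed_matmul(a, b)

print(f"warm: {warm_ns} ns, cold: {cold_ns} ns")

A consistent cold-versus-warm gap is what makes every AMX matrix multiplication a potential leakage point: an observer can tell whether a victim's earlier code path exercised the accelerator.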

<Related Information>

GATEBLEED: Exploiting On-Core Accelerator Power Gating for High Performance & Stealthy Attacks on AI

Joshua Kalyanapu, Farshad Dizani, Darsh Asher, Azam Ghanbari, Rosario Cammarota, Aydin Aysu, Samira Mirbagher Ajorpaz
arXiv, last revised 2 Oct 2025 (this version, v3)
DOI: https://doi.org/10.48550/arXiv.2507.17033

Abstract

As power consumption from AI training and inference continues to increase, AI accelerators are being integrated directly into the CPU. Intel’s Advanced Matrix Extensions (AMX) is one such example, debuting on the 4th generation Intel Xeon Scalable CPU. We discover a timing side and covert channel, GATEBLEED, caused by the aggressive power gating utilized to keep the CPU within operating limits. We show that the GATEBLEED side channel is a threat to AI privacy as many ML models such as transformers and CNNs make critical computationally-heavy decisions based on private values like confidence thresholds and routing logits. Timing delays from selective powering down of AMX components mean that each matrix multiplication is a potential leakage point when executed on the AMX accelerator. Our research identifies over a dozen potential gadgets across popular ML libraries (HuggingFace, PyTorch, TensorFlow, etc.), revealing that they can leak sensitive and private information. GATEBLEED poses a risk for local and remote timing inference, even under previous protective measures. GATEBLEED can be used as a high performance, stealthy remote covert channel and a generic magnifier for timing transmission channels, capable of bypassing traditional cache defenses to leak arbitrary memory addresses and evading state of the art microarchitectural attack detectors under realistic network conditions and system configurations in which previous attacks fail. We implement an end-to-end microarchitectural inference attack on a transformer model optimized with Intel AMX, achieving a membership inference accuracy of 81% and a precision of 0.89. In a CNN-based or transformer-based mixture-of-experts model optimized with Intel AMX, we leak expert choice with 100% accuracy. To our knowledge, this is the first side-channel attack on AI privacy that exploits hardware optimizations.
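As a rough illustration of how such timing observations turn into the membership-inference result quoted above (this is not the authors' pipeline), consider an early-exit transformer: inputs the model was trained on tend to clear the confidence threshold sooner, execute fewer AMX-heavy layers, and respond faster. The Python sketch below encodes that decision rule; the latencies, threshold calibration, and function names are hypothetical.

import statistics

def guess_member(latencies_ns, threshold_ns):
    # Members exit early, run fewer AMX matmuls, and finish faster;
    # the median damps network jitter in the remote setting.
    return statistics.median(latencies_ns) < threshold_ns

# Hypothetical calibration: time the model on known members and
# non-members, then split the two latency distributions down the middle.
member_ns = [41_000, 39_500, 40_200]
nonmember_ns = [52_000, 51_300, 53_100]
threshold_ns = (statistics.median(member_ns) + statistics.median(nonmember_ns)) / 2

print(guess_member([40_800, 41_900, 39_700], threshold_ns))  # -> True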
