AI-Powered Tool Helps Computer Architects Boost Processor Performance

2026-04-15 North Carolina State University (NC State)

North Carolina State Universityの研究チームは、コンピュータのキャッシュ設計を最適化する新ツール「CacheMind」を開発した。キャッシュは高速処理の要だが、従来は設計が複雑で最適化が困難だった。本ツールは機械学習を活用し、アプリケーションごとの最適なキャッシュ構成を自動探索することで、性能向上とエネルギー効率改善を同時に実現する。実験では、従来手法と比べて性能や消費電力の面で大幅な改善が確認された。さらに、設計時間の短縮にも寄与し、半導体設計プロセスの効率化が期待される。今後は高性能コンピューティングやデータセンターなど幅広い分野への応用が見込まれる。

<Related Information>

CacheMind: From Miss Rates to Why – Natural-Language, Trace-Grounded Reasoning for Cache Replacement

ASPLOS '26: Proceedings of the 31st ACM International Conference on Architectural Support for Programming Languages and Operating Systems
Published: 22 March 2026
DOI: https://doi.org/10.1145/3779212.3790136

Abstract

Cache replacement remains a challenging problem in CPU microarchitecture, often addressed using hand-crafted heuristics that limit cache performance. Cache data analysis requires parsing millions of trace entries with manual filtering, making the process slow and non-interactive.
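To make the bottleneck concrete, the following minimal sketch shows what the manual workflow described above looks like: scanning a raw trace to count evictions attributed to each program counter. The trace format here (PC, address, hit/miss outcome, evicted line) is an assumption for illustration, not the format used in the paper.

```python
# Sketch of manual cache-trace filtering, assuming a hypothetical line format:
#   <PC> <address> <hit|miss> <evicted_line_or_->
# Real traces contain millions of such entries, which is why this
# scan-and-count loop is slow and non-interactive at scale.
from collections import Counter

def evictions_by_pc(trace_lines):
    """Count evictions attributed to each program counter in a raw trace."""
    counts = Counter()
    for line in trace_lines:
        pc, addr, outcome, evicted = line.split()
        if outcome == "miss" and evicted != "-":
            counts[pc] += 1
    return counts

trace = [
    "0x400a10 0x7f00 miss 0x6e00",
    "0x400a10 0x7f40 hit  -",
    "0x400b20 0x8000 miss 0x7f00",
    "0x400a10 0x7f80 miss 0x8000",
]
print(evictions_by_pc(trace).most_common(1))  # → [('0x400a10', 2)]
```

Answering a single "why" question this way means writing a new ad-hoc filter for every hypothesis, which is the pain point CacheMind targets.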

To address this, we introduce CacheMind, a conversational tool that uses Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs) to enable semantic reasoning over cache traces. Architects can now ask natural-language questions like "Why is the memory access associated with PC X causing more evictions?" and, for the first time, receive trace-grounded, human-readable answers linked to program semantics.
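The retrieval step of such a pipeline can be sketched as follows. This toy keyword-overlap scorer is only an assumed stand-in for the paper's Sieve and Ranger retrievers (whose internals are not described here): it selects the trace entries most relevant to a natural-language question, which would then be passed to an LLM as grounded context.

```python
# Minimal illustration of the retrieval half of a RAG pipeline over trace
# entries. The trace-entry text format and the overlap heuristic are
# assumptions for illustration only.
def retrieve(question, trace_lines, k=2):
    """Rank trace entries by token overlap with the question; return top k."""
    q_tokens = set(question.lower().replace("?", "").split())
    scored = [(len(q_tokens & set(line.lower().split())), line)
              for line in trace_lines]
    scored.sort(key=lambda pair: -pair[0])  # stable sort, highest overlap first
    return [line for score, line in scored[:k] if score > 0]

trace = [
    "PC 0x400a10 accessed 0x7f00: miss, evicted 0x6e00",
    "PC 0x400a10 accessed 0x7f80: miss, evicted 0x8000",
    "PC 0x400b20 accessed 0x8000: hit",
]
context = retrieve("Why is PC 0x400a10 causing evictions?", trace)
# `context` now holds the evidence lines that would be prepended to the
# LLM prompt, keeping the generated answer grounded in the actual trace.
```

A production retriever would use embeddings or structured indexing rather than token overlap, but the interface (question in, relevant trace evidence out) is the same.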

To evaluate CacheMind, we present CacheMindBench, the first verified benchmark suite for LLM-based reasoning on the cache replacement problem. Using the Sieve retriever, CacheMind achieves 66.67% on 75 unseen trace-grounded questions and 84.80% on 25 unseen policy-specific reasoning tasks; with Ranger, it achieves 89.33% and 64.80% on the same evaluations. Additionally, with Ranger, CacheMind achieves 100% accuracy on 4 out of 6 categories in the trace-grounded tier of CacheMindBench. Compared to LlamaIndex (10% retrieval success), Sieve achieves 60% and Ranger achieves 90%, demonstrating that existing Retrieval-Augmented Generation (RAG) pipelines are insufficient for precise, trace-grounded microarchitectural reasoning.

1601 Computer Engineering