2026-04-15 North Carolina State University (NC State)
<Related links>
- https://news.ncsu.edu/2026/04/cachemind-tool-computer-architecture/
- https://dl.acm.org/doi/10.1145/3779212.3790136
CacheMind: From Miss Rates to Why – Natural-Language, Trace-Grounded Reasoning for Cache Replacement
ASPLOS ’26: Proceedings of the 31st ACM International Conference on Architectural Support for Programming Languages and Operating Systems Published: 22 March 2026
DOI:https://doi.org/10.1145/3779212.3790136
Abstract
Cache replacement remains a challenging problem in CPU microarchitecture and is often addressed with hand-crafted heuristics that limit cache performance. Analyzing cache behavior requires parsing millions of trace entries with manual filtering, making the process slow and non-interactive.
To address this, we introduce CacheMind, a conversational tool that uses Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs) to enable semantic reasoning over cache traces. Architects can now ask natural-language questions such as "Why is the memory access associated with PC X causing more evictions?" and, for the first time, receive trace-grounded, human-readable answers linked to program semantics.
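To make the idea of trace-grounded retrieval concrete, here is a minimal sketch of the retrieval step such a pipeline might perform before handing context to an LLM. All names (`TraceEntry`, `retrieve_by_pc`) and the trace format are illustrative assumptions, not CacheMind's actual implementation.

```python
# Minimal sketch of exact-match, trace-grounded retrieval in the spirit of
# a RAG pipeline over cache traces. The data model below is a hypothetical
# simplification, not the paper's trace format.
from dataclasses import dataclass

@dataclass
class TraceEntry:
    pc: int          # program counter of the memory access
    set_index: int   # cache set touched by the access
    evicted: bool    # whether this access triggered an eviction

def retrieve_by_pc(trace, pc):
    """Exact-match retrieval: keep only entries for the queried PC.
    Generic semantic retrievers often miss such precise filters, which is
    one reason trace-grounded questions need specialized retrieval."""
    return [e for e in trace if e.pc == pc]

def eviction_rate(entries):
    """Fraction of the retrieved accesses that caused an eviction."""
    if not entries:
        return 0.0
    return sum(e.evicted for e in entries) / len(entries)

# Toy trace: PC 0x400A10 evicts far more often than PC 0x400B20.
trace = [
    TraceEntry(0x400A10, 3, True),
    TraceEntry(0x400A10, 3, True),
    TraceEntry(0x400A10, 3, False),
    TraceEntry(0x400B20, 5, False),
    TraceEntry(0x400B20, 5, True),
]

hot = retrieve_by_pc(trace, 0x400A10)
print(len(hot), round(eviction_rate(hot), 2))  # → 3 0.67
```

The retrieved entries, plus a statistic like the eviction rate above, would form the grounded context an LLM uses to explain *why* a given PC evicts more often.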
To evaluate CacheMind, we present CacheMindBench, the first verified benchmark suite for LLM-based reasoning on the cache replacement problem. Using the Sieve retriever, CacheMind achieves 66.67% on 75 unseen trace-grounded questions and 84.80% on 25 unseen policy-specific reasoning tasks; with Ranger, it achieves 89.33% and 64.80% on the same evaluations. Additionally, with Ranger, CacheMind achieves 100% accuracy on 4 out of 6 categories in the trace-grounded tier of CacheMindBench. Compared to LlamaIndex (10% retrieval success), Sieve achieves 60% and Ranger achieves 90%, demonstrating that existing Retrieval-Augmented Generation (RAG) systems are insufficient for precise, trace-grounded microarchitectural reasoning.


