2026-02-25 Georgia Institute of Technology

<Related information>
- https://research.gatech.edu/your-ai-researchers-crack-ai-blackbox
- https://www.ndss-symposium.org/ndss-paper/achieving-zen-combining-mathematical-and-programmatic-deep-learning-model-representations-for-attribution-and-reuse/
- https://www.ndss-symposium.org/wp-content/uploads/2026-s1628-paper.pdf
Achieving Zen: Combining Mathematical and Programmatic Deep Learning Model Representations for Attribution and Reuse
David Oygenblik, Dinko Dermendzhiev, Filippos Sofias, Mingxuan Yao, Haichuan Xu, Runze Zhang, Jeman Park, Amit Kumar Sikder, Brendan Saltaformaggio
Network and Distributed System Security (NDSS) Symposium
Prior work has developed techniques that extract deep learning (DL) models in universal formats from system memory or program binaries for security analysis. Unfortunately, these techniques do not recover the DL model’s programmatic representation, which is required for model reuse and for any white-box analysis. To address this, we propose a novel recovery methodology, prototyped as ZEN, that automatically recovers a DL model’s programmatic representation, complementing the recovery of the mathematical representation by prior work. ZEN identifies novel code in an unknown DL system relative to a base model and generates patches so that the recovered DL model can be reused. We evaluated ZEN on 21 state-of-the-art DL models spanning the language and vision domains, including Llama 3 and YOLOv10. ZEN attributed custom models to their base models with 100% accuracy, enabling model reuse.
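The attribution step described above (matching a custom model back to the base model it was derived from) can be illustrated with a minimal conceptual sketch. Everything here is invented for illustration: the toy layer-signature dictionaries, the `attribute` function, and the overlap score are assumptions, not ZEN's actual analysis, which operates on recovered mathematical and programmatic model representations.

```python
# Hypothetical sketch: attribute a custom model to a candidate base model
# by measuring how much of its layer signature (name -> weight shape)
# it shares with each base. This is NOT the paper's method, only an analogy.

def attribute(custom, bases):
    """Return (base_name, score) for the base overlapping most with `custom`."""
    def overlap(a, b):
        # Fraction of `a`'s layers that appear in `b` with identical shapes.
        shared = {k for k in a if k in b and a[k] == b[k]}
        return len(shared) / max(len(a), 1)
    best = max(bases, key=lambda name: overlap(custom, bases[name]))
    return best, overlap(custom, bases[best])

# Toy data: a "custom" model is base_a plus one novel head layer,
# mirroring how ZEN treats custom code as additions relative to a base.
base_a = {"embed": (1024, 512), "block0": (512, 512), "head": (512, 100)}
base_b = {"embed": (2048, 768), "block0": (768, 768), "head": (768, 10)}
custom = dict(base_a, extra_head=(512, 7))

name, score = attribute(custom, {"base_a": base_a, "base_b": base_b})
print(name, round(score, 2))  # → base_a 0.75
```

In this analogy, the unmatched `extra_head` layer plays the role of the novel code ZEN isolates relative to the base model before generating reuse patches.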


