2025-09-04 University of California, Riverside (UCR)

Open-source AI models have the potential for misuse without safeguards. (sankai/Getty)
<Related information>
- https://news.ucr.edu/articles/2025/09/04/ucr-researchers-fortify-ai-against-rogue-rewiring
- https://arxiv.org/abs/2411.04291
Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models
Saketh Bachu, Erfan Shayegani, Rohit Lal, Trishna Chakraborty, Arindam Dutta, Chengyu Song, Yue Dong, Nael Abu-Ghazaleh, Amit K. Roy-Chowdhury
arXiv last revised 19 Jun 2025 (this version, v2)
DOI: https://doi.org/10.48550/arXiv.2411.04291
Abstract
Vision-language models (VLMs) have improved significantly in their capabilities, but their complex architecture makes their safety alignment challenging. In this paper, we reveal an uneven distribution of harmful information across the intermediate layers of the image encoder and show that skipping a certain set of layers and exiting early can increase the chance of the VLM generating harmful responses. We call this the "Image enCoder Early-exiT" (ICET) vulnerability. Our experiments on three VLMs (LLaVA-1.5, LLaVA-NeXT, and Llama 3.2) show that performing early exits from the image encoder significantly increases the likelihood of generating harmful outputs. To tackle this, we propose a simple yet effective modification of the Clipped Proximal Policy Optimization (Clip-PPO) algorithm for performing layer-wise multimodal RLHF on VLMs, which we term Layer-Wise PPO (L-PPO). We evaluate L-PPO across three multimodal datasets and show that it consistently reduces the harmfulness caused by early exits.
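The abstract builds on the standard clipped PPO surrogate objective. As a minimal sketch, the clipped objective for one sample can be computed as below; the layer-wise averaging over early-exit points is a hypothetical illustration of the L-PPO idea, not the paper's exact formulation.

```python
def clip_ppo_objective(ratio, advantage, eps=0.2):
    """Standard Clip-PPO surrogate for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped_ratio = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped_ratio * advantage)

def layerwise_objective(ratios_per_layer, advantage, eps=0.2):
    """Hypothetical layer-wise variant: average the clipped surrogate over
    policy ratios obtained from each early-exit layer of the image encoder,
    so every intermediate exit contributes to the alignment signal."""
    terms = [clip_ppo_objective(r, advantage, eps) for r in ratios_per_layer]
    return sum(terms) / len(terms)

# Usage: a large ratio with positive advantage is clipped at 1 + eps,
# preventing an oversized policy update from any single layer's exit.
single = clip_ppo_objective(1.5, 1.0)          # clipped to 1.2 * 1.0
multi = layerwise_objective([0.9, 1.1, 1.5], 1.0)
```

The clipping keeps the per-layer update bounded, which is what makes a layer-wise application tractable: each early-exit point receives an alignment gradient of controlled magnitude.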


