UB researchers probe safety of AI in driverless cars, find vulnerabilities


2024-08-30 University at Buffalo (UB)

UB’s autonomous Lincoln MKZ sedan is one of the vehicles that researchers have used to test vulnerabilities to attacks.

University at Buffalo researchers examined the safety of AI in autonomous vehicles and found vulnerabilities to attack. For example, by strategically placing 3D-printed objects on a vehicle, an attacker can make the vehicle undetectable to AI-based radar systems. The findings could have implications for the automotive, technology, and insurance industries, and underscore the need to make AI systems more secure. The researchers are now exploring ways to defend against attacks that exploit these AI vulnerabilities.

<Related Information>

Malicious Attacks against Multi-Sensor Fusion in Autonomous Driving

Yi Zhu, Chenglin Miao, Hongfei Xue, Yunnan Yu, Lu Su, Chunming Qiao
ACM MobiCom ’24: Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, Published: 29 May 2024
DOI: https://doi.org/10.1145/3636534.3649372

Abstract

Multi-sensor fusion has been widely used by autonomous vehicles (AVs) to integrate the perception results from different sensing modalities including LiDAR, camera and radar. Despite the rapid development of multi-sensor fusion systems in autonomous driving, their vulnerability to malicious attacks has not been well studied. Although some prior works have studied attacks against the perception systems of AVs, they only consider a single sensing modality or a camera-LiDAR fusion system, and thus cannot attack a sensor fusion system based on LiDAR, camera, and radar. To fill this research gap, in this paper, we present the first study on the vulnerability of multi-sensor fusion systems that employ LiDAR, camera, and radar. Specifically, we propose a novel attack method that can simultaneously attack all three types of sensing modalities using a single type of adversarial object. The adversarial object can be easily fabricated at low cost, and the proposed attack can be easily performed with high stealthiness and flexibility in practice. Extensive experiments based on a real-world AV testbed show that the proposed attack can continuously hide a target vehicle from the perception system of a victim AV using only two small adversarial objects.
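The attack's success criterion in the abstract is whether the target vehicle stays hidden from the victim AV's fused perception. The snippet below is a minimal, hypothetical evaluation sketch, not code from the paper: it assumes stand-in detector functions for each modality and a simple any-modality late-fusion rule, and measures the fraction of frames in which the fused pipeline misses the target.

```python
# Hypothetical sketch: how often is the target vehicle hidden from each
# modality and from a simple late-fusion rule? All detector functions are
# stand-ins for illustration, not the models used in the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    lidar_points: object   # LiDAR point cloud for this frame (placeholder type)
    camera_image: object   # camera image
    radar_returns: object  # radar point cloud / heatmap

def lidar_detects_target(frame: Frame) -> bool:
    return False  # stand-in for a LiDAR DNN detector

def camera_detects_target(frame: Frame) -> bool:
    return False  # stand-in for a camera DNN detector

def radar_detects_target(frame: Frame) -> bool:
    return False  # stand-in for a radar DNN detector

def fused_detects_target(frame: Frame) -> bool:
    # Assumed late-fusion rule: the target counts as detected if any single
    # modality reports it, so a successful attack must defeat all three.
    return (lidar_detects_target(frame)
            or camera_detects_target(frame)
            or radar_detects_target(frame))

def hiding_success_rate(frames: List[Frame]) -> float:
    # Fraction of frames in which the fused pipeline misses the target,
    # i.e. the target vehicle stays hidden from the victim AV.
    if not frames:
        return 0.0
    return sum(not fused_detects_target(f) for f in frames) / len(frames)
```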

TileMask: A Passive-Reflection-based Attack against mmWave Radar Object Detection in Autonomous Driving

Yi Zhu, Chenglin Miao, Hongfei Xue, Zhengxiong Li, Yunnan Yu, Wenyao Xu, Lu Su, Chunming Qiao
CCS ’23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Published: 21 November 2023
DOI: https://doi.org/10.1145/3576915.3616661

Abstract

In autonomous driving, millimeter wave (mmWave) radar has been widely adopted for object detection because of its robustness and reliability under various weather and lighting conditions. For radar object detection, deep neural networks (DNNs) are becoming increasingly important because they are more robust and accurate, and can provide rich semantic information about the detected objects, which is critical for autonomous vehicles (AVs) to make decisions. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Despite the rapid development of DNN-based radar object detection models, there have been no studies on their vulnerability to adversarial attacks. Although some spoofing attack methods have been proposed to attack the radar sensor by actively transmitting specific signals using special devices, these attacks require sub-nanosecond-level synchronization between the devices and the radar and are very costly, which limits their practicability in the real world. In addition, these attack methods cannot effectively attack DNN-based radar object detection. To address the above problems, in this paper, we investigate the possibility of using a few adversarial objects to attack DNN-based radar object detection models through passive reflection. These objects can be easily fabricated using 3D printing and metal foils at low cost. By placing these adversarial objects at specific locations on a target vehicle, we can easily fool the victim AV’s radar object detection model. The experimental results demonstrate that the attacker can achieve the attack goal by using only two adversarial objects and concealing them as car signs, which offers good stealthiness and flexibility. To the best of our knowledge, this is the first study on passive-reflection-based attacks against DNN-based radar object detection models using low-cost, readily available and easily concealable geometric shaped objects.
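To make the placement step concrete, here is a rough, hypothetical sketch of how one might search for positions for two passive reflectors on a target vehicle: exhaustively score every pair of candidate mounting spots by the radar detector's confidence in the target and keep the pair that lowers it most. The `radar_target_confidence` function, the candidate grid, and the exhaustive pairing are all assumptions for illustration; the paper's actual optimization is not reproduced here.

```python
# Hypothetical search for placing two passive reflectors ("tiles") on a
# target vehicle so that a DNN radar detector's confidence in the target
# drops. `radar_target_confidence` is a stand-in: in practice it would
# measure or simulate the radar response with tiles at the given spots
# and run the detector on it.
from itertools import combinations
from typing import List, Tuple

Position = Tuple[float, float, float]  # assumed (x, y, z) on the vehicle body

def radar_target_confidence(tile_positions: List[Position]) -> float:
    return 1.0  # stand-in detector score; 1.0 means confidently detected

def best_tile_placement(candidates: List[Position],
                        num_tiles: int = 2) -> Tuple[List[Position], float]:
    # Try every combination of candidate spots and keep the one that
    # minimizes the detector's confidence in the target vehicle.
    best_combo: List[Position] = []
    best_score = float("inf")
    for combo in combinations(candidates, num_tiles):
        score = radar_target_confidence(list(combo))
        if score < best_score:
            best_combo, best_score = list(combo), score
    return best_combo, best_score
```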

Can We Use Arbitrary Objects to Attack LiDAR Perception in Autonomous Driving?

Yi Zhu, Chenglin Miao, Tianhang Zheng, Foad Hajiaghajani, Lu Su, Chunming Qiao
CCS ’21: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, Published: 13 November 2021
DOI: https://doi.org/10.1145/3460120.3485377

Abstract

As an effective way to acquire accurate information about the driving environment, LiDAR perception has been widely adopted in autonomous driving. State-of-the-art LiDAR perception systems mainly rely on deep neural networks (DNNs) to achieve good performance. However, DNNs have been demonstrated to be vulnerable to adversarial attacks. Although a few works have studied adversarial attacks against LiDAR perception systems, these attacks have limitations in feasibility, flexibility, and stealthiness when performed in real-world scenarios. In this paper, we investigate an easier way to perform effective adversarial attacks with high flexibility and good stealthiness against LiDAR perception in autonomous driving. Specifically, we propose a novel attack framework with which the attacker can identify a few adversarial locations in the physical space. By placing arbitrary objects with reflective surfaces around these locations, the attacker can easily fool the LiDAR perception systems. Extensive experiments are conducted to evaluate the performance of the proposed attack, and the results show that our proposed attack can achieve a success rate of more than 90%. In addition, our real-world study demonstrates that the proposed attack can be easily performed using only two commercial drones. To the best of our knowledge, this paper presents the first study on the effect of adversarial locations on LiDAR perception models’ behaviors, the first investigation of how to attack LiDAR perception systems using arbitrary objects with reflective surfaces, and the first attack against LiDAR perception systems using commercial drones in the physical world. Potential defense strategies are also discussed to mitigate the proposed attacks.
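The abstract's key idea is that certain physical locations are adversarial: placing almost any reflective object near them degrades the detector. A crude, hypothetical way to probe for such locations in simulation is sketched below: inject a small cluster of high-reflectivity points at each candidate position in the point cloud and check whether a stand-in LiDAR detector still reports the target. The cluster model, point-cloud format, and detector stub are assumptions, not the attack framework from the paper.

```python
# Hypothetical probe for "adversarial locations": inject a small cluster of
# reflective-object points at a candidate spot and check whether a LiDAR
# detector still reports the target vehicle. Point clouds are assumed to be
# (N, 4) arrays of x, y, z, intensity.
import numpy as np
from typing import List, Tuple

def reflective_cluster(center: Tuple[float, float, float],
                       n_points: int = 60,
                       radius: float = 0.3) -> np.ndarray:
    # Crude stand-in for the returns of a small reflective object: points
    # scattered around `center` with a near-maximal intensity value.
    xyz = np.random.normal(loc=center, scale=radius, size=(n_points, 3))
    intensity = np.full((n_points, 1), 0.9)
    return np.hstack([xyz, intensity])

def target_detected(point_cloud: np.ndarray) -> bool:
    return True  # stand-in for a LiDAR DNN detector checking the target box

def find_adversarial_locations(scene: np.ndarray,
                               candidates: List[Tuple[float, float, float]]
                               ) -> List[Tuple[float, float, float]]:
    # Keep the candidate locations whose injected cluster makes the
    # detector miss the target in this scene.
    adversarial = []
    for loc in candidates:
        perturbed = np.vstack([scene, reflective_cluster(loc)])
        if not target_detected(perturbed):
            adversarial.append(loc)
    return adversarial
```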
