AI tool helps visually impaired users ‘feel’ where objects are in real time

2025-11-24 Pennsylvania State University (Penn State)

A research team at Pennsylvania State University (Penn State) has developed a new AI-assisted tool that lets visually impaired people perceive the positions of surrounding objects through touch in real time. The system captures the surroundings with a small camera, and the AI instantly analyzes each object's shape, distance, and direction, relaying that information as vibration patterns to a haptic device in the user's hand. From the strength, position, and rhythm of the vibrations, users can intuitively grasp where objects are, how far away they are, and how they are distributed. The tool supplements spatial information that conventional white canes and audio guidance cannot capture, and it was shown to improve obstacle avoidance and the safety of indoor movement. In initial tests, participants showed high accuracy in estimating object direction and in spatial awareness, and movement through complex indoor environments in particular became smoother. The researchers emphasize potential applications in daily life, education, and navigation assistance.
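The release does not spell out how sensed geometry is translated into vibration, so the following is only a minimal Python sketch of the idea described above: encoding an object's estimated distance and bearing as vibration strength, rhythm, and left/right position. All names and parameter values here are illustrative assumptions, not the team's implementation.

```python
# Hypothetical sketch (not the authors' code): map one detected object's
# distance and bearing to a haptic cue, as the article describes encoding
# "where" and "how far" with vibration strength, rhythm, and position.
from dataclasses import dataclass

@dataclass
class HapticCue:
    intensity: float         # 0.0 (off) .. 1.0 (strongest); closer objects vibrate harder
    pulse_interval_s: float  # shorter interval = faster rhythm = object nearly in reach
    channel: str             # which side of the device vibrates, encoding direction

def encode_object(distance_m: float, bearing_deg: float,
                  max_range_m: float = 4.0) -> HapticCue:
    """Convert an object's estimated position into a vibration pattern."""
    # Clamp distance into the supported range, then invert it: near -> strong.
    d = min(max(distance_m, 0.0), max_range_m)
    intensity = 1.0 - d / max_range_m

    # Faster pulses as the object gets closer (0.15 s .. 1.0 s between pulses).
    pulse_interval_s = 0.15 + 0.85 * (d / max_range_m)

    # Coarse left/center/right channel from the bearing (negative = left).
    if bearing_deg < -15:
        channel = "left"
    elif bearing_deg > 15:
        channel = "right"
    else:
        channel = "center"
    return HapticCue(intensity, pulse_interval_s, channel)

if __name__ == "__main__":
    # e.g. HapticCue(intensity=0.8, pulse_interval_s≈0.32, channel='left')
    print(encode_object(distance_m=0.8, bearing_deg=-30.0))
```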

NaviSense can identify objects in a user's environment based solely on voice commands, without requiring users to preload 3D models of the objects prior to use. Upon identification, vibration and audio feedback guide users directly to the desired object. These quality-of-life features distinguish NaviSense from other forms of guidance technology. Credit: Caleb Craig/Penn State. All Rights Reserved.

<Related Information>

NaviSense: A Multimodal Assistive Mobile Application for Object Retrieval by Persons with Visual Impairment

Ajay Narayanan Sridhar, Fuli Qiao, Nelson Daniel Troncoso Aldas, Yanpei Shi, Mehrdad Mahdavi, Laurent Itti, Vijaykrishnan Narayanan
ASSETS ’25: Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility. Published: 22 October 2025
DOI: https://doi.org/10.1145/3663547.3759726

Abstract

People with visual impairments often face significant challenges in locating and retrieving objects in their surroundings. Existing assistive technologies present a trade-off: systems that offer precise guidance typically require pre-scanning or support only fixed object categories, while those with open-world object recognition lack spatial feedback for reaching the object. To address this gap, we introduce NaviSense, a mobile assistive system that combines conversational AI, vision-language models, augmented reality (AR), and LiDAR to support open-world object detection with real-time audio-haptic guidance. Users specify objects via natural language and receive continuous spatial feedback to navigate toward the target without needing prior setup. Designed with insights from a formative study and evaluated with 12 blind and low-vision participants, NaviSense significantly reduced object retrieval time and was preferred over existing tools, demonstrating the value of integrating open-world perception with precise, accessible guidance.
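As a rough illustration of the pipeline the abstract outlines (a spoken request parsed by conversational AI, open-vocabulary detection by a vision-language model, LiDAR depth for localization, then continuous audio-haptic guidance), the Python sketch below wires stand-in components together. The function names and stub outputs are hypothetical placeholders, not the published NaviSense code.

```python
# Assumed structure of an object-retrieval guidance step; the three stubs stand
# in for the conversational AI, vision-language model, and LiDAR depth lookup.
from typing import Optional, Tuple

def parse_request(utterance: str) -> str:
    """Stand-in for the conversational-AI step: extract the target object name."""
    return utterance.lower().replace("find my", "").strip()

def detect_object(frame, target: str) -> Optional[Tuple[int, int]]:
    """Stand-in for the vision-language model: pixel location of the target, or None."""
    return (320, 240)  # placeholder detection at the image center

def depth_at(pixel: Tuple[int, int]) -> float:
    """Stand-in for the LiDAR depth lookup at a pixel, in meters."""
    return 1.2

def guidance_step(frame, utterance: str) -> str:
    """One iteration of the guidance loop: locate the target and phrase a cue."""
    target = parse_request(utterance)
    hit = detect_object(frame, target)
    if hit is None:
        return f"Searching for {target}... pan the phone slowly."
    distance = depth_at(hit)
    # Horizontal offset from the image center drives the left/right cue.
    dx = hit[0] - 320
    direction = "left" if dx < -40 else "right" if dx > 40 else "ahead"
    return f"{target} is {distance:.1f} m {direction}"

if __name__ == "__main__":
    print(guidance_step(frame=None, utterance="Find my coffee mug"))
    # coffee mug is 1.2 m ahead
```

In the real system this step would run continuously, updating the audio and haptic cues as the phone moves, which is what the abstract refers to as continuous spatial feedback.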
