2025-12-09 Texas A&M University

A pair of robotic dogs with the ability to navigate through artificial intelligence climb concrete obstacles during a demonstration of their capabilities. Credit: Logan Jinks/Texas A&M University College of Engineering
<Related Information>
- https://stories.tamu.edu/news/2025/12/09/meet-the-ai-powered-robotic-dog-ready-to-revolutionize-emergency-response/
- https://ieeexplore.ieee.org/document/11078086
A Walk to Remember: MLLM Memory-Driven Visual Navigation
Sandun S. Vitharana; Sanjaya Mallikarachchi; HG Chamika Wijayagrahi; Nuralem Abizov; Amanzhol Bektemessov; Aidos Ibrayev,…
2025 22nd International Conference on Ubiquitous Robots Date Added to IEEE Xplore: 18 July 2025
DOI:https://doi.org/10.1109/UR65550.2025.11078086
Abstract
This paper presents a novel framework for memory-based navigation for terrestrial robots, utilizing a customized multimodal large language model (MLLM) to interpret visual inputs and generate navigation commands. The system employs a Unitree GO1 robot equipped with a camera to capture environmental images, which are processed by the customized MLLM for navigation. By leveraging a memory-based approach, the robot efficiently reuses previously traversed paths, reducing the need for re-exploration and enhancing navigation efficiency. The hybrid controller in this work features a deliberation unit and a reactive controller for high-level commands and robot alignment. Experimental validation in a hallway-like environment demonstrates that memory-driven navigation improves path retracing and overall performance.
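The core idea in the abstract, reusing commands from previously traversed locations instead of re-querying the MLLM, can be illustrated with a minimal sketch. This is not the authors' implementation; the `MemoryNavigator` class, the tuple-based scene descriptor (a stand-in for an image embedding), the similarity threshold, and the `query_mllm` stub are all assumptions made here for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    feature: tuple   # simplified scene descriptor (stand-in for an image embedding)
    command: str     # high-level navigation command, e.g. "forward", "turn_left"

@dataclass
class MemoryNavigator:
    entries: list = field(default_factory=list)

    def _similarity(self, a, b):
        # Cosine similarity between two feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def decide(self, feature, query_mllm, threshold=0.95):
        # Memory path: if the current scene closely matches a previously
        # traversed one, reuse the remembered command (no MLLM call).
        best = max(self.entries,
                   key=lambda e: self._similarity(feature, e.feature),
                   default=None)
        if best and self._similarity(feature, best.feature) >= threshold:
            return best.command, "memory"
        # Exploration path: fall back to the (expensive) MLLM query
        # and memorize the result for future reuse.
        command = query_mllm(feature)
        self.entries.append(MemoryEntry(feature, command))
        return command, "mllm"
```

On a first visit the navigator pays the MLLM cost and stores the result; on a revisit with a near-identical scene descriptor, it answers from memory, which is the efficiency gain the abstract attributes to path reuse.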


