2025-10-19 The University of Tokyo

Comparison of (a) grasping with a conventional single model and (b) GraspMAS, in which Planner, Coder, and Observer agents cooperate
<Related information>
- https://www.i.u-tokyo.ac.jp/news/press/2025/202510192686.shtml
- https://www.i.u-tokyo.ac.jp/news/files/20251019pr_graspmas_zero-shot.pdf
- https://arxiv.org/abs/2506.18448
GraspMAS: Zero-Shot Language-driven Grasp Detection with Multi-Agent System
Quang Nguyen, Tri Le, Huy Nguyen, Thieu Vo, Tung D. Ta, Baoru Huang, Minh N. Vu, Anh Nguyen
arXiv last revised 19 Jul 2025 (this version, v2)
DOI: https://doi.org/10.48550/arXiv.2506.18448
Abstract
Language-driven grasp detection has the potential to revolutionize human-robot interaction by allowing robots to understand and execute grasping tasks based on natural language commands. However, existing approaches face two key challenges. First, they often struggle to interpret complex text instructions or operate ineffectively in densely cluttered environments. Second, most methods require a training or finetuning step to adapt to new domains, limiting their generalization in real-world applications. In this paper, we introduce GraspMAS, a new multi-agent system framework for language-driven grasp detection. GraspMAS is designed to reason through ambiguities and improve decision-making in real-world scenarios. Our framework consists of three specialized agents: Planner, responsible for strategizing complex queries; Coder, which generates and executes source code; and Observer, which evaluates the outcomes and provides feedback. Extensive experiments on two large-scale datasets demonstrate that GraspMAS significantly outperforms existing baselines. Additionally, robot experiments conducted in both simulation and real-world settings further validate the effectiveness of our approach. Our project page is available at this https URL
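The abstract describes an iterative Planner → Coder → Observer loop: the Planner decomposes the language query, the Coder generates and executes code, and the Observer feeds evaluation results back to the Planner. The following is a minimal Python sketch of that control flow only; every name here (planner, coder, observer, detect_grasp, Feedback) is a hypothetical stand-in, not the authors' API, and in the real system each agent would be backed by a large language/vision model rather than the toy string logic shown.

```python
"""Minimal sketch of a Planner-Coder-Observer feedback loop.

Illustrative only: all helpers are hypothetical stand-ins for the
LLM-backed agents described in the GraspMAS abstract.
"""
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Feedback:
    """Observer's verdict on one round of the loop."""
    success: bool
    notes: str = ""


def planner(query: str, feedback: Feedback | None) -> str:
    """Turn the language command (plus any feedback) into the next plan.
    Hypothetical: a real Planner would prompt an LLM here."""
    if feedback is None:
        return f"locate the object referenced by {query!r}"
    return f"revise strategy for {query!r} given: {feedback.notes}"


def coder(plan: str) -> str:
    """Render the plan as executable perception/grasping code.
    Hypothetical stand-in for LLM code generation and execution."""
    return f"detect_grasp(target_from({plan!r}))"


def observer(result: str) -> Feedback:
    """Evaluate the executed outcome and report back to the Planner.
    Hypothetical: a real Observer would inspect detections or images."""
    ok = "detect_grasp" in result
    return Feedback(success=ok, notes="grasp pose found" if ok else "retry")


def grasp_loop(query: str, max_rounds: int = 3) -> str | None:
    """Iterate Planner -> Coder -> Observer until success or give up."""
    feedback: Feedback | None = None
    for _ in range(max_rounds):
        plan = planner(query, feedback)
        code = coder(plan)
        feedback = observer(code)
        if feedback.success:
            return code
    return None


if __name__ == "__main__":
    print(grasp_loop("pick up the red mug behind the cereal box"))
```

The point of the sketch is the closed feedback cycle: unlike the single-model pipeline in panel (a) of the figure, a failed round does not end the episode but is folded back into the Planner's next attempt, which is how the paper claims the system reasons through ambiguous queries without any finetuning.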