2023-10-31 Carnegie Mellon University
◆ This research gives a robot the ability to rely on its own perception to overcome obstacles and control its own movements. The research team has published the results as a paper with accompanying videos and has released the project's source code for other developers. As a result, the robot can adapt on its own and perform complex, parkour-style maneuvers without external assistance.
<Related Information>
- https://www.cs.cmu.edu/news/2023/robot-parkour
- https://arxiv.org/pdf/2309.14341.pdf
- https://extreme-parkour.github.io/
Extreme Parkour with Legged Robots
Xuxin Cheng, Kexin Shi, Ananye Agarwal, Deepak Pathak
arXiv 25 Sep 2023
Abstract
Humans can perform parkour by traversing obstacles in a highly dynamic fashion requiring precise eye-muscle coordination and movement. Getting robots to do the same task requires overcoming similar challenges. Classically, this is done by independently engineering perception, actuation, and control systems to very low tolerances. This restricts them to tightly controlled settings such as a predetermined obstacle course in labs. In contrast, humans are able to learn parkour through practice without significantly changing their underlying biology. In this paper, we take a similar approach to developing robot parkour on a small low-cost robot with imprecise actuation and a single front-facing depth camera for perception which is low-frequency, jittery, and prone to artifacts. We show how a single neural net policy operating directly from a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps, and generalize to novel obstacle courses with different physical properties. Parkour videos at https://extreme-parkour.github.io/.
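To make the abstract's "single neural net policy operating directly from a camera image" concrete, below is a minimal sketch of such an end-to-end policy in PyTorch. This is not the authors' implementation (their actual code is at the project page); the network sizes, the 64x64 depth-image resolution, the proprioceptive state dimension, and the 12-joint action space are all assumptions chosen only to illustrate the idea of mapping a noisy depth frame plus robot state directly to joint targets.

```python
# Hypothetical sketch: depth image + proprioception -> joint targets.
# Shapes and layer sizes are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn


class DepthParkourPolicy(nn.Module):
    def __init__(self, proprio_dim: int = 48, action_dim: int = 12):
        super().__init__()
        # Small CNN encoder for a single (1, 64, 64) front-facing depth frame.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size once with a dummy forward pass.
        with torch.no_grad():
            feat_dim = self.depth_encoder(torch.zeros(1, 1, 64, 64)).shape[1]
        # MLP head fuses visual features with proprioception (joint angles,
        # velocities, orientation, etc.) and outputs target joint positions.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + proprio_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, depth: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        features = self.depth_encoder(depth)
        return self.head(torch.cat([features, proprio], dim=-1))


# One control step: a single depth frame and robot state in, 12 joint targets out.
policy = DepthParkourPolicy()
actions = policy(torch.zeros(1, 1, 64, 64), torch.zeros(1, 48))
print(actions.shape)  # torch.Size([1, 12])
```

In the paper's setting, a policy of this general shape would be trained in simulation with large-scale RL and then run directly from the real robot's jittery, artifact-prone depth stream, with no separately engineered perception or control modules.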