Robot, can you say 'Cheese'?


2024-03-27 Columbia University

The Creative Machines Lab at Columbia Engineering has spent more than five years developing a robot named Emo that can predict human facial expressions and express them at the same time as the person makes them. Emo is a human-like robotic head with 26 actuators, allowing it to produce a wide range of nuanced facial expressions. It is also equipped with high-resolution cameras that let it make eye contact, an important element of nonverbal communication. By observing human faces, Emo learns expressions and can then coexpress them simultaneously. In the future, robots like Emo that can empathize with people may help build trust between humans and machines.

<Related information>

Human-robot facial coexpression

Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, […], and Hod Lipson
Science Robotics  Published: 27 Mar 2024
DOI: https://doi.org/10.1126/scirobotics.adi4724


Editor’s summary

Humanoid robots are capable of mimicking human expressions by perceiving human emotions and responding after the human has finished their expression. However, a delayed smile can feel artificial and disingenuous compared with a smile occurring simultaneously with a companion’s smile. Hu et al. trained their anthropomorphic facial robot named Emo to display an anticipatory expression to match its human companion. Emo is equipped with 26 motors and flexible silicone skin to provide precise control over its facial expressions. The robot was trained with a video dataset of humans making expressions. By observing subtle changes in a human face, the robot could predict an approaching smile 839 milliseconds before the human smiled and adjust its face to smile simultaneously. —Melisa Yashinski

Abstract

Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human’s emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
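The control loop described in the abstract (anticipate the human's upcoming expression, then map the predicted expression through a learned inverse kinematic facial self-model to motor commands) can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual models: the linear extrapolation stands in for the learned expression predictor, the clamping function stands in for the learned inverse model, and the toy four-element landmark vectors stand in for real facial features.

```python
from collections import deque

# Hypothetical constants (not from the paper): toy landmark vector size
# and the anticipation horizon, set to the ~839 ms the paper reports.
NUM_LANDMARKS = 4
PREDICTION_HORIZON_S = 0.839

def predict_future_landmarks(history):
    """Toy stand-in for the learned predictor: linearly extrapolate the
    last two observed (time, landmarks) samples over the horizon."""
    (t0, x0), (t1, x1) = history[-2], history[-1]
    scale = PREDICTION_HORIZON_S / (t1 - t0)
    return [a + (b - a) * scale for a, b in zip(x0, x1)]

def inverse_facial_model(landmarks):
    """Toy stand-in for the learned inverse kinematic self-model:
    map a target landmark vector to motor commands, clamped to the
    actuator range [0, 1]."""
    return [max(0.0, min(1.0, v)) for v in landmarks]

def coexpression_step(history):
    """One control step: anticipate the human's upcoming expression and
    return motor commands so the robot can execute it simultaneously."""
    target = predict_future_landmarks(history)
    return inverse_facial_model(target)

# Example: mouth-corner features begin to rise, hinting at a smile.
history = deque(maxlen=16)
history.append((0.00, [0.10, 0.10, 0.10, 0.10]))  # neutral face
history.append((0.05, [0.15, 0.12, 0.10, 0.10]))  # onset of a smile
commands = coexpression_step(history)
```

The key design point the abstract emphasizes is timing: because the commands are computed from a prediction of the face ~839 ms in the future, the robot's expression can land simultaneously with the human's rather than lagging behind it.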
