Engineering Safer Machine Learning


2023-06-14 University of Pittsburgh

◆A new study challenges the idea that an unbounded number of exploratory trials is required to learn safe behavior. The researchers propose an approach for learning safe policies with probability-one guarantees, while managing the tradeoffs between optimality, the level of exposure to unsafe events, and how quickly unsafe actions are detected.
◆To validate their theoretical findings, the researchers ran simulations that confirmed the identified tradeoffs. The results also suggest that incorporating safety constraints can speed up the learning process.
◆The work is expected to have important implications for robotics, autonomous systems, and artificial intelligence.

<Related information>

Learning to Act Safely With Limited Exposure and Almost Sure Certainty

Agustin Castellano, Hancheng Min, Juan Andres Bazerque, Enrique Mallada
IEEE Transactions on Automatic Control, Published: 31 January 2023
DOI:https://doi.org/10.1109/TAC.2023.3240925

Abstract

This article puts forward the concept that learning to take safe actions in unknown environments, even with probability one guarantees, can be achieved without the need for an unbounded number of exploratory trials. This is indeed possible, provided that one is willing to navigate tradeoffs between optimality, level of exposure to unsafe events, and the maximum detection time of unsafe actions. We illustrate this concept in two complementary settings. We first focus on the canonical multiarmed bandit problem and study the intrinsic tradeoffs of learning safety in the presence of uncertainty. Under mild assumptions on sufficient exploration, we provide an algorithm that provably detects all unsafe machines in an (expected) finite number of rounds. The analysis also unveils a tradeoff between the number of rounds needed to secure the environment and the probability of discarding safe machines. We then consider the problem of finding optimal policies for a Markov decision process (MDP) with almost sure constraints. We show that the action-value function satisfies a barrier-based decomposition that allows for the identification of feasible policies independently of the reward process. Using this decomposition, we develop a barrier-learning algorithm, that identifies such unsafe state–action pairs in a finite expected number of steps. Our analysis further highlights a tradeoff between the time lag for the underlying MDP necessary to detect unsafe actions, and the level of exposure to unsafe events. Simulations corroborate our theoretical findings, further illustrating the aforementioned tradeoffs, and suggesting that safety constraints can speed up the learning process.
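The bandit setting described in the abstract — exploring machines under uncertainty and provably flagging every unsafe one within a finite expected number of rounds — can be illustrated with a minimal sketch. This is not the paper's algorithm; the round-robin exploration scheme, the binary damage signal, and the function name `detect_unsafe_arms` are all illustrative assumptions. The key idea it shows is that an arm whose damage event occurs with positive probability is discarded in finite expected time (geometrically distributed detection), while a truly safe arm is never flagged.

```python
import random

def detect_unsafe_arms(arms, rounds):
    """Round-robin exploration of bandit arms (illustrative sketch).

    Each arm is a zero-argument callable returning a damage indicator
    (1 = unsafe event observed, 0 = no damage). An arm is declared
    unsafe the first time its damage signal fires, and is no longer
    pulled afterwards. Returns the set of indices flagged as unsafe.
    """
    unsafe = set()
    for t in range(rounds):
        i = t % len(arms)          # cycle through arms in order
        if i in unsafe:
            continue               # discarded arms are not pulled again
        if arms[i]():              # pull arm i; any damage event flags it
            unsafe.add(i)
    return unsafe

# Example: one safe arm and one arm that causes damage 30% of the time.
random.seed(0)
arms = [lambda: 0,
        lambda: 1 if random.random() < 0.3 else 0]
flagged = detect_unsafe_arms(arms, rounds=200)
```

Because detection of an unsafe arm is a geometric random variable, its expectation is finite, mirroring the abstract's claim; the tradeoff the paper analyzes appears here too, since a stricter flagging rule (e.g. flagging on noisy evidence) would secure the environment faster but risks discarding safe arms.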
