Monday, January 21, 2008

[Lab meeting] Jan. 22nd, 2008 (Stanley): Apprenticeship Learning via Inverse Reinforcement Learning

Authors: Pieter Abbeel and Andrew Y. Ng
From: Proceedings of the 21st International Conference on Machine Learning, Banff, Canada, 2004.
Link

Abstract: We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using "inverse reinforcement learning" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.
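To make the algorithm concrete for the meeting, here is a minimal sketch of the paper's projection-method variant on a toy problem. The five-state chain MDP, the one-hot feature map, and all hyperparameters are illustrative assumptions of mine, not from the paper; the loop itself follows the paper's recipe of matching the expert's discounted feature expectations.

```python
import numpy as np

# Toy setup (assumed for illustration): a 5-state chain, two actions
# (left/right), deterministic transitions, one-hot state features, and a
# reward assumed linear in the features: R(s) = w . phi(s).
GAMMA, N_STATES = 0.9, 5
P = np.array([[max(s - 1, 0), min(s + 1, N_STATES - 1)]
              for s in range(N_STATES)])  # P[s, a] -> next state
PHI = np.eye(N_STATES)

def optimal_policy(w, n_iter=200):
    """Value iteration for reward R(s) = w . phi(s); returns greedy policy."""
    R, V = PHI @ w, np.zeros(N_STATES)
    for _ in range(n_iter):
        V = R + GAMMA * np.max(V[P], axis=1)
    return np.argmax(V[P], axis=1)

def feature_expectations(policy, s0=0, horizon=100):
    """Discounted feature expectations mu(pi) = E[sum_t gamma^t phi(s_t)]."""
    mu, s = np.zeros(N_STATES), s0
    for t in range(horizon):
        mu += GAMMA ** t * PHI[s]
        s = P[s, policy[s]]
    return mu

# The "expert" always moves right; its feature expectations stand in for
# the empirical estimate one would compute from demonstrations.
mu_E = feature_expectations(np.ones(N_STATES, dtype=int))

# Projection method: mu_bar tracks the point closest to mu_E in the convex
# hull of the feature expectations of the policies found so far.
policy = np.zeros(N_STATES, dtype=int)  # arbitrary initial policy
mu_bar = feature_expectations(policy)
for i in range(20):
    w = mu_E - mu_bar                    # reward weights for this round
    if np.linalg.norm(w) < 1e-6:         # mu_E is (nearly) matched: done
        break
    policy = optimal_policy(w)           # RL step under the current reward
    mu = feature_expectations(policy)
    # Orthogonally project mu_E onto the line through mu_bar and mu
    d = mu - mu_bar
    mu_bar = mu_bar + (d @ (mu_E - mu_bar)) / (d @ d) * d
    print(f"iter {i}: ||mu_E - mu_bar|| = {np.linalg.norm(mu_E - mu_bar):.4f}")
```

On this toy chain the loop matches the expert in a couple of iterations, which mirrors the paper's termination guarantee: once the mixed policy's feature expectations are within epsilon of the expert's, its performance under the (unknown) true reward is also within epsilon, even though w itself need not equal the expert's reward weights.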
