Saturday, November 04, 2006

[The ML Lunch talk] Boosting Structured Prediction for Imitation Learning

Speaker: Nathan Ratliff, CMU
http://www.cs.cmu.edu/~ndr

Title: Boosting Structured Prediction for Imitation Learning

Venue: NSH 1507

Date: November 06, 2006

Time: 12:00 noon


Abstract:

The Maximum Margin Planning (MMP) algorithm solves imitation learning
problems by learning linear mappings from features to cost functions in
a planning domain. The learned policy is the result of minimum-cost
planning using these cost functions. These mappings are chosen so that
example policies (or trajectories) given by a teacher appear to have lower
cost (by a loss-scaled margin) than any other policy for a given
planning domain. We provide a novel approach, MMPBoost, based on the
functional gradient descent view of boosting, which extends MMP by
"boosting" in new features. This approach uses simple binary
classification or regression to improve the performance of MMP imitation
learning, and it extends naturally to the class of structured maximum
planning problems for outdoor mobile robots and robotic legged
locomotion.
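
For concreteness, the MMP objective roughly takes the following form (notation adapted from the MMP paper, so treat this as a sketch rather than the speaker's exact formulation):

\min_{w}\;\; \frac{\lambda}{2}\|w\|^2 \;+\; \frac{1}{n}\sum_{i=1}^{n}\Big( w^\top F_i \mu_i \;-\; \min_{\mu \in \mathcal{G}_i}\big( w^\top F_i \mu \;-\; l_i^\top \mu \big) \Big)

Here \mu_i is the state-action visitation vector of the i-th example trajectory, F_i maps visitations to features (so w^\top F_i is the learned cost map), \mathcal{G}_i is the set of feasible paths, and l_i is the loss vector that scales the margin. The inner minimization is itself a loss-augmented planning problem, which is what ties the learning procedure to the planner.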

In this talk, I will first provide an overview of the MMP approach to
imitation learning, followed by an introduction to our boosting
technique for learning nonlinear cost functions within this framework. I
will finish with a number of experimental results and a sketch of how
structured boosting algorithms of this sort can be derived.
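
To make the boosting idea concrete, below is a minimal sketch (Python/NumPy) of one functional-gradient step on a single example map. All names (plan_min_cost, fit_regressor, and so on) are hypothetical stand-ins, not the authors' code, and the sketch steps the cost map directly rather than adding the learned function as a new feature and re-solving the linear MMP problem as MMPBoost itself does.

import numpy as np

def mmp_boost_step(raw_features, cost, expert_path, plan_min_cost,
                   loss_vec, fit_regressor, step_size=0.4):
    # One functional-gradient boosting step on a single example map.
    #   raw_features  : (n_cells, d) perception features for each map cell
    #   cost          : (n_cells,) current cost map
    #   expert_path   : (n_cells,) 0/1 indicator of the teacher's path
    #   plan_min_cost : callable, costs -> 0/1 indicator of the min-cost path
    #   loss_vec      : (n_cells,) per-cell loss, e.g. 1 off the expert path
    #   fit_regressor : callable, (X, y) -> h with h(X) approximating y

    # Loss-augmented planning: cells with high loss look artificially cheap,
    # which draws the planner toward margin violations.
    planner_path = plan_min_cost(cost - loss_vec)

    # Functional gradient of the structured hinge loss at each cell:
    # +1 where only the teacher went, -1 where only the planner went.
    grad = expert_path - planner_path

    # Fit a weak learner to the negative gradient on the cells either path
    # visited, then step in function space: raise the cost where the planner
    # strayed, lower it where the teacher drove.
    active = grad != 0
    h = fit_regressor(raw_features[active], -grad[active])
    return cost + step_size * h(raw_features), h

A real implementation would loop this step over examples and boosting rounds and keep costs positive for the planner (for instance with an exponentiated update). MMPBoost instead treats each learned h as a new feature column and re-solves the linear MMP optimization, which is what the abstract means by "boosting" in new features.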
