Friday, April 20, 2007

CMU ML lunch: Learning without the loss function

Speaker: John Langford, Yahoo! Research, http://hunch.net/~jl/

Title: Learning without the loss function

Abstract: When learning a classifier, we use knowledge of the loss of different choices on training examples to guide the choice of a classifier. Embedded in this paradigm is the assumption that we know the loss of each possible choice. This assumption is often incorrect, and the talk is about the feasibility of (and algorithms for) fixing this.

One example where the assumption is incorrect is the ad placement problem. Your job is to choose relevant ads for a user given various sorts of context information. You can test success by displaying an ad and checking whether the user is interested in it. However, this test does _not_ reveal what would have happened if a different ad had been displayed. Restated, the "time always goes forward" nature of reality does not allow us to answer "What would have happened if I had instead made a different choice in the past?"
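One standard way to make progress despite this, sketched below, is inverse propensity scoring: if the logging system chose each displayed ad with a known probability, you can still form an unbiased offline estimate of the loss a _different_ ad-choosing policy would have incurred. This is a minimal illustrative sketch, not material from the talk; the names (logged_data, policy) are hypothetical.

```python
def ips_estimate(logged_data, policy):
    """Unbiased offline estimate of a new policy's average loss.

    logged_data: list of (context, shown_ad, loss, prob) tuples, where
    prob is the probability with which the logging system displayed
    shown_ad in that context (the randomization must be known).
    policy: function mapping a context to the ad it would display.
    """
    total = 0.0
    for context, shown_ad, loss, prob in logged_data:
        # We only observe the loss of the ad actually shown; dividing
        # by prob reweights that observation so that, in expectation,
        # every ad's loss is counted correctly for the new policy.
        if policy(context) == shown_ad:
            total += loss / prob
    return total / len(logged_data)
```

The estimate is unbiased, but its variance blows up when prob is small, which is one reason better algorithms for this setting are worth having.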

Somewhat surprisingly, this is _not_ a fundamental obstacle to the application of machine learning. I'll describe what we know about learning without the loss function, and some new, better algorithms for this setting.
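For concreteness, one simple way to collect logs with the known randomization that the estimator above relies on is epsilon-greedy exploration: mostly show your current best guess, occasionally show a uniformly random ad, and record the probability of whatever was shown. This is a generic textbook device, not the speaker's algorithm; best_guess and ads are hypothetical names.

```python
import random

def epsilon_greedy_choice(context, ads, best_guess, epsilon=0.1):
    """Choose an ad and record its selection probability.

    best_guess: function from a context to the currently preferred ad.
    Returns (ad, prob); logging prob is what makes importance-weighted
    evaluation and learning possible later.
    """
    preferred = best_guess(context)
    if random.random() < epsilon:
        ad = random.choice(ads)  # explore: uniformly random ad
    else:
        ad = preferred  # exploit: current best guess
    # Probability this ad had of being selected under the rule above.
    prob = epsilon / len(ads) + (1.0 - epsilon if ad == preferred else 0.0)
    return ad, prob
```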
