Saturday, March 08, 2008

MIT Brains & Machines Seminar Series: Online Learning with Limited Feedback: An Efficient and Optimal Algorithm

Speaker: Alexander Rakhlin, UC Berkeley, Dept. of Computer Science
Date: Tuesday, March 11, 2008
Host: Tomaso Poggio, CSAIL, BCS
Relevant URL: http://cbcl.mit.edu/

Title: "Online Learning with Limited Feedback: An Efficient and Optimal Algorithm"

Abstract:
One's ability to learn and make decisions rests heavily on the availability of feedback. In sequential decision-making problems, such feedback is often limited. A gambler, for example, can observe the outcome of a horse race in full regardless of where he placed his bet; yet when the same gambler chooses his route to the race track, perhaps at a busy hour, he will likely never learn how the alternative routes would have fared. This latter, limited-feedback problem is the focus of this talk.

The problem can be phrased as an Online Linear Optimization game with "bandit" feedback. The existence of a low-regret algorithm has been an open question since the work of Awerbuch and Kleinberg in 2004. We present the first known efficient algorithm for bandit Online Linear Optimization over arbitrary convex decision sets. We show how the difficulties encountered by previous approaches are overcome by employing regularization, a method well known in statistical learning but under-appreciated in online learning. Furthermore, our solution reveals surprising connections between online learning and interior-point methods in optimization.
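The algorithm of the talk works over arbitrary convex decision sets using a regularizer; as a much simpler illustration of the bandit feedback model itself, here is an EXP3-style sketch over a finite action set. This is not the speakers' method: the action set, loss values, learning rate, and all names below are illustrative assumptions.

```python
import math
import random

def exp3(num_actions, losses, eta):
    """EXP3-style bandit learner over a finite action set.

    losses[t][i] is the (hidden) loss of action i at round t; the
    learner observes only losses[t][chosen] -- bandit feedback.
    """
    weights = [1.0] * num_actions
    total_loss = 0.0
    rng = random.Random(0)  # fixed seed for reproducibility
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # sample an action from the current distribution
        i = rng.choices(range(num_actions), weights=probs)[0]
        loss = round_losses[i]  # the only feedback we receive
        total_loss += loss
        # importance-weighted estimate for the chosen action only
        est = loss / probs[i]
        weights[i] *= math.exp(-eta * est)
    return total_loss

# Synthetic game: action 0 always incurs loss 0.1, action 1 loss 0.9.
T = 2000
losses = [[0.1, 0.9] for _ in range(T)]
total = exp3(2, losses, eta=0.05)
```

Despite seeing only one loss per round, the learner's cumulative loss approaches that of the best fixed action (0.1 per round here); the importance-weighted estimate is what makes the partial feedback unbiased.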

In particular, our method solves the Online Shortest Path problem: at each round, a path from source to sink is chosen and only the total length (delay) of that path is revealed. The method has numerous applications, including network routing, resource allocation, and dynamic treatment of patients. Its worst-case guarantees imply robustness to noise and to malicious adversaries.
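To make the feedback model concrete, here is a minimal sketch of the Online Shortest Path setting on a tiny graph of my own choosing (the graph, edge delays, and names are illustrative, not from the talk): the decision-maker picks a path and observes a single number, the path's total delay, while the individual edge delays remain hidden.

```python
# Tiny DAG with three source-to-sink paths; edge delays are hidden
# from the decision-maker.
edges = {("s", "a"): 1.0, ("a", "t"): 2.0,
         ("s", "b"): 0.5, ("b", "t"): 0.5,
         ("s", "t"): 4.0}

paths = [
    [("s", "a"), ("a", "t")],  # total delay 3.0
    [("s", "b"), ("b", "t")],  # total delay 1.0
    [("s", "t")],              # total delay 4.0
]

def observe(path):
    """Bandit feedback: only the total delay of the chosen path."""
    return sum(edges[e] for e in path)
```

The difficulty the talk addresses is that the number of paths can be exponential in the graph size, so treating each path as a separate "arm" is infeasible; the linear structure (a path is a 0/1 vector over edges) is what an efficient algorithm must exploit.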

Joint work with Jacob Abernethy and Elad Hazan.
