Tuesday, April 17, 2007

[Machine Learning Lunchtime Chats] April 16, Pradeep Ravikumar, Sparsity recovery and structure learning

Title: Sparsity recovery and structure learning
Speaker: Pradeep Ravikumar

The sparsity pattern of a vector, that is, the number and locations of its non-zero entries, is important in settings as varied as structure learning in graphical models, subset selection in regression, signal denoising, and compressed sensing. A key task in these settings is to estimate the sparsity pattern of an unknown "parameter" vector from a set of n noisy observations.

Suppose a sparse "signal" vector (edges in graphical models, covariates in regression) enters into linear combinations, and we observe functions of these linear combinations corrupted by noise. The task is to identify the set of relevant signals from n such noisy observations. In graphical models, for instance, structure learning amounts to identifying the set of edges from the noisy samples.

This problem is intractable in general settings, but there has been a flurry of recent work on using convex relaxations and L1 penalties to recover the underlying sparse signal, and in particular its sparsity pattern.
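To make this concrete, here is a minimal sketch (not from the talk) of L1-based sparsity recovery: a sparse vector is observed through noisy linear measurements, and the Lasso objective is minimized by iterative soft-thresholding (ISTA). All dimensions, noise levels, and the regularization weight `lam` below are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a "signal" vector with 3 non-zeros out of 20
# coordinates, observed through n = 100 noisy linear combinations.
n, p, k = 100, 20, 3
X = rng.standard_normal((n, p)) / np.sqrt(n)   # measurement matrix, unit-norm columns
w_true = np.zeros(p)
w_true[:k] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(n)  # noisy observations

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2               # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L           # gradient step on the quadratic term
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

w_hat = lasso_ista(X, y, lam=0.05)
support = np.flatnonzero(np.abs(w_hat) > 1e-6)
print(support)   # ideally the indices of the true non-zeros
```

The estimated support is the recovered sparsity pattern; the conditions under which it matches the true pattern (signal strength, sample size, correlations among columns) are exactly the "when do tractable methods work" questions the abstract refers to.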

The tutorial will cover the basic problem abstraction, its applications in various settings, and some general conditions under which these tractable methods "work". It will focus in particular on the application to structure learning in graphical models.
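As one hedged illustration of the graphical-model application (a Meinshausen-Buhlmann-style neighborhood selection sketch, not necessarily the method covered in the talk): for a Gaussian graphical model, each variable can be lasso-regressed on all the others, and the non-zero coefficients flag that variable's neighbors, i.e. the edges of the graph. The chain graph, sample size, and `lam` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5 variables whose sparse precision matrix encodes a
# chain graph 0-1-2-3-4; zeros in the precision matrix mean missing edges.
p, n = 5, 2000
Theta = np.eye(p)
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

def lasso_ista(A, y, lam, n_iter=2000):
    """Minimize 0.5*||y - Aw||^2 + lam*||w||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = w - A.T @ (A @ w - y) / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

# Neighborhood selection: regress each variable on the rest; a non-zero
# coefficient on variable k in the regression for j suggests edge {j, k}.
edges = set()
for j in range(p):
    others = [k for k in range(p) if k != j]
    w = lasso_ista(X[:, others] / np.sqrt(n), X[:, j] / np.sqrt(n), lam=0.15)
    for k, wk in zip(others, w):
        if abs(wk) > 1e-6:
            edges.add(frozenset((j, k)))

print(sorted(tuple(sorted(e)) for e in edges))
```

Here the sparse "signal" of the abstract is each row of the precision matrix, and recovering its sparsity pattern from the samples is precisely structure learning: the estimated edge set should, under suitable conditions, match the chain.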
