Call for Contributions
http://www.kuleuven.be/wehys/
Whistler, BC, Canada
December 13, 2008
Important Dates:
- Deadline: October 31, 2008
- Notification: November 7, 2008
Workshop Chairs:
- Maria-Florina Balcan
- Shai Ben-David
- Avrim Blum
- Kristiaan Pelckmans
- John Shawe-Taylor
Contact:
Wehys08@gmail.com
Scope:
This workshop aims to collect theoretical insights into the design of data-dependent learning strategies. Specifically, we are interested in how far learned prediction rules can be characterized in terms of the observations themselves. This amounts to capturing how well data can be used to construct structured hypothesis spaces for risk minimization strategies, termed empirical hypothesis spaces. Classical analysis of learning algorithms requires the user to define a proper hypothesis space before seeing the data. In practice, however, one often decides on the proper learning strategy, or on the form of the prediction rules of interest, only after inspecting the data (see e.g. [5, 7]). This theoretical gap constitutes exactly the scope of this workshop. A main theme is then the extent to which prior knowledge or additional (unlabeled) samples can or should be used to improve learning curves.
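As a rough schematic of the gap described above (the notation below is ours, included only for orientation, and does not appear in the call):

% Classical setting: the hypothesis space H is fixed before the sample
% S = {(x_i, y_i)}_{i=1}^n is drawn, and empirical risk minimization selects
\[
  \hat{h} \;=\; \operatorname*{arg\,min}_{h \in \mathcal{H}}
  \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr).
\]
% Empirical hypothesis space: the class itself is constructed from the data,
% e.g. via a data-dependent transformation or a compatibility criterion,
\[
  \hat{h}_S \;=\; \operatorname*{arg\,min}_{h \in \mathcal{H}(S)}
  \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr).
\]
% The question is then when uniform-convergence-style guarantees, which assume
% a fixed H, still hold for the data-dependent choice H(S).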
Tentative Program:
One day, divided into four sessions: two in the morning and two in the afternoon, with coffee breaks in between. Each session has one invited contributor speaking for 45 minutes followed by 15 minutes of discussion, except the first, which consists of two 45-minute tutorial presentations. Each session also has an additional component:
* Session 1: Tutorials by S. Ben-David and A. Blum;
* Session 2: Invited talk plus two contributed 15-minute presentations (posters to be shown in the afternoon);
* Session 3: Invited talk plus spotlight (2-minute) presentations for posters, with the poster session following during the coffee break;
* Session 4: Invited talk followed by a discussion aimed at identifying 10 key open questions.
Call for Contributions:
We solicit discussions and insights (controversial or otherwise) into any of the following topics:
1. Relations between the luckiness framework, compatibility functions, and empirically defined regularization strategies in general.
2. Luckiness and compatibility can be seen as defining a prior in terms of the (unknown but fixed) distribution generating the data. To what extent can this approach be generalised while still ensuring effective learning?
3. Models of prior knowledge that capture both complexity and distribution dependence for powerful learning.
4. Theoretical analysis of the use of additional (empirical) side information in the form of unlabeled data or data labeled by related problems.
5. Examples of proper or natural luckiness or compatibility functions in practical learning tasks. How could, for example, luckiness be defined in the context of collaborative filtering?
6. The effect of (empirical) preprocessing of the data that does not involve the labels, as for example in PCA, other data-dependent transformations, or cleaning, as well as preprocessing that does use label information, as for example in PLS or in feature selection and construction based on the training sample.
7. Empirically defined theoretical measures, such as Rademacher complexity or sparsity coefficients, and their relevance for analysing empirical hypothesis spaces (a standard definition is recalled below).
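For orientation, recall the standard definition of the empirical Rademacher complexity referred to in item 7 (standard textbook notation, not taken from the call): for a function class \(\mathcal{F}\) and a sample \(S = (x_1, \ldots, x_n)\),

\[
  \hat{\mathcal{R}}_S(\mathcal{F})
  \;=\; \mathbb{E}_{\sigma}\!\left[ \sup_{f \in \mathcal{F}}
        \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, f(x_i) \right],
\]
% where the sigma_i are independent Rademacher variables,
% i.e. uniformly distributed on {-1, +1}.

Since this quantity is computed from the sample itself, it is a natural candidate for controlling the capacity of an empirically defined hypothesis space \(\mathcal{H}(S)\).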
This workshop is intended for researchers interested in the theoretical underpinnings of learning algorithms that do not comply with the standard learning-theoretic assumptions.
Submissions should take the form of a 2-page abstract containing (i) a summary of a formal result, (ii) a discussion of its relevance to the workshop, and (iii) pointers to the relevant literature. The abstract can be supported by an additional paper (either a published paper or a technical report) that contains detailed proofs of any assertions. We especially encourage contributions that describe how to bring in results from other formal frameworks.