Intelligence Seminar
February 3, 2009
3:30 pm
Intelligent Preference Assessment: The Next Steps?
Craig Boutilier, Department of Computer Science, University of Toronto
Preference elicitation is generally required when making or recommending decisions on behalf of users whose utility functions are not known with certainty. Full elicitation of a user's utility function is infeasible in practice, leading to an emphasis on approaches that (a) attempt to make good recommendations with incomplete utility information; and (b) heuristically minimize the amount of user interaction needed to assess the relevant aspects of a utility function. Current techniques are, however, limited in a number of ways: (i) they rely on specific forms of information for assessment; (ii) they require very stylized forms of interaction; and (iii) they can handle only restricted classes of decision problems.
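One common way to make (a) concrete, and a criterion that recurs in the regret-based techniques described below, is minimax regret: given the set of utility functions still consistent with what has been learned about the user, recommend the option whose worst-case loss relative to the best alternative is smallest. The following Python sketch is purely illustrative; the items, features, and feasible utility set are invented for the example and are not the formulation used in the talk.

# Illustrative sketch of regret-based recommendation with incomplete utility
# information. All items, features, and utility functions below are invented
# for the example.

def pairwise_regret(x, y, utilities):
    """Worst-case loss of recommending x when y was available."""
    return max(u(y) - u(x) for u in utilities)

def max_regret(x, items, utilities):
    """Worst-case loss of recommending x against any alternative."""
    return max(pairwise_regret(x, y, utilities) for y in items)

def minimax_regret_choice(items, utilities):
    """Recommend the item whose worst-case regret is smallest."""
    return min(items, key=lambda x: max_regret(x, items, utilities))

# Toy setting: three items scored on two features; the user's utility is
# linear, u_w(x) = w * f1(x) + (1 - w) * f2(x), but elicitation so far has
# only narrowed the weight w to [0.3, 0.7]. Because u_w is linear in w,
# worst-case regret over the interval is attained at an endpoint, so a few
# representative weights suffice here.
features = {"item_a": (0.9, 0.2), "item_b": (0.6, 0.6), "item_c": (0.1, 0.9)}
items = list(features)
utilities = [
    (lambda x, w=w: w * features[x][0] + (1 - w) * features[x][1])
    for w in (0.3, 0.5, 0.7)
]
print(minimax_regret_choice(items, utilities))  # item_b hedges against both extremes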
In this talk, I will outline several key research challenges in taking preference assessment to a point where wide user acceptance is possible. I will focus on three current techniques we're developing that will help move toward greater user acceptance. Each tackles one of the weaknesses discussed above.
1. The first technique allows users to define "personalized" features over which they can express their preferences. Users provide (positive and negative) instances of a concept (or feature) over which they have preferences. We relate this to models of concept learning, and discuss how the existence of utility functions allows decisions to be made with very incomplete knowledge of the target concept. I'll also discuss possible means of integrating data-intensive collaborative filtering approaches with explicit preference elicitation techniques, especially when tackling "subjective" features.
2. I'll discuss some of our recent work on applying explicit decision-theoretic models to more "conversational" critiquing approaches to recommender systems. We consider several semantics (with respect to user preferences) for unstructured user choices and show how these can be integrated into regret-based models.
3. Time permitting, I'll provide a sketch of some recent work on eliciting reward functions in Markov decision processes using the notion of minimax regret.
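For readers unfamiliar with the criterion mentioned in items 2 and 3, here is a brief sketch in my own notation (the standard formulation of regret-based reward elicitation, not necessarily the exact one used in the talk). If \mathcal{R} is the set of reward functions consistent with the responses gathered so far, and V^{\pi}_{r} is the expected value of policy \pi under reward r, then

    MR(\pi, \mathcal{R}) = \max_{r \in \mathcal{R}} \max_{\pi'} \left( V^{\pi'}_{r} - V^{\pi}_{r} \right),

and the minimax-regret policy is \pi^{*} \in \arg\min_{\pi} MR(\pi, \mathcal{R}). Elicitation queries can then be chosen to shrink \mathcal{R} where doing so most reduces this worst-case regret.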
Bio:
Craig Boutilier received his Ph.D. in Computer Science (1992) from the University of Toronto, Canada. He is Professor and Chair of the Department of Computer Science at the University of Toronto. He was previously an Associate Professor at the University of British Columbia, a consulting professor at Stanford University, and a visiting professor at Brown University. He has served on the Technical Advisory Board of CombineNet, Inc. since 2001.
Dr. Boutilier's research interests span a wide range of topics, with a focus on decision making under uncertainty, including preference elicitation, mechanism design, game theory, Markov decision processes, and reinforcement learning. He is a Fellow of the American Association for Artificial Intelligence (AAAI) and the recipient of the Izaak Walton Killam Research Fellowship, an IBM Faculty Award, and the Killam Teaching Award. He has also served in a variety of conference organization and editorial positions, and is Program Chair of the upcoming Twenty-first International Joint Conference on Artificial Intelligence (IJCAI-09).