Bayesian models of human learning and inference
Josh Tenenbaum, MIT
Faculty Host: Tom Mitchell
Bayesian methods have revolutionized major areas of artificial intelligence, machine learning, natural language processing, and computer vision. Recently, Bayesian approaches have also begun to take hold in cognitive science as a principled framework for explaining how humans might learn, reason, perceive and communicate about their world. This talk will sketch some of the challenges and prospects for Bayesian models in cognitive science, and also draw some lessons for bringing probabilistic approaches to artificial intelligence closer to human-level abilities.
The focus will be on learning and reasoning tasks where people routinely make successful generalizations from very sparse evidence. These tasks include word learning and semantic interpretation, inference about unobserved properties of objects and relations between objects, reasoning about the goals of other agents, and causal learning and inference. These inferences can be modeled as Bayesian computations operating over constrained representations of world structure -- what cognitive scientists have called "intuitive theories" or "schemas". For each task, we will consider how the appropriate knowledge representations are structured, how these representations guide Bayesian learning and reasoning, and how the representations themselves could be learned via Bayesian methods. Models will be evaluated both on how well they capture quantitative or qualitative patterns of human behavior and on their ability to solve analogous real-world problems of learning and inference. The models we discuss will draw on -- and, hopefully, offer new insights for -- several directions in contemporary machine learning, such as semi-supervised learning, modeling relational data, structure learning in graphical models, hierarchical Bayesian modeling, and Bayesian nonparametrics.
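To make the idea of generalization from sparse evidence concrete, here is a minimal Python sketch of Bayesian concept learning over a toy hypothesis space, in the spirit of the "number game". The hypothesis space, the uniform prior, and the size-principle likelihood p(X | h) = 1/|h|^n are illustrative assumptions chosen for this sketch, not the specific models presented in the talk.

```python
# Minimal sketch of Bayesian concept learning from sparse examples.
# Hypotheses are subsets of the integers 1..100, and the likelihood
# uses the size principle, p(X | h) = 1/|h|^n for examples consistent
# with h. All hypotheses and priors below are illustrative choices.

def make_hypotheses(limit=100):
    """A tiny, hand-picked hypothesis space over the integers 1..limit."""
    return {
        "even":            {x for x in range(1, limit + 1) if x % 2 == 0},
        "odd":             {x for x in range(1, limit + 1) if x % 2 == 1},
        "multiples_of_10": set(range(10, limit + 1, 10)),
        "powers_of_2":     {2 ** k for k in range(1, 7)},  # 2..64
        "all":             set(range(1, limit + 1)),
    }

def posterior(examples, hypotheses, prior=None):
    """Posterior over hypotheses given i.i.d. positive examples."""
    if prior is None:
        prior = {name: 1.0 / len(hypotheses) for name in hypotheses}
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):
            # Size principle: smaller consistent hypotheses gain
            # exponentially more weight as examples accumulate.
            scores[name] = prior[name] / len(h) ** len(examples)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

if __name__ == "__main__":
    hyps = make_hypotheses()
    # Three sparse examples, consistent with "even", "powers_of_2",
    # and "all"; the size principle strongly favors "powers_of_2".
    print(posterior([2, 8, 64], hyps))
```

Run on the examples [2, 8, 64], the posterior concentrates almost entirely on "powers_of_2" even though the data are also consistent with "even" and "all": smaller consistent hypotheses win exponentially as examples accumulate, which is one way sharp generalization from very few examples can arise.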
Speaker Bio
Josh Tenenbaum studies learning and reasoning in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing artificial intelligence closer to human-level capacities. He received his Ph.D. from MIT in 1999, and from 1999 to 2002 he was a member of the Stanford University faculty in the Departments of Psychology and (by courtesy) Computer Science. In 2002, he returned to MIT, where he currently holds the Paul E. Newton Career Development Chair in the Department of Brain and Cognitive Sciences, and is a member of the Computer Science and Artificial Intelligence Laboratory. He has published extensively in cognitive science, machine learning and other AI fields, and his group has received several outstanding paper or student-paper awards at NIPS, CVPR, and Cognitive Science. He received the 2006 New Investigator Award from the Society for Mathematical Psychology, and the 2007 Young Investigator Award from the Society of Experimental Psychologists. He serves as an associate editor of the journal Cognitive Science and is currently co-organizing a summer school on "Probabilistic Models of Cognition: The Mathematics of Mind" for July 2007 at IPAM, the Institute for Pure and Applied Mathematics at UCLA.