Sunday, October 16, 2005

CMU RI Defense: A Latent Cause Model of Classical Conditioning

Aaron Courville
Robotics Institute, Carnegie Mellon University

Abstract
Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the probabilistic, generative latent cause theory of classical conditioning. According to the latent cause theory, animals assume that events within their environment are attributable to a latent cause. Learning is interpreted as an attempt to recover the generative model that gave rise to these observed events. In this thesis, I apply the latent cause theory to three distinct areas of classical conditioning, in each case offering a novel account of empirical phenomena.
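To make the generative picture concrete, here is a minimal sketch (my own illustration in Python, not code from the thesis) in which a discrete latent cause is sampled first and each stimulus and the reinforcement are then emitted conditionally independently given that cause. The cause prior, emission probabilities, and events (tone, light, shock) are all hypothetical.

    # Minimal latent cause generative sketch (illustrative parameters only).
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical prior over two latent causes, and per-cause probabilities
    # of emitting a tone, a light, and a shock (the reinforcement).
    cause_prior = np.array([0.7, 0.3])
    emit_prob = {
        "tone":  np.array([0.9, 0.1]),
        "light": np.array([0.2, 0.8]),
        "shock": np.array([0.8, 0.1]),
    }

    def sample_trial():
        """Generate one conditioning trial from the latent cause model."""
        z = rng.choice(len(cause_prior), p=cause_prior)   # sample the latent cause
        return {event: bool(rng.random() < p[z]) for event, p in emit_prob.items()}

    print(sample_trial())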

In the first instance, I develop a version of a latent cause model that explicitly encodes a latent timeline to which observed stimuli and reinforcements are associated, thus preserving their temporal order. In this context, the latent cause model is equivalent to a hidden Markov model. This model accounts for a theoretically challenging set of experiments that collectively suggest that animals encode the temporal relationships among stimuli and use this representation to predict impending reinforcement.
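As a rough illustration of the hidden Markov model equivalence, the sketch below (my own, with hypothetical transition and emission probabilities) filters a belief over two latent states from a stimulus sequence and uses it to predict impending reinforcement one step ahead.

    # Two-state HMM sketch: filter the latent state, then predict reinforcement.
    import numpy as np

    T = np.array([[0.9, 0.1],       # hypothetical state-transition matrix
                  [0.2, 0.8]])
    p_stim  = np.array([0.1, 0.9])  # P(stimulus present | state)
    p_shock = np.array([0.0, 0.7])  # P(reinforcement | state)

    def predict_reinforcement(stimuli):
        """Filter the latent state from a stimulus sequence, then predict shock."""
        belief = np.array([0.5, 0.5])                  # uniform initial belief
        for s in stimuli:
            belief = belief @ T                        # propagate one time step
            lik = p_stim if s else (1.0 - p_stim)      # stimulus likelihood
            belief = belief * lik
            belief /= belief.sum()
        return float((belief @ T) @ p_shock)           # next-step shock prediction

    print(predict_reinforcement([0, 1, 1]))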

Next, I explore the effects of inference over an uncertain latent cause model structure. A key property of Bayesian structural inference is the tradeoff it imposes between model complexity and data fidelity. Recognizing that this tradeoff mirrors the tradeoff between generalization and discrimination found in configural conditioning suggests a statistically sound account of these phenomena. By considering model simulations of a number of conditioning paradigms (including some not previously viewed as "configural"), I reveal behavioral evidence that animals employ model complexity tradeoffs.
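The complexity/fidelity tradeoff can be illustrated with a small Bayesian model comparison. The sketch below (my own illustration, with hypothetical trial counts) compares the marginal likelihood of pooling element and compound trials under one reinforcement rate (generalization) against splitting the compound off under its own rate (discrimination); integrating over the split model's extra parameter imposes an automatic complexity cost.

    # Bayes-factor sketch of the generalization/discrimination tradeoff.
    from math import lgamma

    def log_marginal(successes, trials):
        """Beta(1,1)-Bernoulli marginal likelihood of reinforcement counts."""
        return (lgamma(successes + 1) + lgamma(trials - successes + 1)
                - lgamma(trials + 2))

    # Hypothetical counts: reinforcements / trials for A, B, and the compound AB.
    data = {"A": (9, 10), "B": (9, 10), "AB": (1, 10)}

    # One cause: pool all trials under a single rate (generalization).
    pooled = log_marginal(sum(s for s, _ in data.values()),
                          sum(n for _, n in data.values()))

    # Two causes: a separate rate for the compound (discrimination), paying a
    # complexity cost by integrating over an extra parameter.
    split = (log_marginal(data["A"][0] + data["B"][0],
                          data["A"][1] + data["B"][1])
             + log_marginal(*data["AB"]))

    print("log Bayes factor (split vs. pooled):", split - pooled)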

Finally, I explore the consequences of merging the latent cause theory with a generative model of change. A model of change describes how the parameters and structure of the latent cause model evolve over time. The resulting non-stationary latent cause model offers a novel perspective on the factors that influence animal judgments about changes in their environment. In particular, the model correctly predicts that the introduction of an unexpected stimulus can spur fast learning and eliminate latent inhibition.
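The following sketch (again my own illustration, not the thesis implementation) shows the basic logic: an observation that is unlikely under the old cause sharply raises the posterior probability that a new cause is active, which is what licenses fast relearning after an unexpected stimulus. All probabilities are hypothetical.

    # Change-detection sketch: an unexpected stimulus signals a new latent cause.
    p_change  = 0.05     # hypothetical per-trial prior probability of change
    p_obs_old = 0.01     # P(novel stimulus | old cause still active)
    p_obs_new = 0.50     # P(novel stimulus | new cause has taken over)

    def posterior_change(novel_stimulus: bool) -> float:
        """Posterior probability of a new cause after one observation."""
        like_old = p_obs_old if novel_stimulus else 1.0 - p_obs_old
        like_new = p_obs_new if novel_stimulus else 1.0 - p_obs_new
        num = p_change * like_new
        return num / (num + (1.0 - p_change) * like_old)

    print("familiar trial:", posterior_change(False))   # change stays unlikely
    print("novel stimulus:", posterior_change(True))    # change becomes credible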

This thesis offers a unified theoretical framework for classical conditioning. It draws on state-of-the-art machine learning techniques, including reversible-jump MCMC and particle filtering, to explore a novel theoretical account of a wide range of empirical phenomena, many of which have otherwise resisted a computational explanation.

A copy of the thesis oral document can be found at http://www.cs.mcgill.ca/~jpineau/thesis.pdf.
