Sunday, August 20, 2006

(Casey) My talk, 24 August 2006: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories

Title: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. (CVPR 2004, Workshop on Generative-Model Based Vision.)

Authors: L. Fei-Fei, R. Fergus, and P. Perona.

Abstract: Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.
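
To make the "incremental Bayesian" idea concrete, here is a minimal sketch I put together around a one-dimensional conjugate Gaussian model (a toy assumption of mine, not the constellation model of shape and appearance used in the paper). The prior plays the role of knowledge assembled from previously learnt categories; the posterior after each training image becomes the prior for the next one, so the incremental update reaches the same answer as the batch update, while a maximum-likelihood estimate that ignores the prior is much noisier when only a few examples are available.

# Toy illustration of incremental vs. batch Bayesian updating for a 1-D
# Gaussian with known noise variance -- NOT the paper's constellation model.
import numpy as np

def batch_posterior(prior_mean, prior_var, data, noise_var):
    """Conjugate Gaussian update using all training examples at once."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

def incremental_posterior(prior_mean, prior_var, data, noise_var):
    """Same update applied one training example at a time:
    the posterior after each example becomes the prior for the next."""
    mean, var = prior_mean, prior_var
    for x in data:
        var_new = 1.0 / (1.0 / var + 1.0 / noise_var)
        mean = var_new * (mean / var + x / noise_var)
        var = var_new
    return mean, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_mean, noise_var = 2.0, 1.0
    data = rng.normal(true_mean, np.sqrt(noise_var), size=3)  # only 3 examples

    # Prior standing in for information from previously learnt categories.
    prior_mean, prior_var = 1.5, 0.5

    print("batch posterior      :", batch_posterior(prior_mean, prior_var, data, noise_var))
    print("incremental posterior:", incremental_posterior(prior_mean, prior_var, data, noise_var))
    print("maximum likelihood   :", data.mean())  # ignores the prior; noisy with so few examples

Running this shows identical batch and incremental posteriors, which is the sense in which the two versions agree while the incremental one never has to revisit old data.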

PDF file: [Link]

You can download other papers by Li Fei-Fei at this link: [Link]
