Wednesday, January 07, 2009

NTU talk: Toward Robust Online Visual Tracking

Speaker: Prof. Ming-Hsuan Yang, UC Merced

Time: 02:20pm, January 9 (Fri), 2009
Place: Room 102, CSIE building

Title: Toward Robust Online Visual Tracking

Abstract:

Human beings are capable of tracking objects in dynamic scenes effortlessly, and yet visual tracking remains a challenging problem in computer vision. The main reason can be attributed to the difficulty of handling appearance variation of a target object. Intrinsic appearance changes include out-of-plane motion and shape deformation of the target object, whereas extrinsic factors such as illumination change, camera motion, camera viewpoint, and occlusion inevitably cause large appearance variation.

Visual tracking is a fundamental problem in computer vision that has important applications in a variety of areas, including recovering 3D structure from moving scenes, camera calibration, estimating the underlying motion of the scene, and object recognition. It also has applications in autonomous robotics and vehicles, medical imaging, as well as entertainment. While existing algorithms are able to track objects in controlled environments, they usually fail in the presence of significant image variations caused by changes in illumination, pose, and occlusion. In addition, most of them require significant offline training effort prior to tracking.

In the first part of this talk, I will present an efficient online learning algorithm for simultaneously tracking objects and learning compact appearance models. Numerous experiments show that this method is able to learn compact generative models for tracking target objects undergoing large pose and illumination changes. I will then discuss discriminative algorithms that track objects by separating foreground targets from backgrounds in an online manner. Experimental validation demonstrates that these algorithms are robust for tracking fast-moving objects undergoing illumination change, occlusion, and articulated motion in real time, with better results than existing systems.
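To make the idea of online appearance learning concrete, here is a minimal toy sketch (not the speaker's actual algorithm): a template tracker that searches a window around the previous position for the best-matching patch, then blends the new observation into the template with a running average. The function name, parameters, and update rule are all illustrative assumptions.

```python
import numpy as np

def track_frame(frame, template, prev_pos, search_radius=8, alpha=0.05):
    """One step of a toy online tracker (illustrative only).

    frame    : 2D grayscale image (numpy array)
    template : current appearance model, a small 2D patch
    prev_pos : (row, col) of the target in the previous frame
    Returns the new (row, col) and the updated template.
    """
    h, w = template.shape
    y0, x0 = prev_pos
    best_pos, best_err = prev_pos, np.inf
    # Exhaustive search in a window around the previous position.
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            patch = frame[y:y + h, x:x + w]
            err = np.sum((patch - template) ** 2)  # sum of squared differences
            if err < best_err:
                best_err, best_pos = err, (y, x)
    # Online appearance update: blend the new observation into the model,
    # so the template slowly adapts to appearance change.
    y, x = best_pos
    template = (1 - alpha) * template + alpha * frame[y:y + h, x:x + w]
    return best_pos, template
```

A fixed template fails as soon as the target's appearance drifts; the `alpha` blend is the simplest possible online update. The methods in the talk replace it with richer learned models (e.g., compact generative subspace models or online discriminative classifiers).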


Short Biography:

Ming-Hsuan Yang is an assistant professor of Electrical Engineering and Computer Science at the University of California, Merced. After receiving his PhD degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC), he worked as a senior researcher at the Honda Research Institute in Mountain View, California, and was an assistant professor of Computer Science and Information Engineering at National Taiwan University.

His research interests include computer vision, pattern recognition, robotics, cognitive science, and machine learning. While at UIUC, he was awarded the Ray Ozzie Fellowship, given to outstanding graduate students in Computer Science. He has co-authored the book Face Detection and Gesture Recognition for Human-Computer Interaction (Kluwer Academic Publishers) and co-edited a special issue on face recognition of Computer Vision and Image Understanding. He serves as an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence, and as an Area Chair of the IEEE Conference on Computer Vision and Pattern Recognition in 2008 and 2009. He is a senior member of the IEEE and the ACM.

2 comments:

Anonymous said...

Hi,
I am doing an R&D project at the University of Applied Sciences in Bonn and would like to know if it is possible to get access to this paper.

regards

Frederik Hegger

Bob said...

Dear Frederik,

Please send the request to Prof. Yang directly. Please feel free to let us know if you cannot find the contact information.

Cheers,

-Bob