Wednesday, October 24, 2012

NTU CSIE Talk: [2012-11-09] Dr. Koji Yatani, "A Ph.D. – What does it take?"


Title: A Ph.D. – What does it take?
Date: 2012-11-09 2:20pm
Location: R103
Speaker: Dr. Koji Yatani, Microsoft Research Asia
 
Abstract:
 
Getting a Ph.D. surely takes long, sustained effort, but why? Of course, research takes time, but a Ph.D. is not just about research. A Ph.D. student needs to be more than just a researcher to be successful. This talk is not a collection of my research projects (although I will introduce some of them briefly); rather, it is a collection of my experiences in research at the University of Toronto, Microsoft Research Asia, and the industry labs where I did my internships. Through this talk, I will attempt to share my thoughts on what I believe a Ph.D. student should do and learn before getting her Ph.D. Your honest discussion, opinions, and feedback would be greatly appreciated.
 
Biography: 
 
Dr. Koji Yatani (http://yatani.jp) is an associate researcher in the Human-Computer Interaction Group at Microsoft Research Asia. His main research interests lie in Human-Computer Interaction (HCI) and its intersections with Ubiquitous Computing and Computational Linguistics. More specifically, he is interested in designing new forms of interaction with mobile devices, and in developing new hardware and sensing technologies to support user interactions in mobile/ubiquitous computing environments. He is also interested in developing interactive systems and exploring new applications using computational linguistics methods.
 
He received his B.Eng. and M.Sci. from the University of Tokyo in 2003 and 2005, respectively, and his Ph.D. in Computer Science from the University of Toronto in 2011. In November 2011, he joined the HCI Group at Microsoft Research Asia in Beijing. He was a recipient of the NTT Docomo Scholarship (October 2003 -- March 2005) and the Japan Society for the Promotion of Science Research Fellowship for Young Scientists (April 2005 -- March 2006). He received the Best Paper Award at CHI 2011. He has served on the program committees of CHI 2013, Ubicomp 2012, and WHC 2013, and as a Mentoring Co-chair for ITS 2012.

Tuesday, October 16, 2012

Lab meeting Oct 17th 2012 (Hank): Motion Segmentation of Multiple Objects from a Freely Moving Monocular Camera

Link

Presented by Hank Lin

From ICRA2012

Authors: Rahul Kumar Namdev, Abhijit Kundu, K Madhava Krishna and C. V. Jawahar


Abstract:
Motion segmentation, or segmentation of moving objects, is an inevitable component of mobile robotic systems, as is the case with robots performing SLAM and collision avoidance in dynamic worlds. This paper proposes an incremental motion segmentation system that efficiently segments multiple moving objects and simultaneously builds the map of the environment using visual SLAM modules. Multiple cues based on optical flow and two-view geometry are integrated to achieve this segmentation. A dense optical flow algorithm provides dense tracking of features. Motion potentials based on geometry are computed for each of these dense tracks. These geometric potentials, along with optical flow potentials, are used to form a graph-like structure. A graph-based segmentation algorithm then clusters together nodes of similar potentials to form the eventual motion segments. Experimental results of high-quality segmentation on different publicly available datasets demonstrate the effectiveness of our method.
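The abstract's pipeline ends with a graph-based clustering step that merges nodes with similar motion potentials into motion segments. The toy sketch below illustrates that general idea with union-find; it is not the authors' implementation, and the scalar potentials, edge list, and threshold are invented for illustration.

```python
# Toy sketch of graph-based motion clustering: tracked features are
# graph nodes, edges connect neighbouring tracks, and nodes whose
# motion potentials differ by less than a threshold are merged into
# one motion segment. All values here are illustrative.

def find(parent, i):
    # Union-find root lookup with path compression.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def segment(potentials, edges, threshold=0.5):
    """Cluster node indices whose potential difference is below threshold."""
    parent = list(range(len(potentials)))
    for i, j in edges:
        if abs(potentials[i] - potentials[j]) < threshold:
            ri, rj = find(parent, i), find(parent, j)
            parent[ri] = rj  # merge the two clusters
    # Group nodes by their root to form the final motion segments.
    segments = {}
    for i in range(len(potentials)):
        segments.setdefault(find(parent, i), []).append(i)
    return list(segments.values())

# Example: two tracks on the static background, two on a moving object.
print(segment([0.1, 0.15, 0.9, 0.95], [(0, 1), (1, 2), (2, 3)]))
# → [[0, 1], [2, 3]]
```

The actual paper operates on dense optical-flow tracks with geometric potentials from two-view constraints; the sketch only captures the merge-by-similarity structure of the final clustering.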

Tuesday, October 02, 2012

Lab meeting Oct 3rd 2012 (Gene): Decentralised Cooperative Localisation for Heterogeneous Teams of Mobile Robots

Link

Presented by Chun-Kai (Gene) Chang

From ICRA2011 Australian Centre for Field Robotics, University of Sydney, NSW, Australia

Authors: Tim Bailey, Mitch Bryson, Hua Mu , John Vial, Lachlan McCalman and Hugh Durrant-Whyte


Abstract:
This paper presents a distributed algorithm for performing joint localisation of a team of robots. The mobile robots have heterogeneous sensing capabilities, with some having high-quality inertial and exteroceptive sensing, while others have only low-quality sensing or none at all. By sharing information, a combined estimate of all robot poses is obtained. Inter-robot range-bearing measurements provide the mechanism for transferring pose information from well-localised vehicles to those less capable.

In our proposed formulation, high-frequency egocentric data (e.g., odometry, IMU, GPS) is fused locally on each platform. This is the distributed part of the algorithm. Inter-robot measurements, and accompanying state estimates, are communicated to a central server, which generates an optimal minimum mean-squared estimate of all robot poses. This server is easily duplicated for full redundant decentralisation. Communication and computation are efficient due to the sparseness properties of the information-form Gaussian representation. A team of three indoor mobile robots equipped with lasers, odometry and inertial sensing provides experimental verification of the algorithm's effectiveness in combining location information.
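The abstract attributes the server's efficiency to the information-form Gaussian representation. A minimal sketch of why that form is convenient for fusion: independent Gaussian estimates of the same state combine by simple addition of information matrices and vectors. The 2-D position estimates and covariances below are made up for illustration; the paper's formulation handles full pose trajectories and cross-correlations.

```python
import numpy as np

def to_information(mean, cov):
    # Convert a moment-form Gaussian (mean, covariance) to
    # information form (information matrix Y, information vector y).
    Y = np.linalg.inv(cov)
    return Y, Y @ mean

def fuse(estimates):
    # In information form, fusing independent estimates of the same
    # state is element-wise addition, which keeps the central fusion
    # step cheap and easy to replicate on a duplicated server.
    Y_total = sum(Y for Y, _ in estimates)
    y_total = sum(y for _, y in estimates)
    mean = np.linalg.solve(Y_total, y_total)   # back to moment form
    return mean, np.linalg.inv(Y_total)

# A well-localised robot (tight covariance) and a poorly localised one
# (loose covariance) report estimates of the same 2-D position.
good = to_information(np.array([1.0, 2.0]), np.diag([0.01, 0.01]))
poor = to_information(np.array([1.5, 2.5]), np.diag([1.0, 1.0]))
mean, cov = fuse([good, poor])
print(mean)  # dominated by the well-localised robot's estimate
```

The fused mean lands close to the well-localised robot's report because its information matrix (inverse covariance) carries far more weight, which is the mechanism the abstract describes for pulling pose information toward less capable vehicles.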