Sunday, May 31, 2009

Some talks at ICRA 2009.

Talks at three forums (Industrial Forum, Science Forum, and Citizen's Forum) and plenary talks of IEEE ICRA 2009 are posted at http://podcasts.icra2009.org/rss.xml.


Tuesday, May 26, 2009

ICRA2009 Awards

Best Vision Paper
Moving Obstacle Detection in Highly Dynamic Scenes [Local Copy][Attachment]

Best Automation Paper and Best Student Paper
Design and Calibration of a Microfabricated 6-Axis Force-Torque Sensor for Microrobotic Applications [Local Copy]

Best Conference Paper
Towards a Navigation System for Autonomous Indoor Flying [Local Copy]

Monday, May 25, 2009

Lab Meeting June 1st, 2009 (Jeff): Modeling RFID Signal Strength and Tag Detection for Localization and Mapping

Title: Modeling RFID Signal Strength and Tag Detection for Localization and Mapping

Authors: Dominik Joho, Christian Plagemann and Wolfram Burgard

Abstract:

In recent years, there has been an increasing interest within the robotics community in investigating whether Radio Frequency Identification (RFID) technology can be utilized to solve localization and mapping problems in the context of mobile robots. We present a novel sensor model which can be utilized for localizing RFID tags and for tracking a mobile agent moving through an RFID-equipped environment. The proposed probabilistic sensor model characterizes the received signal strength indication (RSSI) information as well as the tag detection events to achieve a higher modeling accuracy compared to state-of-the-art models which deal with one of these aspects only. We furthermore propose a method that is able to bootstrap such a sensor model in a fully unsupervised fashion. Real-world experiments demonstrate the effectiveness of our approach also in comparison to existing techniques.

Link:
ICRA2009
http://www.informatik.uni-freiburg.de/~joho/publications/joho09icra.html
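The combined detection/RSSI idea from the abstract can be sketched in a few lines. Everything below (the linear detection falloff, the log-distance RSSI curve, and all constants) is an invented toy stand-in, not the paper's learned model:

```python
import math

def detection_prob(distance, max_range=4.0):
    """Probability of detecting a tag, decaying linearly with distance (toy)."""
    return max(0.0, 1.0 - distance / max_range)

def rssi_likelihood(rssi, distance, a=-30.0, b=-10.0, sigma=4.0):
    """Gaussian likelihood of an RSSI reading under a log-distance path model."""
    expected = a + b * math.log10(max(distance, 0.1))
    return math.exp(-0.5 * ((rssi - expected) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def measurement_likelihood(detected, rssi, distance):
    """Combined model: the detection event and, if detected, the RSSI value."""
    p_det = detection_prob(distance)
    if not detected:
        return 1.0 - p_det
    return p_det * rssi_likelihood(rssi, distance)

# A nearby tag with a strong reading should be more likely than a distant one.
near = measurement_likelihood(True, rssi=-32.0, distance=1.0)
far = measurement_likelihood(True, rssi=-32.0, distance=3.5)
print(near > far)  # True
```

Such a likelihood is exactly what a particle filter would use to weight pose hypotheses when tracking an agent through a tagged environment.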

Sunday, May 17, 2009

Lab Meeting May 25, 2009 (fish60):

I will talk about what I have been reading in recent weeks, based on the following paper.

Brenna Argall, Sonia Chernova and Manuela Veloso. A Survey of Robot Learning from Demonstration. Robotics and Autonomous Systems. Vol. 57, No. 5, pages 469-483, 2009.

Link

Abstract:
We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research.
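As a toy illustration of one policy-derivation route the survey covers (a matching function from states to demonstrated actions), here is a nearest-neighbor policy over invented one-dimensional demonstrations:

```python
def derive_policy(demonstrations):
    """Return a policy that imitates the action of the nearest demonstrated state."""
    def policy(state):
        nearest = min(demonstrations, key=lambda d: abs(d[0] - state))
        return nearest[1]
    return policy

# Demonstrated (state, action) pairs, e.g. distance-to-wall -> steering command.
demos = [(0.5, "turn_away"), (2.0, "go_straight"), (5.0, "approach")]
policy = derive_policy(demos)
print(policy(0.7))  # turn_away
print(policy(4.0))  # approach
```

The survey's other derivation families (dynamics models, plans) replace this lookup with learned models, but the input/output contract of the policy is the same.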

Wednesday, May 13, 2009

MIT CSAIL Technical Report: Scene Classification with a Biologically Inspired Method

Title: Scene Classification with a Biologically Inspired Method
(MIT CSAIL Technical Report)

Abstract:
We present a biologically motivated method for scene image classification. The core of the method is to use a shape-based image property provided by a hierarchical feedforward model of the visual cortex [18]. Edge-based and color-based image properties are additionally used to improve the accuracy. The method consists of two stages of image analysis. In the first stage, each of three classification paths uses one image property (i.e. shape-, edge-, or color-based features) independently. In the second stage, a single classifier assigns the category of an image based on the probability distributions of the first-stage classifier outputs. Experiments show that the method boosts the classification accuracy over the shape-based model. We demonstrate that this method achieves a high accuracy comparable to other reported methods on a publicly available color image dataset.
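The two-stage scheme can be mimicked in miniature. Here the second stage is a simple product-of-distributions fusion, which is only one plausible stand-in for the report's actual second-stage classifier; the three per-path distributions are made up:

```python
CATEGORIES = ["forest", "street", "coast"]

def fuse(distributions):
    """Combine per-path category distributions by elementwise product, renormalized."""
    fused = [1.0] * len(CATEGORIES)
    for dist in distributions:
        fused = [f * p for f, p in zip(fused, dist)]
    total = sum(fused)
    return [f / total for f in fused]

# First-stage outputs of the shape, edge, and color paths (invented numbers).
shape = [0.6, 0.3, 0.1]
edge = [0.5, 0.2, 0.3]
color = [0.7, 0.1, 0.2]
fused = fuse([shape, edge, color])
print(CATEGORIES[fused.index(max(fused))])  # forest
```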

Monday, May 11, 2009

VASC Seminar: A Data-Driven Vision Compiler for Automatic Object Pose Recognition

VASC Seminar
Monday, May 11, 2009
3:30p-4:30p
NSH 1507

A Data-Driven Vision Compiler for Automatic Object Pose Recognition

Rosen Diankov
Carnegie Mellon University, Robotics Institute


Abstract:

This presentation focuses on an object-specific vision system that detects and extracts the precise 6D pose of objects in an image. The system builds a data-driven statistical model of the expected features of an object's surface and combines this with a discrete search method to extract the poses of all objects. The training phase of the vision system can be interpreted as a compiler that automatically analyzes the statistics of how the features are distributed on the object and determines a feature set's stability and discriminative power. This compilation phase requires the precise CAD model of an object along with a training set of real-world images. After compilation, a CAD-independent model of how features relate with respect to the object's pose and inter-relate with each other is created. These relationships allow both point-based features like SIFT and edge-based features to be used simultaneously when computing the 6D pose of an object. Using this data-driven model, we employ a discrete randomized search with RANSAC to find the poses of all instances of the object in a novel image.
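The randomized RANSAC search can be illustrated on a toy version of the problem, with the 6D pose shrunk to a 2D translation and invented feature correspondences:

```python
import random

def ransac_translation(pairs, iters=200, threshold=0.2, seed=0):
    """Recover a 2D translation from (model, scene) point pairs with outliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        # Hypothesize a translation from one randomly chosen correspondence.
        (mx, my), (sx, sy) = rng.choice(pairs)
        tx, ty = sx - mx, sy - my
        # Count correspondences consistent with that hypothesis.
        inliers = sum(1 for (ax, ay), (bx, by) in pairs
                      if abs(ax + tx - bx) < threshold and abs(ay + ty - by) < threshold)
        if inliers > best_inliers:
            best, best_inliers = (tx, ty), inliers
    return best

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
true_t = (3.0, -2.0)
scene = [(x + true_t[0], y + true_t[1]) for x, y in model]
pairs = list(zip(model, scene)) + [((0.5, 0.5), (9.0, 9.0))]  # one outlier
print(ransac_translation(pairs))  # (3.0, -2.0)
```

The real system hypothesizes full 6D poses from small feature sets and scores them against the compiled statistical model, but the hypothesize-and-count loop is the same.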


Bio:

Rosen Diankov graduated from the University of California, Berkeley in 2006 with degrees in Electrical Engineering and Computer Science, and in Applied Math. He is currently a PhD student at the Robotics Institute at Carnegie Mellon University. Rosen's main research focus is tackling the robotics problem: combining perception, planning, and control into one coherent framework. He has worked on several vision and planning systems involving autonomous robots in everyday scenarios, both in the United States and Japan.

VASC Seminars are sponsored by Tandent Vision Science, Inc.

Lab Meeting May 25, 2009 (Chung-Han): COLD: The CoSy Localization Database

Title: COLD: The CoSy Localization Database

Authors: A. Pronobis, B. Caputo

Abstract: Two key competencies for mobile robotic systems are localization and semantic context interpretation. Recently, vision has become the modality of choice for these problems as it provides richer and more descriptive sensory input. At the same time, designing and testing vision-based algorithms still remains a challenge, as large amounts of carefully selected data are required to address the high variability of visual information. In this paper we present a freely available database which provides a large-scale, flexible testing environment for vision-based topological localization and semantic knowledge extraction in robotic systems. The database contains 76 image sequences acquired in three different indoor environments across Europe. Acquisition was performed with the same perspective and omnidirectional camera setup, in rooms of different functionality and under various conditions. The database is an ideal testbed for evaluating algorithms in real-world scenarios with respect to both dynamic and categorical variations.


[Full text]

Tuesday, May 05, 2009

CMU talk: Fast Feature Detection and Stochastic Parameter Estimation of Road Shape using Multiple LIDAR

FRC Seminar

Fast Feature Detection and Stochastic Parameter Estimation of Road Shape using Multiple LIDAR

Kevin Peterson
PhD Student, Robotics Institute, Carnegie Mellon University


Thursday, May 7th, 2009

Developers of autonomous vehicles must overcome significant challenges before these vehicles can operate around human drivers. In urban environments autonomous cars will be required to follow complex traffic rules regarding merging and queuing, navigate in close proximity to other drivers, and safely avoid collision with pedestrians and fixed obstacles near the road. Knowledge of the location and shape of the roadway near the autonomous car is fundamental to these behaviors. While it is tempting to build an a priori GPS-registered map of the road network, the possibility of change in the road structure (e.g. new roads, construction, etc.) precludes the use of maps alone. It is therefore necessary to detect and track roads in real-time.

A rich body of work exists in the area of road tracking. Although some early work was performed on unimproved roads, a majority of the available research focuses on paved roads and highways. Additionally, a vast majority of this work focuses on the use of cameras as the single sensing modality. In this talk I will present a framework for road tracking that uses a particle filter to fuse several sources of data including LIDAR and video. The approach enables road tracking on unimproved roads and, because of the flexible nature of the particle filter, can easily be extended to incorporate new forms of data. I will present results from our preparations for the DARPA Urban Challenge.
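A stripped-down version of the particle-filter fusion described above: the road is reduced to a single parameter (its lateral offset), particles are weighted by agreement with noisy detections (standing in for LIDAR and video features), and then resampled. The road model and all numbers are illustrative:

```python
import math
import random

def track_road(measurements, n_particles=500, noise=0.3, seed=1):
    """Estimate a road's lateral offset from noisy scalar detections."""
    rng = random.Random(seed)
    particles = [rng.uniform(-5.0, 5.0) for _ in range(n_particles)]
    for z in measurements:
        # Weight each particle by how well it explains the measurement.
        weights = [math.exp(-((p - z) / noise) ** 2) for p in particles]
        # Resample in proportion to weight, then add process noise.
        particles = [p + rng.gauss(0.0, 0.05)
                     for p in rng.choices(particles, weights, k=n_particles)]
    return sum(particles) / len(particles)

# Noisy observations of a road whose true lateral offset is 2.0 m.
obs = [2.1, 1.9, 2.05, 2.0, 1.95]
estimate = track_road(obs)
print(abs(estimate - 2.0) < 0.5)  # True
```

Fusing a second sensor amounts to multiplying in another weight term per particle, which is the flexibility the talk attributes to the particle-filter formulation.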

Speaker Bio: Kevin Peterson's research focuses on perception techniques for robust autonomous vehicles. Kevin holds a B.S. and M.S. in Electrical and Computer Engineering from Carnegie Mellon University in Pittsburgh, Pennsylvania and is currently pursuing a PhD in Robotics, also from Carnegie Mellon University. Kevin has built software systems for many autonomous systems, with applications ranging from cave exploration to unexploded ordnance cleanup. Notably, Kevin led Red Team Too, one of CMU's entries in the 2005 DARPA Grand Challenge, and was a participant on Tartan Racing, CMU's winning entry in the 2007 DARPA Urban Challenge. Since then, he has been applying inverse optimal control techniques to build models of pedestrian motion in structured and unstructured environments.

Saturday, May 02, 2009

NTU PhD oral: Mobile Agent Enhanced Service-Oriented Smart Home Architecture and its Human-System Interaction Framework and Algorithm

NTU CSIE PhD oral:

Mobile Agent Enhanced Service-Oriented Smart Home Architecture and its Human-System Interaction Framework and Algorithm

Chao-Lin Wu

Date: May 7, 2009 (Thursday)
Time: 10 am~ 12 noon
Place: CSIE R340

Advisor: Li-Chen Fu

NTU PhD oral: Computer Vision Techniques for Effective Pedestrian Detection

Computer Vision Techniques for Effective Pedestrian Detection

Yu-Ting Chen

Date: May 6, 2009 (Wednesday)
Time: 10 am~12 noon
Place: CSIE R440

Advisors: Chu-Song Chen and Yi-Ping Hung

Oral examination committee:
(1) Internal: Yi-Ping Hung, Chu-Song Chen, Li-Chen Fu, Chieh-Chih Wang
(2) External: Sei-Wang Chen, Sheng-Jyh Wang, Shang-Hong Lai, Kuo-Liang Chung

MIT PhD Thesis: Robust and Efficient Robotic Mapping

MIT PhD Thesis

Title: Robust and Efficient Robotic Mapping

Author: Edwin B. Olson
Date: June 2008

Abstract
Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots.
Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult, especially when using inexpensive (and therefore preferable) sensors.
This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described.
In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data.
We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.
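The "one observation at a time" idea can be caricatured on a one-dimensional pose chain: each constraint says two poses should differ by a measured offset, and each iteration nudges the two poses of a single constraint toward agreement. This is a cartoon of stochastic constraint relaxation, not Olson's actual algorithm, and the constraints are invented:

```python
def relax(poses, constraints, sweeps=100, rate=0.5):
    """Each constraint (i, j, m) says poses[j] - poses[i] should equal m."""
    poses = list(poses)
    for _ in range(sweeps):
        for i, j, measured in constraints:
            error = (poses[j] - poses[i]) - measured
            # Split the correction between the two poses of this constraint.
            poses[i] += rate * error / 2
            poses[j] -= rate * error / 2
    return poses

# Two odometry steps of 1.0 each, plus a slightly inconsistent loop
# closure claiming pose 2 is 2.2 away from pose 0.
constraints = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.2)]
poses = relax([0.0, 0.0, 0.0], constraints)
gap = poses[2] - poses[0]
print(2.0 < gap < 2.2)  # True: a compromise between odometry and the loop closure
```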


Link: pdf

CMU talk: Click Chain Model in Web Search

ML Lunch talk:
Speaker: Fan Guo
Date: Monday, May 4, 2009

Title: Click Chain Model in Web Search

Abstract:
Given a terabyte click log, can we build an efficient and effective click model? It is commonly believed that web search click logs are a gold mine for search business, because they reflect users' preference over web documents presented by the search engine. Click models provide a principled approach to inferring user-perceived relevance of web documents, which can be leveraged in numerous applications in search businesses. Due to the huge volume of click data, scalability is a must. I will present the click chain model, which is based on a solid, Bayesian framework. It is both scalable and incremental, perfectly meeting the computational challenges imposed by the voluminous click logs that constantly grow.

Joint work with Chao Liu, Anitha Kannan, Tom Minka, Michael Taylor, Yi-Min Wang and Christos Faloutsos.
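As a much simpler cousin of such click models (a position-bias/examination correction, not the actual Click Chain Model), one can discount clicks by an assumed per-position examination probability; the probabilities and the click log below are invented:

```python
EXAM = [1.0, 0.6, 0.3]  # assumed probability the user examines each position

def estimate_relevance(sessions):
    """sessions: list of (shown_docs, clicked_set). Returns doc -> relevance score."""
    shows, clicks = {}, {}
    for docs, clicked in sessions:
        for pos, doc in enumerate(docs):
            shows[doc] = shows.get(doc, 0.0) + EXAM[pos]
            clicks[doc] = clicks.get(doc, 0.0) + (1.0 if doc in clicked else 0.0)
    # Ratio of clicks to examination-weighted impressions (unclamped toy score).
    return {doc: clicks[doc] / shows[doc] for doc in shows}

sessions = [
    (["a", "b", "c"], {"a"}),
    (["b", "a", "c"], {"a"}),
    (["a", "b", "c"], {"a", "b"}),
]
rel = estimate_relevance(sessions)
print(rel["a"] > rel["b"] > rel["c"])  # True
```

The actual model replaces these point estimates with Bayesian posteriors that can be updated incrementally as the log grows.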

Friday, May 01, 2009

NTU talk: AHCBoost: Boosting for Multi-class Classification

Title: AHCBoost: Boosting for Multi-class Classification
Speaker: Prof. Andy Chen-Hai Tsao, National Dong Hwa University
Time: 02:20pm, May 22 (Friday), 2009
Place: Room 102, CSIE building

Abstract:
AdaBoost is one of the important ensemble classifiers developed in the last decade. However, there are difficulties in applying AdaBoost to multi-class classification. In this study, we introduce the adjustable hyperbolic cosine loss to develop a new boosting algorithm, AHCBoost. Our experiments on benchmark data sets suggest that AHCBoost is very competitive with, or even better than, the multi-class classifiers SVM and glmBoost. In addition to the fast reduction of training and testing errors, AHCBoost is relatively immune to overfitting and requires little parameter tuning. Some experiments exploring the potential and limitations of using AHCBoost for ordinal response will also be reported.
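The abstract does not spell out the adjustable hyperbolic cosine loss itself, so as a baseline sketch of the boosting family AHCBoost belongs to, here is plain two-class AdaBoost (exponential loss) with decision stumps on toy one-dimensional data:

```python
import math

def adaboost(X, y, rounds=10):
    """X: list of 1-D feature values; y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, sign)
    for _ in range(rounds):
        # Pick the stump sign(s * (x - t)) with minimum weighted error.
        best = None
        for t in sorted(set(X)):
            for s in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (s if xi > t else -s) != yi)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, s))
        # Reweight: increase the weight of misclassified examples.
        w = [wi * math.exp(-alpha * yi * (s if xi > t else -s))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    def predict(x):
        score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
        return 1 if score >= 0 else -1
    return predict

X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
y = [-1, -1, -1, 1, 1, 1]
predict = adaboost(X, y)
print([predict(x) for x in X] == y)  # True
```

AHCBoost replaces the exponential reweighting with its adjustable hyperbolic cosine loss and extends the scheme to the multi-class case.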

Short Biography:

Professional Positions
Department of Applied Math, National Dong Hwa University
* Professor: 2005 to present
* Associate Professor: 1997 to 2005
* Assistant Professor : 1995 to 1997

Institute of Math Statistics, National Chung Cheng University
* Visiting Associate Professor: 1994 to 1995