Friday, November 30, 2007

IROS 2007: Spatial Reasoning for Human Robot Interaction

Emrah Akin Sisbot, Luis F. Marin and Rachid Alami

Abstract
Robots’ interaction with humans raises new issues for geometrical reasoning, where the humans must be taken explicitly into account. We claim that a human-aware motion system must not only elaborate safe robot motions, but also synthesize good, socially acceptable and legible movements.
This paper focuses on a manipulation planner and a placement mechanism that explicitly take the robot's human partners into account by reasoning about their accessibility, their field of view and their preferences. This planner is part of a human-aware motion and manipulation planning and control system that we aim to develop in order to achieve motion and manipulation tasks in the presence of, or in synergy with, humans.

Tuesday, November 27, 2007

[Intelligence Seminar] Activity Recognition from Wearable Sensors

Intelligence Seminar
Title: Activity Recognition from Wearable Sensors
Date: Nov 29

Speaker:

Dieter Fox is Associate Professor and Director of the Robotics and State Estimation Lab in the Computer Science & Engineering Department at the University of Washington, Seattle. He obtained his Ph.D. from the University of Bonn, Germany. Before joining UW, he spent two years as a postdoctoral researcher at the CMU Robot Learning Lab.

Dieter's research focuses on probabilistic state estimation with applications in robotics and activity recognition.

Abstract:

Recent advances in wearable sensing and computing devices and in fast, probabilistic inference techniques make possible the fine-grained estimation of a person's activities over extended periods of time. In this talk I will show how dynamic Bayesian networks and conditional random fields can be used to estimate the location and activity of a person based on information such as GPS readings or WiFi signal strength. Our models use multiple levels of abstraction to bridge the gap between raw sensor measurements and high level information such as a user's mode of transportation, her current goal, and her significant places (e.g. home or work place). I will also present work on using RFID tags or a wearable multi-sensor system to estimate a person's fine-grained activities.

This is joint work with Brian Ferris, Lin Liao, Don Patterson, Amarnag Subramanya, Jeff Bilmes, Gaetano Borriello, and Henry Kautz.

Monday, November 26, 2007

[IROS'07] Feature Selection in Conditional Random Fields for Activity Recognition

Title: Feature Selection in Conditional Random Fields for Activity Recognition

Author:
Vail, Douglas Carnegie Mellon Univ.
Lafferty, John Carnegie Mellon Univ.
Veloso, Manuela Carnegie Mellon Univ.

Abstract:

Temporal classification, such as activity recognition, is a key component for creating intelligent robot systems. In the case of robots, classification algorithms must robustly incorporate complex, non-independent features extracted from streams of sensor data. Conditional random fields are discriminatively trained temporal models that can easily incorporate such features. However, robots have few computational resources to spare for computing a large number of features from high bandwidth sensor data, which creates opportunities for feature selection. Creating models that contain only the most relevant features reduces the computational burden of temporal classification. In this paper, we show that l1 regularization is an effective technique for feature selection in conditional random fields. We present results from a multi-robot tag domain with data from both real and simulated robots that compare the classification accuracy of models trained with l1 regularization, which simultaneously smoothes the model and selects features; l2 regularization, which smoothes to avoid over-fitting, but performs no feature selection; and models trained with no smoothing.
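
To get a feel for how l1 regularization performs feature selection, it helps to look at a plain logistic-regression classifier, which is what a linear-chain CRF reduces to when the temporal links are dropped. The sketch below is my own illustration (scikit-learn, synthetic data), not anything from the paper; only the sparsity pattern of the learned weights matters here.

    # Minimal sketch: l1 vs. l2 regularization as feature selection.
    # A CRF without temporal edges reduces to logistic regression, so the
    # sparsity effect of the l1 penalty can be shown here directly.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic "sensor features": only 10 of 100 features are informative.
    X, y = make_classification(n_samples=500, n_features=100,
                               n_informative=10, n_redundant=0, random_state=0)

    l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    l2_model = LogisticRegression(penalty="l2", solver="liblinear", C=0.1).fit(X, y)

    print("non-zero weights with l1:", np.sum(l1_model.coef_ != 0))  # only a few survive
    print("non-zero weights with l2:", np.sum(l2_model.coef_ != 0))  # essentially all 100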

Sunday, November 25, 2007

VASC Seminar : Object Recognition by Scene Alignment

Bryan Russell
MIT
Monday, Nov 26, 3:30pm, NSH 1507

Current object recognition systems can only recognize a limited number of object categories; scaling up to many categories is the next challenge in object recognition. We seek to build a system to recognize and localize many different object categories in complex scenes. We achieve this through a deceptively simple approach: by matching the input image, in an appropriate representation, to images in a large training set of labeled images. This gives us a set of retrieval images, which provide hypotheses for object identities and locations. We combine this knowledge from the retrieval images with an object detector to detect objects in the image. The simplicity of the approach allows learning for a large number of object classes embedded in many different scenes. We demonstrate improved classification and localization performance over a standard object detector using a held-out test set from the LabelMe database. Furthermore, our system restricts the object search space and therefore greatly increases computational efficiency.

Bio:
After leaving sunny Phoenix, AZ, Bryan received his A.B. from Dartmouth College. He recently defended his dissertation "Labeling, Discovering, and Detecting Objects in Images" at MIT under the supervision of William Freeman and Antonio Torralba. His next journey will be as a post-doctoral fellow at Ecole Normale Supérieure under Jean Ponce and Andrew Zisserman. There, he will continue to pursue research in visual object recognition and scene understanding.

Saturday, November 24, 2007

IROS 2007 : A Spatio-Temporal Probabilistic Model for Multi-Sensor Object Recognition

Bertrand Douillard, Dieter Fox, Fabio Ramos

Abstract:

This paper presents a general framework for multi-sensor object recognition through a discriminative probabilistic approach modelling spatial and temporal correlations. The algorithm is developed in the context of Conditional Random Fields (CRFs) trained with virtual evidence boosting. The resulting system is able to integrate arbitrary sensor information and incorporate features extracted from the data. The spatial relationships captured by the CRF are further integrated into a smoothing algorithm to improve recognition over time. We demonstrate the benefits of modelling spatial and temporal relationships for the problem of detecting cars using laser and vision data in outdoor environments.

link

Friday, November 23, 2007

IROS 2007: Detection and Tracking of Multiple Pedestrians

Xiaowei Shao, Huijing Zhao, Katsuyuki Nakamura, Kyoichiro Katabira, Ryosuke Shibasaki and Yuri Nakagawa

Abstract:
We propose a novel system for tracking multiple pedestrians in a crowded scene by exploiting single-row laser range scanners that measure distances of surrounding objects. A walking model is built to describe the periodicity of the movement of the feet in the spatial-temporal domain, and a mean-shift clustering technique in combination with spatial-temporal correlation analysis is applied to detect pedestrians. Based on the walking model, a particle filter is employed to track multiple pedestrians. Compared with camera-based methods, our system provides a novel technique to track multiple pedestrians in a relatively large area. The experiments, in which over 300 pedestrians were tracked in 5 minutes, show the validity of the proposed system.
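
For readers unfamiliar with the tracking machinery, the fragment below is a bare-bones particle filter for a single pedestrian in 2D with a constant-velocity motion model and a Gaussian position likelihood. The walking (feet-periodicity) model and the mean-shift detector from the paper are not reproduced here; everything in the sketch is a generic stand-in with made-up noise values.

    # Bare-bones 2D particle filter (a generic stand-in, not the paper's tracker).
    import numpy as np

    rng = np.random.default_rng(0)
    N, dt, motion_noise, meas_noise = 1000, 0.1, 0.05, 0.2

    def pf_step(particles, z):
        """One predict/update/resample cycle; particles are rows [x, y, vx, vy],
        z is an observed pedestrian position [x, y] (e.g. from a detector)."""
        # Predict: constant-velocity motion plus process noise.
        particles = particles.copy()
        particles[:, 0:2] += particles[:, 2:4] * dt
        particles += rng.normal(0.0, motion_noise, particles.shape)
        # Update: Gaussian likelihood of the observed position.
        d2 = np.sum((particles[:, 0:2] - z) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / meas_noise ** 2)
        weights /= weights.sum()
        estimate = weights @ particles[:, 0:2]        # weighted mean position
        # Resample (multinomial) so all particles carry equal weight again.
        particles = particles[rng.choice(N, size=N, p=weights)]
        return particles, estimate

    particles = rng.normal(0.0, 1.0, (N, 4))
    particles, estimate = pf_step(particles, np.array([0.5, -0.2]))
    print(estimate)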

IROS 2007: An Augmented State Vector Approach to GPS-Based Localization

Francesco Capezio, Antonio Sgorbissa, Renato Zaccaria
DIST – University of Genova, Italy

Abstract:
The paper focuses on the localization subsystem of ANSER, a mobile robot for autonomous surveillance in civilian airports and similar wide outdoor areas. ANSER's localization subsystem is composed of a non-differential GPS unit and a laser rangefinder for landmark-based localization (inertial sensors are absent). An augmented state vector approach and an Extended Kalman filter are successfully employed to estimate the colored components in GPS noise, thus getting closer to the conditions for the EKF to be applicable.
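
The core trick, as I read the abstract, is to append the colored (time-correlated) GPS error to the state vector and model it as a first-order Gauss-Markov process, so the filter estimates it alongside the pose. A minimal 1D linear-Kalman version of that idea is sketched below; the simple motion model and all parameter values are assumptions for illustration, not the ANSER configuration.

    # 1D Kalman filter with augmented state [position, velocity, gps_bias];
    # the bias is a first-order Gauss-Markov (colored) GPS error estimated online.
    import numpy as np

    dt, tau = 0.1, 30.0            # time step and bias correlation time (assumed)
    phi = np.exp(-dt / tau)        # Gauss-Markov decay factor for the bias
    F = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, phi]])
    H = np.array([[1.0, 0.0, 1.0]])                     # GPS sees position + bias
    Q = np.diag([1e-4, 1e-3, 1e-2 * (1 - phi ** 2)])    # process noise (assumed)
    R = np.array([[4.0]])                               # white GPS noise variance

    def kf_step(x, P, z):
        """One predict/update cycle for a scalar GPS position measurement z."""
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + R                             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)           # update with innovation
        P = (np.eye(3) - K @ H) @ P
        return x, P

    x, P = np.zeros((3, 1)), np.eye(3) * 10.0
    x, P = kf_step(x, P, 12.3)
    print(x.ravel())                                    # [position, velocity, bias estimate]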

Thursday, November 22, 2007

CMU RI Thesis Proposal Nov 27, 2007 Peer-Advising: An Approach for Policy Improvement when Learning by Demonstration

Brenna Argall

Abstract:
The presence of robots within the world is becoming ever more prevalent. Whether exploration rovers in space or recreational robots for the home, successful autonomous robot operation requires a motion control algorithm, or policy, which maps observations of the world to actions available on the robot. Policy development is generally a complex process restricted to experts within the field. However, as robots become more commonplace, the need for policy development which is straightforward and feasible for non-experts will increase. Furthermore, as robots co-exist with people, humans and robots will necessarily share experiences. With this thesis, we explore an approach to policy development which exploits information from shared human-robot experience. We introduce the concept of policy development through peer-advice: to improve its policy, the robot learner takes advice from a human peer. We characterize a peer as able to execute the robot motion task herself, and to evaluate robot performance according to the measures used to evaluate her own executions.

We develop peer-advising within a Learning by Demonstration (LbD) framework. In typical LbD systems, a teacher provides demonstration data, and the learner estimates the underlying function mapping observations to actions within this dataset. With our approach, we extend this framework to then enter an explicit policy improvement phase. We identify two basic conduits for policy improvement within this setup: to modify the demonstration dataset, and to change the approximating function directly. The former approach we refer to as data-advising, and the latter as function-advising. We have developed a preliminary algorithm which extends the LbD framework along both of these conduits.

This algorithm has been validated empirically both within simulation and using a Segway RMP robot. Peer-advice has proven effective for modifying control policies and improving their performance. Within classical LbD, learner performance is limited by the demonstrator’s abilities; with advice, however, learner performance has been shown to extend and even exceed the capabilities of the demonstration set. In our proposed work, we will further develop and explore peer-advice as an effective tool for LbD policy improvement. Our primary focus will be the development of novel techniques for both function-advising and data-advising. This proposed work will be validated on a Segway RMP robot.

Link

IROS07: Improved Likelihood Models for Probabilistic Localization based on Range Scans

Patrick Pfaff, Christian Plagemann, and Wolfram Burgard
University of Freiburg

Abstract—Range sensors are popular for localization since they directly measure the geometry of the local environment. Another distinct benefit is their typically high accuracy and spatial resolution. It is a well-known problem, however, that the high precision of these sensors leads to practical problems in probabilistic localization approaches such as Monte Carlo localization (MCL), because the likelihood function becomes extremely peaked if no means of regularization are applied. In practice, one therefore artificially smoothes the likelihood function or only integrates a small fraction of the measurements. In this paper we present a more fundamental and robust approach that provides a smooth, location-dependent likelihood model for entire range scans. In practical experiments we compare our approach to previous methods and demonstrate that it leads to more robust localization.
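
The problem and the usual baseline remedy can be seen in a few lines: a naive per-beam Gaussian likelihood, multiplied over hundreds of beams, becomes extremely peaked, so in practice one mixes in a uniform component and sub-samples or down-weights beams. The sketch below shows only that baseline mixture model; the paper's location-dependent model is more sophisticated and is not reproduced here.

    # Baseline smoothed beam likelihood for Monte Carlo localization:
    # per-beam mixture of a Gaussian around the expected range and a uniform term.
    import numpy as np

    def scan_log_likelihood(z, z_expected, sigma=0.2, z_max=30.0,
                            w_hit=0.9, w_rand=0.1, subsample=10):
        """z, z_expected: arrays of measured and map-predicted ranges (same length)."""
        z, z_expected = z[::subsample], z_expected[::subsample]   # use every k-th beam
        gauss = np.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        uniform = 1.0 / z_max
        per_beam = w_hit * gauss + w_rand * uniform
        return np.sum(np.log(per_beam))        # log-likelihood of the sub-sampled scan

    z = np.full(360, 5.0) + np.random.normal(0, 0.05, 360)
    print(scan_log_likelihood(z, np.full(360, 5.0)))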

IROS07: Global Urban Localization based on Road Maps

Jose Guivant and Roman Katz
ARC Centre of Excellence for Autonomous Systems
Australian Centre for Field Robotics
The University of Sydney, Australia

Abstract—This paper presents a method to perform localization in urban environments using segment-based maps together with particle filters. In the proposed approach, the likelihood function is generated as a grid, derived from segment-based maps. The scheme can efficiently assign weights to the particles in real time, with minimum memory requirements and without any additional pre-filtering procedure. Multi-hypotheses cases are handled transparently by the filter. A local history-based observation model is formulated as an extension to deal with ‘out-of-map’ navigation cases. This feature is highly desirable since the map can be incomplete, or the vehicle can be actually located outside the boundaries of the provided map. The system behaves like a ‘virtual GPS’, providing global localization in urban environments, without using an actual GPS. Experimental results show the performance of the proposed architecture in large scale urban environments using route network description (RNDF) segment-based maps.
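
One simple way to realize "the likelihood function is generated as a grid, derived from segment-based maps" is to rasterize the road segments and turn the distance to the nearest segment into a weight via a Gaussian; particle weights then become constant-time grid look-ups. The sketch below is my reading of the idea, not the paper's implementation; grid resolution, map extent and the road noise are made-up values.

    # Likelihood grid from line segments: rasterize, distance-transform, Gaussian.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    res, size, sigma = 0.5, 200, 2.0            # cell size [m], grid cells, road noise [m]
    grid = np.ones((size, size), dtype=bool)    # True = "no road here" (for the EDT)

    def draw_segment(p0, p1):
        """Mark cells along segment p0->p1 (points in metres) as road."""
        n = int(np.hypot(*(np.array(p1) - np.array(p0))) / res) + 1
        for t in np.linspace(0.0, 1.0, n):
            x, y = (1 - t) * np.array(p0) + t * np.array(p1)
            grid[int(y / res), int(x / res)] = False

    draw_segment((10, 10), (90, 10))            # two example road segments
    draw_segment((90, 10), (90, 80))

    dist = distance_transform_edt(grid) * res   # distance [m] to nearest segment
    likelihood = np.exp(-0.5 * (dist / sigma) ** 2)

    def particle_weight(x, y):
        return likelihood[int(y / res), int(x / res)]

    print(particle_weight(50.0, 10.3))          # close to a road -> weight near 1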

[Robotics Institute Seminar] Conditional Random Fields for Labeling Tasks in Robotics

Date: Nov. 30, 2007
Title: Conditional Random Fields for Labeling Tasks in Robotics
Speaker: Dieter Fox, University of Washington, Seattle

Abstract:

Over the last decade, the mobile robotics community has developed highly efficient and robust solutions to estimation problems such as robot localization and map building. With the availability of various techniques for spatially consistent sensor integration, an important next goal is the extraction of high-level information from sensor data. Such information is often discrete, requiring techniques different from those typically applied to mapping and localization.

In this talk I will describe how Conditional Random Fields (CRF) can be applied to tasks such as semantic place labeling, object recognition, and scan matching. CRFs are discriminative, undirected graphical models that were developed for labeling sequence data. Due to their ability to handle arbitrary dependencies between observation features, CRFs are extremely well suited for classification problems involving high-dimensional feature vectors.

This is joint work with Bertrand Douillard, Stephen Friedman, Benson Limketkai, Lin Liao, and Fabio Ramos.
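
For those who have not used CRFs: a linear-chain CRF scores a label sequence as a sum of per-step node scores and label-transition scores, and both training and decoding rest on dynamic programming over that chain. Below is a tiny numpy sketch of Viterbi decoding for hand-made score matrices; it is a generic illustration, not any specific system from the talk.

    # Viterbi decoding for a linear-chain CRF with given log-potentials.
    import numpy as np

    def viterbi(node_scores, trans_scores):
        """node_scores: (T, K) log-potentials per time step and label.
        trans_scores: (K, K) log-potentials for label transitions."""
        T, K = node_scores.shape
        delta = node_scores[0].copy()            # best score ending in each label
        backptr = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            cand = delta[:, None] + trans_scores + node_scores[t][None, :]
            backptr[t] = np.argmax(cand, axis=0)
            delta = np.max(cand, axis=0)
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t, path[-1]]))
        return path[::-1]                        # most likely label sequence

    # Toy example: 3 labels, 5 time steps, random scores.
    rng = np.random.default_rng(1)
    print(viterbi(rng.normal(size=(5, 3)), rng.normal(size=(3, 3))))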

[IROS2007] Decentralized SLAM for Pedestrians without direct Communication

Title:
Decentralized SLAM for Pedestrians without direct Communication

Author:
Alexander Kleiner and Dali Sun

Abstract:
We consider the problem of Decentralized Simultaneous Localization And Mapping (DSLAM) for pedestrians in the context of Urban Search And Rescue (USAR). In this context, DSLAM is a challenging task. First, data exchange fails due to cut-off communication links. Second, loop closure is cumbersome because firemen will intentionally avoid performing loops when facing the reality of emergency response, e.g. while they are searching for victims.

In this paper, we introduce a solution to this problem based on the non-selfish sharing of information between pedestrians for loop closure. We introduce a novel DSLAM method based on data exchange and association via RFID technology, which does not require any radio communication. The approach has been evaluated in both outdoor and semi-indoor environments. The presented results show that sharing information between single pedestrians allows their individual paths to be globally optimized, even if they are not able to communicate directly.

Link:
http://www.informatik.uni-freiburg.de/~kleiner/papers/kleiner_et_al_tr07d.pdf

Wednesday, November 21, 2007

[IROS 07] Ground Truth Evaluation of Large Urban 6D SLAM

Author: Oliver Wulf, Andreas Nüchter, Joachim Hertzberg, and Bernardo Wagner

Abstract—In the past many solutions for simultaneous localization and mapping (SLAM) have been presented. Recently these solutions have been extended to map large environments with six degrees of freedom (DoF) poses. To demonstrate the capabilities of these SLAM algorithms it is common practice to present the generated maps and successful loop closing. Unfortunately there is often no objective performance metric that allows different approaches to be compared. This fact is attributed to the lack of ground truth data. For this reason we present a novel method that is able to generate this ground truth data based on reference maps. Furthermore, the resulting reference path is used to measure the absolute performance of different 6D SLAM algorithms building a large urban outdoor map.

(not available online yet -> lab server /2007.IROS/data/papers/0154.pdf)

Tuesday, November 20, 2007

Lab Meeting November 20, 2007 (YuChun): Design of a Social Mobile Robot Using Emotion-Based Decision Mechanisms

Author:
Geoffrey A. Hollinger, Yavor Georgiev, Anthony Manfredi, Bruce A. Maxwell, Zachary A. Pezzementi, and Benjamin Mitchell

IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems

Abstract:
In this paper, we describe a robot that interacts with humans in a crowded conference environment. The robot detects faces, determines the shirt color of onlooking conference attendants, and reacts with a combination of speech, musical, and movement responses. It continuously updates an internal emotional state, modeled realistically after human psychology research. Using empirically-determined mapping functions, the robot’s state in the emotion space is translated to a particular set of sound and movement responses. We successfully demonstrate this system at the AAAI ’05 Open Interaction Event, showing the potential for emotional modeling to improve human-robot interaction.

link

Monday, November 19, 2007

Lab Meeting November 20, 2007 (Stanley): Progress Report

I will introduce PID control and talk about my progress on the "Position PID controller".
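
As background for the talk, a position PID controller is just three terms computed on the position error; below is a minimal sketch with placeholder gains and time step, not the values used on our platform.

    # Minimal position PID controller (placeholder gains, not the lab's tuning).
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, target, measured):
            error = target - measured
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.02)
    command = controller.update(target=1.0, measured=0.8)   # e.g. a velocity command
    print(command)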

Sunday, November 18, 2007

Lab Meeting November 20, 2007 (fish60): Symbolic Planning and Control of Robot Motion

I will talk about what I've read this week.

Environment-Driven Discretization
Control-Driven Discretization

Link

Lab Meeting 20 November (Anta): Clustering by Passing Messages Between Data Points

From: Science, Vol. 315, 16 February 2007

Author: Brendan J. Frey and Delbert Dueck

Abstract:
Clustering data by identifying a subset of representative examples is important for processing sensory signals and detecting patterns in data. Such “exemplars” can be found by randomly choosing an initial subset of data points and then iteratively refining it, but this works well only if that initial choice is close to a good solution. We devised a method called “affinity propagation,” which takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. We used affinity propagation to cluster images of faces, detect genes in microarray data, identify representative sentences in this manuscript, and identify cities that are efficiently accessed by airline travel. Affinity propagation found clusters with much lower error than other methods, and it did so in less than one-hundredth the amount of time.

link
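
The message-passing updates in the paper are compact enough to write out. The sketch below follows the responsibility/availability updates with damping as I understand them from the abstract and the published equations; the similarity choice (negative squared Euclidean distance with a median preference) and the damping factor are the usual defaults, stated here as assumptions.

    # Affinity propagation: responsibility/availability message passing with damping.
    import numpy as np

    def affinity_propagation(S, damping=0.5, n_iter=200):
        n = S.shape[0]
        R = np.zeros((n, n))   # responsibilities r(i,k)
        A = np.zeros((n, n))   # availabilities  a(i,k)
        for _ in range(n_iter):
            # r(i,k) <- s(i,k) - max_{k' != k} [ a(i,k') + s(i,k') ]
            AS = A + S
            idx = np.argmax(AS, axis=1)
            first = AS[np.arange(n), idx]
            AS[np.arange(n), idx] = -np.inf
            second = np.max(AS, axis=1)
            R_new = S - first[:, None]
            R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
            R = damping * R + (1 - damping) * R_new
            # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, np.diag(R))
            A_new = Rp.sum(axis=0)[None, :] - Rp
            diag = np.diag(A_new).copy()          # a(k,k) = sum_{i' != k} max(0, r(i',k))
            A_new = np.minimum(A_new, 0)
            np.fill_diagonal(A_new, diag)
            A = damping * A + (1 - damping) * A_new
        return np.argmax(A + R, axis=1)           # exemplar index chosen by each point

    X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
    S = -np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)   # negative squared distance
    np.fill_diagonal(S, np.median(S))                           # shared preference
    print(affinity_propagation(S))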

Saturday, November 17, 2007

News: Is mathematical pattern the theory of everything?

www.newscientist.com
* 17 November 2007
* Zeeya Merali
* Magazine issue 2630

GARRETT LISI is an unlikely individual to be staking a claim for a theory of everything. He has no university affiliation and spends most of the year surfing in Hawaii. In winter, he heads to the mountains near Lake Tahoe, California, to teach snowboarding. Until recently, physics was not much more than a hobby.

That hasn't stopped some leading physicists sitting up and taking notice after Lisi made his theory public on the physics pre-print archive this week (www.arxiv.org/abs/0711.0770). By analysing the most elegant and intricate pattern known to mathematics, Lisi has uncovered a relationship underlying all the universe's particles and forces, including gravity -

See the full article.

Thursday, November 15, 2007

CMU ML Lunch: The Maximum Entropy Principle

Speaker: Miroslav Dudik, post-doc in MLD
Title: The Maximum Entropy Principle
Date: Monday November 19

Abstract:
The maximum entropy principle (maxent) has been applied to solve density estimation problems in physics (since 1871), statistics and information theory (since 1957), as well as machine learning (since 1993). According to this principle, we should represent available information as constraints and among all the distributions satisfying the constraints choose the one of maximum entropy. In this overview I will contrast various motivations of maxent with the main focus on applications in statistical inference. I will discuss the equivalence between robust Bayes, maximum entropy, and regularized maximum likelihood estimation, and the implications for principled statistical inference. Finally, I will describe how maxent has been applied to model natural languages and geographic distributions of species.
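
For reference, the constrained statement of the principle and the exponential-family form it produces fit in a few lines. This is standard textbook material written out here for convenience, not taken from the talk.

    % Maximum entropy subject to feature-expectation constraints:
    \max_{p}\; H(p) = -\sum_x p(x)\log p(x)
    \quad\text{s.t.}\quad \sum_x p(x) f_j(x) = \hat{E}[f_j]\ (j = 1,\dots,m),
    \qquad \sum_x p(x) = 1 .

    % Setting the Lagrangian's derivative with respect to p(x) to zero gives a Gibbs
    % (exponential-family) distribution,
    p_\lambda(x) = \frac{1}{Z(\lambda)}\exp\Big(\sum_j \lambda_j f_j(x)\Big),
    \qquad Z(\lambda) = \sum_x \exp\Big(\sum_j \lambda_j f_j(x)\Big),

    % and maximizing the dual over \lambda is exactly maximum-likelihood estimation in
    % this family; relaxing the equality constraints to inequalities corresponds to
    % regularized maximum likelihood, which is the equivalence mentioned in the abstract.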

News: 'Personal Robot' wins iRobot's challenge

A personal robot that can water plants, remind owners to take their medication, turn lights on and off, and control appliances has won a contest sponsored by iRobot.

Danh Trinh, 35, of Towson, Md., won iRobot's Create Challenge contest and its $5,000 prize, with his Personal Home Robot, the company announced Tuesday.

iRobot Create is a preassembled programmable robot designed so developers can create new robots without having to build everything from scratch.

See the full article.


Jeff, make our PAL4 robot smarter and more powerful!

-Bob

Monday, November 12, 2007

Lab Meeting November 13, 2007 (Leo) : Tracking Multiple Targets With Correlated Measurements and Maneuvers

Tracking Multiple Targets With Correlated Measurements and Maneuvers

Author: Rogers, S.R.


Abstract:

The problem of tracking N targets with correlation in both measurement and maneuver statistics is solved by transforming to a coordinate frame in which the N targets are decoupled. For the case of N identical targets, the decoupling is shown to coincide with a transformation to a set of nested center-of-mass coordinates. Absolute and differential tracking accuracies are compared with suboptimal results to show the improvement that is achieved by properly exploiting the correlation between targets.

[link]

Lab Meeting November 13, 2007 (Chihao): Acoustic events localization using SFS

I will show the results of Structure from Sound and discuss some issues.

Lab Meeting 13 November (Any): An Efficient FastSLAM Algorithm for Generating Maps of Large-Scale Cyclic Environments from Raw Laser Range Measurements

Dirk Hähnel, Wolfram Burgard, Dieter Fox and Sebastian Thrun

Intl. Conference on Intelligent Robots and Systems

The ability to learn a consistent model of its environment is a prerequisite for autonomous mobile robots. A particularly challenging problem in acquiring environment maps is that of closing loops; loops in the environment create challenging data association problems. This paper presents a novel algorithm that combines Rao-Blackwellized particle filtering and scan matching. In our approach scan matching is used for minimizing odometric errors during mapping. A probabilistic model of the residual errors of the scan matching process is then used for the resampling steps. This way the number of samples required is seriously reduced. Simultaneously we reduce the particle depletion problem that typically prevents the robot from closing large loops. We present extensive experiments that illustrate the superior performance of our approach compared to previous approaches. - Link.
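
A small, generic ingredient worth seeing in code is the effective-sample-size test that Rao-Blackwellized particle filters commonly use to decide when to resample, which is one common way the particle-depletion problem mentioned above is kept in check. This is a standard heuristic, not the residual-error model contributed by the paper.

    # Selective resampling via the effective sample size N_eff = 1 / sum(w_i^2).
    import numpy as np

    def maybe_resample(particles, weights, rng, threshold=0.5):
        """Resample only when N_eff drops below threshold * N (a common heuristic)."""
        n = len(weights)
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < threshold * n:
            idx = rng.choice(n, size=n, p=weights)       # multinomial resampling
            return particles[idx], np.full(n, 1.0 / n)
        return particles, weights

    rng = np.random.default_rng(0)
    particles = rng.normal(size=(100, 3))                 # e.g. [x, y, theta] per particle
    weights = rng.random(100); weights /= weights.sum()
    particles, weights = maybe_resample(particles, weights, rng)
    print(len(particles), weights[:3])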

[CVPR'06] Robust AAM Fitting by Fusion of Images and Disparity Data

Title: Robust AAM Fitting by Fusion of Images and Disparity Data

Author: Joerg Liebelt@cmu, Jing Xiao@Epson, Jie Yang@cmu

Abstract:
Active Appearance Models (AAMs) have been popularly used to represent the appearance and shape variations of human faces. Fitting an AAM to images recovers the face pose as well as its deformable shape and varying appearance. Successful fitting requires that the AAM is sufficiently generic such that it covers all possible facial appearances and shapes in the images. Such a generic AAM is often difficult to obtain in practice, especially when the image quality is low or when occlusion occurs. To achieve robust AAM fitting under such circumstances, this paper proposes to incorporate the disparity data obtained from a stereo camera with the image fitting process. We develop an iterative multi-level algorithm that combines efficient AAM fitting to 2D images and robust 3D shape alignment to disparity data. Experiments on tracking faces in low-resolution images captured from meeting scenarios show that the proposed method achieves better performance than the original 2D AAM fitting algorithm. We also demonstrate an application of the proposed method to a facial expression recognition task.

paper link

Sunday, November 11, 2007

Giggling Robot Becomes One of the Kids

* 20:00 05 November 2007
* NewScientist.com news service
* Mason Inman

Computers might not be clever enough to trick adults into thinking they are intelligent yet, but a new study shows that a giggling robot is sophisticated enough to get toddlers to treat it as a peer.

An experiment led by Javier Movellan at the University of California San Diego, US, is the first long-term study of interaction between toddlers and robots.

QRIO stayed in the middle of a classroom of a dozen toddlers aged between 18 months and two years, using its sensors to avoid bumping the kids or the walls. It was initially programmed to giggle when the kids touched its head, to occasionally sit down, and to lie down when its batteries died. A human operator could also make the robot turn its gaze towards a child or wave as they went away. "We expected that after a few hours, the magic was going to fade," Movellan says. "That's what has been found with earlier robots." But, in fact, the kids warmed to the robot over several weeks, eventually interacting with QRIO in much the same way they did with other toddlers. These interactions increased in quality over several months.

Eventually, the children seemed to care about the robot's well-being. They helped it up when it fell, and played "care-taking" games with it. When the researchers programmed QRIO to spend all its time dancing, the kids quickly lost interest. When the robot went back to its old self, the kids treated it like a peer again.

Movellan says that a robot like this might eventually be useful as a classroom assistant. "You can think of it as an appliance," he says. "We need to find the things that the robots are better at, and leave to humans the things humans are better at," Movellan says.

full article
video

Lab Meeting November 13th, 2007: A high integrity IMU/GPS navigation loop for autonomous land vehicle applications

Abstract
This paper describes the development and implementation of a high integrity navigation system, based on the combined use of the Global Positioning System (GPS) and an inertial measurement unit (IMU), for autonomous land vehicle applications. The paper focuses on the issue of achieving the integrity required of the navigation loop for use in autonomous systems. The paper highlights the detection of possible faults both before and during the fusion process in order to enhance the integrity of the navigation loop. The implementation of this fault detection methodology considers both low frequency faults in the IMU caused by bias in the sensor readings and the misalignment of the unit, and high frequency faults from the GPS receiver caused by multipath errors. The implementation, based on a low-cost, strapdown IMU, aided by either standard or carrier phase GPS technologies, is described. Results of the fusion process are presented.
TEXT LINK
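
One standard way to detect the high-frequency GPS faults mentioned above (e.g. multipath) is a chi-square gate on the Kalman-filter innovation: readings whose normalized innovation squared exceeds a threshold are rejected before fusion. The sketch below is that generic test, not the specific detector of the paper; the noise values and the 2D toy state are assumptions.

    # Chi-square innovation gating for rejecting faulty GPS measurements.
    import numpy as np
    from scipy.stats import chi2

    def gps_measurement_ok(z, x_pred, P_pred, H, R, confidence=0.99):
        """Accept z only if its normalized innovation squared passes a chi-square gate."""
        nu = z - H @ x_pred                       # innovation
        S = H @ P_pred @ H.T + R                  # innovation covariance
        d2 = float(nu.T @ np.linalg.inv(S) @ nu)  # normalized innovation squared (NIS)
        return d2 < chi2.ppf(confidence, df=len(z))

    # Toy 2D example: state [x, y], GPS measures both components directly.
    H = np.eye(2)
    R = np.eye(2) * 9.0                           # 3 m GPS standard deviation (assumed)
    P_pred = np.eye(2) * 1.0
    x_pred = np.array([10.0, 5.0])
    print(gps_measurement_ok(np.array([11.0, 5.5]), x_pred, P_pred, H, R))   # True
    print(gps_measurement_ok(np.array([40.0, 5.0]), x_pred, P_pred, H, R))   # False: multipath-like jump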

Friday, November 09, 2007

[ML Lunch] Monday Nov 12 at noon in NSH 1507: Cosma Shalizi on Stochastic Processes

Speaker: Cosma Shalizi, Assistant Professor, Statistics, CMU
Title: Spatiotemporal Stochastic Processes and Their Prediction
Venue: NSH 1507
Date: Monday November 12
Time: 12:00 noon

Abstract:
This talk will continue the overview of stochastic processes, moving from those which just evolve in time to ones which evolve in time and space, where "space" can be a regular lattice, Euclidean space, a graph, etc. Adding space creates lots of interesting possibilities, which I'll illustrate with "cellular automata" models of physical and biological self-organization. After the challenges this setting raises for statistical learning have had a chance to sink in, I'll describe an approach to discovering efficient "local predictors", and using them to automatically identify interesting coherent structures in spatio-temporal data.
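
As a concrete example of the "cellular automata" mentioned in the abstract, here is an elementary one-dimensional CA (Wolfram's rule 110) in a few lines of numpy; the choice of rule and grid size is arbitrary.

    # Elementary cellular automaton (rule 110) as a toy spatiotemporal process.
    import numpy as np

    def run_ca(rule=110, width=80, steps=40):
        table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
        state = np.zeros(width, dtype=np.uint8)
        state[width // 2] = 1                       # single seed cell
        history = [state.copy()]
        for _ in range(steps):
            left, right = np.roll(state, 1), np.roll(state, -1)
            state = table[4 * left + 2 * state + right]   # look up each 3-cell neighborhood
            history.append(state.copy())
        return np.array(history)

    for row in run_ca():
        print("".join("#" if c else "." for c in row))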

[ML Lunch] Nati Srebro (Thursday, 11/08/07 at 5pm in NSH 3002)

Does a large data-set mean more, or less, work?

In devising methods for optimization problems associated with learning tasks, and in studying the runtime of these methods, we usually think of the runtime as increasing with the data set size. However, from a learning performance perspective, having more data available should not mean we need to spend more time optimizing. At the extreme, we can always ignore some of the data if it makes optimization difficult. But perhaps having more data available can actually allow us to spend less time optimizing?


Two types of behaviors:

(1) a phase transition behavior, where a computationally intractable problem becomes tractable at the cost of excess information. I will demonstrate this through a detailed study of informational and computational limits in clustering.

(2) the scaling of the computational cost of training, e.g. support vector machines (SVMs). I will argue that the computational cost should scale down with data set size, and up with the "hardness" of the decision problem. In particular, I will describe a simple training procedure, achieving state-of-the-art performance on large data sets, whose runtime does not increase with data set size.
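
The flavor of point (2) can be seen in a stochastic sub-gradient SVM trainer: each iteration touches a single randomly drawn example, so the number of iterations (and hence runtime) is set by the desired accuracy and the regularization, not by the data set size. The sketch below is a generic Pegasos-style loop offered as an illustration, not the speaker's actual algorithm; the toy data and parameters are made up.

    # Pegasos-style stochastic sub-gradient training for a linear SVM:
    # each iteration uses one random example, so runtime does not grow with n.
    import numpy as np

    def sgd_svm(X, y, lam=0.01, n_iter=10000, seed=0):
        """y must be in {-1, +1}; returns a linear weight vector (no bias term)."""
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for t in range(1, n_iter + 1):
            i = rng.integers(len(y))                  # draw a single example
            eta = 1.0 / (lam * t)                     # standard Pegasos step size
            if y[i] * (w @ X[i]) < 1:                 # margin violated: hinge sub-gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
        return w

    # Toy separable data: two Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
    y = np.hstack([-np.ones(200), np.ones(200)])
    w = sgd_svm(X, y)
    print(np.mean(np.sign(X @ w) == y))               # training accuracy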

[CMU RI Thesis Oral] Face View Synthesis Using A Single Image

Thesis title: Face View Synthesis Using A Single Image
Speaker: Jiang Ni

Abstract:
Face view synthesis involves using one view of a face to artificially render another view. It is an interesting problem in computer vision and computer graphics, and can be applied in the entertainment industry for animated movies and video games. The fact that the input is only a single image makes the problem very difficult. Previous approaches learn a linear model on pairs of poses from 2D training data and then predict the unknown pose in the test example. Such 2D approaches are much more practical than approaches requiring 3D data and more computationally efficient. However, they perform inadequately when dealing with large angles between poses. In this thesis, we seek to improve performance through better choices in probabilistic modeling. As a first step, we have implemented a statistical model combining distance in feature space (DIFS) and distance from feature space (DFFS) for such pairs of poses. Such a representation leads to better performance. As a second step, we model the relationship between the poses using a Bayesian network. This representation takes advantage of the sparse statistical structure of faces. In particular, we have observed that a given pixel is often statistically correlated with only a small number of other pixel variables. The Bayesian network provides a concise representation for this behavior, reducing the susceptibility to over-fitting. Compared with the linear method, the Bayesian network more accurately predicts small and localized features.

Here is the link.

Tuesday, November 06, 2007

Lab Meeting November 6th, 2007

Omnidirectional vision scan matching for robot localization in dynamic environments
download: http://w.csie.org/~b93026/OmnidirectionalVisionScan.pdf

Abstract
The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real-time on low-cost hardware.

Monday, November 05, 2007

Lab Meeting November 6th, 2007 (Atwood) : Conditional Random Fields for Binary Image Denoising

I will give an introduction to Conditional Random Fields (CRFs) and one specific form of CRFs that I have recently been working on for binary image denoising.

related links:

An Introduction to Conditional Random Fields for Relational Learning

Accelerated Training of Conditional Random Fields

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

Lab Meeting November 6th, 2007 (Yu-Hsiang) : Learning and Inferring Transportation Routines

Title : Learning and Inferring Transportation Routines
Author : Lin Liao, Donald J. Patterson, Dieter Fox, Henry Kautz

Abstract :
This paper introduces a hierarchical Markov model that can learn and infer a user’s daily movements through an urban community. The model uses multiple levels of abstraction in order to bridge the gap between raw GPS sensor measurements and high level information such as a user’s destination and mode of transportation. To achieve efficient inference, we apply Rao-Blackwellized particle filters at multiple levels of the model hierarchy. Locations such as bus stops and parking lots, where the user frequently changes mode of transportation, are learned from GPS data logs without manual labeling of training data. We experimentally demonstrate how to accurately detect novel behavior or user errors (e.g. taking a wrong bus) by explicitly modeling activities in the context of the user’s historical data. Finally, we discuss an application called “Opportunity Knocks” that employs our techniques to help cognitively-impaired people use public transportation safely.

link

Lab Meeting November 6th, 2007 (Kuo_Hwei Lin): Progress report

I will present my recent work on "Moving Objects Detection" with scan data in two versions.

Thursday, November 01, 2007

[ML Lunch Seminars] Cosma Shalizi : Stochastic Processes and Their Prediction

Speaker: Cosma Shalizi, Assistant Professor, Statistics, CMU
Title: Stochastic Processes and Their Prediction
Venue: NSH 1507
Date: Monday November 5
Time: 12:00 noon

Abstract:
Stochastic processes are collections of interdependent random variables; this talk will be an overview of some of the main concepts, and ways in which they might interest people in machine learning. After a brief mathematical introduction, I focus on stochastic processes whose variables are indexed by time, which are closely related to dynamical systems. The key problem here is understanding the dependence of the variables across time, and the different sorts of long-run behavior to which it can give rise. I will talk about various kinds of dependence structure, especially Markov dependence; how to give Markovian representations of non-Markovian processes; and how to use these Markovian representations for prediction. Finally, I'll close with some recent work on discovering predictive Markovian representations from time series.
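
As a minimal example of "using a Markovian representation for prediction": estimate a transition matrix from an observed symbol sequence and predict the distribution of the next symbol from the current state. This is a plain first-order chain, far simpler than the predictive-state constructions the talk is about, and the smoothing choice is an assumption.

    # First-order Markov chain: estimate transitions from a sequence, predict the next symbol.
    import numpy as np

    def fit_markov(seq, n_states):
        counts = np.ones((n_states, n_states))        # add-one smoothing (assumed choice)
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    seq = [0, 1, 1, 0, 2, 1, 1, 0, 2, 1, 0, 1, 1, 2]
    T = fit_markov(seq, n_states=3)
    current = seq[-1]
    print("P(next | current =", current, ") =", T[current])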