Tuesday, December 30, 2008

Robot PAL PhD Thesis Proposal: Towards Robust Localization in Highly Dynamic Environments

Shao-Wen Yang
Proposal for Doctoral Thesis

Thesis Committee:
Chieh-Chih Wang (Chair)
Li-Chen Fu
Jane Yung-Jen Hsu
Han-Pang Huang
Ta-Te Lin
John J. Leonard, MIT

Date: January 12, 2009
Time: 1:00pm
Place: R524

Abstract--Localization in urban environments is a key prerequisite for making a robot truly autonomous, as well as an important issue in collective and cooperative robotics. It is not easily achievable when moving objects are involved or the environment changes. Ego-motion estimation is the problem of determining the pose of a robot relative to its previous location without an absolute frame of reference. Mobile robot localization is the problem of determining the pose of a robot relative to a given map of the environment. The performance of ego-motion estimation depends entirely on the consistency between sensor information at successive time steps, whereas the performance of global localization depends heavily on the consistency between the sensor information and the a priori environment knowledge. These inconsistencies prevent a robot from localizing itself robustly in real environments; taking them into account explicitly serves as the basis for mobile robot localization.

In this thesis, we explore the problem of mobile robot localization in highly dynamic environments. We propose a multiple-model approach that solves the problems of ego-motion estimation and moving object detection jointly in a random sample consensus (RANSAC) paradigm. We show that accurate identification of the static environment aids the classification of moving objects, whereas discrimination of moving objects in turn yields better ego-motion estimation, particularly in environments containing a significant percentage of moving objects.

It is believed that a solution to the moving object detection problem can provide a bridge between the simultaneous localization and mapping (SLAM) and the detection and tracking of moving objects (DATMO) problems. Even with the ego-motion estimation framework in place, reliable moving object detection remains difficult because data association is complicated by the merging and splitting of objects and by temporary occlusion. We propose the use of discriminative models to reason about the joint association between measurements. Scaling such a system to solve the global localization problem will make it more reliable for mobile robots to perform autonomous tasks in crowded urban scenes. We propose a multiple-model approach based on the probabilistic mobile robot localization framework and formulate an extension to the global localization problem. In addition, detecting small objects moving at low speeds, such as pedestrians, is difficult but of particular interest in mobile robotics. We propose the use of prior knowledge from the mobile robot localization framework to deal with the problem of pedestrian detection, and formalize a localization-by-detection and detection-by-localization framework. The proposed approach will be demonstrated experimentally with real data.
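To make the joint estimation concrete, here is a minimal Python sketch of a RANSAC-style ego-motion estimator (the correspondence source, thresholds, and function names are illustrative assumptions, not the thesis implementation): a 2D rigid transform is fit to point matches between consecutive scans, the inliers define the static background, and the remaining outliers become candidate measurements on moving objects.

import numpy as np

def fit_rigid_2d(p, q):
    """Least-squares 2D rigid transform (R, t) mapping points p onto q."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_ego_motion(p, q, iters=200, thresh=0.1, seed=0):
    """p, q: (N, 2) matched points from consecutive laser scans.
    Returns the static-world motion (R, t) and an inlier mask; the
    outliers are candidate measurements on moving objects."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)   # minimal sample
        R, t = fit_rigid_2d(p[idx], q[idx])
        mask = np.linalg.norm(q - (p @ R.T + t), axis=1) < thresh
        if best is None or mask.sum() > best.sum():
            best = mask
    R, t = fit_rigid_2d(p[best], q[best])                 # refit on all inliers
    return R, t, best

Jointly estimating the transform and the inlier set is what lets static-world identification and moving-object discrimination reinforce each other, as argued above.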

Full text: PDF

Monday, December 29, 2008

Lab Meeting January 5, 2009 (Shao-Chen): Blended Local Planning for Generating Safe and Feasible Paths

Title: Blended Local Planning for Generating Safe and Feasible Paths (IROS 2008)
Authors: Ling Xu, Anthony Stentz

Abstract—Many planning approaches adhere to the two-tiered architecture consisting of a long-range, low-fidelity global planner and a short-range, high-fidelity local planner. While this architecture works well in general, it fails in highly constrained environments where the available paths are limited. These situations amplify mismatches between the global and local plans due to the smaller set of feasible actions. We present an approach that dynamically blends local plans online to match the field of global paths. Our blended local planner generates paths from control commands to ensure the safety of the robot as well as achieve the goal. Blending also results in more complete plans than an equivalent unblended planner when navigating cluttered environments. These properties enable the blended local planner to utilize a smaller control set while achieving more efficient planning time. We demonstrate the advantages of blending in simulation using a kinematic car model navigating through maps containing tunnels, cul-de-sacs, and random obstacles.

link

Tuesday, December 23, 2008

Lab Meeting December 29, 2008 (fish60): DWA and/or GND

I will try to report on what I have read recently.

Dynamic window based approach to mobile robot motion control in the presence of moving obstacles
Abstract:
This paper presents a motion control method for mobile robots in partially unknown environments populated with moving obstacles. The proposed method is based on the integration of the focused D* search algorithm and the dynamic window local obstacle avoidance algorithm, with some adaptations that provide efficient avoidance of moving obstacles.

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2007), Roma, Italy, April 10-14, 2007, pp. 1986-1991.

Link
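As a rough illustration of the dynamic window idea (the weights, limits, and config keys below are invented for this sketch; the paper's integration with focused D* is not shown): sample velocity commands reachable within one control cycle, forward-simulate each for a short horizon, discard colliding ones, and score the rest by heading, clearance, and speed.

import math

def frange(a, b, step):
    while a <= b + 1e-9:
        yield a
        a += step

def dwa_command(v, w, pose, goal, obstacles, cfg):
    """One dynamic-window step. cfg: dict with acc_v, acc_w, v_max, w_max,
    v_res, w_res, dt, horizon, radius. obstacles: list of (x, y) points."""
    best, best_score = (0.0, 0.0), -float("inf")
    dv, dw = cfg["acc_v"] * cfg["dt"], cfg["acc_w"] * cfg["dt"]
    for vi in frange(max(0.0, v - dv), min(cfg["v_max"], v + dv), cfg["v_res"]):
        for wi in frange(max(-cfg["w_max"], w - dw),
                         min(cfg["w_max"], w + dw), cfg["w_res"]):
            x, y, th = pose
            clearance = 10.0                       # capped clearance
            for _ in range(int(cfg["horizon"] / cfg["dt"])):
                th += wi * cfg["dt"]               # forward-simulate the command
                x += vi * math.cos(th) * cfg["dt"]
                y += vi * math.sin(th) * cfg["dt"]
                if obstacles:
                    d = min(math.hypot(x - ox, y - oy) for ox, oy in obstacles)
                    clearance = min(clearance, d)
            if clearance < cfg["radius"]:
                continue                           # trajectory collides: inadmissible
            ang = math.atan2(goal[1] - y, goal[0] - x) - th
            heading = -abs(math.atan2(math.sin(ang), math.cos(ang)))
            score = 0.8 * heading + 0.1 * clearance + 0.1 * vi
            if score > best_score:
                best, best_score = (vi, wi), score
    return best

In an integrated planner like the one above, the heading term would typically follow the global D* path rather than the straight-line goal direction, which is presumably where the adaptations for moving obstacles come in.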

Global Nearness Diagram Navigation (GND)

Abstract:
The GND generates motion commands to drive a robot safely between locations whilst avoiding collisions. This system has all the advantages of the reactive nearness diagram (ND) scheme, while having the ability to reason and plan globally (achieving global convergence for the navigation problem).

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2001. Seoul, Korea.

Link

Lab Meeting December 29, 2008 (Alan): Toward a Unified Bayesian Approach to Hybrid Metric--Topological SLAM (IEEE Transactions on Robotics)

Title: Toward a Unified Bayesian Approach to Hybrid Metric--Topological SLAM (IEEE Transactions on Robotics)
Authors: Blanco, J.-L.; Fernandez-Madrigal, J.-A.; Gonzalez, J.

Abstract—This paper introduces a new approach to simultaneous localization and mapping (SLAM) that pursues robustness and accuracy in large-scale environments. Like most successful works on SLAM, we use Bayesian filtering to provide a probabilistic estimation that can cope with uncertainty in the measurements, the robot pose, and the map. Our approach is based on the reconstruction of the robot path in a hybrid discrete-continuous state space, which naturally combines metric and topological maps. There are two fundamental characteristics that set this paper apart from previous ones: 1) the use of a unified Bayesian inference approach both for the metrical and the topological parts of the problem and 2) the analytical formulation of belief distributions over hybrid maps, which allows us to maintain the spatial uncertainty in large spaces more accurately and efficiently than in previous works. We also describe a practical implementation that aims for real-time operation. Our ideas have been validated by promising experimental results in large environments (up to 30,000 m², a 2 km robot path) with multiple nested loops, which could hardly be managed appropriately by other approaches.

[Local copy]

Monday, December 22, 2008

Lab Meeting December 22nd, 2008 (slyfox): σSLAM: Stereo Vision SLAM Using the Rao-Blackwellised Particle Filter and a Novel Mixture Proposal Distribution

Title: σSLAM: Stereo Vision SLAM Using the Rao-Blackwellised Particle Filter and a Novel Mixture Proposal Distribution

Authors: Pantelis Elinas, Robert Sim, James J. Little

Abstract:
We consider the problem of Simultaneous Localization and Mapping (SLAM) using the Rao-Blackwellised Particle Filter (RBPF) for the class of indoor mobile robots equipped only with stereo vision. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and motion estimates. Our work differs from other approaches because landmark estimates are derived from stereo vision and motion estimates are based on sparse optical flow. We distinguish between landmarks using the Scale Invariant Feature Transform (SIFT). This is in contrast to current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. Since our approach depends on a particle filter whose main component is the proposal distribution, we develop and evaluate a novel mixture proposal distribution that allows us to robustly close large loops. We validate our approach experimentally for long camera trajectories, processing thousands of images at reasonable frame rates.

link
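The mixture proposal can be sketched in a few lines (the mixing weight and the Gaussian global-pose model are assumptions for illustration, not σSLAM's actual components): each particle is drawn either from the visual-odometry motion model or from a pose distribution suggested by matching the current observation against the map.

import numpy as np

rng = np.random.default_rng(1)

def motion_sample(pose, odom, sigma=(0.05, 0.05, 0.02)):
    """Motion-model proposal: odometry increment plus Gaussian noise."""
    return pose + odom + rng.normal(0.0, sigma)

def mixture_proposal(particles, odom, global_pose, global_cov, phi=0.8):
    """With probability phi sample from the motion model, otherwise from a
    Gaussian fit to the observation-driven global pose estimate (e.g. from
    SIFT matches against the landmark map). particles: (N, 3) array of
    (x, y, theta) poses."""
    new = np.empty_like(particles)
    for i, pose in enumerate(particles):
        if rng.random() < phi:
            new[i] = motion_sample(pose, odom)
        else:
            new[i] = rng.multivariate_normal(global_pose, global_cov)
    return new

The importance weights must then be computed with this mixture density in the denominator, which is what lets the observation-driven component close large loops without biasing the filter.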

Tuesday, December 16, 2008

CMU talk: Enhancing Photographs using Content-Specific Image Priors

VASC Seminar
December 15, 2008

Enhancing Photographs using Content-Specific Image Priors
Neel Joshi
Microsoft Research

Abstract:
The digital imaging revolution has made the camera practically ubiquitous; however, image quality has not improved with increased camera availability, and image artifacts such as blur, noise, and poor color balance are still quite prevalent. As a result, there is a strong need for simple, automatic, and accurate methods for image correction. Correcting these artifacts, however, is challenging, as problems such as deblurring, denoising, and color correction are ill-posed: the number of unknown values exceeds the number of observations. As a result, it is necessary to add prior information as constraints.

In this talk, I will present three aspects of my dissertation on performing image enhancement using content-specific image models and priors, i.e. models tuned to a particular image. First, I will discuss my work in methods that learn from a photographer's image collection, where I use identity-specific priors to perform corrections for images containing faces. These methods introduce an intuitive paradigm for image enhancement, where users fix images by simply providing examples of good photos from their personal photo album. Second, I will discuss a fast blur estimation method which uses a model that all edges in a sharp image are step-edges. Lastly, I will discuss a framework for image deblurring and denoising that uses local color statistics to produce sharp, low-noise results.

Bio:
Neel Joshi is a Postdoctoral Researcher at Microsoft Research. He recently completed his Ph.D. in Computer Science at UC San Diego where he was advised by Dr. David Kriegman. His research interests include computer vision and graphics, specifically computational photography and video, data-driven graphics, and appearance measurement and modeling. Previously, he earned his Sc.B. in Computer Science from Brown University and his M.S. in Computer Science from Stanford University. He has also held internships at Mitsubishi Electric Research Labs (MERL), Adobe Systems, and Microsoft Research.

Monday, December 15, 2008

Lab Meeting December 22nd, 2008 (swem): Real-time 3D Object Pose Estimation and Tracking for Natural Landmark Based Visual Servo

Title: Real-time 3D Object Pose Estimation and Tracking for Natural Landmark Based Visual Servo
Authors: Changhyun Choi, Seung-Min Baek and Sukhan Lee, Fellow Member, IEEE

Abstract:

A real-time solution for estimating and tracking the 3D pose of a rigid object is presented for image-based visual servo with natural landmarks. The many state-of-the-art technologies that are available for recognizing the 3D pose of an object in a natural setting are not suitable for real-time servo due to their time lags. This paper demonstrates that a real-time solution for 3D pose estimation becomes feasible by combining a fast tracker such as KLT [7] [8] with a method of determining the 3D coordinates of tracking points on an object at the time of SIFT-based tracking point initiation, assuming that a 3D geometric model with a SIFT description of the object is known a priori. By keeping track of tracking points with KLT, removing tracking-point outliers automatically, and reinitiating the tracking points using SIFT once they deteriorate, the 3D pose of an object can be estimated and tracked in real time. This method can be applied to both mono and stereo camera based 3D pose estimation and tracking. The former guarantees higher frame rates, with about 1 ms of local pose estimation, while the latter assures more precise pose results but takes about 16 ms for local pose estimation. The experimental investigations have shown the effectiveness of the proposed approach with real-time performance.

link
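A minimal OpenCV sketch of the track-then-estimate loop described above (function and parameter choices are our assumptions; the paper's own outlier tests and SIFT re-initiation are only indicated by comments):

import cv2
import numpy as np

def track_pose(prev_gray, gray, pts2d, pts3d, K, min_tracks=30):
    """One cycle: KLT-track the points, then estimate pose with PnP+RANSAC.
    pts2d: (N, 1, 2) float32 image points; pts3d: (N, 3) float32 model
    coordinates fixed at SIFT-based initiation; K: 3x3 camera matrix."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts2d, None)
    ok = status.ravel() == 1
    pts2d, pts3d = nxt[ok], pts3d[ok]
    # RANSAC inside solvePnP drops tracking-point outliers automatically
    found, rvec, tvec, inl = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    if not found or inl is None or len(inl) < min_tracks:
        return None   # caller re-initiates tracks by SIFT matching to the model
    keep = inl.ravel()
    return rvec, tvec, pts2d[keep], pts3d[keep]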

Monday, December 08, 2008

CMU talk: Differentially Constrained Motion Re-Planning

CMU FRC Seminar

Differentially Constrained Motion Re-Planning

Mihail Pivtoraiko
Graduate Student, Robotics Institute, CMU

Thursday, December 11th

Abstract
This talk presents an approach to differentially constrained robot motion planning and efficient re-planning. Satisfaction of differential constraints is guaranteed by the state lattice, a search space which consists of feasible motions. Any systematic re-planning algorithm, e.g. D*, can be utilized to search the state lattice to find a motion plan that satisfies the differential constraints, and to repair it efficiently in the event of a change in the environment. Further efficiency is obtained by varying the fidelity of representation of the planning problem. High fidelity is utilized where it matters most, while it is lowered in the areas that do not affect the quality of the plan significantly. The talk presents a method to modify the fidelity between re-plans, thereby enabling dynamic flexibility of the search space, while maintaining its compatibility with re-planning algorithms. The approach is especially suited for mobile robotics applications in unknown challenging environments. We successfully applied the motion planner to robot navigation in this setting.
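To make the search-space idea concrete, here is a toy state lattice searched with Dijkstra (the primitive set and costs are invented for illustration; a real system precomputes dynamically feasible motion primitives and uses a re-planner such as D* so repairs after map changes are cheap):

import heapq

# Motion primitives per heading (0=E, 1=N, 2=W, 3=S): (dx, dy, new_heading, cost).
# In a real lattice each edge is a precomputed, dynamically feasible motion.
PRIMITIVES = {
    0: [(1, 0, 0, 1.0), (1, 1, 1, 1.4), (1, -1, 3, 1.4)],
    1: [(0, 1, 1, 1.0), (1, 1, 0, 1.4), (-1, 1, 2, 1.4)],
    2: [(-1, 0, 2, 1.0), (-1, 1, 1, 1.4), (-1, -1, 3, 1.4)],
    3: [(0, -1, 3, 1.0), (-1, -1, 2, 1.4), (1, -1, 0, 1.4)],
}

def plan(start, goal, blocked):
    """Dijkstra over (x, y, heading) lattice states; blocked: set of (x, y)."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, state, path = heapq.heappop(pq)
        if state[:2] == goal[:2]:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        x, y, h = state
        for dx, dy, nh, c in PRIMITIVES[h]:
            nxt = (x + dx, y + dy, nh)
            if nxt[:2] not in blocked:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return None

For example, plan((0, 0, 0), (3, 2, 0), {(1, 0), (1, 1)}) returns a heading-consistent path around the blocked cells; varying the fidelity amounts to using denser primitive sets only where the plan quality demands it.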

Speaker Bio: Mihail Pivtoraiko is a graduate student at the Robotics Institute. He received his Master's degree at the Robotics Institute in 2005 and worked in the Robotics Section at the NASA/Caltech Jet Propulsion Laboratory (JPL) before returning to RI. Mihail's interests include improving the performance and reliability of mobile robots through research in artificial intelligence and robot control. Over the past five years, he has focused on off-road robot motion planning and navigation, and has participated in DARPA projects (PerceptOR, LAGR), as well as research projects at JPL.

CMU talk: Hamming Embedding and Weak Geometric Consistency for large-scale image and video search

CMU VASC Seminar
Monday, December 8, 2008

Hamming Embedding and Weak Geometric Consistency for large-scale image and video search
Herve Jegou
INRIA

Abstract:
We address the problem of large scale image search, for which many recent methods use a bag-of-features image representation. We show the sub-optimality of such a representation for matching descriptors and derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. Experiments performed on a dataset of one million images show a significant improvement due to our approach. This is confirmed by the TRECVID 2008 video copyright detection task, where we obtained the best results in terms of accuracy for all types of transformations.

This is joint work with M. Douze and C. Schmid.
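A small sketch of the Hamming embedding step (the projection size, threshold, and per-word median training are assumptions for illustration): descriptors assigned to the same visual word are compared by short binary signatures, and matches that disagree in too many bits are discarded.

import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(64, 128))   # random projection: 128-D SIFT -> 64 bits

def signature(desc, medians):
    """Binary signature: project the descriptor, then threshold each
    component by the median learned for this visual word on training data."""
    return (P @ desc > medians).astype(np.uint8)

def he_filter(sig_query, sig_db, max_hamming=24):
    """Keep only same-word database descriptors whose signature lies within
    max_hamming bits of the query's, refining raw bag-of-words votes."""
    dists = np.count_nonzero(sig_db != sig_query, axis=1)
    return np.nonzero(dists <= max_hamming)[0]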

Bio:
Herve Jegou holds an M.S. degree and a PhD in Computer Science from the University of Rennes. He is a former student of the Ecole Normale Superieure de Cachan. After working as a post-doctoral research assistant in the INRIA TEXMEX project, he has been a full-time researcher with the LEAR project-team at INRIA Rhone-Alpes, France, since 2006. His research interests concern large scale image retrieval and approximate nearest neighbor search.

CMU Thesis: Effective Motion Tracking Using Known and Learned Actuation Models

Effective Motion Tracking Using Known and Learned Actuation Models

Yang Gu
Computer Science Department
Carnegie Mellon University

Robots need to track objects. We consider tasks where robots act on the target that is visually tracked. Object tracking efficiency depends entirely on the accuracy of the motion model and of the sensory information. The motion model of the target becomes particularly complex in the presence of multiple agents acting on a mobile target. We assume that the tracked object is actuated by a team of agents, composed of robots and possibly humans. Robots know their own actions, and team members collaborate according to coordination plans and communicated information. The thesis shows that using a previously known or learned action model of the single robot or of the team members improves the efficiency of tracking.

First, we introduce and implement a novel team-driven motion tracking approach. Team-driven motion tracking is a tracking paradigm defined as a set of principles for the inclusion of hierarchical prior knowledge in the construction of a motion model. We illustrate a possible set of behavior levels within the Segway soccer domain that correspond to the abstract motion modeling decomposition.

Second, we introduce a principled approach to incorporating models of the robot-object interaction into the tracking algorithm to effectively improve the performance of the tracker. We present the integration of a single-robot behavioral model, in terms of skills and tactics with multiple actions, into our dynamic Bayesian probabilistic tracking algorithm.

Third, we extend the approach to multiple motion tracking models corresponding to known multi-robot coordination plans or derived from multi-robot communication. We evaluate our resulting informed tracking approach empirically in simulation and using a setup Segway soccer task. The input from the multiple single-robot and multi-robot behavioral sources allows a robot to track mobile targets with dynamic trajectories much more effectively.

Fourth, we present a parameter learning algorithm to learn actuation models. We describe the parametric system model and the parameters we need to learn in the actuation model. As in the KLD-sampling algorithm applied to tracking, we adapt the number of modeling particles and learn the unknown parameters. We successfully decrease the computation time of learning and the state estimation process by using significantly fewer particles on average. We show the effectiveness of learning using simulated experiments. The tracker that uses the learned actuation model achieves improved tracking performance.
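For reference, the KLD-sampling bound the paragraph above alludes to chooses the particle count from the number of occupied histogram bins (this is Fox's KLD-sampling formula; presenting it as a sketch here is our addition, not the thesis text):

import math

def kld_sample_bound(k, epsilon=0.05, z=2.326):
    """Particles needed so the KL divergence between the sampled and true
    posteriors stays below epsilon with confidence 1 - delta, where z is
    the (1 - delta) standard-normal quantile (2.326 is roughly 99%) and
    k is the number of histogram bins currently occupied by particles."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z) ** 3))

When the posterior concentrates in few bins, k is small and the bound allows far fewer particles, which is the source of the computation savings mentioned above.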

These contributions demonstrate that it is possible to effectively improve an agent’s object tracking ability using tactics, plays, communication and learned action models in the presence of multiple agents acting on a mobile object. The introduced tracking algorithms are proven effective in a number of simulated experiments and setup Segway robot soccer tasks. The team-driven motion tracking framework is demonstrated empirically across a wide range of settings of increasing complexity.

Thursday, December 04, 2008

CFP: IJCAI 2009 Learning by Demonstration Challenge

IJCAI 2009 Robot Learning by Demonstration Challenge
July 13-16, 2009
Pasadena, CA, USA
http://www.cc.gatech.edu/~athomaz/IJCAI-LbD-Exhibit/

CALL FOR CONTRIBUTIONS

The IJCAI 2009 Robot Learning by Demonstration (LbD) Challenge, held in conjunction with the International Joint Conference on Artificial Intelligence, welcomes contributions that demonstrate physically embodied robots learning a task or skill from a human teacher. This year, we aim to bring together several research/commercial groups to demonstrate complete platforms performing relevant LbD tasks. Our long-term aim is to define increasingly challenging experiments for future LbD events and to build greater scientific understanding of the area.

CONTRIBUTIONS can include live hardware demonstrations and/or short video clips, showcasing Learning by Demonstration abilities. Those interested in contributing should submit a 1-2 page proposal by March 1, 2009, containing the following information:

- the names and affiliation of the exhibitors;
- a summary of the objectives and methods of the underlying research;
- description of the LbD demonstration;
- citations to any relevant or supporting papers;
- if you are proposing a live hardware demonstration, a list and short description of the hardware you will be using at the Challenge.


SUBMISSION can be done online at:
http://www.easychair.org/conferences/?c=.21071;conf=ijcai09lbdchallenge
Notifications of acceptance will be sent out by March 20, 2009.

TRAVEL SUPPORT may be possible for selected participants and their hardware, depending on available funds and level of demand.

MISSION: the IJCAI 2009 Challenge will serve as the foundation for more focused and commonly pursued challenges for AAAI 2010 and beyond. Please visit the Challenge website for more details: http://www.cc.gatech.edu/~athomaz/IJCAI-LbD-Exhibit/

The IJCAI 2009 Robotics site can be consulted for more information about the overall robotics events: http://robotics.cs.brown.edu/ijcai09/


ORGANIZERS

Andrea Thomaz <athomaz@cc.gatech.edu>
Chad Jenkins <cjenkins@cs.brown.edu>
Monica Anderson <anderson@cs.ua.edu>

[Call for Papers] Autonomous Robots Journal Special Issue: Characterizing Mobile Robot Localization and Mapping

Autonomous Robots Journal Special Issue:
Characterizing Mobile Robot Localization and Mapping
Editors: Raj Madhavan, Chris Scrapper, and Alexander Kleiner

Stable navigation solutions are critical for mobile robots intended to operate in dynamic and unstructured environments. In the context of this special issue, a stable navigation solution is taken to mean the ability of a robotic system "to sense and create internal representations of its environment and estimate pose (where pose consists of position and orientation) with respect to a fixed coordinate frame". Such competency, usually termed localization and mapping, will enable mobile robots to identify obstacles and hazards present in the environment, and maintain an estimate of where they are and where they have been. A myriad of approaches have been proposed and implemented, some with greater success than others. Since the capabilities and limitations of these approaches vary significantly depending on the requirements of the end user, the operational domain, and onboard sensor suite limitations, it is essential for developers of robotic systems to understand the performance characteristics of methodologies employed to produce a stable navigation solution.

Currently, there is no way to quantitatively measure the performance of a robot or a team of robots against user-defined requirements. Additionally, there exists no consensus on what objective evaluation procedures need to be followed to deduce the performance of various robots operating in a variety of domains. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing robot performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer from the "drawing board" to the field. For instance, the evaluation of robotic maps is currently based on qualitative analysis (i.e. visual inspection). This approach does not allow for a better understanding of what errors specific systems are prone to and which systems meet a given set of needs. It has become common practice in the literature to compare newly developed mapping algorithms with former methods by presenting images of generated maps. This procedure turns out to be suboptimal, particularly when applied to large-scale maps. The absence of standardized methods for evaluating emerging robotic technologies has caused segmentation in the research and development communities. This lack of cohesion hinders the attainment of robust mobile robot navigation, in turn slowing progress in many domains, such as manufacturing, service, health care, and security. By providing the research community with access to standardized tools, reference data sets, and an open-source library of navigation solutions, researchers and consumers of mobile robot technologies will be able to evaluate the costs and benefits associated with various navigation solutions.

The primary focus of this special issue is to bring together what is so far an amorphous research community to define standardized methods for the quantitative evaluation of robot localization algorithms and/or robot-generated maps. The performance characteristics of several approaches towards a stable navigation solution will be documented by detailing the capabilities and limitations of each approach, by inter-comparing experimental results, and by examining the underlying mechanisms used to formulate these solutions. Through this effort, we seek to start a process that will compile the results of these evaluations into a reference guide documenting lessons learned and the performance characteristics of various navigation solutions. This will enable end users to select the "best" possible method for their needs, and will also lead to the development of adaptive systems that are more technically capable and, at the same time, safe, thus permitting collaborative operation of humans and machines.

Topics of interest include (but are not limited to):
* Characterizing navigation in complex unstructured domains & requirements imposed by dynamic nature of operating domains
* Evaluation frameworks and adaptive approaches to developing stable navigation solutions
* Probabilistic methodologies with particular attention to uncertainty in assessing robot-generated maps
* Visualization tools for assessing localization and mapping
* Methods for ground truth generation from public map sources
* Multi-robot localization and mapping
* Testing in various domains of interest ranging from manufacturing floors to urban search and rescue
* Applications with demonstrated success or lessons learnt from failures

The above topics are by no means exhaustive but are only meant to be a representative list. We particularly encourage submissions related to mobile robot field deployments, challenges encountered, and lessons learnt during such implementations. Theoretical investigations into assessing performance of robot localization and mapping algorithms are also welcome. Please contact the guest editors if you are not sure if a particular topic fits the special issue.

IMPORTANT DATES
* Paper submission deadline: February 1, 2009
* Notification to authors: May 1, 2009
* Camera ready papers: August 1, 2009

SUBMISSION INFORMATION
See the journal website at http://www.springer.com/10514
Manuscripts should be submitted to: http://AURO.edmgr.com
This online system offers easy and straightforward log-in and submission procedures, and supports a wide range of submission file formats.

Tuesday, December 02, 2008

Call for Contributions - IJCAI 2009 Mobile Manipulation Challenge

IJCAI 2009 Mobile Manipulation Challenge
July 13-16, 2009
Pasadena, CA, USA
http://mobile-manipulation-challenge.net/

CALL FOR CONTRIBUTIONS

The IJCAI 2009 Mobile Manipulation Challenge, held in conjunction with the International Joint Conference on Artificial Intelligence, welcomes contributions that demonstrate physically embodied robots performing a mobile manipulation task. This year, we aim to bring together several research/commercial groups to demonstrate complete platforms performing relevant mobile manipulation tasks. Our long-term aim is to define increasingly challenging experiments for future mobile manipulation events and to build greater scientific understanding of the area.


AREAS OF INTEREST include (but are not limited to):

- point-and-click fetching: human users select various objects (possibly using a laser pointer) for a mobile robot to fetch; we invite participants to bring objects for collective use by all contributors;

- assembling structures: robot manipulators that can build larger structures by connecting smaller primitive parts;

- searching for hidden objects: search tasks that involve manipulating occluding objects to find a hidden goal object.


CONTRIBUTIONS can include live hardware demonstrations and/or short video clips, showcasing manipulation abilities as described above. Those interested in contributing should submit a 1-2 page proposal by March 1, 2009, containing the following information:

- the names and affiliation of the exhibitors;
- a summary of the objectives and methods of the underlying research;
- description of the manipulation demonstration;
- citations to any relevant or supporting papers;
- if you are proposing a live hardware demonstration, a list and short description of the hardware you will be using at the Challenge.


SUBMISSION can be done via email at the address:
contribute@mobile-manipulation-challenge.net

Notifications of acceptance will be sent out by March 20, 2009.

TRAVEL SUPPORT may be possible for selected participants and their hardware, depending on available funds and level of demand.

MISSION: the IJCAI 2009 Challenge will serve as the foundation for more focused and commonly pursued challenges for AAAI 2010 and beyond. Please visit the Challenge website for more details:
http://mobile-manipulation-challenge.net/

The IJCAI 2009 Robotics site can be consulted for more information about the overall robotics events:

http://robotics.cs.brown.edu/ijcai09/

ORGANIZERS
Matei Ciocarlie <cmatei@cs.columbia.edu>
Radu Bogdan Rusu <rusu@cs.tum.edu>
Chad Jenkins <cjenkins@cs.brown.edu>
Monica Anderson <anderson@cs.ua.edu>

CMU talk: A Hierarchical Image Analysis for Extracting Parking Lot Structure from Aerial Images

A Hierarchical Image Analysis for Extracting Parking Lot Structure from Aerial Images

Young-Woo Seo
Ph.D Student
Robotics Institute
Carnegie Mellon University

Thursday, December 4th

Abstract
Road network information simplifies autonomous driving by providing strong priors on driving environments for planning and perception. It tells a robotic vehicle where it can drive and provides contextual cues that inform driving behavior. For example, this information lets the robotic vehicle know about upcoming intersections (e.g. that the intersection is a four-way stop and that the robot must conform to precedence rules) and other fixed rules of the road (e.g. speed limits). Currently, road network information about driving environments is manually generated using a combination of GPS survey and aerial imagery. These techniques for converting digital imagery into road network information are labor intensive, reducing the benefit provided by digital maps. To fully exploit the benefits of digital imagery, these processes should be automated. As a step toward this goal, we present a machine learning algorithm that extracts the structure of a parking lot from a given aerial image. We approach this problem hierarchically, from low-level image analysis through high-level structure inference. We test three different methods and their combinations. From the experimental results, our Markov Random Fields implementation outperforms the other methods in terms of false negative and false positive rates.
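As a toy illustration of high-level structure inference with a Markov Random Field (the talk's actual model and inference method are not specified here; this sketch uses iterated conditional modes on a chain of parking-spot hypotheses, with all names invented):

import numpy as np

def icm_labels(unary, pairwise_w=0.5, iters=10):
    """unary[i, l]: cost that site i takes label l (0 = not a spot,
    1 = spot), e.g. from low-level image analysis; neighboring sites
    prefer equal labels (Potts smoothness). Returns a label per site."""
    n = unary.shape[0]
    labels = unary.argmin(axis=1)
    for _ in range(iters):
        for i in range(n):
            costs = unary[i].astype(float)
            for j in (i - 1, i + 1):               # chain neighbors
                if 0 <= j < n:
                    costs += pairwise_w * (labels[j] != np.arange(2))
            labels[i] = costs.argmin()
    return labels

The pairwise term is what lets evenly spaced rows of weak detections reinforce one another, which is the kind of structural regularity a parking lot provides.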

Monday, December 01, 2008

Lab Meeting December 8th, 2008 (Jeff): Topological mapping, localization and navigation using image collections

Title: Topological mapping, localization and navigation using image collections

Authors: Friedrich Fraundorfer, Christopher Engels, and David Nister

Abstract:

In this paper we present a highly scalable vision-based localization and mapping method using image collections. A topological world representation is created online during robot exploration by adding images to a database and maintaining a link graph. An efficient image matching scheme allows real-time mapping and global localization. The compact image representation allows us to create image collections containing millions of images, which enables mapping of very large environments. A path planning method using graph search is proposed, and local geometric information is used to navigate in the topological map. Experiments show the good performance of the image matching for global localization and demonstrate path planning and navigation.

Link:
IROS2007
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04399123
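A compact sketch of the link-graph representation and the graph-search planning (the class and method names are ours; the paper's image matcher is abstracted into a list of scored matches):

import heapq

class TopoMap:
    """Topological map: nodes are images, edges link matching views."""
    def __init__(self):
        self.edges = {}                    # node -> {neighbor: cost}

    def add_image(self, img_id, matches):
        """matches: [(existing_node, cost), ...] from the image matcher."""
        self.edges.setdefault(img_id, {})
        for node, cost in matches:
            self.edges[img_id][node] = cost
            self.edges.setdefault(node, {})[img_id] = cost

    def shortest_path(self, src, dst):
        """Dijkstra over the link graph; the robot then follows the node
        sequence using local geometry between consecutive views."""
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nb, c in self.edges.get(node, {}).items():
                heapq.heappush(pq, (cost + c, nb, path + [nb]))
        return None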

Lab Meeting December 8th (Casey): The Painful Face - Pain Expression Recognition Using Active Appearance Models

Title: The Painful Face - Pain Expression Recognition Using Active Appearance Models (ICMI'07)
Authors: Ahmed Bilal Ashraf, Simon Lucey, Jeffrey F. Cohn, Tsuhan Chen, Zara Ambadar (CMU), Ken Prkachin, Patty Solomon, Barry-John Theobald

Abstract:
Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or not even possible, as in young children or the severely ill. Behavioral scientists have identified reliable and valid facial indicators of pain, but until now they have required manual measurement by highly skilled observers. We developed an approach that automatically recognizes acute pain. Adult patients with rotator cuff injury were video-recorded while a physiotherapist manipulated their affected and unaffected shoulder. Skilled observers rated pain expression from the video on a 5-point Likert-type scale. From these ratings, sequences were categorized as no-pain (rating of 0), pain (rating of 3, 4, or 5), and indeterminate (rating of 1 or 2). We explored machine learning approaches for pain/no-pain classification. Active Appearance Models (AAM) were used to decouple shape and appearance parameters from the digitized face images. Support vector machines (SVM) were used with several representations from the AAM. Using a leave-one-out procedure, we achieved an equal error rate of 19% (hit rate = 81%) using canonical appearance and shape features. These findings suggest the feasibility of automatic pain detection from video.
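A short scikit-learn sketch of the classification protocol (the AAM feature extraction is assumed to be done elsewhere; leaving out one subject per fold is our reading of the leave-one-out procedure, and all names are illustrative):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def pain_accuracy(features, labels, subjects):
    """features: one AAM-derived vector per sequence; labels: 0/1 for
    no-pain/pain; subjects: subject id per sequence, so each fold tests
    on a person unseen during training."""
    preds = np.empty_like(labels)
    for tr, te in LeaveOneGroupOut().split(features, labels, subjects):
        clf = SVC(kernel="linear").fit(features[tr], labels[tr])
        preds[te] = clf.predict(features[te])
    return float((preds == labels).mean())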