Thursday, June 25, 2009

News: Language may be key to theory of mind

How blind and deaf people approach a cognitive test regarded as a milestone in human development has provided clues to how we deduce what others are thinking.

Understanding another person's perspective, and realising that it can differ from our own, is known as theory of mind. It underpins empathy, communication and the ability to deceive – all of which we take for granted. Although our theory of mind is more developed than it is in other animals, we don't acquire it until around age four, and how it develops is a mystery.

See the full article.


Sunday, June 21, 2009

Lab Meeting June 21st, 2009 (swem): Improved Inverse-Depth Parameterization for Monocular Simultaneous Localization and Mapping

Title: Improved Inverse-Depth Parameterization for Monocular Simultaneous Localization and Mapping
In: ICRA2009

Author: E. Imre, M.-O. Berger, N. Noury

Abstract:

Inverse-depth parameterization can successfully deal with the feature initialization problem in monocular simultaneous localization and mapping applications. However, it is redundant, and when multiple landmarks are initialized from the same image, it fails to enforce the “common origin” constraint. The authors propose two new variants that address both of these issues. The experimental results indicate that the proposed approach achieves better performance at a lower computational cost.
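
To make the parameterization concrete, here is a minimal sketch (not from the paper) of the standard six-parameter inverse-depth representation of Civera et al. that this work builds on: a landmark is stored as the camera position at first observation, the azimuth/elevation of the observation ray, and the inverse depth along it. The redundancy mentioned in the abstract comes from repeating that anchor position for every landmark initialized from the same image, which is what the proposed variants address. Function and variable names below are illustrative.

```python
import numpy as np

def inverse_depth_to_point(y):
    """Convert a standard 6-parameter inverse-depth landmark
    y = (x0, y0, z0, theta, phi, rho) to a Euclidean 3D point.

    (x0, y0, z0) is the camera position at first observation,
    (theta, phi) the azimuth/elevation of the observation ray,
    and rho the inverse depth along that ray.
    """
    x0, y0, z0, theta, phi, rho = y
    # Unit direction of the observation ray (Civera et al. convention).
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho

# Example: a landmark first seen from the origin, 5 m away along the optical axis.
y = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0 / 5.0])
print(inverse_depth_to_point(y))   # -> [0. 0. 5.]
```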

[ Link ]

Saturday, June 20, 2009

Intelligence Seminar: Action Perception, June 23, 2009

Intelligence Seminar

June 23, 2009 (note special place)
3:30 pm
NSH 1507
Host: Jaime Carbonell
For meetings, contact Michelle Pagnani (pagnani@cs.cmu.edu).

Action Perception

Robert Thibadeau
Seagate Research

Abstract:
The human perception of actions has barely been studied, yet its study promises a wealth of interesting hypotheses about cognitive processing. Action perception is distinct from motion perception in that the direct perception of causation is central to the percept. One such hypothesis is that what we know as thought and reasoning is where we perceive and plan actions. Another is that what we know as logic and mathematics derives from our direct perceptions of causation in the actions we perceive and think about.

I will present a study that attempts to estimate the scale of computation needed to implement a system for visually perceiving meaningful actions and non-trivially producing an English narration of what is being visually perceived, as well as answering questions about it. The scale of the computation for learning could easily reach exaflops over distributed datasets (Hadoop or MapReduce style).

This study is partly based on my own work (Thibadeau, 1986), Doug Rohde's 2002 dissertation (http://tedlab.mit.edu:16080/~dr/Thesis/), and Simon and Rescher (1966; see the summary below). The study includes an explicit proposal for extending Rohde's work to multimodal, multisensory processing.

(Simon and Rescher 1966; from the Wikipedia article on Causality)
Derivation theories

Nobel laureate Herbert Simon and philosopher Nicholas Rescher claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) onto another (the effect). So, given a system of equations and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties; most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics.
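
As a toy illustration of this causal-ordering idea (my own sketch, not Simon and Rescher's formalism): if each equation determines one variable from previously determined ones, the serial order in which values become fixed can be recovered with a topological sort. The variable names here are invented for the example.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy structural system: each variable is determined by an equation whose
# right-hand side mentions the variables listed as its "parents".
equations = {
    "rainfall": [],             # exogenous: its value is chosen "arbitrarily"
    "yield":    ["rainfall"],   # determined once rainfall is known
    "price":    ["yield"],      # determined once yield is known
}

# The serial order in which values become uniquely determined is the
# asymmetric ordering described above.
order = list(TopologicalSorter(equations).static_order())
print(order)   # -> ['rainfall', 'yield', 'price']
```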

Sunday, June 14, 2009

RSS 2009 paper: Non-parametric Learning To Aid Path Planning Over Slopes

Title: Non-parametric Learning To Aid Path Planning Over Slopes (RSS 2009)

Sisir Karumanchi, Thomas Allen, Tim Bailey and Steve Scheding
ARC Centre of Excellence For Autonomous Systems (CAS),
Australian Centre For Field Robotics (ACFR),
The University of Sydney,
NSW 2006, Australia.


Abstract—This paper addresses the problem of closing the loop from perception to action selection for unmanned ground vehicles, with a focus on navigating slopes. A new non-parametric learning technique is presented to generate a mobility representation where maximum feasible speed is used as a criterion to classify the world. The inputs to the algorithm are terrain gradients derived from an elevation map and past observations of wheel slip. It is argued that such a representation can aid in path planning with improved selection of vehicle heading and operating velocity in off-road slopes. Results of mobility map generation and its benefits to path planning are shown.
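
As a rough illustration of the kind of non-parametric mobility mapping described above (a sketch under my own assumptions, not the authors' algorithm), one can regress maximum feasible speed against terrain gradient from past slip observations with a simple kernel regressor. All data, names, and the choice of a Nadaraya-Watson estimator below are mine.

```python
import numpy as np

def predict_max_speed(query_gradients, train_gradients, observed_max_speeds,
                      bandwidth=0.05):
    """Nadaraya-Watson kernel regression: estimate the maximum feasible
    speed for each query terrain gradient from past (gradient, speed)
    observations. Purely illustrative; not the paper's formulation.
    """
    q = np.atleast_1d(query_gradients)[:, None]     # (Q, 1)
    x = np.atleast_1d(train_gradients)[None, :]     # (1, N)
    w = np.exp(-0.5 * ((q - x) / bandwidth) ** 2)   # Gaussian kernel weights
    return (w @ observed_max_speeds) / w.sum(axis=1)

# Hypothetical data: steeper gradients were traversable only at lower speeds.
grads  = np.array([0.00, 0.05, 0.10, 0.20, 0.30])   # terrain gradient (rise/run)
speeds = np.array([2.0,  1.8,  1.5,  0.9,  0.3])    # max feasible speed (m/s)
print(predict_max_speed([0.15, 0.25], grads, speeds))
```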

Link

Saturday, June 13, 2009

RSS 2009 Paper: Generalized-ICP

Title: Generalized-ICP (RSS 2009)

Authors: Aleksandr Segal, Dirk Haehnel, Sebastian Thrun

Abstract:
In this paper we combine the Iterative Closest Point ('ICP') and 'point-to-plane ICP' algorithms into a single probabilistic framework. We then use this framework to model locally planar surface structure from both scans instead of just the "model" scan as is typically done with point-to-plane. This can be thought of as 'plane-to-plane'. The new approach is tested with both simulated and real-world data and is shown to outperform both standard ICP and point-to-plane. Furthermore, the new approach is shown to be more robust to incorrect correspondences, and thus makes it easier to tune the maximum match distance parameter. In addition to the demonstrated performance improvement, the proposed framework allows for more expressive probabilistic models to be incorporated into the ICP framework. While maintaining the speed and simplicity of ICP, Generalized-ICP allows the addition of outlier terms, measurement noise, and other probabilistic techniques to increase robustness.
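
A minimal sketch of the plane-to-plane cost at the heart of Generalized-ICP, for a fixed set of correspondences: each residual is weighted by the combined, rotated per-point covariances. Correspondence search, covariance estimation from local neighborhoods, and the nonlinear optimization over the transform are omitted, and the helper names are mine.

```python
import numpy as np

def gicp_cost(src_pts, dst_pts, src_covs, dst_covs, R, t):
    """Generalized-ICP ('plane-to-plane') cost for fixed correspondences:
    sum_i d_i^T (C_i^B + R C_i^A R^T)^{-1} d_i, with d_i = b_i - (R a_i + t).
    """
    total = 0.0
    for a, b, Ca, Cb in zip(src_pts, dst_pts, src_covs, dst_covs):
        d = b - (R @ a + t)
        M = np.linalg.inv(Cb + R @ Ca @ R.T)
        total += d @ M @ d
    return total

def surface_covariance(normal, epsilon=1e-3):
    """Per-point covariance shaped by the local surface: a small eigenvalue
    (epsilon) along the normal and unit eigenvalues in the tangent plane, so
    deviations off the local plane are penalized far more than sliding along it.
    """
    n = normal / np.linalg.norm(normal)
    return epsilon * np.outer(n, n) + (np.eye(3) - np.outer(n, n))
```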

RSS 2009 Paper: Large Scale Graph-based SLAM using Aerial Images as Prior Information

Title: Large Scale Graph-based SLAM using Aerial Images as Prior Information (RSS 2009)

Authors: Rainer Kümmerle, Bastian Steder, Christian Dornhege, Alexander Kleiner, Giorgio Grisetti, Wolfram Burgard

Abstract:
To effectively navigate in their environments and accurately reach their target locations, mobile robots require a globally consistent map of the environment. The problem of learning a map with a mobile robot has been intensively studied in the past and is usually referred to as the simultaneous localization and mapping (SLAM) problem. However, existing solutions to the SLAM problem typically rely on loop-closures to obtain global consistency and do not exploit prior information even if it is available. In this paper, we present a novel SLAM approach that achieves global consistency by utilizing publicly accessible aerial photographs as prior information. Our approach inserts correspondences found between three-dimensional laser range scans and the aerial image as constraints into a graph-based formulation of the SLAM problem. We evaluate our algorithm based on large real-world datasets acquired in a mixed in- and outdoor environment by comparing the global accuracy with state-of-the-art SLAM approaches and GPS. The experimental results demonstrate that the maps acquired with our method show increased global consistency.
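
As a rough sketch of the kind of objective described above (not the authors' implementation), a pose graph can combine relative constraints from odometry or scan matching with absolute constraints from scan-to-aerial-image matches. The toy example below uses 2D positions only, ignores orientations and information matrices, and all data is made up.

```python
import numpy as np

def pose_graph_error(poses, relative_constraints, prior_constraints):
    """Sum-of-squared-errors for a toy 2D pose graph.

    poses:                array of (x, y) robot positions.
    relative_constraints: (i, j, offset) from odometry/scan matching:
                          poses[j] - poses[i] should equal offset.
    prior_constraints:    (i, global_xy) from matching a laser scan against
                          the geo-referenced aerial image: poses[i] should
                          equal global_xy.
    """
    err = 0.0
    for i, j, offset in relative_constraints:
        r = (poses[j] - poses[i]) - offset
        err += r @ r
    for i, global_xy in prior_constraints:
        r = poses[i] - global_xy
        err += r @ r
    return err

# Toy data: three poses, two relative constraints, one aerial-image prior.
poses = np.array([[0.0, 0.0], [1.1, 0.0], [2.0, 0.1]])
rel   = [(0, 1, np.array([1.0, 0.0])), (1, 2, np.array([1.0, 0.0]))]
prior = [(2, np.array([2.0, 0.0]))]
print(pose_graph_error(poses, rel, prior))
# A full system would minimize this (with SE(2) poses and proper information
# matrices) using Gauss-Newton or a standard graph-SLAM back end.
```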

Lab Meeting June 15th, 2009 (Andi): Progress Report

I will talk about my recent progress in 3D Mapping and Localization using one 2D LIDAR.

Using the Distribution Theory to Simultaneously Calibrate the Sensors of a Mobile Robot

Title: Using the Distribution Theory to Simultaneously Calibrate the Sensors of a Mobile Robot

Author: Agostino Martinelli

Abstract:
This paper introduces a simple and very efficient strategy to extrinsically calibrate a bearing sensor (e.g. a camera) mounted on a mobile robot and simultaneously estimate the parameters describing the systematic error of the robot's odometry system. The paper provides two contributions. The first is the analytical computation deriving the part of the system that is observable when the robot follows circular trajectories. This computation consists of performing a local decomposition of the system based on the theory of distributions; in this respect, the paper represents the first application of distribution theory in the framework of mobile robotics. Then, starting from this decomposition, a method to efficiently estimate the parameters describing both the extrinsic bearing-sensor calibration and the odometry calibration is derived (second contribution). Simulations and experiments with the e-Puck robot, equipped with encoder sensors and a camera, validate the approach.
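
As a hedged sketch of the quantities involved (not the paper's derivation or notation), the snippet below simulates differential-drive odometry with multiplicative systematic-error factors and the bearing of a landmark as seen from the robot; driving circles while logging such bearings produces the kind of data whose observability the paper analyzes. The extrinsic sensor offset being calibrated is omitted, and the parameter names are illustrative.

```python
import numpy as np

def odometry_step(pose, d_left, d_right, baseline,
                  err_left=1.0, err_right=1.0, err_base=1.0):
    """Differential-drive odometry update with multiplicative systematic-error
    factors on the wheel displacements and the baseline -- the kind of
    parameters such a calibration procedure estimates."""
    x, y, th = pose
    dl, dr = err_left * d_left, err_right * d_right
    b = err_base * baseline
    ds, dth = 0.5 * (dl + dr), (dr - dl) / b
    return np.array([x + ds * np.cos(th + 0.5 * dth),
                     y + ds * np.sin(th + 0.5 * dth),
                     th + dth])

def bearing_to_landmark(pose, landmark):
    """Bearing of a landmark measured from the robot frame."""
    x, y, th = pose
    return np.arctan2(landmark[1] - y, landmark[0] - x) - th

# Driving circles (constant d_left != d_right) while logging bearings to a
# fixed landmark generates the circular-trajectory data discussed above.
```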

Links:
RSS 2009 proceedings: http://www.roboticsproceedings.org/rss05/p11.pdf
INRIA technical report RR-6796: http://hal.inria.fr/docs/00/35/30/79/PDF/RR-6796.pdf

Sunday, June 07, 2009

Lab Meeting June 8th, 2009 (ZhenYu): Vertical Line Matching for Omnidirectional Stereovision Images

Title: Vertical Line Matching for Omnidirectional Stereovision Images (ICRA2009)

Authors: Guillaume Caron and El Mustapha Mouaddib

Abstract: We are investigating mobile robot indoor localization and environment mapping using an omnidirectional stereovision sensor. It uses four parabolic mirrors and an orthographic camera, giving four images of the same scene. Only two mirrors are strictly needed; using four provides redundancy. We propose to exploit the images of vertical lines. This paper presents a new method to match these lines across the four images. Unlike existing approaches, the method is designed around the existence of the four sub-images in order to exploit this redundancy. This leads to an original algorithm combining the matching and pose estimation of vertical lines in the 3D environment. Experimental results are presented to validate the approach.
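
To illustrate why matched vertical lines are useful (a generic stereo sketch, not the paper's algorithm, which couples matching and pose estimation more tightly): once a vertical line is matched across sub-images, its ground-plane position follows from intersecting the bearing rays of two mirror viewpoints. Viewpoint positions and angles below are invented.

```python
import numpy as np

def intersect_bearings(c1, az1, c2, az2):
    """Intersect two ground-plane bearing rays (origin, azimuth) to locate a
    vertical line; returns None for (near-)parallel rays."""
    d1 = np.array([np.cos(az1), np.sin(az1)])
    d2 = np.array([np.cos(az2), np.sin(az2)])
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, np.asarray(c2, float) - np.asarray(c1, float))
    return np.asarray(c1, float) + t * d1

# Two mirror viewpoints 10 cm apart, both observing the same vertical line:
print(intersect_bearings([0.0, 0.0], np.deg2rad(45.0),
                         [0.1, 0.0], np.deg2rad(50.0)))
```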

[Link]

Saturday, June 06, 2009

Lab Meeting June 8th, 2009 (Any): CRF-Filters

Paper title: CRF-Filters: Discriminative Particle Filters for Sequential State Estimation

Authors: Benson Limketkai, Dieter Fox and Lin Liao
Appears in: ICRA 2007

Abstract: Particle filters have been applied with great success to various state estimation problems in robotics. However, particle filters often require extensive parameter tweaking in order to work well in practice. This is based on two observations. First, particle filters typically rely on independence assumptions such as “the beams in a laser scan are independent given the robot’s location in a map”. Second, even when the noise parameters of the dynamical system are perfectly known, the sample-based approximation can result in poor filter performance. In this paper we introduce CRF-Filters, a novel variant of particle filtering for sequential state estimation. CRF-Filters are based on conditional random fields, which are discriminative models that can handle arbitrary dependencies between observations. We show how to learn the parameters of CRF-Filters based on labeled training data. Experiments using a robot equipped with a laser range-finder demonstrate that our technique is able to learn parameters of the robot’s motion and sensor models that result in good localization performance, without the need for additional parameter tweaking.
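
As a minimal sketch of the idea (not the authors' code), a CRF-Filter replaces the generative likelihood used to weight particles with a learned discriminative, log-linear potential over features of the state and observation. `feature_fn` and `learned_weights` below are placeholders for the learned model, and the toy usage data is invented.

```python
import numpy as np

def crf_style_weights(particles, observation, feature_fn, learned_weights):
    """Reweight particles with an unnormalized log-linear potential
    exp(w . f(x, z)) -- the discriminative analogue of the generative
    likelihood p(z | x) in a standard particle filter."""
    log_w = np.array([learned_weights @ feature_fn(x, observation)
                      for x in particles])
    log_w -= log_w.max()          # numerical stability before exponentiation
    w = np.exp(log_w)
    return w / w.sum()

def resample(particles, weights, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx]

# Toy usage with a hypothetical 1-D state and a two-feature model:
particles = [0.0, 1.0, 2.0]
feature_fn = lambda x, z: np.array([-(x - z) ** 2, 1.0])  # agreement + bias
print(crf_style_weights(particles, 1.2, feature_fn, np.array([1.0, 0.0])))
```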

Full text: PDF