Thursday, May 31, 2007

Lab meeting 31 May (ZhenYu): Robust and Real-time Rotation Estimation of Compound Omnidirectional Sensor

Title: Robust and Real-time Rotation Estimation of Compound Omnidirectional Sensor

Author: Trung Ngo Thanh, Hajime Nagahara, Ryusuke Sagawa, Yasuhiro Mukaigawa, Masahiko Yachida, Yasushi Yagi

Abstract: Camera ego-motion consists of translation and rotation, of which rotation can be described simply by distant features. We present robust rotation estimation using distant features given by our compound omnidirectional sensor. Features are detected by a conventional feature detector, and distant features are then identified by checking whether they appear at infinity on the omnidirectional image of the compound sensor. The rotation matrix is estimated between consecutive video frames using RANSAC with only the distant features. Experiments in various environments show that our approach is robust and also gives reasonable accuracy in real time.
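
To make the estimation step concrete, here is a minimal sketch (not the paper's implementation) of rotation-only RANSAC over matched distant-feature bearings. It assumes unit direction vectors p and q in the two frames; the two-sample minimal set and the 2-degree inlier threshold are illustrative choices.

```python
import numpy as np

def fit_rotation(p, q):
    """Least-squares rotation R with q ~ R p for matched unit bearing
    vectors (Kabsch/SVD); p and q are (N, 3) arrays."""
    U, _, Vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections

def ransac_rotation(p, q, iters=200, thresh_deg=2.0, seed=0):
    """RANSAC over distant-feature bearings between consecutive frames."""
    rng = np.random.default_rng(seed)
    cos_t = np.cos(np.radians(thresh_deg))
    best = np.zeros(len(p), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)   # 2 bearings fix R
        R = fit_rotation(p[idx], q[idx])
        inliers = np.sum(q * (p @ R.T), axis=1) > cos_t   # angular residual
        if inliers.sum() > best.sum():
            best = inliers
    return fit_rotation(p[best], q[best]) if best.sum() >= 2 else np.eye(3)
```

Since distant features are (ideally) unaffected by translation, a pure rotation explains their apparent motion, which is why two bearing correspondences suffice per hypothesis.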

Tuesday, May 29, 2007

Developing Landmark-Based Pedestrian-Navigation Systems

Title: Developing Landmark-Based Pedestrian-Navigation Systems

Authors: Alexandra Millonig, Katja Schechtner

Abstract :
Pedestrian-navigation services enable people to retrieve precise instructions for reaching a specific location. However, the development of mobile spatial-information technologies for pedestrians is still in its early stages and faces several difficulties. As the spatial behavior of people on foot differs in many ways from that of drivers, common concepts for car-navigation services are not suitable for pedestrian navigation. In particular, the use of landmarks is vitally important in human navigation. This contribution points out the main requirements for pedestrian-navigation technologies and presents an approach for identifying pedestrian flows and incorporating landmark information into navigation services for pedestrians.

[Link]

Wednesday, May 23, 2007

Lab meeting 24 May (Stanley): Dynamic window based approach to mobile robot motion control in the presence of moving obstacles

paper link

Lab meeting 24 May (Stanley): Dynamic window based approach to mobile robot motion control in the presence of moving obstacles

Author:
Marija Seder and Ivan Petrović

From:
2007 IEEE International Conference on Robotics and Automation, ThA11.3

Abstract:
This paper presents a motion control method for mobile robots in partially unknown environments populated with moving obstacles. The proposed method integrates the focused D* search algorithm with the dynamic window local obstacle avoidance algorithm, with some adaptations that provide efficient avoidance of moving obstacles. The moving obstacles are modelled as moving cells in the occupancy grid map, and their motion is predicted by a procedure similar to the dynamic window approach. The collision points between the robot's predicted trajectory and the moving cells' predicted trajectories form new fictive obstacles in the environment, which should be avoided. The algorithms are implemented and verified on a Pioneer 3DX mobile robot equipped with a laser range finder.
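
As a rough illustration of the dynamic-window side of the method, the sketch below scores sampled (v, w) commands against predicted obstacle-cell trajectories. All limits, weights, and the collision radius are invented for the example, and obs_traj stands in for a prediction step the paper performs on the occupancy grid.

```python
import numpy as np

def predict_pose(x, y, th, v, w, t):
    """Constant-velocity circular-arc prediction of the robot pose."""
    if abs(w) < 1e-6:
        return x + v * t * np.cos(th), y + v * t * np.sin(th), th
    return (x + v / w * (np.sin(th + w * t) - np.sin(th)),
            y - v / w * (np.cos(th + w * t) - np.cos(th)),
            th + w * t)

def dwa_step(pose, vel, goal, obs_traj, dt=0.1, horizon=1.0,
             v_lim=(0.0, 1.0), w_lim=(-1.5, 1.5), a_v=0.5, a_w=2.0):
    """Pick the (v, w) inside the dynamic window that stays clear of predicted
    obstacle cells; obs_traj[k] is a non-empty (M, 2) array of obstacle cell
    centers expected at time (k + 1) * dt."""
    x, y, th = pose
    v0, w0 = vel
    steps = int(horizon / dt)
    best, best_cmd = -np.inf, (0.0, 0.0)          # stop if everything collides
    for v in np.linspace(max(v_lim[0], v0 - a_v * dt),
                         min(v_lim[1], v0 + a_v * dt), 7):
        for w in np.linspace(max(w_lim[0], w0 - a_w * dt),
                             min(w_lim[1], w0 + a_w * dt), 11):
            clearance = np.inf
            for k in range(steps):
                px, py, _ = predict_pose(x, y, th, v, w, (k + 1) * dt)
                d = np.hypot(obs_traj[k][:, 0] - px,
                             obs_traj[k][:, 1] - py).min()
                clearance = min(clearance, d)
            if clearance < 0.3:                   # predicted collision: discard
                continue
            px, py, _ = predict_pose(x, y, th, v, w, horizon)
            score = (-1.0 * np.hypot(goal[0] - px, goal[1] - py)
                     + 0.2 * min(clearance, 2.0) + 0.1 * v)
            if score > best:
                best, best_cmd = score, (v, w)
    return best_cmd
```

Presumably the goal-directed term in the paper comes from the focused D* path cost rather than the straight-line distance used here, which is how the global planner and the local avoidance cooperate.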

MIT talk: Selective Use of Multiple Sources of Robot Sensory Information

Selective Use of Multiple Sources of Robot Sensory Information

Speaker: Manuela M. Veloso, Carnegie Mellon University
Date: Thursday, May 24 2007

An autonomous robot needs to assess the state of the environment, make decisions towards achieving its goals, and execute the selected actions. In teams of autonomous robots, each robot has individually limited perception, but robots can communicate state information to each other, thereby creating multiple perceptual inputs. We present an algorithm for selectively merging a robot's own perceptual data with the data communicated by teammate robots. In general, robots face the challenge of combining multiple sources of sensory information. We illustrate different concrete instances of this problem and discuss a prioritized approach to effectively merging multi-modal information. The talk will be organized as an explanation of the algorithms underlying a series of robot videos, including robot soccer players, humanoid robot soccer commentators, and machine visual object recognition.
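
The abstract does not spell out the merging algorithm, so the following is only a guessed illustration of a prioritized selection rule of the kind described; BallEstimate, its confidence field, and the own_priority threshold are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BallEstimate:
    x: float
    y: float
    confidence: float
    source: str          # "own" or a teammate id

def merge_ball_estimates(own: Optional[BallEstimate],
                         shared: list,
                         own_priority: float = 0.2) -> Optional[BallEstimate]:
    """Prioritized merge: trust own perception whenever it is confident
    enough, otherwise fall back to the most confident teammate report."""
    if own is not None and own.confidence >= own_priority:
        return own
    candidates = [e for e in shared if e.confidence > 0.0]
    if own is not None:
        candidates.append(own)
    return max(candidates, key=lambda e: e.confidence, default=None)
```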

CMU VASC talk: Detecting Pedestrians by Learning Shapelet Features and Boosted Multiple Deformable Trees for Parsing Human Poses

Greg Mori
Simon Fraser University
Friday, May 25, 3:30pm

Detecting Pedestrians by Learning Shapelet Features and Boosted Multiple Deformable Trees for Parsing Human Poses

In this talk we present two pieces of work in the "Looking at People" domain. In the first part, we address the problem of detecting pedestrians in still images. We introduce an algorithm for learning shapelet features, a set of mid-level features. These features are focused on local regions of the image and are built from low-level gradient information that discriminates between pedestrian and non-pedestrian classes. Using AdaBoost, these shapelet features are created as a combination of oriented gradient responses. To train the final classifier, we use AdaBoost a second time to select a subset of our learned shapelets. By first focusing locally on smaller feature sets, our algorithm attempts to harvest more useful information than by examining all the low-level features together. We present quantitative results demonstrating the effectiveness of our algorithm. In particular, we obtain an error rate 14 percentage points lower (at $10^{-6}$ FPPW) than the previous state-of-the-art detector of Dalal and Triggs on the INRIA dataset.
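
The shapelet construction itself does not fit in a short sketch, but the boosting machinery both stages rely on can be illustrated. Below is a generic discrete AdaBoost over oriented-gradient cell features, with a crude median-threshold stump family; this is an assumed stand-in, not the authors' shapelet learner.

```python
import numpy as np

def oriented_gradient_features(img, n_bins=4):
    """Mean oriented-gradient energy per orientation bin in each 4x4 cell:
    a crude stand-in for the paper's low-level gradient responses."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    bins = np.minimum((np.mod(np.arctan2(gy, gx), np.pi)
                       / np.pi * n_bins).astype(int), n_bins - 1)
    feats = []
    for i in range(0, img.shape[0] - 3, 4):
        for j in range(0, img.shape[1] - 3, 4):
            for b in range(n_bins):
                cell = (bins[i:i+4, j:j+4] == b) * mag[i:i+4, j:j+4]
                feats.append(cell.mean())
    return np.array(feats)

def adaboost_stumps(X, y, rounds=20):
    """Discrete AdaBoost with one threshold stump per round; y in {-1, +1}.
    For brevity only the median of each feature is tried as a threshold."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            thr = np.median(X[:, j])
            for sign in (1, -1):
                err = w[sign * np.where(X[:, j] > thr, 1, -1) != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)      # re-weight toward mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * s * np.where(X[:, j] > t, 1, -1)
                       for a, j, t, s in ensemble))
```

A feature matrix would come from stacking oriented_gradient_features over pedestrian and non-pedestrian windows; the paper then boosts twice, first to assemble shapelets from such responses and again to select among the learned shapelets.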

In the second part, we present a method for estimating human pose in still images. Tree-structured models have been widely used for this problem. While such models allow efficient learning and inference, they fail to capture dependencies between body parts beyond kinematic constraints. In this paper, we consider the use of multiple tree models, rather than a single tree model, for human pose estimation. Our model can alleviate the limitations of a single tree-structured model by combining information across different tree models. The parameters of each individual tree model are trained via standard learning algorithms for a single tree-structured model. The different tree models are then combined in a discriminative fashion by a boosting procedure. We present experimental results showing the improvement of our model over previous approaches on a very challenging dataset.

MIT talk: Discovering Meaning in the Visual World

Speaker: Fei-Fei Li, Assistant Professor, Princeton University
Date: Wednesday, May 23 2007

When humans encounter images or videos of the visual world, our visual system is capable of extracting a rich plethora of information in as little as a single glance. A large portion of this information is related to semantic meanings, such as objects, scenes and purposeful motions. This ability still poses a large challenge to today's computer vision algorithms. In this talk, we will introduce algorithms that perform high-level visual recognition tasks such as object, scene, event and human motion categorization. Furthermore, we will attempt to achieve these recognition tasks under various learning conditions that mimic human learning, such as one-shot learning, unsupervised learning, and incremental learning. In object categorization, we will show two projects focusing on one-shot learning as well as incremental learning of objects. We will also show a recent study of true 3D object categorization. Beyond objects, we will introduce several studies on scene and event categorization. Finally, we will finish the talk with a study on unsupervised learning of human motion categories.

Tuesday, May 22, 2007

[Thesis proposal] Seeing the Future: Integrating Perception & Planning for Humanoid Autonomy

Author:
Philipp Michel
Robotics Institute
Carnegie Mellon University

Abstract:
Today's agile humanoid robots are testament to the impressive advances in the design of biped mechanisms and control in recent robotics history. The big challenge, however, remains to properly exploit the generality and flexibility of humanoid platforms during fully autonomous operation in obstacle-filled and dynamically changing environments. Increasing effort has thus been focused on the challenges arising for perception and motion planning, as well as the interplay between both, as foundations of humanoid autonomy.

This thesis will explore appropriate approaches to perception on humanoids and ways of coupling sensing and planning to generate navigation and manipulation strategies that can be executed reliably. We investigate perception methods employing on- and off-body sensors that are combined with an efficient motion planner to allow the humanoid robots HRP-2 and Honda's ASIMO to traverse complex and unpredictably changing environments. We examine how predictive information about the future state of the world, gathered from observation, enables navigation in the presence of challenging moving obstacles. We will show how programmable graphics hardware can be exploited to robustly address the difficulties of real-time sensing specifically encountered on a locomoting humanoid. Using the humanoid robot ARMAR-III as a motivating example, we argue furthermore that the reliability of autonomous operation can be improved by reasoning about perception during the planning process, rather than maintaining the traditional separation of the sensing and planning stages.

We review our motivation, current work and proposed research on the integration of perception and planning toward the eventual goal of allowing humanoids to operate autonomously and reliably in the real world.

News: Roboticist inspired by more than machines

Aside from its Robot Hall of Fame, CMU has unique outreach projects to engage mainstream America with robots. It has hosted RoboCup, a global soccer tournament played by robots, and most recently released DIY robot recipes that allow anyone to make robots from off-the-shelf parts through its Terk program. The people behind CMU's unique Robotics Institute have also become a hot topic for analysis since the release of a nonfiction book about them by Lee Gutkind.

On Tuesday, Matt Mason, the director of the Robotics Institute at CMU, announced the 2007 inductees into the Robot Hall of Fame. The honor, which is judged by a jury of both leading science and science fiction experts, was created in April 2003 to call attention to the contributions robots and their creators make to society.

Go to this link for the full article.

Sunday, May 20, 2007

CMU RI Thesis Proposal: Rhythmic Human-Robot Social Interaction

Marek Michalowski
Robotics Institute
Carnegie Mellon University

Abstract:

Social interaction is a dynamic process of coordinated activity between constantly adapting participants. Social scientists have discovered interactional synchrony (the temporal coordination of rhythmic communicative behaviors between interactors) as an important foundation or scaffold for establishing rapport, engagement, common ground, and emotional contagion between children and caregivers, between conversational partners, between teammates performing joint tasks, and so on. It is our goal to develop the capacity for robots to participate rhythmically in social interactions with people. The proposed thesis aims to:

  • create robotic technologies that perceive, represent, and behave according to social rhythms;
  • understand the effects of rhythmic synchrony on human-robot social interaction; and
  • explore the application of such systems in educational or therapeutic settings.
We believe that interpersonal coordination, and specifically rhythmic interactional synchrony, is necessary for the regulation of natural, comfortable, effective human-robot interaction. The challenge is to create a computational framework for sensing and behaving according to these principles and to demonstrate the importance of doing so correctly. We have selected play as the domain in which to develop and evaluate such a framework. Play-oriented interactions are a particularly appropriate context for this type of work, as the principles under study (rhythm and synchrony) are magnified or emphasized in playful interactions -- physical, repetitive, and exaggerated as they often are.

We are developing a system that can perceive rhythms in multiple modalities and synchronize its periodic dance-like movement to these rhythms while under high-level attentional control of a human teleoperator. The modeling, perception, and generation of rhythmic behaviors form the basis for rhythmic intelligence -- the ability to establish and maintain interpersonal coordination. Rhythmic intelligence includes the ability to achieve specific goals in an interaction through the selection of behaviors and to understand the dynamically changing nature of an interaction such that it is possible to select appropriate roles for different situations.
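
The proposal gives no equations, but the perceive-and-synchronize loop can be illustrated with a toy adaptive phase oscillator that entrains to detected beat events (claps, drum hits, dance steps); the gains and the event model here are assumptions.

```python
import numpy as np

def entrain(event_times, f0=1.0, k_phase=0.5, k_freq=0.05, dt=0.01, T=20.0):
    """Phase oscillator that locks onto perceived rhythmic events; periodic
    robot motion could then be timed to fire whenever the phase crosses zero."""
    omega, phase = 2 * np.pi * f0, 0.0          # initial guess at the tempo
    events = iter(sorted(event_times))
    nxt = next(events, None)
    ts = np.arange(0.0, T, dt)
    out = np.empty_like(ts)
    for i, t in enumerate(ts):
        phase += omega * dt
        if nxt is not None and t >= nxt:        # a beat was perceived now
            err = np.angle(np.exp(-1j * phase)) # signed offset from phase 0
            phase += k_phase * err              # pull the phase toward the beat
            omega += k_freq * err               # slowly adapt the tempo
            nxt = next(events, None)
        out[i] = np.mod(phase, 2 * np.pi)
    return ts, out

# Usage: a steady 1.25 Hz rhythm; the oscillator's tempo converges to it.
ts, ph = entrain(np.arange(0.4, 20.0, 0.8), f0=1.0)
```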

While we regard dance as a form of play that magnifies the rhythmic qualities of interaction (and is therefore useful for developing the relevant technologies), it is also a potentially important application for such technologies in its own right. We intend to identify movement-based therapeutic interventions that are amenable to implementation using our rhythmically intelligent technology, and to create dance activities with robots that implement games and techniques currently in use by dance therapists.

Link

Thursday, May 17, 2007

Lab Meeting 17 May (Nelson): Statistical moving object detection and tracking from a moving platform

Outline:

  • Introduction
  • Framework
  • Progress: segmentation uncertainty
    • Related work
      • Multi-scale segmentation
      • Sampling- and correlation-based range image matching (SCRIM)
    • Multi-scale segmentation with SCRIM
      • Segmentation uncertainty

Lab Meeting 17 May (Rabby / Wang Li): Exploiting Locality in Probabilistic Inference (Chap 1 to 2.4.5)

In my talk, I will present chapter 1 through section 2.4.5 of chapter 2 of "Exploiting Locality in Probabilistic Inference" (Mark A. Paskin's Ph.D. thesis, 2004), which cover factorized probability models, graphical models, and an introduction to junction trees.
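
As a small taste of the material, here is a factorized model over three binary variables where a marginal is computed by passing local messages, never forming the full joint table; the factor values are made up. Junction trees generalize exactly this kind of locality.

```python
import numpy as np

# Toy factorized model: p(A, B, C) proportional to f1(A, B) * f2(B, C).
f1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])            # f1[a, b]
f2 = np.array([[0.7, 0.3],
               [0.4, 0.6]])            # f2[b, c]

# Marginal p(C): eliminate A, then B (variable elimination).
m_b = f1.sum(axis=0)                   # message toward B: sum_a f1[a, b]
m_c = (m_b[:, None] * f2).sum(axis=0)  # sum_b m_b[b] * f2[b, c]
p_c = m_c / m_c.sum()
print(p_c)                             # [0.565 0.435]
```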

Wednesday, May 16, 2007

Lab Meeting 17 May (Any): Robust Monte Carlo Localization for Mobile Robots

Authors: Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert
From: Artificial Intelligence 128 (2001) 99-141

Abstract:
Mobile robot localization is the problem of determining a robot's pose from sensor data. This article presents a family of probabilistic localization algorithms known as Monte Carlo Localization (MCL). MCL algorithms represent a robot's belief by a set of weighted hypotheses (samples), which approximate the posterior under a common Bayesian formulation of the localization problem. Building on the basic MCL algorithm, this article develops a more robust algorithm called Mixture-MCL, which integrates two complementary ways of generating samples in the estimation. To apply this algorithm to mobile robots equipped with range finders, a kernel density tree is learned that permits fast sampling. Systematic empirical results illustrate the robustness and computational efficiency of the approach.
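
A minimal sketch of the basic MCL loop (sample from the motion model, weight by the measurement likelihood, resample) in a toy 1-D corridor with one range beacon; Mixture-MCL's dual sampler and the kernel density tree are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def mcl_step(particles, u, z, beacon, world=10.0, motion_std=0.1, sens_std=0.2):
    """One MCL cycle: diffuse particles with the odometry u, weight each by a
    Gaussian range likelihood to a known beacon, then resample."""
    particles = np.clip(particles + u + rng.normal(0, motion_std, particles.size),
                        0.0, world)
    w = np.exp(-0.5 * ((z - np.abs(beacon - particles)) / sens_std) ** 2) + 1e-300
    w /= w.sum()
    return particles[rng.choice(particles.size, particles.size, p=w)]

# Usage: the robot starts at x = 2 (unknown to the filter), moves +0.3 per step.
true_x, beacon = 2.0, 8.0
particles = rng.uniform(0.0, 10.0, 500)      # uniform prior over the corridor
for _ in range(10):
    true_x += 0.3
    z = abs(beacon - true_x) + rng.normal(0, 0.2)
    particles = mcl_step(particles, 0.3, z, beacon)
print(particles.mean())                      # concentrates near true_x = 5.0
```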

Friday, May 11, 2007

CMU RI Thesis Proposal: Robot Navigation for Social Tasks

Rachel Gockley
Robotics Institute
Carnegie Mellon University

Abstract:
This thesis addresses the problem of robots navigating in populated environments. Because traditional obstacle-avoidance algorithms do not differentiate between people and other objects in the environment, this thesis argues that such methods do not produce socially acceptable results. Rather, robots must detect people in the environment and obey the social conventions that people use when moving around each other, such as tending to the right side of a hallway and respecting the personal space of others. By moving in a human-like manner, a robot will cause its actions to be easily understood and appear predictable to people, which will facilitate its ability to interact with people and thus to complete its tasks.

We are interested in general spatial social tasks, such as navigating through a crowded hallway, as well as more cooperative tasks, such as accompanying a person side-by-side. We propose a novel framework for representing such tasks as a series of navigational constraints. In particular, we argue that each of the following must be considered at the navigational level: the task definition, societal conventions, and efficiency optimization. This thesis provides a theoretical basis for each of these categories. We propose to validate this conceptual framework by using it to design a simple navigational algorithm that will allow a robot to move through a populated environment while observing social conventions. We will then extend this algorithm within the framework to allow a robot to escort a person side-by-side. Finally, we will examine how human-like and appropriate the robot's behavior is in controlled user studies.

Link

[Talk] 2D Localization of Outdoor Mobile Robots Using 3D Laser Range Data

Title:
2D Localization of Outdoor Mobile Robots Using 3D Laser Range Data

Speaker:
Takeshi Takahashi, Master's Student, Robotics Institute

Date/Time/Location:
Thursday, May 10, 2007 11:00am, WeH 4615A @CMU

Abstract:
Robot localization in outdoor environments is a challenging problem because of unstructured terrain. 2D ladars that are not mounted horizontally have benefits for detecting obstacles, but they are not suitable for some localization algorithms used for indoor robots, which assume horizontally fixed ladars. The data obtained from tilted ladars are 3D, while those from non-tilted ladars are 2D. We present a 2D localization approach for these non-horizontally mounted ladars. The algorithm combines 2D particle filter localization with a 3D perception system. We localize the vehicle without GPS by comparing a local map with a known map. These maps are created by converting the 3D data into 2D data. Experimental results show that our approach is able to exploit the benefits of 3D data and 2D maps to efficiently overcome the problems of outdoor environments.
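
A rough sketch of the 3D-to-2D conversion the talk describes: keep points in an obstacle height band, collapse them into a local 2-D grid, and score a pose hypothesis by correlating the local grid with the corresponding patch of the known map. The height band, resolution, and score are assumptions, not the speaker's parameters.

```python
import numpy as np

def scan_to_2d_grid(points, res=0.2, size=100, z_min=0.3, z_max=1.5):
    """Collapse a 3-D point cloud (N x 3, sensor frame) into a 2-D occupancy
    grid by marking cells containing points in an obstacle height band."""
    grid = np.zeros((size, size), dtype=np.uint8)
    band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    ij = np.floor(band[:, :2] / res).astype(int) + size // 2
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 1], ij[ok, 0]] = 1               # row = y cell, col = x cell
    return grid

def map_match_score(local, known_patch):
    """Overlap score between the local grid and the known-map patch under one
    pose hypothesis; a particle filter evaluates this per particle."""
    return (local & known_patch).sum() / max(int(local.sum()), 1)
```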

Speaker Bio:
Takeshi is currently a second-year Master's student at the Robotics Institute, advised by Sanjiv Singh. His main research is localization for outdoor mobile robots. He received his B.Eng. degree in computer science from the National Defense Academy, Japan, in 2002.

'Guessing' robots navigate faster

Robots that use educated guesswork to build maps of their surroundings are being tested by US researchers. The approach could let them navigate more easily through complex environments such as unfamiliar buildings, the researchers claim.

......

The algorithm was initially tested using simulated robots, placed inside virtual mazes and office environments. The simulated robots were able to navigate successfully while exploring 33% less of their environment.

......

Lee and colleagues plan to extend the method to multiple robots. "You could have two robots building their own maps," he says, "which then share them when they meet." This will allow a robot to make predictions based on data collected by its teammate.

......
-----------------------------------------------------------------------------------------------

For full content, see here: Link

Wednesday, May 09, 2007

Lab Meeting 10 May 2007 ( Jim / fish60 ): [video link]

Sorry for the bad link......

See video here......

Lab Meeting 10 May 2007 ( Jim / fish60 ): Probabilistic Mobile Manipulation in Dynamic Environments, with Application to Opening Doors

This paper proposes a unified approach to two problems, mobile robot localization and object state estimation for manipulation, that dynamically models the objects to be manipulated while simultaneously localizing the robot. The approach applies in the common setting where only a low-resolution (10 cm) grid map of a building is available, together with a high-resolution (0.1 cm) model of the object to be manipulated.
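
A sketch of that unified state, assuming a door-opening task: each particle carries the robot pose plus the door's opening angle, and a single range measurement to the door's free edge weights the joint hypothesis. The hinge position, door width, and noise values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# One joint hypothesis per particle: robot pose (x, y, theta) and door angle phi.
N = 1000
particles = np.column_stack([
    rng.normal(5.0, 0.5, N),           # x, seeded from the coarse building map
    rng.normal(2.0, 0.5, N),           # y
    rng.normal(0.0, 0.2, N),           # heading
    rng.uniform(0.0, np.pi / 2, N),    # door angle, from the fine object model
])

HINGE, DOOR_W = np.array([6.0, 3.0]), 0.9   # hypothetical hinge pose and width

def door_edge(p):
    """Predicted position of the door's free edge under each hypothesis."""
    return HINGE + DOOR_W * np.column_stack([np.cos(p[:, 3]), np.sin(p[:, 3])])

# Weight each joint hypothesis by how well the predicted range from the robot
# to the door edge (a function of pose AND door angle) explains a measurement.
z = 1.2                                     # observed range to the door edge
pred = np.hypot(*(door_edge(particles) - particles[:, :2]).T)
w = np.exp(-0.5 * ((z - pred) / 0.05) ** 2) + 1e-300
w /= w.sum()
```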

Link

Also see video here

Lab Meeting May 10th 2007 (Jeff): Multi-Robot Marginal-SLAM

Title: Multi-Robot Marginal-SLAM

Authors: Ruben Martinez-Cantin, Jose A. Castellanos, and Nando de Freitas

Abstract:
This paper has two goals. First, it expands the presentation of the marginal particle filter for SLAM proposed recently in [Martinez-Cantin et al., 2006]. In particular, it presents detailed pseudo-code to enable practitioners to implement the algorithm easily. Second, it proposes an extension to the multi-robot setting. In the marginal representation, the robots share a common map, and their locations are independent given this map. The robots' relative locations with respect to each other are assumed to be unknown. The multi-robot Marginal-SLAM algorithm estimates these coordinate transformations between the robots to produce a common global map.
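
Inside the filter this happens jointly, but the core geometric step, recovering a rigid transform between two robots' map frames from matched landmark estimates, can be sketched on its own (2-D least squares / Procrustes, on synthetic data):

```python
import numpy as np

def align_maps(a, b):
    """Rigid 2-D transform (R, t) with b_i ~ R @ a_i + t, estimated from
    matched landmark positions in robot A's and robot B's map frames."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T          # guard against reflections
    return R, cb - R @ ca

# Usage: recover a 30-degree rotation plus translation between map frames.
rng = np.random.default_rng(3)
a = rng.uniform(-5, 5, (20, 2))                 # landmarks in A's frame
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th)],
                   [np.sin(th),  np.cos(th)]])
b = a @ R_true.T + np.array([2.0, -1.0]) + rng.normal(0, 0.05, (20, 2))
R, t = align_maps(a, b)                         # R ~ R_true, t ~ (2, -1)
```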

[Martinez-Cantin et al., 2006]:
R. Martinez-Cantin, N. de Freitas, and J.A. Castellanos
Marginal-SLAM: A Convergent Particle Method for Simultaneous Robot Localization and Mapping, 2006.

Link:
http://www.cs.ubc.ca/~nando/papers/marginalslamijcai.pdf

Tuesday, May 08, 2007

[URCS Seminars & Talks] Object and Scene Recognition with Bags of Features and Spatial Pyramids

Speaker: Svetlana Lazebnik, Post-doc, U. Illinois, Urbana-Champaign

Title: Object and Scene Recognition with Bags of Features and Spatial Pyramids

Abstract:

Bag-of-features models, which represent images by distributions of the salient local features contained in them, are among the most robust and powerful image descriptions currently used for object and scene recognition. In this talk, I will present fundamental techniques for designing effective bag-of-features models and their extensions by constructing discriminative visual codebooks and incorporating spatial relationships between local features.

The most basic operation in building a bag-of-features model is quantizing the local features, so that their distribution can be represented as a histogram of discrete "visual codewords." I will introduce an information-theoretic approach to designing visual codebooks by minimizing the loss of discriminative information incurred when a continuous high-dimensional feature vector is mapped to a discrete codeword index. I will present experiments demonstrating the advantage of these codebooks for image classification, as well as an application of the same information-theoretic framework to image segmentation.

In the second part of the talk, I will describe an extension of a bag of features into a spatial pyramid, or a collection of feature histograms computed at different levels of a hierarchical spatial decomposition of an image. The resulting method is simple and efficient, and it achieves state-of-the-art performance on difficult object and scene recognition tasks. It has already been adopted as a baseline for datasets containing hundreds of object categories, and has given rise to a winning recognition system in the international PASCAL Visual Object Classes Challenge.
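
A minimal sketch of the spatial pyramid representation, assuming features have already been quantized to codewords and positions normalized to the unit square; the full method also down-weights coarser levels, which is omitted here.

```python
import numpy as np

def spatial_pyramid(codes, xy, K, levels=3):
    """Stack codeword histograms over a hierarchical grid: level l splits the
    unit square into 2^l x 2^l cells (codes: N ints in [0, K); xy in [0, 1)^2)."""
    hists = []
    for l in range(levels):
        g = 2 ** l
        cell = (np.floor(xy * g).astype(int) * [1, g]).sum(axis=1)  # cell index
        for c in range(g * g):
            hists.append(np.bincount(codes[cell == c], minlength=K))
    return np.concatenate(hists).astype(float)

def pyramid_match(h1, h2):
    """Histogram intersection on the stacked pyramid histograms."""
    return np.minimum(h1, h2).sum()

# Usage: 200 random codewords and positions per image, a 50-word codebook.
rng = np.random.default_rng(2)
h1 = spatial_pyramid(rng.integers(0, 50, 200), rng.random((200, 2)), K=50)
h2 = spatial_pyramid(rng.integers(0, 50, 200), rng.random((200, 2)), K=50)
print(pyramid_match(h1, h2))
```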

Monday, May 07, 2007

CMU ML talk: Probabilistic Inference in Distributed Systems

Probabilistic Inference in Distributed Systems

Speaker: Stanislav Funiak, CMU
http://www.cs.cmu.edu/~sfuniak

Abstract: Probabilistic inference problems arise naturally in distributed systems. For example, robots in a team may combine local laser range scans to build a global map of the environment; sensors in an emergency response deployment may collect local temperature measurements to anticipate the spread of fire. By distributing the computation across several devices, sensor networks offer a fundamentally different computational medium: one where the nodes need to communicate with each other in order to exchange information. This medium imposes new requirements on probabilistic inference: for example, even if some of the nodes fail and the information they carry is lost, the rest of the nodes should still be able to recover a principled approximation of the distribution.

In my talk, I will discuss fundamental aspects of probabilistic inference in distributed systems and outline algorithms that perform robustly in this more stringent setting. One key idea is to represent the prior information as a set of marginals that are carried redundantly by the nodes of the network; if a node fails, the remaining nodes can still compute a KL projection of the true distribution. I will consider both the static and the dynamic settings, and show results on applications from real sensor network deployments.
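
For intuition about the marginal representation: on a discrete toy example, the KL projection of a joint distribution onto fully factored distributions is exactly the product of its marginals, which is what redundantly stored per-node marginals let the surviving nodes reconstruct.

```python
import numpy as np

p = np.array([[0.30, 0.10],
              [0.05, 0.55]])           # joint over two binary variables
px, py = p.sum(axis=1), p.sum(axis=0)  # marginals carried by (redundant) nodes
q = np.outer(px, py)                   # minimizer of KL(p || q) over products
kl = (p * np.log(p / q)).sum()         # information lost by the projection
print(q, kl)
```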

Thursday, May 03, 2007

Lab Meeting 3 May 2007 (Atwood): detecting raised hands

Lab Meeting 3 May 2007 (Leo): MCMC Data Association and Sparse Factorization Updating for Real Time Multitarget Tracking with Merged and Multiple ...

MCMC Data Association and Sparse Factorization Updating for Real Time Multitarget Tracking with Merged and Multiple Measurements

Authors: Zia Khan, Tucker Balch, Frank Dellaert

Abstract:
In several multitarget tracking applications, a target may return more than one measurement, and interacting targets may return merged measurements. Existing algorithms for tracking and data association, initially applied to radar tracking, do not adequately address these types of measurements. Here, we introduce a probabilistic model for interacting targets that addresses both types of measurements simultaneously. We provide an algorithm for approximate inference in this model using a Markov chain Monte Carlo (MCMC) based auxiliary variable particle filter. We Rao-Blackwellize the Markov chain to eliminate sampling over the continuous state space of the targets. A major contribution of this work is the use of sparse least squares updating and downdating techniques, which significantly reduce the computational cost per iteration of the Markov chain. When combined with a simple heuristic, they also enable the algorithm to correctly focus computation on interacting targets. We include experimental results on a challenging simulation sequence. We test the accuracy of the algorithm using two sensor modalities, video and laser range data, and show that the algorithm exhibits real-time performance on a conventional PC.
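
The paper's filter is far richer (auxiliary variables, Rao-Blackwellization, sparse factorization updates), but the MCMC-over-associations core can be sketched with single-assignment Metropolis moves on a toy 1-D problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_data_association(Z, targets, iters=2000, sigma=0.5):
    """Metropolis-Hastings over the discrete association vector a, where a[i]
    assigns measurement Z[i] to a target; Gaussian measurement likelihood."""
    def loglik(a):
        return -0.5 * np.sum((Z - targets[a]) ** 2) / sigma ** 2
    a = rng.integers(0, len(targets), size=len(Z))
    ll = loglik(a)
    counts = np.zeros((len(Z), len(targets)))
    for _ in range(iters):
        prop = a.copy()
        prop[rng.integers(len(Z))] = rng.integers(len(targets))  # single move
        pll = loglik(prop)
        if np.log(rng.random()) < pll - ll:       # symmetric proposal
            a, ll = prop, pll
        counts[np.arange(len(Z)), a] += 1
    return counts / iters            # approximate association probabilities

# Usage: two nearby targets and three measurements (one possibly spurious).
print(mh_data_association(np.array([-0.1, 0.45, 1.1]), np.array([0.0, 1.0])))
```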

Link

Wednesday, May 02, 2007

Lab Meeting 1 May 2007 (Vincent): Robust face recognition using temporal information

In this talk, I will present the following topics :

1. Some proposed face recognition algorithms using image sets (instead of single images).

2. Robust face recognition using temporal information. I will propose a framework for recognizing human faces using a probabilistic approach.

CMU talk: STAIR: The STanford Artificial Intelligence Robot project

STAIR: The STanford Artificial Intelligence Robot project
Andrew Ng, Stanford University

This talk will describe the STAIR home assistant robot project, and several satellite projects that led to key STAIR components such as (i) robotic grasping of previously unknown objects, (ii) depth perception from a single still image, and (iii) apprenticeship learning for control.

Since its birth in 1956, the AI dream has been to build systems that exhibit broad-spectrum competence and intelligence. STAIR revisits this dream, and seeks to integrate onto a single robot platform tools drawn from all areas of AI, including learning, vision, navigation, manipulation, planning, and speech/NLP. This is in distinct contrast to, and also represents an attempt to reverse, the 30-year-old trend of working on fragmented AI sub-fields. STAIR's goal is a useful home assistant robot, and over the long term, we envision a single robot that can perform tasks such as tidying up a room, using a dishwasher, fetching and delivering items, and preparing meals.

STAIR is still a young project, and in this talk I'll report on our progress so far on having STAIR fetch items from around the office. Specifically, I'll describe: (i) learning to grasp previously unseen objects (including its application to unloading items from a dishwasher); (ii) probabilistic multi-resolution maps, which enable the robot to open/use doors; (iii) a robotic foveal+peripheral vision system for object recognition and tracking. I'll also outline some of the main technical ideas---such as learning 3-d reconstructions from a single still image, and reinforcement learning algorithms for robotic control---that played key roles in enabling these STAIR components. In describing these satellite projects, I'll also show our latest results on aerobatic helicopter flight control and quadruped obstacle negotiation.