Wednesday, January 31, 2007

Lab Meeting 1 Feb 2007 (Any): Dynamic Maps for Long-Term Operation of Mobile Service Robots

Title: Dynamic Maps for Long-Term Operation of Mobile Service Robots
Authors: Peter Biber, Tom Duckett
Conference: Robotics: Science and Systems 2005 (RSS'05)
Local Copy: [PDF]

Abstract:
This paper introduces a dynamic map for mobile robots that adapts continuously over time. It resolves the stability-plasticity dilemma (the trade-off between adaptation to new patterns and preservation of old patterns) by representing the environment over multiple timescales simultaneously (5 in our experiments). A sample-based representation is proposed, where older memories fade at different rates depending on the timescale. Robust statistics are used to interpret the samples. It is shown that this approach can track both stationary and non-stationary elements of the environment, covering the full spectrum of variations from moving objects to structural changes. The method was evaluated in a five-week experiment in a real dynamic environment. Experimental results show that the resulting map is stable, improves its quality over time, and adapts to changes.
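
The multiple-timescale idea is easy to prototype. Below is a hypothetical per-cell sketch (not the paper's sample-based representation; the half-lives and the median combination are our own choices): each timescale holds an exponentially forgotten occupancy estimate, so short timescales adapt quickly while long ones preserve stable structure.

```python
class MultiTimescaleCell:
    def __init__(self, half_lives=(1, 5, 25, 125, 625)):
        # per-update decay factor for each timescale: 0.5 ** (1 / half_life)
        self.decays = [0.5 ** (1.0 / h) for h in half_lives]
        self.values = [0.5] * len(half_lives)   # occupancy estimates in [0, 1]

    def update(self, observation):
        """Blend a new occupancy observation (0 = free, 1 = occupied)
        into every timescale at its own rate."""
        self.values = [d * v + (1.0 - d) * observation
                       for d, v in zip(self.decays, self.values)]

    def estimate(self):
        # a crude robust combination: the median across timescales
        return sorted(self.values)[len(self.values) // 2]

cell = MultiTimescaleCell()
for _ in range(50):      # long period of "occupied" observations
    cell.update(1.0)
for _ in range(3):       # brief "free" observations, e.g. a passing mover
    cell.update(0.0)
```

After the brief "free" phase, the fastest timescale has essentially forgotten the obstacle while the slow ones still remember it, and the median keeps the cell leaning toward occupied.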

Friday, January 26, 2007

News: 'Sniffer-bot' algorithm helps robots seek scents

19:00 24 January 2007
NewScientist.com news service
Mason Inman

Moths are renowned for their ability to pick up a faint whiff of pheromones from faraway mates. Robots may soon match this feat with the help of a new mathematical method developed to help guide them toward a scent. In tests, the algorithm made virtual scent-hunter bots move just like moths do, snaking and spiralling toward their goal.

...

Massimo Vergassola at the Pasteur Institute in Paris, France, and colleagues created an algorithm that tells a robot how to move in order to gather as much olfactory information as possible. This allows it to home in on even the faintest of scents.

See the full article.
Journal reference: Nature (vol 445, p 406)

News: Web cam periscope

This little web cam accessory ($99) is like a periscope so you can look directly at the person you're video conferencing with... might be a fun re-make, links below to get you started - Link

Thursday, January 25, 2007

Lab Meeting 25 Jan 2007 (AShin): The Expectation Maximization Algorithm

Author: Frank Dellaert

College of Computing, Georgia Institute of Technology Technical Report

February 2002

Abstract:

This note represents my attempt at explaining the EM algorithm (Hartley, 1958; Dempster et al., 1977; McLachlan and Krishnan, 1997). This is just a slight variation on Tom Minka’s tutorial (Minka, 1998), perhaps a little easier (or perhaps not). It includes a graphical example to provide some intuition.

[Link]
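
For intuition, here is a minimal EM loop for a two-component 1-D Gaussian mixture (our toy instance; the note treats the general algorithm). The E-step computes each point's responsibility under component 1; the M-step re-estimates the parameters from those soft assignments.

```python
import math

def pdf(x, m, s):
    """1-D Gaussian density."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def em_step(data, pi, mu, sigma):
    """One E-step + M-step for a two-component 1-D Gaussian mixture."""
    # E-step: responsibility of component 1 for each point
    r = [pi * pdf(x, mu[0], sigma[0]) /
         (pi * pdf(x, mu[0], sigma[0]) + (1 - pi) * pdf(x, mu[1], sigma[1]))
         for x in data]
    # M-step: re-estimate mixing weight, means, and standard deviations
    n1 = sum(r)
    n2 = len(data) - n1
    mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
    s1 = max(math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1), 1e-3)
    s2 = max(math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2), 1e-3)
    return n1 / len(data), (mu1, mu2), (s1, s2)

data = [0.1, -0.2, 0.3, 0.0, 4.9, 5.2, 5.0, 5.1]   # two clusters near 0 and 5
pi, mu, sigma = 0.5, (1.0, 4.0), (1.0, 1.0)         # rough initial guess
for _ in range(20):
    pi, mu, sigma = em_step(data, pi, mu, sigma)
```

After a few iterations the means settle near the two clusters and the mixing weight near 0.5, which is exactly the monotone-improvement behavior the note explains.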

Lab Meeting 25 Jan 2007 (ZhenYu): Psychophysiological control architecture for human-robot coordination - concepts and initial experiments

Nilanjan Sarkar
Dept. of Mech. Eng., Vanderbilt Univ., Nashville, TN

Abstract:
The use of robots is expected to be pervasive in many spheres of society: in hospitals, homes, offices and battlefields, where the robots will need to interact and cooperate closely with a variety of people. The paper proposes an innovative approach to human-robot cooperation where the robot will be able to recognize the psychological state of the interacting human and modify its (i.e., robot's) own action to make the human feel comfortable in working with the robot. Wearable biofeedback sensors are used to measure a variety of physiological indices to infer the underlying psychological states (affective states) of the human. The eventual idea is to correlate the psychological states with the actions of the robot to determine which action(s) is responsible for a particular affective state. The robot controller will then modify that action if there is a need to alter the affective state. A concept of such a control architecture, a requirement analysis, and initial results from human experiments for stress detection are presented.

[link]

Wednesday, January 24, 2007

Lab Meeting 25 Jan 2007 (Casey): Contrast Context Histogram – A Discriminating Local Descriptor for Image Matching

From: The 18th International Conference on Pattern Recognition (ICPR'06)
Title: Contrast Context Histogram – A Discriminating Local Descriptor for Image Matching
Author: Chun-Rong Huang, Chu-Song Chen and Pau-Choo Chung

Abstract:
This paper presents a new invariant local descriptor, contrast context histogram, for image matching. It represents the contrast distributions of a local region, and serves as a local distinctive descriptor of this region. Object recognition can be considered as matching salient corners with similar contrast context histograms on two or more images in our work. Our experimental results show that the developed descriptor is accurate and efficient for matching.

Paper Download: Link
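
The flavour of a contrast-based local descriptor can be sketched in a few lines. This is our simplified quadrant version, not the paper's descriptor (which uses a finer regional partition): around a salient point, record per region the mean positive and mean negative intensity contrast against the center pixel, then normalize.

```python
def cch_descriptor(img, r, c, radius=4):
    """8-D descriptor: mean positive and mean negative contrast against the
    center pixel, computed per quadrant of the local window."""
    center = img[r][c]
    pos, neg = [0.0] * 4, [0.0] * 4
    npos, nneg = [0] * 4, [0] * 4
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr == 0 and dc == 0:
                continue
            quadrant = (0 if dr < 0 else 2) + (0 if dc < 0 else 1)
            contrast = img[r + dr][c + dc] - center
            if contrast >= 0:
                pos[quadrant] += contrast
                npos[quadrant] += 1
            else:
                neg[quadrant] += contrast
                nneg[quadrant] += 1
    desc = [p / max(n, 1) for p, n in zip(pos, npos)]
    desc += [q / max(n, 1) for q, n in zip(neg, nneg)]
    norm = sum(d * d for d in desc) ** 0.5 or 1.0   # unit length for invariance
    return [d / norm for d in desc]

# a horizontal intensity ramp: contrast is negative left of center, positive right
img = [[col for col in range(9)] for _ in range(9)]
desc = cch_descriptor(img, 4, 4)
```

Matching then reduces to comparing such vectors (e.g. by Euclidean distance) between salient corners of two images.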

Tuesday, January 23, 2007

CMU VASC talk: Subspectral Algorithms for Sparse Learning, Optimization & Inference

Baback Moghaddam
MERL
Monday, Jan 29, 3:30pm, NSH 1507

Subspectral Algorithms for Sparse Learning, Optimization & Inference

I will present a class of "subspectral" algorithms (i.e. sparse eigenvector techniques) for solving NP-hard combinatorial optimization problems in three general applied domains: (1) Supervised/unsupervised learning, in the traditional or orthodox sense (e.g. PCA & LDA), (2) Quadratic/Entropic Optimization (e.g. Least-Squares & MaxEnt) and (3) Inference, in the strict probabilistic/Bayesian sense (e.g. Automatic Relevance Determination and variational methods like Expectation Propagation). Subspectral algorithms for both exact (optimal) and greedy (approximate) solutions of these general sparse optimization problems are derived using analytic eigenvalue bounds. Specifically, an efficient "dual-pass" greedy algorithm is shown to yield near-optimal solutions for all possible cardinalities (at once) in a fraction of the time it takes for most continuous relaxation methods to find solutions of comparable quality for a single cardinality. I will present sample applications of subspectral optimization techniques in "sparse PCA" for feature selection (statistics), "sparse LDA" for classification (gene discovery), sparse kernel regression (robotics & control), sparse quadratic programming (portfolio optimization), graph model selection (sensor networks) as well as sparse Bayesian inference for computer vision (face recognition & OCR).
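
The greedy sparse-PCA idea can be illustrated with a plain forward-selection pass (our sketch, not the talk's bound-driven "dual-pass" algorithm): at each step, add the variable whose inclusion most increases the top eigenvalue of the selected covariance submatrix.

```python
def top_eigenvalue(a):
    """Largest eigenvalue of a small symmetric PSD matrix, by power iteration."""
    n = len(a)
    v = [1.0] * n
    for _ in range(200):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector
    return sum(v[i] * sum(a[i][j] * v[j] for j in range(n)) for i in range(n))

def greedy_sparse_pca(cov, k):
    """Greedily select k variables maximizing the top submatrix eigenvalue."""
    selected = []
    while len(selected) < k:
        best, best_val = None, -1.0
        for j in range(len(cov)):
            if j in selected:
                continue
            idx = selected + [j]
            sub = [[cov[p][q] for q in idx] for p in idx]
            val = top_eigenvalue(sub)
            if val > best_val:
                best, best_val = j, val
        selected.append(best)
    return sorted(selected), best_val

cov = [[3.0, 2.5, 0.1],
       [2.5, 3.0, 0.1],
       [0.1, 0.1, 1.0]]
sel, val = greedy_sparse_pca(cov, 2)
```

On this toy covariance the pass picks the two strongly coupled variables; the talk's contribution is doing this near-optimally for all cardinalities at once with eigenvalue bounds, rather than by brute re-evaluation as here.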

Bio:
Baback Moghaddam's research interests are in computational vision with a main focus on probabilistic visual learning. His related areas of interest and expertise include statistical modeling, Bayesian data analysis, machine learning and pattern recognition. He obtained his PhD in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology (MIT) in 1997. As a member of the Vision and Modeling Group at the MIT Media Laboratory, he developed a fully automatic vision system which won DARPA's 1996 "FERET" Face Recognition Competition.

Dr. Moghaddam was the winner of the 2001 Pierre Devijver Prize from the International Association of Pattern Recognition for his "innovative approach to face recognition" and received the Pattern Recognition Society Award for "exceptional outstanding quality" for his journal paper "Bayesian Face Recognition." He currently serves on the editorial board of the journal Pattern Recognition and has contributed to numerous textbooks on image processing and computer vision including the core chapter in Springer-Verlag's latest biometric series, "Handbook of Face Recognition."

Dr. Moghaddam's past research included infrared (IR) image analysis for the Office of Naval Research (ONR), segmentation of synthetic aperture radar (SAR) imagery for MIT Lincoln Laboratory as well as designing a micro-gravity experiment for laser annealing of amorphous silicon which was flown aboard the US Space Shuttle in 1990.

http://www.merl.com/people/baback

CMU ML talk: Approximate inference using planar graph decomposition

Approximate inference using planar graph decomposition
by Amir Globerson and Tommi Jaakkola
NIPS 2006

A number of exact and approximate methods are available for inference calculations in graphical models. Many recent approximate methods for graphs with cycles are based on tractable algorithms for tree structured graphs. Here we base the approximation on a different tractable model, planar graphs with binary variables and pure interaction potentials (no external field). The partition function for such models can be calculated exactly using an algorithm introduced by Fisher and Kasteleyn in the 1960s. We show how such tractable planar models can be used in a decomposition to derive upper bounds on the partition function of non-planar models. The resulting algorithm also allows for the estimation of marginals. We compare our planar decomposition to the tree decomposition method of Wainwright et al., showing that it results in a much tighter bound on the partition function, improved pairwise marginals, and comparable singleton marginals.
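
The decomposition bound rests on the convexity of the log partition function: if the model's couplings are a convex combination of tractable sub-models' couplings, then log Z is upper-bounded by the same combination of the sub-models' log partition functions. Here is a brute-force numerical check on a 3-node binary cycle, using two tree sub-models (we simply enumerate states instead of invoking the planar/Kasteleyn machinery):

```python
import math
from itertools import product

def log_z(couplings, n=3):
    """Log partition function of a small binary (+/-1) pairwise model."""
    total = 0.0
    for state in product([-1, 1], repeat=n):
        energy = sum(w * state[i] * state[j] for (i, j), w in couplings.items())
        total += math.exp(energy)
    return math.log(total)

# target model: a frustration-free cycle over edges (0,1), (1,2), (0,2)
theta = {(0, 1): 0.8, (1, 2): 0.8, (0, 2): 0.8}
# two tree-structured sub-models whose 0.5/0.5 combination reproduces theta
theta_a = {(0, 1): 1.6, (1, 2): 0.0, (0, 2): 1.6}
theta_b = {(0, 1): 0.0, (1, 2): 1.6, (0, 2): 0.0}

exact = log_z(theta)
bound = 0.5 * log_z(theta_a) + 0.5 * log_z(theta_b)
```

The bound holds because log Z is a log-sum-exp of functions linear in the couplings, hence convex; the paper's point is that planar sub-models make this bound much tighter than tree sub-models while staying tractable.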

CMU ML talks: Greedy Layer-Wise Training of Deep Networks

Greedy Layer-Wise Training of Deep Networks
by Yoshua Bengio, Pascal Lamblin, Dan Popovici and Hugo Larochelle
NIPS 2006

Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input bringing better generalization.
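
The greedy layer-wise scheme itself is simple to sketch. Below is a toy stand-in (ours, not the paper's): each "layer" is built from one-unit linear autoencoders fit by gradient descent on the outputs of the frozen layers below it. Real DBNs use RBMs and nonlinearities; this only illustrates the train-freeze-stack structure.

```python
def train_unit(data, steps=1000, lr=0.005):
    """Fit w minimizing sum ||(w.x) w - x||^2 by per-sample gradient descent."""
    dim = len(data[0])
    w = [0.1 * (i + 1) for i in range(dim)]          # deterministic init
    for _ in range(steps):
        for x in data:
            h = sum(wi * xi for wi, xi in zip(w, x))          # code
            e = [h * wi - xi for wi, xi in zip(w, x)]         # reconstruction error
            ew = sum(ei * wi for ei, wi in zip(e, w))
            w = [wi - lr * 2 * (xi * ew + h * ei)
                 for wi, xi, ei in zip(w, x, e)]
    return w

def train_layer(data, units):
    """Greedily fit `units` one-unit autoencoders on residuals, then freeze."""
    ws = []
    residual = [list(x) for x in data]
    for _ in range(units):
        w = train_unit(residual)
        ws.append(w)
        for x in residual:          # deflate: remove what this unit explains
            h = sum(wi * xi for wi, xi in zip(w, x))
            for i in range(len(x)):
                x[i] -= h * w[i]
    return ws

def encode(ws, x):
    return [sum(wi * xi for wi, xi in zip(w, x)) for w in ws]

data = [[1, 1, 0, 0], [2, 2, 0, 0], [0, 0, 1, 1], [0, 0, 3, 3]]
layer1 = train_layer(data, 2)                 # first layer: 4-D -> 2-D codes
codes = [encode(layer1, x) for x in data]
layer2 = train_layer(codes, 1)                # second layer sees only the codes
```

The key point mirrored here is that layer 2 never touches the raw inputs: it is trained unsupervised on the representation produced by layer 1, exactly the initialization strategy the paper analyzes.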

Monday, January 22, 2007

News: Robot nurses ready for wards 'in three years'

http://www.thisislondon.co.uk/

Robot nurses could be bustling around hospital wards in as little as three years.

The mechanised "angels" being developed by EU-funded scientists will perform basic tasks such as mopping up spillages, taking messages, and guiding visitors to hospital beds.

They could also distribute medicines and even monitor the temperature of patients remotely with laser thermometers.

...

He told The Engineer magazine: "The idea is not only to have mobile robots but also a full system of integrated information terminals and guide lights, so the hospital is full of interaction and intelligence.

"Operating as a completely decentralised network means that the robots can co-ordinate things between themselves, such as deciding which one would be best equipped to deal with a spillage or to transport medicine."

He said the robots could provide a valuable service guiding people around the hospital. A visitor would state the name of a patient at an information terminal and then follow a robot to the correct bedside.

...

See the full article.

Thursday, January 18, 2007

MIT CSAIL PhD thesis: Robot Manipulation in Human Environments

Author: Edsinger, Aaron
Advisor: Rodney Brooks
Issue Date: 16-Jan-2007

Abstract: Human environments present special challenges for robot manipulation. They are often dynamic, difficult to predict, and beyond the control of a robot engineer. Fortunately, many characteristics of these settings can be used to a robot's advantage. Human environments are typically populated by people, and a robot can rely on the guidance and assistance of a human collaborator. Everyday objects exhibit common, task-relevant features that reduce the cognitive load required for the object's use. Many tasks can be achieved through the detection and control of these sparse perceptual features. And finally, a robot is more than a passive observer of the world. It can use its body to reduce its perceptual uncertainty about the world. In this thesis we present advances in robot manipulation that address the unique challenges of human environments. We describe the design of a humanoid robot named Domo, develop methods that allow Domo to assist a person in everyday tasks, and discuss general strategies for building robots that work alongside people in their homes and workplaces. PDF

Paper: Experimental Characterization of Commercial Flash Ladar Devices

D. Anderson, H. Herman, and A. Kelly
International Conference on Sensing Technology, November 2005.

Abstract: Flash ladar is a new class of range imaging sensors. Unlike traditional ladar devices that scan a collimated laser beam over the scene, flash ladar illuminates the entire scene with diffuse laser light. Recently, several companies have begun offering demonstration flash ladar units commercially. In this work, we seek to characterize the performance of two such devices, examining the effects of target range, reflectance and angle of incidence, as well as mixed pixel effects.

PDF

CMU RI seminar: The neuroarchitecture of complex cognition

Marcel Adam Just
D. O. Hebb Professor of Psychology
Carnegie Mellon University

Recent findings in brain imaging, particularly fMRI, are beginning to reveal some of the fundamental properties of the organization of the cortical systems that underpin complex cognition. A set of operating principles govern the system organization, characterizing the system as a set of collaborating cortical centers that operate as a large-scale cortical network. Two of the network's critical features are that it is resource-constrained and dynamically-configured, with resource constraints and demands dynamically shaping the network topology. The operating principles are embodied in a cognitive neuroarchitecture, 4CAPS, consisting of a number of interacting computational centers that correspond to activating cortical areas. Each 4CAPS center is a hybrid production system, possessing both symbolic and connectionist attributes. 4CAPS models of several cognitive tasks (sentence comprehension, spatial problem solving, and complex multitasking) have been developed and compared to brain activation and behavioral results.

Congratulations to Casey Wang!

Congratulations! Casey Wang successfully defended his master's thesis on "Hand Gesture Recognition using Adaboost with SIFT". Good job!

-Bob

Saturday, January 13, 2007

IEEE ITSS Newsletter vol 8 nr 4, December 2006

In this issue you will find a lot of interesting material:
- ITSS-related news, in particular many new initiatives from the ITS Society
- technical papers
- conference reports and announcements
- a research overview focusing on nomadic devices in new intelligent vehicles

You can find the ITSS Newsletter at the IEEE ITSS official web site address:
http://www.ewh.ieee.org/tc/its/
or directly at:
http://www.its.washington.edu/itsc/v8n4.pdf

Thursday, January 11, 2007

MIT CSAIL report: Latent-Dynamic Discriminative Models for Continuous Gesture Recognition

Authors: Morency, Louis-Philippe; Quattoni, Ariadna; Darrell, Trevor
Issue Date: 7-Jan-2007

Abstract: Many problems in vision involve the prediction of a class label for each frame in an unsegmented sequence. In this paper we develop a discriminative framework for simultaneous sequence segmentation and labeling which can capture both intrinsic and extrinsic class dynamics. Our approach incorporates hidden state variables which model the sub-structure of a class sequence and learn the dynamics between class labels. Each class label has a disjoint set of associated hidden states, which enables efficient training and inference in our model. We evaluated our method on the task of recognizing human gestures from unsegmented video streams and performed experiments on three different datasets of head and eye gestures. Our results demonstrate that our model for visual gesture recognition outperforms models based on Support Vector Machines, Hidden Markov Models, and Conditional Random Fields.

PDF, PS

Lab meeting 12 Jan, 2007 (Stanley): A robocentric motion planner for dynamic environments using the velocity space

Author: E. Owen, L. Montano

From: IEEE/RSJ International Conference on Intelligent Robots and Systems. Oct. 9-15, Beijing, China.

Abstract:
This paper presents a method to optimize robot motion planning in dynamic environments, avoiding the moving and static obstacles while the robot drives towards the goal. The method maps the dynamic environment into a model in the velocity space, computing the times to potential collision and potential escape and the associated robot velocities. The problem of finding a trajectory to the goal is stated as a constrained nonlinear optimization problem. The initial seed trajectory for the optimization is generated directly in the velocity space using the model built. The method is applied to robots which are subject to both kinematic constraints (i.e. involving the configuration parameters of the robot and their derivatives) and dynamic constraints (i.e. the constraints imposed by the acceleration/deceleration capabilities). Some experimental results are discussed.

Link
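
One basic ingredient of such a velocity-space model is the time to potential collision for a candidate robot velocity. Here is our own sketch for a circular moving obstacle (the names, shapes, and parameters are illustrative, not the paper's): solve the quadratic for the first time the relative position comes within the combined radius.

```python
import math

def time_to_collision(p, obstacle_v, robot_v, radius):
    """First t >= 0 with |p + (obstacle_v - robot_v) * t| <= radius, else None.
    p: obstacle position relative to the robot; radius: combined radii."""
    rvx = obstacle_v[0] - robot_v[0]
    rvy = obstacle_v[1] - robot_v[1]
    a = rvx ** 2 + rvy ** 2
    b = 2 * (p[0] * rvx + p[1] * rvy)
    c = p[0] ** 2 + p[1] ** 2 - radius ** 2
    if c <= 0:
        return 0.0                   # already in collision
    if a == 0:
        return None                  # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                  # paths never come within `radius`
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# obstacle 5 m ahead closing at 1 m/s while the robot drives forward at 1 m/s
t = time_to_collision((5.0, 0.0), (-1.0, 0.0), (1.0, 0.0), 0.5)   # 2.25 s
```

Sweeping this over a grid of candidate robot velocities yields exactly the kind of velocity-space collision map the planner seeds its optimization from.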

Tuesday, January 09, 2007

News: iRobot Introduces the iRobot Create!

A robust, programmable Robot platform that invites you to stretch your imagination

iRobot, a pioneering robot company that has sold millions of iRobot Roomba vacuuming robots, has introduced a remarkable robot platform that fills a major gap.

The iRobot Create is a dependable, rugged and versatile robot base that can be used for countless robotics hobby and research applications. It includes a selection of software routines developed for iRobot’s commercial appliance bots and a well-engineered, robust chassis designed for longevity. Many will consider this to be a DIY-roboticist’s dream come true.

Here, we offer a summary of the iRobot Create’s features in this First Look, initial comments by contributing editor Dan Lynch (who has been playing with one for a few days as of this post) and an interesting overview of applications designed for this new robot platform by iRobot employees worldwide. Stay tuned for an in-depth feature article in the Summer 2007 issue of Robot.

The iRobot Create comes fully assembled. It has 32 built-in sensors, two powered wheels, a castor (and optional 4th castor wheel), 10 pre-programmed behaviors, an expandable input/output port for custom sensors and actuators, a cargo bay with mounting points and a tailgate for ballast. This new bot platform works with optional accessories such as the iRobot Command Module, iRobot Roomba Virtual Wall units, the self-charging home base, and iRobot Roomba standard remote. You can use the Roomba rechargeable battery options or standard alkaline batteries. You’ll need a computer with a serial port (USB connectivity is expected soon) and Microsoft Windows XP, Linux or Mac OS X. - Link

Related: iRobot "Create" - Educational robot - Link
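
For the curious, driving the Create over that serial port comes down to sending Open Interface byte packets. The sketch below uses the OI opcodes as we recall them (Start = 128, Full mode = 132, Drive = 137 with 16-bit big-endian velocity and radius); verify against iRobot's official OI documentation before relying on it.

```python
import struct

START, SAFE, FULL, DRIVE = 128, 131, 132, 137   # OI opcodes (check the spec)

def drive_command(velocity_mm_s, radius_mm):
    """Drive opcode plus 16-bit big-endian signed velocity (mm/s) and radius (mm).
    Radius 0x8000 (-32768 as a signed short) means 'drive straight'."""
    return bytes([DRIVE]) + struct.pack(">hh", velocity_mm_s, radius_mm)

# start the OI, enter Full mode, drive straight at 200 mm/s
packet = bytes([START, FULL]) + drive_command(200, -32768)

# with pyserial you would then write the packet to the robot's port, e.g.:
#   serial.Serial("/dev/ttyUSB0", 57600).write(packet)
```

Building the packet separately from sending it makes the byte encoding easy to unit-test without a robot attached.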

Sunday, January 07, 2007

News: The First Kyosho Athlete Humanoid Cup!

The launch of the MANOI AT01 humanoid robot kit, featured in our Winter 2006 issue, was an instant success, with many stores in Japan selling out of stock within a few days of its September introduction. So it was no surprise when a large number of customers turned out with their robots fully assembled and customized to compete in the first Kyosho Athlete Humanoid Cup event on December 10th in the trendy Omotesando Hills complex in Tokyo.

The inaugural Kyosho Athlete Humanoid Cup event was of keen interest to robot fans everywhere. The images below tell the story of how this exciting competition unfolded.


Left to right: Dr. GIY's MANOI AT01, Sugiura's AT01 and a silver MANOI PF01 were exhibited at a MANOI launch press conference in September 2006.

The competitions, watched by standing-room-only crowds, included 5-meter sprints against the clock with both R/C and autonomous divisions, plus two-minute demonstrations/performances scored by an expert panel of judges.

See the full article.

paper: Predictive Mover Detection and Tracking in Cluttered Environments

L. Navarro, C. Mertz, and M. Hebert
Proc. of the 25th Army Science Conference, November 2006.
PDF

Abstract: This paper describes the design and experimental evaluation of a system that enables a vehicle to detect and track moving objects in real-time. The approach investigated in this work detects objects in LADAR scan lines and tracks these objects (people or vehicles) over time. The system can fuse data from multiple scanners for 360° coverage. The resulting tracks are then used to predict the most likely future trajectories of the detected objects. The predictions are intended to be used by a planner for dynamic object avoidance. The perceptual capabilities of our system form the basis for safe and robust navigation in robotic vehicles, necessary to safeguard soldiers and civilians operating in the vicinity of the robot.
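
A first step any such pipeline needs is segmenting a scan line into object candidates. This toy version (ours, not the paper's detector) breaks the scan wherever consecutive range readings jump by more than a threshold, then reports each segment's Cartesian centroid for the tracker.

```python
import math

def segment_scan(ranges, angles, jump=0.5):
    """Group consecutive (range, angle) readings into clusters at range jumps."""
    segments, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append(current)
            current = [i]
        else:
            current.append(i)
    segments.append(current)
    # each segment's centroid in Cartesian coordinates, for data association
    centroids = []
    for seg in segments:
        xs = [ranges[i] * math.cos(angles[i]) for i in seg]
        ys = [ranges[i] * math.sin(angles[i]) for i in seg]
        centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return segments, centroids

# a wall at ~5 m with a person-sized blob at ~2 m in the middle of the scan
ranges = [5.0, 5.1, 5.0, 2.0, 2.1, 2.0, 5.2, 5.1]
angles = [math.radians(a) for a in range(0, 80, 10)]
segments, centroids = segment_scan(ranges, angles)   # wall, blob, wall
```

Associating such centroids across successive scans (and across scanners) is then what produces the tracks whose velocities feed the trajectory prediction.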

paper: Bootstrap learning of foundational representations

B.J. Kuipers, P. Beeson, J. Modayil, J. Provost
Connection Science, 2006 - Taylor & Francis
PDF

Abstract: To be autonomous, intelligent robots must learn the foundations of commonsense knowledge from their own sensorimotor experience in the world. We describe four recent research results that contribute to a theory of how a robot learning agent can bootstrap from the “blooming buzzing confusion” of the pixel level to a higher-level ontology including distinctive states, places, objects, and actions. This is not a single learning problem, but a lattice of related learning tasks, each providing prerequisites for tasks to come later. Starting with completely uninterpreted sense and motor vectors, as well as an unknown environment, we show how a learning agent can separate the sense vector into modalities, learn the structure of individual modalities, learn natural primitives for the motor system, identify reliable relations between primitive actions and created sensory features, and define useful control laws for homing and path-following. Building on this framework, we show how an agent can use self-organizing maps to identify useful sensory features in the environment, learn effective hill-climbing control laws to define distinctive states in terms of those features, and learn trajectory-following control laws to move from one distinctive state to another. Moving on to place recognition, we show how an agent can combine unsupervised learning, map-learning, and supervised learning to achieve high-performance recognition of places from rich sensory input. And finally, we take the first steps toward learning an ontology of objects, showing that a bootstrap learning robot can learn to individuate objects through motion, separating them from the static environment and from each other, and learning properties that will be useful for classification. These are four key steps in a much larger research enterprise that lays the foundation for human and robot commonsense knowledge.
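
The self-organizing-map step can be illustrated with a minimal 1-D SOM on 2-D inputs (a toy of our own, not the authors' setup): each input pulls its best-matching unit, and that unit's grid neighbours, toward it, with the neighbourhood shrinking over time.

```python
def train_som(data, units=4, epochs=30, lr=0.2):
    """Minimal 1-D self-organizing map over 2-D input vectors."""
    # deterministic initialization along a short diagonal
    weights = [[0.1 * k, 0.1 * k] for k in range(units)]
    for epoch in range(epochs):
        radius = max(1 - epoch // 10, 0)      # shrink the grid neighbourhood
        for x in data:
            # best-matching unit: closest weight vector to the input
            bmu = min(range(units),
                      key=lambda k: sum((weights[k][d] - x[d]) ** 2
                                        for d in range(2)))
            # pull the BMU and its grid neighbours toward the input
            for k in range(units):
                if abs(k - bmu) <= radius:
                    for d in range(2):
                        weights[k][d] += lr * (x[d] - weights[k][d])
    return weights

# three small clusters of "sensory features"
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0], [2.0, 0.0], [2.1, 0.1]]
weights = train_som(data)
```

After training, the units spread out to quantize the input clusters; in the paper's framework, hill-climbing on such learned features is what defines distinctive states.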

News: Wow Wee Unveils Robopanda


Cute but not cuddly, this robot will be the most charming and interactive part of the company's 2007 line.

One of the highlights of A.I., Steven Spielberg's melancholy look at robots in the distant future, was a little autonomous robot bear, "Teddy," that served as friend and companion to the film's main character, David, the "mech" boy robot. The bear could walk, talk, interact, and show real intelligence and affection. Now Wow Wee is apparently ripping a page from that movie's script to introduce the new Robopanda.

Part of Wow Wee's 2007 line of robot toys, the $229 Robopanda is swathed in plastic instead of cuddly fur, and is roughly 19.5 inches tall (standing), 11 inches wide and 6 inches deep. Wow Wee officials, however, promise a level of robot/human interaction scarcely seen in previous robot toys.

Designed for children (and, perhaps, adults) ages 4 and up, the 8-pound Robopanda will be covered with eight touch sensors and, using infrared and stereo sensors, should be able to avoid obstacles, track objects and locate sound sources. It will also tell stories and recognize and interact with its own companion: a plush-panda toy that will ship with the robot.

The robo-mammal will also feature, Wow Wee execs said, "incredible movement," using its nine "ultra-quiet" motors (and one tilt sensor) to sit, crawl, walk (on all fours), roll over and hug. Unlike previous Wow Wee robots, Robopanda is not expected to ship with a remote. Its artificial intelligence may also be greater than that of previous Wow Wee products: "Robopanda responds with mood-specific behaviors…based on [user] interaction," Wow Wee notes in a recent press release.

See the full article.

News: Spyke Wi-Fi Spy Robot Debuts at CES 2007


Consumer robots could also be a big topic at CES 2007. Here is a newcomer: French MECCANO introduces the Spyke robot in the United States under the ERECTOR brand.

The Spyke robot is controlled via a PC over Wi-Fi. Spyke has a webcam and moves on rubber tracks; professional robots can climb stairs with this kind of drive mechanism.

See the full article.

Saturday, January 06, 2007

News: Gates says day of the home-help robot is near

James Randerson, science correspondent
Friday January 5, 2007
The Guardian

An office worker checks her home-gadget webpage from her work computer. The tasks she set for her home robots in the morning have all been completed: washing and ironing, vacuuming the lounge and mowing the lawn.

She orders dinner from the kitchen chefbot - sushi today, using a recipe from a Japanese website - then checks her elderly mother's house. The companionbot has given mum her medicine and helped her out of bed and into a chair.

This is the vision of the future offered by Bill Gates who, in the latest issue of Scientific American, argues that the robotics industry is on the cusp of a big expansion. He likens the current state of robotic technology to the situation in the fledgling computer industry when he and his fellow entrepreneur Paul Allen launched Microsoft in the mid-1970s.

See the full article.

Thursday, January 04, 2007

Lab meeting 5 Jan, 2007 (Nelson): Motion–Egomotion Discrimination and Motion Segmentation from Image-Pair Streams

David Demirdjian and Radu Horaud
[LINK]

Computer Vision and Image Understanding
Volume 78 , Issue 1 (April 2000)

Special issue on robust statistical techniques in image understanding
Pages: 53 - 68


Abstract:

Given a sequence of image pairs, we describe a method that segments the observed scene into static and moving objects while rejecting badly matched points. We show that, using a moving stereo rig, the detection of motion can be solved in a projective framework and therefore requires no camera calibration. Moreover, the method allows for articulated objects. First we establish the projective framework enabling us to characterize rigid motion in projective space. This characterization is used in conjunction with a robust estimation technique to determine egomotion. Second we describe a method based on data classification which further considers the non-static scene points and groups them into several moving objects. Third we introduce a stereo-tracking algorithm that provides the point-to-point correspondences needed by the algorithms. Finally we show some experiments involving a moving stereo head observing both static and moving objects.
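
The robust-estimation flavour can be shown with a toy RANSAC that recovers "egomotion" as a 2-D translation between matched points; the matches that do not fit the dominant motion fall out as the movers. (The paper works with projective 3-D motion from a stereo rig; everything below is our simplification.)

```python
import random

def ransac_translation(pairs, trials=50, tol=0.1):
    """Dominant 2-D translation among (p_before, p_after) matches, plus the
    matches rejected as inconsistent with it (the candidate movers)."""
    rng = random.Random(0)                     # fixed seed for repeatability
    best_t, best_inliers = None, []
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        t = (x2 - x1, y2 - y1)                 # hypothesis from one match
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - t[0]) < tol
                   and abs(p[1][1] - p[0][1] - t[1]) < tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    movers = [p for p in pairs if p not in best_inliers]
    return best_t, movers

# five static points shifted by the camera motion (1, 0), one moving point
pairs = [((0, 0), (1, 0)), ((1, 0), (2, 0)), ((0, 1), (1, 1)),
         ((2, 2), (3, 2)), ((1, 2), (2, 2)), ((3, 0), (5, 2))]
t, movers = ransac_translation(pairs)
```

The dominant-motion hypothesis wins because static points outnumber movers; clustering the rejected matches is then how the non-static points get grouped into moving objects.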

Lab meeting 5 Jan, 2007 (Atwood): Plan-view trajectory estimation with dense stereo background models

Title: Plan-view trajectory estimation with dense stereo background models
Authors: Darrell, T.; Demirdjian, D.; Checka, N.; Felzenszwalb, P.

from ICCV 2001.

Abstract:

In a known environment, objects may be tracked in multiple views using a set of background models. Stereo-based models can be illumination-invariant, but often have undefined values which inevitably lead to foreground classification errors. We derive dense stereo models for object tracking using long-term, extended dynamic-range imagery, and by detecting and interpolating uniform but unoccluded planar regions. Foreground points are detected quickly in new images using pruned disparity search. We adopt a “late-segmentation” strategy, using an integrated plan-view density representation. Foreground points are segmented into object regions only when a trajectory is finally estimated, using a dynamic programming-based method. Object entry and exit are optimally determined and are not restricted to special spatial zones.


Link
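
Building the plan-view density representation comes down to dropping each foreground 3-D point onto a floor grid. This is our simplified sketch (cell size, grid extent, and coordinate conventions are ours): points are already in metric floor coordinates, and cell counts form the density over which trajectories are later estimated.

```python
def plan_view_density(points, cell=0.25, size=8):
    """Accumulate the (x, z) floor positions of foreground 3-D points
    into a size x size plan-view density grid (y is height above the floor)."""
    grid = [[0] * size for _ in range(size)]
    for x, y, z in points:
        col = int(x / cell)
        row = int(z / cell)
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] += 1
    return grid

# a compact column of foreground points (one person) plus a stray point
points = [(0.55, h, 1.05) for h in (0.5, 1.0, 1.5)] + [(1.9, 1.0, 1.9)]
grid = plan_view_density(points)   # the three person points share one cell
```

Summing along the height axis like this is what makes people appear as compact, trackable peaks in the plan view, regardless of their pose in the image.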