Wednesday, May 31, 2006

Robotics Institute Thesis Proposal: Proactive Replanning for Multi-Robot Teams

Proactive Replanning for Multi-Robot Teams

Brennan Sellner
Robotics Institute
Carnegie Mellon University

Place and Time
NSH 3002, 2:00 PM

Rather than blindly following a predetermined schedule, human workers often dynamically change their course of action in order to assist a coworker who is having unexpected difficulties. The goal of this research is to examine how this notion of "helpful" behavior can inspire new approaches to online plan execution and repair in multi-robot systems. Specifically, we are investigating the enabling of proactive replanning by dynamically predicting task duration and adapting to predicted problems or opportunities through the modification of executing tasks. By continuously predicting the remaining task duration, a proactive replanner is able to adjust to upcoming opportunities or problems before they manifest themselves. One way in which it may do so is by adjusting the allocation of agents to the various executing tasks by adding or removing agents, which allows the planner to balance a schedule in response to the realities of execution. We propose to develop a planning/scheduling/execution system that, by supporting duration prediction and adaptation, will be able to execute complex multi-robot tasks in an uncertain environment more efficiently than is possible without such proactive capabilities.

We have developed a proof-of-concept system that implements duration prediction and modification of existing tasks, yielding simulated executed makespans as much as 31.8% shorter than possible without these capabilities. Our initial system does not operate in real time, nor with actual hardware, instead interfacing with a simulator and allowing unlimited time for replanning between time steps. We propose to characterize the applicability of this approach to various domains, extend our algorithms to support more complex scenarios and to address shortcomings we have identified, and optimize the algorithms with respect to both computational complexity and the makespan of the final executed schedule, with the goal of bringing the advantages of duration prediction and task modification to a real-time planning/execution system. We will evaluate our approach to proactive replanning both in an extensive series of simulated experiments and in a real-time assembly scenario using actual hardware. We hypothesize that proactive replanning can be performed in real time while yielding significant improvements in overall execution time, as compared with a baseline repair-based planner.

Further Details:
A copy of the thesis proposal document can be found at

Thesis Committee:
Reid Simmons, Chair
Sanjiv Singh
Stephen Smith
Tara Estlin, Jet Propulsion Laboratory

PhD position at EPFL: Indoor Flying Robot for Search and Rescue

Open PhD Position Announcement
Indoor Flying Robot for Search and Rescue

Laboratory of Intelligent Systems (LIS)
Swiss Federal Institute of Technology Lausanne (EPFL)

One funded PhD student position is available immediately or upon agreement at the Laboratory of Intelligent Systems, Swiss Federal Institute of Technology Lausanne (Lab Director: Prof. Dario Floreano, Project Leader: Jean-Christophe Zufferey) to work on the micro-mechatronic development of a novel indoor flying robot for search and rescue operations.

The candidate is expected to develop innovative aerodynamic and micro-mechatronic solutions, including design, prototyping, integration of sensors and electronics, as well as characterization of the system. The candidate must have a strong motivation for aerodynamics, micro-engineering, and system integration. She or he must also have a systematic and rigorous approach to system design and characterization.

Applicants should preferably have an MSc in one of the following disciplines:
- Aerodynamics
- Micro-engineering
- Electronic Engineering
- Material Engineering
- Robotics

Candidates from other disciplines are also welcome if they demonstrate skills in the above-mentioned areas. Proficiency in written and spoken English is mandatory. PhD students will be asked to assist with some classes and to supervise undergraduate and master's projects.

The candidate will join a young and enthusiastic team with several activities in aerial robotics that have been featured in top international media and scientific magazines. The Laboratory offers access to state-of-the-art facilities ranging from micro-machining to printed-circuit fabrication, as well as collaborations with companies in micro-robotics.

To apply for the position, please send a letter of motivation along with a CV, certificates, and two academic references to

For more information please refer to:
Swiss Federal Institute of Technology Lausanne:
Laboratory of Intelligent Systems:

Prof. Dr. Dario Floreano
Laboratory of Intelligent Systems
Swiss Federal Institute of Technology (EPFL)
Building ELE, Station 11
CH-1015 Lausanne, Switzerland

Voice: +41 21 693 5230
Secretary: +41 21 693 5966
Fax: +41 21 693 5859

'Baby' robot learns like a human
Tom Simonite

A robot that learns to interact with the world in a similar way to a human baby could provide researchers with fresh insights into biological intelligence.
Created by roboticists from Italy, France and Switzerland, "Babybot" automatically experiments with objects nearby and learns how best to make use of them. This gives the robot an ability to develop motor skills in the same way as a human infant.

Soldiers bond with bots on battlefield

They may not be as cuddly as RI-MAN or as human-like as EveR-1, but Reuters is reporting that soldiers in Iraq have nonetheless formed strong bonds with their battlefield bots, giving them names and grieving when they meet an unfortunate end. When one bomb-defusing PackBot from Roomba-maker iRobot named "Scooby Doo" was blown up after 35 successful missions, the bot's operator asked of iRobot, "Please fix Scooby Doo, because he saved my life." Of course, humans forming meaningful emotional attachments to their robot companions and servants is by no means unusual; studies have shown robo-pets to be as therapeutic as the real thing, and bots like Paro the seal have been helping patients in nursing facilities for years now (and are even crossing over to starring in movies). Still, if there's one kind of robot we'd want to stay away from as the robot revolution looms near, it's the kind designed for military use. Ruh roh, Raggy.

Tuesday, May 30, 2006

Call For Papers: RSS Workshop on Robotic Systems on Rehabilitation, Exoskeleton, and Prosthetics

The Robotics: Science and Systems workshop titled "Robotic Systems on Rehabilitation, Exoskeleton and Prosthetics" will be held on August 18th, 2006 in Philadelphia, PA, USA.

Call for Papers: We invite you to submit an extended abstract for this discussion-oriented workshop with distinguished invited speakers. The topic includes grand challenge issues in rehabilitation, exoskeleton and prosthetics such as (but not limited to) interface with the neural systems, safety, size/weight, functional improvement assessment, and ease of use/don/control. The extended abstract is limited to 2 pages including figures.

Deadline: June 23, 2006
Notification: July 7, 2006
Workshop: August 18, 2006

Your extended abstract will be peer-reviewed. Accepted abstracts will be included in the workshop proceedings and showcased during the poster session.

For more details, see
RSS conference webpage:

We look forward to seeing you in August.

Yoky Matsuoka, Carnegie Mellon University
Bill Townsend, Barrett Technology, Inc.

Monday, May 29, 2006

Call For Papers: IROS 2006 Workshop

IEEE/RSJ IROS 2006 Workshop (October 10, 2006, Beijing, China):

From sensors to human spatial concepts
(geometric approaches and appearance-based approaches)

Organizers: Zoran Zivkovic, Ben Kröse, Raja Chatila, Henrik Christensen,
Roland Siegwart


The aim of the workshop is to bring together researchers who work on space representations appropriate for communicating with humans and on algorithms for relating robot sensor data to human spatial concepts. The workshop should also renew the discussion about appearance-based and geometric approaches to space modeling in light of this problem. Additionally, to facilitate the discussion, a dataset consisting of omnidirectional camera images, laser range readings, and robot odometry is provided.

The commonly used geometric space representations are a natural choice for robot localization and navigation. However, it is hard to communicate with a robot in terms of, for example, 2D (x, y) positions. Common human spatial concepts include "the living room" or "the corridor between the living room and the kitchen"; more general ones such as "a room" or "a corridor"; or more specific, object-related ones such as "behind the TV in the living room". An appropriate space representation is needed for natural communication with the robot. Furthermore, the robot should be able to relate its sensor readings to these human spatial concepts.

Suggested topics include, but are not limited to the following areas:

  • space representations appropriate for communicating with humans
  • papers applying and analyzing the results from cognitive science about the human spatial concepts
  • space representations suited to cover cognitive requirements of learning, knowledge acquisition and contextual control
  • methods for relating the robot sensor data to the human spatial concepts
  • comparison and/or combination of the appearance based and geometric approach for the task of relating the robot sensor data to the human spatial concepts
We also encourage the use of the provided dataset. Selected papers will be considered for a special issue of the Robotics and Autonomous Systems journal.

CFP: RSS Workshop on Socially Assistive Robotics 2006

Robotics: Science and Systems (RSS) Workshop on "Socially Assistive Robotics", August 19th, 2006, Philadelphia, PA, USA

Call for Posters:

The organizers invite you to submit a 2-page poster abstract for the RSS 2006 Workshop on "Socially Assistive Robotics". The number of posters that can be accepted is limited; acceptance will depend on relevance to the workshop topics and contribution quality. The accepted posters will be included in the workshop proceedings and the authors will have the opportunity to present their work during the poster session. Please submit your poster abstracts by email to tapus(at) (use "RSS Workshop 2006 Abstract Submission" in the subject).

Key Dates:
  • Deadline of abstract submission: June 25, 2006
  • Notification of acceptance: July 15, 2006
  • "Socially Assistive Robotics" RSS Workshop: August 19, 2006


Research into Human-Robot Interaction (HRI) for socially assistive applications is still in its infancy, but systems have already been built for a variety of user groups. For the elderly, robot-pet companions aiming to reduce stress and depression have been developed; for people with physical impairments, assistive devices such as wheelchairs and robot manipulators have been designed; for people in rehabilitation therapy, therapist robots that assist, encourage and socially interact with patients have been tested; for people with cognitive disorders, many applications have focused on robots that can therapeutically interact with children with autism; and for students, tutoring applications have been implemented. An ideal assistive robot should feature sufficiently complex cognitive and social skills to understand and interact with its environment, to exhibit social behaviors, and to focus its attention and communicate with people toward helping them achieve their goals.

The objectives of this workshop are to present the grand challenge of socially assistive robotics, the current state-of-the-art, and recent progress on key problems. Speakers at the workshop will address a variety of multidisciplinary topics, including social behavior and interaction, human-robot communication, task learning, psychological implications, and others. The workshop will also cover a variety of assistive applications, based on hands-off and hands-on therapies for helping people in need of assistance as part of convalescence, rehabilitation, education, training, and ageing. The proposed workshop is aimed at providing a general overview of the critical issues and key points in building effective, acceptable and reliable human-robot interaction systems for socially assistive applications and providing indications for further directions and developments in the field, based on the diverse expertise of the participants.

Topics of interest include:
  • Social and physical embeddedness
  • Encouraging compliance, goal sharing and transfer
  • Designing behavioral studies
  • Modeling embodied empathy
  • Experimental design for human-robot interaction


Dr. Adriana TAPUS
Robotics Research Lab/Interaction Lab
Computer Science Department
University of Southern California
RTH 423, Mailcode 0781
941 West 37th Place, Los Angeles, CA, USA
e-mail: tapus(at)

Prof. Maja MATARIĆ
Robotics Research Lab/Interaction Lab
Computer Science Department
University of Southern California
RTH 407, Mailcode 0781
941 West 37th Place, Los Angeles, CA, USA
e-mail: mataric(at)


Neural Information Processing Systems (NIPS)

Deadline for Paper Submissions: June 9, 2006

Submissions are solicited for the Twentieth Annual meeting of an interdisciplinary Conference (December 5-7) which brings together researchers interested in all aspects of neural and statistical processing and computation. The Conference will include invited talks as well as oral and poster presentations of refereed papers. It is single track and highly selective. Preceding the main Conference will be one day of Tutorials (December 4), and following it will be two days of Workshops at Whistler/Blackcomb ski resort (December 8-9).

Submissions: Papers are solicited in all areas of neural information processing, including (but not limited to) the following:

  • Algorithms and Architectures: statistical learning algorithms, neural networks, kernel methods, graphical models, Gaussian processes, dimensionality reduction and manifold learning, model selection, combinatorial optimization.
  • Applications: innovative applications or fielded systems that use machine learning, including systems for time series prediction, bioinformatics, text/web analysis, multimedia processing, and robotics.
  • Brain Imaging: neuroimaging, cognitive neuroscience, EEG (electroencephalogram), ERP (event related potentials), MEG (magnetoencephalogram), fMRI (functional magnetic resonance imaging), brain mapping, brain segmentation, brain computer interfaces.
  • Cognitive Science and Artificial Intelligence: theoretical, computational, or experimental studies of perception, psychophysics, human or animal learning, memory, reasoning, problem solving, natural language processing, and neuropsychology.
  • Control and Reinforcement Learning: decision and control, exploration, planning, navigation, Markov decision processes, game-playing, multi-agent coordination, computational models of classical and operant conditioning.
  • Hardware Technologies: analog and digital VLSI, neuromorphic engineering, computational sensors and actuators, microrobotics, bioMEMS, neural prostheses, photonics, molecular and quantum computing.
  • Learning Theory: generalization, regularization and model selection, Bayesian learning, spaces of functions and kernels, statistical physics of learning, online learning and competitive analysis, hardness of learning and approximations, large deviations and asymptotic analysis, information theory.
  • Neuroscience: theoretical and experimental studies of processing and transmission of information in biological neurons and networks, including spike train generation, synaptic modulation, plasticity and adaptation.
  • Speech and Signal Processing: recognition, coding, synthesis, denoising, segmentation, source separation, auditory perception, psychoacoustics, dynamical systems, recurrent networks, Language Models, Dynamic and Temporal models.
  • Visual Processing: biological and machine vision, image processing and coding, segmentation, object detection and recognition, motion detection and tracking, visual psychophysics, visual scene analysis and interpretation.

Review Criteria:
New as of 2006, NIPS submissions will be reviewed double-blind: the reviewers will not know the identities of the authors. Submissions will be refereed on the basis of technical quality, novelty, potential impact on the field, and clarity. There will be an opportunity after the meeting to revise accepted manuscripts. We particularly encourage submissions by authors new to NIPS, as well as application papers that combine concrete results on novel or previously unachievable applications with analysis of the underlying difficulty from a machine learning perspective.

Paper Format: The paper format is fully described on the Conference web site. Please use the latest style file for your submission.

Submission Instructions: NIPS accepts only electronic submissions, made via the Conference web site. Submissions must be in PostScript or PDF format. The Conference web site will accept electronic submissions from May 26, 2006 until midnight, June 9, 2006, Pacific daylight time.

Demonstrations: There is a separate Demonstration track at NIPS. Authors wishing to submit to the Demonstration track should consult the Conference web site.

Organizing Committee:
General Chair --- Bernhard Schölkopf (MPI for Biological Cybernetics)
Program Chair --- John Platt (Microsoft Research)
Tutorials Chair --- Daphne Koller (Stanford)
Workshop Chairs --- Charles Isbell (Georgia Tech), Rajesh Rao (University of Washington)
Demonstrations Chairs --- Alan Stocker (New York University), Giacomo Indiveri (UNI ETH Zurich)
Publications Chair --- Thomas Hofmann (TU Darmstadt)
Volunteers Chair --- Fernando Perez Cruz (Gatsby Unit, London)
Publicity Chair --- Matthias Franz (Max Planck Institute, Tübingen)
Online Proceedings Chair --- Andrew McCallum (Univ. Massachusetts, Amherst)

Program Committee:
Chair --- John Platt (Microsoft Research)
Bob Williamson (National ICT Australia)
Cordelia Schmid (INRIA)
Corinna Cortes (Google)
Dan Ellis (Columbia University)
Dan Hammerstrom (Portland State University)
Dan Pelleg (IBM)
Dennis DeCoste (Yahoo Research)
Dieter Fox (University of Washington)
Hubert Preissl (University of Tuebingen)
John Langford (Toyota Technical Institute)
Kamal Nigam (Google)
Kevin Murphy (University of British Columbia)
Koji Tsuda (MPI for Biological Cybernetics)
Maneesh Sahani (University College London)
Neil Lawrence (University of Sheffield)
Samy Bengio (IDIAP)
Satinder Singh (University of Michigan)
Shimon Edelman (Cornell University)
Thomas Griffiths (UC Berkeley)

Saturday, May 27, 2006

I am Atwood Liu

My English name is Atwood Liu, my cyber name is popandy, and my Chinese name is 劉德成. My email/MSN is

It is my pleasure to meet each member of the lab. I hope to be well prepared for my first presentation at an upcoming lab meeting. By the way, I was very glad to have dinner with some members of the lab yesterday.

Pictured above is my girlfriend's favorite rabbit, named "Peter Rabbit",
who is also my mischievous son.

Thursday, May 25, 2006

What's New @ IEEE in Wireless, May 2006

In response to security concerns raised over placing wireless RFID tags on consumer goods, IBM is introducing a new kind of tag called the Clipped Tag. Unlike normal RFID tags, which broadcast information up to 30 feet away, the new tag allows consumers to reduce that distance down to only two inches. It works by way of a tiny antenna, which is removed after the product has been purchased. The tags will still retain all the same tracking information, such as where the product was purchased, whose credit card it was purchased on, and even the number of the card, but the ability to intercept the information with a hacked remote sensor would now require information thieves to get right up next to the product. Read more:

Game software that uses RFID tags embedded in toys to teach English to non-English-speaking children has been developed by a pair of students from Purdue University. Their game, Merlin's Magic Castle, comes with computer software, a scanner, and electronic tags which are embedded into appropriate objects. When a toy is run over the computer's scanner, the program registers the RFID, and a computer character on-screen says the toy's name or poses a question, which, according to the developers, provides auditory, visual, and tactile stimulation that promotes better comprehension and retention of information in children. Read more:

New research being conducted by scientists in Europe and the Middle East suggests that existing mobile phone masts can be used to predict the weather -- in some cases more accurately than meteorological equipment. When bad weather such as rain or electromagnetic activity is about to appear, automatic systems in the mobile masts boost their signals to ensure calls stay connected. The signal data from one provider in Tel Aviv, Haifa and Jerusalem was matched against data collected from the country's meteorological equipment, and found to be a match. The mobile masts, however, proved to cover more ground than the meteorological equipment.
Researchers in the UK also used a similar method of collecting data from global positioning satellites to measure atmospheric humidity. Professor Hagit Messer-Yaron, of the University of Tel Aviv, says the information could be used to predict hurricanes and other catastrophic events before they happen. The next step would involve getting existing mobile service providers to hand over their masts' information on a regular basis. Messer-Yaron also believes that with this knowledge, people will one day be able to read their own cell phones' reception patterns and predict their local forecast. Read more:

Wednesday, May 24, 2006

PAL lab meeting 25, May, 2006 (Casey): Fast Rotation Invariant Multi-View Face Detection Based on Real Adaboost

Title: Fast Rotation Invariant Multi-View Face Detection Based on Real Adaboost

Authors: Bo WU, Haizhou AI, Chang HUANG and Shihong LAO
(Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China)
(Sensing Technology Laboratory, Omron Corporation)

Abstract: In this paper, we propose a rotation invariant multi-view face detection method based on the Real AdaBoost algorithm [1]. Human faces are divided into several categories according to their variant appearance from different view points. For each view category, weak classifiers are configured as confidence-rated look-up tables (LUT) of Haar features [2]. The Real AdaBoost algorithm is used to boost these weak classifiers and construct a nesting-structured face detector. To make it rotation invariant, we divide the whole 360-degree range into 12 sub-ranges and construct their corresponding view-based detectors separately. To improve performance, a pose estimation method is introduced, resulting in a processing speed of four frames per second on 320 × 240 images. Experiments on faces with 360-degree in-plane rotation and ±90-degree out-of-plane rotation are reported, of which the frontal face detector subsystem retrieves 94.5% of the faces with 57 false alarms on the CMU+MIT frontal face test set and the multi-view face detector subsystem retrieves 89.8% of the faces with 221 false alarms on the CMU profile face test set.
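As a rough illustration of the boosting machinery the abstract describes, here is a minimal Real AdaBoost loop over confidence-rated LUT weak classifiers. This is a sketch, not the authors' implementation: the class and function names are mine, and the scalar features are assumed to be pre-computed Haar-like responses scaled to [0, 1].

```python
import numpy as np

class LUTWeakClassifier:
    """Confidence-rated look-up-table weak classifier over one scalar feature:
    each bin j outputs h_j = 0.5 * ln(W+_j / W-_j), as in Real AdaBoost."""
    def __init__(self, feature_idx, n_bins=8, lo=0.0, hi=1.0):
        self.f, self.n_bins, self.lo, self.hi = feature_idx, n_bins, lo, hi
        self.table = np.zeros(n_bins)

    def _bins(self, X):
        v = (X[:, self.f] - self.lo) / (self.hi - self.lo)
        return np.clip((v * self.n_bins).astype(int), 0, self.n_bins - 1)

    def fit(self, X, y, w, eps=1e-9):
        b = self._bins(X)
        Wp = np.array([w[(b == j) & (y == 1)].sum() for j in range(self.n_bins)])
        Wm = np.array([w[(b == j) & (y == -1)].sum() for j in range(self.n_bins)])
        self.table = 0.5 * np.log((Wp + eps) / (Wm + eps))
        return 2.0 * np.sqrt(Wp * Wm).sum()   # normalizer Z: smaller is better

    def predict(self, X):
        return self.table[self._bins(X)]

def real_adaboost(X, y, n_rounds=10):
    """Boost LUT weak classifiers on labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        # pick the feature whose LUT minimizes Z under the current weights
        cands = [LUTWeakClassifier(j) for j in range(d)]
        zs = [c.fit(X, y, w) for c in cands]
        best = cands[int(np.argmin(zs))]
        h = best.predict(X)
        w *= np.exp(-y * h)
        w /= w.sum()
        ensemble.append(best)
    return ensemble

def score(ensemble, X):
    """Real-valued ensemble output; classify with its sign."""
    return sum(c.predict(X) for c in ensemble)
```

The feature-selection rule, minimizing Z = 2·Σ_j √(W⁺_j W⁻_j), is the criterion Real AdaBoost prescribes; a cascade/nesting structure as in the paper would additionally threshold partial sums of the score to reject non-faces early.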

Tuesday, May 23, 2006

The 2006 IEEE International Conference on Robotics and Automation

The proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA) are available at our ftp server. Please take a look and present good/interesting papers at our lab meetings.


Monday, May 22, 2006

PAL lab meeting 25, May, 2006 (Vincent): Video-based face recognition using probabilistic appearance manifolds

Title : Video-based face recognition using probabilistic appearance manifolds
Authors:
Kuang-Chih Lee @ UIUC
Jeffrey Ho @ UIUC
Ming-Hsuan Yang @ Honda-RI
David Kriegman @ UCSD

This paper appeared in CVPR 2003.

Abstract :
This paper presents a method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space; this complex nonlinear appearance manifold is expressed as a collection of subsets (named pose manifolds) and the connectivity among them. Each pose manifold is approximated by an affine plane. To construct this representation, exemplars are sampled from videos and clustered with a K-means algorithm; each cluster is represented as a plane computed through principal component analysis (PCA). The connectivity between the pose manifolds encodes the transition probability between images in each pose manifold and is learned from training video sequences. A maximum a posteriori formulation is presented for face recognition in test video sequences by integrating the likelihood that the input image comes from a particular pose manifold and the transition probability to this pose manifold from the previous frame. To recognize faces with partial occlusion, the authors introduce a weight mask into the process. Extensive experiments demonstrate that the proposed algorithm outperforms existing frame-based face recognition methods with temporal voting schemes.
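The construction the abstract describes (exemplars clustered with K-means, an affine PCA plane per pose cluster, and a transition-weighted recursive update) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: all names are mine, the likelihood form exp(-beta * distance) is an assumption, and K-means here uses a simple farthest-point initialization.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm) with farthest-point initialization,
    used to split exemplar frames into pose clusters."""
    rng = np.random.default_rng(seed)
    C = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d2 = np.min([((X - c) ** 2).sum(1) for c in C], axis=0)
        C.append(X[int(np.argmax(d2))])      # farthest point from chosen centers
    C = np.array(C)
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

class PoseManifold:
    """Affine plane (mean + top principal directions) fit to one pose cluster."""
    def __init__(self, X, dim=2):
        self.mu = X.mean(0)
        _, _, Vt = np.linalg.svd(X - self.mu, full_matrices=False)
        self.B = Vt[:dim]                    # orthonormal basis of the plane

    def dist(self, x):
        r = x - self.mu
        return np.linalg.norm(r - self.B.T @ (self.B @ r))   # off-plane residual

def build_manifolds(frames, k=3, dim=2):
    lab = kmeans(frames, k)
    return [PoseManifold(frames[lab == j], dim) for j in range(k)]

def posterior_step(prior, manifolds, x, T, beta=1.0):
    """One recursive MAP-style update: distance-based likelihood of the current
    frame times the pose-transition prior T[j, k]."""
    like = np.exp(-beta * np.array([m.dist(x) for m in manifolds]))
    post = like * (prior @ T)
    return post / post.sum()
```

At test time, `posterior_step` is applied frame by frame, mirroring the paper's recursion: the posterior over pose manifolds combines how well the current image fits each affine plane with how plausible the pose transition is given the previous frame.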

You can find the pdf file here

Sunday, May 21, 2006

Lab meeting schedule this week.

As I will give a talk at National Taipei University this Wednesday, we will have our lab meeting at 10:30 AM, this Thursday. Note that we will have our advisee meetings after 3 PM this Wednesday.


Thursday, May 18, 2006

CMU Thesis proposal: Common Ground-Based Human-Robot Interaction

Kristen Stubbs, Robotics Institute
Carnegie Mellon University

Building more effective human-robot systems requires that we understand the complex interactions that occur within such systems. As human-robot interaction (HRI) develops, we are becoming more ambitious about the types of interactions we envision for our robots and their users. In particular, we have become interested in the deployment of autonomous robots that are designed to work in complex, real-world settings. Our users are not likely to be experts in robotics, and they may possess inaccurate mental models of robotic technologies. In order to facilitate successful interactions, I am interested in promoting common ground between users and robots: that is, I wish to increase users' understanding of robots and foster accurate mental models, and, at the same time, enhance robots' understanding of users and their goals.

In particular, through an ethnographic study of the Life in the Atacama project, I have documented a number of challenges faced by a group of scientists as they used a remotely-located, autonomous robot to explore the Atacama Desert in Chile. I have used this data to hypothesize a basic, operational model of how scientists generate plans for the robot. I then use the data from this study to illustrate the need for three particular components which together form the proposed work: a representation of the scientists' goals, a plan validation system which alerts the science team when a given plan cannot meet these goals, and a plan repair system which, at execution time, uses the science goals to make intelligent decisions about what to do in the event actions fail. The major contributions of this work include detailed analyses of a particular human-robot system, explicit modeling of common ground, and software systems which will improve the grounding process and task performance for exploration robotics-related tasks.

Further Details
A copy of the thesis proposal document can be found at


Researchers in Texas and Utah report they have created a new means of producing three-dimensional embryonic images called microCT-based virtual histology. The process uses computer visualization techniques to convert X-ray CT scans of mouse embryos into detailed three-dimensional images showing both the mouse's exterior and interior. Normally, embryos are sliced up physically and examined under a microscope, a very time-consuming method. With the new process, the embryos are instead stained with special dyes which permeate the skin and other membranes. The team of researchers wrote a new computer algorithm to take the CT scan data and automatically distinguish various organs and structures in the embryo. The virtual rendering of the CT scan data also includes a virtual light source, so the 3-D image includes shadows that make it easier for the human eye to interpret. The embryo images can be made transparent and have cutaways so that internal organs and body parts are visible. The process allows researchers to study more embryos much faster than before. Mouse embryos are typically used in genetic studies and to test the safety of drugs and various chemicals. Read more:

Tuesday, May 16, 2006

CMU thesis proposal: Approaches for approximate inference, structure learning and computing event probability bounds in undirected graphical models

Date: 5/17/06 (Wednesday)
Time: 3:00pm
Place: 3305 Newell-Simon Hall
Speaker: Pradeep Ravikumar, PhD Candidate

Graphical models are a powerful tool for representing and manipulating probability distributions which are used in many fields, including image processing, document analysis, and error-control coding. In this thesis, we develop new approaches to three key tasks for undirected graphical models: inference, structure learning, and computing event probability bounds.

For the inference task of estimating the log-partition function and general event probabilities, we propose a preconditioner-based family of techniques. As with generalized mean field methods, the preconditioner approach focuses on sparse subgraphs, but optimizes linear system condition numbers rather than relative entropy. For the inference task of computing the maximum a posteriori (MAP) configuration, we propose a quadratic programming relaxation that is potentially more powerful than linear program relaxations and belief propagation approximations that are the current state of the art. The quadratic program relaxation is generally tight, but under certain conditions results in a non-convex problem, for which we propose a convex approximation with an additive bound guarantee. For the task of computing event probability bounds, we propose a family of generalized variational Chernoff bounds for graphical models, where we "variationally" derive probability bounds in terms of convex optimization problems involving certain support functions and a difference of log partition functions. For the task of learning the structure of the graph, instead of the standard heuristic search through the combinatorial space of graph structures, we are considering a parametrized optimization approach by looking at alternative parametrizations of the graph structure variable.

Thesis Committee:
John Lafferty (Chair)
Carlos Guestrin
Martin Wainwright (Univ. of California at Berkeley)
Eric Xing

CMU Thesis proposal: Stacked Graphical Learning

Date: 5/15/06 (Monday)
Time: 1:30pm
Place: 3305 Newell-Simon Hall
Speaker: Zhenzhen Kou, PhD Candidate

Traditional machine learning methods assume that instances are independent, while in reality there are many relational datasets, such as hyperlinked web pages, scientific literature with dependencies among citations, social networks, and more. Recent work on graphical models has demonstrated performance improvements on relational data. In my thesis I plan to study a meta-learning scheme called stacked graphical learning. Given a relational template, the stacked graphical model augments a base learner by expanding one instance's features with predictions on other related instances. Stacked graphical learning is efficient, captures dependencies easily, and can be built on any kind of base learner. The thesis proposal describes the algorithm for stacked graphical models, evaluates the approach on some real-world data, and compares its performance to other methods. For my thesis I plan to explore more of the strengths and weaknesses of the approach, and apply it to tasks in the SLIF system.
Thesis Committee:
William Cohen
David Jensen (Univ. of Massachusetts, Amherst)
Tom Mitchell
Robert Murphy

Monday, May 15, 2006

PAL lab meeting 17, May, 2006 (Stanley): Nearness Diagram (ND) Navigation: Collision Avoidance in Troublesome Scenarios

Author: Javier Minguez, Associate Member, IEEE, and Luis Montano, Member, IEEE

Abstract: This paper addresses reactive collision avoidance for vehicles that move in very dense, cluttered, and complex scenarios. First, we describe the design of a reactive navigation method that uses a "divide and conquer" strategy based on situations to simplify the difficulty of the navigation. Many techniques could be used to implement this design (since it is described at the symbolic level), leading to new reactive methods that are able to navigate in arduous environments (as the difficulty of the navigation is simplified). We also propose a geometry-based implementation of our design called nearness diagram navigation. The advantage of this reactive method is that it successfully moves robots in troublesome scenarios, where other methods have great difficulty navigating. We show experimental results on a real vehicle to validate this research, and discuss the advantages and limitations of this new approach.

local link.

PAL lab meeting 17, May, 2006 (Chihao): Microphone Arrays (a tutorial)

Microphone Arrays: A tutorial
April 2001
Author: Iain McCowan
B Eng, B InfoTech, PhD

What are microphone arrays?

Microphone arrays consist of multiple microphones at different locations. Using sound propagation principles, the individual microphone signals can be filtered and combined to enhance sound originating from a particular direction or location. The location of the principal sound sources can also be determined dynamically by investigating the correlation between different microphone channels.
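The simplest such filter-and-combine scheme is delay-and-sum beamforming: time-align each channel for a chosen steering direction, then average. A minimal sketch, assuming a linear two-microphone array and integer-sample delays (real systems interpolate fractional delays):

```python
import math

# Delay-and-sum beamforming sketch: shift each microphone signal by the
# propagation delay for a chosen steering direction, then average the
# channels. Geometry and signal values are illustrative.

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(mic_positions, angle_rad):
    """Per-microphone delay (seconds) for a far-field source at angle_rad,
    for a linear array laid out along the x axis."""
    direction = math.cos(angle_rad)
    return [p * direction / SPEED_OF_SOUND for p in mic_positions]

def delay_and_sum(signals, delays, fs):
    """Shift each channel by its (integer-sample) delay and average."""
    shifts = [round(d * fs) for d in delays]
    n = len(signals[0])
    out = []
    for t in range(n):
        acc, count = 0.0, 0
        for sig, s in zip(signals, shifts):
            if 0 <= t + s < n:
                acc += sig[t + s]
                count += 1
        out.append(acc / max(count, 1))
    return out

# Two mics 0.5 m apart, 8 kHz sampling, source broadside (90 degrees):
fs = 8000
mics = [0.0, 0.5]
tone = [math.sin(2 * math.pi * 500 * t / fs) for t in range(64)]
signals = [tone, tone]  # a broadside source arrives in phase at both mics
delays = steering_delays(mics, math.pi / 2)  # steering broadside: zero delay
enhanced = delay_and_sum(signals, delays, fs)
```

Steering toward the source adds its channels coherently while off-axis noise averages down, which is the enhancement effect described above.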


Saturday, May 13, 2006

News: Flying Robot Attack Called Unstoppable

May 11, 2006— It may sound like science fiction, but the prospect that suicide bombers and hijackers could be made redundant by flying robots is a real one, according to experts.

The technology for remote-controlled light aircraft is now highly advanced, widely available and, experts say, virtually unstoppable.

Models with a wingspan of five meters (16 feet), capable of carrying up to 50 kilograms (110 pounds), remain undetectable by radar.

And thanks to satellite positioning systems, they can now be programmed to hit targets some distance away, falling just a few meters (yards) short of pinpoint accuracy.

The full article.

Stanford talk: Graphical Models, Distributed Fusion, and Sensor Networks

Alan Willsky
May 15, 2006, 4:15PM

In this talk we provide a picture of one group's journey through a set of related research topics and lines of inquiry. The point of departure for this talk is our group's work on multiresolution models defined on trees. We provide a brief overview of the nature of the results from that research, and then turn to work that we've pursued fueled by the limitations of models defined on trees rather than on more general graphs. The study of Markov models defined on graphs with loops is a very rich and active field, finding applications in a surprisingly wide array of disciplines, and challenging theoreticians and algorithm developers to devise methods that are both computationally tractable and high-performance. We provide a picture of some of our contributions in this area, all of which build (in one way or another) on our work on models defined on trees but that also make explicit contact with the rich class of so-called "message-passing" algorithms (such as the celebrated "Belief Propagation" (BP) algorithm) for graphical models. Among the contributions we will mention are recursive cavity modeling (RCM) algorithms that blend tree-based estimation with ideas in information geometry to lead to algorithms that allow scalable solution of very large estimation problems; the concept of "walk-sums" for graphical models and the new theoretical results they admit for belief propagation; and Nonparametric Belief Propagation, an approach that involves a nontrivial extension of the idea of particle filtering to message-passing algorithms.

We also describe our growing investigation of distributed fusion algorithms for sensor networks, in which there is a natural graph associated with network connectivity, as well as possibly two other graphs: one relating the variables that are sensed to those that are to be estimated, and a second relating the sources of information to the desired "sinks" (i.e., to nodes with responsibility for certain actions). We are still early in this investigation, but we describe several results, including some on what we call "message-censoring," in which a sensor may decide not to send a BP message; empirical studies of message-censoring motivated a theoretical investigation into the propagation of messaging errors in BP, a study that has also produced the tightest results to date for BP convergence. We also describe our results on efficient communication of messages and the tradeoff between communication load and performance, and on sensor resource management, in which we take into account not just the power cost of taking a measurement and communicating a message but also of dynamically "handing off" responsibility for estimation from one node to another. Further, in some initial work on the rapprochement of message-passing algorithms and decentralized detection, we note that an important component of sensor network activity is "self-organization" and describe, for a simple scenario, how a team-decision problem can (a) be solved via a message-passing algorithm and (b) lead to what can be thought of as a network protocol coupling the physical and application layers.

About the Speaker
Dr. Willsky has held visiting positions at Imperial College, London, L'Universite de Paris-Sud, and the Institut de Recherche en Informatique et Systemes Aleatoires (IRISA) in Rennes, France. Dr. Willsky has given a number of plenary and keynote lectures at major scientific meetings. He is the author of the research monograph Digital Signal Processing and Control and Estimation Theory and is co-author of the undergraduate text Signals and Systems. He has published more than 180 journal publications and 300 conference papers. In 1975 he received the Donald P. Eckman Award from the American Automatic Control Council. He was awarded the 1979 Alfred Noble Prize by the ASCE and the 1980 Browder J. Thompson Memorial Prize Award by the IEEE for a paper excerpted from his monograph, and he recently received the 2004 Donald G. Fink Award from the IEEE. Dr. Willsky and his students, colleagues, and postdoctoral associates have received a variety of Best Paper Awards at various conferences, most recently including the 2001 IEEE Conference on Computer Vision and Pattern Recognition, the 2002 Symposium on Uncertainty in Artificial Intelligence, the 2003 Spring Meeting of the American Geophysical Union, the 2004 International Conference on Information Processing in Sensor Networks, the 2004 Neural Information Processing Symposium, and Fusion 2005. In addition, in October 2005, Dr. Willsky was presented with a Doctorat Honoris Causa from the Universite de Rennes in connection with the 30th anniversary of the establishment of IRISA.

Dr. Willsky is the leader of MIT's Stochastic Systems Group. Prof. Willsky's research has focused on both theoretical and applied problems in statistical signal and image processing. His early work on methods for failure detection in dynamic systems is still widely cited and used in practice, and his more recent research on multiresolution methods for large-scale data fusion and assimilation has found application in fields including target tracking, object recognition, fusion of nontraditional data sources, oil exploration, oceanographic remote sensing, and groundwater hydrology. Dr. Willsky's present research interests are in problems involving multidimensional and multiresolution estimation and imaging, inference algorithms for graphical and relational models, statistical image and signal processing, data fusion and estimation for complex systems, image reconstruction, and computer vision.

Stanford talk: Visual Recognition for Perceptive Interfaces

Trevor Darrell
May 8, 2006, 4:15PM

Devices should be perceptive, and respond directly to their human user and/or environment. In this talk I'll present new computer vision algorithms for fast recognition, indexing, and tracking that make this possible, enabling multimodal interfaces which respond to users' conversational gesture and body language, robots which recognize common object categories, and mobile devices which can search using visual cues of specific objects of interest. I'll describe in detail a method for image indexing and recognition of object categories based on a new kernel function over sets of local features that approximates the true correspondence-based similarity between set elements. Our pyramid match efficiently forms an implicit partial matching between two sets of feature vectors. The matching has linear time complexity and is robust to clutter or outlier features--a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. With this technique, mobile devices can recognize locations and gather information about newly encountered objects by finding matching images on the web or other available databases.
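A much-simplified sketch of the pyramid match idea, here for sets of 1-D features: intersect histograms at doubling bin widths and weight matches first found at level i by 1/2^i, so coarser (less precise) matches count for less. The binning scheme and parameters are illustrative, not the authors' implementation.

```python
# Pyramid match sketch over two sets of 1-D integer features: histogram
# intersection at each resolution counts (implicit) matches; only matches
# new at a level are credited, down-weighted by the bin size at that level.

def histogram(points, bin_size, max_value):
    bins = [0] * (max_value // bin_size + 1)
    for p in points:
        bins[p // bin_size] += 1
    return bins

def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

def pyramid_match(set_x, set_y, max_value=16, levels=4):
    score, prev = 0.0, 0
    for i in range(levels):
        inter = intersection(histogram(set_x, 2 ** i, max_value),
                             histogram(set_y, 2 ** i, max_value))
        score += (inter - prev) / (2 ** i)  # credit only *new* matches
        prev = inter
    return score

# Identical sets match fully at the finest level:
print(pyramid_match([1, 5, 9], [1, 5, 9]))
```

The cost is linear in the number of features per set, which is the complexity advantage claimed over explicit correspondence search.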

About the Speaker
Trevor Darrell is an Associate Professor of Electrical Engineering and Computer Science at M.I.T. He leads the Vision Interface Group at the Computer Science and Artificial Intelligence Laboratory. His interests include computer vision, interactive graphics, and machine learning. Prior to joining the faculty of MIT he worked as a Member of the Research Staff at Interval Research in Palo Alto, CA, researching vision-based interface algorithms for consumer applications. He received his PhD and SM from the MIT Media Lab in 1996 and 1991, and the BSE while working at the GRASP Robotics Laboratory at the University of Pennsylvania in 1988.

MIT talk: Video Based System Monitoring

Speaker: Brian Anthony, MIT CSAIL
Date: Monday, May 15 2006
Time: 2:00PM to 3:00PM

In this talk we present new algorithms for Video Event Analysis and Video Event Detection. This work is motivated by applications for video based [automated] system monitoring in industrial, manufacturing, and research environments. We discuss the unique attributes, constraints, and needs of such environments.

We develop deterministic algorithms that simultaneously move-and-stretch space-and-time to determine model-free similarity measures between an example [template] video and an unknown video. The similarity measures themselves are functions of space and time. We demonstrate the applicability of such similarity measures to industrial wear monitoring, failure prediction, and assembly line feedback control. We then demonstrate the applicability to non-industrial environments with examples in sports, surveillance, and entertainment.

We extend the similarity machinery and introduce a new technique for Video Event Detection. We demonstrate the applicability to content query; we identify the temporal and spatial location inside a large video stream that is similar to a query [template] video. We explore the performance degradation and robustness of the Video Event Analysis and Video Event Detection algorithms under various types of noise via simulation and distortion of real examples. We develop techniques, particularly important in the context of industrial applications, to aid the engineer in selecting a video template that is relevant to their application and locally robust to various types of noise.

We conclude with a discussion of how some of these techniques are currently applied to the broadcast of the PGA Tour on CBS, for which the speaker recently won an Emmy Award from the National Television Academy. (May 1, 2006. 27th Annual Sports Emmy for Innovative Technical Achievement)

AFFILIATIONS: Department of Mechanical Engineering MIT. Visiting Lecturer in Sloan. CTO Xcitex Inc.

MIT thesis defense: Learning Continuous Models for Estimating Intrinsic Component Images

Speaker: Marshall Tappen , MIT CSAIL
Date: Tuesday, May 16 2006
Time: 10:30AM to 11:30AM

Interpreting an image of a scene is difficult because the various characteristics of the scene contribute to its appearance. For example, an edge in an image could be caused by either an edge on a surface or a change in the surface's color. Distinguishing the effects of different scene characteristics is an important step towards high-level analysis of an image.

This talk will describe how to use machine learning to build a system that recovers different characteristics of the scene from a single, gray-scale image of the scene. Using the observed image, the system estimates a shading image, which captures the interaction of the illumination and shape of the scene pictured, and an albedo image, which represents how the surfaces in the image reflect light. Measured both qualitatively and quantitatively, this system produces state-of-the-art estimates of shading and albedo images. This system is also flexible enough to be used for the separate problem of removing noise from an image.

Building this system requires algorithms for continuous regression and learning the parameters of a Conditionally Gaussian Markov Random Field. Unlike previous work, this system is trained using real-world surfaces with ground-truth shading and albedo images.

Committee Members:
Professor Edward Adelson
Professor William Freeman
Professor Michael Collins

Friday, May 12, 2006

News: Shape-shifting car will brace for impact

15:34 10 May 2006 news service
Tom Simonite

A car that can anticipate a side-on impact and subtly alter its body shape to absorb the force of the crash is being developed by researchers in Germany.

The car will use hood-mounted cameras and radar to spot a vehicle on course for a side-on collision. Once it realises an impact is imminent it will activate a shape-shifting metal in the door. This reinforces the bond between door and frame, which is normally a weak spot, and distributes the force of the blow more safely.

The full article.

News: Robo-roach could betray real cockroaches

16:29 09 May 2006 news service
Tom Simonite

The tiny robot smells and acts just like a cockroach, fooling the insects into accepting it as one of their own. The full article.

Monday, May 08, 2006

PAL lab meeting 10, May, 2006 (Bright): Detecting and tracking multiple interacting objects without class-specific models


We propose a framework for detecting and tracking multiple interacting objects from a single, static, uncalibrated camera. The number of objects is variable and unknown, and object-class-specific models are not available. We use background subtraction results as measurements for object detection and tracking. Given these constraints, the main challenge is to associate pixel measurements with (possibly interacting) object targets. We first track clusters of pixels, and note when they merge or split. We then build an inference graph representing relations between the tracked clusters. Using this graph and a generic object model based on spatial connectedness and coherent motion, we label the tracked clusters as whole objects, fragments of objects, or groups of interacting objects. The outputs of our algorithm are entire tracks of objects, which may include corresponding tracks from groups of objects during interactions. Experimental results on multiple video sequences are shown.
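The merge/split bookkeeping step in the abstract can be sketched as follows: associate pixel clusters across frames by spatial overlap and record when tracked clusters merge or split. The cluster representation (sets of pixel coordinates) and the overlap rule are illustrative simplifications, not the paper's method.

```python
# Sketch of cross-frame cluster association with merge/split detection.
# A "merge" is one current cluster overlapping several previous clusters;
# a "split" is one previous cluster overlapping several current clusters.

def associate(prev_clusters, curr_clusters):
    """Link each current cluster to the previous clusters it overlaps,
    and record merge/split events for the inference graph."""
    events = []
    links = {c: [p for p in range(len(prev_clusters))
                 if prev_clusters[p] & curr_clusters[c]]
             for c in range(len(curr_clusters))}
    for c, ps in links.items():
        if len(ps) > 1:
            events.append(("merge", tuple(ps), c))
    for p in range(len(prev_clusters)):
        children = [c for c, ps in links.items() if p in ps]
        if len(children) > 1:
            events.append(("split", p, tuple(children)))
    return links, events

# Frame t: two separate blobs; frame t+1: a single blob touching both (merge).
prev = [{(0, 0), (0, 1)}, {(5, 5), (5, 6)}]
curr = [{(0, 1), (5, 5), (3, 3)}]
links, events = associate(prev, curr)
print(events)  # → [('merge', (0, 1), 0)]
```

These events are exactly the edges an inference graph over tracked clusters would record before labeling clusters as whole objects, fragments, or groups.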


Thesis Defense: Long-Range Video Motion Estimation using Point Trajectories

Speaker: Peter Sand , MIT - CSAIL - Computer Graphics Group
Date: Tuesday, May 9 2006
Time: 4:15PM to 5:15PM
Refreshments: 4:00PM
Location: 32-D463 (Star)
Host: Seth Teller, MIT - CSAIL - CGG & RVSN
Contact: Britton 'Bryt' Bradley, 617-253-6583,
Relevant URL

This talk presents a new approach to video motion estimation, in which motion is represented using a set of particles. Each particle is an image point sample with a long-duration trajectory and other properties. To optimize these particles, we measure point-based matching along the particle trajectories and distortion between the particles. The resulting motion representation is useful for a variety of applications and differs from optical flow, feature tracking, and parametric or layer-based models. We demonstrate the algorithm on challenging real-world videos that include complex scene geometry, multiple types of occlusion, regions with low texture, and non-rigid deformation.

Thesis Supervisor: Seth Teller
Committee: Berthold K.P. Horn, William Freeman

CMU: VASC seminar Monday: Jake Sprouse

Lane Structure Modeling and Tracking

Jake Sprouse

Time : Monday, May 8, 2006
3:30 p.m. - 5:00 p.m.
(goodies at 3:20pm)
Location : 3305 Newell Simon Hall

Abstract :
In this talk I will discuss recent work in road structure modeling and tracking as part of the Denso project. Previous work in lane monitoring has focused on the "ego-lane" and does not attempt to model adjacent lanes, on/off ramps, intersections, etc. We have taken preliminary steps towards this broader goal. I will first present work on detecting stripe-like features using steerable filters. Then I will discuss our attempt at tracking road stripes using a multiple-hypothesis linear/Gaussian approach. Finally I will show preliminary results in using particle filters to track a richer road structure model.

Bio :
Jake Sprouse is a PhD candidate in the CMU Robotics Institute. His research is towards scene understanding using contextual relationships between object classes. He received the BS degree in Mathematical and Computer Sciences from the Colorado School of Mines in Golden, Colorado, and spent three years as a programmer for Nomadic Technologies, Inc. in Mountain View, California.

PAL lab meeting 10, May, 2006 (Eric): Methods for Indexing Stripes in Uncoded Structured Light Scanning Systems

Alan Robinson, Lyuba Alboul and Marcos Rodrigues
This paper presents robust methods for determining the order of a sequence of stripes captured in an uncoded structured light scanning system, i.e. where all the stripes are projected with uniform colour, width and spacing. A single bitmap image shows a pattern of vertical stripes from a projected source, which are deformed by the surface of the target object. If a correspondence can be determined between the projected stripes and those captured in the bitmap, a spatial measurement of the surface can be derived using standard rangefinding methods. Previous work has uniquely encoded each stripe, such as by colour or width, in order to avoid ambiguous stripe identification. However, colour coding suffers due to uneven colour reflection, and a variable width code reduces the measured resolution. To avoid these problems, we simplify the projection as a uniform stripe pattern, and devise novel methods for correctly indexing the stripes, including a new common inclination constraint and occlusion classification. We give definitions of patches and the continuity of stripes, and measure the success of these methods. Thus we eliminate the need for coding, and reduce the accuracy required of the projected pattern; and, by dealing with stripe continuity and occlusions in a new manner, provide general methods which have relevance to many structured light problems.

Friday, May 05, 2006

Multi-Target Tracking - Linking Identities using Bayesian Network Inference

Peter Nillius, Josephine Sullivan and Stefan Carlsson

Multi-target tracking requires locating the targets and labeling their identities. The latter is a challenge when many targets with indistinct appearances frequently occlude one another, as in football and surveillance tracking. We present an approach to solving this labeling problem. When isolated, a target can be tracked and its identity maintained; when targets interact, this is not always the case. This paper assumes a track graph exists, denoting when targets are isolated and describing how they interact. Measures of similarity between isolated tracks are defined. The goal is to associate the identities of the isolated tracks by exploiting the graph constraints and similarity measures. We formulate this as a Bayesian network inference problem, allowing us to use standard message propagation to find the most probable set of paths in an efficient way. The high complexity inevitable in large problems is gracefully reduced by removing dependency links between tracks. We apply the method to a 10-minute sequence of an international football game and compare results to ground truth.


Thursday, May 04, 2006

CMU ML Lunch talk: Smoothed Dirichlet distribution: Understanding the Cross-entropy

Speaker: Ramesh Nallapati from the University of Massachusetts Amherst
Date: May 08
Title: Smoothed Dirichlet distribution: Understanding the Cross-entropy ranking function in Information Retrieval

Unigram language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses the multinomial distribution to model documents and queries. An important feature of this approach is the use of the cross-entropy between the query model and document models as a document ranking function. The Naive Bayes model for text classification uses the same multinomial distribution to model documents but, in contrast, employs document log-likelihood as a scoring function. Curiously, the cross-entropy function roughly corresponds to query log-likelihood w.r.t. the document models, in some sense an inverse of the scoring function used in the Naive Bayes model. It has been empirically demonstrated that cross-entropy is a better performer than document likelihood, but this interesting phenomenon remains largely unexplained. In this work we investigate the cross-entropy ranking function in IR. In particular, we show that the cross-entropy ranking function corresponds to the log-likelihood of documents w.r.t. the approximated Smoothed Dirichlet (SD) distribution, a novel variant of the Dirichlet distribution. We also empirically demonstrate that this new distribution captures term occurrence patterns in documents much better than the multinomial, thus offering a reason for the superior performance of the cross-entropy ranking function compared to the multinomial document likelihood.

Our experiments in text classification show that a classifier based on the Smoothed Dirichlet performs significantly better than the multinomial based Naive Bayes model and on par with the SVMs, confirming our reasoning. We also construct a well-motivated classifier for IR based on SD distribution that uses the EM algorithm to learn from pseudo-feedback and show that its performance is equivalent to the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that also uses cross-entropy as its ranking function. In addition, the SD based classifier provides more flexibility than RM in modeling queries of varying lengths owing to a consistent generative framework. We demonstrate that this flexibility translates into a superior performance compared to RM on the task of topic tracking, an on-line classification task.
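For concreteness, the cross-entropy ranking function discussed above can be sketched for bag-of-words documents: score each document by the query's log-likelihood under a smoothed document language model. The Jelinek-Mercer (linear interpolation) smoothing and the toy corpus are illustrative choices, not details from the talk.

```python
import math

# Cross-entropy (query-likelihood) ranking sketch: each document is scored
# by the query's log-likelihood under a document model smoothed with the
# collection model. Higher score = better-ranked document.

def counts(words):
    c = {}
    for w in words:
        c[w] = c.get(w, 0) + 1
    return c

def smoothed_model(doc, collection, lam=0.5):
    """Jelinek-Mercer smoothing: mix document and collection frequencies."""
    dc, cc = counts(doc), counts(collection)
    n, m = len(doc), len(collection)
    return lambda w: lam * dc.get(w, 0) / n + (1 - lam) * cc.get(w, 0) / m

def cross_entropy_score(query, doc, collection):
    """Query log-likelihood under the document model."""
    model = smoothed_model(doc, collection)
    return sum(math.log(model(w)) for w in query)

docs = [["robot", "arm", "control"], ["stock", "market", "news"]]
collection = [w for d in docs for w in d]
query = ["robot", "control"]
scores = [cross_entropy_score(query, d, collection) for d in docs]
print(scores.index(max(scores)))  # → 0: the robotics document ranks first
```

The Naive Bayes scoring function discussed above would instead sum log-probabilities over the *document's* words under a class model, which is the inversion the abstract highlights.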


CMU master's thesis: Data Structure for Efficient Dynamic Processing in 3-D

J. Lalonde
master's thesis, tech. report CMU-RI-TR-06-22, Robotics Institute, Carnegie Mellon University, May, 2006.



In this paper, we consider the problem of the dynamic processing of large amounts of sparse three-dimensional data. It is assumed that computations are performed in a neighborhood defined around each point in order to retrieve local properties. This general kind of processing can be applied to a wide variety of applications. We propose a new, efficient data structure and corresponding algorithm that significantly improve the speed of the range search operation and that are suitable for on-line operation, where data is accumulated dynamically. The method relies on taking advantage of overlapping neighborhoods and the reuse of previously computed data as the algorithm scans each data point. To demonstrate the dynamic capabilities of the data structure, we use data obtained from a laser radar mounted on a ground mobile robot operating in complex, outdoor environments. We show that this approach considerably improves the speed of an established 3-D perception processing algorithm.
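The basic idea of binning sparse 3-D points so that a range search inspects only nearby cells can be sketched as follows. The paper's data structure goes further by reusing computations across overlapping neighborhoods as the scan proceeds; this sketch shows only the binning and on-line insertion, with illustrative names throughout.

```python
from collections import defaultdict

# Voxel-grid range search sketch for sparse 3-D points: bin points into
# cells so a neighbourhood query inspects only nearby cells instead of
# every stored point. insert() supports on-line data accumulation.

class VoxelGrid:
    def __init__(self, cell_size):
        self.cell = cell_size
        self.grid = defaultdict(list)

    def _key(self, p):
        return tuple(int(c // self.cell) for c in p)

    def insert(self, p):
        self.grid[self._key(p)].append(p)

    def range_search(self, center, radius):
        """All stored points within `radius` of `center`."""
        r2 = radius * radius
        kx, ky, kz = self._key(center)
        reach = int(radius // self.cell) + 1
        out = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for p in self.grid.get((kx + dx, ky + dy, kz + dz), ()):
                        if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2:
                            out.append(p)
        return out

g = VoxelGrid(cell_size=1.0)
for p in [(0, 0, 0), (0.5, 0.5, 0), (5, 5, 5)]:
    g.insert(p)
print(len(g.range_search((0, 0, 0), 1.0)))  # → 2
```

When neighborhoods are computed around every point in scan order, adjacent queries visit largely the same cells, which is the overlap the paper's algorithm exploits.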

Wednesday, May 03, 2006

PAL lab meeting 4, May, 2006 (Jim): Self-calibration and metric 3D reconstruction from images

M. Pollefeys, R. Koch, L. Van Gool. Self-Calibration and Metric Reconstruction in spite of Varying and Unknown Internal Camera Parameters, Proc. ICCV'98, pp. 90-95, Bombay, 1998. Joint winner of the David Marr prize (best paper). (PollefeysICCV98.pdf)

M. Pollefeys, Self-calibration and metric 3D reconstruction from uncalibrated image sequences, Ph.D. Thesis, ESAT-PSI, K.U.Leuven, 1999, Scientific Prize BARCO 1999. (PollefeysPhD.pdf)

Pollefeys' website.

Abstract (PollefeysICCV98.pdf):
      In this paper the feasibility of self-calibration in the presence of varying internal camera parameters is under investigation. A self-calibration method is presented which efficiently deals with all kinds of constraints on the internal camera parameters. Within this framework a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples.

Tuesday, May 02, 2006

CMU VASC seminar : Spectral Rounding: with Applications in Image Segmentation and Clustering

Title : Spectral Rounding: with Applications in Image Segmentation and Clustering

Presenter : David Tolliver

I'll discuss a novel family of spectral partitioning methods. Edge separators of a graph are produced by iteratively reweighting the edges until the graph disconnects into the prescribed number of components. At each iteration a small number of eigenvectors with small eigenvalue are computed and used to determine the reweighting. In this way spectral rounding directly produces discrete solutions, whereas current spectral algorithms must map the continuous eigenvectors to discrete solutions by employing a heuristic geometric separator (e.g. k-means). We show that spectral rounding compares favorably to current spectral approximations on the Normalized Cut criterion (NCut). Results are given for natural image segmentation, medical image segmentation, and clustering. A simple version is shown to converge. This is joint work with Gary Miller in the Computer Science Department at CMU.
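For contrast, the classical spectral approach the abstract mentions maps a continuous eigenvector to a discrete cut by rounding: compute the Fiedler vector (the Laplacian eigenvector with second-smallest eigenvalue) and split the graph by sign. A self-contained sketch using power iteration; the example graph and the shift constant are illustrative.

```python
# Classical spectral bisection sketch: build the graph Laplacian, find the
# Fiedler vector via shifted power iteration (projecting out the all-ones
# eigenvector), and round the continuous values to a cut by sign.

def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    return L

def fiedler_vector(L, iters=500):
    n = len(L)
    c = 2 * max(L[i][i] for i in range(n))  # shift so cI - L is PSD
    v = [(-1) ** i for i in range(n)]
    for _ in range(iters):
        mean = sum(v) / n                   # project out the constant
        v = [x - mean for x in v]           # (zero-eigenvalue) component
        w = [c * v[i] - sum(L[i][k] * v[k] for k in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two triangles joined by a single edge: the bridge should be cut.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
v = fiedler_vector(laplacian(6, edges))
partition = [1 if x > 0 else 0 for x in v]
print(partition)  # nodes 0-2 end up on one side, 3-5 on the other
```

Spectral rounding, as described in the talk, avoids this thresholding step by reweighting edges until the graph itself disconnects.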

More info about VASC seminar.

Machine Learning List

Machine Learning List: Volume 18, Number 4
Monday, May 1, 2006

Administrative announcements
New ML List format
Calls for Papers and Participation
ILP 2006
ABiALS Workshop 2006
ICGI 2006
ICML Workshop on Learning in Structured Output Spaces
AAI Special Issue on Applications of Grammatical Inference
ICML Workshop on Surveillance and Event Detection
Symposium on Semantic Web for Collaborative Knowledge Acquisition
ICML Workshop on Transfer Learning
ICML Workshop on Applications of Multiple-Instance Learning
ICML Workshop on Knowledge Discovery from Data Streams
ICML Workshop on Learning with Nonparametric Bayesian Methods
Workshop on Machine Learning in Structural and Systems Biology
ICML Workshop on Kernel Machines and Reinforcement Learning
Human-Competitive Competition at GECCO-2006
Summer School on Neural Networks
Book Announcements
Gaussian Processes for Machine Learning
Career Opportunities
Postdoc job in "Evidence" at UCL

Date: Mon, 1 May 2006
From: Pat Langley
Subject: New ML List format

In this issue of the Machine Learning List, we introduce a new, briefer format that contains only the essential information about conferences, special issues, and similar events. We hope our readers will find it easier to find items that interest them, and that they can then go to the relevant URL to get additional information. We will continue to include longer announcements about open positions.

Sincerely, -Pat Langley


Date: Mon, 13 Mar 2006 19:09:42 +0000
From: ILP 2006
Subject: ILP 2006

Call for Papers
16th International Conference on Inductive Logic Programming (ILP 2006) Santiago, Spain

Submission deadline, short paper July 24, 2006
Acceptance notification, short paper: August 4, 2006
ILP 2006 Conference: August 24-27, 2006
Acceptance notification, selected papers: August 31, 2006
Submission deadline, full paper: September 30, 2006
Acceptance notification, full paper: November 3, 2006
Camera-ready deadline: December 8, 2006
Publication of conference proceedings: Early 2007

From: Giovanni Pezzulo
Subject: ABiALS Workshop 2006
Date: Mon, 13 Mar 2006 17:07:55 +0100

Call for Papers
ABiALS Workshop 2006
Anticipatory Behavior in Adaptive Learning Systems

Submission Deadline: June 15, 2006
ABiALS Workshop 2006: September 30, 2006

Date: 17 Mar 2006 07:18:02 -0000
Subject: ICGI 2006

Eighth International Colloquium on Grammatical Inference (ICGI 2006)
The University of Electro-Communications, Chofu, Tokyo 182-8585, JAPAN

Submission deadline: May 20, 2006
Acceptance notification: June 19, 2006
Final version of manuscripts: July 16, 2006
Conference date: September 20-22, 2006

Date: Mon, 20 Mar 2006 11:46:41 +0100
From: Ulf Brefeld
Subject: ICML Workshop on Learning in Structured Output Spaces

Call for Papers
ICML-2006 Workshop on Learning in Structured Output Spaces
Carnegie Mellon University, Pittsburgh, PA

Submission deadline: April 28, 2006
Acceptance notification: May 19, 2006
Final paper deadline: June 9, 2006
Workshop date: June 29, 2006

Date: Tue, 21 Mar 2006 20:01:31 +1100
From: Menno van Zaanen
Subject: AAI Special Issue on Applications of Grammatical Inference

Call for Submissions
Special Issue on Applications of Grammatical Inference

Submission deadline: May 1, 2006
Acceptance notification: October, 1, 2006
Final versions of accepted papers: December, 1, 2006
Publication: Second half of 2007

Date: Tue, 21 Mar 2006 13:04:59 -0700
From: Terran Lane
Subject: ICML Workshop on Surveillance and Event Detection

Call for Papers and Contributions
ICML-2006 Workshop on Machine Learning Algorithms for Surveillance and Event Detection
Carnegie Mellon University, Pittsburgh, PA

Submissions deadline: April 28, 2006 (tentative)
Acceptance notification: May 19, 2006 (tentative)
Workshop proceedings posted on Web site: June 18, 2006
Workshop date: June 29, 2006

Date: Tue, 21 Mar 2006 15:52:15 -0600
From: Vasant Honavar
Subject: Symposium on Semantic Web for Collaborative Knowledge Acquisition

AAAI Fall Symposium
Semantic Web for Collaborative Knowledge Acquisition (SWeCKa 2006)
Arlington, VA

Submission deadline: May 1, 2006
Acceptance notification: May 22, 2006
Camera-ready deadline: June 2, 2006
Symposium date: October 12-15, 2006

Date: Mon, 27 Mar 2006 16:57:39 -0600 (CST)
From: Bikramjit Banerjee
Subject: ICML Workshop on Transfer Learning

Structural Knowledge Transfer for Machine Learning
Workshop at the 23rd International Conference on Machine Learning
Carnegie Mellon University, Pittsburgh, PA

Submission deadline: May 02, 2006
Workshop date: June 29, 2006

Date: Tue, 28 Mar 2006 16:04:43 -0600 (CST)
From: Stephen D. Scott
Subject: ICML Workshop on Applications of Multiple-Instance Learning

Call for Papers and Participation
Workshop on Applications of Multiple-Instance Learning
at the 23rd International Conference on Machine Learning
Carnegie Mellon University, Pittsburgh, PA

Submission dealine: April 28, 2006
Acceptance notification: June 2, 2006
Camera-ready deadline: June 9, 2006
Workshop date: June 29, 2006

Date: Fri, 31 Mar 2006 13:10:24 -0500
From: Josep Roure
Subject: ICML Workshop on Knowledge Discovery from Data Streams

Call for Papers and Participation
Third International Workshop on Knowledge Discovery from Data Streams
At the 23rd International Conference on Machine Learning
Carnegie Mellon University, Pittsburgh, PA

Submission deadline: April 28, 2006
Acceptance notification: June 2, 2006
Camera-ready deadline: June 16, 2006
Workshop date: June 29, 2006

Date: Fri, 31 Mar 2006 20:27:54 +0200
From: Steffen Bickel
Subject: ICML Workshop on Learning with Nonparametric Bayesian Methods

ICML-2006 Workshop on Learning with Nonparametric Bayesian Methods
Carnegie Mellon University, Pittsburgh, PA

Submission deadline: April 28, 2006
Acceptance notification: May 19, 2006
Camera-ready deadline: June 9, 2006
Workshop date: June 29, 2006

Date: Wed, 05 Apr 2006 13:08:23 +0300
From: Esa Pitkanen
Subject: Workshop on Machine Learning in Structural and Systems Biology

Workshop on Probabilistic Modeling and Machine Learning in
Structural and Systems Biology
Tuusula, Finland

Submission deadline: April 23, 2006
Notification of acceptance: May 7, 2006
Final version due: May 31, 2006
Workshop date: June 17-18, 2006

Date: Wed, 05 Apr 2006 20:09:17 +0200
From: Remi Munos
Subject: ICML Workshop on Kernel Machines and Reinforcement Learning

ICML-2006 Workshop on Kernel Machines and Reinforcement Learning
Carnegie Mellon University, Pittsburgh, PA

Submission deadline: April 30, 2006
Workshop date: June 29, 2006

Date: Tue, 28 Mar 2006 10:14:27 -0800
From: John Koza
Subject: Human-Competitive Competition at GECCO-2006

$10,000 IN AWARDS
to be held as part of GECCO-2006
July 8-12, 2006 (Saturday-Wednesday)
Renaissance Seattle Hotel, Seattle, Washington, USA

Entry deadline: May 29, 2006
Finalists' notification: June 25, 2006
Submission deadline: July 5, 2006

Date: Tue, 21 Mar 2006 05:08:55 -0000
From: Jorge Santos
Subject: Summer School on Neural Networks

Neural Networks in Classification, Regression, and Data Mining
ISEP - Porto, Portugal

Early Registration: May 15, 2006
Hotel booking: June 15, 2006
Summer School: July 3-7, 2006

Date: Thu, 23 Mar 2006 15:26:42 -0500
From: David Weininger
Subject: Gaussian Processes for Machine Learning

This title is available from MIT Press:

Gaussian Processes for Machine Learning
Carl Edward Rasmussen and Christopher K. I. Williams

Date: Mon, 27 Mar 2006 23:02:59 +0100
From: Peter Dayan
Subject: Postdoc job in "Evidence" at UCL

A vacancy has arisen for a Postdoctoral Fellow to work on the project "Formal tools for handling evidence", which forms part of the research programme "Evidence, Inference and Enquiry" at University College London.

This is a 2-year post, available with immediate effect. Applicants should have a PhD in Statistics, Machine Learning, or similar, and be knowledgeable in theoretical and computational aspects of Bayesian Networks. The appointment will be on Grade 6, salary range 20234-23457 plus London Allowance of 2400.

Letters of application, including a Curriculum Vitae and names of 3 referees, should be sent to: Marion Ware, Department of Statistical Science, University College London, Gower Street, London WC1E 6BT, UK, telephone +44 (0)20 7679 1872, or by e-mail. Full details of the post and the project are available online.

Monday, May 01, 2006

"Loose-Limbed People" Paradigm: Distributed Approach for Articulated Pose Estimation and Tracking

Title: "Loose-Limbed People" Paradigm: Distributed Approach for Articulated Pose Estimation and Tracking

Speaker: Leonid Sigal , Brown University
Date: Monday, May 1 2006
Time: 2:00PM to 3:00PM
Refreshments: 1:30PM
Location: Seminar Room D463 (Star)
Host: C. Mario Christoudias, Gerald Dalley, MIT CSAIL
Contact: C. Mario Christoudias, Gerald Dalley, 3-4278, 3-6095

In recent years we have presented a number of methods for fully automatic pose estimation and tracking of human bodies in 2D and 3D. Initialization and failure recovery in these methods are facilitated by the use of a loose-limbed body model, in which limbs are connected via learned probabilistic constraints. Pose estimation and tracking can then be formulated as inference in a loopy graphical model, and approximate belief propagation can be used to estimate the pose of the body. Each node in the graphical model represents the position and orientation of a limb, and the directed edges between nodes represent statistical dependencies between limbs. This paradigm has a number of significant advantages over more traditional methods for tracking human motion.

In this talk I will introduce the loose-limbed model paradigm and its application to 3D and 2D pose estimation and tracking. I will also show some preliminary results of a fully-automatic 3D hierarchical inference framework for pose estimation and tracking from a single view, where a 2D loose-limbed body model serves as an intermediate representation in the inference hierarchy.
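The inference machinery described above can be illustrated with a toy example. The sketch below runs sum-product belief propagation on a discrete three-limb chain; the actual loose-limbed tracker uses continuous, particle-based messages over a loopy graph, so this only illustrates the message flow, and all limb states and compatibility values here are invented for the example.

```python
import numpy as np

# Toy sum-product belief propagation on a 3-node chain (e.g. torso, upper arm,
# lower arm), with each "limb" discretized to a small set of candidate poses.

def chain_bp(unary, pairwise):
    """unary: list of (K,) evidence vectors, one per limb.
    pairwise: list of (K, K) compatibility matrices between consecutive limbs.
    Returns the exact per-limb marginal beliefs (chains have no loops)."""
    n = len(unary)
    fwd = [None] * n   # messages passed toward the end of the chain
    bwd = [None] * n   # messages passed toward the start of the chain
    fwd[0] = np.ones_like(unary[0])
    for i in range(1, n):
        # message from limb i-1 to limb i: sum out limb i-1's state
        fwd[i] = pairwise[i - 1].T @ (unary[i - 1] * fwd[i - 1])
    bwd[n - 1] = np.ones_like(unary[-1])
    for i in range(n - 2, -1, -1):
        # message from limb i+1 to limb i: sum out limb i+1's state
        bwd[i] = pairwise[i] @ (unary[i + 1] * bwd[i + 1])
    beliefs = []
    for i in range(n):
        b = unary[i] * fwd[i] * bwd[i]   # combine evidence with both messages
        beliefs.append(b / b.sum())      # normalize to a distribution
    return beliefs
```

On a loopy graph (e.g. with limb-interpenetration constraints), the same message updates would be iterated to convergence and yield only approximate beliefs.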

PAL lab meeting, 4 Mar 2006 (Tailion): Occupancy Grid Maps

Using Occupancy Grids for Mobile Robot Perception and Navigation

Alberto Elfes

Jun. 1989

This article reviews an approach to robot perception and world modeling that uses a probabilistic tessellated representation of spatial information called the occupancy grid. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid using readings taken from several sensors over multiple points of view.
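The Bayesian updating scheme described in the abstract is commonly implemented in log-odds form, so that fusing a new reading is just an addition per cell. The sketch below updates a 1-D grid along a single range beam; the fixed hit/miss odds stand in for the paper's full probabilistic sensor models and are invented for illustration.

```python
import numpy as np

# Log-odds occupancy update for a single 1-D ray, in the spirit of Elfes'
# Bayesian occupancy grid. A cell's log odds start at 0 (probability 0.5,
# i.e. unknown) and accumulate evidence with each reading.

L_FREE = np.log(0.3 / 0.7)   # evidence for cells the beam passed through
L_OCC = np.log(0.9 / 0.1)    # evidence for the cell at the measured range

def update_ray(log_odds, hit_index):
    """Update cells 0..hit_index of a 1-D grid given a range reading at hit_index."""
    log_odds[:hit_index] += L_FREE   # beam traversed these cells: likely free
    log_odds[hit_index] += L_OCC     # beam stopped here: likely occupied
    return log_odds

def occupancy(log_odds):
    """Recover occupancy probabilities from accumulated log odds."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Because the update is additive, readings from several sensors and viewpoints fuse incrementally, exactly the property the abstract highlights.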

CMU CFR Seminar: Hierarchical Simultaneous Localization and Mapping (HSLAM)

Title : Hierarchical Simultaneous Localization and Mapping (HSLAM)

Deryck Morales

Time : 5:00pm
Place : NSH 1507


H-SLAM is an autonomous localization and mapping strategy that scales well to large indoor environments by decomposing the work space into subregions. This is achieved using a topological graph representation and associating a high-resolution local map with each graph edge. This organized collection of maps forms the Hierarchical Atlas.

In this talk I will present the H-SLAM method in the context of established mapping strategies and discuss the applications of the atlas to path planning and global localization. I will present experimental results verifying these applications, and compare the computational complexity of the H-SLAM approach to other recent SLAM solutions. Current work toward using natural landmarks will be presented, and finally, future extensions of this approach will be discussed.
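The Hierarchical Atlas layout described above can be sketched as a topological graph whose edges each carry a high-resolution local map, with coarse path planning done over the graph. The class and method names below are invented for illustration and do not come from the actual H-SLAM implementation.

```python
from collections import defaultdict

class HierarchicalAtlas:
    """Topological graph of places; each edge carries the local metric map
    of the subregion connecting its two endpoint nodes."""

    def __init__(self):
        self.adj = defaultdict(set)   # node -> set of neighboring nodes
        self.local_maps = {}          # sorted edge (a, b) -> local map payload

    def add_region(self, a, b, local_map):
        """Connect two topological nodes and attach the subregion's local map."""
        self.adj[a].add(b)
        self.adj[b].add(a)
        self.local_maps[tuple(sorted((a, b)))] = local_map

    def route(self, start, goal):
        """Breadth-first search over the topological graph (coarse planning);
        each hop corresponds to traversing one locally mapped subregion."""
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in self.adj[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None   # goal unreachable in the current graph
```

Fine-grained localization and path refinement would then happen inside each edge's local map, which is how the decomposition keeps per-map state small.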