This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Wednesday, May 31, 2006
Robotics Institute Thesis Proposal: Proactive Replanning for Multi-Robot Teams
Brennan Sellner
Robotics Institute
Carnegie Mellon University
Place and Time
NSH 3002 2:00 PM
Abstract:
Rather than blindly following a predetermined schedule, human workers often dynamically change their course of action in order to assist a coworker who is having unexpected difficulties. The goal of this research is to examine how this notion of "helpful" behavior can inspire new approaches to online plan execution and repair in multi-robot systems. Specifically, we are investigating the enabling of proactive replanning by dynamically predicting task duration and adapting to predicted problems or opportunities through the modification of executing tasks. By continuously predicting the remaining task duration, a proactive replanner is able to adjust to upcoming opportunities or problems before they manifest themselves. One way in which it may do so is by adjusting the allocation of agents to the various executing tasks by adding or removing agents, which allows the planner to balance a schedule in response to the realities of execution. We propose to develop a planning/scheduling/execution system that, by supporting duration prediction and adaptation, will be able to execute complex multi-robot tasks in an uncertain environment more efficiently than is possible without such proactive capabilities.
We have developed a proof-of-concept system that implements duration prediction and modification of existing tasks, yielding simulated executed makespans as much as 31.8% shorter than possible without these capabilities. Our initial system does not operate in real time, nor with actual hardware, instead interfacing with a simulator and allowing unlimited time for replanning between time steps. We propose to characterize the applicability of this approach to various domains, extend our algorithms to support more complex scenarios and to address shortcomings we have identified, and optimize the algorithms with respect to both computational complexity and the makespan of the final executed schedule, with the goal of bringing the advantages of duration prediction and task modification to a real-time planning/execution system. We will evaluate our approach to proactive replanning both in an extensive series of simulated experiments and in a real-time assembly scenario using actual hardware. We hypothesize that proactive replanning can be performed in real time while yielding significant improvements in overall execution time, as compared with a baseline repair-based planner.
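The core loop described in the abstract — continuously predicting remaining task durations and moving agents between executing tasks when the predictions become unbalanced — might be sketched roughly as follows. The task names, the linear speedup model, and the imbalance threshold are all illustrative assumptions, not details from the proposal:

```python
# Illustrative sketch of proactive agent reallocation driven by
# continuously updated duration predictions. The speedup model
# (work divides evenly among agents) is an idealized assumption.

def predict_remaining(task):
    """Predict remaining duration of a task, assuming its remaining
    work divides evenly among the agents assigned to it."""
    return task["work_left"] / max(task["agents"], 1)

def rebalance(tasks, threshold=2.0):
    """Move one agent from the task predicted to finish first to the
    task predicted to finish last, if the predicted gap exceeds
    `threshold` — adjusting the schedule before the problem manifests."""
    preds = {name: predict_remaining(t) for name, t in tasks.items()}
    earliest = min(preds, key=preds.get)
    latest = max(preds, key=preds.get)
    if preds[latest] - preds[earliest] > threshold and tasks[earliest]["agents"] > 1:
        tasks[earliest]["agents"] -= 1
        tasks[latest]["agents"] += 1
        return (earliest, latest)  # agent moved from -> to
    return None

# Hypothetical executing tasks: "weld" is predicted to run long.
tasks = {
    "weld": {"work_left": 12.0, "agents": 1},
    "bolt": {"work_left": 4.0, "agents": 2},
}
move = rebalance(tasks)
print(move)                     # ('bolt', 'weld')
print(tasks["weld"]["agents"])  # 2
```

A real system would replace the idealized speedup model with learned duration predictors and fold the reallocation decision into the scheduler, but the decision structure is the same.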
Further Details:
A copy of the thesis proposal document can be found at http://gs295.sp.cs.cmu.edu/brennan/files/sellner_proposal.pdf.
Thesis Committee:
Reid Simmons, Chair
Sanjiv Singh
Stephen Smith
Tara Estlin, Jet Propulsion Laboratory
PhD position at EPFL: Indoor Flying Robot for Search and Rescue
Open PhD Position Announcement
Indoor Flying Robot for Search and Rescue
Laboratory of Intelligent Systems (LIS)
Swiss Federal Institute of Technology Lausanne (EPFL)
---------------------------------------------------------------
One funded PhD student position is available immediately or upon agreement at the Laboratory of Intelligent Systems, Swiss Federal Institute of Technology Lausanne (Lab Director: Prof. Dario Floreano, Project Leader: Jean-Christophe Zufferey) to work on the micro-mechatronic development of a novel indoor flying robot for search and rescue operations.
The candidate is expected to develop innovative aerodynamic and micro-mechatronic solutions, including design, prototyping, integration of sensors and electronics, as well as characterization of the system. The candidate must have a strong motivation for aerodynamics, micro-engineering, and system integration. She or he must also have a systematic and rigorous approach to system design and characterization.
Applicants should preferably have an MSc in one of the following disciplines:
- Aerodynamics
- Micro-engineering
- Electronic Engineering
- Material Engineering
- Robotics
Candidates from other disciplines are also welcome if they can demonstrate skills in the above-mentioned areas. Proficiency in written and spoken English is mandatory. PhD students will be asked to assist with some classes and to supervise undergraduate and master's projects.
The candidate will join a young and enthusiastic team whose activities in aerial robotics have been featured in top international media and scientific magazines. The Laboratory offers access to state-of-the-art facilities ranging from micro-machining to printed-circuit fabrication, as well as collaborations with companies in micro-robotics.
To apply for the position, please send a letter of motivation along with a CV, certificates, and two academic references to Jean-Christophe.Zufferey@epfl.ch.
For more information please refer to:
Swiss Federal Institute of Technology Lausanne: http://www.epfl.ch
Laboratory of Intelligent Systems: http://lis.epfl.ch
**********************************
Prof. Dr. Dario Floreano
Laboratory of Intelligent Systems
Swiss Federal Institute of Technology (EPFL)
Building ELE, Station 11
CH-1015 Lausanne, Switzerland
Voice: +41 21 693 5230
Secretary: +41 21 693 5966
Fax: +41 21 693 5859
Dario.Floreano@epfl.ch
http://lis.epfl.ch
**********************************
'Baby' robot learns like a human
Tom Simonite
A robot that learns to interact with the world in a similar way to a human baby could provide researchers with fresh insights into biological intelligence.
Created by roboticists from Italy, France and Switzerland, "Babybot" automatically experiments with objects nearby and learns how best to make use of them. This gives the robot an ability to develop motor skills in the same way as a human infant.
[Read]
Soldiers bond with bots on battlefield
[Read]
Tuesday, May 30, 2006
Call For Papers: RSS Workshop on Robotic Systems for Rehabilitation, Exoskeleton, and Prosthetics
http://www.cs.cmu.edu/~yoky/rss/
Call for Papers: We invite you to submit an extended abstract for this discussion-oriented workshop with distinguished invited speakers. The topic includes grand challenge issues in rehabilitation, exoskeletons, and prosthetics such as (but not limited to) interfaces with the neural systems, safety, size/weight, functional improvement assessment, and ease of use/don/control. The extended abstract is limited to 2 pages including figures.
Deadline: June 23, 2006
Notification: July 7, 2006
Workshop: August 18, 2006
Your extended abstract will be peer-reviewed. Accepted abstracts will be included in the workshop proceedings and showcased during the poster session.
For more details, see http://www.cs.cmu.edu/~yoky/rss/
RSS conference webpage: http://www.roboticsconference.org/
We look forward to seeing you in August.
Yoky Matsuoka, Carnegie Mellon University
Bill Townsend, Barrett Technology, Inc.
Monday, May 29, 2006
Call For Papers: IROS 2006 Workshop
From sensors to human spatial concepts
(geometric approaches and appearance-based approaches)
Organizers: Zoran Zivkovic, Ben Kröse, Raja Chatila, Henrik Christensen,
Roland Siegwart
Webpage: http://www.science.uva.nl/
The aim of the workshop is to bring together researchers who work on space representations appropriate for communicating with humans and on algorithms for relating robot sensor data to human spatial concepts. Furthermore, the workshop should renew the discussion about appearance-based and geometric approaches to space modeling in light of the problem of relating robot sensor data to human spatial concepts. Additionally, to facilitate the discussion, a dataset consisting of omnidirectional camera images, laser range readings, and robot odometry is provided.
Commonly used geometric space representations are a natural choice for robot localization and navigation. However, it is hard to communicate with a robot in terms of, for example, 2D (x, y) positions. Human spatial concepts may be concrete, such as “the living room” or “the corridor between the living room and the kitchen”; more general, such as “a room” or “a corridor”; or related to objects, such as “behind the TV in the living room”. An appropriate space representation is needed for natural communication with the robot. Furthermore, the robot should be able to relate its sensor readings to these human spatial concepts.
Suggested topics include, but are not limited to the following areas:
- space representations appropriate for communicating with humans
- papers applying and analyzing the results from cognitive science about the human spatial concepts
- space representations suited to cover cognitive requirements of learning, knowledge acquisition and contextual control
- methods for relating the robot sensor data to the human spatial concepts
- comparison and/or combination of the appearance based and geometric approach for the task of relating the robot sensor data to the human spatial concepts
CFP: RSS Workshop on Socially Assistive Robotics 2006
http://robotics.usc.edu/interaction/rssws06/ws-SAR06.html
Call for Posters:
The organizers invite you to submit a 2-page poster abstract for the RSS 2006 Workshop on "Socially Assistive Robotics". The number of posters that can be accepted is limited; acceptance will depend on relevance to the workshop topics and contribution quality. The accepted posters will be included in the workshop proceedings and the authors will have the opportunity to present their work during the poster session. Please submit your poster abstracts by email to tapus(at)robotics.usc.edu (use "RSS Workshop 2006 Abstract Submission" in the subject).
Key Dates:
- Deadline of abstract submission: June 25, 2006
- Notification of acceptance: July 15, 2006
- "Socially Assistive Robotics" RSS Workshop: August 19, 2006
Description:
Research into Human-Robot Interaction (HRI) for socially assistive applications is still in its infancy. Various systems have been built for different user groups. For example, for the elderly, robot-pet companions aiming to reduce stress and depression have been developed; for people with physical impairments, assistive devices such as wheelchairs and robot manipulators have been designed; for people in rehabilitation therapy, therapist robots that assist, encourage, and socially interact with patients have been tested; for people with cognitive disorders, many applications have focused on robots that can therapeutically interact with children with autism; and for students, tutoring applications have been implemented. An ideal assistive robot should feature sufficiently complex cognitive and social skills to understand and interact with its environment, to exhibit social behaviors, and to focus its attention and communicate with people toward helping them achieve their goals.
The objectives of this workshop are to present the grand challenge of socially assistive robotics, the current state-of-the-art, and recent progress on key problems. Speakers at the workshop will address a variety of multidisciplinary topics, including social behavior and interaction, human-robot communication, task learning, psychological implications, and others. The workshop will also cover a variety of assistive applications, based on hands-off and hands-on therapies for helping people in need of assistance as part of convalescence, rehabilitation, education, training, and ageing. The proposed workshop is aimed at providing a general overview of the critical issues and key points in building effective, acceptable and reliable human-robot interaction systems for socially assistive applications and providing indications for further directions and developments in the field, based on the diverse expertise of the participants.
Topics:
- Social and physical embeddedness
- Encouraging compliance, goal sharing and transfer
- Designing behavioral studies
- Modeling embodied empathy
- Experimental design for human-robot interaction
Organizers:
Dr. Adriana TAPUS
Robotics Research Lab/Interaction Lab
Computer Science Department
University of Southern California
RTH 423, Mailcode 0781
941 West 37th Place, Los Angeles, CA, USA
e-mail: tapus(at)robotics.usc.edu
Prof. Maja MATARIĆ
Robotics Research Lab/Interaction Lab
Computer Science Department
University of Southern California
RTH 407, Mailcode 0781
941 West 37th Place, Los Angeles, CA, USA
e-mail: mataric(at)usc.edu
CFP: NIPS2006
CALL FOR PAPERS NIPS-2006
Deadline for Paper Submissions: June 9, 2006
Submissions are solicited for the Twentieth Annual Conference on Neural Information Processing Systems (December 5-7), an interdisciplinary conference which brings together researchers interested in all aspects of neural and statistical processing and computation. The Conference will include invited talks as well as oral and poster presentations of refereed papers. It is single track and highly selective. Preceding the main Conference will be one day of Tutorials (December 4), and following it will be two days of Workshops at Whistler/Blackcomb ski resort (December 8-9).
Submissions: Papers are solicited in all areas of neural information processing, including (but not limited to) the following:
- Algorithms and Architectures: statistical learning algorithms, neural networks, kernel methods, graphical models, Gaussian processes, dimensionality reduction and manifold learning, model selection, combinatorial optimization.
- Applications: innovative applications or fielded systems that use machine learning, including systems for time series prediction, bioinformatics, text/web analysis, multimedia processing, and robotics.
- Brain Imaging: neuroimaging, cognitive neuroscience, EEG (electroencephalogram), ERP (event related potentials), MEG (magnetoencephalogram), fMRI (functional magnetic resonance imaging), brain mapping, brain segmentation, brain computer interfaces.
- Cognitive Science and Artificial Intelligence: theoretical, computational, or experimental studies of perception, psychophysics, human or animal learning, memory, reasoning, problem solving, natural language processing, and neuropsychology.
- Control and Reinforcement Learning: decision and control, exploration, planning, navigation, Markov decision processes, game-playing, multi-agent coordination, computational models of classical and operant conditioning.
- Hardware Technologies: analog and digital VLSI, neuromorphic engineering, computational sensors and actuators, microrobotics, bioMEMS, neural prostheses, photonics, molecular and quantum computing.
- Learning Theory: generalization, regularization and model selection, Bayesian learning, spaces of functions and kernels, statistical physics of learning, online learning and competitive analysis, hardness of learning and approximations, large deviations and asymptotic analysis, information theory.
- Neuroscience: theoretical and experimental studies of processing and transmission of information in biological neurons and networks, including spike train generation, synaptic modulation, plasticity and adaptation.
- Speech and Signal Processing: recognition, coding, synthesis, denoising, segmentation, source separation, auditory perception, psychoacoustics, dynamical systems, recurrent networks, Language Models, Dynamic and Temporal models.
- Visual Processing: biological and machine vision, image processing and coding, segmentation, object detection and recognition, motion detection and tracking, visual psychophysics, visual scene analysis and interpretation.
Review Criteria:
New as of 2006, NIPS submissions will be reviewed double-blind: the reviewers will not know the identities of the authors. Submissions will be refereed on the basis of technical quality, novelty, potential impact on the field, and clarity. There will be an opportunity after the meeting to revise accepted manuscripts. We particularly encourage submissions by authors new to NIPS, as well as application papers that combine concrete results on novel or previously unachievable applications with analysis of the underlying difficulty from a machine learning perspective.
Paper Format: The paper format is fully described at http://research.microsoft.com/conferences/nips06/. Please use the latest style file for your submission.
Submission Instructions: NIPS accepts only electronic submissions at http://papers.nips.cc. These submissions must be in postscript or PDF format. The Conference web site will accept electronic submissions from May 26, 2006 until midnight, June 9, 2006, Pacific daylight time.
Demonstrations: There is a separate Demonstration track at NIPS. Authors wishing to submit to the Demonstration track should consult the Conference web site.
Organizing Committee:
General Chair --- Bernhard Schölkopf (MPI for Biological Cybernetics)
Program Chair --- John Platt (Microsoft Research)
Tutorials Chair --- Daphne Koller (Stanford)
Workshop Chairs --- Charles Isbell (Georgia Tech), Rajesh Rao (University of Washington)
Demonstrations Chairs --- Alan Stocker (New York University), Giacomo Indiveri (UNI/ETH Zurich)
Publications Chair --- Thomas Hofmann (TU Darmstadt)
Volunteers Chair --- Fernando Perez Cruz (Gatsby Unit, London)
Publicity Chair --- Matthias Franz (Max Planck Institute, Tübingen)
Online Proceedings Chair --- Andrew McCallum (Univ. Massachusetts, Amherst)
Program Committee:
Chair --- John Platt (Microsoft Research)
Bob Williamson (National ICT Australia)
Cordelia Schmid (INRIA)
Corinna Cortes (Google)
Dan Ellis (Columbia University)
Dan Hammerstrom (Portland State University)
Dan Pelleg (IBM)
Dennis DeCoste (Yahoo Research)
Dieter Fox (University of Washington)
Hubert Preissl (University of Tuebingen)
John Langford (Toyota Technical Institute)
Kamal Nigam (Google)
Kevin Murphy (University of British Columbia)
Koji Tsuda (MPI for Biological Cybernetics)
Maneesh Sahani (University College London)
Neil Lawrence (University of Sheffield)
Samy Bengio (IDIAP)
Satinder Singh (University of Michigan)
Shimon Edelman (Cornell University)
Thomas Griffiths (UC Berkeley)
Saturday, May 27, 2006
I am Atwood Liu

My English name is Atwood Liu, my cyber name is popandy, and my Chinese name is 劉德成. My email/MSN is b91104055@ntu.edu.tw.
It is my pleasure to meet each member of the lab. I hope to have my first presentation well prepared for an upcoming lab meeting. By the way, I was very glad to have dinner with some members of the lab yesterday.
The above is my girlfriend's favorite rabbit, named "Peter Rabbit", which is also my mischievous son.
Thursday, May 25, 2006
What's New @ IEEE in Wireless, May 2006
In response to security concerns raised over placing wireless RFID tags on consumer goods, IBM is introducing a new kind of tag called the Clipped Tag. Unlike normal RFID tags, which broadcast information up to 30 feet away, the new tag allows consumers to reduce that distance to only two inches. It works by way of a tiny antenna, which is removed after the product has been purchased. The tags will still retain all the same tracking information, such as where the product was purchased, whose credit card it was purchased on, and even the number of the card, but intercepting the information with a hacked remote sensor would now require information thieves to get right up next to the product. Read more:
http://www.wired.com/news/technology/0,70793-0.html
EDUCATIONAL GAMES TEACH ENGLISH WITH RFID TAGS
Game software that uses RFID tags embedded in toys to teach English to non-English-speaking children has been developed by a pair of students from Purdue University. Their game, Merlin's Magic Castle, comes with computer software, a scanner, and electronic tags which are embedded into appropriate objects. When a toy is run over the computer's scanner, the program registers the RFID, and a computer character on-screen says the toy's name or poses a question, which according to the developers, provides auditory, visual, and tactile stimulation that promotes better comprehension and retention of information in children. Read more:
http://www.primidi.com/2006/04/26.html
MOBILE PHONE MASTS PREDICT THE WEATHER
New research being conducted by scientists in Europe and the Middle East suggests that existing mobile phone masts can be used to predict the weather -- in some cases more accurately than meteorological equipment. When bad weather such as rain or electromagnetic activity is about to appear, automatic systems in the mobile masts boost their signals to ensure calls stay connected. Signal data from one provider in Tel Aviv, Haifa, and Jerusalem was matched against data collected from the country's meteorological equipment and found to agree. The mobile masts, however, proved to cover more ground than the meteorological equipment.
Researchers in the UK have also used a similar method, collecting data from global positioning satellites to measure atmospheric humidity. Professor Hagit Messer-Yaron, of the University of Tel Aviv, says the information could be used to predict hurricanes and other catastrophic events before they happen. The next step would involve getting existing mobile service providers to hand over their masts' information on a regular basis. Messer-Yaron also believes that with this knowledge, people will one day be able to read their own cell phones' reception patterns and predict their local forecast. Read more:
http://news.bbc.co.uk/1/hi/sci/tech/4974542.stm
Wednesday, May 24, 2006
PAL lab meeting 25, May, 2006 (Casey): Fast Rotation Invariant Multi-View Face Detection Based on Real Adaboost
Authors: Bo WU, Haizhou AI, Chang HUANG and Shihong LAO
(Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China)
(Sensing Technology Laboratory, Omron Corporation)
Abstract:
In this paper, we propose a rotation-invariant multi-view face detection method based on the Real AdaBoost algorithm [1]. Human faces are divided into several categories according to their variant appearance from different viewpoints. For each view category, weak classifiers are configured as confidence-rated look-up tables (LUT) of Haar features [2]. The Real AdaBoost algorithm is used to boost these weak classifiers and construct a nesting-structured face detector. To make it rotation invariant, we divide the whole 360-degree range into 12 sub-ranges and construct their corresponding view-based detectors separately. To improve performance, a pose estimation method is introduced, resulting in a processing speed of four frames per second on 320 × 240 images. Experiments on faces with 360-degree in-plane rotation and ±90-degree out-of-plane rotation are reported, in which the frontal face detector subsystem retrieves 94.5% of the faces with 57 false alarms on the CMU+MIT frontal face test set, and the multi-view face detector subsystem retrieves 89.8% of the faces with 221 false alarms on the CMU profile face test set.
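The confidence-rated look-up-table weak classifier the abstract builds on can be illustrated in a few lines. This is a toy sketch of a single LUT weak learner: a scalar feature stands in for one Haar feature, and the bin count and data are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Confidence-rated LUT weak classifier, as used in Real AdaBoost:
# partition the feature range into bins and, in each bin, output
# h(x) = 0.5 * ln(W+/W-), where W+/W- are the boosting weights of
# positive/negative samples falling in that bin.

def train_lut(feature, labels, weights, n_bins=8, eps=1e-6):
    """Return bin edges and per-bin confidences for one weak learner."""
    edges = np.linspace(feature.min(), feature.max(), n_bins + 1)
    bins = np.digitize(feature, edges[1:-1])  # bin index 0..n_bins-1
    lut = np.zeros(n_bins)
    for b in range(n_bins):
        w_pos = weights[(bins == b) & (labels == +1)].sum()
        w_neg = weights[(bins == b) & (labels == -1)].sum()
        lut[b] = 0.5 * np.log((w_pos + eps) / (w_neg + eps))
    return edges, lut

def apply_lut(feature, edges, lut):
    bins = np.digitize(feature, edges[1:-1])
    return lut[bins]

# Toy data: "non-face" features near -1, "face" features near +1.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 200), rng.normal(+1, 0.5, 200)])
y = np.concatenate([-np.ones(200), np.ones(200)]).astype(int)
w = np.full(400, 1 / 400)  # uniform weights, as in the first round

edges, lut = train_lut(x, y, w)
pred = np.sign(apply_lut(x, edges, lut))
print((pred == y).mean())  # well above chance on this toy data
```

In Real AdaBoost, many such LUTs are trained in rounds with reweighted samples and their confidences summed; the nesting-structured cascade in the paper then chains boosted stages for speed.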
Tuesday, May 23, 2006
The 2006 IEEE International Conference on Robotics and Automation
-Bob
Monday, May 22, 2006
PAL lab meeting 25, May, 2006 (Vincent): Video-based face recognition using probabilistic appearance manifolds
Authors:
Kuang-Chih Lee @ UIUC
Jeffrey Ho @ UCSD
Ming-Hsuan Yang @ Honda-RI
David Kriegman @ UCSD
This paper appears in CVPR 2003.
Abstract:
This paper presents a method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space; this complex nonlinear manifold is expressed as a collection of subsets (named pose manifolds) together with the connectivity among them. Each pose manifold is approximated by an affine plane. To construct this representation, exemplars are sampled from videos and clustered with a K-means algorithm; each cluster is represented as a plane computed through principal component analysis (PCA). The connectivity between the pose manifolds encodes the transition probability between images in each pose manifold and is learned from training video sequences. A maximum a posteriori formulation is presented for face recognition in test video sequences by integrating the likelihood that the input image comes from a particular pose manifold with the transition probability to this pose manifold from the previous frame. To recognize faces with partial occlusion, we introduce a weight mask into the process. Extensive experiments demonstrate that the proposed algorithm outperforms existing frame-based face recognition methods with temporal voting schemes.
You can find the PDF file here.
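The representation construction described in the abstract — sample exemplars, cluster them with K-means, and approximate each cluster by an affine plane via PCA — can be sketched on toy data as follows. The tiny K-means, the 10-D stand-in for image space, and the 2-D planes are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Sketch of the pose-manifold construction: cluster exemplars with
# K-means, then fit an affine plane (mean + top principal directions)
# to each cluster. A new frame is assigned to the nearest plane.

def kmeans(X, k, iters=20):
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]  # spread init
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

def affine_plane(cluster, dim=2):
    """PCA: cluster mean + top `dim` principal directions."""
    mean = cluster.mean(0)
    _, _, vt = np.linalg.svd(cluster - mean, full_matrices=False)
    return mean, vt[:dim]

def dist_to_plane(x, mean, basis):
    """Distance from x to the affine plane mean + span(basis)."""
    r = x - mean
    return np.linalg.norm(r - basis.T @ (basis @ r))

# Toy "exemplars": two pose clusters in a 10-D image space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(5, 1, (50, 10))])
labels, _ = kmeans(X, 2)
planes = [affine_plane(X[labels == j]) for j in range(2)]

# A new frame near the second pose is assigned to that pose manifold.
frame = rng.normal(5, 1, 10)
nearest = min(range(2), key=lambda j: dist_to_plane(frame, *planes[j]))
```

The paper goes further by weighting this per-frame likelihood with learned transition probabilities between pose manifolds, which this sketch omits.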
Sunday, May 21, 2006
Lab meeting schedule this week.
-Bob
Thursday, May 18, 2006
CMU Thesis proposal: Common Ground-Based Human-Robot Interaction
Carnegie Mellon University
Abstract
Building more effective human-robot systems requires that we understand the complex interactions that occur within such systems. As human-robot interaction (HRI) develops, we are becoming more ambitious about the types of interactions we envision for our robots and their users. In particular, we have become interested in the deployment of autonomous robots that are designed to work in complex, real-world settings. Our users are not likely to be experts in robotics, and they may possess inaccurate mental models of robotic technologies. In order to facilitate successful interactions, I am interested in promoting common ground between users and robots: that is, I wish to increase users' understanding of robots and foster accurate mental models, and, at the same time, enhance robots' understanding of users and their goals.
In particular, through an ethnographic study of the Life in the Atacama project, I have documented a number of challenges faced by a group of scientists as they used a remotely-located, autonomous robot to explore the Atacama Desert in Chile. I have used this data to hypothesize a basic, operational model of how scientists generate plans for the robot. I then use the data from this study to illustrate the need for three particular components which together form the proposed work: a representation of the scientists' goals, a plan validation system which alerts the science team when a given plan cannot meet these goals, and a plan repair system which, at execution time, uses the science goals to make intelligent decisions about what to do in the event actions fail. The major contributions of this work include detailed analyses of a particular human-robot system, explicit modeling of common ground, and software systems which will improve the grounding process and task performance for exploration robotics-related tasks.
Further Details
A copy of the thesis proposal document can be found at http://www.fieldrobotics.org/~kristen/proposal/kstubbs_proposal.pdf.
News: ALGORITHM ENABLES 3-D SCANNING
Researchers in Texas and Utah report they have created a new means of producing three-dimensional embryonic images called microCT-based virtual histology. The process uses computer visualization techniques to convert X-ray CT scans of mouse embryos into detailed three-dimensional images showing both the mouse's exterior and interior. Normally embryos are sliced up physically and examined under a microscope, a very time-consuming method. With the new process, the embryos are instead stained with special dyes which permeate the skin and other membranes. The team of researchers wrote a new computer algorithm to take the CT scan data and automatically distinguish various organs and structures in the embryo. The virtual rendering of the CT scan data also includes a virtual light source, so the 3-D image includes shadows that make it easier for the human eye to interpret. The embryo images can be made transparent and have cutaways so that internal organs and body parts are visible. The process allows researchers to study more embryos much faster than normal. Mouse embryos are typically used in genetic studies and to test the safety of drugs and various chemicals. Read more: http://www.physorg.com/news65965267.html
Tuesday, May 16, 2006
CMU thesis proposal: Approaches for approximate inference, structure learning, and computing event probability bounds in undirected graphical models
Time: 3:00pm
Place: 3305 Newell-Simon Hall
Speaker: Pradeep Ravikumar, PhD Candidate
Abstract:
Graphical models are a powerful tool for representing and manipulating probability distributions which are used in many fields, including image processing, document analysis, and error-control coding. In this thesis, we develop new approaches to three key tasks for undirected graphical models: inference, structure learning, and computing event probability bounds.
For the inference task of estimating the log-partition function and general event probabilities, we propose a preconditioner-based family of techniques. As with generalized mean field methods, the preconditioner approach focuses on sparse subgraphs, but optimizes linear system condition numbers rather than relative entropy. For the inference task of computing the maximum a posteriori (MAP) configuration, we propose a quadratic programming relaxation that is potentially more powerful than linear program relaxations and belief propagation approximations that are the current state of the art. The quadratic program relaxation is generally tight, but under certain conditions results in a non-convex problem, for which we propose a convex approximation with an additive bound guarantee. For the task of computing event probability bounds, we propose a family of generalized variational Chernoff bounds for graphical models, where we "variationally" derive probability bounds in terms of convex optimization problems involving certain support functions and a difference of log partition functions. For the task of learning the structure of the graph, instead of the standard heuristic search through the combinatorial space of graph structures, we are considering a parametrized optimization approach by looking at alternative parametrizations of the graph structure variable.
Thesis Committee:
John Lafferty (Chair)
Carlos Guestrin
Martin Wainwright (Univ. of California at Berkeley)
Eric Xing
CMU Thesis proposal: Stacked Graphical Learning
Time: 1:30pm
Place: 3305 Newell-Simon Hall
Speaker: Zhenzhen Kou, PhD Candidate
Abstract:
Traditional machine learning methods assume that instances are independent, while in reality there are many relational datasets, such as hyperlinked web pages, scientific literature with dependencies among citations, social networks, and more. Recent work on graphical models has demonstrated performance improvements on relational data. In my thesis I plan to study a meta-learning scheme called stacked graphical learning. Given a relational template, the stacked graphical model augments a base learner by expanding one instance's features with predictions on other related instances. Stacked graphical learning is efficient, captures dependencies easily, and can be constructed on top of any kind of base learner. The thesis proposal describes the algorithm for stacked graphical models, evaluates the approach on real-world data, and compares its performance to other methods. For my thesis I plan to explore more of the strengths and weaknesses of the approach and apply it to tasks in the SLIF system.
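The stacking scheme described in the abstract can be sketched on toy chain-structured data: train a base learner, append its predictions on each instance's *related* instances (here, chain neighbors) as extra features, and train a second-stage learner on the expanded features. The base learner (a small logistic regression), the neighbor relation, and the data are illustrative choices; note that the real scheme uses cross-validated base predictions to avoid overfitting, which this sketch omits:

```python
import numpy as np

# Two-stage stacked learning on a chain: stage 1 sees each instance's
# own features plus stage-0 predictions on its left/right neighbors.

def fit_logreg(X, y, lr=0.5, iters=500):
    """Plain gradient-descent logistic regression (base learner)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict(X, w):
    return 1 / (1 + np.exp(-X @ w))

def neighbor_preds(p):
    """For each chain position, the predictions on its two neighbors."""
    left = np.r_[p[:1], p[:-1]]
    right = np.r_[p[1:], p[-1:]]
    return np.c_[left, right]

# Toy relational data: labels vary smoothly along the chain, so a
# neighbor's predicted label is informative about one's own.
rng = np.random.default_rng(0)
y = (np.sin(np.arange(200) / 10) > 0).astype(float)
X = np.c_[y + rng.normal(0, 0.8, 200), np.ones(200)]  # noisy feature + bias

w0 = fit_logreg(X, y)              # stage 0: base learner
p0 = predict(X, w0)
X1 = np.c_[X, neighbor_preds(p0)]  # expand with relational predictions
w1 = fit_logreg(X1, y)             # stage 1: stacked learner
p1 = predict(X1, w1)

acc0 = ((p0 > 0.5) == y).mean()
acc1 = ((p1 > 0.5) == y).mean()
print(acc0, acc1)  # stacking exploits the smoothness the base learner cannot
```

Any base learner with probabilistic outputs slots into the same template, which is the flexibility the abstract highlights.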
Thesis Committee:
William Cohen
David Jensen (Univ. of Massachusetts, Amherst)
Tom Mitchell
Robert Murphy
Monday, May 15, 2006
PAL lab meeting 17, May, 2006 (Stanley): Nearness Diagram (ND) Navigation: Collision Avoidance in Troublesome Scenarios
Abstract: This paper addresses the reactive collision avoidance for vehicles that move in very dense, cluttered, and complex scenarios. First, we describe the design of a reactive navigation method that uses a "divide and conquer" strategy based on situations to simplify the difficulty of the navigation. Many techniques could be used to implement this design (since it is described at symbolic level), leading to new reactive methods that must be able to navigate in arduous environments (as the difficulty of the navigation is simplified). We also propose a geometry-based implementation of our design called the nearness diagram navigation. The advantage of this reactive method is to successfully move robots in troublesome scenarios, where other methods present a high degree of difficulty in navigating. We show experimental results on a real vehicle to validate this research, and a discussion about the advantages and limitations of this new approach.
local link.
PAL lab meeting 17, May, 2006 (Chihao): Microphone Arrays (a tutorial)
April 2001
Author: Iain McCowan
B Eng, B InfoTech, PhD
What are microphone arrays?
Microphone arrays consist of multiple microphones at different locations. Using sound propagation principles, the individual microphone signals can be filtered and combined to enhance sound originating from a particular direction or location. The location of the principal sound sources can also be determined dynamically by investigating the correlation between different microphone channels.
link
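The "filter and combine" idea can be illustrated with its simplest instance, a delay-and-sum beamformer: each channel is delayed so that a wavefront from the desired direction lines up across microphones, and the channels are then averaged, reinforcing that source and attenuating others. The array geometry, sample rate, and signals below are illustrative assumptions:

```python
import numpy as np

# Delay-and-sum beamforming for a linear microphone array: delays are
# applied as phase shifts in the frequency domain (fractional-delay
# safe), then the aligned channels are averaged.

C = 343.0    # speed of sound, m/s
FS = 16000   # sample rate, Hz

def steering_delays(mic_x, angle_deg):
    """Per-mic delays (s) for a far-field source at `angle_deg`
    relative to broadside of a linear array along the x axis."""
    return mic_x * np.sin(np.deg2rad(angle_deg)) / C

def delay_and_sum(signals, mic_x, angle_deg):
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / FS)
    out = np.zeros(n)
    for sig, tau in zip(signals, steering_delays(mic_x, angle_deg)):
        spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(signals)

# Simulate a 4-mic linear array (5 cm spacing) receiving a 1 kHz tone
# from 30 degrees: each mic sees the tone with its propagation delay.
mic_x = np.arange(4) * 0.05
t = np.arange(1024) / FS
signals = np.array([np.sin(2 * np.pi * 1000 * (t - d))
                    for d in steering_delays(mic_x, 30.0)])

aligned = delay_and_sum(signals, mic_x, 30.0)   # steered at the source
off = delay_and_sum(signals, mic_x, -60.0)      # steered away
print(np.std(aligned) > np.std(off))  # steering at the source wins
```

Source localization, mentioned in the last sentence, runs the same machinery in reverse: sweep the steering angle (or cross-correlate channel pairs) and pick the direction that maximizes output power.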