Friday, December 30, 2005

GaTech: Frank Dellaert

Check out Frank Dellaert's web page,
http://www.cc.gatech.edu/~dellaert/.
Some of his projects are very interesting.

CMU RI: Robosapien Hacking

Click this link for more information on how to write your own code to control the Robosapien humanoid.

The official Robosapien web site:
http://www.wowwee.com/robosapien/robo1/robomain.html

The CMU Humanoids Course:
http://www.cs.cmu.edu/~cga/humanoids-ugrad/

Robot Dream Exposition Taiwan.

Exhibition dates: January 6 to February 12, 2006
Exhibition hours: daily 11:00 a.m. to 9:30 p.m. (Lunar New Year's Eve: 11:00 a.m. to 5:00 p.m.)
Exhibition venue: 7F, Shin Kong Mitsukoshi Department Store, Building A8
Exhibition address: 7F, No. 12, Songgao Road, Taipei

http://consumer.chinatimes.com/EVENT/2006robot/index.htm

This exhibition, four years in planning and design and costing over a hundred million NT dollars, is a large-scale event rarely seen in Taiwan's exhibition history. The robots on display include bipedal humanoid robots, pet robots, animal robots, sweeping (cleaning) robots, entertainment robots, lifelike robots, caregiving (security) robots, micro robots, rescue robots, android robots, and more: over one hundred biomimetic robots of all types that combine new electronics technology with mechanical engineering.
The exhibits are highly lifelike; visitors will be amazed by the vivid artificial life forms that high-tech robot manufacturing can conjure up. The fine movements of animal and human faces, limbs, and muscles are reproduced with remarkable fidelity, capturing not only the form but also the spirit, whether in motion or at rest; sound, lighting, scenery, and special effects add to an immersive experience for visitors.

The exhibition's main aims are technology, entertainment, and education. Visitors can see the metal structures and components of some exhibits and, through interactive games and explanations from professional guides, learn how the internal mechanisms work, making the visit both educational and entertaining and well suited for families.

This carefully planned joint Taiwanese-Japanese exhibition responds to the way science has grown ever larger and more specialized: by emphasizing popular science education during the formative adolescent years, it gives young visitors a basic grounding in how technology develops and is applied. Robotics is widely regarded as a key industry of the new century, and it especially demands the integration of scientific theory and practical skill across many fields; it can cultivate a new generation unafraid of hard problems and bold enough to pursue future dreams, while sparking children's imagination and broadening their horizons.

I am a new member of this lab

Hi all,

My name is Ko-Chih Wang (Casey), and I also often use the ID "CL".
I am currently a second-year master's student at NTU-INM.
I am a member of the ubicomp lab, but I will join your group next year (or next semester).
I have no background in robotics yet.
Therefore, I hope I can learn a lot from this lab and from everyone.
In the near future, I will move to the new lab and work with you all. :)
Anyway, it's my pleasure to meet you all.

My msn: cl_kcw (AT) hotmail (dot) com
Mail (Gmail): caseywang777

Thursday, December 29, 2005

MIT report: Error-Weighted Classifier Combination for Multi-Modal Human Identification

Yuri Ivanov, Thomas Serre, Jacob Bouvrie

Abstract

In this paper we describe a technique of classifier combination used in a human identification system. The system integrates all available features from multi-modal sources within a Bayesian framework. The framework allows representing a class of popular classifier combination rules and methods within a single formalism. It relies on a “per-class” measure of confidence derived from performance of each classifier on training data that is shown to improve performance on a synthetic data set. The method is especially relevant in an autonomous surveillance setting where varying time scales and missing features are a common occurrence. We show an application of this technique to a real-world surveillance database of video and audio recordings of people collected over several weeks in an office setting.
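
The "per-class" weighting idea is concrete enough to sketch. Below is a minimal Python illustration (not the authors' implementation): each classifier's confidence in a class is taken from its confusion matrix on held-out training data (recall is used here as a stand-in for the paper's error-derived measure), and per-classifier posteriors are combined with those weights, skipping modalities that are missing at a given time step.

```python
import numpy as np

def per_class_weights(confusion_matrices):
    """Per-class confidence for each classifier, estimated from its confusion
    matrix on held-out data (rows = true class). Using recall as the weight
    is an assumption of this sketch, not the paper's exact measure."""
    weights = []
    for cm in confusion_matrices:
        cm = np.asarray(cm, dtype=float)
        weights.append(cm.diagonal() / np.maximum(cm.sum(axis=1), 1e-9))
    return weights

def combine(posteriors, weights):
    """Weighted combination of per-classifier class posteriors. A missing
    modality (None) is simply skipped, mirroring the surveillance setting
    above where features are often unavailable."""
    total = None
    for p, w in zip(posteriors, weights):
        if p is None:
            continue
        contribution = w * np.asarray(p, dtype=float)
        total = contribution if total is None else total + contribution
    return total / total.sum()

# Toy usage: two modalities (face, voice), three identities.
weights = per_class_weights([
    [[8, 1, 1], [1, 8, 1], [2, 2, 6]],   # face classifier confusion matrix
    [[9, 1, 0], [3, 6, 1], [1, 1, 8]],   # voice classifier confusion matrix
])
print(combine([[0.6, 0.3, 0.1], None], weights))  # voice missing in this frame
```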


LINK

CMU report: Market-Based Multirobot Coordination: A Survey and Analysis

M.B. Dias, R.M. Zlot, N. Kalra, and A. Stentz. tech. report CMU-RI-TR-05-13, Robotics Institute, Carnegie Mellon University, April, 2005.

Market-based multirobot coordination approaches have received significant attention and gained considerable popularity within the robotics research community in recent years. They have been successfully implemented in a variety of domains ranging from mapping and exploration to robot soccer. The research literature on market-based approaches to coordination has now reached a critical mass that warrants a survey and analysis. This paper addresses this need by providing an introduction to market-based multirobot coordination, a comprehensive review of the state of the art in the field, and a discussion of remaining challenges. The pdf file.
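
As a toy illustration of the market metaphor (and not of any particular system covered by the survey), the sketch below auctions each task once and awards it to the robot that bids the lowest estimated cost; real market-based approaches add task rewards, re-auctioning, and combinatorial bids.

```python
def single_round_auction(tasks, robots, cost):
    """Award each task to the robot with the lowest bid, where a bid is the
    estimated cost returned by cost(robot, task). A deliberately minimal
    sketch of market-based allocation."""
    assignment = {}
    for task in tasks:
        assignment[task] = min(robots, key=lambda r: cost(r, task))
    return assignment

# Toy usage: two robots bidding Manhattan travel distance for three goals.
positions = {"r1": (0, 0), "r2": (10, 0)}
goals = {"g1": (1, 1), "g2": (9, 2), "g3": (5, 5)}
def travel(r, g):
    (rx, ry), (gx, gy) = positions[r], goals[g]
    return abs(rx - gx) + abs(ry - gy)

print(single_round_auction(goals, positions, travel))
```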

MIT report: Automatic Software Upgrades for Distributed Systems (PhD thesis)

Author: Sameer Ajmani

October 6, 2005

Upgrading the software of long-lived, highly-available distributed systems is difficult. It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable. Instead, upgrades may happen gradually, and there may be long periods of time when different nodes are running different software versions and need to communicate using incompatible protocols. We present a methodology and infrastructure that address these challenges and make it possible to upgrade distributed systems automatically while limiting service disruption. Our methodology defines how to enable nodes to interoperate across versions, how to preserve the state of a system across upgrades, and how to schedule an upgrade so as to limit service disruption. The approach is modular: defining an upgrade requires understanding only the new software and the version it replaces. The upgrade infrastructure is a generic platform for distributing and installing software while enabling nodes to interoperate across versions. The infrastructure requires no access to the system source code and is transparent: node software is unaware that different versions even exist. We have implemented a prototype of the infrastructure called Upstart that intercepts socket communication using a dynamically-linked C++ library. Experiments show that Upstart has low overhead and works well for both local-area and Internet systems.
The pdf file.

CMU report: An Analysis of the Human Odometer

U. Wong, C. Lyons, and S. Thayer. tech. report CMU-RI-TR-05-47, Robotics Institute, Carnegie Mellon University, September, 2005.

The Human Odometer is a personal navigation system developed to provide reliable, lightweight, cost-effective, and embedded absolute 3-D position and communication to firefighters, policemen, EMTs, and dismounted soldiers. The goal of the system is to maintain accurate position information without reliance on external references. The Human Odometer system provides real-time position updates and displays maps of relevant areas to the user on a handheld computer. The system is designed to help a user place himself in a global context and navigate unknown areas under a variety of conditions. This paper provides a quantitative analysis of the in-field operational performance of the system. The pdf file.

Monday, December 26, 2005

My talk this week

I will present this paper:
    Real-time Non-Rigid Surface Detection (pdf)
by Julien Pilet, Vincent Lepetit, Pascal Fua
of Computer Vision Laboratory, École Polytechnique Fédérale de Lausanne, Switzerland

Abstract:
  We present a real-time method for detecting deformable surfaces, with no need whatsoever for a priori pose knowledge.
  Our method starts from a set of wide baseline point matches between an undeformed image of the object and the image in which it is to be detected. The matches are used not only to detect but also to compute a precise mapping from one to the other. The algorithm is robust to large deformations, lighting changes, motion blur, and occlusions. It runs at 10 frames per second on a 2.8 GHz PC and we are not aware of any other published technique that produces similar results.
  Combining deformable meshes with a well designed robust estimator is key to dealing with the large number of parameters involved in modeling deformable surfaces and rejecting erroneous matches for error rates of up to 95%, which is considerably more than what is required in practice.


There are some videos on their project website.

Monday, December 19, 2005

My talk this week: Information Gain-based Exploration

Title:
Information Gain-based Exploration Using Rao-Blackwellized Particle Filters

Paper in RSS 2005 (8 pages, PDF, 520 KB): http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05rss.pdf

Outline:
Simultaneous Localization, Mapping and Exploration
Related Work
Rao-Blackwellized Particle Filters (RBPF)
The Uncertainty
Maximizing The Information Gain
Experiments

Saturday, December 17, 2005

CNN: Sony robot keeps a third eye on things

Friday, December 16, 2005; Posted: 4:09 p.m. EST (21:09 GMT)

TOKYO, Japan (Reuters) -- Robots may not be able to do everything humans can, but the latest version of Sony's humanoid robot has something many people might find useful: a third eye.

The Japanese consumer electronics company's roller-skating robot, QRIO, has now been enlightened with an extra camera eye on its forehead that allows it to see several people at once and focus in on one of them.

The full article

WHAT'S NEW @ IEEE IN COMPUTING

VOLUME 6 NUMBER 12 DECEMBER 2005
Read this issue online:
<http://www.ieee.org/products/whats-new/wncomp/wncomp1205.xml>


6. PERVASIVE COMPUTING CONFERENCE TO EXAMINE EMERGING TECHNOLOGIES
The Fourth IEEE International Conference on Pervasive Computing and Communications (PerCom) will act as a platform for pervasive computing researchers to swap ideas and interact through a variety of workshops. The conference will place special emphasis on the significance of pervasive computing as the natural outcome of advances in wireless networks, mobile computing, distributed computing, and other technologies, and will provide individual workshops and work-in-progress sessions for attendees who wish to better understand some of the current technologies contributing to these rapid advancements. PerCom will convene in Pisa, Italy, from 13 to 17 March 2006. For more information, or to register to attend, visit:
<http://cnd.iit.cnr.it/percom2006/index.html>

9. COMPUTER GUIDANCE COULD INCREASE SPEED AND ACCURACY IN NEUROSURGERY
A new system of computerized brain-mapping techniques may greatly improve a neurosurgical technique used to treat movement disorders such as Parkinson's disease and multiple sclerosis, say its developers at Vanderbilt University. Called deep brain stimulation (DBS), the process involves implanting electrodes deep in the brain, typically one electrode in each hemisphere, a difficult and expensive operation that can take as long as 12 hours per electrode. The new system works from a three-dimensional brain atlas that combines the scans of 21 postoperative DBS patients using sophisticated computer-mapping methods, the Vanderbilt team says, and then superimposes the atlas on a new patient's scan. The new system automates the most difficult part of the operation: precisely locating pea-sized targets deep in the brain which are not visible in brain scans or to the naked eye, and doing so more quickly and accurately than experienced neurosurgeons, according to researchers writing in IEEE Transactions on Medical Imaging. Read more: <http://www.news-medical.net/?id=14604>

11. SPRAY-ON COMPUTERS IN THE WORKS
According to D.K. Arvind of the Institute for Computing Systems Architecture at the University of Edinburgh, tiny grain-sized network semiconductors could one day be sprayed onto surfaces to give computer access to places out of reach. In the network, dubbed the "Speck-net", each tiny sensor will have its own processor, about two kilobytes of memory, and a program that gives it the ability to extract information from the environment. The "specks" would be able to communicate wirelessly with one another to gather information and create a larger picture of a problem. The system is currently under simulation at the Speckled Computing Consortium in the UK. Arvind hopes that one day the Speck-net can be used for real applications, such as detecting structural failures in airborne planes and helping prevent strokes.

Friday, December 16, 2005

MIT Talk: Information Gain-based Exploration for Mobile Robots Using Rao-Blackwellized Particle Filters

Speaker: Cyrill Stachniss, University of Freiburg
Date: Friday, December 16, 2005
Time: 1:00PM to 2:00PM
Location: 32-397
Host: Nick Roy
Contact: Nicholas Roy, x3-2517, nickroy@mit.edu

Abstract:
This talk presents an integrated approach to exploration, mapping, and localization. Our algorithm uses a highly efficient Rao-Blackwellized particle filter to represent the posterior about maps and poses. It applies a decision-theoretic framework which simultaneously considers the uncertainty in the map and in the pose of the vehicle to evaluate potential actions. It trades off the cost of executing an action with the expected information gain and takes into account possible sensor measurements gathered along the path taken by the robot. We furthermore describe how to utilize the properties of the Rao-Blackwellization to efficiently compute the expected information gain. We present experimental results obtained in the real world and in simulation to demonstrate the effectiveness of our approach.
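
The core trade-off in the abstract (expected information gain versus the cost of executing an action) can be sketched in a few lines. This is only an illustration under assumed inputs; in the paper the utility is computed from the Rao-Blackwellized particle filter itself.

```python
def select_action(candidate_actions, expected_info_gain, travel_cost, alpha=1.0):
    """Pick the action maximizing expected information gain minus a weighted
    execution cost. alpha is a hypothetical trade-off weight; the gain and
    cost estimators are supplied by the caller."""
    best_action, best_utility = None, float("-inf")
    for action in candidate_actions:
        utility = expected_info_gain(action) - alpha * travel_cost(action)
        if utility > best_utility:
            best_action, best_utility = action, utility
    return best_action

# Toy usage: rough estimates for two frontiers and one loop-closure action.
gains = {"frontier_A": 4.0, "frontier_B": 6.5, "loop_closure": 5.0}
costs = {"frontier_A": 2.0, "frontier_B": 8.0, "loop_closure": 3.0}
print(select_action(gains, gains.get, costs.get, alpha=0.5))  # -> loop_closure
```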

Cyrill Stachniss studied computer science at the University of Freiburg and received his MSc in 2002. Currently he is a PhD student in the research lab for autonomous intelligent systems headed by Wolfram Burgard at the University of Freiburg. His research interests lie in the areas of mobile robot exploration, SLAM, and collision avoidance. He submitted his PhD thesis titled "Exploration and Mapping with Mobile Robots" in December 2005.

Link:
http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05rss.pdf

Thursday, December 15, 2005

[IVsource.net]: Latest News from IVsource.net (December 14, 2005)

New Articles (more info below):

Assistware Looking for New Talent to Expand Team
Assistware Technology, a longtime provider of systems using vision-based lane detection technology, is looking for new staff as they ramp up for some new projects, including IVBSS (see related article). A program manager, embedded hardware engineer, vision systems engineer, and systems engineers are being sought. Check the job descriptions by clicking the link on the IVsource homepage.

Seeing Machines Partners with Australian Researchers to Diagnose Drowsy Drivers
Australia's ICT Centre of Excellence, National ICT Australia (NICTA), and Seeing Machines Limited, a global leader in computer vision technology, have signed a one-year research collaboration agreement to explore the use of information and communications technologies (ICT) to reduce road accidents relating to driver fatigue. The project will develop ICT solutions to detect the subtle shifts in muscular control and response during the onset of fatigue when driving, the phenomenon that leads to the well-known "micro-nod".

UMTRI Leads Winning Team for USDOT IVBSS Project
USDOT has awarded $25 million for the Intelligent Vehicle-Based Safety Systems (IVBSS) Field Operational Test project to the University of Michigan Transportation Research Institute (UMTRI); it will be the largest FOT project of this type within the current government program.

Wednesday, December 14, 2005

MIT Report: Conditional Random People: Tracking Humans with CRFs and Grid Filters

Leonid Taycher, Gregory Shakhnarovich, David Demirdjian, and Trevor Darrell

Abstract

We describe a state-space tracking approach based on a Conditional Random Field (CRF) model, where the observation potentials are learned from data. We find functions that embed both state and observation into a space where similarity corresponds to L1 distance, and define an observation potential based on distance in this space. This potential is extremely fast to compute and, in conjunction with a grid-filtering framework, can be used to reduce a continuous state estimation problem to a discrete one. We show how a state temporal prior in the grid-filter can be computed in a manner similar to a sparse HMM, resulting in real-time system performance. The resulting system is used for human pose tracking in video sequences.
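
The two ingredients the abstract highlights, an L1-based observation potential and a grid-filter update with a sparse-HMM-style temporal prior, are easy to sketch. The exponential kernel and the toy dimensions below are assumptions of this illustration, not details from the report.

```python
import numpy as np

def observation_potential(embedded_obs, embedded_grid, scale=1.0):
    """Score every grid cell at once: similarity is L1 distance in the learned
    embedding, turned into a potential with an exponential kernel (the kernel
    form is assumed here)."""
    d = np.abs(embedded_grid - embedded_obs[None, :]).sum(axis=1)  # L1 distances
    return np.exp(-d / scale)

def grid_filter_step(prior, transition, potential):
    """One discrete (grid-filter) update: propagate the state prior through a
    sparse transition matrix, reweight by the observation potential, and
    renormalize."""
    predicted = transition @ prior
    posterior = predicted * potential
    return posterior / posterior.sum()

# Toy usage: 4 grid cells embedded in 3 dimensions.
rng = np.random.default_rng(0)
grid = rng.normal(size=(4, 3))
obs = grid[2] + 0.05 * rng.normal(size=3)      # observation near cell 2
posterior = grid_filter_step(np.full(4, 0.25), np.eye(4), observation_potential(obs, grid))
print(posterior.argmax())                      # most likely cell: 2
```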

LINK

WHAT'S NEW @ IEEE IN COMMUNICATIONS

VOLUME 6 NUMBER 12 DECEMBER 2005
Read this issue online:
<http://www.ieee.org/products/whats-new/wncomm/wncomm1205.xml>

2. IEEE MAGAZINE EXAMINES THE FUTURE OF CONVERGENT PORTABLE DEVICES
This month's issue of IEEE Communications Magazine (v. 43, no. 12) presents a special focus on the future of convergent portable devices that integrate not only cameras and cell phones, but also other functions such as wireless LAN (WLAN), personal video recording (PVR), gaming and digital TV. Articles in this issue take a closer look at topics such as mobile imaging, graphics processing capabilities and integration. The guest editorial on the topic is now accessible to all readers at: http://www.comsoc.org/livepubs/ci1/public/2005/dec/index.htm

12. CALL FOR PAPERS: ROBOT AND HUMAN INTERACTIVE COMMUNICATION
Abstracts for the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) are due by 15 February 2006. Papers can focus on a diversity of Robotics technologies spanning academic, public, and governmental initiatives. For instance, will robots one day be used as assistants to human beings? What future technologies may allow engineers to design such complicated machines? Topics may range from innovative robot designs to ethical issues in human-robot interaction research. The conference will take place next September in Hertfordshire, U.K. For more information, visit: <http://ro-man2006.feis.herts.ac.uk/>

WHAT'S NEW @ IEEE FOR STUDENTS

VOLUME 7 NUMBER 12 DECEMBER 2005
Read this issue online:
<http://www.ieee.org/products/whats-new/wnstudents/wnstudents1205.xml>

11. RESEARCHERS DEVELOP SURPRISING MATHEMATICAL MODEL -- OF SURPRISE
Two California scientists have created a mathematical theory of surprise based on principles of probability applied to a digital environment and experiments that record eye movements of volunteers. Researchers from the University of Southern California Viterbi School of Engineering and the University of California Irvine Institute for Genomics and Bioinformatics developed their theory using the stream of electronic data making up a video image as a proxy for the complex flood of stimuli in a real environment. By analyzing a data stream, the researchers say they can isolate unique visual stimuli, called "salient," "novelty," and "entropy." The researchers say they have worked out a way of predicting how observing new data will affect the set of beliefs an observer has developed about the world on the basis of data previously received. The scientists analyze a video stream to describe its most "surprising," features, then check the analysis by watching the eye movements of observers viewing the images, to see if the movements correlated with the measure of surprise. Read more:
<http://www.eurekalert.org/pub_releases/2005-11/uosc-scs112805.php>
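
One natural way to formalize the idea of quantifying how new data shifts an observer's beliefs is to measure the divergence between the belief distribution before and after the observation; the sketch below uses the Kullback-Leibler divergence, which may differ in detail from the researchers' exact definition.

```python
import numpy as np

def surprise(prior, posterior):
    """KL divergence from the prior belief distribution to the posterior:
    large when the new data sharply changes what the observer believes."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    return float(np.sum(posterior * np.log(posterior / prior)))

# A datum that strongly shifts belief is more surprising than one that barely does.
print(surprise([0.5, 0.5], [0.9, 0.1]))    # ~0.37
print(surprise([0.5, 0.5], [0.55, 0.45]))  # ~0.005
```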

Monday, December 12, 2005

about my talk

My talk this week will discuss the following paper: The link
Detecting irregularities in images and in video
(Received an Honorable Mention for the 2005 Marr Prize)

Authors: Oren Boiman, Michal Irani (Dept. of Computer Science and Applied Math, The Weizmann Institute of Science, Israel)

Abstract:
We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term “irregular” depends on the context in which the “regular” or “valid” are defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a new observed image region or a new video segment (“the query”) using chunks of data (“pieces of puzzle”) extracted from previous visual examples (“the database”). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, and for suspicious behavior recognition.
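
A heavily simplified sketch of the "composition" idea follows: a query region counts as regular when its patches are well explained by similar patches in the database, and suspicious otherwise. The real method favours large contiguous chunks and runs inference in a probabilistic graphical model; this sketch only scores independent per-patch nearest-neighbour evidence, and the threshold is an arbitrary assumption.

```python
import numpy as np

def suspicious_patches(query_patches, database_patches, threshold=0.5):
    """Flag query patches whose nearest database patch is farther than
    `threshold` (True = poorly explained = suspicious)."""
    flags = []
    for q in query_patches:
        dists = np.linalg.norm(database_patches - q, axis=1)
        flags.append(bool(dists.min() > threshold))
    return flags

# Toy usage with 5-dimensional patch descriptors.
rng = np.random.default_rng(1)
database = rng.normal(size=(100, 5))
query = np.vstack([database[10] + 0.01,          # near-duplicate of a database patch
                   5.0 * rng.normal(size=5)])    # unlike anything in the database
print(suspicious_patches(query, database))       # likely [False, True]
```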

Sunday, December 11, 2005

CFP: ICOST2006

Call for Papers

4th International Conference On Smart homes and health Telematics
ICOST2006 - 26-28 June, 2006 - Belfast, Northern Ireland, UK

Organiser
University of Ulster

After three successful editions held in France (2003), Singapore (2004), and Canada (2005), ICOST2006 aims to continue to develop an active research community dedicated to exploring how Smart Homes and Health Telematics can foster independent living and offer an enhanced quality of life for ageing and disabled people. A Smart Home can be considered to be an augmented environment with the ability to consolidate embedded computers, information appliances, micro/nano systems, and multi-modal sensors to offer people unprecedented levels of access to information and assistance from information and communication technology. Health Telematics makes the most of networks and telecommunications to provide, within the home environment, health services, expertise and information and hence radically transform the way health-related services are conceived and delivered. We believe that in the future ageing and disabled people will use smart assistive technology to perform daily living activities, socialize, and enjoy entertainment and leisure activities. Nowadays, networks, microprocessors, memory chips, smart sensors and actuators are faster, cheaper, more intelligent and smaller than ever. Current advances in such enabling technologies coupled with evolving care paradigms allow us to foresee novel applications and services for improving the quality of life for ageing and disabled people both inside and outside of their homes. The conference will present the latest approaches and technical solutions in the area of Smart Homes, Health Telematics, and emerging enabling technologies. Technical topics of interest include, but are not limited to:

o Intelligent Environments / Smart Homes
o Medical Data Collection and Processing
o Human-Machine Interface / Ambient Intelligence
o Modeling of Physical and Conceptual Information in Intelligent Environments
o Vision / Hearing / Cognitive Devices
o Tele-Assistance and Tele-Rehabilitation
o Personal Robotics and Smart Wheelchairs
o Context Awareness / Autonomous Computing
o Home Networks / Residential Gateways
o Wearable Sensors / Integrated Micro/Nano Systems / Home Health Monitoring
o Social / Privacy / Security Issues
o Middleware Support for Smart Home and Health Telematic Services

Each year, ICOST has a specific flavour. ICOST2003 focused on usability. The theme was "Independent living for persons with disabilities and elderly people". The theme for ICOST2004 was "Towards a Human-Friendly Assistive Environment" and for ICOST2005 was "From Smart Homes to Smart Care". This year the conference has the theme "Smart Homes and Beyond". This focuses on promoting personal autonomy and extending the quality of life. Papers or special sessions addressing the following topics are especially encouraged:

o Inclusive smart home services
o Smart services inside and outside of the home
o Situation awareness
o Location-based services
o Mobility of service delivery

Submission of Papers: There will be a combination of presentations including scientific papers, posters, exhibits and technology demonstrations. Prospective authors are invited, in the first instance, to submit papers for oral presentation in any of the areas of interest for this conference as well as proposals for Special Sessions. The initial submission for evaluation should be in the form of a 4-8 page paper outline. Authors are strongly recommended to submit the full 8-page English-language version. IOS Press will publish the proceedings as a volume of the Assistive Technology Research Series; therefore, their paper publication format (www.iospress.nl/authco/instruction_crc.html) should be used for submission.

Important Dates:
Papers submission: 20 January, 2006
Author notification: 3 March, 2006
Camera-ready copy: 3 April, 2006

Conference Web Page: http://www.icost2006.ulster.ac.uk/
Conference Organisation Email: info@icost2006.ac.uk

Conference Venue: ICOST2006 will be held at the Culloden Hotel, Belfast, Northern Ireland, UK

MIT talk: Object and Place Recognition from Invariant Local Features

Speaker: David Lowe, University of British Columbia
Date: 2005/12/12

Abstract:
Within the past few years, invariant local features have been successfully applied to a wide range of recognition and image matching problems. For recognition applications, it has proved particularly important to develop features that are distinctive as well as invariant, so that a single feature can be used to index into a large database of features from previous images. Robust recognition can then be achieved by identifying clusters of features with geometric consistency followed by detailed model fitting. Efficiency can be obtained with approximate nearest-neighbor methods that identify matches in a large database in real time. Recent work will be presented on applications to location recognition, augmented reality, and the detection of image panoramas from unordered sets of images.
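
The matching step the talk describes (indexing each feature into a large database and keeping only distinctive matches) can be sketched with a brute-force nearest-neighbour search and a distance-ratio test. The 0.8 ratio is a commonly used value assumed here; real systems replace the brute-force search with approximate nearest-neighbour methods and follow it with geometric-consistency clustering and model fitting.

```python
import numpy as np

def match_features(query_desc, db_desc, ratio=0.8):
    """Keep a match only when the nearest database descriptor is clearly
    closer than the second nearest, which filters out ambiguous features."""
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(db_desc - q, axis=1)
        first, second = np.argsort(d)[:2]
        if d[first] < ratio * d[second]:
            matches.append((i, int(first)))
    return matches

# Toy usage with 8-dimensional descriptors.
rng = np.random.default_rng(2)
db = rng.normal(size=(50, 8))
queries = db[[3, 7]] + 0.01 * rng.normal(size=(2, 8))
print(match_features(queries, db))   # expected: [(0, 3), (1, 7)]
```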

David Lowe is a professor of computer science at the University of British Columbia and a Fellow of the Canadian Institute for Advanced Research. He received his Ph.D. in computer science from Stanford University in 1984. From 1984 to 1987 he was an Assistant Professor at the Courant Institute of Mathematical Sciences at New York University. He is a member of the scientific advisory board for Evolution Robotics. His research interests include object recognition, local invariant features for image matching, robot localization, and models of human visual recognition.

Saturday, December 10, 2005

CVPR Paper: Probabilistic parameter-free motion detection

T. Veit, F. Cao, P. Bouthemy. Probabilistic parameter-free motion detection. In Conf. Computer Vision and Pattern Recognition, CVPR'04, Washington, DC, June 2004.
PDF

We propose an original probabilistic parameter-free method for the detection of independently moving objects in an image sequence. We apply a probabilistic perceptual principle, the Helmholtz principle, whose main advantage is the automatization of the detection decision, by providing a tight control of the number of false alarms. Not only does this method localize the moving objects but it also answers the preliminary question of the presence of motion. In particular, the method works even when no assumption on motion presence is made. The algorithm is composed of three independent steps: estimation of the dominant image motion, spatial segmentation of object boundaries and independent motion detection itself. We emphasize that none of these steps needs any parameter tuning. Results on real image sequences are reported and validate the proposed approach.
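
The Helmholtz principle's control of the number of false alarms is the part that removes parameter tuning, and a generic version (not the paper's exact statistic) is short enough to sketch: a candidate region is declared moving when the expected number of equally good detections arising by chance alone falls below one.

```python
from math import comb

def nfa(num_tests, k, n, p):
    """Expected number of candidate regions, among num_tests, in which at
    least k of n observations would agree with the motion hypothesis purely
    by chance (each with probability p). Detection rule: NFA < 1."""
    tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    return num_tests * tail

# Example: 10,000 candidate regions; 40 of 50 pixels disagree with the dominant
# motion, while chance alone would explain each pixel with p = 0.5.
print(nfa(10_000, 40, 50, 0.5))   # below 1, so the region is detected as moving
```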

A background paper on grouping is also available.

CNN: Robot chopper documents Katrina's power: 'Flying camera' may be ready for next hurricane season


By Marsha Walton, CNN
Friday, December 9, 2005; Posted: 12:20 p.m. EST (17:20 GMT)

BILOXI, Mississippi (CNN) -- "And let's go out over the motel roof so we can get the seams ... go out a little further... alright that's good, hold there."

Kevin Pratt communicated with the other members of the helicopter flight team to shoot the best angles of video of two Katrina-damaged structures.

What was unique about this flight? The four-person crew was on the ground, while the camera-carrying, 10-pound robotic aircraft flew around the buildings.

Headed by University of South Florida robotics professor Robin Murphy, the team documented damage to multistory buildings hit hard by the hurricane.

The full article
...

CMU talk: Massively Scalable Computer Vision - The Next Great Challenge

Craig Coulter, HyperActive Technologies, Inc.
Monday, December 12, 2005

Computer vision technologies are only beginning to emerge from research and development and into broader application. Scaling vision algorithms presents enormous, unaddressed challenges to the community and is creating a whole new branch of computer vision research: Massively Scalable Systems.

Consider for a moment the challenges of simply testing a computer vision application. Testing is currently performed by hand, through visual inspection, and usually by the research group itself, across a relatively small batch of test images - usually a few hundred to a few thousand.

HyperActive Technologies is applying computer vision technologies to the quick-service and general retail markets - applications where the same core set of detection and tracking algorithms will operate in tens of thousands of locations daily. Developing a "retail grade" computer vision application for a 20,000-store chain will require building a system that processes some 40 billion images per day, or 15 trillion images per year. Achieving this goal will require the community to completely rethink its approach to application development, testing, and in-field performance monitoring.
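
A quick arithmetic check of those volumes (the per-store rate is an assumption chosen to match the stated totals):

```python
stores = 20_000
images_per_store_per_day = 2_000_000                  # roughly 23 frames/s around the clock
images_per_day = stores * images_per_store_per_day
print(f"{images_per_day / 1e9:.0f} billion images per day")           # ~40 billion
print(f"{images_per_day * 365 / 1e12:.1f} trillion images per year")  # ~14.6 trillion
```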

This talk will focus on exploring the challenges of defining a new area of computer vision research: "Massively Scalable Computer Vision Systems".


Speaker Bio
------------
Dr. R. Craig Coulter is Co-Founder and Chief Scientist of HyperActive Technologies, Inc., a Pittsburgh-area robotics company that addresses the real-time decision-making problems plaguing high-volume, high-demand markets like quick-service restaurants, retail, and grocery stores.

Dr. Coulter is a robotics scientist focused on the commercialization of intelligent robotics technologies. He began his work at the National Robotics Engineering Consortium (NREC), where he organized a project with Ford Motor Company to commercialize a revolutionary vision-based position estimation system that he co-invented. He co-founded Highlander Systems, Inc., a vision systems engineering company; Tpresence, Inc., an early distributed computing company, where he served as CEO; and Mammoth Ventures, a firm focused on the development of new robotics-related intellectual property, where he currently serves as Managing Partner. In 2001, Dr. Coulter co-founded HyperActive Technologies, Inc. Dr. Coulter holds a PhD in Robotics from Carnegie Mellon's School of Computer Science.

Friday, December 09, 2005

A Vision-Based Approach to Collision Prediction at Traffic Intersections

Stefan Atev, Hemanth Arumugam, Osama Masoud, Ravi Janardan, Senior Member, IEEE, and
Nikolaos P. Papanikolopoulos, Senior Member, IEEE

Abstract

Monitoring traffic intersections in real time and predicting possible collisions is an important first step towards building an early collision-warning system. We present a vision-based system addressing this problem and describe the practical adaptations necessary to achieve real-time performance. Innovative low-overhead collision-prediction algorithms (such as the one using the time-as-axis paradigm) are presented. The proposed system was able to perform successfully in real time on videos of quarter-video graphics array (VGA) (320 × 240) resolution under various weather conditions. The errors in target position and dimension estimates in a test video sequence are quantified and several experimental results are presented.

Index Terms

Collision prediction, machine vision, real-time systems, tracking, traffic control (transportation).

Link

Detection of Text on Road Signs From Video

Wen Wu, Member, IEEE, Xilin Chen, Member, IEEE, and Jie Yang, Member, IEEE

Abstract
A fast and robust framework for incrementally detecting text on road signs from video is presented in this paper. This new framework makes two main contributions. 1) The framework applies a divide-and-conquer strategy to decompose the original task into two subtasks, that is, the localization of road signs and the detection of text on the signs. The algorithms for the two subtasks are naturally incorporated into a unified framework through a feature-based tracking algorithm. 2) The framework provides a novel way to detect text from video by integrating two-dimensional (2-D) image features in each video frame (e.g., color, edges, texture) with the three-dimensional (3-D) geometric structure information of objects extracted from the video sequence (such as the vertical plane property of road signs). The feasibility of the proposed framework has been evaluated using 22 video sequences captured from a moving vehicle. This new framework gives an overall text detection rate of 88.9% and a false hit rate of 9.2%. It can easily be applied to other tasks of text detection from video and potentially be embedded in a driver assistance system.

Index Terms
Object detection from video, road sign detection, text detection, vehicle navigation.

Link

Thursday, December 08, 2005

CMU RI Thesis Proposal: A Constraint Based Approach to Interleaving Planning and Execution for Multirobot Coordination

Speaker: Mary Koes, RI, CMU
Date: 14 Dec. 2005
Time: 10:30 AM
Location: 14 Dec. 2005

Abstract:
Enabling multiple robots to work together as a team is a difficult problem. Robots must decide amongst themselves who should work on which goals and at what time each goal should be achieved. Since the team is situated in some physical environment, the robots must consider travel time in these decisions. This is particularly challenging in time critical domains where goal rewards decrease over time and for tightly coupled coordination where multiple robots must work together on each goal. Further complications arise when the system is subjected to additional constraints on the ordering of the goals, the use of resources, or the allocation of robots to goals. Optimal team behavior can only be achieved when robots simultaneously consider path planning, task allocation, scheduling, and these additional system constraints. In dynamic and uncertain environments, robots need to reevaluate these decisions as they discover new information. Communication failures may mean that robots are unable to consult as a whole team while replanning. The proposed thesis addresses these challenges with four main points.

Further details:
A copy of the thesis proposal document can be found at http://www.cs.cmu.edu/~mberna/research/proposal.pdf.

Wednesday, December 07, 2005

Talk Today: Affine Structure From Sound

Affine structure from sound,
Sebastian Thrun

Abstract
We consider the problem of localizing a set of microphones together with a set of external acoustic events (e.g., hand claps), emitted at unknown times and unknown locations. We propose a solution that approximates this problem under an “orthocoustic” model defined in the calculus of affine geometry, and that relies on SVD to recover the affine structure of the problem. We then define low-dimensional optimization techniques for embedding the solution into Euclidean geometry, and further techniques for recovering the locations and emission times of the acoustic events. The approach is useful for the calibration of ad-hoc microphone arrays and sensor networks (though it requires centralized computation).
Link
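
As a loose numerical sketch of the affine-recovery step only (Python/NumPy, with assumed toy data): after double-centering a microphones-by-events matrix of arrival times, a rank-3 SVD factorization yields affine coordinates for microphones and events. The paper's "orthocoustic" model, its treatment of unknown emission times, and the Euclidean upgrade are not reproduced here.

```python
import numpy as np

def affine_structure(arrival_times, rank=3):
    """Double-center the arrival-time matrix and factor it with a rank-3 SVD,
    returning affine (not yet Euclidean) coordinates for microphones and events."""
    T = np.asarray(arrival_times, dtype=float)
    T = T - T.mean(axis=0, keepdims=True)     # remove per-event offsets
    T = T - T.mean(axis=1, keepdims=True)     # remove per-microphone offsets
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    mics = U[:, :rank] * np.sqrt(s[:rank])
    events = (np.sqrt(s[:rank])[:, None] * Vt[:rank]).T
    return mics, events

# Toy usage: 8 microphones, 12 distant hand claps, arrival times in seconds.
rng = np.random.default_rng(3)
mic_pos = rng.uniform(-1, 1, size=(8, 3))
src_pos = rng.uniform(-1, 1, size=(12, 3)) + np.array([20.0, 0.0, 0.0])   # far field
toa = np.linalg.norm(mic_pos[:, None, :] - src_pos[None, :, :], axis=2) / 343.0
mics, events = affine_structure(toa)
print(mics.shape, events.shape)   # (8, 3) (12, 3)
```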

Tuesday, December 06, 2005

Paper: Learning user models of mobility-related activities through instrumented walking aids

J. Glover, S. Thrun, and J.T. Matthews.

We present a robotic walking aid capable of learning models of users' walking-related activities. Our walker is instrumented to provide guidance to elderly people when navigating their environments; however, such guidance is difficult to provide without knowing what activity a person is engaged in (e.g., where a person wants to go). The main contribution of this paper is an algorithm for learning models of users of the walker. These models are defined at multiple levels of abstraction, and learned from actual usage data using statistical techniques. We demonstrate that our approach succeeds in determining the specific activity in which a user engages when using the walker. One of our prototype walkers was tested in an assisted living facility near Pittsburgh, PA; a more recent model was extensively evaluated in a university environment.

The full paper is available in PDF and gzipped Postscript

Sunday, December 04, 2005

CMU talk: Probabilistic Policy Reuse in Reinforcement Learning

Speaker: Fernando Fernandez Rebollo, CMU
http://www.cs.cmu.edu/~fernando/
Date: December 05
Abstract: We contribute Policy Reuse as a technique to improve a reinforcement learner with guidance from past learned similar policies. Our method relies on using the past policies in a novel way as a probabilistic bias where the learner faces three choices: the exploitation of the ongoing learned policy, the exploration of random unexplored actions, and the exploitation of past policies. We introduce the algorithm and its major components: an exploration strategy to include the new reuse bias, and a similarity metric to estimate the similarity of past policies with respect to a new one. We provide empirical results demonstrating that Policy Reuse improves the learning performance over different strategies that learn without reuse. Policy Reuse further contributes the learning of the structure of a domain. Interestingly and almost as a side effect, Policy Reuse identifies classes of similar policies revealing a basis of "eigen-policies" of the domain. In general, Policy Reuse contributes to the overall goal of lifelong reinforcement learning, as (i) it incrementally builds a policy library; (ii) it provides a mechanism to reuse past policies; and (iii) it learns an abstract domain structure in terms of eigen-policies of the domain.
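
The three-way exploration choice described above is simple to sketch; this is an illustrative version, not the exact pi-reuse algorithm (in the paper the reuse probability decays within an episode and the past policy is drawn from a library using the similarity estimate).

```python
import random

def policy_reuse_action(state, q_values, past_policy, psi=0.5, epsilon=0.1):
    """With probability psi exploit a past policy; otherwise act
    epsilon-greedily with respect to the ongoing learned Q-values."""
    actions = list(q_values[state].keys())
    if random.random() < psi:
        return past_policy(state)                   # exploit a past, similar policy
    if random.random() < epsilon:
        return random.choice(actions)               # random exploration
    return max(actions, key=lambda a: q_values[state][a])  # exploit current policy

# Toy usage.
Q = {"s0": {"left": 0.1, "right": 0.4}}
print(policy_reuse_action("s0", Q, past_policy=lambda s: "left"))
```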

This is joint work with Prof. Manuela Veloso.

Thursday, December 01, 2005

Project: Wheelesley


The link.

This research project started at Wellesley College in January 1995 where Holly Yanco was an Instructor in the Computer Science Department. The project has since moved to the MIT Artificial Intelligence Laboratory.

MIT Tech Report: Accurate and Scalable Surface Representation and Reconstruction from Images

Author[s]: Gang Zeng, Sylvain Paris, Long Quan, Francois Sillion

November 18, 2005

We introduce a new surface representation, the patchwork, to extend the problem of surface reconstruction from multiple images. A patchwork is the combination of several patches that are built one by one. This design potentially allows the reconstruction of an object of arbitrarily large dimensions while preserving a fine level of detail. We formally demonstrate that this strategy leads to a spatial complexity independent of the dimensions of the reconstructed object, and to a time complexity linear with respect to the object area. The former property ensures that we never run out of storage (memory) and the latter means that reconstructing an object can be done in a reasonable amount of time. In addition, we show that the patchwork representation handles equivalently open and closed surfaces whereas most of the existing approaches are limited to a specific scenario (open or closed surface but not both). Most of the existing optimization techniques can be cast into this framework. To illustrate the possibilities offered by this approach, we propose two applications that expose how it dramatically extends a recent accurate graph-cut technique. We first revisit the popular carving techniques. This results in a well-posed reconstruction problem that still enjoys the tractability of voxel space. We also show how we can advantageously combine several image-driven criteria to achieve a finely detailed geometry by surface propagation. The above properties of the patchwork representation and reconstruction are extensively demonstrated on real image sequences.
[PDF] [PS]

IEEE Career Alert

2. Are Asian Scientists Bumping Up Against a Glass Ceiling in the US?

Asians "are known for being great scientists," but probably shouldn't look forward to heading science labs, says Kuan-Teh Jeang, a virologist at the U.S. National Institutes of Health (NIH). Earlier this year, Taiwan-born Jeang compiled statistics in a bid to confirm or refute anecdotal evidence that there were few opportunities for career advancement for Asian researchers at NIH. What he found was disheartening. Though 21.5 percent of the agency's tenure-track investigators are Asian, only 9.2 percent of senior investigators are of Asian descent. And only 4.7 percent of the people heading NIH labs or branches are Asian.

A similar examination of the American Society for Biology and Molecular Biology (ASBMB) by Yi Rao, a neuroscientist at Northwestern University in Evanston, Illinois, uncovered equally bad news for Asian scientists. In letters to the governing boards of ASBMB and the Society for Neuroscience penned in July, Rao wrote, "However the phenomenon can be described, the underlying problem is discrimination. [Asian] Americans tend to be quiet, partly because their voices and concerns are not listened to. But should that mean obedience and subordination forever?"

For more on whether there is a level playing field in scientific research, and to see what officials at these organizations have done in response, read on at: the link


----------------------
4. Taiwan to Take Center Stage in IC Development?

According to Nicky Lu, a former IBM researcher who was a co-inventor of the advanced DRAM technology Big Blue was using when he left the company to return to his native Taiwan in 1991, the global technology market is undergoing a shift that will move semiconductor R&D and other so-called knowledge work from a "pan-Atlantic IC circle" centered in the United States to a "pan-Pacific circle" with Taiwan at its center.

Lu has founded three technology companies in his homeland since a former government minister convinced him to help build the country's nascent IC industry. In an EETimesAsia.com article, he discusses Taiwan's changing role in the global IC market, the importance of intellectual property to generating profit, the entrepreneurial spirit of Taiwanese engineers, and his "pool theory"--of which he says, "the United States has proved that the more open and enjoyable a society is, the more likely it is that all the talent will go there. If you make your pool the cleanest and most beautiful, then people will come over." Read on at (free registration required): the link