Monday, October 31, 2005

My talk this Wednesday


Hybrid Simultaneous Localization and Map Building:
A Natural Integration of Topological and Metric


  • Introduction
  • Environment Modeling
  • Localization and Map Building
  • Experimental Results
  • Conclusions and Outlook

Saturday, October 29, 2005

News: Future smart cars could help to cut accidents

Using peppermint, lavender, citrus scents, vibrating seat belts

Updated: 7:25 a.m. ET Sept. 6, 2005

DUBLIN - Whether it is wafting lavender or citrus scents to calm drivers and keep them awake, or vibrating seat belts to get them to slow down, smart cars in the future could help reduce road accidents.


News: Robot dog: Man's best friend or diet nag?

Updated: 9:07 p.m. ET Aug. 31, 2005

MIT researchers plan to recruit Aibo into the obesity police

LONDON - It could be a dream or a nightmare -- scientists have created a robotic dog that tells you when it's time for your daily walk.


CMU ML lunch: Patient-Specific Predictive Modeling

Speaker: Shyam Visweswaran, University of Pittsburgh
Date: October 31


We investigated two patient-specific and four population-wide machine learning methods for predicting dire outcomes in community acquired pneumonia (CAP) patients. Predicting dire outcomes in CAP patients can significantly influence the decision about whether to admit the patient to the hospital or to treat the patient at home. Population-wide methods induce models that are trained to perform well on average on all future cases. In contrast, patient-specific methods specifically induce a model for a particular patient case. We trained the models on a set of 1601 patient cases and evaluated them on a separate set of 686 cases. One patient-specific method performed better than the population-wide methods when evaluated within a clinically relevant range of the ROC curve. Our study provides support for patient-specific methods being a promising approach for making clinical predictions.
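The contrast between the two method families can be sketched in a few lines (an illustrative toy, not the study's actual models or data: a global least-squares classifier stands in for the population-wide methods, and a lazy k-nearest-neighbor model fit around each test case stands in for the patient-specific ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the CAP data: two features, binary dire-outcome label.
X = rng.normal(size=(1601, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=1601) > 0).astype(int)
X_test = rng.normal(size=(686, 2))
y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)

def population_wide_fit(X, y):
    """Train once on all cases; apply the same model to every patient."""
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Z: (np.c_[Z, np.ones(len(Z))] @ w) > 0.5

def patient_specific_predict(X, y, x, k=25):
    """Lazy local model: majority vote of the k training cases nearest THIS patient."""
    idx = np.argsort(((X - x) ** 2).sum(axis=1))[:k]
    return int(y[idx].mean() > 0.5)

global_model = population_wide_fit(X, y)
acc_pop = (global_model(X_test) == y_test).mean()
acc_ps = np.mean([patient_specific_predict(X, y, x) == t
                  for x, t in zip(X_test, y_test)])
print(f"population-wide: {acc_pop:.2f}  patient-specific: {acc_ps:.2f}")
```

The point is the structural difference: the population-wide model is induced once, while the patient-specific model is induced per test case, around that case.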

Latest News (October 28, 2005)

More NHTSA Collision Avoidance Research Project Results Available
The U.S. National Highway Traffic Safety Administration (NHTSA) has posted four new research reports for download. The reports address the Automotive Collision Avoidance Systems (ACAS) project as well as the Collision Avoidance Metrics Partnership (CAMP).

PReVENT Announces First Results for 3D Camera R&D
The European PReVENT Integrated Project has issued information regarding first results for their 3D CMOS camera research. During the past months, the UseRCams consortium has been working on their first prototype for their 3D CMOS camera based on the UseRCams general specification deliverable.

PReVENT ProFusion2 Sets Timeframe for Fusion Forum
ProFusion2 works on sensor data fusion (SDF), developing a common SDF framework for automotive active safety applications and carrying out research on environment modeling and data fusion algorithms for object tracking. The Fusion Forum with SDF experts has been established and will organize its first one-day open workshop in March 2006 in Brussels.

PReVENT Participates in MADYMO Passive Safety Meeting
In September, European PReVENT representatives participated in the 5th Annual meeting of the MADYMO user community. This event is an annual event gathering a large community of experts on passive safety. PReVENT was invited to give a keynote presentation on the integration of passive and active safety. The speech presented by the active safety community attracted a great deal of attention since this field is seen as the next step up in improving passive safety components.

PReVENT ADASIS Forum Holds First Commercial Vehicle Task Force Meeting
The ADASIS Forum held its first Commercial Vehicle Task Force meeting on 21 September in Gothenburg, Sweden. Commercial vehicle makers, ADAS suppliers, map makers and navigation system suppliers met to discuss the future of digital maps for commercial vehicles and their applications.

PReVENT WILLWARN Shares Smarts with Network on Wheels
Within the European PReVENT integrated project, the WILLWARN subproject is sharing its vehicle-vehicle cooperative technology. A joint meeting recently took place between WILLWARN and members of the German Network on Wheels (NOW) project. At the meeting, the WILLWARN consortium agreed upon forming a joint task force for communication issues. The task force, consisting of WILLWARN communication and application experts and NOW representatives, will focus on the integration of the WILLWARN application's communication needs with the NOW project, as well as the use of NOW communication hardware in WILLWARN.

FMCSA Releases Performance Requirements Docs for Safety Systems
Culminating a two-year process, the U.S. Federal Motor Carrier Safety Administration has posted performance requirements for Forward Collision Warning Systems / Adaptive Cruise Control, Lane Departure Warning Systems, and Vehicle Stability Systems.

Friday, October 28, 2005

ICCV2005 best papers...

Marr Prize

Globally Optimal Estimates for Geometric Reconstruction Problems
Fredrik Kahl, Didier Henrion

Honorable Mention

A Theory of Refractive and Specular Shape by Light-Path Triangulation
Kiriakos N. Kutulakos, Eron Steger

Detecting Irregularities in Images and in Video
Oren Boiman, Michal Irani

On the Spatial Statistics of Optical Flow
Stefan Roth, Michael J. Black

CVIU: Special Issue on Event Detection in Video

Computer Vision and Image Understanding

Volume 96, Issue 2, Pages 97-268 (November 2004)
Special Issue on Event Detection in Video
Edited by Tanveer Syeda-Mahmood, Ismail Haritaoglu and Thomas Huang

Folks, you should take a look at these journal papers.

Thursday, October 27, 2005

Paper: Beyond Mice and Menus

Grosz, Barbara J. 2004. "Beyond Mice and Menus." To appear in Proceedings of the American Philosophical Society. [pdf]

An insightful paper. Anyway, you should read this paper.

Widespread use of the Internet has fundamentally changed the computing situation not only for individuals, but also for organizations. Settings in which many people and many computer systems work together, despite being distributed both geographically and in time, dominate individual use. This major shift in the way people use computers has led to a significant challenge for computer science: to construct computer systems that are able to act effectively as collaborative team members. Teams may consist solely of computer agents, but often include both systems and people. They may persist over long periods of time, form spontaneously for a single group activity, or come together repeatedly. Participation in group activities---whether competitive, cooperative, or collaborative---frequently requires decision-making on the part of autonomous-agent systems or the support of decision-making by people.

In this talk, I will briefly review the major features of one model of collaborative planning, SharedPlans (Grosz and Kraus, 1996,1999), and will describe efforts to develop collaborative planning agents and systems for human-computer communication based on this model. The model also provides a framework in which to raise and address fundamental questions about collaboration and the construction of collaboration-capable agents. In this context, I will discuss recent approaches to commitment management and group decision-making.

Speaker Bio:

Barbara J. Grosz is Higgins Professor of Natural Sciences in the Division of Engineering and Applied Sciences and Dean of Science of the Radcliffe Institute for Advanced Study at Harvard University. Professor Grosz is known for her seminal contributions to the fields of natural-language processing and multi-agent systems. She developed some of the earliest and most influential computer dialogue systems and established the research field of computational modeling of discourse. Her work on models of collaboration helped establish that field of inquiry and provides the framework for several collaborative multi-agent systems and human-computer interface systems. She has been elected to the American Philosophical Society and the American Academy of Arts and Sciences. She is a Fellow of the American Association for Artificial Intelligence, the ACM, and the American Association for the Advancement of Science, recipient of the University of California at Berkeley Computer Science and Engineering Distinguished Alumna Award and of awards for distinguished service from major AI societies. She is also widely respected for her contributions to the advancement of women in science.

MIT Thesis Defense: Learning to Transform Time Series with a Few Examples

Speaker: Ali Rahimi, MIT CSAIL Vision Group
Date: Friday, October 28, 2005

Many problems in machine perception can be framed as mapping one time series to another time series. In tracking, for example, one transforms a time series of observations from sensors to a time series describing the pose of a target. Defining and implementing such transformations by hand is a tedious process, requiring detailed models of the time series involved. I will describe a semi-supervised learning algorithm that learns memoryless transformations of time series from a few example input-output mappings. The algorithm searches for a smooth function that fits the training examples and, when applied to the input time series, produces a time series that evolves according to assumed dynamics. The learning procedure is fast and lends itself to a closed-form solution. I relate this algorithm and its unsupervised extension to nonlinear system identification and manifold learning techniques. I demonstrate it on the tasks of tracking RFID tags from signal strength measurements, recovering the pose of rigid objects, deformable bodies, and articulated bodies from video sequences, and tracking a target in a completely uncalibrated network of sensors. For these tasks, this algorithm requires significantly fewer examples compared to fully-supervised regression algorithms or semi-supervised learning algorithms that do not take the dynamics of the output time series into account.
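The core idea, a function that fits a few labeled examples while its output obeys assumed dynamics, admits a closed form in the linear case. Below is my own toy construction (not Rahimi's algorithm): a linear map w, a scalar output, and first-order dynamics y_{t+1} ≈ a·y_t:

```python
import numpy as np

rng = np.random.default_rng(1)

# Input time series X (two noisy sensor channels); the hidden output obeys
# first-order dynamics y_{t+1} = a * y_t.
T, a, lam = 200, 0.95, 1.0
y_true = a ** np.arange(T)
X = np.c_[y_true + 0.01 * rng.normal(size=T),
          2 * y_true + 0.01 * rng.normal(size=T)]

labeled = [0, 50, 150]          # only three example input-output pairs
Xl, yl = X[labeled], y_true[labeled]

# Fit w to minimize  sum_labeled (w.x_i - y_i)^2
#                  + lam * sum_t (w.x_{t+1} - a * w.x_t)^2   (dynamics term)
# Both terms are quadratic in w, so the minimizer is closed-form.
D = X[1:] - a * X[:-1]
w = np.linalg.solve(Xl.T @ Xl + lam * (D.T @ D), Xl.T @ yl)

y_hat = X @ w
err = np.abs(y_hat - y_true).max()
print(f"max reconstruction error: {err:.3f}")
```

The dynamics term regularizes using the entire unlabeled series, which is why three labeled points suffice here.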

CNN: Microsoft keeping eye on China, India

Wednesday, October 26, 2005; Posted: 11:54 a.m. EDT (15:54 GMT)
access the full article

TEL AVIV, Israel (Reuters) -- Microsoft Corp. Chairman Bill Gates said on Wednesday the software giant faced growing competition from companies in China and India but, for now, the strength in those countries lies in software services.

"Internet search as it is today will be dramatically better in a few years whether it's us or Google, we're both going to be making dramatic improvements there."

Microsoft faces competition in about half the sectors in which it operates. In the other half -- areas such as Internet television and speech recognition -- Microsoft is driving the frontier, Gates said.

The billionaire founder of Microsoft said the company was interested in writing software wherever software could add value.

"That's taken us into the car now. Think about how you can interface with mapping and communications and entertainment in the car," he said.

"It's taken us into the video-game area which is very software-driven although we have to do hardware there as well. It's taken us into the phone."

Microsoft is also seeking to develop "user centric" software platforms that enable people to move between various devices without having to manually move the information, he said.

Monday, October 24, 2005

About my talk

The talk I'll give is about how to recover geometric context from a single image.
Here is a link to the paper,
and here are links to the author's projects.

Sunday, October 23, 2005

Stanford talk: Information Extraction, Social Network Analysis and Joint Inference

Andrew McCallum
October 10, 2005, 4:15PM

Although information extraction and data mining appear together in many applications, their interface in most current systems would better be described as serial juxtaposition than as tight integration. Information extraction populates slots in a database by identifying relevant subsequences of text, but is usually not aware of the emerging patterns and regularities in the database. Data mining methods begin from a populated database, and are often unaware of where the data came from, or its inherent uncertainties. The result is that the accuracy of both suffers, and accurate mining of complex text sources has been beyond reach.

In this talk I will describe work in probabilistic models that perform joint inference across multiple components of an information processing pipeline in order to avoid the brittle accumulation of errors. After briefly introducing conditional random fields, I will describe recent work in information extraction leveraging factorial state representations, object deduplication, and transfer learning, as well as scalable methods of inference and learning.

I will then describe two methods of integrating textual data into a particular type of data mining---social network analysis. The Author-Recipient-Topic (ART) model performs summarization and question routing from large quantities of email or other message data by discovering clusters of words associated with topics, and also role-similarity among entities based on those topics. The Group-Topic (GT) model captures relational data along with accompanying text by discovering how entities fall into groups---capturing the different coalitions that arise dependent on the topic at hand. I will demonstrate this on several decades of voting records in the U.N. and U.S. Senate.

If there is time, I will also give a demo of the new research paper search engine we are creating at UMass.

Joint work with colleagues at UMass: Charles Sutton, Chris Pal, Ben Wellner, Michael Hay, Xuerui Wang, Natasha Mohanty, and Andres Corrada.

Andrew McCallum is an Associate Professor at University of Massachusetts, Amherst. He was previously Vice President of Research and Development at WhizBang Labs, a company that used machine learning for information extraction from the Web. In the late 1990's he was a Research Scientist and Coordinator at Justsystem Pittsburgh Research Center, where he spearheaded the creation of CORA, an early research paper search engine that used machine learning for spidering, extraction, classification and citation analysis. He was a post-doctoral fellow at Carnegie Mellon University after receiving his PhD from the University of Rochester in 1995. He is an action editor for the Journal of Machine Learning Research. For the past ten years, McCallum has been active in research on statistical machine learning applied to text, especially information extraction, document classification, clustering, finite state models, semi-supervised learning, and social network analysis.

MIT CSAIL Talk: Modularity, synchronization, and what we may learn from the brain

Speaker: Jean-Jacques Slotine, Nonlinear Systems Laboratory, MIT
Date: Tuesday, October 25, 2005

Although neurons as computational elements are 7 orders of magnitude slower than their artificial counterparts, the primate brain grossly outperforms robotic algorithms in all but the most structured tasks. Parallelism alone is a poor explanation, and much recent functional modelling of the central nervous system focuses on its modular, heavily feedback-based computational architecture, the result of accumulation of subsystems throughout evolution. We discuss this architecture from a global stability and convergence point of view. We then study synchronization as a model of computations at different scales in the brain, such as pattern matching, temporal binding of sensory data, and mirror neuron response. Finally, we derive a simple condition for a general dynamical system to globally converge to a regime where multiple groups of fully synchronized elements coexist. Applications of such "polyrhythms" to some classical questions in robotics and systems neuroscience are discussed.
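As a toy illustration of synchronization through coupling (my own example, not from the talk): two identical contracting systems with diffusive coupling, whose difference decays to zero once the coupling gain k is large enough:

```python
import numpy as np

# Two identical systems x' = sin(x) - x with diffusive coupling k(y - x):
# the difference x - y contracts to zero for sufficient coupling gain k.
def step(x, y, k, dt=0.01):
    fx, fy = np.sin(x) - x, np.sin(y) - y   # identical individual dynamics
    return x + dt * (fx + k * (y - x)), y + dt * (fy + k * (x - y))

x, y, k = 2.0, -1.5, 3.0
for _ in range(2000):                        # simulate 20 time units (Euler)
    x, y = step(x, y, k)
print(f"|x - y| after 20 time units: {abs(x - y):.6f}")
```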

CMU FRC Seminar: Balancing Profit and Flexibility in a Partially-Committed Market-Based Multi-Agent System

Speaker: E. Gil Jones, PhD Student, Robotics Institute, Carnegie Mellon University
Date: FRIDAY, October 28

Abstract: I am interested in creating a market-based multi-robot coordination mechanism specifically designed for domains where a group of robots actively interact with a human operator - for instance, imagine a group of mining robots carrying out tasks generated dynamically by a human foreman. In these domains I picture a human operator generating a continuous stream of tasks, with the market-based system allocating those tasks to robots in such a way that maximizes a performance metric set by the operator. This performance metric may factor in the relative importance of tasks as well as increased urgency for some tasks. This talk will address the realization of a market-based multi-agent system capable of providing good allocation solutions given such performance metrics. I will discuss my current system implementation and present initial simulation results that illustrate how a market-based approach can provide reasonable solutions for a sample domain. Additionally, I will discuss the difficulties created by moving from a zero-commitment scheme, where robots are not required to complete tasks that they've been contracted to perform, to a partial-commitment scheme, where penalties are assessed for robots who fail to complete tasks. Robots may severely reduce their overall performance by agreeing to highly restrictive tasks; I'll present a learning approach such that robots can learn the value of retaining flexibility, balancing the profit of restrictive tasks versus the possibility of future opportunities.
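A toy single-round version of such a market (illustrative only; the function, costs, and penalty model below are invented here, not the speaker's mechanism): each robot bids its cost plus an expected penalty for tasks it might fail to complete, and the lowest expected bid wins:

```python
# Hypothetical single-round task auction: bid = cost + fail_prob * penalty,
# lowest expected bid wins each task (a zero-lookahead baseline, with the
# commitment penalty folded into the bid).
def allocate(costs, fail_prob, penalty=10.0):
    """costs[r][t]: robot r's cost for task t; fail_prob[r][t]: chance r fails t."""
    n_robots, n_tasks = len(costs), len(costs[0])
    assignment = []
    for t in range(n_tasks):
        bids = [costs[r][t] + fail_prob[r][t] * penalty for r in range(n_robots)]
        assignment.append(min(range(n_robots), key=bids.__getitem__))
    return assignment

costs = [[1.0, 5.0], [4.0, 2.0]]   # 2 robots x 2 tasks
fail  = [[0.0, 0.5], [0.1, 0.0]]
print(allocate(costs, fail))        # → [0, 1]
```

The flexibility question in the abstract is exactly what this baseline ignores: it commits greedily per task with no model of future opportunities.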

Speaker Bio: Gil is a second year Ph.D. student at the Robotics Institute, and is co-advised by Bernardine Dias and Tony Stentz. His primary interest is market-based multi-robot coordination; he also dabbles in human-multi-robot interaction and economic agent theory. He received his BA in Computer Science from Swarthmore College in 2001, and spent two years as a software engineer at Bluefin Robotics - manufacturer of autonomous underwater vehicles - in Cambridge, Mass.

Discovery Channel::Mars Rover Heads for New Terrain

By Irene Mona Klotz, Discovery News

Oct. 21, 2005— The Mars rover Spirit capped a year-long quest to reach the top of a group of hills and has begun climbing down to explore a new region of its Gusev Crater landing site.

The descent could take a month or longer, depending on how many targets Spirit stops to study and how well it continues to operate.

"We're on new ground now, and we're going to be seeing some new sights," said rover principal investigator Steven Squyres with Cornell University. "Spirit has really been on a roll lately."

Getting to the last target on Husband Hill tested the team's mettle. Perched on a steep slope and with loose soil beneath the rover's wheels, rover operators were not sure what would happen when Spirit's sensor-laden arm was extended to study a rock named Hillary.

The team decided to wiggle the rover's wheels to test how stable it was. To scientists' dismay, the rover moved.

"A little motion isn't unexpected when you wiggle wheels on a slope this steep, of course, but it wasn't exactly a confidence-builder," Squyres wrote in his project's Web log.

The team was concerned that if the rover slipped with its arm extended, it might bump the delicate instruments into the rock. After a day of brain-storming, rover operators decided to partially extend the arm and see if Spirit held its ground. The rover was steady and the team collected data for several days.

Scientists were able to determine that the rocks at the top of the hill are nearly indistinguishable from rocks hundreds of yards away, though they are angled quite differently.

"All in all, it was a crucial piece of the puzzle in trying to work out the geology of Husband Hill," Squyres wrote.

Spirit will make its way along a ridge and head toward an area known as "Home Plate," located about a half-mile from the summit of Husband Hill.

"We don't know what this is, but it looks geographically interesting," said science team member Larry Crumpler, with the Museum of Natural History in Albuquerque, N.M. "We think it will help us to understand what the hills are all about."

Ultimately, the team wants to reach a basin-like area south of the Columbia Hills.

Spirit and its identical twin rover Opportunity have been exploring Mars for nearly two years in an effort to determine if and for how long the planet had liquid water, which scientists believe is a key requirement for life.

Opportunity, which is exploring on the other side of the planet in an area known as Meridiani Planum, has recovered from a series of glitches and is headed around Erebus Crater, a shallow depression stretching about 300 meters, or 984 feet, in diameter.

The rover's destination is an area known as Mogollon Rim.

Thursday, October 20, 2005

CMU MISC-Read: Tracking Loose Limbed People

Jake Sprouse

I will present Tracking Loose Limbed People by Leonid Sigal, Sidharth Bhatia, Stefan Roth, Michael Black, and Michael Isard from CVPR'04, in which our heroes combine particle filtering with belief propagation and eigenspace part detection to detect and track a human.

Background papers:

  • Particle Filtering
  • Nonparametric BP
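The first background technique can be sketched in a few lines of Python (a generic bootstrap particle filter for a 1-D random walk, not the paper's loose-limbed tracker):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n=1000, q=0.5, r=0.5):
    """Bootstrap particle filter for a 1-D random walk observed in noise."""
    particles = rng.normal(0.0, 1.0, n)                    # initial belief
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n)      # predict (motion model)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)      # weight by likelihood
        w /= w.sum()
        particles = particles[rng.choice(n, size=n, p=w)]  # resample
        estimates.append(particles.mean())
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0.0, 0.5, 50))   # hidden trajectory
obs = true_path + rng.normal(0.0, 0.5, 50)        # noisy sensor readings
est = particle_filter(obs)
rmse = float(np.sqrt(np.mean((est - true_path) ** 2)))
print(f"tracking RMSE: {rmse:.3f}")
```

The paper's contribution is combining this idea with belief propagation across body parts, which the sketch above does not attempt.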

CMU VASC seminar: Skinning Mesh Animations

Doug L. James
School of Computer Science, Carnegie Mellon University

In this talk, I will present our recent work on parameterization of deformable animations to enable efficient processing. Given a skeleton-free mesh animation, I will present an automatic and robust algorithm to generate a progressive "skinned" mesh approximation--a generalization of techniques used to animate characters in video games. "Skinned mesh animations" provide optimized hardware rendering, level of detail, and output-sensitive collision detection for mesh animations. The key insight of our algorithm is that mean shift clustering of high-dimensional triangle rotation sequences can be used for efficient and robust estimation of near-rigid mesh structure.
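Mean shift itself is simple to sketch (a generic version on 2-D points rather than triangle rotation sequences; the `bandwidth` and mode-merging threshold below are arbitrary choices):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Mean shift: move each point to the Gaussian-weighted mean of its
    neighborhood until it reaches a mode of the estimated density."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            d2 = ((points - p) ** 2).sum(axis=1)
            wts = np.exp(-d2 / (2 * bandwidth ** 2))
            shifted[i] = (wts[:, None] * points).sum(axis=0) / wts.sum()
    # Points that converged to the same mode form one cluster.
    modes, labels = [], []
    for p in shifted:
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels.append(j)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return np.array(labels), np.array(modes)

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
labels, modes = mean_shift(pts, bandwidth=1.0)
print(len(modes), "clusters found")
```

Unlike k-means, the number of clusters is not specified in advance; it falls out of the bandwidth, which is one reason it suits the unstructured rotation sequences described in the talk.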

Related Publication:
Skinning Mesh Animations,
Doug L. James and Christopher D. Twigg,
ACM Transactions on Graphics, 24(3), pages 399-407, July, 2005.

ABOUT THE SPEAKER: Doug L. James has been an Assistant Professor of Computer Science and Robotics at Carnegie Mellon University since Fall 2002. He received his Ph.D. from the Institute of Applied Mathematics at the University of British Columbia advised by Dinesh K. Pai. Doug is a recipient of an NSF Early Career Development Award for his work on "Precomputing Data-driven Deformable Systems for Multimodal Interactive Simulation," and was chosen as one of Popular Science magazine's "Brilliant 10" young scientists for 2005.

Wednesday, October 19, 2005

CMU FRC seminar: Learning to Select Skills within a Dynamic Environment

Speaker: Brenna Argall, PhD Student, Robotics Institute, Carnegie Mellon University
Date: Thursday, October 20

By augmenting a robot's reasoning with learning, we hope to promote its ability to adapt and respond intelligently within dynamic environments. Our chosen domain is Robocup robot soccer, in which our Segway robots perceive, reason, and act under the highly dynamic and adversarial constraints of a soccer game. In particular, we are interested in applying learning to the question of soccer skill selection; that is, to the choice of which action, or sequence of actions, to execute to attain a specific goal. Expert learning easily extends to this problem, where each expert recommends a single soccer skill. In this talk we introduce our experts learning algorithm, dEXP3, a modification of EXP3 (Auer et al., 1995) that enhances flexibility within dynamic environments. The modification present in dEXP3 explicitly handles the case where a previously learned best expert begins to fail. We present our results from implementation both in simulation and on the robots. With comparisons to the foundation algorithm EXP3, we show that, in response to environment variations, our enhanced algorithm exhibits faster adaptability and subsequently better performance.
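For reference, the foundation algorithm EXP3 (not the speaker's dEXP3 variant) can be sketched as follows; the reward values and parameters below are arbitrary:

```python
import math
import random

def exp3(rewards, gamma=0.1, rounds=5000, seed=0):
    """EXP3 (Auer et al.): adversarial multi-armed bandit.
    rewards[i] is the (here: fixed) success probability of arm i in [0, 1]."""
    rng = random.Random(seed)
    k = len(rewards)
    w = [1.0] * k                     # one weight per expert/arm
    pulls = [0] * k
    for _ in range(rounds):
        total = sum(w)
        # Mix the weight-proportional distribution with uniform exploration.
        p = [(1 - gamma) * wi / total + gamma / k for wi in w]
        arm = rng.choices(range(k), weights=p)[0]
        x = 1.0 if rng.random() < rewards[arm] else 0.0  # Bernoulli payoff
        w[arm] *= math.exp(gamma * (x / p[arm]) / k)     # importance-weighted update
        pulls[arm] += 1
    return pulls

pulls = exp3([0.2, 0.5, 0.8])
best = max(range(3), key=lambda i: pulls[i])
print("pulls per arm:", pulls, "-> favors arm", best)
```

Because weights only ever grow, plain EXP3 is slow to abandon a formerly good arm; handling that case is exactly the failure mode the talk's dEXP3 modification targets.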

Speaker Bio:
Brenna Argall is a second year Ph.D. student in the Robotics Institute, affiliated with the CORAL Research Group and co-advised by Dr. Brett Browning and Prof. Manuela Veloso. Her research interests lie with robot autonomy and heterogeneous team coordination, and in particular, with how learning can be used to improve autonomous robot decision making in dynamic environments. Prior to joining the Robotics Institute, Brenna conducted research in functional brain imaging within the Laboratory of Brain and Cognition at the National Institutes of Health. She received her B.S. in Mathematics in 2002 from Carnegie Mellon University.

Tuesday, October 18, 2005

MIT talk: Understanding Video: Research and Development in Applied Computer Vision

Speaker: Matthew Antone, BAE Systems Advanced Information Technologies
Date: Wednesday, October 19, 2005

The past decade has seen great advances in the theory and practice of computer vision. As algorithm maturity and computational power have grown, so also has the demand for robust application of vision techniques in real-world, deployed systems. In the first part of this talk, I will present high-level overviews of a few video-based projects currently under development in our research group. These include tracking of vehicles and people from stationary and moving cameras, and extraction of salient features for object recognition and classification, with emphasis on the implementation of working prototypes.

Camera calibration is vital to the success of many such applications. For example, rectification of perspective effects normalizes size and velocity measurements, while recovery of pose situates disparate cameras and objects in a consistent coordinate frame. However, physical access to the site or to the sensors may be limited, precluding use of explicit calibration patterns. The second part of the talk will describe efficient techniques for automatic recovery of camera intrinsic and extrinsic parameters based upon phenomena observed over time, including object trajectories and cast shadows.

Newsweek: Turning the Car Keys Over to the Car

By Steven Levy

Oct. 24, 2005 issue - At first, Sebastian Thrun didn't feel quite comfortable behind the wheel of the modified Volkswagen Touareg R5 named Stanley. That's understandable, because he wasn't driving. Stanley was. As the Stanford University entrant in the DoD's Defense Advanced Research Projects Agency's (DARPA) $2 million Grand Challenge, Stanley was designed to compete against 26 other driverless robot vehicles in a race through 132 miles of hostile terrain in the Mojave Desert. On test drives (the real race would be run with no passengers), Thrun had a red panic button to stop the car when Stanley failed to notice a sharp turn, or swerved toward the brush to avoid an obstacle that wasn't there. After months of software tweaking, Stanley got so good at driving that Thrun, head of Stanford's Artificial Intelligence Lab, relaxed, even allowing himself to gab on a cell phone or consult his maps while Stanley made its way through tough desert roads. Thrun and his team began to think that on Oct. 8, Stanley might complete the DARPA challenge.

That was an achievement that many observers considered possible only in the distant future, if ever. Computers, despite success in e-commerce, data mining and chess, have behaved like utter idiots when it came to getting from point A to point B in the real world. Only a year and a half ago, in the first DARPA challenge, the robots promptly drove into rocks or bushes or simply died at the starting gate. The best effort was from a Humvee that went seven miles before steering itself into a drop-off, its front wheels grinding helplessly in the air. This year was different—and historic. There was a winner: Stanley, which completed the course in six hours and 54 minutes.

What's more, four other empty vehicles also triumphantly made it to the finish line, avoiding gullies, making tight turns, scooting through tunnels and, finally, navigating the treacherous twists of Beer Bottle Pass on the route's home stretch—all without a hint of human intervention.

What was Stanley's secret? According to Thrun and Mike Montemerlo, a postdoc who was the software guru for the Stanford team, this robot had the ability to learn about the road. Its sensors gathered information about what was underneath its front bumper and used that knowledge to figure out what was road and what was not road for hundreds of feet ahead. Also, when it came to figuring out what should be avoided and what could be ignored, Stanley was trained to emulate the behavior of human drivers. After that, says Thrun, "the false positives [incorrectly identifying an obstacle] went from 12 percent to 1 in 50,000."

Thrun is the first to admit that Stanley and his robot kin aren't ready to negotiate the "dynamic environments" of L.A. freeways or New York City cross streets—yet. "It's like walking up to the Wright brothers and asking them if their plane could fly across the Atlantic," he says. He believes that winning the challenge is a milestone toward a development he thinks is inevitable: the day we'll be able to turn the keys of the car over to... the car. In a few years, he predicts, we won't drive into the parking garage—we'll get out and let the Chevy climb the ramps and squeeze into a space by itself. Eventually—20 years? 30 years?—you're reading the paper during the commute, and on family trips you're in the back seat with the kids, watching a DVD. Meanwhile, the Pentagon is more than eager to make use of self-driving robots—that, after all, was DARPA's explicit objective. The goal is to have a third of the military's land vehicles driving themselves by 2015. For human beings driving in convoys in Iraq, the robots can't come soon enough.

Monday, October 17, 2005

Human Perception

The related courses:

Sunday, October 16, 2005

Group Meeting Talk: SIFT Algorithm

Distinctive image features from scale-invariant keypoints
  • Motivation
  • Scale Space
  • SIFT Algorithm
  • Applications
  • Conclusions

Other links:
Stanford Project
Application: Panorama Maker (Autopano-SIFT)
Author: David Lowe
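The scale-space step of SIFT can be sketched minimally: build a Gaussian pyramid, take differences of adjacent levels (difference-of-Gaussians, DoG), and look for extrema across both space and scale. A toy single-octave version on a synthetic blob (parameters are illustrative; real SIFT adds keypoint refinement, orientation assignment, and descriptors):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, ((0, 0), (r, r)), mode="reflect")          # blur rows
    img = np.stack([np.convolve(row, k, mode="valid") for row in pad])
    pad = np.pad(img, ((r, r), (0, 0)), mode="reflect")          # blur columns
    return np.stack([np.convolve(col, k, mode="valid") for col in pad.T]).T

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0                       # one bright blob
sigmas = [1.6 * (2 ** (i / 3)) for i in range(4)]
blurred = [gaussian_blur(img, s) for s in sigmas]
dog = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])

# A keypoint is a pixel that beats its 26 neighbors in the 3x3x3
# scale-space cube; here we just locate the strongest DoG response.
s, y, x = np.unravel_index(np.abs(dog).argmax(), dog.shape)
print(f"strongest DoG extremum at ({y}, {x}), scale level {s}")
```

The strongest response lands on the blob, illustrating why DoG extrema make good scale-invariant keypoints.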

CMU RI Defense: A Latent Cause Model of Classical Conditioning

Aaron Courville
Robotics Institute, Carnegie Mellon University

Classical conditioning experiments probe what animals learn about their environment. This thesis presents an exploration of the probabilistic, generative latent cause theory of classical conditioning. According to the latent cause theory, animals assume that events within their environment are attributable to a latent cause. Learning is interpreted as an attempt to recover the generative model that gave rise to these observed events. In this thesis, I apply the latent cause theory to three distinct areas of classical conditioning, in each case offering a novel account of empirical phenomena.

In the first instance, I develop a version of a latent cause model that explicitly encodes a latent timeline to which observed stimuli and reinforcements are associated, thus preserving their temporal order. In this context, the latent cause model is equivalent to a hidden Markov model. This model is able to account for a theoretically challenging set of experiments which collectively suggest that animals encode the temporal relationships among stimuli and use this representation to predict impending reinforcement.
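Since the timeline version of the latent cause model is equivalent to a hidden Markov model, the basic inference step is the forward algorithm, which computes the probability of an observed stimulus sequence by summing over latent states (toy numbers below, not from the thesis):

```python
import numpy as np

def forward(pi, A, B, obs):
    """HMM forward algorithm: P(observations), summing over latent states.
    pi: initial state probs; A[i, j]: transition probs; B[i, o]: emission probs."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate then weight by evidence
    return alpha.sum()

# A toy "latent cause" with two hidden states and two observable stimuli.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
p = forward(pi, A, B, [0, 0, 1])
print(f"P(obs) = {p:.4f}")
```

The recursion costs O(T·S²), versus the O(Sᵀ) cost of summing over every latent state path explicitly.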

    Next, I explore the effects of inference over an uncertain latent cause model structure. A key property of Bayesian structural inference is the tradeoff between the model complexity and data fidelity. Recognizing the equivalence between this tradeoff and the tradeoff between generalization and discrimination found in configural conditioning suggests a statistically sound account of these phenomena. By considering model simulations of a number of conditioning paradigms (including some not previously viewed as "configural'', I reveal behavioral signs that animals employ model complexity tradeoffs.

    Finally, I explore the consequences of merging latent variable theory with a generative model of change. A model of change describes how the parameters and structure of the latent cause model evolve over time. The resulting non-stationary latent cause model offers a novel perspective on the factors that influence animal judgments about changes in their environment. In particular, the model correctly predicts that the introduction of an unexpected stimulus can spur fast learning and eliminate latent inhibition.

    This thesis offers a unified theoretical framework for classical conditioning. It uses state-of-the-art machine reasoning techniques, including reversible-jump MCMC and particle filtering, to explore a novel theoretical account of a wide range of empirical phenomena, many of which have otherwise resisted a computational explanation.

    A copy of the thesis oral document can be found at

    Friday, October 14, 2005

    CMU ML Lunch: Graphs Over Time: Densification Laws, Shrinking Diameters, Explanations And Realistic Generators

    Speaker: Jure Leskovec, CALD, CMU
    Date: October 17

    How do real graphs evolve over time? What are "normal" growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.

    We studied a wide range of real graphs, and we observed some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n))).
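The densification exponent a in e(t) ∝ n(t)^a can be estimated from graph snapshots with a log-log fit; a minimal sketch on synthetic data (not the paper's datasets):

```python
import numpy as np

def densification_exponent(nodes, edges):
    """Fit e(t) ~ n(t)^a by least squares in log-log space.

    nodes, edges: node and edge counts at successive snapshots.
    Returns the exponent a; a > 1 indicates densification.
    """
    a, _ = np.polyfit(np.log(nodes), np.log(edges), 1)
    return a

# Synthetic snapshots generated with e = n^1.5 (illustrative data only).
n = np.array([100, 200, 400, 800, 1600])
e = (n ** 1.5).astype(int)
print(round(densification_exponent(n, e), 2))  # close to 1.5
```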

    What underlying process causes a graph to systematically densify, and to experience a decrease in effective diameter even as its size increases? The existing graph generation models do not exhibit these types of behavior, even at a qualitative level. In most cases they are also very complicated to analyze mathematically.

    So we propose a graph generator that is mathematically tractable and matches this collection of properties. The main idea is to use a non-standard matrix operation, the Kronecker product, to generate graphs that we refer to as "Kronecker graphs". We show that Kronecker graphs naturally obey all the properties; in fact, we can rigorously prove that they do so. We also provide empirical evidence showing that they can mimic several real graphs very well.
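In its deterministic form, a Kronecker graph is just an iterated Kronecker power of a small initiator adjacency matrix; a minimal sketch (the initiator values are illustrative, not taken from the paper):

```python
import numpy as np

def kronecker_graph(initiator, k):
    """Return the k-th Kronecker power of an initiator adjacency matrix.

    Iterating the Kronecker product grows the graph self-similarly;
    the number of nonzero entries is raised to the k-th power, so the
    graph densifies as it grows.
    """
    A = initiator
    for _ in range(k - 1):
        A = np.kron(A, initiator)
    return A

G1 = np.array([[1, 1], [1, 0]])   # 2-node initiator (illustrative)
G3 = kronecker_graph(G1, 3)       # 8 x 8 adjacency matrix
```

The stochastic variant used for fitting real graphs replaces the 0/1 entries with edge probabilities; the growth mechanics are the same.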

    Paper: Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations. Jurij Leskovec, Jon Kleinberg, Christos Faloutsos. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2005), Chicago, IL, USA, 2005. Winner of the Best Research Paper Award.

    CMU talk: NICT and Natural Language Processing

    Dr. Makoto Nagao, President of National Institute of Information and Communications Technology (NICT), Japan
    LTI seminar
    Abstract: The National Institute of Information and Communications Technology (NICT) is the sole national institute for research and funding in the area of information and communications. Its vision is to realize "universal communication" on the Earth. It has been the central research group realizing the governmental plan "e-Japan" (2001-2005), and will keep the same position in the government's next five-year plan, "u-Japan" (2006-2010), a step toward realizing universal communication. NICT's research activity includes wireless communication and sensing, ultra-high-speed network technology (particularly optical communication technology), information security technology, and human communication technology such as natural language processing, information retrieval, human interfaces, virtual reality, and urban information robotic systems. The talk will consist of a general introduction to the above activities and the future direction of example-based machine translation and information retrieval.

    About the speaker: Dr. Makoto Nagao was a professor at Kyoto University, where he conducted research on natural language processing, image processing, and digital library systems. He was the originator of example-based machine translation. After serving as President of Kyoto University for six years, he has been the President of NICT since 2004. He has received many awards, including the Japan Prize, the ACL Lifetime Achievement Award, and the Medal of Honor of IAMT.

    Thursday, October 13, 2005

    Intel Research Distinguished Lectures

    Title: Computer vision: Progress, Fun, and Usefulness
    Speaker: Takeo Kanade

    Abstract: Vision is one of the first areas that Artificial Intelligence tackled. After some stagnation due to the failure of an earlier "let's-program-what-I-think-I-am-doing" approach, computer vision has made substantial progress recently, thanks to several new approaches: physics-based, statistics-based, view-based, system-based, and sample-based. I will review the progress, fun, and usefulness that the computer-vision field has brought about, and argue that there is an opportunity to renew the tie between vision and the search problem.

    Bio: Takeo Kanade is the U. A. and Helen Whitaker University Professor of Computer Science and Robotics at Carnegie Mellon. He received his Doctoral degree in Electrical Engineering from Kyoto University in 1974. He joined Carnegie Mellon in 1980. He was the Director of the Robotics Institute from 1992 to 2001. Dr. Kanade works in multiple areas of robotics: computer vision, sensors, multi-media, autonomous ground and air mobile robots, and medical robotics. He has been the principal investigator of more than a dozen major vision and robotics projects at Carnegie Mellon. Dr. Kanade has been elected to the National Academy of Engineering, and the American Academy of Arts and Sciences. He is a Fellow of the IEEE, a Fellow of the ACM, a Founding Fellow of the American Association of Artificial Intelligence (AAAI), and the former and founding editor of the International Journal of Computer Vision. He has received several awards, including the C&C Award, the Joseph Engelberger Award, the FIT Funai Accomplishment Award, the Allen Newell Research Excellence Award, the JARA Award, and the Marr Prize Award.

    Title: Research at the Robotics Institute
    Speaker: Matthew T. Mason

    Abstract: The Robotics Institute has been conducting research for 25 years, and has grown to about 400 faculty, technical staff, and graduate students, with a sponsored research budget of over 45 million dollars. The diversity of our research activity far exceeds the popular conception of robotics. Robotics research is deeper than just studying robots, and broader than just building robots. Our most fundamental work addresses problems that robots and animals share: perception of one's surroundings; planning actions; real-time sensor-based control of one's actions; and communication and coordination with other agents. The underlying technologies have applications that include advanced user interfaces, entertainment, education, security, and many other areas. This talk will include a sample of Robotics Institute research chosen to illustrate the full depth and breadth of robotics research.

    Bio: Matthew T. Mason earned the BS, MS, and PhD degrees in Computer Science and Artificial Intelligence at MIT, finishing his PhD in 1982. Since that time he has been on the faculty at Carnegie Mellon University, where he is presently Professor of Computer Science and Robotics, and Director of the Robotics Institute. His research interests are in robotic manipulation, mobile robot error recovery, mobile manipulation, and robotic origami. He is co-author of "Robot Hands and the Mechanics of Manipulation" (MIT Press 1985), co-editor of "Robot Motion: Planning and Control" (MIT Press 1982), and author of "Mechanics of Robotic Manipulation" (MIT Press 2001). He is a winner of the System Development Foundation Prize, a Fellow of the AAAI, and a Fellow of the IEEE.

    Monday, October 10, 2005

    MIT CSAIL Seminar: From Continuous Models to Discrete Computations

    Speaker: Peter Schröder , CalTech
    Date: Friday, October 14 2005
    Relevant URL:

    Modeling the shape and physical behavior of the world around us has a long and illustrious history. In fact, much of classical differential geometry came about through the study of mathematical models for physical objects and their properties. With the advent of computers it became possible to use this machinery for numerical computation. Unfortunately, one must turn continuous mathematical models into discretized equations for this purpose. This conversion often loses much of the essential structure of the underlying equations. For example, the simulation of a rigid body in free motion may inexplicably gain or lose momentum.

    In my talk I will give an overview of recent work at Caltech which aims to remove this distinction between mathematical models and computation by formulating the entire machinery of differential geometry and calculus in a discrete setting from the very start. This often leads to much simpler, cleaner, and more stable algorithms which can be designed to have the same symmetries and conserved quantities as the continuous systems one wishes to model.

    I will illustrate these ideas with a number of applications ranging from geometric modeling to fluid simulation.

    Joint work with Mathieu Desbrun, Jerry Marsden, Alexander Bobenko, Boris Springborn, Yiying Tong, Eva Kanso, Sharif Elcott, and Liliya Kharevych.

    MIT CSAIL seminar: The PlaceLab: What it does, what's been done with it, and how you can use it for your own research

    Speaker: Stephen Intille , MIT House_n
    Date: Friday, October 14 2005

    The PlaceLab is a sensor-enabled live-in laboratory for the study of people and technologies in the home setting. The facility is a 1000 sq. foot condominium in a residential neighborhood in Cambridge. Volunteer research participants live in the PlaceLab for days or weeks at a time, treating it as a temporary home. Meanwhile, sensing devices integrated into the fabric of the architecture record a detailed description of their activities. The facility generates sensor and observational datasets that can be used for research in ubiquitous computing and other fields where domestic contexts impact behavior. I will describe the design and operation of the PlaceLab, how the MIT House_n group has been using it for sensor development and exploratory evaluation of preventive healthcare technologies, and (most importantly) how you might be able to exploit the facility for your own research.

    Bio: Stephen Intille, Ph.D., is Technology Director of the House_n Consortium in the MIT Department of Architecture. His research is focused on the development of context-recognition algorithms and interface design strategies for ubiquitous computing environments and devices. In current work he is developing systems for preventive health care that support healthy aging and well-being in the home by motivating longitudinal behavior change. He received his Ph.D. from MIT in 1999 working on computational vision at the MIT Media Laboratory, an S.M. from MIT in 1994, and a B.S.E. degree in Computer Science and Engineering from the University of Pennsylvania in 1992. He has published research on computational stereo depth recovery, real-time and multi-agent tracking, activity recognition, perceptually-based interactive environments, and technology for preventive healthcare. Dr. Intille has been principal investigator on two NSF ITR grants focused on automatic activity recognition from sensor data in the home, as well as the MIT principal investigator on sensor-enabled health technology grants from Intel, the National Institutes of Health, and the Robert Wood Johnson Foundation. He received an IBM Faculty award in 2003.

    Home page:


    Author: 詹瑜璋, Institute of Earth Sciences, Academia Sinica

    To represent the three-dimensional form of the terrain effectively, one needs not only high-resolution aerial photographs or satellite imagery but also a high-precision, high-resolution digital elevation model (DEM). For most applications, aerial photographs or high-resolution satellite images already provide important and sufficient information, such as the distribution of roads, rivers, forests, buildings, coastlines, landslide areas, and farmland. However, aerial photographs and satellite images provide only two-dimensional data; surface elevation data must be obtained with other techniques, for example by using traditional paired aerial photographs to produce a digital elevation model.


    More details are in the PDF file.

    CNN: Four vehicles finish in $2 million robot race

    Sunday, October 9, 2005; Posted: 1:04 p.m. EDT (17:04 GMT)

    PRIMM, Nevada (AP) -- Four robotic vehicles finished a Pentagon-sponsored race across the Mojave desert Saturday and achieved a technological milestone by conquering steep drop-offs, obstacles and tunnels over a rugged 132-mile course without a single human command.
    The vehicles, guided by sophisticated software, gave scientists hope that robots could one day wage battles without endangering soldiers.
    "The impossible has been achieved," cried Stanford University's Sebastian Thrun, after the university's customized Volkswagen crossed first. Students cheered, hoisting Thrun atop their shoulders.
    Also finishing were a converted red Hummer named H1ghlander and a Humvee called Sandstorm, both from Carnegie Mellon University. The Stanford robot, dubbed Stanley, overtook the top-seeded H1ghlander at the 102-mile mark.
    "I'm on top of the world," said Carnegie Mellon robotics professor William "Red" Whittaker, who said a mechanical glitch allowed Stanley to pass H1ghlander.
    The sentimental favorite, a Ford Escape Hybrid built by students in Metairie, Louisiana, was the fourth vehicle to finish Saturday. The team lost about a week of practice, and some members lost their homes, when Hurricane Katrina blew into the Gulf Coast.


    Univ. of Sydney Seminar: Google Maps -- Organizing the World's Information, Geographically.

    Speaker: Lars Eilstrup Rasmussen, Google
    Time: 12 October 2005, 2-3pm
    Location: Sydney Uni, CARSLAW LECTURE THEATRE CAR173

    I will talk about the many pieces of the puzzle comprising Google Maps. We built, for example, the Google Maps site as a single page consisting almost entirely of JavaScript. (The acronym "AJAX" was later coined to describe this approach.) I will discuss the pros and cons of AJAX, and delve into some particular technical challenges we had to meet. I will also give a high-level overview of the challenges involved in working with spatial data: making it searchable, routable, and browsable. The field is relatively new to Google, and most of the challenges still lie ahead.

    Lars Eilstrup Rasmussen is a member of Google's technical staff and the lead engineer of the team that created Google Maps. He currently works out of Google's Sydney office and is actively working to expand Google's engineering presence in Australia.
    Lars holds a Ph.D. in theoretical computer science from the University of California at Berkeley. In early 2003, he co-founded with his brother Jens Eilstrup Rasmussen a mapping-related startup, Where 2 Technologies, which was acquired by Google in October of 2004.

    Sunday, October 09, 2005

    My talk is about the Camera-Projector System.

    The paper's title is
    "Smarter Presentations: Exploiting Homography in Camera-Projector Systems".
    You can download it here.

    My presentation outline:
    • Introduction to Camera-Projector System
    • Projector Screen Problems
    • Projector-Camera Homography
    • Modeling Projector-screen Distortions and Calibrating
    • Conclusion
    • Demo
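Once the projector-camera homography H is known, mapping points between projector and camera coordinates is a single matrix operation; a minimal sketch with a made-up H (the paper estimates H from point correspondences):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography using homogeneous coordinates.

    In a camera-projector system, H relates projector pixels to their
    positions in the camera image; inverting it lets the projector
    pre-warp slides so they appear undistorted on the screen.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]               # divide out scale

# A hypothetical homography: scale by 2 and translate by (10, 5).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
warped = apply_homography(H, corners)
```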

    Friday, October 07, 2005

    Stanford Seminar: Modeling Displays and the Human Eye

    Michael Deering
    October 3, 2005, 4:15PM
    Hewlett (TCSeq) 200
    This talk will describe a simulation of modern display devices projecting light onto a new synthetic model of human eye cones. In a first step all 5 million cones in the human retina are grown in a mosaic simulation based on known biological data. The optical simulation is carried out at a deep level, with each individual photon emitted by a display device effectively modeled as a wavefront shape through the eye's optical system, and then interacting with a per-cone custom aperture shape before possible photoisomerization. The model is intended to be used to better understand the interaction between display pixel spatial-temporal structure and human perceptual resolution. This talk is the extended version of my SIGGRAPH 2005 paper presentation "A Photon Accurate Model of the Human Eye" plus my SIGGRAPH 2005 sketch presentation "A Human Eye Retinal Cone Synthesizer".

    MIT HCI Seminar: Automatic Music Similarity Measures

    Speaker: Beth Logan , Hewlett Packard
    Date: Friday, October 7 2005

    It would be an incredible understatement to say that there have been large changes in the music industry in recent years. We are moving toward a future in which anyone can publish music and expect it to be available to everybody. In addition, we can expect all previously published music to be accessible online. Improved search techniques will be needed to enable consumers to find music of interest in these vast repositories. Automatic determination of similarity between artists and songs is at the core of such algorithms, since it provides a scalable way to index and recommend music.

    In this talk we describe several efforts to automatically determine similarity between artists and songs. The first is based on acoustic properties of the music, and the second on analysis of the lyrics. Results on the uspop2002 database show that acoustic-based similarity outperforms that based on lyrics. However, the errors made by each technique are not randomly distributed, suggesting that the two techniques could be profitably combined.

    A sub-theme of this presentation will be evaluation techniques in the emerging field of music information retrieval.

    Bio: Beth Logan received the B.Sc. and B.E. degrees from the University of Queensland, Australia, in 1990 and 1991 respectively. She received the Ph.D. in engineering from the University of Cambridge, United Kingdom, in 1998, completing a dissertation on speech enhancement. Since 1998, she has been a research scientist at Hewlett Packard's Cambridge Research Laboratory in Cambridge, Massachusetts. Her work there has focused on indexing of speech and music, medical informatics, and computational biology.

    CMU FRC Seminar: Multi-Model Motion Tracking under Multiple Team-member Actuators

    Speaker: Yang Gu, PhD Student, Computer Science Department, Carnegie Mellon University
    Date: Thursday, October 13
    Abstract: Robots need to track objects. Tracking in essence consists of using sensory information, combined with a motion model, to estimate the position of a moving object. Tracking performance depends critically on the accuracy of the motion model and of the sensory information. When tracking is performed by a robot executing specific tasks that act on the object being tracked, such as a Segway RMP soccer robot grabbing and kicking a ball, the motion model of the object becomes complex and dependent on the robot's actions. In this talk, I will describe our tracking approach, which switches among target motion models as a function of the robot's actions.

    Interestingly, when multiple team-members can actuate the object being tracked, the motion can become even more discontinuous and nonlinear. I will report our recent tracking approach that can use a dynamic multiple motion model based on a team coordination plan. I will present the multi-model probabilistic tracking algorithms in detail and present empirical results both in simulation and in a human-robot Segway soccer team.

    Discovery News: New Cell Phone Could Reduce Car Crashes

    By Tracy Staedter, Discovery News
    Oct. 6, 2005— A new phone in development at Motorola could make driving safer.

    The prototype, nicknamed the "polite phone," is able to interpret driving conditions and decide whether to let calls through to the driver, send them to voice mail, or even contact 911.
    "People are using various devices in automobiles in ways that may or may not affect driver focus," said Mike Gardner, director of intelligent systems research at Motorola's Driving Simulator Lab in Tempe, Ariz.
    "So we asked, 'Is there any way possible that we could cause the devices themselves to change their behavior depending on the driving situation?'" Gardner said.
    The answer was a phone that taps into a car's computer system. It starts with Motorola-designed software installed on the car's computer that can analyze driving conditions based on such things as speed, braking, acceleration and turn signals.
    That data is funneled through a small electronic component — about an inch-and-a-half cube — which can be plugged into one of the car's computer ports.
    Information about the driving conditions is relayed from the component to the phone via a wireless signal. Software on the cell phone uses the data to control how the phone operates inside the car.
    For example, as soon as the driver sits in the car, the phone will automatically switch to speaker, allowing hands-free communication.
    As the driver cruises along at a steady speed, the phone will filter out undesired or unnecessary calls, as predetermined by the driver.
    If the car goes into a complex driving maneuver, such as sudden braking or turning, the cell phone will automatically send any incoming calls to voice mail.
    If the air bag deploys, the phone will dial 911 and leave the line open.
    "There is a lot of evidence that driving and using the phone increases the risk of driving by a factor of four," said Paul Green, research professor and leader of the Driver Interface Group at the University of Michigan Transportation Research Institute.
    But people do it anyway, he said. "Therefore, building a system for unwise behavior is really a good thing to do."
    The challenges, said Green, involve figuring out what information to process from the car's computer and how.
    A signal from the computer can specify steering angle, speed, tire angle, and GPS data, for example, but how do the numbers and the endless combination of them indicate a potentially dangerous driving maneuver?
    More studies involving human drivers need to be carried out in order to correlate the human response with the computer data.
    According to Gardner, Motorola's next step is to bring in large numbers of subjects and have them use the concept in different driving situations in order to quantify the impact the polite phone may have on safety.
    Gardner will be presenting the polite phone this week at the International Conference on Distracted Driving in Toronto.

    CMU VASC seminar: Building Classification Cascades for Visual Identification from One Example

    Andras Ferencz
    UC Berkeley (now at Mobileye Vision Technologies)

    I will describe our effort to solve the problem of object identification (OID), a specialized form of recognition in which the category is known (for example, cars or faces) and the algorithm recognizes an object's exact identity. Two special challenges characterize OID: (1) inter-class variation is often small (many cars look alike) and may be dwarfed by illumination or pose changes; (2) there may be many classes but few, or just one, positive "training" examples per class.

    Due to (1), a solution must locate possibly subtle object-specific salient features (a door handle) while avoiding distracting ones (a specular highlight). However, (2) rules out direct techniques of feature selection. I will describe an on-line algorithm that takes one query image from a known category and builds an efficient "same" vs. "different" classification cascade by predicting the most discriminative feature set for that object. Our method not only estimates the saliency and scoring function for each candidate feature, but also models the dependency between features, building an ordered feature sequence unique to a specific query image, maximizing cumulative information content. Learned stopping thresholds make the classifier very efficient. To make this possible, category-specific characteristics are learned automatically in an off-line training procedure from labeled image pairs of the category, without prior knowledge about the category.

    Andras Ferencz, Erik Learned-Miller, Jitendra Malik. Building a Classification Cascade for Visual Identification from One Example.
    Draft: submitted to ICCV 2005. PDF, Project Page

    CMU ML Lunch: Nonmyopic Value of Information in Graphical Models

    Speaker: Andreas Krause, CSD, CMU

    Title: Nonmyopic Value of Information in Graphical Models
    Date: October 10

    Abstract: In decision making under uncertainty, where one can choose among several expensive queries, it is a central issue to decide which variables to observe in order to achieve the most effective increase in expected utility. This problem has previously only been approached myopically, without any known performance guarantees. In this talk, I will present efficient nonmyopic algorithms for selecting an optimal subset of observations and for computing an optimal conditional plan for a class of graphical models containing hidden Markov models. I will also show how our methods can be used for interactive structured classification and for sensor scheduling in a civil engineering domain. Many graphical-model tasks that can be efficiently solved for chains can be generalized to polytrees. I will present surprising hardness results, showing that the optimization problems are wildly intractable (NP^PP-complete) even in the case of discrete polytrees. Addressing these theoretical limits, I will present efficient approximation algorithms for selecting informative subsets of variables. Our algorithms are applicable to a large class of graphical models, and provide a constant-factor approximation guarantee of 1-1/e, which is provably the best constant factor achievable unless P = NP. I will sketch how our methods can be extended to optimal experimental design in Gaussian processes, and I will present an extensive evaluation of our algorithms on several real-world data sets.
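For monotone submodular objectives, the 1-1/e guarantee mentioned above is attained by simple greedy selection (Nemhauser et al.); a toy sketch with a made-up coverage utility, not the paper's sensor model:

```python
def greedy_select(candidates, utility, k):
    """Greedily pick k observations maximizing a set-utility function.

    For a monotone submodular utility, the greedy solution is within a
    factor 1 - 1/e of the optimal size-k subset. 'utility' maps a set
    of candidate names to a score.
    """
    chosen = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: utility(chosen | {c}) - utility(chosen))
        chosen.add(best)
    return chosen

# Toy coverage utility: each sensor observes a set of locations (made-up data).
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
util = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
picked = greedy_select(coverage, util, 2)   # picks the two best-covering sensors
```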

    A. Krause, C. Guestrin. "Near-optimal Nonmyopic Value of Information in Graphical Models". Proc. of Uncertainty in Artificial Intelligence (UAI), 2005 [pdf] Winner of the Best Paper Runner-up Award

    CNN: Hummer takes pole in robot race

    Teams competing for $2 million prize
    Thursday, October 6, 2005; Posted: 10:46 a.m. EDT (14:46 GMT)

    FONTANA, California (AP) -- A driverless red Hummer snagged the pole position Wednesday in a government-sponsored sequel race across the Mojave Desert that will pit 23 robots against one another.

    The finalists were chosen after an intense, weeklong qualifying run at the California Speedway, where the self-navigating vehicles had to drive on a bumpy road, zip through a tunnel and avoid obstacles. No human drivers or remote controls were allowed.

    The Hummer named H1ghlander, built by Carnegie Mellon University, flipped during practice a few weeks ago when it struck a rock. But it still managed to complete all four required semifinal runs.


    Wednesday, October 05, 2005

    MIT Vision Medical Seminar: Renyi entropy-based image registration: a graph-theoretic approach

    Speaker: Mert Rory Sabuncu, Princeton University
    Date: Thursday, October 6 2005

    Information-theoretic techniques, such as mutual information (MI) [Viola 95, Collignon 97], have yielded robust and accurate automatic multi-modal image registration algorithms. Inspired by this approach, our research employs the theory of entropic spanning graphs [Hero 2001] and proposes a novel graph-theoretic image registration framework. In this talk, I will provide a theoretical and experimental analysis of this framework, while drawing a rigorous comparison to a popular implementation of the MI-based registration algorithm. I will also elaborate on our recent contribution showing how to obtain a gradient-based descent direction for the graph-theoretic registration function with minimal computational overhead. This result is then used for the efficient optimization of the registration function. Within the proposed framework, I will also address practical methods to speed up the algorithm and increase robustness against bad initialization, extensions to nonlinear transformations (i.e., local deformations), and the incorporation of prior knowledge (from pre-aligned image pairs) about the cross-modality relationship to improve robustness and speed. Experimental evidence for all the discussed ideas will be provided.
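For reference, the MI baseline that the graph-theoretic method is compared against can be computed from a joint intensity histogram; a minimal sketch for already-aligned arrays (registration itself would maximize this over candidate transforms):

```python
import numpy as np

def mutual_information(img1, img2, bins=16):
    """Mutual information between two images via their joint histogram.

    MI-based registration seeks the transform maximizing this quantity;
    here we only evaluate it for a fixed alignment.
    """
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img1
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img2
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
mi_self = mutual_information(a, a)                       # high: identical images
mi_rand = mutual_information(a, rng.random((64, 64)))    # near zero: independent
```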

    MIT Machine Vision Colloquium: Shape reconstruction from multiple views

    Speaker: Sylvain Paris , MIT CSAIL
    Date: Wednesday, October 5 2005

    In recent years, the increased use of computer-generated images and movies has driven a need for digital content. I will focus on the creation of 3D geometry and propose a technique for acquiring the 3D shape of an object from several photographs of it. Ultimately, the goal is to have a lightweight acquisition method to produce models usable for further processing such as rendering, relighting and so on.

    I will show how the minimal cut of a graph can be used as a powerful optimization engine. This leads to a reconstruction algorithm that recovers accurate heightfields. This approach is then extended by considering the object surface as a collection of small patches. I will discuss the theoretical properties and consequences of such a representation. In addition, I will show that the representation can practically express a broad range of shapes, including non-spherical topology.

    This work was done in collaboration with François Sillion (INRIA Grenoble, France), Long Quan and Zeng Gang (HKUST, Hong Kong).

    Progressive Surface Reconstruction from Images using a Local Prior by Gang Zeng, Sylvain Paris, Long Quan, François Sillion, International Conference on Computer Vision (ICCV'05)

    Tuesday, October 04, 2005

    Robot Rides Bike Without Falling

    A Murata Manufacturing technology showcase robot called Murata Boy rides a bike without falling down (or, more accurately, the bike is part of the robot). The robot has an internal sensor that senses its own body's angle. When it starts to fall to one side, its robot brain directs the arms to turn the handlebars to stay upright. When it comes to a stop, a spinning disk in its midsection keeps it from falling over. The robot is controlled from a PC via Wi-Fi.

    Monday, October 03, 2005

    The title of my talk (10/5)

    Hi, folks
    The title of my talk is "Introduction to Probability Topics".
    I'll introduce Bayesian inference, hidden Markov models, k-means clustering, and Gaussian random variables.
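As a preview of the k-means part, here is a bare-bones Lloyd's-algorithm implementation on synthetic data (initialization and convergence checks are simplified):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # point-center distances
        labels = d.argmin(axis=1)
        # Recompute centroids, keeping the old center if a cluster empties out.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Two well-separated Gaussian blobs (synthetic data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centers = kmeans(X, 2)
```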

    CMU ML talk: Kernel Conjugate Gradient

    Speaker: Nathan Ratliff, Robotics, CMU

    Abstract: We propose a novel variant of conjugate gradient based on the Reproducing Kernel Hilbert Space (RKHS) inner product. An analysis of the algorithm suggests it enjoys better performance properties than standard iterative methods when applied to learning kernel machines. Experimental results for both classification and regression bear out the theoretical implications. We further address the dominant cost of the algorithm by reducing the complexity of RKHS function evaluations and inner products through the use of space-partitioning tree data-structures.
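The paper's RKHS-inner-product variant is more involved; as a baseline sketch only (not the authors' algorithm), standard conjugate gradient can solve the kernel ridge system (K + λI)α = y that kernel machines reduce to:

```python
import numpy as np

def conjugate_gradient(A, b, iters=200, tol=1e-12):
    """Standard CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None] - Y[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression on a toy 1-D problem (synthetic data, made-up hyperparameters).
X = np.linspace(0, 1, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
K = rbf_kernel(X, X, gamma=10.0)
coef = conjugate_gradient(K + 1e-3 * np.eye(30), y)
pred = K @ coef   # fitted values, close to y for this smooth target
```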

    The paper pdf file

    Sunday, October 02, 2005

    CMU VASC seminar: Quasiconvex Optimization for Robust Geometric Reconstruction

    Qifa Ke

    Geometric reconstruction problems in computer vision are often solved by minimizing a cost function that combines the reprojection errors in the 2D images. In this paper, we show that, for various geometric reconstruction problems, their reprojection error functions share a common and quasiconvex formulation. Based on the quasiconvexity, we present a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are ready to solve. Our final reconstruction algorithm is simple and has intuitive geometric interpretation. In contrast to existing random sampling or local minimization approaches, our algorithm is deterministic and guarantees a predefined accuracy of the minimization result. We demonstrate the effectiveness of our algorithm by experiments on both synthetic and real data.
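The key property being exploited: a quasiconvex cost has convex sublevel sets, so its minimum can be bracketed by bisection on the cost value, with one convex feasibility test per step. A toy 1-D illustration of that scheme (not the paper's reprojection-error formulation):

```python
# Bisection on the cost value gamma: quasiconvexity guarantees the
# sublevel set {x : f(x) <= gamma} is convex, so each feasibility
# test is tractable and the optimum is bracketed to any accuracy.
def bisect_min(feasible, lo, hi, tol=1e-8):
    """feasible(gamma) -> True iff some x in the domain has f(x) <= gamma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy quasiconvex cost f(x) = sqrt(|x - 1|): not convex, but every
# sublevel set is the interval [1 - g^2, 1 + g^2].
def feasible(gamma, domain=(2.0, 5.0)):
    lo, hi = 1 - gamma ** 2, 1 + gamma ** 2
    return lo <= domain[1] and hi >= domain[0]

g = bisect_min(feasible, 0.0, 10.0)   # min of f over [2, 5] is f(2) = 1
```

Unlike random sampling, every run of this procedure returns the same answer to the requested tolerance, which is the determinism-plus-accuracy guarantee the abstract highlights.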

    IEEE International Conference on Computer Vision (ICCV 2005), Beijing, China, October 2005.

    Saturday, October 01, 2005




    Intel Research Pittsburgh Seminar: Extending the Path-planning Horizon

    Speaker: Bart Nabbe, Robotics Institute, School of Computer Science, Carnegie Mellon

    The mobility sensors (LADAR, stereo, etc.) on a typical mobile robot vehicle can only acquire data up to a distance of a few tens of meters. Therefore a navigation system has no knowledge about the world beyond this sensing horizon. As a result, path planners that rely only on this knowledge to compute paths are unable to anticipate obstacles sufficiently early and have no choice but to resort to an inefficient behavior of local obstacle contour tracing.

    To alleviate this problem, we present an opportunistic navigation and view planning strategy that incorporates look-ahead sensing of possible obstacle configurations. The strategy is based on a what-if analysis of hypothetical future configurations of the environment: candidate vantage positions are evaluated on their ability to observe anticipated obstacles, and the vantage positions identified by this forward-simulation framework are used by the planner as intermediate waypoints. The validity of the strategy is supported by results from simulations as well as field experiments on a real robotic platform. These results also show that significant reductions in path length can be achieved opportunistically using this framework.
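The vantage-evaluation step can be caricatured in a few lines. The scoring rule and all numbers below are invented for illustration (the talk's framework forward-simulates full sensor views, not a simple distance test):

```python
import math

SENSE_RADIUS = 3.0   # toy sensing horizon, in grid units

def score(vantage, hypothesized_obstacles):
    """Count hypothesized obstacle cells observable from a candidate vantage."""
    vx, vy = vantage
    return sum(1 for (ox, oy) in hypothesized_obstacles
               if math.hypot(ox - vx, oy - vy) <= SENSE_RADIUS)

def best_vantage(candidates, hypothesized_obstacles):
    """Pick the candidate waypoint that would reveal the most obstacles."""
    return max(candidates, key=lambda v: score(v, hypothesized_obstacles))

obstacles = [(5, 5), (6, 5), (5, 6)]    # hypothesized, beyond the current horizon
candidates = [(0, 0), (4, 4), (8, 8)]   # candidate intermediate waypoints
```

Here `best_vantage` would route the robot through (4, 4), the only candidate close enough to confirm or rule out the hypothesized obstacles before committing to a path.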


    Author: Science and Technology Division, Taipei Representative Office in Germany
    Quite a few of the products at the Frankfurt International Motor Show were developed with funding from the German Federal Ministry of Education and Research (BMBF). The best example is the INVENT project (Intelligenter Verkehr und nutzergerechte Technik, "intelligent traffic and user-oriented technology"), which received 32 million euros in funding. Twenty-four companies took part, jointly developing a car that can search, think, and converse; in dangerous situations, it actively helps the driver keep control and get through safely. The technology includes an intersection assistant that automatically monitors oncoming and cross traffic and can also detect road signs.

    The BMBF stated that its future funding priorities will be the development of fuel-efficient cars and the search for alternatives to gasoline. The latest hybrid systems (Hybridsystem), for example, are already an essential trend on the world market and can cut energy use by 30%. Between 2000 and 2005, the BMBF provided roughly 300 million euros in funding.