Friday, March 31, 2006

CMU ML talk: Rodeo: Sparse Nonparametric Regression in High Dimensions

Speaker: Larry Wasserman, CMU http://www.stat.cmu.edu/~larry/
Date: April 03
Time: 12:00 noon

Abstract:
We present a method for simultaneously performing bandwidth selection and variable selection in nonparametric regression. The method starts with a local linear estimator with large bandwidths, and incrementally decreases the bandwidth in directions where the gradient of the estimator with respect to bandwidth is large. When the unknown function satisfies a sparsity condition, the approach avoids the curse of dimensionality. The method---called rodeo (regularization of derivative expectation operator)---conducts a sequence of hypothesis tests, and is easy to implement. A modified version that replaces testing with soft thresholding may be viewed as solving a sequence of lasso problems. When applied in one dimension, the rodeo yields a method for choosing the locally optimal bandwidth.
Joint work with John Lafferty.
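As a rough one-dimensional illustration of the idea (my own simplified sketch, using a finite-difference proxy rather than the paper's actual test statistic; function and parameter names are hypothetical): start with a large bandwidth and keep shrinking it while the estimator still changes noticeably with the bandwidth.

```python
import math

def kernel_estimate(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h."""
    num = den = 0.0
    for x, y in zip(xs, ys):
        w = math.exp(-0.5 * ((x - x0) / h) ** 2)
        num += w * y
        den += w
    return num / den

def rodeo_bandwidth(x0, xs, ys, h0=1.0, beta=0.9, tau=0.05, h_min=1e-3):
    """Start from a large bandwidth h0 and shrink it by a factor beta while
    the derivative of the estimator with respect to h (approximated by a
    finite difference) is still large; stop once the estimator stabilizes."""
    h = h0
    while h > h_min:
        dh = 1e-3 * h
        deriv = (kernel_estimate(x0, xs, ys, h + dh)
                 - kernel_estimate(x0, xs, ys, h)) / dh
        if abs(deriv) < tau:
            break  # estimator has stopped changing: keep this bandwidth
        h *= beta
    return h
```

In higher dimensions the same shrinkage is applied per coordinate, so bandwidths stay large in irrelevant directions; that is where the variable selection comes from.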

Videos about the Great Robot Race.

NOVA just released their documentary The Great Robot Race.
http://www.pbs.org/wgbh/nova/darpa

CMU ML talk: Calibration, Regret and Learning in Games

When: Monday, April 3, 2006 at 3:00p until 4:00p

Speaker: Rakesh Vohra

Abstract:
This talk will be a survey of the connections between calibration (a measure of the accuracy of a probability forecast), regret (a measure of how well a decision rule performs which, in Wordsworth's words, "looks before and after and pines for what is not"), and the question of learning in games (will boundedly rational players in repeated play of a game converge to one's favorite equilibrium of the game?).
http://www.kellogg.northwestern.edu/faculty/vohra/htm/vohra.htm

Thursday, March 30, 2006

CFP: JFR Special Issue on Space Robotics

Journal of Field Robotics
Special Issue On Space Robotics

Guest Editors: David Wettergreen and Alonzo Kelly, CMU, and Larry Matthies, JPL

It seems impossible to get a robot farther afield than by putting it into space. Space applications present many challenges to robotic systems: from extremes of temperature, vacuum, shock, and gravity, to limitations on power and communication; from the intricate complexity of systems engineering, to requirements for reliability, robustness, and autonomy.

The Journal of Field Robotics (JFR) [ http:// www.journalfieldrobotics.org ] announces a special issue on space robotics to examine these and other issues related to robots and space. This special issue will present and discuss the state of the art in space robots, their theory and practice.

We invite papers that exhibit theory and methods applied to robotic systems in space including:
- specification and evaluation of system concepts and designs;
- effects of the space environment on robotic devices;
- methods of sensing, actuation, and mobility;
- experiments in manipulation, assembly, construction and excavation;
- algorithms for localization and navigation, and task or mission planning;
- efforts related to deep space navigation and autonomous operation;
- techniques for safe and precise entry, descent, and landing; and
- analysis of human robot interaction and robot autonomy.
Papers for this special issue must also provide technical descriptions of systems, along with results and analysis of experimentation with orbital robots and spacecraft, planetary landers or rovers, or system prototypes in terrestrial analogue environments. Lessons learned in development and operation are also pertinent.

We encourage papers addressing all aspects of space robotic systems. Our emphasis is on systems that fulfill a specific space-relevant application. Robotic systems in Earth orbit, traveling in deep space, and operating on the surfaces of planets, moons, comets, or asteroids are of particular interest, as are systems envisioned for space application but developed and demonstrated in relevant environments here on Earth.

The JFR encourages multimedia content, and this special issue seeks movies illustrating system concepts and operation, engineering experiments, and of course space operation.

Deadlines:
June 2, 2006 – Submit manuscripts
July 14, 2006 – Reviews completed
August 4, 2006 – Decisions and author notification
September 1, 2006 – Final manuscripts for publication

Authors interested in submitting to this issue can discuss submissions with the special issue editors, David Wettergreen, Alonzo Kelly, and Larry Matthies.

Robot PAL lab (Any): Car-like Robot Control

Title: Control Issues of an RC Toy Car

Outline:
Modeling the car-like robot
System identification issues
Path following of our toy car
Path planning

Wednesday, March 29, 2006

Robot PAL Lab Meeting (Casey): Fast and Accurate Hand Pose Detection for Human-Robot Interaction

Title: Fast and Accurate Hand Pose Detection for Human-Robot Interaction
Authors: Luis Antón-Canalís, Elena Sánchez-Nielsen, and Modesto Castrillón-Santana
From: IbPRIA 2005, LNCS 3522, pp. 553–560, 2005

Abstract: Enabling natural human-robot interaction using computer vision based applications requires fast and accurate hand detection. However, previous works in this field assume different constraints, like a limitation in the number of detected gestures, because hands are highly complex objects difficult to locate. This paper presents an approach which integrates temporal coherence cues and hand detection based on wrists using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism without user participation for segmenting hands independently of their gesture, (2) a larger number of detected gestures as well as a faster training phase than previous cascade classifier based methods, and (3) near real-time performance for hand pose detection in video streams.
1 Introduction
Improving human-robot interaction has been an active research field ...

Tuesday, March 28, 2006

Lab meeting schedule changed this week!

Hi Folks,

Sorry for this late notice. The lab meeting this Wednesday is rescheduled to 10:30AM, this Thursday. The place is the same, CSIE R524. No advisee meeting this week.

Any and Casey, could you please post your talk titles?

Best,

-Bob

Monday, March 27, 2006

CMU RI talk: Distributed Estimation and Control of Multi-Agent Systems

Kevin Lynch, Laboratory for Intelligent Mechanical Systems
Northwestern University

We are pursuing a framework for systematic design of emergent behaviors in sensing and communication networks of mobile agents. The problem is to design a control law to run on each agent, based on sensor and communication input, so that the desired collective behavior emerges. Example tasks include sensor coverage, formation control, multi-agent pursuer-evader, and other types of self-organization. The key constraints are that each agent may have significant dynamics and limited sensing, computation, motion, and communication capabilities. The behavior of the system should improve or degrade gracefully as agents are added or deleted; in other words, the approach should be scalable, robust, and require no central controller.

Our approach requires each agent to simultaneously (1) estimate properties of the global behavior of the system and (2) use those estimates in a motion control law. This suggests a systematic approach of separately designing the estimator and controller, and then ensuring that the coupled system retains desired performance properties. I will give an example applying this framework to swarm formation control, where the desired formation is described by inertial moments. Implementing a simple gradient control law on each agent, the coupled estimation and control system is globally convergent to the desired family of formations.
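The estimate-then-act separation can be illustrated with the simplest possible case: each agent runs average consensus with its neighbors to estimate a global quantity (here, the swarm's mean position) that a local control law could then act on. This is a generic illustrative sketch, not the speaker's moment-based formation controller.

```python
def consensus_step(x, neighbors, eps=0.2):
    """One synchronous round of average consensus: every agent nudges its
    estimate toward the mean of its neighbors' current estimates."""
    return [xi + eps * (sum(x[j] for j in neighbors[i]) / len(neighbors[i]) - xi)
            for i, xi in enumerate(x)]

# four agents on a ring communication graph, scalar positions
positions = [0.0, 1.0, 4.0, 7.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

estimates = positions[:]  # each agent initializes with its own position
for _ in range(300):
    estimates = consensus_step(estimates, ring)
# every agent's estimate converges to the global mean (3.0) using only
# local communication; a motion controller can then act on this estimate
```

Because the update is doubly stochastic, the network average is preserved at every round, which is why the distributed estimates agree with the true centroid; adding or removing agents only changes the neighbor graph, not the algorithm, matching the scalability requirement above.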

Speaker Biography: Kevin Lynch was a member of Carnegie Mellon's first class of robotics Ph.D. students. After graduation in 1996 he spent a year as a postdoctoral fellow at the AIST Mechanical Engineering Laboratory in Tsukuba, Japan. Since 1997 he has been on the faculty of the Mechanical Engineering Department at Northwestern University, where he co-directs the Laboratory for Intelligent Mechanical Systems. He was the recipient of the IEEE Early Career Award in Robotics and Automation in 2001, and he currently serves as Editor of the IEEE Transactions on Robotics. He is a co-author of Principles of Robot Motion, MIT Press, along with Howie Choset, George Kantor, and others. His research interests are in robot motion planning and manipulation, underactuated systems, human-robot interaction, and distributed multi-agent systems.

CMU ML Lunch talk: Dynamic Contextual Friendship Networks

Speaker: Alice Zheng, SCS, CMU
http://www.cs.cmu.edu/~alicez/

Date: March 27

For schedules, links to papers, etc., please see the web page:
http://www.cs.cmu.edu/~learning/

Abstract:
The study of social networks has gained new importance with the recent rise of large on-line communities. Most current approaches focus on deterministic (descriptive) models and are usually restricted to a preset number of social actors. Moreover, the dynamic aspect is often treated as an addendum to the static model. Taking inspiration from real-life friendship formation patterns, we propose a new generative model of evolving social networks that allows for birth and death of social ties and addition of new actors. Each actor has a distribution over social interaction spheres, which we term "contexts." We study the robustness of our model by examining statistical properties of simulated networks relative to well known properties of real social networks. A Gibbs sampling procedure is developed for parameter learning.
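To make the "contexts" idea concrete, here is a toy generative sketch (my own illustration, not the authors' actual model or its Gibbs sampler): each actor draws a distribution over contexts, and the probability of a tie between two actors grows with the overlap of their context distributions.

```python
import random

def sample_network(n_actors, n_contexts, seed=0):
    """Toy generative model: ties are more likely between actors whose
    context ('interaction sphere') distributions overlap."""
    rng = random.Random(seed)
    theta = []
    for _ in range(n_actors):
        w = [rng.random() for _ in range(n_contexts)]
        s = sum(w)
        theta.append([wi / s for wi in w])  # normalized context distribution
    edges = set()
    for i in range(n_actors):
        for j in range(i + 1, n_actors):
            # overlap of two distributions, in (0, 1]
            overlap = sum(min(a, b) for a, b in zip(theta[i], theta[j]))
            if rng.random() < overlap:
                edges.add((i, j))
    return theta, edges
```

Birth and death of ties and arrival of new actors would be extra steps on top of this static draw; the simulated graphs can then be compared against known statistics of real social networks, as the abstract describes.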

Sunday, March 26, 2006

CMU RI oral: Statistical Modeling and Localization of Nonrigid and Articulated Shapes

Jiayong Zhang, Robotics Institute
Carnegie Mellon University

An articulated object can be loosely defined as a structure or mechanical system composed of links and joints. The human body is a good example of a nonrigid, articulated object. Localizing body shapes in still images remains a fundamental problem in computer vision, with potential applications in surveillance, video editing/annotation, human computer interfaces, and entertainment.

In this thesis, we present a 2D model-based approach to human body localization. We first consider a fixed viewpoint scenario (side-view) by introducing a triangulated model of the nonrigid and articulated body contours. Four types of image cues are combined to relate the model configuration to the observed image, including edge gradient, silhouette, skin color, and region similarity. The model is arranged into a sequential structure, enabling simple yet effective spatial inference through Sequential Monte Carlo (SMC) sampling.

We then extend the system to situations where the viewpoint of the human target is unknown. To accommodate large viewpoint changes, a mixture of view-dependent models is employed. Each model is decomposed based on the concept of parts, with anthropometric constraints and self-occlusion explicitly treated. Inference is done by direct sampling of the posterior mixture, using SMC enhanced with annealing. The fitting method is independent of the number of mixture components, and does not require the preselection of a “correct” viewpoint.

Finally, we return to the generic setting of single image, arbitrary pose, and arbitrary viewpoint. The constraints on the body pose and background subtraction that have been used in previous systems are no longer required. Our proposed solution is a hybrid search facilitated by a 3-level hierarchical decomposition of the model. We first fit a simple tree-structured model defined on a compact landmark set along the body contours by Dynamic Programming (DP). The output is a series of proposal maps that encode the probabilities of partial body configurations. Next, we fit a mixture of view-dependent models by SMC, which handles self-occlusion, anthropometric constraints, and large viewpoint changes. DP and SMC are designed to search in opposite directions such that the DP proposals are utilized effectively to initialize and guide the SMC inference. This hybrid strategy of combining deterministic and stochastic search ensures both the robustness and efficiency of DP, and the accuracy of SMC. Finally, we fit an expanded mixture model with increased landmark density through local optimization.

The models were trained on a large number of gait images. Extensive tests on cluttered images with varying poses including walking, dancing and various types of sports activities justified the feasibility of the proposed approach.

CMU talk: Dynamic Models of Human Behavior

March 24, 2006

Zoran Popovic, Associate Professor
University of Washington

In this talk I will describe two models of human locomotion that attempt to describe both micro (stylistic variation of locomotion) and macro (complex crowd behavior) motion behavior patterns of humans through a set of tuned differential equations.

The first model of human locomotion incorporates several important aspects of human biology, including relative preferences for using some muscles more than others, elastic mechanisms at joints due to the mechanical properties of tendons, ligaments, and muscles, and variable stiffness at joints depending on the task. When used in a spacetime optimization framework, the parameters of this model define a wide range of styles of natural human movement. Due to the complexity of biological motion, these style parameters are too difficult to design by hand. To address this, I will describe the process of Nonlinear Inverse Optimization, an algorithm for estimating optimization parameters from motion capture data. We show how salient physical parameters can be extracted from a single short motion sequence. Once captured, this representation of style is extremely flexible: motions can be generated in the same style but performing different tasks, and styles may be edited to change the physical properties of the body.

The second part of the talk will present a real-time model of crowd dynamics that is based on continuum computations instead of per-agent simulations. This formulation yields a set of continuous velocity and potential fields that guide all people simultaneously. A dynamic potential field integrates both local collision avoidance and global navigation, efficiently solving for smooth, realistic motion for large crowds without the need for collision detection. Simulations created with our system run at interactive rates, exhibit smooth flow under a variety of conditions, and naturally exhibit emergent phenomena that have been observed in real crowds.
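A minimal grid-world sketch of the shared-field idea (assumed details; the actual system solves continuous fields): one potential, computed once for everyone, handles global navigation, and each agent simply steps downhill in it.

```python
from collections import deque

def potential_field(grid, goal):
    """Distance-to-goal over free cells (grid[r][c] == 0) by BFS; this single
    shared field plays the role of the crowd's dynamic potential."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def descend(agent, dist):
    """Each agent steps to the neighboring cell with the lowest potential."""
    r, c = agent
    options = [(r, c)] + [(r + dr, c + dc)
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return min((p for p in options if p in dist), key=dist.get)
```

The cost of building the field is independent of the number of agents, which is the essential point of the continuum formulation compared to per-agent planning.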

This talk describes joint work with C. Karen Liu, Aaron Hertzmann, Adrien Treuille, and Seth Cooper.

Speaker Biography: Zoran Popovic is an Associate Professor in computer science at the University of Washington. He received an Sc.B. with Honors from Brown University, and an M.S. and Ph.D. in Computer Science from Carnegie Mellon University. He has held research positions at Sun Microsystems, the Justsystem Research Center, and the University of California at Berkeley. Zoran's research interests lie in computer animation, primarily in physically based modeling, high-fidelity human modeling, and control of realistic natural motion. His contributions to the field of computer graphics have recently been recognized by a number of awards including the NSF CAREER Award, the Alfred P. Sloan Fellowship, and the ACM SIGGRAPH Significant New Researcher Award.

Monday, March 20, 2006

My first talk

Title: Roadmap-Based Motion Planning in Dynamic Environments
Author: Jur P. van den Berg and Mark H. Overmars
From: IEEE TRANSACTIONS ON ROBOTICS, VOL. 21, NO. 5, OCTOBER 2005, p.885-897

Abstract:
In this paper, a new method is presented for motion planning in dynamic environments, that is, finding a trajectory for a robot in a scene consisting of both static obstacles and dynamic, moving obstacles. We propose a practical algorithm based on a roadmap that is created for the static part of the scene. On this roadmap, an approximately time-optimal trajectory from a start to a goal configuration is computed, such that the robot does not collide with any moving obstacle. The trajectory is found by performing a two-level search for a shortest path. On the local level, trajectories on single edges of the roadmap are found using a depth-first search on an implicit grid in state-time space. On the global level, these local trajectories are coordinated using an A*-search to find a global trajectory to the goal configuration. The approach is applicable to any robot type in configuration spaces of any dimension, and the motions of the dynamic obstacles are unconstrained, as long as they are known beforehand. The approach has been implemented for both free-flying and articulated robots in three-dimensional workspaces, and it has been applied to multirobot motion planning as well. Experiments show that the method achieves interactive performance in complex environments.
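The local-level idea of searching a grid in state-time space can be sketched in miniature (a 1-D corridor instead of a roadmap edge, and breadth-first search instead of the paper's depth-first/A* combination, since uniform step costs make BFS time-optimal here). As in the paper, the moving obstacle's trajectory is assumed known in advance.

```python
from collections import deque

def plan_state_time(n_cells, start, goal, obstacle_at, t_max=100):
    """Search over (cell, time) pairs. Each step advances time by one and
    moves the robot left, right, or not at all; states colliding with the
    known moving obstacle are pruned."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        cell, t, path = queue.popleft()
        if cell == goal:
            return path  # first arrival in BFS = time-optimal trajectory
        if t == t_max:
            continue
        for nxt in (cell, cell - 1, cell + 1):  # wait, move left, move right
            if (0 <= nxt < n_cells and nxt != obstacle_at(t + 1)
                    and (nxt, t + 1) not in seen):
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None

# an obstacle blocks cell 2 until t = 3, so the robot must wait before passing
blocker = lambda t: 2 if t < 3 else -1
path = plan_state_time(5, start=0, goal=4, obstacle_at=blocker)
```

Note how "waiting" is just another edge in state-time space, which is why time-optimal trajectories around moving obstacles fall out of an ordinary graph search.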

Sunday, March 19, 2006

My talk this Wednesday.

My talk this Wednesday will have two parts.
First, I'll give a brief demo from my last talk (about AdaBoost).
Second, I'll talk about an extension of AdaBoost: this time I'll introduce the AdaBoost algorithm under the multiclass condition.
My talk is based on the following two papers:

1.
Title: A decision-theoretic generalization of on-line learning and an application to boosting
Authors: Yoav Freund and Robert E. Schapire, AT&T Labs
This paper appears in: Journal of Computer and System Sciences, 55(1):119-139, August 1997
Abstract :
In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R^n. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.

2.
Title: Improved Boosting Algorithms Using Confidence-rated Predictions
Authors: Robert E. Schapire and Yoram Singer, AT&T Labs
This paper appears in: Machine Learning, 37(3):297-336, 1999.
Abstract :
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find improved parameter settings as well as a refined criterion for training weak hypotheses. We give a specific method for assigning confidences to the predictions of decision trees, a method closely related to one used by Quinlan. This method also suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem, plus a third method based on output coding. One of these leads to a new method for handling the single-label case which is simpler but as effective as techniques suggested by Freund and Schapire. Finally, we give some experimental results comparing a few of the algorithms discussed in this paper.
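As a companion to both abstracts, here is a minimal AdaBoost implementation with one-dimensional threshold "stumps" as weak hypotheses. It shows the multiplicative weight update from the first paper and the weighted vote with alpha = 0.5 ln((1 - err)/err) analyzed in the second; this is a simplified sketch for intuition, not the papers' full algorithms.

```python
import math

def stump(threshold, polarity, x):
    """Weak hypothesis: a signed threshold test on a single real feature."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n  # uniform example weights
    ensemble = []
    for _ in range(rounds):
        # weak learner: exhaustively pick the stump with lowest weighted error
        err, thr, pol = min(
            (sum(wi for wi, x, y in zip(w, xs, ys) if stump(t, p, x) != y), t, p)
            for t in xs for p in (1, -1))
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard the log
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # multiplicative weight update: misclassified examples gain weight
        w = [wi * math.exp(-alpha * y * stump(thr, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Confidence-weighted majority vote over the weak hypotheses."""
    return 1 if sum(a * stump(t, p, x) for a, t, p in ensemble) >= 0 else -1
```

The multiclass extensions I will discuss (AdaBoost.M2, AdaBoost.MH, and output coding) replace the binary vote with per-class scores but keep exactly this reweighting loop.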

Thursday, March 16, 2006

Computational Thinking

Jeannette M. Wing, CMU
Computational Thinking, CACM, vol. 49, no. 3, March 2006, pp. 33-35. Slides.

Folks, you must read this article!! -Bob

MIT ME Talk: Robotics - the next globally disruptive technology?

Speaker: Dr. David Barrett, Olin College
Date: Friday, March 17 2006
Time: 2:30PM to 3:30PM
Location: 3-370
Host: John Leonard, MIT

Abstract: Over the course of human history, the emergence of certain new technologies has globally transformed life as we know it. Disruptive technologies like fire, the printing press, oil, and television have dramatically changed both the planet we live on and mankind itself, most often in extraordinary and unpredictable ways. In pre-history these disruptions took place over hundreds of years. With the time compression induced by our rapidly advancing technology, they can now take place in less than a generation. We are currently at the edge of one such event. In ten years, robotic systems will fly our planes, grow our food, explore space, discover life-saving drugs, fight our wars, sweep our homes, and deliver our babies. In the process, this robotics-driven disruptive event will create a new 200-billion-dollar global industry and change life as you now know it, forever. Just as my children cannot imagine a world without electricity, your children will never know a world without robots. Come take a bold look at the future and the opportunities for mechanical engineers that wait there.

WHAT'S NEW @ IEEE IN CIRCUITS March 2006

FPGAs' POPULARITY DEPENDENT ON POWER
Power consumption was a hot topic at February's FPGA 2006 conference, most notably the difference between FPGAs (Field-Programmable Gate Arrays) and ASICs (Application-Specific Integrated Circuits). Many researchers argued that ASICs are more practical than FPGAs, but not everyone agreed. One presenter, Tim Tuan of Xilinx Inc., said his company is building a low-power architecture based on the company's Spartan 3 fabric that will apply such optimizations as voltage scaling, power gating, low-leakage configuration memory, and sleep mode. This Pika architecture is said to consume 46 percent less active power and 99 percent less standby power than the baseline Spartan 3. Pika is claimed to lessen the problem of dissipating milliamps of standby power, bringing FPGAs into an acceptable range for mobile, battery-powered products. Despite the arguments that these advancements will improve FPGAs' marketability, other researchers were not so sure, arguing that despite the scaling, FPGAs are still 20 times more power-hungry than ASICs. For more on this and other topics discussed at the conference, visit: http://www.powermanagementdesignline.com/news/181400903

IEEE Tech Alert for 15 Mar 2006

3. IEEE's Wi-Fi standard moves to mesh
Passing a significant milestone, the IEEE 802.11 working group has announced its adoption of a proposed basis for a standard that will extend Wi-Fi wireless distribution by means of mesh points. In a mesh network, computers become transceivers, forwarding packets of data for other nearby computers on the network. By sending packets only as far as the next computer, instead of a distant base station, meshed computers use less power, emit fewer interfering signals, and have higher data rates.

The 802.11s extension will mesh the intermediate access points in a network, but not each and every individual computer, to somewhat boost the performance and efficiency of Wi-Fi systems.

For further information, go to the IEEE Standards Web site at: http://standards.ieee.org

Bob: Mesh Robots?

4. New handheld helps reduce stress
Considering the proliferation of handheld devices, all with their own little alerts and alarms, it may seem that stress is getting worse, not better. But now a new handheld device promises to help. A sleek, solid, handheld biofeedback device called the StressEraser, is designed as an aid for deep breathing exercises, which are commonly prescribed to alleviate stress. The device tells you just when to inhale and when to stop.
See Calm in Your Palm, by Samuel K. Moore: http://www.spectrum.ieee.org/mar06/3044

5. Prototype planetary rover tested in Chilean desert
A hardy band of researchers has braved freezing nights, bad food, and high winds in the Chilean desert to test a rover that could be the prototype for the next generation of vehicles to explore the surface of the Moon or Mars. Weighing in at 180 kilograms, the rover, dubbed Zoë, looks something like a motorized, overgrown ice cream cart. But it is beautiful in the one way that really matters to planetary scientists: unlike all the rovers built thus far, Zoë can roam autonomously.

See Halfway to Mars, by Jean Kumagai: http://www.spectrum.ieee.org/mar06/3059

Sunday, March 12, 2006

My talk this week

Affine Structure From Sound Simulation Demo

Based on the idea presented in "Affine Structure From Sound", I am going to show a simulation in which, given a structured microphone array and some arbitrary audio events, the system reconstructs the locations of the microphones and of the audio events that occurred.

reference:
S. Thrun. Affine Structure From Sound. In Proceedings of the 2005 Conference on Neural Information Processing Systems (NIPS). MIT Press, 2006.
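The core computation behind such a demo can be sketched as follows (my reconstruction under a far-field assumption, following the factorization idea of the paper; variable names are mine). Given arrival times T[i][j] of event j at microphone i, subtracting each event's mean arrival time leaves a matrix of rank at most two in the plane, and an SVD factorization recovers the microphone positions up to an affine transformation.

```python
import numpy as np

def affine_structure_from_sound(T, rank=2):
    """T[i, j] = arrival time of audio event j at microphone i.
    Far-field model: T[i, j] = e_j + (x_i . u_j) / c, with u_j the unit
    direction of event j. Removing each event's mean over microphones cancels
    e_j and the centroid term, leaving a rank-<=2 matrix whose left factor
    gives the microphone positions up to an affine ambiguity."""
    Tc = T - T.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(Tc, full_matrices=False)
    return U[:, :rank] * s[:rank]  # estimated mic positions (affine frame)

# simulated data: 6 microphones, 8 far-field events, c = 343 m/s
rng = np.random.default_rng(0)
mics = rng.normal(size=(6, 2))
angles = rng.uniform(0.0, 2.0 * np.pi, size=8)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
emit = rng.normal(size=8)
T = emit[None, :] + (mics @ dirs.T) / 343.0
recovered = affine_structure_from_sound(T)
```

A single 2x2 least-squares fit maps `recovered` onto the centered true positions, which is exactly the affine ambiguity the paper's title refers to; the right SVD factor similarly encodes the event directions.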

My talk this week

Efficient Dense Correspondences using Temporally Encoded Light Patterns

Author: Nelson L. Chang
From: IEEE International Workshop on Projector-Camera Systems (PROCAMS) 2003

Abstract:
Establishing reliable dense correspondences is crucial for many 3-D and projector-related applications. This paper proposes using temporally encoded patterns to directly obtain the correspondence mapping between a projector and a camera without any searching or calibration. The technique naturally extends to efficiently solve the difficult multiframe correspondence problem across any number of cameras and/or projectors. Furthermore, it automatically determines visibility across all cameras in the system and scales linearly in computation with the number of cameras. Experimental results demonstrate the effectiveness of the proposed technique for a variety of applications.
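In its simplest form, the temporal-encoding idea amounts to broadcasting the bits of each projector column over time; each camera pixel then decodes its observed bit sequence directly into a projector coordinate, with no search. This sketch uses a plain binary code for brevity (structured-light systems typically prefer Gray codes and inverse patterns for robustness, which this toy version omits).

```python
def encode_patterns(width):
    """Pattern k is a stripe image in which column col shows bit k of col.
    Projecting all patterns in sequence assigns every column a unique
    temporal codeword of ceil(log2(width)) bits."""
    n_bits = max(1, (width - 1).bit_length())
    return [[(col >> k) & 1 for col in range(width)] for k in range(n_bits)]

def decode(bit_sequence):
    """A camera pixel's observed on/off sequence decodes straight back to
    the projector column it sees: no searching, no calibration."""
    return sum(bit << k for k, bit in enumerate(bit_sequence))
```

A 1024-column projector needs only 10 patterns, and since every camera decodes the same projected sequence independently, the cost grows linearly with the number of cameras, as the abstract notes.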


Saturday, March 11, 2006

MIT Talk: Learning Partially Observable Action Models

Speaker: Eyal Amir, Computer Science Department, University of Illinois, Urbana-Champaign
Date: Friday, March 10 2006
Time: 3:00PM to 4:00PM
Location: Seminar Room 32-G449 (Kiva/Patil)
Host: Professor Leslie Kaelbling, MIT CSAIL
Contact: Teresa Cataldo, 617-452-5005, cataldo@csail.mit.edu


Many complex domains offer limited information about their exact state and the way actions affect them. In such domains, agents need to learn action models in order to act effectively, at the same time as they track the state of the domain.

In this presentation I will describe polynomial-time algorithms for learning logical models of actions' effects and preconditions in deterministic partially observable domains. These algorithms represent the set of possible action models compactly, and update it after every action execution and partial observation. This approach is the first tractable learning algorithm for partially observable dynamic domains. I will mention recent extensions of this work to relational domains, and will also discuss potential applications of this work to agents playing adventure games and to active web mining.
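A toy version of the underlying idea (enumerating candidate models explicitly rather than using the talk's compact logical representation; the domain, state encoding, and action names are hypothetical): maintain the set of deterministic action models consistent with the actions taken and the partial observations received, shrinking it as evidence accumulates.

```python
from itertools import product

def filter_models(candidates, start_state, history, apply_action):
    """Keep only the candidate action models whose predicted trajectory
    matches every observation; obs is None where the state was unobserved."""
    kept = []
    for model in candidates:
        state, ok = start_state, True
        for action, obs in history:
            state = apply_action(model, state, action)
            if obs is not None and obs != state:
                ok = False
                break
        if ok:
            kept.append(model)
    return kept

# toy domain: integer state; each action has an unknown effect of +1 or -1
actions = ("fwd", "back")
candidates = [dict(zip(actions, deltas))
              for deltas in product((1, -1), repeat=len(actions))]
apply_action = lambda model, state, action: state + model[action]

# partial observability: the first step's resulting state is never observed
history = [("fwd", None), ("fwd", 2), ("back", 1)]
survivors = filter_models(candidates, 0, history, apply_action)
# only the true model {"fwd": +1, "back": -1} remains
```

The talk's contribution is doing this filtering without enumeration, representing the surviving model set compactly so the update stays polynomial-time.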


Relevant papers:
Partially Observable Deterministic Action Models, IJCAI'05.
Learning partially observable action models, CogRob'04, part of ECAI'04.

CMU FRC talk: Recognizing Things in Images

Speaker: Martial Hebert, Professor, Robotics, Carnegie Mellon University
Date: Thursday, March 16, 2006

Abstract: Finding things (objects, regions, events) in images or video is the main objective of computer vision. A common view of this problem is to start with tentative labeling of parts of the image as possible locations of objects or possible types of regions, followed by reasoning about context and relations between these elements. We have been working on tools for recognition that are very effective for this type of task. In particular, we have developed tools for representing relations between scene elements (features, objects, regions, etc.) and for enforcing geometric constraints between elements.

I'll review some of the recent results in using relations and context between image elements for recognition and classification. The applications include finding salient structures in images, recognizing individual objects, and segmenting the image into labeled regions. Most of the examples deal with single images, but the techniques can also be used for recognizing things in video sequences, and I will show some results in this area from a recently completed project. Applications include detection of useful landmarks, object localization for navigation and manipulation, and surveillance.

Speaker Bio: Martial Hebert is Professor at the Robotics Institute, Carnegie Mellon University. He has led many major computer vision and robotics projects, funded by DARPA, NASA, NSF, ONR, DOE, and industry. Prof. Hebert has worked in multiple areas of robotics: computer vision, autonomous mobile robots, and sensors. His current research interests include object recognition in images, video, and range data, scene understanding using context representations, and model construction from images and 3-D data. His group has explored applications in the areas of autonomous mobile robots, both in indoor and in unstructured, outdoor environments, automatic model building for 3D content generation, and video monitoring. He has published more than 150 technical papers and reports in these areas.

Thursday, March 09, 2006

CMU & MIT talk: Visual classification by a hierarchy of semantic fragments

Boris Epshtein, Weizmann Institute

CVPR 2005 Oral paper

We describe a novel technique for identifying semantically equivalent parts in images belonging to the same object class (e.g. eyes, license plates, aircraft wings, etc.). The visual appearance of such object parts can differ substantially, and therefore traditional image similarity-based methods are inappropriate for this task. The technique we propose is based on the use of common context. We first retrieve context fragments, which consistently appear together with a given input fragment in a stable geometric relation. We then use the context fragments in new images to infer the most likely position of equivalent parts. Given a set of image examples of objects in a class, the method can automatically learn the part structure of the domain: identify the main parts, and how their appearance changes across objects in the class. Two applications of the proposed algorithm are shown: the detection and identification of object parts, and object recognition.

PDF Slides

What's New @ IEEE in Communications, March 2006

7. REMOTE 'WEAR AND TEAR' SENSORS BEING DEVELOPED
A new type of wireless sensor is being developed to remotely monitor mechanical parts and systems such as gearboxes, engines, and door mechanisms, to predict machinery and transportation breakdowns, according to scientists at the University of Manchester. Developers say the sensors could be in service in the next four years, and would greatly reduce maintenance costs in the manufacturing, automotive and plant machinery industries by predicting when parts require maintenance or need replacing before the machinery fails. Different kinds of sensors would measure a range of selected parameters, such as vibration, temperature, and pressure, or the concentrations of metallic elements in lubricating oil created through machinery wear and tear. Read more: the link

8. RFID TAGS CAN BE HACKED USING CELL PHONES, RESEARCHER SAYS
Passwords for the most popular brand of RFID tags can be obtained using a directional antenna and digital oscilloscope to monitor power used by the tags while they are being read, according to a cryptographer and professor of computer science at the Weizmann Institute. Patterns in power use could be analyzed to determine when the tag received correct and incorrect password bits, according to the researcher, who said the brand of RFID tag he tested was "totally unprotected," and that a cell phone has all the ingredients necessary to compromise all RFID tags in its immediate vicinity. Read more: the link

9. NASA'S NEW SOFTWARE GETS COMPUTERS THINKING TOGETHER
A new NASA computer program that operates as a collective on many computers at once has designed an antenna that will be launched into space to study the Earth's magnetosphere. The revolutionary AI program uses Darwin's theory of evolution to determine what the best outcome will be for a given project. To create the antenna in question, attributes of thousands of antennae were given to the program. Eighty computers then combined their "brains" over a period of ten hours, far less time than human designers would have needed, to produce an optimal design. The resulting antenna looks like a bent paperclip and can receive commands and send data to Earth. The writers of the program say the evolutionary AI software can invent and create new structures, computer chips and various other machines, and it can operate on up to 120 personal computers at once. Read more: the link
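The selection-and-breeding loop behind this kind of evolutionary design can be sketched in a few lines. The bit-string "design" and the one-max fitness function below are illustrative stand-ins, not NASA's antenna model:

```python
import random

# Toy evolutionary search: keep a population of candidate designs
# encoded as bit strings, score them with a fitness function, and
# breed the best via crossover and mutation.

def evolve(fitness, n_bits=16, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]             # one-point crossover
            i = rng.randrange(n_bits)
            child[i] ^= 1                         # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "one-max" toy problem: fitness is simply the number of 1 bits
best = evolve(sum)
```

Real design problems replace the bit string with antenna geometry parameters and the fitness call with an (expensive) electromagnetic simulation, which is why the work is farmed out to many machines.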

What's New @ IEEE for Students, March 2006

2. 3-D TECHNOLOGIES FOCUS OF "PROCEEDINGS OF THE IEEE" SPECIAL ISSUE
The March 2006 special issue of "Proceedings of the IEEE" (v. 94, no. 3) examines the broad subject of three-dimensional (3-D) imaging, display and visualization technologies. Writing in their introduction to the issue, Guest Editors Bahram Javidi and Fumio Okano say that 3-D technologies are "important applications of information systems in a society that is increasingly dependent on the presentation of information." Overview papers in this issue present the fundamental ideas, theory, experiments and application of some leading 3-D technologies, illustrated with examples, simulations and experiment results. A preview is available online:
http://www.ieee.org/web/publications/procieee/current.html

7. PROTOTYPING FOR HUMAN INTERACTION
How can inventors determine if their designs are a good fit for human users? A new program called d.Tools prototypes consumer products by blending the interactive components and physical design interface with the software's intents and purposes. The program's developers argue that many devices fail because the inventors try to mimic or supplant physical attributes with computer software, forgetting that people are inherently hands-on. They hope d.Tools will help bring about new technologies that are more in tune with what humans want. Read more:
http://www.physorg.com/news11112.html

8. INCREASED USE OF BIOMETRICS SEEN TO STOP IDENTITY THEFT
Biometrics, such as the digital record of an individual's fingerprints or iris patterns, are increasingly being used as a more secure way to confirm user identity in a variety of systems, writes Alfred C. Weaver in the current issue of "Computer" (v. 39, no. 2). Weaver identifies three broad classes of personal identification: what an individual knows (such as a password); what an individual carries (such as an ID card); and who an individual is (based on fingerprints, DNA, or some other physical or behavioral measurement). Of the three, biometric identification is the most reliable proof of identity, Weaver says, and is being implemented in more and more places like airports and border crossings, where the stakes are highest for positive identification. According to Weaver, the security of biometric identification is highly dependent upon who is collecting the data, and on the data being stored as mathematical templates so that it cannot be used to recreate the users' identifying characteristics. Read more: the link

Monday, March 06, 2006

Paper: Distributed Localization of Networked Cameras

Authors: Stanislav Funiak, Carlos Guestrin, Mark Paskin, Rahul Sukthankar
Conf: IPSN 2006
Abstract:
Camera networks are perhaps the most common type of sensor network and are deployed in a variety of real-world applications including surveillance, intelligent environments and scientific remote monitoring. A key problem in deploying a network of cameras is calibration, i.e., determining the location and orientation of each sensor so that observations in an image can be mapped to locations in the real world. This paper proposes a fully distributed approach for camera network calibration. The cameras collaborate to track an object that moves through the environment and reason probabilistically about which camera poses are consistent with the observed images. This reasoning employs sophisticated techniques for handling the difficult nonlinearities imposed by projective transformations, as well as the dense correlations that arise between distant cameras. Our method requires minimal overlap of the cameras' fields of view and makes very few assumptions about the motion of the object. In contrast to existing approaches, which are centralized, our distributed algorithm scales easily to very large camera networks. We evaluate the system on a real camera network with 25 nodes as well as simulated camera networks of up to 50 cameras and demonstrate that our approach performs well even when communication is lossy.

PDF
Movies

Sunday, March 05, 2006

My talk this week

An Introduction to the Kalman Filter

Author : Greg Welch, Gary Bishop

Introduction

The Kalman filter is a mathematical power tool that is playing an increasingly important role in computer graphics as we include sensing of the real world in our systems. The good news is you don't have to be a mathematical genius to understand and effectively use Kalman filters. This tutorial is designed to provide developers of graphical systems with a basic understanding of this important mathematical tool.

Link
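The predict/update cycle covered in the tutorial can be sketched for the simplest case, a scalar constant observed through noise. The process and measurement noise values (q, r) below are illustrative defaults, not numbers from the tutorial:

```python
# Minimal 1-D Kalman filter (scalar state): estimate a constant value
# from a stream of noisy measurements via predict/update steps.

def kalman_1d(measurements, q=1e-5, r=0.1**2, x0=0.0, p0=1.0):
    x, p = x0, p0            # state estimate and its error variance
    estimates = []
    for z in measurements:
        # Predict: constant model, so only process noise grows variance.
        p = p + q
        # Update: blend prediction with measurement via the Kalman gain.
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # corrected estimate
        p = (1 - k) * p      # corrected variance
        estimates.append(x)
    return estimates
```

With a large initial variance p0, the filter quickly trusts the data, and the estimate settles near the running mean of the measurements; the full vector/matrix form in the tutorial generalizes exactly these two steps.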

Saturday, March 04, 2006

lab meeting : Rigid-Body Alignment

Rigid-Body Alignment

ICCV 2005 Short Course: 3D Scan Matching and Registration

Szymon Rusinkiewicz, Princeton University


Abstract:
This section of the course covers techniques for pairwise (i.e., scan-to-scan) and "global" (i.e., involving more than 2 scans) alignment, given that the algorithms are constrained to obtain a rigid-body transformation.
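The closed-form core of pairwise rigid alignment, once point correspondences are known (the inner step of ICP-style scan matching), is the standard SVD solution for the best rotation. This is the textbook Kabsch/Umeyama method, offered here as a sketch rather than code from the course:

```python
import numpy as np

# Least-squares rigid alignment of corresponding 3-D point sets:
# find R, t minimizing sum ||R @ P[i] + t - Q[i]||^2.

def rigid_align(P, Q):
    """P, Q: (n, 3) arrays of corresponding points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Full scan-matching pipelines wrap this step in an iterative loop that re-estimates correspondences (e.g., closest points) between solves.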

Friday, March 03, 2006

CMU FRC talk: Parameterizing Deformable Systems to Tame Complexity

Speaker: Doug James, Assistant Professor, Computer Science and Robotics, Carnegie Mellon University
Date: Thursday, March 9, 2006

Abstract: The complexity and beauty of physical deformation phenomena in our lives is truly amazing. It fundamentally affects our appearance (skin, hair, clothing), our composition (protein folding), the sounds we make (talking, clapping), beauty in nature (irises blowing in the wind), our creations (aerospace design), and important decisions (surgical intervention). Computer modeling of deformation has made enormous progress, but the complexity of the world is humbling. We still do not know how to create immersive, realistic, real-time computer simulations of our ever-changing and deforming world.

In this talk, I will discuss our recent work on data-driven approaches for preprocessing and parameterizing deformable systems to enable greater interactivity. These techniques exploit the structure of deformable motion to build efficient output-sensitive algorithms in several key areas: subspace dynamics integration, output-sensitive collision processing, haptic force-feedback rendering, dynamic illumination modeling, and hardware-accelerated mesh animation.

CMU VASC talk: A Spectral Technique for Correspondence Problems Using Pairwise Constraints

Marius Leordeanu,
Monday, March 6, 2006

Abstract:
We present an efficient spectral method for finding consistent correspondences between two sets of features. We build the adjacency matrix M of a graph whose nodes represent the potential correspondences and the weights on the links represent pairwise agreements between potential correspondences. Correct assignments are likely to establish links among each other and thus form a strongly connected cluster. Incorrect correspondences establish links with the other correspondences only accidentally, so they are unlikely to belong to strongly connected clusters. We recover the correct assignments based on how strongly they belong to the main cluster of M, by using the principal eigenvector of M and imposing the mapping constraints required by the overall correspondence mapping (one-to-one or one-to-many). The experimental evaluation shows that our method is robust to outliers, accurate in terms of matching rate, while being several orders of magnitude faster than existing methods.
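The core idea, scoring candidate correspondences by the principal eigenvector of the pairwise-agreement matrix M and then enforcing one-to-one constraints greedily, can be sketched as follows. The tiny example matrix in the usage below is illustrative, not data from the paper:

```python
import numpy as np

# Spectral correspondence sketch: candidates[i] = (feature_a, feature_b)
# is a potential match; M[i, j] is the pairwise agreement between
# candidates i and j (symmetric, non-negative).

def spectral_match(M, candidates):
    # Principal eigenvector of M scores membership in the main cluster.
    vals, vecs = np.linalg.eigh(M)
    v = np.abs(vecs[:, np.argmax(vals)])
    # Greedy one-to-one assignment by descending eigenvector score.
    order = np.argsort(-v)
    used_a, used_b, matches = set(), set(), []
    for i in order:
        a, b = candidates[i]
        if v[i] > 0 and a not in used_a and b not in used_b:
            matches.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return matches
```

Mutually consistent correspondences reinforce each other through M's links and dominate the eigenvector, while accidental matches receive little support and are filtered out by the greedy pass.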

Short Bio: Marius Leordeanu received a double BA in Mathematics and Computer Science from Hunter College of The City University of New York. From 2002 to 2003 he worked in the vision lab at Hunter College in the area of 3D registration and modeling. Since 2003, he has been a PhD student at the Robotics Institute of Carnegie Mellon University. At CMU his main research focuses on object recognition.