Thursday, August 31, 2006

Robotics: Science and Systems Conference - Responsive Robot Gaze to Interaction Partner

Authors: Y. Yoshikawa, K. Shinozawa, H. Ishiguro, N. Hagita, T. Miyamoto

Abstract: Gaze is regarded as playing an important role in face-to-face communication, for example exhibiting one's attention and regulating turn-taking during conversation, and has therefore been one of the central topics in several fields, including psychology, human-computer interaction, and human-robot interaction. Although many findings from psychology have informed previous work in both human-computer and human-robot interaction studies, how to move an agent's gaze, including when to move it, has not yet been explored, and is therefore addressed in this study. The impression a person forms from an interaction is strongly influenced by the degree to which their partner's gaze direction correlates with their own. In this paper, we propose methods of responsive robot gaze control and confirm their effect on the feeling of being looked at, which is considered to be the basis of impression conveyance with gaze, through face-to-face interaction.

LINK
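As a rough illustration of the "responsive gaze" idea, here is a minimal reactive controller in Python: it turns the robot's gaze toward the partner shortly after the partner looks at the robot. The detector interface and the delay parameter are hypothetical, chosen for the sketch rather than taken from the paper.

```python
import time

class ResponsiveGazeController:
    """Toy responsive-gaze sketch: look back at the partner shortly
    after the partner starts looking at the robot. The gaze detector
    interface and response_delay are illustrative assumptions."""

    def __init__(self, response_delay=0.5):
        self.response_delay = response_delay   # seconds before responding
        self.partner_gaze_onset = None         # when the partner looked at us

    def update(self, partner_is_looking, partner_face_direction):
        """Return a new gaze target, or None to keep the current one."""
        now = time.time()
        if partner_is_looking:
            if self.partner_gaze_onset is None:
                self.partner_gaze_onset = now
            if now - self.partner_gaze_onset >= self.response_delay:
                return partner_face_direction  # respond: look back
        else:
            self.partner_gaze_onset = None     # partner looked away; reset
        return None
```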

Robotics: Science and Systems Conference - A Probabilistic Exemplar Approach to Combine Laser and Vision for Person Tracking

Author: D. Schulz

Abstract:
This article presents an approach to person tracking that combines camera images and laser range data. The method uses probabilistic exemplar models, which represent typical appearances of persons in the sensor data by metric mixture distributions. Our approach learns such models for laser and for camera data and applies a Rao-Blackwellized particle filter in order to track a person's appearance in the data. The filter samples joint exemplar states and tracks the person's position conditioned on the exemplar states using a Kalman filter. We describe an implementation of the approach based on contours in images and laser point set features. Additionally, we show how the models can be learned from training data using clustering and EM. Finally, we give initial experimental results of the method which show that it is superior to purely laser-based approaches for determining the position of persons in images.

Link
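To make the Rao-Blackwellized structure concrete, here is a stripped-down sketch in Python: each particle carries a sampled discrete exemplar state plus an analytic Kalman filter over the person's 2-D position. The transition and likelihood models are placeholders; the paper's metric mixture models are far richer.

```python
import numpy as np

N_PARTICLES, N_EXEMPLARS = 100, 8
rng = np.random.default_rng(0)

exemplar = rng.integers(N_EXEMPLARS, size=N_PARTICLES)  # sampled appearance states
mean = np.zeros((N_PARTICLES, 2))                       # per-particle Kalman means
cov = np.tile(np.eye(2), (N_PARTICLES, 1, 1))           # per-particle covariances
weight = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def filter_step(z, Q=0.1, R=0.5):
    """One update given a 2-D position measurement z (placeholder models)."""
    global exemplar
    # 1. Sample exemplar transitions (toy random walk over exemplar indices).
    exemplar = (exemplar + rng.integers(-1, 2, N_PARTICLES)) % N_EXEMPLARS
    for i in range(N_PARTICLES):
        # 2. Kalman predict/update of position, conditioned on the exemplar.
        P = cov[i] + Q * np.eye(2)              # predicted covariance
        innov = z - mean[i]                     # innovation (static motion model)
        S = P + R * np.eye(2)                   # innovation covariance
        weight[i] *= np.exp(-0.5 * innov @ np.linalg.solve(S, innov))
        K = P @ np.linalg.inv(S)                # Kalman gain
        mean[i] = mean[i] + K @ innov
        cov[i] = (np.eye(2) - K) @ P
    weight /= weight.sum()                      # 3. normalize particle weights

for t in range(5):                              # person drifting to the right
    filter_step(np.array([t * 0.3, 0.0]) + rng.normal(0, 0.1, 2))
print("estimated position:", weight @ mean)
```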

Sunday, August 27, 2006

Lab Meeting 31 August, 2006: practice talk for the robot competition

Instead of paper presentation and discussion, we will have a practice talk and a live demo for the coming robot competition. -Bob

CMU featured project: Educational Robotics - Vehicles for Teaching and Learning

http://www.terk.ri.cmu.edu/

Related news:
University Publishes ARM Powered Robot Designs
IQ Online - Paris, France
Carnegie Mellon University's Mobile Robot Programming Lab in the US has published a Linux-based design for an ARM-powered robot. (See the full article)

Thursday, August 24, 2006

MIT PhD Thesis: Anthills Built to Order: Automating Construction with Artificial Swarms

Author: Justin Werfel
Advisor: Gerald Sussman
Issue Date: 14-Aug-2006

Abstract: Social insects build large, complex structures, which emerge through the collective actions of many simple agents acting with no centralized control or preplanning. These natural systems motivate investigating the use of artificial swarms to automate construction or fabrication. The goal is to be able to take an unspecified number of simple robots and a supply of building material, give the system a high-level specification for any arbitrary structure desired, and have a guarantee that it will produce that structure without further intervention. In this thesis I describe such a distributed system for automating construction, in which autonomous mobile robots collectively build user-specified structures from square building blocks. The approach preserves many desirable features of the natural systems, such as considerable parallelism and robustness to factors like robot loss and variable order or timing of actions. Further, unlike insect colonies, it can build particular desired structures according to a high-level design provided by the user. Robots in this system act without explicit communication or cooperation, instead using the partially completed structure to coordinate their actions. This mechanism is analogous to that of stigmergy used by social insects, in which insects take actions that affect the environment, and the environmental state influences further actions. I introduce a framework of "extended stigmergy" in which building blocks are allowed to store, process or communicate information. Increasing the capabilities of the building material (rather than of the robots) in this way increases the availability of nonlocal structure information. Benefits include significant improvements in construction speed and in ability to take advantage of the parallelism of the swarm. This dissertation describes system design and control rules for decentralized teams of robots that provably build arbitrary solid structures in two dimensions. I present a hardware prototype, and discuss extensions to more general structures, including those built with multiple block types and in three dimensions.

http://hdl.handle.net/1721.1/33791
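As a toy illustration of the stigmergic coordination the abstract describes, the sketch below lets "robots" decide where to attach blocks purely from the state of the partially built structure. The attachment rule is a simplified stand-in; the thesis develops rules that provably complete arbitrary solid 2-D structures.

```python
# Each robot reads only the local state of the partially built structure
# (and, in "extended stigmergy", information stored in the blocks) to
# decide where a block may go. This rule is a toy stand-in.

TARGET = {(0, 0), (1, 0), (2, 0), (2, 1)}   # desired structure (grid cells)
built = {(0, 0)}                            # seed block

def attachable_sites():
    """Empty target cells adjacent to the existing structure."""
    sites = set()
    for (x, y) in built:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in TARGET and nb not in built:
                sites.add(nb)
    return sites

while built != TARGET:                      # each iteration: one robot acts
    site = min(attachable_sites())          # any locally valid choice works here
    built.add(site)                         # attach a block; the structure
                                            # itself records the progress
print("structure complete:", sorted(built))
```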

News: Faster Mapping Speeds Up the Search for Oil

New algorithms are helping Shell to map possible oil reservoirs deep below the Gulf of Mexico.
By Katherine Bourzac

With demand and prices so high for crude oil, petroleum companies are searching for new reservoirs deep below the ocean floor, in areas of more geological complexity. But drilling under the ocean is very expensive, so oil companies need to have as complete an understanding of the geology where they're drilling as possible.

Even armed with reams of seismic data about the Earth's subterranean features, though, making accurate maps of the geology underlying the ocean is a challenge. Now Shell is working with computer scientists at MIT to design algorithms that will allow them to more quickly and more accurately create maps of these underground areas.

Generating maps of the deep and complex areas now under exploration by oil companies can take several people many months, says Richard Sears, a visiting scientist from Shell at MIT. Regions under study may be hundreds of square kilometers in area and several kilometers deep. Those working to create 3-D maps of these areas must process huge amounts of data.

See the full article.

CMU thesis oral: Spectral Rounding & Image Segmentation

David Tolliver, Robotics Institute, Carnegie Mellon University

29 Aug 2006

Abstract
The task of assigning labels to pixels is central to computer vision. In automatic segmentation an algorithm assigns a label to each pixel, where labels connote a shared property across pixels (e.g. color, bounding contour, texture). Recent approaches to image segmentation have formulated this labeling task as partitioning a graph derived from the image. We use spectral segmentation to denote the family of algorithms that seek a partitioning by processing the eigenstructure associated with image graphs. In this thesis we analyze current spectral segmentation algorithms and explain their performance, both practically and theoretically, on the Normalized Cuts (NCut) criterion. Further, we introduce a novel family of spectral graph partitioning methods, spectral rounding, and apply them to image segmentation tasks. Edge separators of a graph are produced by iteratively reweighting the edges until the graph disconnects into the prescribed number of components. At each iteration a small number of eigenvectors with small eigenvalue are computed and used to determine the reweighting. In this way spectral rounding directly produces discrete solutions, whereas current spectral algorithms must map the continuous eigenvectors to discrete solutions by employing a heuristic geometric separator (e.g. k-means). We show that spectral rounding compares favorably to current spectral approximations on the NCut criterion in natural image segmentation. Quantitative evaluations are performed on multiple image databases including the Berkeley Segmentation Database. These experiments demonstrate that segmentations with improved NCut value (obtained using the SR-Algorithm) are more highly correlated with human hand-segmentations.

A copy of the thesis oral document can be found at http://www.cs.cmu.edu/~tolliver/ThesisDraft.pdf.
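For intuition, here is a toy version of the spectral-rounding loop in Python: compute low eigenvectors of the graph Laplacian, shrink edges whose endpoints disagree in the embedding, and stop once the graph splits into the prescribed number of components. The reweighting schedule and stopping test are simplified assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def spectral_round(W, n_components=2, iters=50):
    """Iteratively reweight edges of affinity matrix W until it disconnects."""
    W = W.copy().astype(float)
    for _ in range(iters):
        d = W.sum(axis=1)
        L = np.diag(d) - W                       # graph Laplacian
        vals, vecs = np.linalg.eigh(L)
        if np.sum(vals < 1e-8) >= n_components:  # graph has disconnected
            break
        f = vecs[:, 1]                           # Fiedler vector
        gap = np.abs(f[:, None] - f[None, :])    # disagreement across edges
        W *= np.exp(-5.0 * gap)                  # shrink straddling edges
        W[W < 1e-6] = 0.0
    return W

# Two tight clusters joined by weak edges:
W = np.block([[np.ones((3, 3)), 0.2 * np.ones((3, 3))],
              [0.2 * np.ones((3, 3)), np.ones((3, 3))]])
np.fill_diagonal(W, 0)
print(spectral_round(W))   # cross-cluster edges driven to zero
```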

Wednesday, August 23, 2006

CNN news: Wireless robots may float above Earth

Tuesday, August 22, 2006; Posted: 9:34 p.m. EDT (01:34 GMT)

PALMDALE, California (AP) -- Bob Jones has a lofty idea for improving communications around the world: Strategically float robotic airships above Earth as an alternative to unsightly telecom towers on the ground and expensive satellites in space.

Jones, a former NASA manager, envisions a fleet of unmanned "Stratellites" hovering in the atmosphere and blanketing large swaths of territory with wireless access for high-speed data and voice communications.

The idea of using airships as communications platforms isn't new--it was widely floated during the dot-com boom. It didn't really fly then, and Jones is the first to admit the latest venture is a gamble.

See the full article.

CMU thesis proposal: Unsupervised Predictive Object Discovery

Thomas Stepleton, Robotics Institute, Carnegie Mellon University

August 28, 2006

Abstract
This thesis proposal presents a new data-driven computational framework for unsupervised learning of object models from video. This framework integrates object representation learning, image parsing, and inference into a coherent whole based on the principles of persistence, coherent covariation, and predictability of visual patterns associated with objects or object parts in dynamic visual scenes. Visual patterns in video are extracted and linked across frames by exploiting the tendency of objects to persist and change gradually in visual scenes. First, a multitude of visual pattern proposals are generated by a clustering process based on Gestalt rules. A particle filtering-based inference mechanism then uses the proposals to construct and refine hypotheses about what objects are present in the video. Hypotheses are judged based on their ability to predict future video events, and the best hypotheses are finally used to create new or refined object models. For improved robustness in feature and object identification and inference, the mechanism learns and employs representations that explicitly encode the temporal dynamics of visual patterns. The key insight of the approach is the use of prediction of “future” visual events to facilitate inference and to validate learned representations. This framework is inspired by principles and insights from cognitive neuroscience, and thus the mechanisms investigated are relevant to understanding the representational development of object models in the brain.

A copy of the thesis proposal document can be found at http://gs2040.sp.cs.cmu.edu/UPOD/.
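The "judge hypotheses by their predictions" step can be illustrated with a minimal sketch: each hypothesis predicts the next observation of a visual pattern and is scored by its prediction error. Everything here (constant-velocity models, the scoring rule) is a toy stand-in for the proposal's learned dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

class Hypothesis:
    """Toy object hypothesis with a constant-velocity dynamic model."""
    def __init__(self, pos, vel):
        self.pos, self.vel = np.array(pos, float), np.array(vel, float)
        self.score = 0.0

    def predict(self):
        return self.pos + self.vel                # predicted next position

    def update(self, observed_pos):
        err = np.linalg.norm(self.predict() - observed_pos)
        self.score -= err                         # reward good prediction
        self.pos = np.array(observed_pos, float)

# A pattern moving right at ~1 px/frame, plus two competing hypotheses:
hyps = [Hypothesis((0, 0), (1, 0)), Hypothesis((0, 0), (0, 1))]
for t in range(1, 10):
    obs = np.array([t, 0]) + rng.normal(0, 0.05, 2)
    for h in hyps:
        h.update(obs)
print("best hypothesis velocity:", max(hyps, key=lambda h: h.score).vel)
```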

CMU Thesis proposal: Occlusion Boundaries: From Low-Level Detection to High-Level Reasoning

Andrew Stein, Robotics Institute, Carnegie Mellon University

28 August 2006

Abstract
While much focus in computer vision is placed on the processing of individual, static images, many applications actually offer video, or sequences of images, as input. The extra temporal dimension of the data allows the motion of the camera or the scene to be used in processing. In particular, this motion provides the opportunity to observe objects or surfaces occluding one another. While often considered a nuisance to be "handled," the boundaries of objects at which occlusion occurs can also be valuable sources of information about 3D scene structure and shape. Since most, if not all, computer vision techniques aggregate information spatially within a scene via smoothing, patches, or graphical models with neighborhood structures, information from different physical surfaces in the scene is invariably and erroneously considered together. The low-level ability to locally detect occlusion through motion, then, should benefit many different vision techniques.

To this end, we propose to use our existing low-level occlusion detector, based on local reasoning about moving edges and the patches of data on either side of them, to find those edges in a scene which show evidence of being occlusion boundaries. We will also propose tackling this problem with a learned discriminative classifier, using the same motion features. Taking uncertainty into account, we will then propagate this local, low-level information more globally using random field methods or a confidence-based hysteresis thresholding approach. With extended occlusion boundaries available, we can then develop methods for incorporating that information into existing feature-based object recognition techniques, including our own Background and Scale Invariant Feature Transform (BSIFT). Leveraging existing techniques as a foundation, we also propose the use of these boundaries in generic object detection and segmentation, which may be advantageous for unsupervised detection and learning of novel objects in general environments.

This thesis therefore seeks to contribute to both the low- and high-level aspects of reasoning about occlusion:

- We will develop and compare a novel model-based detector and a learned discriminative classifier for extracting local occlusion boundaries in short video clips, both based on local motion features.

- We will show how to use occlusion boundary information to benefit the high-level tasks of feature-based object recognition and object detection/segmentation, possibly for unsupervised learning of object models.

We have existing work completed at either end of the spectrum (model-based detection and boundary-respecting recognition). Future work includes improvements to each, the connection of the two, and further research on the segmentation and learning tasks.

A copy of the thesis proposal document can be found at http://www.andrewstein.net/proposall.pdf.
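Of the ingredients above, the confidence-based hysteresis idea is easy to sketch: accept boundary pixels with high occlusion confidence outright, then grow the accepted set through adjacent pixels of moderate confidence. The thresholds and 4-connectivity below are illustrative choices, not the thesis's.

```python
import numpy as np

def hysteresis(conf, hi=0.8, lo=0.4):
    """Keep high-confidence pixels, plus weak pixels connected to them."""
    strong = conf >= hi
    weak = conf >= lo
    accepted = strong.copy()
    changed = True
    while changed:                        # grow through adjacent weak pixels
        grown = accepted.copy()
        grown[1:, :] |= accepted[:-1, :]  # propagate to 4-neighbors
        grown[:-1, :] |= accepted[1:, :]
        grown[:, 1:] |= accepted[:, :-1]
        grown[:, :-1] |= accepted[:, 1:]
        new = grown & weak
        changed = (new != accepted).any()
        accepted = new
    return accepted

conf = np.array([[0.9, 0.5, 0.5, 0.1],
                 [0.1, 0.1, 0.5, 0.1]])
print(hysteresis(conf).astype(int))   # weak run attached to the strong seed survives
```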

(Leo) My Talk, Aug 24, 2006: 3-D Localization and Mapping Using a Single Camera Based on Structure-from-Motion with Automatic Baseline Selection

Title: 3-D Localization and Mapping Using a Single Camera Based on Structure-from-Motion with Automatic Baseline Selection

Proceedings of the 2005 IEEE
International Conference on Robotics and Automation
Barcelona, Spain, April 2005

Author: Tomono, M.

Abstract: This paper presents a system of 3-D simultaneous localization and mapping (SLAM) using monocular vision, based on the structure-from-motion scheme. A crucial issue in applying structure-from-motion to SLAM is that accuracy depends heavily on the baseline distance. We address this problem by selecting an appropriate baseline based on criteria for the tradeoff between the baseline distance and the number of feature points visible in the images. Experimental results show that full 3-D sparse maps with camera trajectory were built from images captured with a handheld camera.

PDF file: [link] (from IEEEXplore)
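The baseline-selection tradeoff is easy to caricature in code: a longer baseline conditions triangulation better, but fewer features remain matched across the pair. The score below (baseline times match count, with a minimum-match cutoff) is a hypothetical stand-in for the paper's criteria.

```python
def select_pair(candidates, min_tracked=30):
    """candidates: dicts with 'baseline' (meters) and 'n_tracked' feature
    matches between the current frame and a candidate earlier frame."""
    best, best_score = None, -1.0
    for c in candidates:
        if c["n_tracked"] < min_tracked:
            continue                             # too few matches to be reliable
        score = c["baseline"] * c["n_tracked"]   # reward both jointly
        if score > best_score:
            best, best_score = c, score
    return best

candidates = [{"baseline": 0.05, "n_tracked": 180},   # too short: poor triangulation
              {"baseline": 0.30, "n_tracked": 90},
              {"baseline": 0.80, "n_tracked": 12}]    # too long: features lost
print(select_pair(candidates))   # -> the 0.30 m candidate wins
```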

CMU thesis proposal: Incorporating unsupervised image segmentation into object class detection and localization

Caroline Pantofaru, Robotics Institute, Carnegie Mellon University

28 Aug 2006

Abstract
As the performance of object recognition and localization systems improves, there is increasing demand for their application to problems which require an exact pixel-level object mask. Photograph post-processing and robot-object interaction are just two examples of applications which require knowledge of exactly which pixels in an image are part of a specific object, and which ones are not. Traditional object recognition systems which generate bounding boxes around the found objects are inappropriate for these applications. The point- and patch-based features that these systems use are also ill-suited to delineating an object mask for a highly deformable object. Thus we propose to explore a framework for using segmentation regions for object learning and recognition. Image segmentation regions have a data-driven shape, so they can adapt to object boundaries well. In fact, if the right set of regions is grouped together, the entire object can be defined. In this proposal we will examine the issues which accompany using segmentation regions for recognition, namely:

- describing segmentation regions in a reliable and discriminative manner,

- grouping over-segmented regions together for more robust recognition and complete object segmentation, and

- within the context of the above framework, generating multiple segmentations per image to overcome the inherent ambiguity in unsupervised segmentation.

Since obtaining training data with hand-segmented objects is extremely expensive, we propose to use semi-supervised training data for which only image-level object labels are known but the pixels themselves are not labeled. Upon completion of the items in this proposal, we will have a better understanding of the issues related to performing object recognition and localization for such demanding applications.

A copy of the thesis proposal document can be found at http://gs2051.sp.cs.cmu.edu/proposal.pdf.
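As a rough sketch of the "multiple segmentations per image" idea, the snippet below segments the same image several times with different parameter settings, leaving later stages to pick the regions that best support recognition. k-means on color-plus-position features is an illustrative choice, not the proposal's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def multiple_segmentations(image, ks=(2, 4, 8)):
    """Return one label map per parameter setting for the same image."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel features: color plus (down-weighted) position.
    feats = np.concatenate([image.reshape(-1, 3),
                            0.5 * np.stack([ys.ravel(), xs.ravel()], 1)], 1)
    return [KMeans(n_clusters=k, n_init=5, random_state=0)
            .fit_predict(feats).reshape(h, w) for k in ks]

image = np.random.rand(32, 32, 3)         # stand-in for a real image
segs = multiple_segmentations(image)
print([seg.max() + 1 for seg in segs])    # region counts: [2, 4, 8]
```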

News: The Robots Are Coming!

Elizabeth Corcoran, 08.18.06, 8:00 AM ET (The full article)

The robots are on the move--leaping, scrambling, rolling, flying, climbing. They are figuring out how to get here on their own. They come to help us, protect us, amuse us--and some even do floors.

Since Czech playwright Karel Capek popularized the term ("robota" means "forced labor" in Czech) in 1921, we have imagined what robots could do. But reality fell short of our plans: Honda Motor trotted out its Asimo in 2000, but for now it's been relegated to temping as a receptionist at Honda and doing eight shows a week at Disneyland. The majority of the world's robots are bolted to a spot on a factory floor, sentenced to a repetitive choreography of welding, stamping and cutting.

...

Learning has been key, both for robots and for their designers. Carnegie Mellon's Robotics Institute has been an incubator for much of the current work on robots. Rodney Brooks of the Massachusetts Institute of Technology nudged the whole field forward in the early 1990s when he showed how robots could make faster decisions by responding to sensory data from their immediate environment rather than relying on complex sets of rules.

...

Tandy Trower, general manager of Microsoft's robotics group, says robotics today reminds him of the early days of the PC--chock-full of ideas, opportunities and too many different operating systems.

Unlike PCs, however, robots are calling on the ingenuity of people from wildly diverse backgrounds: biologists are teaching robots to move, entertainers are teaching them how to amuse us, statisticians are teaching them when to ignore data, computer scientists are teaching them how to think, and materials scientists are inventing new composites that make them light on their feet.


Robots are about to be unshackled from forced labor. Expect them everywhere.

Sunday, August 20, 2006

(Casey) My talk, 24 August 2006: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories

Title: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories.(CVPR 2004, Workshop on Generative-Model Based Vision.)

Author: L. Fei-Fei, R. Fergus, and P. Perona.

Abstract: Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner, and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.

PDF file: [Link]

Other papers by Li Fei-Fei can be downloaded at this link: [Link]
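To see the flavor of incremental Bayesian parameter learning, here is the simplest conjugate case: a Gaussian prior over a category's feature mean (known observation variance), updated one training example at a time. The paper's constellation models are far richer; this only shows why few examples plus a sensible prior can already give a usable posterior.

```python
import numpy as np

def incremental_update(mu0, tau0_sq, x, sigma_sq):
    """Posterior over the mean after observing x ~ N(mean, sigma_sq)."""
    prec = 1.0 / tau0_sq + 1.0 / sigma_sq        # precisions add
    mu = (mu0 / tau0_sq + x / sigma_sq) / prec   # precision-weighted mean
    return mu, 1.0 / prec

# Broad prior (e.g. assembled from previously learnt, unrelated
# categories), then just three training examples:
mu, tau_sq = 0.0, 10.0
for x in [1.2, 0.8, 1.0]:
    mu, tau_sq = incremental_update(mu, tau_sq, x, sigma_sq=0.25)
print(mu, tau_sq)   # posterior concentrates near 1.0 after a few examples
```

Because the posterior after each example serves as the prior for the next, the incremental and batch posteriors coincide here, which is the property that makes incremental learning attractive.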

Saturday, August 19, 2006

CNN: A shopping cart that doesn't run into things!


Friday, August 18, 2006; Posted: 4:33 p.m. EDT (20:33 GMT)
GAINESVILLE, Florida (AP) -- It looks almost like any other shopping cart, except sensors let it follow the shopper around the supermarket and slow down when needed so items can be placed in it. And it never crashes into anyone's heels.

"The immediate thing that jumped to my mind was all those times as a kid when my sister would accidentally hit me with a cart," said its inventor, Gregory Garcia. "It seems like the public would really want this, since everybody shops."

See the full article.

Friday, August 18, 2006

News: Robot team-mates tap into each others' talents

18:09 15 August 2006
NewScientist.com news service
Tom Simonite

Teams of robots that can remotely tap into each other's sensors and computers in order to perform tricky tasks have been developed by researchers in Sweden. The robots can, for example, negotiate their way past awkward obstacles by relaying different viewpoints to one another.

Robert Lundh, who developed the bots at Örebro University, says cooperative behaviour is normally rigidly pre-programmed into robots. "We wanted to have the robots plan for themselves how to draw on their capabilities and those of others," he told New Scientist.

Lundh's robots decide whether another nearby robot may be able to help with a specific task. In one experiment two round robots, each 45 centimetres in diameter and 25 cm tall, teamed up to negotiate their way through a doorway. They were forced to cooperate because each robot's vision system had been limited so that it could not see enough of the doorway to be certain of getting through without hitting the sides.

See the full article.

Wednesday, August 16, 2006

What's New @ IEEE in Computing, August 2006

8. THE TRAP OF ROBOT MORALITY
Before humans design robots with a moral capacity, we should decide exactly what that capacity should be, and to whom it should apply, according to Christopher Grau, assistant professor of philosophy at Florida International University. Writing in his article "There Is No 'I' in 'Robot': Robots and Utilitarianism," published in "IEEE Intelligent Systems" magazine, Grau uses the 2004 film "I, Robot" as a philosophical springboard to discuss the implications of utilitarianism, an ethical theory that requires moral agents to pursue actions that will maximize overall happiness. When faced with various possible actions, a utilitarian does what will produce the greatest net happiness, considering the happiness and suffering of all those affected by the action. Grau believes it is possible that sentient robots will be able to make utilitarian calculations, but that those calculations can sometimes be reduced to an "ends justifies the means" philosophy that is morally repugnant to humans. He says, however, that utilitarian moral theory might provide the best ethical theory for artificial agents that lack the boundaries of self that normally make utilitarian calculation inappropriate. Read more (PDF): the link

2. TOPIC MODELING SPEEDS UP THE SEARCH
A new technology developed by researchers at the University of California, Irvine (USA), called Topic Modeling, allows people to locate topic-specific information in computerized newspaper text. The process involves looking for patterns of words that tend to occur together in documents, then automatically categorizing those words into topics. Before this, people searching for information had to enter the topic itself (or something closely related). For example, the researchers entered the words "Lance Armstrong," "Bike," "Race," and "Rider," and the program categorized them all under "Tour de France." Previously, looking for information this way was referred to as supervised learning, and involved many man-hours. The researchers presented their findings recently at the IEEE Intelligence and Security Informatics Conference, and speculate that the technique will make retrieving information easier and quicker. Read more:
http://www.primidi.com/2006/07/27.html
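The article doesn't name the exact model, but the behavior it describes (grouping co-occurring words into topics without supervision) matches LDA-style topic models; here is a sketch using scikit-learn's implementation as a stand-in for the UCI system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus (real applications use thousands of articles).
docs = ["lance armstrong wins the bike race",
        "the rider leads the race in france",
        "oil prices rise as reservoirs shrink",
        "drilling for oil below the gulf"]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)                          # word-count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-4:]]  # most probable words
    print(f"topic {k}: {top}")                      # co-occurring words grouped
```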

Tuesday, August 15, 2006

Lab Meeting 17 August, 2006 (Yu-Chun): Function meets style: insights from emotion theory applied to HRI

Breazeal, C.
Massachusetts Institute of Technology, Cambridge, MA, USA

IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2004

Abstract:
As robot designers, we tend to emphasize the cognitive aspect of intelligence when designing robot architectures while viewing the affective aspect with skepticism. However, scientific studies continue to reveal the deeply intertwined and complementary roles that cognition and emotion play in intelligent decision-making, planning, learning, attention, communication, social interaction, memory, and more. Such findings provide valuable insights and lessons for the design of autonomous robots that must operate in complex and uncertain environments and perform in cooperation with people. This paper presents a concrete implementation of how these insights have guided our work, focusing on the design of sociable autonomous robots that interact with people as capable partners.

[Link]

Sunday, August 13, 2006

CNN News: Sink-or-swim robot race


Friday, August 11, 2006; Posted: 3:13 p.m. EDT (19:13 GMT)

SAN DIEGO, California (AP) -- Facing an exodus of institutional brain power as baby-boomer scientists retire, the Navy is turning to a younger pool of talent for its underwater robotics program.

As part of the effort, college students were recently invited to build robots that could perform a series of tasks without human control in a 38-foot deep research pool. The culmination, last weekend's International Autonomous Underwater Vehicle Competition, was a sink-or-swim contest.

The robots were required to swim through a gate, find and dock with a flashing light box, locate and tag a cracked pipeline, then home in on an acoustic beacon and resurface in a designated recovery zone. Top prize was $7,000 and serious bragging rights.

See the full article.

Friday, August 11, 2006

CMU RI Thesis Oral: Exploiting Spatial-temporal Constraints for Interactive Animation Control

Jinxiang Chai, Robotics Institute, Carnegie Mellon University
14 Aug 2006

Interactive control of human characters would allow the intuitive control of characters in computer/video games, the control of avatars for virtual reality, electronically mediated communication or teleconferencing, and the rapid prototyping of character animations for movies. To be useful, such a system must be capable of controlling a lifelike character interactively, precisely, and intuitively. Building an animation system for home use is particularly challenging because the system should also be low-cost and not require a considerable amount of time, skill, or artistry.

This thesis explores an approach that exploits a wide range of spatial-temporal constraints for interactive animation control. The control inputs from such a system are often low-dimensional, contain far less information than actual human motion, and thus cannot be directly used for precise control of high-dimensional characters. However, natural human motion is highly constrained; the movements of the degrees of freedom of the limbs or facial expressions are not independent.

Our hypothesis is that the knowledge about natural human motion embedded in a domain-specific motion capture database can be used to transform the underconstrained user input into realistic human motions. The spatial-temporal coherence embedded in the motion data allows us to control high-dimensional human animations with low-dimensional user input.

We demonstrate the power and flexibility of this approach through three different applications: controlling detailed three-dimensional (3D) facial expressions using a single video camera, controlling complex 3D full-body movements using two synchronized video cameras and a very small number of retro-reflective markers, and controlling realistic facial expressions or full-body motions using a sparse set of intuitive constraints defined throughout the motion. For all three systems, we assess the quality of the results by comparisons with those created by a commercial optical motion capture system. We demonstrate that the quality of the animation created by all three systems is comparable to that of commercial motion capture, while requiring less expense, time, and space to suit up the user.

Further Details: A copy of the thesis oral document can be found at http://www.cs.cmu.edu/~jchai/thesis/chai-defense.pdf.
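A drastically simplified sketch of the core idea: pair database poses with the low-dimensional control signals they would produce, then reconstruct a full pose from new control input by nearest-neighbor lookup in the database. The thesis uses far more sophisticated local models; the names and dimensions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_POSES, POSE_DIM, CONTROL_DIM = 500, 60, 4

poses = rng.normal(size=(N_POSES, POSE_DIM))        # motion capture database
proj = rng.normal(size=(POSE_DIM, CONTROL_DIM))     # pose -> control mapping
controls = poses @ proj                             # matching control signals

def reconstruct(control_input, k=5):
    """Average the k database poses whose controls best match the input."""
    d = np.linalg.norm(controls - control_input, axis=1)
    nearest = np.argsort(d)[:k]
    return poses[nearest].mean(axis=0)

# A noisy low-dimensional query recovers its high-dimensional pose:
query = controls[42] + rng.normal(0, 0.01, CONTROL_DIM)
print(np.linalg.norm(reconstruct(query, k=1) - poses[42]))  # ~0
```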

MIT Thesis Defense: Learning with Online Constraints: Shifting Concepts and Active Learning

Speaker: Claire Monteleoni , MIT CSAIL
Date: Friday, August 11 2006
Time: 2:00PM to 3:00PM
Host: Tommi Jaakkola, MIT CSAIL
Contact: Claire Monteleoni, cmontel@csail.mit.edu
Relevant URL: http://people.csail.mit.edu/cmontel

Many practical problems, such as forecasting, real-time decision making, streaming data applications, and resource-constrained learning, can be modeled as learning with online constraints. This thesis is concerned with analyzing and designing algorithms for learning under the following online constraints: 1) The algorithm has only sequential, or one-at-a-time, access to data. 2) The time and space complexity of the algorithm must not scale with the number of observations. We analyze learning with online constraints in a variety of settings, including active learning. The active learning model is applicable in any domain in which unlabeled data is easy to come by and there exists a (potentially difficult or expensive) mechanism by which to attain labels.

We present the following algorithms, performance guarantees, and applications for learning with online constraints. In a supervised learning framework in which observations are assumed to be iid, we lower bound the mistake-complexity of Perceptron, a standard online learning algorithm, and then provide a modified update that avoids this lower bound, attaining the optimal mistake-complexity for the problem in question. In an analogous active learning framework, our lower bound applies to the label-complexity of Perceptron paired with any active learning rule. We provide a new online active learning algorithm that avoids this lower bound, and we upper bound its label-complexity. The upper bound is optimal and also bounds the algorithm's total errors (labeled and unlabeled). We analyze the algorithm further, yielding a label-complexity bound under relaxed assumptions, and we perform an empirical evaluation on problems in optical character recognition. Finally, in a supervised learning framework involving no statistical assumptions on the observation sequence, we provide a lower bound on regret for a class of shifting algorithms. We apply an algorithm we provided in previous work, that avoids this lower bound, to an energy-management problem in wireless networks, and demonstrate this application in a network simulation.

Thesis Committee:
Tommi Jaakkola, MIT CSAIL (Thesis Supervisor)
Piotr Indyk, MIT CSAIL
Sanjoy Dasgupta, UC San Diego
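For a feel of the online active-learning setting, here is a toy Perceptron-style learner that sees an unlabeled stream one point at a time and queries a label only when its prediction margin is small. The fixed threshold and synthetic oracle are illustrative; the thesis analyzes principled query rules with label-complexity guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, threshold = 5, 2000, 0.1
w_true = rng.normal(size=d); w_true /= np.linalg.norm(w_true)  # target concept
w = np.zeros(d)
queries = 0

for _ in range(T):
    x = rng.normal(size=d); x /= np.linalg.norm(x)   # stream of unit vectors
    margin = w @ x
    if abs(margin) < threshold:                      # uncertain: query a label
        queries += 1
        y = np.sign(w_true @ x)                      # oracle label
        if np.sign(margin) != y:
            w += y * x                               # Perceptron update

w_hat = w / np.linalg.norm(w)
angle = np.degrees(np.arccos(np.clip(w_true @ w_hat, -1.0, 1.0)))
print(f"queried {queries}/{T} labels; angle to target: {angle:.1f} deg")
```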

News: Educational Robotics

Robotics Academy educators say robotics could become an even more powerful teaching tool with the curriculum they developed for the new version of LEGO Education’s popular MINDSTORMS robot-building set. See the full article.

Tuesday, August 08, 2006

Lab Meeting 10 August, 2006 (Ashin): PdaDriver: A Handheld System for Remote Driving

Authors: Terrence Fong, Charles Thorpe, and Betty Glass
Abstract:
PdaDriver is a Personal Digital Assistant (PDA) system for vehicle teleoperation. It is designed to be easy-to-deploy, to minimize the need for training, and to enable effective remote driving through multiple control modes. This paper presents the motivation for PdaDriver, its current design, and recent outdoor tests with a mobile robot.

[Link]

CNN news: Video cameras on the lookout for terrorists

Monday, August 7, 2006; Posted: 2:52 p.m. EDT (18:52 GMT)

NISKAYUNA, New York (AP) -- It sounds like something out of science fiction.

Researchers at General Electric Co.'s sprawling research center are creating new "smart video surveillance" systems that can detect explosives by recognizing the electromagnetic waves given off by objects, even under clothing.

Scientist Peter Tu and his team are also developing programs that can recognize faces, pinpoint distress in a crowd by homing in on erratic body movements, and synthesize the views of several cameras into one bird's-eye view, as part of a growing effort to thwart terrorism.

See the full article.

Saturday, August 05, 2006

MIT defense: Algorithms for Data Mining

Speaker: Grant Wang , MIT
Date: Monday, August 7 2006
Time: 1:00PM

Data of massive size are now available in a wide variety of fields and come with great promise. In theory, these massive data sets allow data mining and exploration on a scale previously unimaginable. However, in practice, it can be difficult to apply classic data mining techniques to such massive data sets due to their sheer size.

In this thesis, we study three algorithmic problems in data mining with consideration to the analysis of massive data sets. Our work is both theoretical and experimental -- we design algorithms and prove guarantees for their performance and also give experimental results on real data sets. The three problems we study are: 1) finding a matrix of low rank that approximates a given matrix, 2) clustering high-dimensional points into subsets whose points lie in the same subspace, and 3) clustering objects by pairwise similarities/distances.
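For problem (1), the exact answer that fast data-mining algorithms try to approximate is the truncated SVD: by the Eckart-Young theorem it gives the best rank-k approximation in Frobenius norm. A minimal baseline sketch (the thesis studies sampling-based methods that avoid computing the full SVD on massive matrices):

```python
import numpy as np

def low_rank(A, k):
    """Best rank-k approximation of A in Frobenius norm (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]   # keep only the top k singular triples

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20)) @ rng.normal(size=(20, 80))  # rank <= 20
A_noisy = A + 0.01 * rng.normal(size=A.shape)
err = np.linalg.norm(A_noisy - low_rank(A_noisy, 20)) / np.linalg.norm(A_noisy)
print(f"relative error of rank-20 approximation: {err:.4f}")  # near zero
```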

New Scientist magazine 4 August 2006

Shape-shifting lens mimics human eye
A lens has been developed that alters its focal length when squeezed by an artificial muscle in response to environmental changes.

Virtual bots teach each other using wordplay
The same technique could enable real-life robots to cooperate more effectively when faced with a new challenge

Software meshes photos to create 3D landscape
Overlapping image areas are identified and used to determine how images should be displayed in a 3D environment

Wednesday, August 02, 2006

AAAI 2007 Spring Symposia

The American Association for Artificial Intelligence, in cooperation with Stanford University's Computer Science Department, is pleased to present its 2007 Spring Symposium Series, to be held Monday through Wednesday, March 26-28, 2007 at Stanford University in Stanford, California. The topics of the nine symposia in this symposium series are: