Wednesday, April 30, 2008

Disney enters the programmable robot market with its own kids robot

Robots aren’t just for hobbyists anymore. The programmable gadgets have taken off thanks to the efforts of tech-oriented companies such as WowWee, Sony, Ugobe and LEGO. But the market may be ready to reach a whole new level as Disney enters it tomorrow.
LINK

Tuesday, April 29, 2008

[CSAIL Seminar] Maximum Entropy and Species Distribution Modeling

Speaker: Robert Schapire, Princeton University


Abstract:
Modeling the geographic distribution of a plant or animal species is a critical problem in conservation biology: to save a threatened species, one first needs to know where it prefers to live, and what its requirements are for survival. From a machine-learning perspective, this is an especially challenging problem in which the learner is presented with no negative examples and often only a tiny number of positive examples. In this talk, I will describe the application of maximum-entropy methods to this problem, a set of decades-old techniques that happen to fit the problem very cleanly and effectively. I will describe a version of maxent that we have shown enjoys strong theoretical performance guarantees that enable it to perform effectively even with a very large number of features. I will also describe some extensive experimental tests of the method, as well as some surprising applications.

This talk includes joint work with Miroslav Dudík and Steven Phillips.
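
To make the maxent machinery concrete, here is a minimal sketch (my own illustration with hypothetical data shapes, not the speaker's code): presence records fix the empirical means of environmental features, and the maximum-entropy distribution matching those means takes a Gibbs form whose weights can be fit by plain gradient ascent on the log-likelihood.

    import numpy as np

    def fit_maxent(features, presence_idx, lr=0.1, iters=500):
        """features: (n_cells, n_feats) environmental features per grid cell;
        presence_idx: indices of cells with recorded sightings (hypothetical data)."""
        emp_mean = features[presence_idx].mean(axis=0)   # empirical feature means
        w = np.zeros(features.shape[1])
        for _ in range(iters):
            logits = features @ w
            p = np.exp(logits - logits.max())
            p /= p.sum()                                 # Gibbs distribution over cells
            model_mean = p @ features                    # expected features under the model
            w += lr * (emp_mean - model_mean)            # gradient of the log-likelihood
        return w, p                                      # weights and predicted distribution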

[Relevant Link]

Monday, April 28, 2008

Lab Meeting April 28th, 2008 (Yu-Hsiang): Progress on abnormal object detection for the ICRA Challenge

I'll present how I use an unsupervised method to cluster SURF features and detect abnormal objects, and I will show some of my current results.
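
As a rough illustration of such a pipeline (an assumption-laden sketch, not Yu-Hsiang's actual code; SURF ships with the opencv-contrib package, and the thresholds here are made up): cluster the descriptors and flag those far from every cluster center as candidate abnormal features.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def abnormal_keypoints(image_path, n_clusters=20, dist_factor=2.0):
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
        keypoints, desc = surf.detectAndCompute(img, None)
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(desc)
        # distance of each descriptor to its assigned cluster center
        d = np.linalg.norm(desc - km.cluster_centers_[km.labels_], axis=1)
        threshold = d.mean() + dist_factor * d.std()   # simple outlier rule (illustrative)
        return [kp for kp, di in zip(keypoints, d) if di > threshold]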

Sunday, April 27, 2008

Lab Meeting April 28th, 2008 (Der-Yeuan): Progress on Feature-Based SLAM for the ICRA Challenge

I will present some of my ideas and current results in mapping a 3D environment using the SwissRanger camera and the MotionNode IMU.

Saturday, April 26, 2008

News: Academic Leaders in Robotics Research Announce Effort To Create National Strategy for Robotics Growth

PITTSBURGH—Citing the critical importance of the continued growth of robotics to U.S. competitiveness, 11 universities are taking the lead in developing an integrated national strategy for robotics research. The United States is the only nation engaged in advanced robotics research that does not have such a research roadmap.

The Computing Community Consortium (CCC), a program of the National Science Foundation, is providing support for developing the roadmap, which will be a unified research agenda for robotics across federal agencies, industry and the universities.

The effort began last year and includes representatives from the Georgia Institute of Technology, Carnegie Mellon University and the universities of Massachusetts, Pennsylvania, California-Berkeley, Southern California, Utah and Illinois, as well as Rensselaer Polytechnic Institute, Stanford University and Massachusetts Institute of Technology.

See the full article. The roadmapping effort is detailed at www.us-robotics.us.

Wednesday, April 23, 2008

[CVPR2008] Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps

Authors: Jiejie Zhu, Liang Wang, Ruigang Yang, James Davis
CVPR 2008 Oral

Abstract:
Time-of-flight range sensors have error characteristics which are complementary to passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. However, these sensors are noisy and often perform poorly on the textured scenes where stereo excels. We introduce a method for combining the results from both that performs better than either alone. A depth probability distribution function from each method is calculated and then merged. In addition, stereo methods have long used global methods such as belief propagation and graph cuts to improve results, and we apply these methods to this sensor. Since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturer’s calibration. We show that these techniques lead to improved accuracy and robustness.
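
A toy sketch of the fusion step (my simplification, not the authors' implementation): treat the ToF reading as a Gaussian likelihood over a set of candidate depths, turn the stereo matching cost volume into a second likelihood, multiply the two, and take the per-pixel MAP depth.

    import numpy as np

    def fuse_depth(tof_depth, tof_sigma, stereo_cost, depth_candidates):
        """tof_depth: (H, W) ToF estimates; stereo_cost: (H, W, D) matching
        cost volume; depth_candidates: (D,) depths indexing the volume."""
        # ToF likelihood: Gaussian around the sensor's depth estimate
        diff = depth_candidates[None, None, :] - tof_depth[..., None]
        p_tof = np.exp(-0.5 * (diff / tof_sigma) ** 2)
        # Stereo likelihood: soft-min over matching costs
        p_stereo = np.exp(-stereo_cost)
        p = p_tof * p_stereo
        p /= p.sum(axis=2, keepdims=True)        # merged per-pixel depth PDF
        return depth_candidates[np.argmax(p, axis=2)]   # MAP depth map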

fulltext

[CVPR2008] Unsupervised Modeling of Object Categories Using Link Analysis Techniques

Authors: Gunhee Kim, Christos Faloutsos, Martial Hebert (Carnegie Mellon University)

Abstract:
We propose an approach for learning visual models of object categories in an unsupervised manner. We first build a large-scale complex network that captures the interactions of all unit visual features across the entire training set, and we then infer information, such as which features belong to which categories, directly from the graph using link analysis techniques. The link analysis techniques are based on well-established graph mining techniques used in diverse applications such as the Web, bioinformatics, and social networks. The techniques operate directly on the patterns of connections between features in the graph rather than on statistical properties, e.g., from clustering in feature space. We argue that the resulting techniques are simpler, and we show that they perform similarly or better compared to state-of-the-art techniques on common data sets. We also show results on more challenging data sets than those that have been used in prior work on unsupervised modeling.
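
For intuition, a PageRank-style ranking over a feature match graph might look like the following minimal sketch (an illustration of the link-analysis idea; the paper's actual machinery is more involved):

    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        """adj: (n, n) nonnegative match-strength matrix between visual features."""
        n = adj.shape[0]
        col_sums = adj.sum(axis=0)
        col_sums[col_sums == 0] = 1.0
        M = adj / col_sums                      # column-stochastic transition matrix
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = (1 - damping) / n + damping * (M @ r)
        return r                                # high scores = strongly interconnected features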

fulltext

[CVPR2008] Model-Based Hand Tracking with Texture, Shading and Self-occlusions

Authors: Martin de La Gorce, Nikos Paragios, David J. Fleet
CVPR 2008 oral

Abstract:
A novel model-based approach to 3D hand tracking from monocular video is presented. The 3D hand pose, the hand texture and the illuminant are dynamically estimated through minimization of an objective function. Derived from an inverse problem formulation, the objective function enables explicit use of texture temporal continuity and shading information, while handling important self-occlusions and time-varying illumination. The minimization is done efficiently using a quasi-Newton method, for which we propose a rigorous derivation of the objective function gradient. Particular attention is given to terms related to the change of visibility near self-occlusion boundaries that are neglected in existing formulations. In doing so we introduce new occlusion forces and show that using all gradient terms greatly improves the performance of the method. Experimental results demonstrate the potential of the formulation.

[Link]

[CVPR 2008] A Mobile Vision System for Robust Multi-Person Tracking

Authors: Andreas Ess, Bastian Leibe, Konrad Schindler and Luc Van Gool
ETH Zurich, Switzerland, KU Leuven, Belgium
CVPR 2008 oral

Full text

Abstract:
We present a mobile vision system for multi-person tracking in busy environments. Specifically, the system integrates continuous visual odometry computation with tracking-by-detection in order to track pedestrians in spite of frequent occlusions and egomotion of the camera rig. To achieve reliable performance under real-world conditions, it has long been advocated to extract and combine as much visual information as possible. We propose a way to closely integrate the vision modules for visual odometry, pedestrian detection, depth estimation, and tracking. The integration naturally leads to several cognitive feedback loops between the modules. Among others, we propose a novel feedback connection from the object detector to visual odometry which utilizes the semantic knowledge of detection to stabilize localization. Feedback loops always carry the danger that erroneous feedback from one module is amplified and causes the entire system to become unstable. We therefore incorporate automatic failure detection and recovery, allowing the system to continue when a module becomes unreliable. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver stable tracking performance in scenes of previously infeasible complexity.

Intel Seminar : Unsupervised Analysis of Human Activities in Everyday Environments

Title: Structure from Statistics: Unsupervised Analysis of Human Activities in Everyday Environments
Speaker: Raffay Hamid
Monday, April 21st, 2008, 10:30am - 12:00pm

Abstract:
In order to make computers proactive and assistive, we must enable them to perceive, learn, and predict what is happening in their surroundings. This presents us with the challenge of formalizing computational models of everyday human activities. These models must perform well in the face of data uncertainty and complex activity dynamics. Traditional approaches to this end assume prior knowledge about the structure of human activities, using which explicitly defined activity models are learned in a supervised manner. However, for a majority of everyday environments such activity structure is generally not known a priori. In this talk, I will discuss knowledge representations and manipulation techniques that facilitate minimally supervised learning of activity structure. In particular, I will present n-gram and Suffix Tree based sequence representations for human activity analysis. I will discuss how such a data-driven approach to activity modeling can help discover and characterize human activities, and learn typical behaviors crucial for detecting irregular occurrences in an environment. I will provide experimental validation of my proposed approach for activity analysis in environments such as a residential house, a loading dock area, and a household kitchen.
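
To give a flavor of the n-gram representation mentioned above, here is a toy sketch (my own simplification with placeholder event symbols): score a new activity sequence by how surprising its event n-grams are relative to the training sequences.

    import math
    from collections import Counter

    def ngrams(seq, n=3):
        return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

    def train_counts(sequences, n=3):
        counts = Counter()
        for seq in sequences:
            counts.update(ngrams(seq, n))
        return counts

    def irregularity(seq, counts, n=3):
        """Average surprisal of the sequence's n-grams; higher = more unusual."""
        total = sum(counts.values())
        grams = ngrams(seq, n)
        # add-one smoothing so unseen n-grams get a finite penalty
        return sum(-math.log((counts[g] + 1) / (total + 1)) for g in grams) / max(len(grams), 1)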

Bio:
Raffay Hamid is a Ph.D. candidate in Computer Science in the School of Interactive Computing at the Georgia Institute of Technology, where he is a member of the Computational Perception Lab and the Aware Home Research Initiative. His research interests lie at the intersection of Statistical Learning, Computer Vision, and Ubiquitous Computing.
During his graduate years, Raffay has worked as a Research Intern at Intel Research Lab, Mitsubishi Electric Research Lab, and Microsoft Research. From 2001 to 2002, he was a Signal Processing Engineer at Techlogix Inc., working on a joint project with General Motors and Eaton Corporation. During this time he also served as an adjunct lecturer at the University of Engineering and Technology, Lahore, Pakistan. He was awarded the National Merit Scholarship from the Government of Pakistan from 1994 to 2001. More information about his curricular and co-curricular interests can be found at: www.cc.gatech.edu/~raffay .

Monday, April 21, 2008

Lab Meeting April 21st, 2008 (Leo): Ground truth system

I will show some improvements to the ground truth system.

Sunday, April 20, 2008

Lab Meeting April 21st, 2008 (Yi-Liu): Progress report

I'll show some results of Monocular DATMO.

Lab Meeting April 21st, 2008 (fish60): Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure

I will talk about the simple idea that I wanted to present last week.

Abstract: Focus on the Bi-stage Robust Motion Planning algorithm: a two-stage optimization approach, with the upper stage optimizing the risk allocation and the lower stage calculating the optimal control sequence that maximizes the reward.

Link

Lab Meeting April 21st, 2008 (Jeff): Progress report

I will show two results using the current dataset. I will also point out some problems with the current sensor model and propose a method to solve them.

Saturday, April 19, 2008

Off-Road Obstacle Avoidance through End-to-End Learning

We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two forward pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.
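
For a concrete picture of such an end-to-end regressor, here is a small PyTorch sketch (layer sizes and channel counts are my assumptions, not the paper's exact 6-layer architecture): a stacked left/right image pair goes in, a steering angle comes out.

    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),   # 6 channels: RGB left + RGB right
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
            )

        def forward(self, x):                       # x: (batch, 6, H, W) stereo pair
            return self.head(self.features(x))      # predicted steering angle

    # Supervised training simply regresses the human driver's steering, e.g.:
    # loss = nn.MSELoss()(model(images), human_steering_angles)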

Link

Tuesday, April 15, 2008

CMU RI Report: Cost-based Registration using A Priori Data for Mobile Robot Localization

L. Xu and A. Stentz
tech. report TR-08-05
Robotics Institute
Carnegie Mellon University

Abstract--A major challenge facing outdoor navigation is the localization of a mobile robot as it traverses a particular terrain. Inaccuracies in dead-reckoning and the loss of global positioning information (GPS) often lead to unacceptable uncertainty in vehicle position. We propose a localization algorithm that utilizes cost-based registration and particle filtering techniques to localize a robot in the absence of GPS. We use vehicle sensor data to provide terrain information similar to that stored in an overhead satellite map. This raw sensor data is converted to mobility costs to normalize for perspective disparities and then matched against overhead cost maps. Cost-based registration is particularly suited for localization in the navigation domain because these normalized costs are directly used for path selection. To improve the robustness of the algorithm, we use particle filtering to handle multi-modal distributions. Results of our algorithm applied to real field data from a mobile robot show higher localization certainty compared to that of dead-reckoning alone.
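
A simplified sketch of the particle-weighting step (my own reading of the idea; extract_patch is a hypothetical helper, and the Gaussian weighting is illustrative): each particle's pose selects a patch of the overhead cost map, which is scored against the locally sensed costs.

    import numpy as np

    def weight_particles(particles, local_cost, overhead_map, extract_patch, sigma=1.0):
        """particles: (N, 3) array of (x, y, theta) pose hypotheses;
        local_cost: (h, w) costs built from onboard sensing;
        extract_patch: hypothetical helper returning the overhead-map patch
        seen from a given pose, at the same resolution as local_cost."""
        weights = np.zeros(len(particles))
        for i, (x, y, theta) in enumerate(particles):
            patch = extract_patch(overhead_map, x, y, theta, local_cost.shape)
            err = np.mean((patch - local_cost) ** 2)   # registration error in cost space
            weights[i] = np.exp(-err / (2 * sigma ** 2))
        return weights / weights.sum()                 # normalized particle weights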

Check here for the full text.

[CVPR 2008] Learning Patch Correspondences for Improved Viewpoint Invariant Face Recognition

Authors: Ahmed Bilal Ashraf, Simon Lucey, Tsuhan Chen

Abstract:
Variation due to viewpoint is one of the key challenges that stand in the way of a complete solution to the face recognition problem. It is easy to note that local regions of the face change differently in appearance as the viewpoint varies. Recently, patch-based approaches, such as those of Kanade and Yamada, have taken advantage of this effect resulting in improved viewpoint invariant face recognition. In this paper we propose a data-driven extension to their approach, in which we not only model how a face patch varies in appearance, but also how it deforms spatially as the viewpoint varies. We propose a novel alignment strategy which we refer to as "stack flow" that discovers viewpoint induced spatial deformities undergone by a face at the patch level. One can then view the spatial deformation of a patch as the correspondence of that patch between two viewpoints. We present improved identification and verification results to demonstrate the utility of our technique.

[Link]

Monday, April 14, 2008

[Robotics Institute Thesis Proposal] Structured Prediction Techniques for Imitation Learning

Abstract:
Programming robots is hard. We can often easily demonstrate the behavior we desire, but mapping that intuition into the space of parameters governing the robot's decisions is difficult, time consuming, and ultimately expensive. Machine learning promises “programming by demonstration” paradigms to develop high-performance robotic systems. Unfortunately, many “classical” machine learning techniques, such as decision trees, neural networks, and support vector machines, do not fit the needs of modern robotics systems which are often built around sophisticated planning algorithms that efficiently reason about the future. Consequently, these learning systems often fall short of producing high-quality robot performance.

Rather than ignoring planning algorithms in lieu of pure learning systems, the algorithms I discuss in this proposal embrace optimal cost planning algorithms as a central component of robot behavior. I propose here a set of simple gradient-based algorithms for training cost-based planners from examples of decision sequences provided by an expert. These algorithms are simple, intuitive, easy to implement, and they enjoy both state-of-the-art empirical performance and strong theoretical guarantees. Collectively, we call our framework Maximum Margin Planning (MMP).

Our algorithms fall under the category of imitation learning. In this proposal, I first briefly survey the history of imitation learning and map the progression of algorithms that led to the development of MMP. I then discuss the MMP collection of algorithms at many levels of detail, starting from an intuitive and implementational perspective, and then proceeding to a more formal mathematical derivation. Throughout the discussion I demonstrate the techniques on a wide array of problems found in robotics, from navigational planning and heuristic learning to footstep prediction and grasp planning. Toward the end of the document I outline a set of open problems in imitation learning not solved by MMP and touch on recent progress we have made toward solving them.
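
To make the gradient-based recipe concrete, here is a schematic subgradient step in the spirit of MMP (my paraphrase under simplifying assumptions; plan_loss_augmented is a hypothetical loss-augmented planner, not the author's code): the cost weights are nudged so the expert's path becomes cheaper than the planner's current loss-augmented best path.

    import numpy as np

    def mmp_step(w, feat_expert, plan_loss_augmented, lr=0.01):
        """One subgradient step. feat_expert: feature counts along the expert's
        path; plan_loss_augmented(w): feature counts of the planner's best
        loss-augmented path under cost weights w (hypothetical planner)."""
        feat_planner = plan_loss_augmented(w)
        subgrad = feat_expert - feat_planner   # subgradient of the margin objective
        w = w - lr * subgrad                   # make the expert's path comparatively cheaper
        return np.maximum(w, 0.0)              # keep costs nonnegative (a common projection)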

Link

Saturday, April 12, 2008

[graphics] VASC Seminar: Yaser Yacoob (U Maryland), Monday April 14, 3:30pm, NSH 1507

VASC Seminar

Image Segmentation Using Meta-Texture Saliency
Yaser Yacoob
University of Maryland
3:30pm, Monday, April 14
NSH 1507

Appointments: Peggy Martin

Abstract:
The rapid increase in megapixel resolution of digital images provides a novel opportunity to capture and analyze information about scene surfaces and expand beyond the commonly used edge/color/texture attributes. The talk will address segmentation of an image into patches that have common underlying salient surface-roughness. Three intrinsic images are derived: reflectance, shading and meta-texture images. A constructive approach is proposed for computing a meta-texture image by preserving, equalizing and enhancing the underlying surface-roughness across color, brightness and illumination variations. We evaluate the performance on sample images and illustrate quantitatively that different patches of the same material, in an image, are normalized in their statistics despite variations in color, brightness and illumination. Image segmentation by line-based boundary-detection is proposed and results are provided and compared to known algorithms.

Biography:
Yaser Yacoob is a research faculty member at the Computer Vision Laboratory at the University of Maryland, College Park. His research is on image and video analysis with a focus on topics relevant to the interpretation of human appearance and motion.

Lab Meeting April 14th, 2008 (fish60): Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure

I will talk about what I have read this week.

Abstract:
Focus on the Bi-stage Robust Motion Planning algorithm: a two-stage optimization approach, with the upper stage optimizing the risk allocation and the lower stage calculating the optimal control sequence that maximizes the reward.

Link

Friday, April 11, 2008

[ML Lunch] Brian Ziebart on Learning Driving Route Preferences

Speaker: Brian Ziebart
Title: Learning Driving Route Preferences
Venue: NSH 1507
Date: Monday April 14
Time: 12:00 noon


Abstract:
Personal Navigation Devices are useful for obtaining driving directions to new destinations, but they are not very intelligent -- they observe thousands of miles of preferred driving routes but never learn from those observations when planning routes to new destinations. Motivated by this deficiency, we present a novel approach for recovering from demonstrated behavior the preference weights that drivers place on different types of roads and intersections. The approach resolves ambiguities in inverse reinforcement learning (Abbeel and Ng 2004) using the principle of maximum entropy (Jaynes 1957), resulting in a probabilistic model for sequential actions. Using the approach, we model the context-dependent driving preferences of 25 Yellow Cab Pittsburgh taxi drivers from over 100,000 miles of GPS trace data. Unlike previous approaches to this modeling problem, which directly model distributions over actions at each intersection, our approach learns the reasons that make certain routes preferable. Our reason-based model is much more generalizable to new destinations and new contextual situations, yielding significant performance improvements on a number of driving-related prediction tasks.

This is joint work with Andrew Maas, Drew Bagnell, and Anind Dey.

Lab Meeting April 14th, 2008 (Atwood): Loopy Belief Propagation

I will explain the principles behind the belief propagation (BP) algorithm, an efficient way to solve inference problems by passing local messages, and one extension, Residual Belief Propagation, which applies to arbitrary graphs, possibly with loops.
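
As a preview, here is a compact sketch of loopy BP with residual scheduling on a pairwise model (a simplified illustration in the spirit of Elidan et al., not their implementation): at each iteration, the message whose update would change the most is recomputed first.

    import numpy as np

    def residual_bp(unary, pairwise, edges, iters=500, tol=1e-6):
        """unary: {node: (k,) potentials}; pairwise: {(i, j): (k, k) potentials
        indexed [x_i, x_j]}; edges: list of (i, j) pairs. Returns beliefs."""
        k = len(next(iter(unary.values())))
        directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
        msgs = {e: np.full(k, 1.0 / k) for e in directed}

        def new_msg(i, j):
            prod = unary[i].copy()               # unary belief at i ...
            for a, b in directed:
                if b == i and a != j:
                    prod *= msgs[(a, b)]         # ... times incoming messages except j's
            pot = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = pot.T @ prod                     # marginalize over x_i
            return m / m.sum()

        for _ in range(iters):
            # residual scheduling: update the message that would change the most
            residuals = {e: np.abs(new_msg(*e) - msgs[e]).max() for e in directed}
            e = max(residuals, key=residuals.get)
            if residuals[e] < tol:
                break
            msgs[e] = new_msg(*e)

        beliefs = {}
        for i in unary:
            b = unary[i].copy()
            for a, c in directed:
                if c == i:
                    b *= msgs[(a, c)]
            beliefs[i] = b / b.sum()
        return beliefs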

References:

J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding Belief Propagation and its Generalizations. MERL Technical Report, 2001.

G. Elidan, I. McGraw, and D. Koller. Residual Belief Propagation: Informed Scheduling for Asynchronous Message Passing. Proceedings of the Twenty-second Conference on Uncertainty in AI (UAI), 2006.

J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms. IEEE Transactions on Information Theory, 2005.


Tuesday, April 08, 2008

Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure

Computer Science and Artificial Intelligence Laboratory Technical Report

Abstract:
When controlling dynamic systems such as mobile robots in uncertain environments, there is a trade-off between risk and reward. ...
This paper proposes a new approach to planning a control sequence with a guaranteed risk bound.
...
We propose a two-stage optimization approach, with the upper stage optimizing the risk allocation and the lower stage calculating the optimal control sequence that maximizes the reward.
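
A toy rendering of that two-stage structure (heavily simplified assumptions on my part; lower_stage stands in for the paper's chance-constrained control solver): the upper stage enumerates coarse ways of splitting a total risk budget across time steps, and keeps the allocation whose lower-stage solution yields the highest reward.

    import numpy as np
    from itertools import product

    def upper_stage(lower_stage, total_risk, steps, grid=4):
        """lower_stage(alloc) -> best reward achievable under per-step risk
        bounds alloc (hypothetical solver standing in for the lower stage)."""
        best_reward, best_alloc = -np.inf, None
        levels = np.linspace(0.0, total_risk, grid)
        # coarse enumeration of risk allocations respecting the total budget
        for alloc in product(levels, repeat=steps):
            if sum(alloc) > total_risk:
                continue
            reward = lower_stage(alloc)
            if reward > best_reward:
                best_reward, best_alloc = reward, alloc
        return best_alloc, best_reward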

link

CVPR08 : Trajectory Analysis and Semantic Region Modeling Using A Nonparametric Bayesian Model


Authors: Xiaogang Wang, Keng Teck Ma, Gee-Wah Ng, Eric Grimson


Abstract:
We propose a novel nonparametric Bayesian model, Dual Hierarchical Dirichlet Processes (Dual-HDP), for unsupervised trajectory analysis and semantic region modeling in surveillance settings. In our approach, trajectories are treated as documents and observations of an object on a trajectory are treated as words in a document. Trajectories are clustered into different activities. Abnormal trajectories are detected as samples with low likelihoods. The semantic regions, which are intersections of paths commonly taken by objects, related to activities in the scene are also modeled. Dual-HDP advances the existing Hierarchical Dirichlet Processes (HDP) language model. HDP only clusters co-occurring words from documents into topics and automatically decides the number of topics. Dual-HDP co-clusters both words and documents. It learns both the number of word topics and the number of document clusters from data. Under our problem settings, HDP only clusters observations of objects, while Dual-HDP clusters both observations and trajectories. The approach is evaluated on two data sets: radar tracks collected from a maritime port and visual tracks collected from a parking lot.

link

Monday, April 07, 2008

Lab Meeting April 14th, 2008 (Any): Probabilistic Terrain Analysis For High-Speed Desert Driving

Abstract--The ability to perceive and analyze terrain is a key problem in mobile robot navigation. Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fast-moving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35 mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.
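
The flavor of such a statistical test can be sketched as follows (my own simplification with made-up thresholds, not Stanley's code): a terrain cell is declared an obstacle only when the vertical spread of its measurements exceeds a height threshold by more than the combined measurement uncertainty.

    import numpy as np

    def is_obstacle(z_values, z_sigmas, height_thresh=0.15, confidence=3.0):
        """z_values: heights of range measurements falling in one terrain cell;
        z_sigmas: their 1-sigma vertical uncertainties (illustrative inputs)."""
        hi, lo = int(np.argmax(z_values)), int(np.argmin(z_values))
        spread = z_values[hi] - z_values[lo]
        # demand that the spread exceed the threshold by more than the
        # combined uncertainty of the two extreme measurements
        noise = confidence * np.hypot(z_sigmas[hi], z_sigmas[lo])
        return spread > height_thresh + noise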

S. Thrun, M. Montemerlo, and A. Aron. Probabilistic terrain analysis for high-speed desert driving. In G. Sukhatme, S. Schaal, W. Burgard, and D. Fox, editors, Proceedings of the Robotics Science and Systems Conference, Philadelphia, PA, 2006.

[Lab Meeting] April 7th - video: earthmine inc.

A 3D mapping spin-off from Berkeley.

Earthmine.com

link to the demo video

Lab Meeting April 7th, 2008 (Li-Wei): Omni-directional binocular stereoscopic images from one omni-directional camera

Omni-directional binocular stereoscopic images from one omni-directional camera
Digital and Computational Video, 2002. DCV 2002.
http://w.csie.org/~b93026/01218739.pdf

Abstract:
An omni-directional binocular stereoscopic image pair consists of two omni-directional panoramic images, one for the left eye and one for the right eye. The panoramic stereo pair provides stereo sensation over a full 360°. The omni-directional binocular stereoscopic image pair cannot be photographed by two omni-directional cameras from two viewpoints, but it can be constructed by mosaicing together the omni-directional images from four different positions around the user's position. We propose a technique for producing and evaluating omni-directional binocular stereoscopic images from one omni-directional lens attached to a digital still camera.

Sunday, April 06, 2008

News: MIT robotics group reveals emotional robot Nexi

The Personal Robots Group at the MIT Media Lab has announced its latest project, a small mobile humanoid robot called Nexi that shows emotion.

A video of the robot in action posted to YouTube has taken the blogosphere by storm.

The robot, which moves around on four wheels and has arms and hands to manipulate objects, was partially funded by the Office of Naval Research through the Defense University Research Instrumentation Program, as well as by a research grant from Microsoft Corp. Nexi and its three counterparts were developed in cooperation with the University of Massachusetts Amherst and private industry.

Cynthia Breazeal, the lead researcher on the project, calls the class of robots "MDS" for mobile/dexterous/social. The robots are targeted for completion sometime this fall. The purpose of the robots is to support research and education goals in human-robot interaction, teaming, and social learning.


Official site

Related articles:
MIT's Nexi bot wants to be your friend (engadget)
Nexi Robot from MIT (übergizmo)
Nexi, The Social Robot From MIT Goes For the Emo Look (gizmodo)
Nexi: MIT's emotive robot - Emotive or creepy? You decide (neoseeker)

Lab Meeting April 7th, 2008 (Andi)

Title: An Automated Method for Large-Scale, Ground-Based City Model Acquisition

Authors: Christian Frueh and Avideh Zakhor
Video and Image Processing Laboratory, University of California, Berkeley

from International Journal of Computer Vision 60(1), 5–24, 2004

Abstract: In this paper, we describe an automated method for fast, ground-based acquisition of large-scale 3D city models. Our experimental setup consists of a truck equipped with one camera and two fast, inexpensive 2D laser scanners, driven on city streets under normal traffic conditions. One scanner is mounted vertically to capture building facades, and the other is mounted horizontally. Successive horizontal scans are matched with each other in order to determine an estimate of the vehicle’s motion, and the relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and airborne views, this initial path is globally corrected by Monte-Carlo Localization techniques. Specifically, the final global pose is obtained by utilizing an aerial photograph or a Digital Surface Model as a global map, to which the ground-based horizontal laser scans are matched. A fairly accurate, textured 3D city model of the downtown Berkeley area has been acquired in a matter of minutes, limited only by traffic conditions during the data acquisition phase. Subsequent automated processing to accurately localize the acquisition vehicle takes 235 minutes for a 37-minute, 10.2 km drive, i.e., 23 minutes per kilometer.
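
The path-initialization step lends itself to a short sketch (an illustration, not the authors' code): relative motions from horizontal scan matching are chained in the global frame to form the initial path that Monte-Carlo Localization later corrects.

    import numpy as np

    def concatenate_motions(relative_motions):
        """relative_motions: iterable of (dx, dy, dtheta), each expressed in
        the previous pose's frame (e.g., from successive scan matches)."""
        x = y = theta = 0.0
        path = [(x, y, theta)]
        for dx, dy, dtheta in relative_motions:
            # rotate the local displacement into the global frame, then accumulate
            x += dx * np.cos(theta) - dy * np.sin(theta)
            y += dx * np.sin(theta) + dy * np.cos(theta)
            theta += dtheta
            path.append((x, y, theta))
        return path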

Link to the full paper

Lab Meeting April 7th, 2008 (ZhenYu)

I will show some results of my recent work.

Friday, April 04, 2008

VASC Seminar : Visual Analysis of Crowded Scenes

Speaker: Saad Ali
University of Central Florida
Thursday, April 3, 3:30pm, NSH 1307

Abstract:
Automatic localization, tracking, and event detection in videos of crowded environments is an important visual surveillance problem. Despite the sophistication of current surveillance systems, they have not yet attained the desirable level of applicability and robustness required for handling crowded scenes like parades, concerts, football matches, train stations, airports, city centers, malls, etc.
In this talk, I will first present a framework for segmenting scenes into dynamically distinct crowd regions using Lagrangian particle dynamics. For this purpose, the spatial extent of the video is treated as a phase space of a non-autonomous dynamical system where transport from one region of the phase space to another is controlled by the optical flow. A grid of particles is advected through the phase space by the optical flow using a numerical integration scheme, and the amount by which neighboring particles diverge is quantified using a Cauchy-Green deformation tensor. The maximum eigenvalue of this tensor is used to construct a Finite Time Lyapunov Exponent (FTLE) field, which reveals the time-dependent invariant manifolds of the non-autonomous dynamical system, called Lagrangian Coherent Structures (LCS). The LCS in turn divide the crowd flow into regions of different dynamics, and are therefore used to segment the scene into distinct crowd regions. This segmentation is then used to detect any change in the behavior of the crowd over time. Next, I will present an algorithm for tracking individual targets in high-density (hundreds of people) crowded scenes. The novelty of the algorithm lies in a scene-structure-based force model, which is used in conjunction with the available appearance information for tracking individuals in a complex crowded scene. The key ingredients of the scene structure force model are three fields, namely the 'Static Floor Field' (SFF), 'Dynamic Floor Field' (DFF), and 'Boundary Floor Field' (BFF). These fields determine the probability of a person moving from one location to another in such a way that object movement is more likely in the direction of higher fields.
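
A rough numerical sketch of the FTLE computation (my own simplification; the flow maps are assumed to come from advecting a particle grid with the video's optical flow beforehand):

    import numpy as np

    def ftle(final_x, final_y, T, spacing=1.0):
        """final_x, final_y: (H, W) final particle positions after advecting
        a regular grid with the optical flow for time T (assumed precomputed)."""
        dFx_di, dFx_dj = np.gradient(final_x, spacing)
        dFy_di, dFy_dj = np.gradient(final_y, spacing)
        H, W = final_x.shape
        out = np.zeros((H, W))
        for i in range(H):
            for j in range(W):
                J = np.array([[dFx_di[i, j], dFx_dj[i, j]],
                              [dFy_di[i, j], dFy_dj[i, j]]])  # flow-map Jacobian
                C = J.T @ J                                   # Cauchy-Green deformation tensor
                lam_max = np.linalg.eigvalsh(C)[-1]           # largest eigenvalue
                out[i, j] = np.log(max(lam_max, 1e-12)) / (2.0 * abs(T))
        return out   # ridges of this field trace boundaries between crowd regions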

Bio:
Saad Ali is currently a PhD candidate at the University of Central Florida, advised by Prof. Mubarak Shah. His research interests include surveillance in crowded and aerial scenes, action recognition, object recognition, and dynamical systems. He is a student member of IEEE.

FRC Seminar: Stingray and Daredevil: High-Speed Teleoperation and All-Weather Perception for Small UGVs

Speaker: Brian Yamauchi, Lead Roboticist, iRobot Research

Abstract:
The mission of the iRobot Research Group is to conduct applied research to develop and integrate new technologies for iRobot products. In this talk, I will describe two ongoing research projects aimed at solving key problems in mobile robotics: teleoperating UGVs at high speeds through urban environments (Stingray), and navigating autonomously in poor weather while detecting obstacles through foliage (Daredevil).

For Stingray, we have partnered with Chatten Associates to provide immersive telepresence for small UGVs using the Chatten Head-Aimed Remote Viewer (HARV). We have controlled the iRobot Warrior UGV and a high-speed 1/5-scale gas-powered radio-controlled car using the HARV. We will be adding driver assist behaviors to aid the operator in driving at high speeds.

For Daredevil, we are developing an all-weather perception payload for the PackBot that integrates ultra wideband (UWB) radar, LIDAR, and stereo vision. In initial experiments, we have demonstrated that UWB radar can detect obstacles through precipitation, smoke/fog, and sparse-to-moderate foliage. The payload will fuse the low-resolution UWB radar data with high-resolution range data from LIDAR and stereo vision. This will enable the PackBot to perform obstacle avoidance, waypoint navigation, path planning, and autonomous exploration in adverse weather and through foliage.

Speaker Bio:
Dr. Brian Yamauchi is a Lead Roboticist with iRobot's Research Group. He has been conducting robotics research and development for the last 19 years. He is the Principal Investigator for the Daredevil and Stingray Projects, both funded by the US Army Tank-Automotive Research, Development, and Engineering Center (TARDEC). At iRobot, he has conducted research in mobile robot navigation and mapping, autonomous vehicles, heterogeneous mobile robot teams, robotic casualty extraction, UAV/UGV collaboration, and hybrid UAV/UGVs. Prior to joining iRobot, he conducted robotics research at the Naval Research Laboratory, the Jet Propulsion Laboratory, Kennedy Space Center, and the Institute for the Study of Learning and Expertise. He earned his BS in Applied Math/Computer Science at Carnegie Mellon University, his MS in Computer Science at the University of Rochester, and his Ph.D. in Computer Science from Case Western Reserve University.

Tuesday, April 01, 2008

[Robotics Institute Thesis Proposal ] Adaptive Model-Predictive Motion Planning for Navigation in Complex Environments

Author:
Thomas Howard
Robotics Institute
Carnegie Mellon University

Abstract:
Outdoor mobile robot motion planning and navigation is a challenging problem in robot autonomy because of the dimensionality of the search space, the complexity of the system dynamics and the environmental interaction, and the typically limited perceptual horizon. In general, it is intractable to generate a motion plan between arbitrary boundary states that considers sophisticated models of vehicle dynamics and the entire set of feasible actions for nontrivial systems. It is even more difficult to accomplish the aforementioned goals in real time, which is necessary due to dynamic environments and updated perceptual information.

In this proposal, complex environments are defined as worlds where locally optimal motion plans are numerous and where the sensitivity of the cost function is highly dependent on state and mobility model fidelity. Examples of these include domains where obstacles are prevalent, terrain shape is varied, and the consideration of terramechanical models is important. Sequential search processes provide globally optimal solutions but are constrained to search only edges that exist in the graph and satisfy state constraints in the discretized representation of the world. Optimization and relaxation techniques determine only locally optimal, possibly homotopically distinct trajectories, and it can be difficult to provide good initial guesses of solutions. Such techniques are arguably more informed and efficient, as they follow the gradients of the cost functions to optimize trajectories and can satisfy boundary state constraints in the continuum. A better solution is to leverage the benefits of each approach and apply them in a hybrid optimization method, relaxing local and regional motion planning sequential search spaces to improve the relative optimality of solutions. Relative optimality is defined as the relationship between the quality of a motion plan and the amount of effort (time, computational resources, etc.) required to produce it. In order to achieve this, real-time processes must be developed for informed action generation (production of trajectories that consider sophisticated models of motion, suspension, and interaction with the environment) at the regional motion planning level to initialize the optimization. Since the optimality of the executed path is directly correlated with the fidelity of the motion model, a related issue is that of system identification: the adaptation of vehicle models using state and sensor data to model predictable disturbances.

In this thesis, I propose to develop techniques to generate feasible motion plans at the local and regional levels that consider sophisticated dynamics models, wheel-terrain interaction, and vehicle configuration to improve navigation capabilities of mobile robots operating in complex environments. The proposed work approaches this problem through developing, applying, and characterizing the benefits of four distinct extensions of work in model-predictive motion planning. The first is the development of a hybrid optimization technique that considers informed mobility models to improve the relative optimality of motion plans in complex environments. The second involves the optimization of search spaces through relaxation of edges and nodes. The third and fourth extensions involve the development of methods for real-time informed action generation that considers varying mobility models and simultaneous model identification and control to tune the predictive motion models. All of this work is in line with the greater goal of developing mobile robot motion planners that effectively navigate in complex environments while considering relative optimality of actions. The application of such techniques may resolve many undesirable behaviors of real systems, leading to mobile robots that are more efficient, robust, and effective at performing tasks in the real world.


Link