Thursday, April 30, 2009

CMU talk: Learning to Search: Structured Prediction Techniques for Imitation Learning

PhD Thesis Defense:

Learning to Search: Structured Prediction Techniques for Imitation Learning

Nathan D. Ratliff
Carnegie Mellon University

May 01, 2009

Abstract: Modern robots successfully manipulate objects, navigate rugged terrain, drive in urban settings, and play world-class chess. Unfortunately, programming these robots is challenging, time-consuming and expensive; the parameters governing their behavior are often unintuitive, even when the desired behavior is clear and easily demonstrated. Inspired by successful end-to-end learning systems such as neural network controlled driving platforms (Pomerleau, 1989), learning-based "programming by demonstration" has gained currency as a method to achieve intelligent robot behavior. Unfortunately, with highly structured algorithms at their core, it is not clear how to effectively and efficiently train modern robotic systems using classical learning techniques. Rather than redefining robot architectures to accommodate existing learning algorithms, in this thesis I develop learning techniques that leverage the performance of modern robotic components.


My presentation begins with a discussion of a novel imitation learning framework we call Maximum Margin Planning, which automates finding a cost function for optimal planning and control algorithms such as A*. In the linear setting, this framework has firm theoretical backing in the form of strong generalization and regret bounds. Further, I have developed practical nonlinear generalizations that are effective and efficient for real-world problems. This framework reduces imitation learning to a modern form of machine learning known as Maximum Margin Structured Classification (Taskar et al. 2005); these algorithms therefore apply both specifically to training existing state-of-the-art planners and broadly to solving a range of structured prediction problems of importance in learning and robotics.
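For readers unfamiliar with the structured-margin flavor of this idea, here is a minimal Python sketch of one subgradient step of a Maximum-Margin-Planning-style learner. The planner interface (plan), the per-state feature map, and the learning-rate and regularization values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def mmp_subgradient_step(w, features, demo_path, plan, loss_map, lr=0.01, reg=1e-3):
    """One simplified structured subgradient step of Maximum Margin Planning.

    features : dict mapping each state s to a feature vector f(s)
    demo_path: list of states in the demonstrated path
    plan     : function(costs) -> list of states (e.g., A* on the cost map); hypothetical interface
    loss_map : dict mapping state -> loss for deviating from the demonstration
    """
    # Loss-augmented costs: states far from the demonstration look cheaper,
    # so the planner actively searches for margin violations.
    costs = {s: float(np.dot(w, f)) - loss_map.get(s, 0.0) for s, f in features.items()}
    rival_path = plan(costs)

    # Subgradient: demonstrated feature counts minus the loss-augmented plan's counts.
    f_demo = np.sum([features[s] for s in demo_path], axis=0)
    f_rival = np.sum([features[s] for s in rival_path], axis=0)
    grad = (f_demo - f_rival) + reg * w

    return w - lr * grad
```

Iterating this step over the demonstrations (and projecting w if the planner requires nonnegative costs) gives the basic linear-setting loop; the nonlinear generalizations mentioned in the abstract go beyond this sketch.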

In difficult high-dimensional planning domains, such as those found in many manipulation problems, high-performance planning technology remains a topic of much research. I will present some recent work which moves toward simultaneously advancing this technology while retaining the learnability developed above.

I'll demonstrate our algorithms on a range of applications including overhead navigation, quadrupedal locomotion, heuristic learning, manipulation planning, grasp prediction, driver prediction, pedestrian prediction, optical character recognition, and LADAR classification.

link

Saturday, April 25, 2009

CMU talk: Camera and LIDAR Fusion for Mapping in Dark Environments

FRC Seminar:

Camera and LIDAR Fusion for Mapping in Dark Environments

Uland Wong
PhD Student, Robotics Institute, Carnegie Mellon University

Thursday, April 30th, 2009

Abstract: Unlit diffuse environments like subterranean voids on Earth and planetary surfaces elsewhere are of great interest for robotic exploration and exploitation. These environments pose unique obstacles and constraints, including the necessity for active perception and illumination. However, uniformity of albedo, lack of external lighting, and known surface reflectance provide additional assumptions that can be used to enhance 3D mapping and photographic data collected from robots. This talk presents a method for improving the accuracy of super-resolution point clouds by fusing actively illuminated HDR camera imagery with LIDAR data in dark Lambertian environments. The key idea is shape recovery through estimation of the illumination function, integrated in a Markov Random Field (MRF) framework. Experimental results collected from a virtual reconstruction of the Bruceton Research Mine in Pittsburgh, PA are also presented.
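The abstract does not give the speaker's formulation, but the general flavor of image-guided MRF fusion can be sketched as follows: fill in a sparse LIDAR depth map while weighting the smoothness term by camera-intensity similarity. The quadratic energy, the Gauss-Seidel solver, and all names below are illustrative assumptions rather than the presented method.

```python
import numpy as np

def fuse_depth_with_image(sparse_depth, valid, image, iters=200, lam=10.0, sigma=0.1):
    """Minimal MRF-style fusion sketch: densify a sparse LIDAR depth map,
    using camera intensities to weight the smoothness term.

    sparse_depth : HxW array, LIDAR range where available (anything elsewhere)
    valid        : HxW boolean mask of LIDAR returns
    image        : HxW intensity image (actively illuminated, roughly Lambertian)

    Energy:  sum_valid (d_p - z_p)^2  +  lam * sum_edges w_pq (d_p - d_q)^2
    with w_pq = exp(-(I_p - I_q)^2 / sigma^2); minimized by Gauss-Seidel sweeps.
    """
    H, W = image.shape
    d = np.where(valid, sparse_depth, sparse_depth[valid].mean()).astype(float)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                num, den = 0.0, 0.0
                if valid[y, x]:                      # data term pulls toward the LIDAR range
                    num += sparse_depth[y, x]
                    den += 1.0
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:  # smoothness term, weighted by intensity similarity
                        w = np.exp(-((image[y, x] - image[ny, nx]) ** 2) / sigma ** 2)
                        num += lam * w * d[ny, nx]
                        den += lam * w
                d[y, x] = num / den
    return d
```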

Tuesday, April 21, 2009

NTU talk: Community Discovery in Dynamic, Rich Media Social Networks

Title: Community Discovery in Dynamic, Rich Media Social Networks
Speaker: Yu-Ru Lin, PhD Candidate, Arizona State University.
Time: 3:50pm ~ 5:00pm, Wednesday, March 22, 2009.
Place: Room 111, CSIE building

Abstract: With the rapid proliferation of different types of social media, such as instant messaging (e.g., AIM, MSN, Skype), media sharing sites (e.g., Flickr, YouTube), blogs (e.g., Blogger, WordPress, LiveJournal), wikis (e.g., Wikipedia, PBWiki), microblogs (e.g., Twitter, Jaiku), and social networks (e.g., MySpace, Facebook), users routinely produce media (e.g., blogs), consume media (e.g., YouTube), and interact with each other through the wide array of functionality these services provide. Social media depend largely on implicit communities of online users to deliver value. Identifying and analyzing the dynamics of such latent communities can improve the functionality of social media and provide insight into the design of future online collaborative services. The problem is particularly important in the enterprise domain, where extracting emergent community structure from enterprise social media can help form new collaborative teams, support expertise discovery, and guide long-term enterprise reorganization.

In this talk, I will cover three aspects of community analysis in dynamic, rich media social networks: (1) Community evolution – How do we identify communities in large scale, dynamic social networks, and analyze their structures and evolutions? I will introduce a robust unified approach that discovers communities and captures their evolution with temporal smoothness given by historic community structure. (2) Community summarization – How do we summarize community activities, in order to trace community interests or retrieve community generated content? I will present a summarization framework that characterizes the time-evolving patterns of social activities with associated media objects in a community. (3) Multi-relational communities – How do we discover communities when the social networks exist in a highly connected web of contexts (e.g., social groups, geographic locations, time, etc.)? I will discuss a novel multi-relational non-negative tensor decomposition algorithm that aims to solve this problem. I will also show the effectiveness of these techniques in real world datasets collected from the blogosphere, an enterprise, Flickr, Digg, etc.
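As a rough illustration of the factorization idea behind community discovery (the matrix counterpart of the multi-relational tensor method mentioned above), here is a minimal non-negative matrix factorization sketch with Lee-Seung multiplicative updates; the function name and parameters are illustrative, and the talk's method additionally handles tensors, multiple relations, and temporal smoothness.

```python
import numpy as np

def nmf_communities(A, k, iters=200, eps=1e-9):
    """Minimal sketch: discover k soft communities from a nonnegative
    user-user interaction matrix A via NMF (A ~ W H) with Lee-Seung
    multiplicative updates."""
    n, m = A.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update community-to-user loadings
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update user-to-community memberships
    return W, H  # row i of W: membership strengths of user i in each community
```

A hard community assignment for user i, if desired, is simply the argmax of W[i].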

Short Biography: Yu-Ru Lin is currently a Ph.D. student in the School of Computing and Informatics at Arizona State University, with a concentration in Arts, Media and Engineering. Her advisor is Dr. Hari Sundaram. Her research interests include problems relating to dynamic multi-relational social network analysis – in particular, community dynamics, social information summarization and representation. Her research focuses on extracting human communities that collaborate around certain topics or media sharing activities. She has proposed non-negative matrix/tensor factorization techniques for analyzing community structures and evolutions in online social networks, as well as time-varying social relational data. Her work has been published in leading international conferences and journals. (Her publications can be found at http://www.public.asu.edu/~ylin56/pub.html.)
She has worked at NEC Labs America and IBM TJ Watson Research Center as a summer intern in 2006, 2007 and 2008. She has received awards including AME Student Excellence Award (2007 and 2008) and IBM PhD Fellowship Award (2009). She holds an M.S. and B.S. degree in Computer Science from National Chiao Tung University, Taiwan.

NTU talk: Mining Geotagged Photos for Semantic Understanding

Title: Mining Geotagged Photos for Semantic Understanding
Speaker: Dr. Jiebo Luo, IEEE Fellow, Senior Principal Scientist with the Kodak Research Laboratories.
Time: 2:30pm ~ 3:40pm, Wednesday, March 22, 2009.
Place: Room 111, CSIE building

Abstract:
Semantic understanding based only on vision cues has been a challenging problem. This problem is particularly acute when the application domain is unconstrained photos available on the Internet or in personal repositories. In recent years, it has been shown that metadata captured with pictures can provide valuable contextual cues complementary to the image content and can be used to improve classification performance. With the recent geotagging phenomenon, an important piece of metadata available with many geotagged pictures is GPS information. We will describe a number of novel ways to mine GPS information in a powerful contextual inference framework that boosts the accuracy of semantic understanding. With integrated GPS-capable cameras on the horizon and geotagging on the rise, this line of research will revolutionize event recognition and media annotation.

Short Biography: Jiebo Luo is a Senior Principal Scientist with the Kodak Research Laboratories in Rochester, NY. He received a B.S. degree and M.S. degree in Electrical Engineering from the University of Science and Technology of China (USTC) in 1989 and 1992, respectively, and a Ph.D. degree in Electrical Engineering from the University of Rochester in 1995. His research interests include signal and image processing, pattern recognition, computer vision, and related multidisciplinary areas such as multimedia data mining, biomedical informatics, computational photography, and human-computer interaction. Dr. Luo has authored over 130 technical papers and holds 50 granted US patents. Dr. Luo actively participates in numerous technical conferences, including serving as the chair of the 2008 ACM International Conference on Content-based Image and Video Retrieval (CIVR), an area chair of the 2008 IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), a program co-chair of the 2007 SPIE International Symposium on Visual Communication and Image Processing (VCIP), a member of the Organizing Committee of the 2008 ACM Multimedia Conference, 2006 & 2008 IEEE International Conference on Multimedia and Expo (ICME) and 2002 IEEE International Conference on Image Processing (ICIP), and the chair of the IEEE CVPR Workshop on Semantic Learning Application in Multimedia (SLAM) since its inception in 2006. He is the Editor-in-Chief for the Journal of Multimedia (Academy Publisher). Currently, he is also on the editorial boards of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), the IEEE Transactions on Multimedia (TMM), Pattern Recognition (PR), and Journal of Electronic Imaging (JEI). He is a guest editor for a number of influential special issues, including “Image Understanding for Digital Photos” (PR, 2005), “Real-World Image Annotation and Retrieval” (TPAMI, 2008), “Event Analysis in Video” (IEEE Transactions on Circuits and Systems for Video Technology, 2008), “Integration of Content and Context for Multimedia Management” (TMM, 2009), and “Probabilistic Graphic Models in Computer Vision” (TPAMI, 2009). Dr. Luo is an adjunct professor at Rochester Institute of Technology, as well as the co-advisor or thesis committee member of many PhD and MS graduate students in various US universities. He is a Kodak Distinguished Inventor, a Fellow of SPIE for achievements in electronic imaging and visual communication, and a Fellow of IEEE for contributions to semantic image understanding and intelligent image processing.

Saturday, April 18, 2009

Lab Meeting April 20, 2009 (Casey): Avoiding moving outliers in visual SLAM by tracking moving objects

Title: Avoiding moving outliers in visual SLAM by tracking moving objects
Authors: Somkiat Wangsiripitak and David W Murray
In: ICRA 2009

Abstract:
To work at video rate, the maps that monocular SLAM builds are bound to be sparse, making them sensitive to the erroneous inclusion of moving points and to the deletion of valid points through temporary occlusion. This paper describes the parallel implementation of monoSLAM with a 3D object tracker, allowing reasoning about moving objects and occlusion. The SLAM process provides the object tracker with information to register objects to the map’s frame, and the object tracker allows the marking of features, either those on objects, or those created by their occluding edges, or those occluded by objects. Experiments are presented to verify the recovered geometry and to indicate the impact on camera pose in monoSLAM of including and avoiding moving features.

Lab Meeting April 20, 2009 (Yuchun): An Affective Guide Robot in a Shopping Mall

Title: An Affective Guide Robot in a Shopping Mall
Authors: Takayuki Kanda, Masahiro Shiomi, Zenta Miyashita, Hiroshi Ishiguro and Norihiro Hagita
In: HRI 2009, pp. 173-180

Abstract:
To explore possible robot tasks in daily life, we developed a guide robot for a shopping mall and conducted a field trial with it. The robot was designed to interact naturally with customers and to affectively provide shopping information. It was also designed to repeatedly interact with people to build a rapport; since a shopping mall is a place people repeatedly visit, it provides the chance to explicitly design a robot for multiple interactions. For this capability, we used RFID tags for person identification. The robot was semi-autonomous, partially controlled by a human operator, to cope with the difficulty of speech recognition in a real environment and to handle unexpected situations.

A field trial was conducted at a shopping mall for 25 days to observe how the robot performed this task and how people interacted with it. The robot interacted with approximately 100 groups of customers each day. We invited customers to sign up for RFID tags and those who participated answered questionnaires. The results revealed that 63 out of 235 people in fact went shopping based on the information provided by the robot. The experimental results suggest promising potential for robots working in shopping malls.

[pdf]

Thursday, April 16, 2009

CMU talk: Looking without Seeing is in fact Seeing without Knowing -- Insights from Gaze-tracked Change Blindness Studies

Center for the Neural Basis of Cognition Seminar:

Looking without Seeing is in fact Seeing without Knowing
-- Insights from Gaze-tracked Change Blindness Studies

Stella X. Yu
Clare Boothe Luce Assistant Professor
Computer Science @ Boston College
http://www.cs.bc.edu/~syu

Tuesday, April 28

Abstract: Change blindness experiments demonstrate that human vision often neglects certain aspects of the visual scene while attending to others. Using a gaze-tracked flicker paradigm and synthetic images equally rendered in three fundamental features, we explore whether and how innate feature processing might be responsible for the blindness to a change. Our analysis of detection accuracy, detection time, and gaze patterns in this active visual search task reveals distinctive feature extraction, discrimination, and selectivity of size, color, and orientation that underlie different behaviours of change blindness. With an array of two-feature stimuli where a single element could change in either feature dimension, we discover that what is changing is sensed long before the subject consciously detects the change, and that the change detection task is not accomplished in a single thread of searching for a nonspecific change, but in three separate threads: sensing what the change is, localizing where it might be, and discerning how it is actualized.


Bio: Stella X. Yu got her Ph.D. from the School of Computer Science at Carnegie Mellon University, where she studied robotics at the Robotics Institute and vision science at the Center for the Neural Basis of Cognition. She then went to the University of California, Berkeley to continue her research on computer vision. Her research interests are visual perception and computer vision. Since she joined Boston College, Dr. Yu has been developing an interdisciplinary curriculum and research agenda around art and vision. Her 5-year NSF CAREER proposal, entitled Art and Vision: Scene Layout from Pictorial Cues, was awarded in 2007. Her recent works include image segmentation, object matching, spatial layout categorization and inference, change blindness, and brightness perception.

Tuesday, April 14, 2009

CMU talk: Multi-robot Coordination for Domains with Intra-path Constraints

CMU FRC Seminar

Multi-robot Coordination for Domains with Intra-path Constraints

E. Gil Jones
PhD Candidate
CMU Robotics Institute

Thursday, April 16th, 2009

Abstract
Many applications require teams of robots to cooperatively execute complex tasks. Among these domains are those where successful coordination solutions must respect constraints that occur on the intra-path level. This work focuses on multi-agent coordination for disaster response with intra-path constraints, a compelling application that is not well addressed by current coordination methods. In this domain a group of fire truck agents attempt to address a number of fires that are occurring throughout a city in the wake of a large-scale disaster. The disaster has not only caused fires but has also caused many city roads to be blocked by debris, making them impassable; bulldozer robots also operating in the domain can clear the debris. The coordination solution must determine not only a task allocation but also what routes the fire trucks should take given the intra-path precedence constraints and which bulldozers should be assigned to satisfy those constraints. This talk will focus on two main techniques for determining multi-robot coordination solutions for domains with intra-path constraints. The first technique uses tiered auctions, a novel market-based method. The second technique uses centralized genetic algorithms. The approaches are compared in terms of solution quality and computation time in a simulated disaster response domain.
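For intuition about the market-based side of this work, here is a minimal single-round greedy auction sketch for allocating fires to trucks. It deliberately ignores the intra-path constraints and bulldozer sub-bids that the talk's tiered auctions handle, and all names are illustrative.

```python
import itertools

def greedy_auction(trucks, fires, travel_cost):
    """Minimal single-round auction sketch: each truck bids its travel cost for
    each unassigned fire; the globally cheapest (truck, fire) bid wins; repeat.
    travel_cost(truck, fire) -> float is a hypothetical bid function."""
    assignment = {}
    free_trucks, open_fires = set(trucks), set(fires)
    while free_trucks and open_fires:
        truck, fire = min(itertools.product(free_trucks, open_fires),
                          key=lambda pair: travel_cost(*pair))
        assignment[truck] = fire
        free_trucks.remove(truck)
        open_fires.remove(fire)
    return assignment
```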

Bio
Gil is a fifth-year Ph.D. student at the Robotics Institute, and is co-advised by Bernardine Dias and Tony Stentz. He received his BA in Computer Science from Swarthmore College in 2001, and spent two years as a software engineer at Bluefin Robotics, a manufacturer of autonomous underwater vehicles, in Cambridge, Mass.

Saturday, April 11, 2009

CMU talk: Inferring Object Attributes

RI Seminar
Inferring Object Attributes

Derek Hoiem
Assistant Professor, University of Illinois at Urbana Champaign

April 10, 2009

Abstract: Ultimately, the goal of computer vision is to make useful inferences from imagery, and a big part of that is knowing something about the properties of nearby objects. In this talk, I'll describe our recent work on learning to identify object attributes, such as parts, materials, or shape, from images in a way that generalizes to new object categories. The tricky part is training classifiers that really predict the intended attribute, and not ones that are correlated through familiar object categories. Once we can predict attributes, we can say what is unusual about an object and more easily learn to recognize new objects. Sometimes we can even recognize new object categories from a purely verbal description (e.g., a goat has four legs, horns, and is furry).

This work is with Ali Farhadi, Ian Endres, and David Forsyth at UIUC.
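As a hedged illustration of the attribute idea (not the authors' actual pipeline), here is a minimal sketch of per-attribute classifiers plus verbal-description matching. It sidesteps the decorrelation issue raised in the abstract, and the scikit-learn classifier choice and all names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_attribute_classifiers(features, attribute_labels):
    """Train one binary classifier per attribute (e.g., 'has-legs', 'furry').

    features         : (N, D) image feature matrix
    attribute_labels : (N, A) binary matrix of attribute annotations
    """
    return [LogisticRegression(max_iter=1000).fit(features, attribute_labels[:, a])
            for a in range(attribute_labels.shape[1])]

def describe(clfs, x):
    """Predicted probability of each attribute for one image feature vector x."""
    return np.array([c.predict_proba(x.reshape(1, -1))[0, 1] for c in clfs])

def recognize_from_description(clfs, x, category_descriptions):
    """Zero-shot flavor: pick the category whose verbal attribute description
    (a binary vector, e.g., goat = [four legs:1, horns:1, furry:1, ...]) best
    matches the predicted attributes."""
    probs = describe(clfs, x)
    scores = {name: -np.sum((probs - desc) ** 2)
              for name, desc in category_descriptions.items()}
    return max(scores, key=scores.get)
```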

Speaker Bio.: Derek Hoiem is a new assistant professor at University of Illinois at Urbana Champaign. Derek researches object recognition, segmentation, 3d reconstruction from images, and other aspects of computer vision that are related to scene understanding. He recently (2007) graduated from the Robotics Institute under the tutelage of Alyosha Efros and Martial Hebert and looks forward to visiting. By request, Derek will share a little of his perspective in transitioning from being a grad student at CMU to a professor at UIUC.

CMU talk: Fundamental Limits of Imaging in Scattering Media

VASC Seminar
Monday, April 13, 2009

Fundamental Limits of Imaging in Scattering Media
Tali Treibitz
Technion

Abstract:
Scattering media exist in bad weather, liquids, biological tissue and even solids. Images taken into scattering media suffer from resolution loss as well as photometric and geometric problems. In addition, high noise levels impose resolution limits, even if there is no blur. These problems inhibit vision in such media. In this talk I will give an overview of our contributions on this subject:
- Resolution limits imposed by noise
- Limits in polarization-based dehazing
- Geometry limits: The non-single viewpoint nature of imaging systems looking into water through a flat glass.

Bio:
Tali Treibitz received her BA degree in computer science from the Technion-Israel Institute of Technology in 2001. She is currently a Ph.D. candidate in the department of Electrical Engineering, Technion. Her research involves physics-based computer vision. She is also an active PADI open water scuba instructor.

CMU talk: Discourse Structure from Topic Models in Text and Video

CMU ML Lunch
Speaker: Jacob Eisenstein (UIUC)
Venue: NSH 1507
Date: Monday, April 13, 2009

Title:
Discourse Structure from Topic Models in Text and Video

Abstract:
This talk describes how latent topic models can be used to discover discourse structure in unannotated text and video. Linguists have long believed that discourse topic shifts are marked by changes in the distribution of lexical items. This idea is called lexical cohesion, and can be formalized in a latent topic model, yielding substantial performance gains over previous heuristic approaches. More importantly, this Bayesian setting permits several interesting extensions:
(1) Explicit cue phrases for topic transitions are clearly relevant for segmentation, but cannot be handled by previous unsupervised methods. I'll show how such cue phrases can be discovered without annotation and incorporated to improve segmentation.
(2) Topic segmentation can be applied to multimedia data by searching for self-similarity in visual communication. I'll present a topic segmenter for conversational speech that integrates lexical and gestural cohesion.
(3) Hierarchical structure can be discovered by modeling lexical cohesion as a multi-scale phenomenon, in which some words are governed by low-level subtopics, and others by the high-level topics. Inference is performed jointly across scale-levels, improving on greedy top-down approaches.

Overall, these extensions outperform the state-of-the-art on several tasks, and point the way to more comprehensive analysis of discourse structure through hierarchical Bayesian models.
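As a much simpler illustration of the lexical-cohesion cue that the Bayesian models above formalize, here is a TextTiling-style sketch that places boundaries at minima of inter-block lexical similarity. Everything about it (window size, number of segments, bag-of-words cosine similarity) is an assumption for illustration, not the speaker's model.

```python
import numpy as np
from collections import Counter

def cohesion_boundaries(sentences, window=3, num_segments=4):
    """Hypothesize topic boundaries where the word distributions of adjacent
    windows of sentences are least similar (weak lexical cohesion)."""
    def bow(sents):
        return Counter(w.lower() for s in sents for w in s.split())

    def cosine(a, b):
        keys = set(a) | set(b)
        va = np.array([a.get(k, 0) for k in keys], float)
        vb = np.array([b.get(k, 0) for k in keys], float)
        denom = np.linalg.norm(va) * np.linalg.norm(vb)
        return (va @ vb) / denom if denom else 0.0

    gaps = []
    for i in range(window, len(sentences) - window):
        left = bow(sentences[i - window:i])
        right = bow(sentences[i:i + window])
        gaps.append((cosine(left, right), i))          # low similarity = likely topic shift
    return sorted(i for _, i in sorted(gaps)[:num_segments - 1])
```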

BIO: Jacob Eisenstein is a Beckman Postdoctoral Fellow at the University of Illinois. He completed his doctorate at MIT in 2008 under the supervision of Regina Barzilay and Randall Davis. His thesis, titled "Gesture in Automatic Discourse Processing," won the 2008 George M. Sprowls award for the best doctoral theses in Computer Science at MIT. Working in the domain of computational linguistics, Jacob's research focuses on applying state-of-the-art structured learning techniques to discourse processing and visual communication.

Friday, April 10, 2009

Lab Meeting April 13, 2009 (Jimmy): Mutual Localization in a Team of Autonomous Robots using Acoustic Robot Detection

Title: Mutual Localization in a Team of Autonomous Robots using Acoustic Robot Detection
Authors: David Becker and Max Risler
In: RoboCup International Symposium 2008

Abstract
In order to improve self-localization accuracy we are exploring ways of mutual localization in a team of autonomous robots. Detecting teammates visually usually leads to inaccurate bearings and only rough distance estimates. Also, visually identifying teammates is not possible. Therefore we are investigating methods of gaining relative position information acoustically in a team of robots.
The technique introduced in this paper is a variant of code-multiplexed communication (CDMA, code division multiple access). In a CDMA system, several receivers and senders can communicate at the same time, using the same carrier frequency. Well-known examples of CDMA systems include wireless computer networks and the Global Positioning System (GPS). While these systems use electro-magnetic waves, we adapt the CDMA principle to acoustic pattern recognition, enabling robots to calculate distances and bearings to each other.
First, we explain the general idea of cross-correlation functions and appropriate signal pattern generation. We will further explain the importance of synchronized clocks and discuss the problems arising from clock drifts.
Finally, we describe an implementation using the Aibo ERS-7 as platform and briefly state basic results, including measurement accuracy and a runtime estimate. We will briefly discuss acoustic localization in the specific scenario of a RoboCup soccer game.
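A minimal sketch of the ranging step described above, assuming synchronized clocks, a known emission time, and a known per-robot code; the variable names and sampling-rate handling are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def estimate_distance(recording, code, fs, emit_sample):
    """Cross-correlate the recorded audio with one robot's known pseudo-random
    code and convert the peak lag (relative to the synchronized emission time)
    into a distance.

    recording   : 1-D array of microphone samples
    code        : 1-D array of the emitting robot's chip sequence (e.g., +/-1)
    fs          : sampling rate in Hz
    emit_sample : sample index at which the emitter started its code
    """
    corr = np.correlate(recording, code, mode="valid")
    arrival_sample = int(np.argmax(np.abs(corr)))     # start of the best match
    delay = (arrival_sample - emit_sample) / fs       # seconds of flight
    return delay * SPEED_OF_SOUND                     # metres

# Codes with low mutual cross-correlation (e.g., Gold codes or random chip
# sequences) are what let several robots emit simultaneously, as in CDMA.
```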

[pdf]

Lab Meeting April 13, 2009 (Gary): Subtle facial expression recognition using motion magnification

Title: Subtle facial expression recognition using motion magnification
Authors: Sungsoo Park, Daijin Kim

Pattern Recognition Letters
Volume 30, Issue 7, 1 May 2009, Pages 708-716

Abstract
This paper proposes a novel method for subtle facial expression recognition that uses motion magnification to transform subtle expressions into corresponding exaggerated ones. Motion magnification consists of four steps: First, active appearance model (AAM) fitting extracts 70 facial feature points in the face image sequence. Second, the face image sequence is aligned using three feature points (two eyes and nose tip). Third, the motion vectors of 27 feature points are estimated using the feature point tracking method. Finally, exaggerated facial expressions are obtained by magnifying the motion vectors of the 27 feature points. After motion magnification, the exaggerated facial expressions are recognized as follows: first, the shape and appearance features are obtained by projecting the exaggerated facial expression image to the AAM shape and appearance model. Second, support vector machines (SVM) are used to classify shape and appearance features. Experimental results show that the proposed method achieves a subtle facial expression recognition rate of 88.125% on the 80 facial expression images in the SFED2007 database.
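A minimal sketch of the magnification step (the fourth step above), assuming the 27 aligned feature points are available as arrays and that a single global magnification factor is used; the factor and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def magnify_expression(neutral_points, tracked_points, alpha=4.0):
    """Scale each feature point's motion vector by alpha so that a subtle
    expression becomes an exaggerated one.

    neutral_points, tracked_points : (27, 2) arrays of (x, y) coordinates,
    taken after AAM fitting and face alignment.
    """
    motion = tracked_points - neutral_points      # per-point motion vectors
    return neutral_points + alpha * motion        # exaggerated point locations
```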


link

Saturday, April 04, 2009

NTU talk: Skill Learning for Humanoid Robots

NTU CSIE Seminar
Time: 2:20 PM, May 8, 2009 
Place: NTU CSIE

Skill Learning for Humanoid Robots

Hsien-I Lin
School of Electrical and Computer Engineering
Purdue University
West Lafayette, IN 47907-2035
sofin@purdue.edu

Abstract:
Recent advances in human-centered robots such as humanoid robots are driven by the projection that these robots will have a place in our society and in our daily life activities as assistive robots. Endowing these humanoid robots with the ability of skill learning will enable them to be versatile and skillful in performing various tasks. The problem of transferring human skills to humanoid robots raises tremendous research interest in studying human and robot motor skills. Our current research aims at developing a quantitative measure of the motor capability of a humanoid robot motor system for the application of transferring human skills to a humanoid robot.

We propose to employ an information-theory-based method to quantitatively represent the robot motor capability by a pseudo index of motor performance. This pseudo index of motor performance is derived from kinematics, dynamics, and control with the speed-accuracy constraint taken into consideration. With the speed-accuracy constraint, we are able to optimize the motor performance of a robot to accomplish a task by satisfying the task spatial and temporal constraints. Computer simulations and experimental work were performed on a 6 DOF PUMA robot to validate the performance of the proposed approach in measuring the motor capability of a robot motor system.
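The abstract does not spell out the pseudo index, so, as a loose information-theoretic analogy only, here is the classic Fitts'-law index of performance, which likewise trades speed against accuracy in bits per second; this is not the speaker's derivation from kinematics, dynamics, and control.

```python
import math

def fitts_index_of_performance(distance, width, movement_time):
    """Fitts'-law-style speed-accuracy index:
    index of difficulty ID = log2(2D / W) bits,
    index of performance IP = ID / MT (bits per second),
    for a movement of amplitude D to a target of width W completed in time MT.
    """
    index_of_difficulty = math.log2(2.0 * distance / width)
    return index_of_difficulty / movement_time
```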

Bio: Hsien-I Lin received the B.S. and M.S. degrees in Electrical and Control Engineering from National Chiao Tung University in 1997 and 1999, respectively, and he is currently a Ph.D. candidate in Electrical and Computer Engineering at Purdue University, West Lafayette, Indiana. Before beginning his academic career, he worked for VIA Technologies, Inc., Taipei, Taiwan during 2001-2003, after which he was a research assistant in the Department of Bio-Industrial Mechatronics Engineering at National Taiwan University during 2003-2004. Since then, he has been a research assistant in the Department of Electrical and Computer Engineering at Purdue University. His research interests are in the areas of human-robot interaction with emphasis on robot skill learning, intelligent systems, and neuro-fuzzy networks.

CMU talk: Fourier Theoretic Probabilistic Inference over Permutations

Speaker: Jonathan Huang (RI @ CMU)
Venue: NSH 1507
Date: Monday, April 6, 2009
Time: 12:00 noon

Title:
Fourier Theoretic Probabilistic Inference over Permutations

Abstract:
Permutations are ubiquitous in many real-world problems, such as voting, ranking, and data association. Representing uncertainty over permutations, however, is challenging, since there are $n!$ possibilities, and common factorized probability distribution representations, such as graphical models, are inefficient due to the mutual exclusivity constraints that are typically associated with permutations.

I will talk about a recent approach for probabilistic reasoning with permutations based on the idea of approximating distributions using their low-frequency Fourier components. Maintaining the appropriate set of low-frequency Fourier terms corresponds to maintaining matrices of simple marginal probabilities which summarize the underlying distribution. Using these intuitions, I will show how to derive the Fourier coefficients of a variety of probabilistic models which arise in practice and that many useful models are either well-approximated or exactly represented by low-frequency (and in many cases, sparse) Fourier coefficient matrices.
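As a concrete (if brute-force) illustration of what the lowest-frequency Fourier terms summarize, here is a sketch that computes the first-order marginal matrix of a small distribution over permutations. Enumerating permutations like this is exactly what the Fourier approach avoids at scale, and the function name and probability representation are illustrative.

```python
import itertools
import numpy as np

def first_order_marginals(prob):
    """For a distribution over permutations of n items, compute the n x n matrix
    M[i, j] = P(item j is mapped to position i), the simplest of the marginal
    summaries maintained by the low-frequency Fourier representation.

    prob : dict mapping a permutation tuple sigma (sigma[j] = position of item j)
           to its probability.
    """
    n = len(next(iter(prob)))
    M = np.zeros((n, n))
    for sigma, p in prob.items():
        for j, i in enumerate(sigma):
            M[i, j] += p
    return M  # rows and columns each sum to 1: a doubly stochastic matrix

# Example: the uniform distribution over permutations of 3 items gives M = 1/3 everywhere.
uniform = {s: 1.0 / 6 for s in itertools.permutations(range(3))}
```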

In addition to showing that Fourier representations are both compact and intuitive, I will show how to cast common probabilistic inference operations in the Fourier domain, including marginalization, conditioning on evidence, and factoring based on probabilistic independence.

From the theoretical side, our work has tackled several problems in understanding the consequences of the bandlimiting approximation. On this front, I will present illuminating results about the nature of error propagation in the Fourier domain and propose methods for mitigating their effects.

Finally I will demonstrate the approach on several real datasets and show that our methods, in addition to being well-founded theoretically, are also scalable and provide superior results in practice.

joint work with Carlos Guestrin, Leonidas Guibas and Xiaoye Jiang

Friday, April 03, 2009

News: A Big-Screen Display as Mobile as Your Phone

Spectrum Video: A Big-Screen Display as Mobile as Your Phone
Projector phones give us the first glimpse at a future where device size and display size are independent. The companies behind liquid crystal on silicon, digital light processing, and laser projection technologies each think they have a superior technology, but watch the video and decide for yourself.
View Now!

I just cannot forget our previous work on camera-projector systems. -Bob

Thursday, April 02, 2009

CMU talk: Next Generation Map Making

Next Generation Map Making
Xin Chen
NAVTEQ

VASC Seminar
Monday, April 6, 2009

Abstract:
NAVTEQ is a leading global provider of digital map data. NAVTEQ maps drive most in-vehicle navigation systems, the top routing web sites, and the leading brands of wireless navigation devices. NAVTEQ continues to enhance the technologies used for collecting, analyzing, and delivering new content to a wide range of users and devices. Dr. Chen will discuss NAVTEQ's perspective on the hardware and software systems required to automatically create and maintain a navigable map through the use of high-end mobile data collection sensors and computer vision techniques. Dr. Chen will also present numerous research efforts based on video and LIDAR data collection as well as various challenging problems related to automatic feature extraction for mapping and navigation.

Bio:
Dr. Xin Chen currently works as a senior researcher in the Research and Emerging Technologies Department of NAVTEQ Corporation. His recent research efforts have concentrated on computer vision, pattern recognition and image processing of video, aerial photo and LIDAR. He received a Ph.D. in Computer Science and Engineering from the University of Notre Dame. His research at Notre Dame focused on biometrics, including infrared, 2D, and 3D face recognition.