Thursday, July 28, 2005

The RoboCup 2005 Symposium

The proceedings are available here. The files may be removed soon.

-Bob

Tuesday, July 26, 2005

PhD Oral: path planning

Extending the Path-planning Horizon

Bart Nabbe
Robotics Institute
Carnegie Mellon University

Abstract: The mobility sensors (LADAR, stereo, etc.) on a typical mobile robot vehicle can only acquire data up to a distance of a few tens of meters. Therefore, a navigation system has no knowledge about the world beyond this sensing horizon. As a result, path planners that rely only on this knowledge to compute paths are unable to anticipate obstacles sufficiently early and have no choice but to resort to an inefficient behavior of local obstacle contour tracing. To alleviate this problem, we present an opportunistic navigation and view planning strategy that incorporates look-ahead sensing of possible obstacle configurations. This planning strategy is based on a what-if analysis of hypothetical future configurations of the environment. Candidate vantage positions are evaluated based on their ability to observe anticipated obstacles. The vantage positions identified by this forward-simulation framework are then used by the planner as intermediate waypoints. The validity of the strategy is supported by results from simulations as well as field experiments with a real robotic platform. These results also show that a significant reduction in path length can be achieved opportunistically by using this framework.
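To make the what-if analysis concrete, here is a minimal Python sketch of the vantage-selection loop. It assumes a hypothesis sampler and a planner cost function are supplied elsewhere; both callbacks are stand-ins for the paper's machinery, not its actual implementation.

def choose_vantage(candidates, sample_hypothesis, plan_cost, n_samples=20):
    """Score candidate vantage points by what-if forward simulation.

    candidates:        intermediate waypoints to consider
    sample_hypothesis: () -> one hypothetical obstacle configuration
                       for the terrain beyond the sensing horizon
    plan_cost(v, h):   cost of reaching the goal via vantage point v
                       if hypothesis h turned out to be true
    """
    hyps = [sample_hypothesis() for _ in range(n_samples)]
    best, best_cost = None, float("inf")
    for v in candidates:
        # expected path cost of detouring via v, averaged over hypotheses
        expected = sum(plan_cost(v, h) for h in hyps) / len(hyps)
        if expected < best_cost:
            best, best_cost = v, expected
    return best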

Talk at CMU: Spatiotemporal Modeling of Facial Expressions

Maja Pantic
Delft University of Technology

Abstract:

Machine understanding of facial expressions could revolutionize human-machine interaction technologies and fields as diverse as security, behavioral science, medicine, and education. Consequently, computer-based recognition of facial expressions has become an active research area.

Most systems for automatic analysis of facial expressions attempt to recognize a small set of "universal" emotions such as happiness and anger. Recent psychological studies claim, however, that facial expression interpretation in terms of emotions is culture dependent and may even be person dependent. To allow for rich and sometimes subtle shadings of emotion that humans recognize in a facial expression, context-dependent (e.g., user- and task-dependent) recognition of emotions from images of faces is needed.

We propose a case-based reasoning system capable of classifying facial expressions (given in terms of facial muscle actions) into the emotion categories learned from the user. The utilized case base is a dynamic, incrementally self-organizing event-content-addressable memory that allows fact retrieval and evaluation of encountered events based upon the user preferences and the generalizations formed from prior input.
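Stripped of the dynamic, self-organizing memory, the core retrieval step can be sketched as a nearest-neighbor vote over facial-muscle-action (AU) activation vectors, with the user's corrections growing the case base. This is a toy Python sketch under my own simplifying assumptions, not the system's actual memory organization.

import numpy as np

class CaseBase:
    """Toy case-based classifier: AU activation vector -> emotion label.

    Cases the user corrects are simply appended, so the emotion
    categories adapt to the individual user rather than being fixed
    'universal' ones.
    """
    def __init__(self):
        self.cases, self.labels = [], []

    def classify(self, aus, k=3):
        if not self.cases:
            return None                 # no experience yet; ask the user
        d = np.linalg.norm(np.array(self.cases) - np.asarray(aus), axis=1)
        nearest = np.argsort(d)[:k]
        votes = [self.labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

    def add_case(self, aus, label_from_user):
        self.cases.append(np.asarray(aus, dtype=float))
        self.labels.append(label_from_user)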

Three systems for automatic recognition of facial muscle actions (i.e., Action Units, AUs) in face video will be presented as well. One of these uses temporal templates as the data representation and a combined k-Nearest-Neighbor and rule-based classifier as the recognition engine. Temporal templates are 2D representations of motion history, that is, they picture where and when motion in the input image sequence has occurred. The other two systems exploit particle filtering to track facial characteristic points in an input face video. One of those systems performs facial-behavior temporal-dynamics recognition in face-profile image sequences using temporal rules. The other employs Support Vector Machines to encode 20 AUs occurring alone or in combination in an input nearly-frontal view face video.
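For readers unfamiliar with temporal templates, here is a minimal NumPy sketch of the motion-history idea (a 2D image recording where and when motion occurred). The threshold and decay constants are illustrative assumptions, not the values used in the papers.

import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, diff_thresh=25):
    """One motion-history-image update step.

    Pixels where the frame difference exceeds diff_thresh are stamped
    with tau; everywhere else the history decays by 1 toward 0, so the
    template encodes both where and how recently motion occurred.
    """
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    mhi = np.maximum(mhi - 1, 0)   # fade old motion
    mhi[motion] = tau              # stamp new motion
    return mhi

# usage over a list of equally sized grayscale frames:
# mhi = np.zeros(frames[0].shape, dtype=int)
# for prev, cur in zip(frames, frames[1:]):
#     mhi = update_mhi(mhi, prev, cur)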

The systems have been trained and tested using two different databases: the Cohn-Kanade facial expression database and our own web-based MMI facial expression database. The recognition results achieved by the proposed systems demonstrated rather high concurrent validity with human coding.

Bio: Maja (Maya) Pantic received the MS and PhD degrees in computer science from Delft University of Technology in 1997 and 2001. She is currently an associate professor at the Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands, where she does research in machine analysis of human interactive cues for achieving natural, multimodal human-machine interaction. She is the (co-)principal investigator of three large, ongoing national projects in the area of multimodal, affective human-machine interaction. She has organized and co-organized various meetings and symposia on Automatic Facial Expression Analysis and Synthesis, and she is an Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, responsible for computer vision and its applications to human-computer interaction. In 2002, for her research on Facial Information for Advanced Interface, she received the Innovational Research Award of the Dutch Scientific Organisation as one of the 7 best young scientists in the exact sciences in the Netherlands. She is currently a visiting professor at the Robotics Institute, Carnegie Mellon University. She has published more than 40 technical papers in the areas of machine analysis of facial expressions and emotions, artificial intelligence, and human-computer interaction, and has served on the program committees of several conferences in these areas. For more information, please see http://mmi.tudelft.nl/~maja/

Monday, July 25, 2005

Tutorials on Graphical Models

David Heckerman
Tutorial and applications overview for data miners – slides from KDD 2004

PhD Thesis: Microscopic Pedestrian Flow Characteristics

Kardi Teknomo,
Microscopic Pedestrian Flow Characteristics: Development of an Image Processing Data Collection and Simulation Model,
Ph.D. Dissertation, Tohoku University, Sendai, Japan, 2002.

Friday, July 22, 2005

EE/CS Course List

ftp://anonymous@ftp.ntu.edu.tw/NTU/course/COURSE09.XLS

Paper: SIFT feature

David G. Lowe
Distinctive image features from scale-invariant keypoints
International Journal of Computer Vision, 60, 2 (2004), pp. 91-110.
Demo Software
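If you want to try SIFT matching yourself, here is a hedged OpenCV sketch (it assumes an opencv-python build where SIFT_create is available; the filenames are placeholders). The 0.8 distance-ratio test is the one recommended in the paper.

import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if its best distance is clearly
# better than the second best, which prunes ambiguous keypoints.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.8 * n.distance]
print(len(good), "matches passed the ratio test")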

Paper: Inference and Learning

Brendan J. Frey and Nebojsa Jojic
Advances in Algorithms for Inference and Learning in Complex Probability Models
2003?
Abstract: Computer vision is currently one of the most exciting areas of artificial intelligence research, largely because it has recently become possible to record, store and process large amounts of visual data. Impressive results have been obtained by applying discriminative techniques in an ad hoc fashion to large amounts of data, e.g., using support vector machines for detecting face patterns in images. However, it is even more exciting that researchers may be on the verge of introducing computer vision systems that perform realistic scene analysis, decomposing a video into its constituent objects, lighting conditions, motion patterns, and so on. In our view, two of the main challenges in computer vision are finding efficient models of the physics of visual scenes and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based generative probability models and their associated inference and learning algorithms for computer vision and scene analysis. We review exact techniques and various approximate, computationally efficient techniques, including iterative conditional modes, the expectation maximization algorithm, the mean field method, variational techniques, structured variational techniques, Gibbs sampling, the sum-product algorithm and “loopy” belief propagation. We describe how each technique can be applied to an illustrative example of inference and learning in models of multiple, occluding objects, and compare the performances of the techniques.
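As a concrete instance of one technique from this list, here is a small NumPy sketch of sum-product ("loopy") belief propagation on a pairwise model. The shared pairwise potential and the toy chain in the usage note are my own simplifying assumptions.

import numpy as np

def loopy_bp(unary, pairwise, edges, iters=50):
    """Sum-product belief propagation on a pairwise model.

    unary:    {node: (K,) array} nonnegative node potentials
    pairwise: (K, K) array shared by all edges (a toy choice)
    edges:    list of undirected (i, j) pairs
    Returns approximate marginal beliefs per node; exact on trees.
    """
    K = len(next(iter(unary.values())))
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    msgs = {e: np.ones(K) for e in directed}
    for _ in range(iters):
        new = {}
        for (i, j) in directed:
            # product of i's potential and all messages into i except from j
            prod = unary[i].copy()
            for (k, tgt) in directed:
                if tgt == i and k != j:
                    prod *= msgs[(k, i)]
            m = pairwise.T @ prod        # marginalize over x_i
            new[(i, j)] = m / m.sum()    # normalize for numerical stability
        msgs = new
    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for (k, tgt) in directed:
            if tgt == i:
                b *= msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

# usage on a 3-node chain with binary states (BP is exact here):
# unary = {0: np.array([.7, .3]), 1: np.array([.5, .5]), 2: np.array([.2, .8])}
# print(loopy_bp(unary, np.array([[1., .5], [.5, 1.]]), [(0, 1), (1, 2)]))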

Talk: Learning to See People

Michael J. Black
Learning to See People, Invited talk, Twenty-First International Conference on Machine Learning (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004.

Talk: Michael J. Tarr

I have the slides of Michael J. Tarr's talk "Human Object Recognition: Do we know more than we did 20 years ago?". They are interesting, but I cannot find the link at the moment. -Bob

Book list

This list is only for myself. :)
-Bob

J. Whittaker. Graphical Models in Applied Multivariate Statistics. John Wiley & Sons, 1990.
Steffen L. Lauritzen. Graphical Models. Clarendon Press, Oxford, 1996.
L. W. Beineke and R. J. Wilson. Graph Connections: Relationships Between Graph Theory and Other Areas of Mathematics. Clarendon Press, Oxford, 1997.
G. Hinton and T. J. Sejnowski. Unsupervised Learning: Foundations of Neural Computation. The MIT Press, 1999.
M. I. Jordan and T. J. Sejnowski. Graphical Models: Foundations of Neural Computation. The MIT Press, 2001.
S. N. Lahiri. Resampling Methods for Dependent Data. Springer 2003.

Paper: Plan Recognition

Hung H. Bui
Efficient Approximate Inference for Online Probabilistic Plan Recognition
Nov. 2002
A General Model for Online Probabilistic Plan Recognition, IJCAI 2003

Georgia Institute of Technology
Expectation Grammars: Leveraging High-Level Expectations for Activity Recognition, CVPR 2003
Asymmetrically Boosted HMM for Speech Reading, CVPR 2004
Propagation Networks for Recognition of Partially Ordered Sequential Action, CVPR 2004

Jianbo Shi
Detecting Unusual Activity in Video, CVPR 2004

Conditional Random Fields

Notes on Conditional Random Fields from Hanna M. Wallach.
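To make the linear-chain case concrete, here is a NumPy sketch of the forward recursion a CRF needs for its partition function (in log space). The emission and transition score matrices are assumed to come from the model's feature weights.

import numpy as np
from scipy.special import logsumexp

def crf_log_partition(emit, trans):
    """log Z for a linear-chain CRF.

    emit:  (T, K) per-position label scores (log potentials)
    trans: (K, K) transition scores, trans[a, b] = score of label a -> b
    """
    alpha = emit[0]
    for t in range(1, len(emit)):
        # alpha[b] = logsumexp_a(alpha[a] + trans[a, b]) + emit[t, b]
        alpha = logsumexp(alpha[:, None] + trans, axis=0) + emit[t]
    return logsumexp(alpha)

# The conditional log-likelihood of a label sequence y is then
# score(x, y) - crf_log_partition(emit, trans).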

Paper: Markov Random Fields

Hinton, G. E., Osindero, S. and Bao, K.
Learning Causally Linked Markov Random Fields.
In: Artificial Intelligence and Statistics, 2005, Barbados

Abstract: We describe a learning procedure for a generative model that contains a hidden Markov Random Field (MRF) which has directed connections to the observable variables. The learning procedure uses a variational approximation for the posterior distribution over the hidden variables. Despite the intractable partition function of the MRF, the weights on the directed connections and the variational approximation itself can be learned by maximizing a lower bound on the log probability of the observed data. The parameters of the MRF are learned by using the mean field version of contrastive divergence [1]. We show that this hybrid model simultaneously learns parts of objects and their inter-relationships from intensity images. We discuss the extension to multiple MRFs linked into a chain graph by directed connections.
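Contrastive divergence itself is easiest to see in the simpler restricted Boltzmann machine setting. Below is a toy CD-1 sketch for binary units; note this is the plain sampled version, not the mean-field variant the paper uses, and biases are omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.01):
    """One CD-1 update for a binary RBM.

    v0: (n_vis,) observed binary visible vector
    W:  (n_vis, n_hid) weight matrix, updated in place
    """
    h0 = sigmoid(v0 @ W)                        # P(h = 1 | v0)
    h_sample = (rng.random(h0.shape) < h0) * 1.0
    v1 = sigmoid(W @ h_sample)                  # one-step reconstruction
    h1 = sigmoid(v1 @ W)
    # positive-phase statistics minus negative-phase statistics
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    return W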

Paper: Unsupervised Mining

Lexing Xie, Shih-Fu Chang, Ajay Divakaran, Huifang Sun.
Unsupervised Mining of Statistical Temporal Structures in Video.
In Video Mining, A. Rosenfeld, D. Doermann, D. DeMenthon (eds.), Chap. 10, Kluwer Academic Publishers, 2003
The DVMM Lab at Columbia University

Thursday, July 21, 2005

Papers: Activities and Interactions

NIPS 2004: Workshop on Activity Recognition and Discovery

Nuria Oliver, Barbara Rosario and Alex Pentland.
A Bayesian Computer Vision System for Modeling Human Interactions
IEEE Transactions on Pattern Analysis and Machine Intelligence, August 2000.

Yuri A. Ivanov and Aaron F. Bobick
Recognition of Visual Activities and Interactions by Stochastic Parsing
IEEE Transactions on Pattern Analysis and Machine Intelligence, August 2000.

Stephen S. Intille and Aaron F. Bobick
Recognizing Planned, Multiperson Action
Computer Vision and Image Understanding 81, 414-445, 2001
A Framework for Recognizing Multi-Agent Action from Visual Evidence
AAAI 1999

Notes: Information Geometry

Notes on Information Geometry from Cosma Rohilla Shalizi.

Paper: Social Interactions

Charles F. Manski

Economic Analysis of Social Interactions
March 2000, forthcoming in the Journal of Economic Perspectives

Abstract: Economists have long been ambivalent about whether the discipline should focus on the analysis of markets or should be concerned with social interactions more generally. Recently the discipline has sought to broaden its scope while maintaining the rigor of modern economic analysis. Major theoretical developments in game theory, the economics of the family, and endogenous growth theory have taken place. Economists have also performed new empirical research on social interactions, but the empirical literature does not show progress comparable to that achieved in economic theory. This paper examines why and discusses how economists might make sustained contributions to the empirical analysis of social interactions.

Nobel Lecture: The Economic Way of Looking at Behavior

Gary S. Becker
Winner of the 1992 Nobel Prize in Economics

Abstract: An important step in extending the traditional theory of individual rational choice to analyze social issues beyond those usually considered by economists is to incorporate into the theory a much richer class of attitudes, preferences, and calculations. While this approach to behavior builds on an expanded theory of individual choice, it is not mainly concerned with individuals. It uses theory at the micro level as a powerful tool to derive implications at the group or macro level. The lecture describes the approach and illustrates it with examples drawn from the author's past and current work.

Download link

Paper: Unsupervised Learning

Kilian Weinberger & Lawrence Saul

Unsupervised Learning of Image Manifolds by Semidefinite Programming
CVPR 2004

Abstract: Can we detect low dimensional structure in high dimensional data sets of images and video? The problem of dimensionality reduction arises often in computer vision and pattern recognition. In this paper, we propose a new solution to this problem based on semidefinite programming. Our algorithm can be used to analyze high dimensional data that lies on or near a low dimensional manifold. It overcomes certain limitations of previous work in manifold learning, such as Isomap and locally linear embedding. It also bridges two recent developments in machine learning: semidefinite programming for learning kernel matrices and spectral methods for nonlinear dimensionality reduction. We illustrate the algorithm on easily visualized examples of curves and surfaces, as well as on actual images of faces, handwritten digits, and solid objects.
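The semidefinite program in question (maximum variance unfolding) is small enough to sketch. This assumes the cvxpy package and takes the k-nearest-neighbor graph as given; it is a sketch of the formulation, not the authors' code.

import cvxpy as cp
import numpy as np

def mvu_embed(X, neighbors, dim=2):
    """Maximum variance unfolding, in its standard SDP form.

    X:         (n, d) data matrix
    neighbors: iterable of (i, j) index pairs whose distances are preserved
    Learns a centered Gram matrix K that keeps neighbors rigid while
    maximizing total variance, then reads off a spectral embedding.
    """
    n = len(X)
    K = cp.Variable((n, n), PSD=True)
    cons = [cp.sum(K) == 0]                        # center the embedding
    for i, j in neighbors:
        d2 = float(np.sum((X[i] - X[j]) ** 2))
        cons.append(K[i, i] - 2 * K[i, j] + K[j, j] == d2)
    cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()
    w, V = np.linalg.eigh(K.value)                 # top eigenvectors
    return V[:, -dim:] * np.sqrt(np.maximum(w[-dim:], 0))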

Tuesday, July 19, 2005

Paper: Dynamic Maps

Peter Biber, Tom Duckett

Dynamic Maps for Long-Term Operation of Mobile Service Robots
Robotics: Science and Systems, June 2005

Abstract: This paper introduces a dynamic map for mobile robots that adapts continuously over time. It resolves the stability-plasticity dilemma (the trade-off between adaptation to new patterns and preservation of old patterns) by representing the environment over multiple timescales simultaneously (5 in our experiments). A sample-based representation is proposed, where older memories fade at different rates depending on the timescale. Robust statistics are used to interpret the samples. It is shown that this approach can track both stationary and non-stationary elements of the environment, covering the full spectrum of variations from moving objects to structural changes. The method was evaluated in a five week experiment in a real dynamic environment. Experimental results show that the resulting map is stable, improves its quality over time and adapts to changes.
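The multiple-timescale idea can be sketched per map cell with recursive averages that forget at different rates; the five rates below are illustrative assumptions, not the values from the experiments.

import numpy as np

class MultiTimescaleCell:
    """Occupancy estimates for one map cell over several timescales.

    Each timescale is an exponentially forgetting average: small rates
    change slowly and preserve long-term structure, large rates track
    recent observations such as moving objects.
    """
    def __init__(self, rates=(0.001, 0.01, 0.05, 0.2, 0.5)):
        self.rates = np.array(rates)
        self.estimates = np.full(len(rates), 0.5)   # "unknown" prior

    def observe(self, occupied):
        z = 1.0 if occupied else 0.0
        self.estimates += self.rates * (z - self.estimates)

    def stable(self, tol=0.2):
        # agreement across timescales suggests a stationary element
        return self.estimates.max() - self.estimates.min() < tol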

Saturday, July 16, 2005

RoboCup 2005

RoboCup Rescue League: ARC CAS team
Photo by Jonathan Paxman

Friday, July 15, 2005

News: Intel experiments with Wi-Fi as GPS substitute

By Michael Kanellos, CNET News.com
Published on ZDNet News: July 12, 2005, 4:30 PM PT

SAN JOSE, Calif.--The satellites that comprise the global positioning system can pinpoint a person's location to within a few meters. Intel is experimenting with ordinary wireless networks to see if the same job can be done on land.

Link.

You should be able to find some related papers in previous posts. -Bob

News: I, Roommate: The Robot Housekeeper Arrives

The New York Times:

By MARK ALLEN
Published: July 14, 2005

WHEN my home robot arrived last month, its smiling inventors removed it from its box and laid it on its back on my living room floor. They leaned over and spoke to it, as one might to a sleeping child.

It straightened, let out a little beep, lighted up, looked left and right, and then, amazingly, stood and faced me.

I said, "Nuvo, how are you?"

It tilted to the left, and raised one arm to greet me. It shook my hand and winked with one of the lights in its little head. My life hasn't really been the same since.

More? Click Here

Thursday, July 14, 2005

Sony QRIO


Demo preparation
IROS 2004, Sendai, Japan

American Open 2003 @ CMU

frame 1: attack

frame 2: defence

Summer 2002 at NASA Ames

NASA K-9 platform

PhD Thesis: Activity Map

Learning an Activity-Based Semantic Scene Model

Dimitrios Makris (2004)
City University, London
School of Engineering and Mathematical Sciences
Information Engineering Centre

Abstract
This thesis investigates how scene activity, which is observed by fixed surveillance cameras, can be modelled and learnt. Modelling of activity is performed through a spatio-probabilistic scene model that contains semantics like entry/exit zones, paths, junctions, routes and stop zones. The spatial nature of the model allows physical and semantic representation of the scene features, which can be useful in applications like video annotation and contextual databases. The probabilistic nature of the model encodes the variance and the related uncertainty of the usage of the scene features, which is useful for activity analysis applications, such as motion prediction and atypical motion detection.
A variety of models and learning methods are used to represent and automatically derive particular activity-based semantic scene elements. Expectation-Maximisation is used for learning Gaussian Mixture Models and accumulative statistics in image maps are integrated in the methods presented. Also, a novel route model and an appropriate learning algorithm are introduced. Additionally, a Hidden Markov Model superimposed on the scene model is used for enabling activity analysis.
The application of the methods is investigated for single cameras and collectively across multiple cameras. Additionally, a novel automatic cross-correlation method is introduced that reveals the topology of a network of activities, as observed by a network of uncalibrated cameras. The method is important not only because it provides an integrated activity model for all the cameras, but also because it provides a mechanism to automatically estimate the topology of the camera network, modelling the activity across the “blind” areas of the surveillance system.
All the proposed learning algorithms are unsupervised to allow automatic learning of the scene model. Their input is a set of noisy trajectories derived automatically by motion tracking modules, attached to each of the cameras.
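For the entry/exit-zone part, here is a minimal sketch of the EM-for-GMM step using scikit-learn; the endpoint file and the component count are assumptions for illustration only.

import numpy as np
from sklearn.mixture import GaussianMixture

# endpoints: (n, 2) image coordinates where tracked trajectories begin
# or end; in the thesis these come from the per-camera motion trackers.
endpoints = np.load("trajectory_endpoints.npy")     # placeholder file

# EM fits a Gaussian mixture; each component is a candidate entry/exit
# zone with a location (mean) and a spatial extent (covariance).
gmm = GaussianMixture(n_components=6, covariance_type="full")
gmm.fit(endpoints)
for mean, cov in zip(gmm.means_, gmm.covariances_):
    print("zone at", mean, "extent", np.sqrt(np.diag(cov)))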

International Hands-on Competition 2005

For your information, -Bob

Dear Colleagues:

On behalf of President Ren C. Luo of National Chung Cheng University (CCU), Taiwan, we are pleased to invite you and your students to participate in the “2005 International Student Experimental Hands-on Project Competition via Internet on Intelligent Mechatronics and Automation”. The objective of this competition is to stimulate the advancement of mechatronics in students’ experimental research projects. A distinctive feature of this competition is the use of Internet video communication techniques to reduce logistical problems. Please see the attachment or visit our web site at http://www.handson.org.tw for detailed information.

Hereby, we are sending you the Call for Participation. The deadline to submit the video for preliminary review is September 15, 2005. The final live competition via Internet is scheduled for December 9, 2005.

We look forward to having your participation.

Sincerely,

******************************
Chao-Chu Chen (Ms.)
Hands-on 2005 Secretariat
National Chung Cheng University
Automation Research Center
168, University Rd., Min-Hsiung
Chia-Yi, Taiwan, 621, R.O.C.
TEL: 886-5-272-0411 ext.16755
FAX: 886-5-272-3941
Email: handson@ia.ee.ccu.edu.tw
******************************

Wednesday, July 13, 2005

Papers: planning in dynamic environments

Jur P. van den Berg, D. Nieuwenhuisen, L. Jaillet and M. Overmars
Creating Robust Roadmaps for Motion Planning in Changing Environments
IROS 2005

Abstract
In this paper we introduce a method based on the Probabilistic Roadmap (PRM) planner to construct robust roadmaps for motion planning in changing environments. PRMs are usually aimed at static environments. In reality though, many environments are not static, but contain moving obstacles as well. Often the motion of these obstacles is not unconstrained, but is restricted to some confined area, e.g. a door that can be open or closed, or a chair that is confined to a room. We exploit this observation by assuming that a moving obstacle has a predefined set of potential placements. We present a variant of PRM that is robust against placement changes of obstacles. Our method creates a roadmap that is guaranteed to contain a path for any feasible query when time goes to infinity, i.e. the method is probabilistically complete. Our implementation shows that after a roadmap is created in the preprocessing phase, queries can be solved instantaneously, thus allowing for on-the-fly replanning to anticipate changes in the environment.
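For reference, here is a bare-bones PRM for the static setting, with collision checking stubbed out as callbacks; the robust multi-placement variant of the paper builds on this basic construction.

import heapq, math

def prm(sample_free, collision_free, n_nodes=200, k=10):
    """Basic probabilistic roadmap.

    sample_free():        returns a random collision-free configuration
    collision_free(a, b): True if the straight segment a-b is free
    Returns (nodes, adjacency list with edge costs).
    """
    nodes = [sample_free() for _ in range(n_nodes)]
    adj = {i: [] for i in range(n_nodes)}
    for i, q in enumerate(nodes):
        near = sorted(range(n_nodes), key=lambda j: math.dist(q, nodes[j]))
        for j in near[1:k + 1]:            # skip near[0], which is i itself
            if collision_free(q, nodes[j]):
                d = math.dist(q, nodes[j])
                adj[i].append((j, d))
                adj[j].append((i, d))
    return nodes, adj

def roadmap_path_cost(adj, start, goal):
    """Dijkstra over the roadmap; returns path cost or None."""
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return None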


Gazihan Alankus, Nuzhet Atay, Chenyang Lu, O. Burchan Bayazit
Spatiotemporal Query Strategies for Navigation in Dynamic Sensor Network Environments
IROS 2005

Abstract
Autonomous mobile agent navigation is crucial to many mission-critical applications (e.g., search and rescue missions in a disaster area). In this paper, we present how sensor networks may assist probabilistic roadmap methods (PRMs), a class of efficient navigation algorithms particularly suitable for dynamic environments. A key challenge of applying PRM algorithms in dynamic environments is that they require spatiotemporal sensing of the environment to solve a given navigation problem. To facilitate navigation, we propose a set of query strategies that allow a mobile agent to periodically collect real-time information (e.g., fire conditions) about the environment through a sensor network. Such strategies include local spatiotemporal query (query of the spatial neighborhood), global spatiotemporal query (query of all sensors), and border query (query of the border of danger fields). We investigate the impact of different query strategies through simulations under a set of realistic fire conditions. We also evaluate the feasibility of our approach using a real robot and real motes. Our results demonstrate that (1) spatiotemporal queries from a sensor network result in significantly better navigation performance than traditional approaches based on the on-board sensors of a robot, (2) the area of local queries represents a tradeoff between communication cost and navigation performance, and (3) through in-network processing, our border query strategy achieves the best navigation performance at a small fraction of the communication cost of global spatiotemporal queries.

Monday, July 11, 2005

AAAI-05 Outstanding Paper Award

Vincent A. Cicirello, my officemate at CMU from 1999 to 2000, and his adviser, Stephen F. Smith, won the Outstanding Paper Award at the Twentieth National Conference on Artificial Intelligence (AAAI-05) for their paper "The Max K-Armed Bandit: A New Model of Exploration Applied to Search Heuristic Selection".

Congratulations!
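The max k-armed model is easy to state in code: with a fixed budget of pulls, the payoff is the single best sample seen rather than the sum, so it pays to gamble on arms with heavy right tails. The epsilon-greedy strategy below is only a toy illustration of the objective, not the authors' method.

import random

def max_k_armed(arms, budget=1000, eps=0.1):
    """Toy strategy for the max k-armed bandit.

    arms: list of zero-argument callables, each returning one sample.
    Objective: maximize the MAXIMUM single reward over all pulls.
    """
    best_seen = [arm() for arm in arms]       # pull each arm once
    overall = max(best_seen)
    for _ in range(budget - len(arms)):
        if random.random() < eps:             # explore a random arm
            i = random.randrange(len(arms))
        else:                                 # exploit the best observed max
            i = best_seen.index(max(best_seen))
        r = arms[i]()
        best_seen[i] = max(best_seen[i], r)
        overall = max(overall, r)
    return overall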

Paper: Robotic Mapping

Mark A. Paskin and Sebastian Thrun
Robotic Mapping with Polygonal Random Fields
UAI 2005

Abstract
Two types of probabilistic maps are popular in the mobile robotics literature: occupancy grids and geometric maps. Occupancy grids have the advantages of simplicity and speed, but they represent only a restricted class of maps and they make incorrect independence assumptions. On the other hand, current geometric approaches, which characterize the environment by features such as line segments, can represent complex environments compactly. However, they do not reason explicitly about occupancy, a necessity for motion planning; and, they lack a complete probability model over environmental structures. In this paper we present a probabilistic mapping technique based on polygonal random fields (PRFs), which combines the advantages of both approaches. Our approach explicitly represents occupancy using a geometric representation, and it is based upon a consistent probability distribution over environments which avoids the incorrect independence assumptions made by occupancy grids. We show how sampling techniques for PRFs can be applied to localized laser and sonar data, and we demonstrate significant improvements in mapping performance over occupancy grids.
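The occupancy-grid baseline the paper argues against is worth seeing in code, because the per-cell independence assumption is explicit in the update rule. A standard log-odds sketch with illustrative constants:

import numpy as np

class OccupancyGrid:
    """Log-odds occupancy grid: every cell is updated independently.

    This per-cell independence is exactly the assumption the
    polygonal-random-field approach is designed to avoid.
    """
    def __init__(self, shape, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros(shape)
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, hit_cells, free_cells):
        for c in hit_cells:                   # cells at beam endpoints
            self.logodds[c] += self.l_occ
        for c in free_cells:                  # cells traversed by beams
            self.logodds[c] += self.l_free

    def prob(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))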

Paper: Smartphones

E. Horvitz, J. Apacible, R. Sarin, and L. Liao (2005).
Prediction, Expectation, and Surprise: Methods, Designs, and Study of a Deployed Traffic Forecasting Service
Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI-2005, Edinburgh, Scotland, July 2005.

Abstract
We present research on developing models that forecast traffic flow and congestion in the Greater Seattle area. The research has led to the deployment of a service named JamBayes, which is being actively used by over 2,500 users via smartphone and desktop versions of the system. We review the modeling effort and describe experiments probing the predictive accuracy of the models. Finally, we present research on building models that can identify current and future surprises, via efforts on modeling and forecasting unexpected situations.

Talk at CMU: Tracking Across Multiple Moving Cameras

Dr. Mubarak Shah
Computer Vision Lab, School of Computer Science
University of Central Florida, Orlando, FL 32816
http://www.cs.ucf.edu/~vision/

Check out their ICCV2005 papers!

The concept of a cooperative multi-camera system, informally a 'forest' of sensors, has recently received increasing attention from the research community. The idea is of great practical relevance, since cameras typically have limited fields of view but are now available at low cost. Thus, instead of having a high-resolution camera that surveys a large area, far greater flexibility and scalability can be achieved by observing a scene 'through many eyes', using a multitude of lower-resolution COTS (commercial off-the-shelf) cameras.

In this talk I will present two approaches for object tracking across multiple moving cameras. In the first approach, objects are to be tracked across several cameras, each mounted on an aerial vehicle, without any telemetry or calibration information. The principal assumption made in this work is that the altitude of the camera allows the scene to be modeled well by a plane. First, the global motion is compensated in each video sequence, and objects are detected and tracked in the individual cameras. To solve the multiple-camera correspondence problem, we exploit constraints on the relationship between the motion of each object across cameras, estimating the probability that trajectories in two views originated from the same object, in order to test multiple correspondence hypotheses (without assuming any calibration information).
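The global-motion-compensation step of the first approach is the standard planar case. Here is a hedged OpenCV sketch (the calls are the usual OpenCV ones, but the parameters are illustrative and error handling is omitted):

import cv2

def compensate_global_motion(prev_gray, cur_gray):
    """Estimate the frame-to-frame homography induced by the (assumed
    planar) scene and warp the current frame into the previous frame's
    coordinates, so residual motion comes from independent objects."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(nxt[good], pts[good], cv2.RANSAC, 3.0)
    h, w = prev_gray.shape
    return cv2.warpPerspective(cur_gray, H, (w, h))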

In the second approach we consider sequences acquired by hand-held cameras, for which the planar scene assumption is not valid. Recently we have proposed the notion of a temporal fundamental matrix to capture the epipolar geometry between the temporal views of an independently moving camera pair observing a dynamic scene. The temporal fundamental matrix is a 3x3 matrix capturing the temporal variation of the geometry. By constraining the rotational and translational motion of the cameras to polynomials in time, we have shown that the components of the fundamental matrix are polynomials in time. In order to obtain the correct correspondences across the multiple moving cameras, we perform a maximum bipartite matching of a graph in which the weights of the edges depend on the properties of the temporal fundamental matrix.
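The final correspondence step is a standard assignment problem. A sketch with SciPy's Hungarian solver follows; in the talk's setting the cost matrix would come from temporal-fundamental-matrix residuals, stubbed here with random values.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_trajectories(cost):
    """Bipartite matching of trajectories across two cameras.

    cost[i, j] should measure how badly trajectory i in camera A and
    trajectory j in camera B violate the (temporal) epipolar
    constraint; the solver returns the minimum-total-cost assignment.
    """
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

# toy usage with random costs standing in for real residuals:
print(match_trajectories(np.random.rand(4, 4)))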

======================================
Dr. Mubarak Shah, Agere Chair Professor of Computer Science and the founding director of the Computer Vision Laboratory at the University of Central Florida (UCF), is a researcher in computer vision. He is a co-author of two books, Video Registration (2003) and Motion-Based Recognition (1997), both by Kluwer Academic Publishers. He has worked in several areas including activity and gesture recognition, violence detection, event ontology, object tracking (fixed camera, moving camera, multiple overlapping and non-overlapping cameras), video segmentation, story and scene segmentation, view morphing, ATR, wide-baseline matching, and video registration. Dr. Shah is a fellow of the IEEE, was an IEEE Distinguished Visitor speaker for 1997-2000, and is often invited to present seminars, tutorials and invited talks all over the world. He received the Harris Corporation Engineering Achievement Award in 1999; the TOKTEN awards from UNDP in 1995, 1997, and 2000; Teaching Incentive Program awards in 1995 and 2003; a Research Incentive Award in 2003; and the IEEE Outstanding Engineering Educator Award in 1997. He is an editor of the international book series on "Video Computing", editor-in-chief of the Machine Vision and Applications journal, and an associate editor of the Pattern Recognition journal. He was an associate editor of the IEEE Transactions on PAMI, and a guest editor of the special issue of the International Journal of Computer Vision on Video Computing.

Tuesday, July 05, 2005

Jobs: Computer Vision, Pattern Recognition

1. INDUSTRIAL LIGHT + MAGIC
RESEARCH AND DEVELOPMENT IS SEEKING COMPUTER VISION SPECIALISTS

SUMMARY
ILM is currently seeking computer vision specialists for our Research and Development department. Key technologies include 2d and 3d tracking, matchmove, 3d reconstruction, image-based rendering, and related computer vision techniques. Duties include designing and implementing new algorithms and systems, maintaining current systems, and assisting artists in film production tasks.

PRINCIPAL DUTIES AND REQUIREMENTS
- Primarily responsible for the development of algorithms, software, and/or systems under the guidance of a departmental project lead.
- May work directly with artists to identify technology solutions and define workflows and interfaces.
- Serves as a knowledge resource for software and/or systems used in production at ILM. This includes end user support of alpha and beta release cycles.
- Advises/assists junior engineers with maintenance and bug fixing of existing software and/or systems.
- Expected to participate in discussions surrounding future applications and advise on appropriateness of solutions.

EDUCATION, EXPERIENCE, AND SKILLS REQUIRED
- Bachelor's degree in Engineering or Scientific discipline, advanced degree strongly preferred.
- 2-4 years of professional or post-doc experience in applied computer graphics or vision.
- Some experience with commercial 2d and 3d production tools.
- In-depth knowledge and demonstrated experience with computer vision algorithms.
- Excellence in problem solving and balancing quick turnaround with long-term quality.
- Must be able to work well with a wide range of personality types.
- Must be detail oriented and organized, possess strong communication skills, and be able to prioritize a variety of tasks efficiently.

TO APPLY
If you are interested in this role, please email a resume to "lala@ilm.com" referencing JOB CV.

---------------------------------------------------------------------------------------------

2. Pittsburgh Pattern Recognition is a spin-off of Carnegie Mellon University formed in 2004 to commercialize patented object detection and recognition software. We seek self-motivated individuals who share a vision, passion, and appreciation for exploration. Expectations are high: our employees must be driven by intellectual curiosity while remaining firmly grounded in developing products that yield substantial financial returns. Since our product development requires close collaboration and an interdisciplinary approach to solving problems, employees constantly engage in a variety of different projects and tasks to support the company’s growth.

Software Engineers will develop computer vision/pattern recognition solutions for commercial and government applications. Employees will work collaboratively within a small development team as well as with customers.

Requirements include:
• a BS and/or MS in electrical engineering, computer science, or related field
• software development experience
• working knowledge of image processing and statistical pattern recognition
• complex problem-solving skills
• attention to detail and excellent communication skills

Preferred qualifications include: experience with hardware/embedded systems, system-administration experience, programming experience in Microsoft Windows environment, and Linux system administration experience.

PittPatt offers competitive compensation, health care benefits, and a stock plan for qualified employees. Current positions are only for our headquarters in “The Strip” warehouse district in Pittsburgh.

If you’re interested in an exciting career at a rising company, please submit a resume, cover letter, and two references by email to careers@pittpatt.com with the subject “University Posting: Software Engineer”.

Pittsburgh Pattern Recognition is an EOE employer. All job applications are maintained on file for two years from the date received.

Job at NASA Ames

For your information. -Bob

ROBOTICS RESEARCHER POSITION

The Intelligent Robotics Group at the NASA Ames Research Center has an immediate opening for a full-time researcher. Applicants should hold an M.S. or Ph.D. in Computer Science or Robotics and have experience in software architectures (especially robot controllers and interaction infrastructure). A strong background in UNIX-based development, including C++, Java, and software engineering (UML, object-oriented design, etc.), is required. In addition, knowledge in one or more of the following areas is greatly preferred:

- agent architectures and delegated computing
- computer vision (visual servoing, autonomous classification, and SLAM)
- human-robot interaction (dialogue, user modeling, and user interfaces)
- marine / underwater robotics
- mobile manipulation (especially non-prehensile)
- perceptual user interfaces (gaze following, visual gesturing, etc.)
- real-time and distributed computing

If you are interested in applying for this position, please send the following via email:

- a letter describing your background and motivation
- a detailed CV (preferably in text or PDF format)
- contact details of at least two references

to Dr. Terry Fong.

The NASA Ames Research Center is located at Moffett Field, California in the heart of Silicon Valley. NASA Ames is a leader in information technology research, with a focus on intelligent systems, supercomputing, and networking. More than 3,500 personnel are employed at Ames. In addition, approximately 300 graduate students, cooperative education students, post-doctoral fellows, and university faculty work at the Center.

Since 1998, the Intelligent Robotics Group has been building robots to help humans explore and understand extreme environments and uncharted worlds. IRG conducts cross-cutting research in a wide range of areas including: 3D user interfaces, outdoor computer vision, human-robot interaction, navigation, mobile manipulation, robot software architectures and field mobile robots. This research directly supports applications in education, planetary exploration, marine robotics, and urban search and rescue.

Friday, July 01, 2005

LEGO MindStorms

Today I went to a mall with my girlfriend, and we saw the LEGO MindStorms Robotics Invention System 2.0. It is traditional LEGO plus a microprocessor and sensors/motors, and it comes with a very easy-to-use programming environment. It seems good for rapid prototyping of robotics ideas. It costs $10700 at the mall and $9000 on the net, but only $6300 in the US. I think I will buy it. Have you guys ever played with it before?