This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Wednesday, June 29, 2005
Lab Space:
Room 407: 12.52 ping (about 41 m²). No window.
Room 406/408: 23.70 ping (about 78 m²). Shared with the Algorithms Lab (演算法實驗室). With windows.
Room 402/404: 22.30 ping (about 74 m²). May be shared with another incoming professor. No window.
What do you think? If you have any comments, please let me know before this Friday.
Thanks,
-Bob
Friday, June 24, 2005
The CVPR 2005 Best Paper Award goes to
- J. Pilet, V. Lepetit, and P. Fua for their paper on "Real-Time Non-Rigid Surface Detection."
The following are CVPR 2005 Best Paper Award Honorable Mentions:
- A. Buades, B. Coll, and J.-M. Morel. "A Non-Local Algorithm for Image Denoising"
- V. Kolmogorov, A. Criminisi, A. Blake, C. Rother, and G. Cross. "Bi-Layer Segmentation of Binocular Stereo Video"
- V. Cheung, B. Frey, and N. Jojic. "Video Epitomes"
Meet at 10 AM, Tuesday, June 28
Best,
-Bob
Thursday, June 23, 2005
Two of my best friends at CMU will defend their theses next week
Active Learning for Outdoor Perception
Jun 28 2005, 2:00 PM, NSH 1507
Abstract
Many current state-of-the-art outdoor robots have perception systems that are primarily hand-tuned, which makes them difficult to adapt to new tasks and environments. Machine learning offers a powerful solution to this problem. Assuming that training data describing the desired output of the system is available, many supervised learning algorithms exist for automatically adjusting the parameters of possibly complex perception systems. This approach has been successfully demonstrated in many areas, and is gaining significant momentum in the field of robotic perception. An important difficulty in using machine learning techniques for large scale robotics problems comes from the fact that most algorithms require labeled data for training. Large data sets occur naturally in outdoor robotics applications, and labeling is most often an expensive process. This makes the direct application of learning techniques to realistic perception problems in our domain impractical. This thesis proposes to address the data labeling problem by analyzing the unlabeled data and automatically selecting for labeling only those examples that are hopefully important for the classification problem of interest. We present solutions for adapting several active learning techniques to the specific constraints that characterize outdoor perception, such as the need to learn from data sets with severely unbalanced class priors. We demonstrate that our solutions result in important reductions in the amount of data labeling required by presenting results from a large number of experiments performed using real-world data.
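For those new to the area: the core loop of active learning is easy to sketch. Below is a minimal pool-based uncertainty-sampling example (my own toy illustration with synthetic data, not code from the thesis): train on a small labeled seed set, then repeatedly ask an oracle to label the pool example the classifier is least certain about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for perception features.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Seed set with both classes represented; the rest is the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression()
for _ in range(20):                 # query 20 labels, one per round
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    # Uncertainty sampling: query the example closest to the decision boundary.
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)           # here the oracle's label is just y[query]
    pool.remove(query)

print("accuracy on the full set:", clf.score(X, y))
```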
Carl Wellington
Learning a Terrain Model for Autonomous Navigation in Rough Terrain
Jun 29 2005, 3:00 PM, NSH 1507
Abstract
Current approaches to local rough-terrain navigation are limited by their ability to build a terrain model from sensor data. Available sensors make very indirect measurements of quantities of interest such as the supporting ground surface and the location of obstacles. This is especially true in domains where vegetation may hide the ground surface or partially obscure obstacles. This thesis presents two related approaches for automatically learning how to use sensor data to build a local terrain model that includes the height of the supporting ground surface and the location of obstacles in challenging rough-terrain environments that include vegetation. The first approach uses an online learning method that directly learns the mapping between sensor data and ground height through experience with the world. The system can be trained by simply driving through representative areas. The second approach includes a terrain model that encodes structure in the world such as ground smoothness, class continuity, and similarity in vegetation height. This structure helps constrain the problem to better handle dense vegetation. Results from an autonomous tractor show that the mapping from sensor data to a terrain model can be automatically learned, and that exploiting structure in the environment improves ground height estimates in vegetation.
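The first approach above (learning ground height directly from experience) can be illustrated with a hedged sketch: an online SGD regressor from hand-picked range-sensor features to the ground height the vehicle later measures by driving over the cell. The features and data here are invented for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-cell features, e.g. [lowest laser hit, mean hit height, hit density].
features = rng.normal(size=(1000, 3))
true_w = np.array([0.9, 0.2, -0.3])      # unknown feature-to-height mapping
ground = features @ true_w + 0.05 * rng.normal(size=1000)

w = np.zeros(3)                          # online model, trained as we drive
lr = 0.01
for x, h in zip(features, ground):       # h: height measured under the wheels
    w += lr * (h - w @ x) * x            # one SGD step on the squared error

print("learned weights:", w)             # should approach true_w
```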
Tuesday, June 21, 2005
meet with me next Tuesday?
I am attending CVPR 2005 in San Diego and will fly back to Taiwan next Monday. I would like to know how many of you want to talk with me next Tuesday. If you would like a short meeting, please let me know as soon as possible so that I can arrange a room at CSIE.
Best,
-Bob
Thursday, June 16, 2005
Paper: Recognition with spatial and temporal cues
Scene Recognition Based on Relationship between Human Actions and Objects
17th International Conference on Pattern Recognition (ICPR'04) Volume 3 pp. 73-78
Abstract
In this paper, we propose a novel method for scene recognition using video images through analysis of human activities. We aim at recognizing three kinds of things such as human activities, objects and environment. In the previous method, locations and orientations of objects are estimated using shape models, which are often claimed to be dependent upon individual scene. Instead of shape models, we employ conceptual knowledge about function and/or usage of objects as well as that about human actions. In our method, the location and usage of objects can be identified by observing interaction of human with them.
Paper: RoboCup
Cooperative behavior based on a subjective map with shared information in a dynamic environment
Advanced Robotics, Vol.19, No.2, pp.207--218, 2005.
Abstract:
This paper proposes a subjective map representation that enables a robot in a multiagent system to make decisions in a dynamic, hostile environment. A typical situation can be found in the Sony four-legged robot league of the RoboCup competition [1]. The subjective map is a map of the environment that each agent maintains regardless of the objective consistency of the representation among the agents. Owing to the map's subjectivity, it is not affected by incorrect information acquired by other agents. The method is compared with conventional methods with or without information sharing.
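As a toy illustration of the subjectivity idea (my own construction, not the paper's algorithm): each robot can fuse teammates' shared estimates into its own map while inflating their variances by a trust factor, so incorrect communicated information cannot override its own observations.

```python
def fuse_subjective(own_est, own_var, shared, trust=4.0):
    """Inverse-variance fusion that down-weights shared information.

    `shared` is a list of (estimate, variance) pairs from teammates;
    `trust` > 1 keeps the robot's own observation dominant.
    """
    num, den = own_est / own_var, 1.0 / own_var
    for est, var in shared:
        num += est / (trust * var)
        den += 1.0 / (trust * var)
    return num / den, 1.0 / den

# This robot sees the ball at x = 2.0; one teammate roughly agrees,
# another reports a wildly different position.
est, var = fuse_subjective(2.0, 0.1, [(2.3, 0.2), (5.0, 0.2)])
print(f"subjective ball position: {est:.2f} (variance {var:.2f})")
```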
Wednesday, June 15, 2005
News: iRobot, ROOMBA
Roomba, the first automatic vacuum available in the U.S., is an intelligent vacuum that uses robotic technology to deliver both clean floors and personal time. Roomba uses artificial intelligence algorithms to clean efficiently and was introduced by iRobot, pioneers in artificial intelligence who have built numerous products for the U.S. Department of Defense, the U.S. military, and toy and energy companies. Thirteen inches in diameter and lighter than your average bowling ball, this innovative vacuum roams the room devouring dust, dirt, and tidbits left behind from everyday living.
Roomba's built-in intelligence means that it cleans without human intervention. Even if it doesn't vacuum the floor the way you might, it is smart enough to get your floors barefoot clean. When Roomba starts cleaning it first travels in a spiral pattern. Its Non-Marring Bumper will contact a wall, or it may try to find a wall after spiraling for a while. Roomba follows the wall for a little while, sweeping up dirt next to the wall with the Edge Cleaning Side Brush. After cleaning along a portion of the wall or other object, Roomba crisscrosses the room in straight lines. For most of Roomba's cleaning cycles, Roomba repeats this cleaning pattern until its cleaning time has elapsed, providing maximum coverage of the room.
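That paragraph is effectively describing a behavior-based state machine. Here is a toy sketch of such a control loop (the transitions and timings are guesses for illustration, not iRobot's actual firmware):

```python
import random

def bumper_pressed():
    return random.random() < 0.02            # stand-in for the real sensor

def cleaning_cycle(total_time=600):
    """Toy spiral -> wall-follow -> crisscross cycle, as described above."""
    state = "SPIRAL"
    for _ in range(total_time):              # stop when cleaning time elapses
        if state == "SPIRAL":
            # Spiral outward until the bumper contacts a wall.
            if bumper_pressed():
                state = "WALL_FOLLOW"
        elif state == "WALL_FOLLOW":
            # Follow the wall with the edge brush for a while, then move on.
            if random.random() < 0.01:
                state = "CRISSCROSS"
        elif state == "CRISSCROSS":
            # Cross the room in straight lines; a bump restarts the cycle.
            if bumper_pressed():
                state = "SPIRAL"

cleaning_cycle()
```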
1st Annual Conference on Human-Robot Interaction (HRI 2006)
Scope of the Conference
* User evaluations
* HRI metrics
* HRI applications
* HRI foundations
* Case studies
* Multi-modal interaction
* Adjustable autonomy
* Human-Robot Dialog
* Interface and autonomy design
* HRI for heterogeneous teams
* Cognitive modeling and science in HRI
* Assistive robotics
* Human-guided learning
* Mixed-guided learning
* Mixed-initiative interaction
* Work practice studies
Tuesday, June 14, 2005
Free Planning book.
Keywords: motion planning, decision-theoretic planning, robotics
Computer Vision Contest
The goal of this contest is to provide a fun framework for vision students to try their hand at solving a challenging vision-related task and to improve their knowledge and understanding by competing against other teams from around the world. The contest is being launched in June so that students can work on the challenge over the course of the summer. The contest finals will be held at ICCV 2005 in Beijing.
Sunday, June 12, 2005
More About Lab Party
There is a good place for the lab party.
The restaurant is called "米倉" and is near NTNU. We can eat dinner and chat there on comfortable sofas, and the place also has wireless network access.
Please tell me whether you can make it to the party, or post the times that work for you.
I am free on the nights of 6/16 and 6/19.
Saturday, June 11, 2005
Talk at CMU: Joint Exploitation of Multiple Media--From Multimedia to Databases
ABSTRACT
Multimedia content analysis offers many exciting research opportunities and is a necessary step towards automatic understanding of the content of digital documents. Digital documents are typically composite. Processing in parallel and integrating low-level information computed over each of the media that compose a multimedia document can yield knowledge that stand-alone and isolated analysis could not discover.
Joint processing of multiple media is very challenging, even at the lowest analysis levels. Coping with imperfect synchronization of pieces of information, mixing extremely different kinds of information (numerical or symbolic descriptions, values describing intervals or instants, probabilities and distances, HMM and Gaussians, ...), and reconciling contradictory outputs are some of the obstacles which make processing of multimedia documents much more difficult than it seems at first glance.
This talk will first show what may be gained from jointly analyzing multimedia documents. It will then briefly overview the typical information that can be extracted from major media (video, sound, images and text) before focusing on the problems that arise when trying to use all this information together. We hope to convince researchers to start trying to solve these problems, since they directly hamper the acquisition of higher-level knowledge from multimedia documents.
ABOUT PATRICK GROS
Patrick Gros has been involved in computer vision research for 14 years. After completing his studies in engineering science at the École Polytechnique and the École Nationale Supérieure de Techniques Avancées in Paris, he joined the Fundamental Computer Science and Artificial Intelligence Laboratory (LIFIA) in 1990 to pursue a Ph.D. in computer vision. Since defending his thesis in July 1993, he has held a research position at CNRS, still at LIFIA, which has since become GRAVIR. From November 1995 until October 1996, he was a visiting research scientist at the Robotics Institute of Carnegie Mellon University in Pittsburgh, PA, USA, working on automatic landmark recognition for vehicles in urban environments. In July 1999, he moved from Grenoble to Rennes, where he joined the IRISA research unit. In 2002, he founded TexMex, a new research group devoted to multimedia document analysis and management, with a special emphasis on the problems raised by the management of very large volumes of documents.
His research interests are image indexing and recognition in large databases, and multimedia document description. He teaches graduate courses in computer science and computer vision. He is an associate editor of the journal "Traitement du signal". He participates in numerous national projects on multimedia description and indexing, with applications to television archiving, copyright protection for photo agencies, and personal picture management on set-top boxes. Within the European Union's 6th Framework Programme, he is currently involved in the MUSCLE Network of Excellence and in the Enthrone and AceMedia Integrated Projects. He has published 17 papers in journals and book chapters, and 37 papers in conferences.
Thursday, June 09, 2005
Helpful information about AIBO
[AIBO SDE] official web site
The AIBO SDE is a full-featured development environment for writing software for AIBO.
[AIBO Life]
[AIBO Hack]
New AIBO Hacks, Fixes, and Developments...
Have fun :)
Papers: Spatio-Temporal Image Features
Apostoloff, N. and A. W. Fitzgibbon.
Learning spatiotemporal T-junctions for occlusion detection.
CVPR 2005
Ivan Laptev
Local Spatio-Temporal Image Features for Motion Interpretation
PhD Thesis, 2004, NADA, KTH, Stockholm
How to start your research in the lab
- What is my research topic? Is it important?
- There are many papers and books posted on this blog. Which paper or book should I read first?
- I cannot understand this paper/book. What should I do?
- I don’t like this topic. Can I change my research topic?
-Bob
Wednesday, June 08, 2005
Paper: Scan Matching
F. Lu and E. Milios
Robot pose estimation in unknown environments by matching 2D range scans
Journal of Intelligent and Robotic Systems, 1997
Globally Consistent Range Scan Alignment for Environment Mapping
Autonomous Robots, 1997
Paper: Space-Time Behavior
Space-Time Behavior Based Correlation
CVPR 2005
Abstract
We introduce a behavior-based similarity measure which tells us whether two different space-time intensity patterns of two different video segments could have resulted from a similar underlying motion field. This is done directly from the intensity information, without explicitly computing the underlying motions. Such a measure allows us to detect similarity between video segments of differently dressed people performing the same type of activity. It requires no foreground/background segmentation, no prior learning of activities, and no motion estimation or tracking.
Using this behavior-based similarity measure, we extend the notion of 2-dimensional image correlation into the 3-dimensional space-time volume, thus allowing to correlate dynamic behaviors and actions. Small space-time video segments (small video clips) are “correlated” against entire video sequences in all three dimensions (x,y, and t). Peak correlation values correspond to video locations with similar dynamic behaviors. Our approach can detect very complex behaviors in video sequences (e.g., ballet movements, pool dives, running water), even when multiple complex activities occur simultaneously within the field-of-view of the camera.
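The 3D correlation machinery the paper builds on is easy to sketch. The example below is plain normalized cross-correlation over (t, y, x) with synthetic data, not the paper's behavior-based consistency measure:

```python
import numpy as np

def spacetime_correlation(video, clip):
    """Slide a small (t, y, x) clip over a video volume and return a
    normalized correlation score at every space-time location."""
    ct, cy, cx = clip.shape
    c = (clip - clip.mean()) / (clip.std() + 1e-8)
    T, Y, X = video.shape
    scores = np.empty((T - ct + 1, Y - cy + 1, X - cx + 1))
    for t in range(scores.shape[0]):
        for y in range(scores.shape[1]):
            for x in range(scores.shape[2]):
                w = video[t:t + ct, y:y + cy, x:x + cx]
                w = (w - w.mean()) / (w.std() + 1e-8)
                scores[t, y, x] = (c * w).mean()
    return scores                          # peaks mark similar dynamics

video = np.random.rand(30, 32, 32)         # 30 frames of 32x32 pixels
clip = video[10:15, 8:16, 8:16].copy()     # a small "behavior" template
scores = spacetime_correlation(video, clip)
print("best match at (t, y, x) =",
      np.unravel_index(np.argmax(scores), scores.shape))
```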
CMU Ph.D. Thesis: Normative Approach to Market Microstructure Analysis
Abstract:
In this thesis, we propose a normative approach to market microstructure analysis. We study, model, and quantify low-level high-frequency interactions among agents in financial markets. This is an environment where electronic agents are much better positioned to both make decisions and take actions, since the amount of information and the rapid pace of activity can overwhelm humans. Unlike previous work in this area, we are not only interested in explaining why microstructure variables (prices, volumes, spreads, order flow, etc) behave in a certain way, but also in determining optimal policies for agents interacting in this environment. Our prescriptive - as opposed to explanatory - method treats market interactions as a stochastic control problem. We suggest a quantitative framework for solving this problem, describe a reinforcement learning algorithm tailored to this domain, and conduct empirical studies on very large datasets of high-frequency data. We hope that our research will lead not just to automation of market activities, but to more orderly and efficient financial markets.
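The "determining optimal policies" part can be illustrated, in a very hedged way, with generic tabular Q-learning on a toy execution problem (my own stand-in, not the thesis's algorithm or data): an agent holding one share must sell within H steps and learns when to trade.

```python
import random
from collections import defaultdict

# Toy optimized execution: sell one share within H steps.
# State: steps remaining. Actions: 0 = wait, 1 = sell at a noisy price.
H, EPISODES = 5, 20000
ALPHA, GAMMA, EPS = 0.1, 1.0, 0.1
Q = defaultdict(float)

for _ in range(EPISODES):
    for s in range(H, 0, -1):                 # s = steps remaining
        if random.random() < EPS:
            a = random.choice((0, 1))         # explore
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
        price = random.gauss(100.0, 1.0)      # noisy midpoint stand-in
        done = (a == 1) or (s == 1)           # forced to sell at the end
        reward = price if done else 0.0
        target = reward if done else GAMMA * max(Q[(s - 1, 0)], Q[(s - 1, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        if done:
            break

print({s: round(Q[(s, 1)], 2) for s in range(1, H + 1)})
```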
the link.
Monday, June 06, 2005
Paper: Consciousness
Consciousness: Drinking from the Firehose of Experience.
National Conference on Artificial Intelligence (AAAI-2005)
Paper: Learning
Bootstrap Learning of Foundational Representations.
Developmental Robotics, AAAI Spring Symposium Series, 2005.
More about the Sony AIBO...
I hope that my request for a proposal did not scare you. Such requests are normal. When I was a student, I had to convince my advisor before purchasing the sensors I wanted to use: I explained my ideas and showed him my preliminary results. Now that I am a faculty member, I still need to convince my boss before purchasing equipment. Again, proposal requests are common.
The Sony AIBO is a good platform. I contacted my friend at Sony Taiwan, and it seems that I need to contact Sony Japan directly. I will keep trying after I return to Taiwan. Meanwhile, think about and write down your research plans for this summer, and try to "convince" me when I am back. I will do my best to help you if I am convinced.
Please do not be afraid of asking!
-Bob
Papers: SLAM
R. Smith, M. Self and P. Cheeseman
A Stochastic Map For Uncertain Spatial Relationships
4th International Symposium on Robotics Research, MIT Press, 1987
R. Smith, M. Self and P. Cheeseman
Estimating Uncertain Spatial Relationships in Robotics
In Cox and Wilfong (eds.), Autonomous Robot Vehicles, Springer-Verlag, 1990
Saturday, June 04, 2005
Paper: Subjective Localization
Learning subjective representations for planning
IJCAI 2005
Wednesday, June 01, 2005
Hi, I'm Bright
My name is Bright Lo (羅子建).
I'm glad that I can join the Robot Perception and Learning Lab.
MSN: brightsuper@ms64.url.com.tw
Email: brightsuper@ms64.url.com.tw
Phone: 0920-167169
Group Contact List
The contact list is arranged in alphabetical order.
Please contact me if you are not on the list or if any of your information is incorrect.
- Chi-Hao:
MSN: anki_jun@msn.com
Email: chihao.mail@gmail.com
Phone: 0919-840080
- Chun-Wei (Vincent):
MSN: heroreo@msn.com
Email: heroreo@gmail.com
Phone:
- Pei-Han (Eric):
MSN: eric.cisco@msa.hinet.net
Email: eric.cisco@msa.hinet.net
Phone:
- Shao-Wen (any):
MSN: anychris@msn.com
Email: anychris@gmail.com
Phone: 0935-219529
- Ta-Ching (Jim):
MSN: b88506059@ntu.edu.tw
Email: b88059@csie.ntu.edu.tw
Phone: 0968-740330
- Tai-Liang (Nelson):
MSN: tailionchen@hotmail.com
Email: b88501074@ntu.edu.tw
Phone: 0968-019382
- Tzu-Chien (Bright):
MSN: brightsuper@ms64.url.com.tw
Email: brightsuper@ms64.url.com.tw
Phone: 0920-167169