This blog is maintained by the Robot Perception and Learning lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Wednesday, December 28, 2011
Lab Meeting Dec. 29, 2011 (David): Semantic fusion of laser and vision in pedestrian detection (PR 2010)
Luciano Oliveira, Urbano Nunes, Paulo Peixoto, Marco Silva, Fernando Moita
Abstract
Fusion of laser and vision in object detection has been accomplished by two main approaches: (1) independent integration of sensor-driven features or sensor-driven classifiers, or (2) finding a region of interest (ROI) by laser segmentation and labeling the projected ROI with an image classifier. Here, we propose a novel fusion approach based on semantic information and embodied at several levels. Sensor fusion is based on the spatial relationships of parts-based classifiers and is performed via a Markov logic network. The proposed system deals with partial segments, is able to recover depth information even if the laser fails, and models the integration through contextual information: characteristics not found in previous approaches. Experiments in pedestrian detection demonstrate the effectiveness of our method on data sets gathered in urban scenarios.
Paper Link
Local Link
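For readers unfamiliar with Markov logic networks, the short Python sketch below illustrates the general log-linear fusion idea the abstract describes: weighted logical rules over vision-part and laser-segment evidence, where the weights of satisfied rules are summed and normalized into a detection probability. This is a minimal illustration only; all evidence fields, predicates, and rule weights here are hypothetical assumptions for exposition, not the paper's classifiers or learned weights.

import math

# Hypothetical evidence for one candidate region: confidence of a
# vision parts-based classifier, confidence of a laser segment
# classifier, and whether the two detections are spatially consistent.
# The rule weights below are illustrative, not learned values.
RULES = [
    # (weight, predicate) -- each predicate maps evidence -> True/False
    (1.5, lambda e: e["vision_part"] > 0.5),        # vision part fires
    (1.2, lambda e: e["laser_segment"] > 0.5),      # laser segment fires
    (2.0, lambda e: e["vision_part"] > 0.5
               and e["laser_segment"] > 0.5
               and e["spatially_consistent"]),      # sensors agree spatially
    (0.8, lambda e: e["laser_segment"] <= 0.5
               and e["vision_part"] > 0.7),         # strong vision, laser absent
]

def pedestrian_probability(evidence):
    """Log-linear (MLN-style) fusion: sum the weights of satisfied rules,
    then normalize against a 'not pedestrian' state with score 0."""
    score = sum(w for w, rule in RULES if rule(evidence))
    return math.exp(score) / (math.exp(score) + 1.0)

if __name__ == "__main__":
    # Laser returns nothing (e.g., sensor failure) but vision is confident:
    # the soft rules still yield a usable probability.
    print(pedestrian_probability({
        "vision_part": 0.8, "laser_segment": 0.0, "spatially_consistent": False,
    }))
    # Both sensors fire on spatially consistent parts: probability rises.
    print(pedestrian_probability({
        "vision_part": 0.8, "laser_segment": 0.9, "spatially_consistent": True,
    }))

The sketch reduces full MLN inference to a two-state log-linear model for clarity; the appeal of the formulation, as in the paper, is that fusion degrades gracefully when one sensor fails, since the remaining rules still contribute evidence.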