Monday, September 30, 2013

Lab Meeting Oct. 3rd (Jim): Robot Navigation in Dense Human Crowds: the Case for Cooperation

Title: Robot Navigation in Dense Human Crowds: the Case for Cooperation
Authors: Pete Trautman, Jeremy Ma, Richard M. Murray, and Andreas Krause
In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2013)

Abstract:
... we explore two questions. Can we design a navigation algorithm that encourages humans to cooperate with a robot? Would such cooperation improve navigation performance? We address the first question by developing a probabilistic predictive model of cooperative collision avoidance and goal-oriented behavior. ... We answer the second question by empirically validating our model in a natural environment (a university cafeteria), and in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (completing 488 runs). The “multiple goal” interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities near 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as our planner. ... We conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
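For intuition, here is a minimal sketch of the interacting-Gaussian-processes idea behind the planner: draw candidate trajectories for the robot and each nearby pedestrian, re-weight the joint samples with a cooperative interaction potential that discourages close approaches, and execute the best-weighted robot plan. The straight-line-plus-noise trajectory prior and the particular potential below are simplifying assumptions of mine, not the authors' implementation, which conditions full GP posteriors on observed tracks and goals.

```python
# Illustrative sketch of interacting Gaussian processes (IGP) navigation:
# sample joint robot/pedestrian trajectories, weight them by a cooperative
# interaction potential, and take the first step of the best robot sample.
import numpy as np

rng = np.random.default_rng(0)
T = 10           # planning horizon (time steps)
N_SAMPLES = 500  # joint trajectory samples

def sample_trajectory(start, goal, noise=0.15):
    """One 2D trajectory sample: straight line to goal plus Gaussian wiggle
    (stand-in for a draw from a GP posterior conditioned on observations)."""
    return np.linspace(start, goal, T) + noise * rng.standard_normal((T, 2))

def interaction_potential(robot, humans, safety=0.5):
    """Weight near 1 for well-separated agents, near 0 for close approaches."""
    w = 1.0
    for h in humans:
        d = np.linalg.norm(robot - h, axis=1)          # per-step distances
        w *= np.prod(1.0 - np.exp(-(d / safety) ** 2))
    return w

def igp_plan(robot_start, robot_goal, people):
    """people: list of (current position, inferred goal) for nearby humans."""
    best_w, best = -1.0, None
    for _ in range(N_SAMPLES):
        r = sample_trajectory(robot_start, robot_goal)
        hs = [sample_trajectory(p, g) for p, g in people]
        w = interaction_potential(r, hs)
        if w > best_w:
            best_w, best = w, r
    return best[1]  # next waypoint to execute

print(igp_plan(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
               people=[(np.array([5.0, 0.5]), np.array([0.0, 0.5]))]))
```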

Link

Tuesday, September 24, 2013

Lab Meeting September 26, 2013 (Gene): Track-to-Track Fusion With Asynchronous Sensors Using Information Matrix Fusion for Surround Environment Perception

Title: Track-to-Track Fusion With Asynchronous Sensors Using Information Matrix Fusion for Surround Environment Perception
Authors: Michael Aeberhard, Stefan Schlichthärle, Nico Kaempchen, and Torsten Bertram
In: IEEE Transactions on Intelligent Transportation Systems, 2012

Abstract:
Driver-assistance systems and automated driving applications in the future will require reliable and flexible surround environment perception. Sensor data fusion is typically used to increase reliability and the observable field of view. In this paper, a novel approach to track-to-track fusion in a high-level sensor data fusion architecture for automotive surround environment perception using information matrix fusion (IMF) is presented. It is shown that IMF achieves the same accuracy in state estimation as a low-level centralized Kalman filter, which is widely known to be the most accurate method of fusion. Additionally, as opposed to state-of-the-art track-to-track fusion algorithms, the presented approach guarantees a globally maintained track over time as an object passes in and out of the fields of view of several sensors, as required in surround environment perception. As opposed to the often-used cascaded Kalman filter for track-to-track fusion, it is shown that the IMF algorithm has a smaller error and maintains consistency in the state estimation. The proposed approach using IMF is compared with other track-to-track fusion algorithms in simulation and is shown to perform well on real sensor data in a prototype vehicle with a 12-sensor configuration for surround environment perception in highly automated driving applications.
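The core IMF update is compact enough to sketch: at each fusion time the global track absorbs only the new information a sensor-level track has gained since the last fusion, i.e., the difference between that track's current and predicted information matrices, which avoids double-counting the prior the global and local tracks share. Below is a hedged numpy sketch in the form commonly given in the track-to-track fusion literature; variable names and the toy example are mine, not the paper's code.

```python
# Information matrix fusion (IMF) for track-to-track fusion: fuse a
# sensor-level track into the global track by adding only the sensor's
# information *gain* since the last fusion time.
import numpy as np

def imf_update(x_g, P_g, x_i, P_i, x_i_pred, P_i_pred):
    """x_g, P_g           : global track prediction at fusion time
       x_i, P_i           : sensor-level track estimate at fusion time
       x_i_pred, P_i_pred : sensor track predicted from its last fused
                            state (the information already shared)"""
    Y_g = np.linalg.inv(P_g)
    Y_i, Y_i_pred = np.linalg.inv(P_i), np.linalg.inv(P_i_pred)

    Y_f = Y_g + (Y_i - Y_i_pred)                         # information matrix
    y_f = Y_g @ x_g + (Y_i @ x_i - Y_i_pred @ x_i_pred)  # information state

    P_f = np.linalg.inv(Y_f)
    return P_f @ y_f, P_f

# Toy 1D example: a confident sensor update pulls the global track.
x, P = imf_update(x_g=np.array([1.0]),      P_g=np.eye(1) * 2.0,
                  x_i=np.array([1.2]),      P_i=np.eye(1) * 0.5,
                  x_i_pred=np.array([0.9]), P_i_pred=np.eye(1) * 2.0)
print(x, P)  # fused estimate sits near the accurate sensor track
```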

Link

Wednesday, September 11, 2013

Lab Meeting September 12, 2013 (Jimmy): Indoor Tracking and Navigation Using Received Signal Strength and Compressive Sensing on a Mobile Device

Title: Indoor Tracking and Navigation Using Received Signal Strength and Compressive Sensing on a Mobile Device
Authors: Anthea Wain Sy Au, Chen Feng, Shahrokh Valaee, Sophia Reyes, Sameh Sorour, Samuel N. Markowitz, Deborah Gold, Keith Gordon, and Moshe Eizenman
In: IEEE Transactions on Mobile Computing, 2013

Abstract:
An indoor tracking and navigation system based on measurements of received signal strength (RSS) in wireless local area network (WLAN) is proposed. In the system, the location determination problem is solved by first applying a proximity constraint to limit the distance between a coarse estimate of the current position and a previous estimate. Then, a Compressive Sensing-based (CS-based) positioning scheme, proposed in our previous work [1], [2], is applied to obtain a refined position estimate. The refined estimate is used with a map-adaptive Kalman filter, which assumes a linear motion between intersections on a map that describes the user’s path, to obtain a more robust position estimate. Experimental results with the system implemented on a PDA with limited resources (an HP iPAQ hx2750) show that the proposed tracking system outperforms widely used traditional positioning and tracking systems. Moreover, the tracking system leads to a 12.6 percent reduction in the mean position error compared to the CS-based stationary positioning system when three APs are used. A navigation module that is integrated with the tracking system provides users with instructions to guide them to predefined destinations. Thirty visually impaired subjects from the Canadian National Institute for the Blind (CNIB) were invited to further evaluate the performance of the navigation system. Testing results suggest that the proposed system can be used to guide visually impaired subjects to their desired destinations.
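To make the sparse-recovery step concrete: the online RSS reading is modeled as a sparse combination of the columns of a fingerprint matrix collected at known reference points, so recovering the sparse coefficient vector localizes the user on the fingerprint grid, and a Kalman step then smooths the estimate along the path. The sketch below substitutes a tiny orthogonal matching pursuit for the paper's l1-minimization and a scalar constant-velocity Kalman update for the map-adaptive filter, so all names and numbers are illustrative assumptions.

```python
# Sketch of CS-based WLAN positioning: recover a (near) 1-sparse location
# indicator over reference points (RPs) from one RSS reading, then smooth
# the position with a simple 1D Kalman step along the mapped path.
import numpy as np

def omp(Phi, y, sparsity=1):
    """Tiny orthogonal matching pursuit for y ~ Phi @ theta."""
    Phi_n = Phi / np.linalg.norm(Phi, axis=0)  # unit columns for selection
    residual, support = y.astype(float), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi_n.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    theta = np.zeros(Phi.shape[1])
    theta[support] = coef
    return theta

# Fingerprint map: 4 APs x 5 RPs along a corridor (RSS in dBm, made up).
Phi = np.array([[-40., -50., -60., -70., -80.],
                [-70., -60., -50., -45., -40.],
                [-55., -50., -52., -60., -66.],
                [-80., -72., -61., -52., -45.]])
rp_pos = np.array([0.0, 2.0, 4.0, 6.0, 8.0])  # 1D RP positions (m)

y = Phi[:, 2] + np.random.default_rng(1).normal(0, 1.0, 4)  # noisy reading
z = rp_pos[np.argmax(np.abs(omp(Phi, y)))]  # coarse CS position estimate

# Scalar constant-velocity Kalman step (stand-in for the map-adaptive
# filter, which assumes linear motion between map intersections).
x_prev, v, dt, P_prev, q, r = 3.5, 0.8, 1.0, 1.0, 0.2, 2.0
x_pred, P_pred = x_prev + v * dt, P_prev + q   # predict along the path
K = P_pred / (P_pred + r)                      # Kalman gain
print("CS estimate:", z, "filtered:", round(x_pred + K * (z - x_pred), 2))
```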

[Link]

Tuesday, September 03, 2013

Lab Meeting Sep 5th 2013 (Tom Hsu): Efficient Dense 3D Rigid-Body Motion Segmentation in RGB-D Video

Title: Efficient Dense 3D Rigid-Body Motion Segmentation in RGB-D Video

Authors: Jörg Stückler, Sven Behnke

In: British Machine Vision Conference (BMVC), Bristol, UK, 2013

Abstract:
Motion is a fundamental segmentation cue in video. Many current approaches segment 3D motion in monocular or stereo image sequences, mostly relying on sparse interest points or being dense but computationally demanding. We propose an efficient expectation-maximization (EM) framework for dense 3D segmentation of moving rigid parts in RGB-D video. Our approach segments two images into pixel regions that undergo coherent 3D rigid-body motion. Our formulation treats background and foreground objects equally and makes no assumption about the motion of the camera or the objects beyond rigidity. While our EM formulation is not restricted to a specific image representation, we supplement it with an efficient image representation and registration method for rapid segmentation of RGB-D video. In experiments we demonstrate that our approach recovers segmentation and 3D motion with good precision.
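A toy version of the EM loop conveys the structure: the E-step soft-assigns 3D point correspondences between the two frames to rigid-motion hypotheses according to their residual likelihood, and the M-step re-fits each hypothesis with a weighted Kabsch/SVD alignment. The sketch below runs on sparse correspondences rather than the paper's dense multi-resolution surfel representation, so treat it purely as an illustration of the EM structure (and note that, like any EM method, it can hit local optima; random restarts help).

```python
# Toy EM for rigid-body motion segmentation of 3D correspondences (P -> Q).
import numpy as np

def fit_rigid(P, Q, w):
    """Weighted least-squares rigid transform (R, t) with R @ p + t ~ q."""
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q
    H = (P - mu_p).T @ ((Q - mu_q) * w[:, None])   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # keep a proper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_q - R @ mu_p

def em_segment(P, Q, K=2, iters=20, sigma=0.05):
    rng = np.random.default_rng(0)
    resp = rng.dirichlet(np.ones(K), size=len(P))  # random soft assignment
    for _ in range(iters):
        # M-step: one rigid motion per component from softly weighted points.
        motions = [fit_rigid(P, Q, resp[:, k] + 1e-9) for k in range(K)]
        # E-step: responsibilities from Gaussian residual likelihoods.
        ll = np.stack([-np.sum((Q - (P @ R.T + t)) ** 2, axis=1) / (2 * sigma**2)
                       for R, t in motions], axis=1)
        ll -= ll.max(axis=1, keepdims=True)
        resp = np.exp(ll)
        resp /= resp.sum(axis=1, keepdims=True)
    return resp.argmax(axis=1), motions

# Toy scene: static background plus a small object translated by 0.5 m in x.
rng = np.random.default_rng(42)
bg, obj = rng.uniform(-1, 1, (200, 3)), rng.uniform(-0.2, 0.2, (50, 3))
P = np.vstack([bg, obj])
Q = np.vstack([bg, obj + [0.5, 0.0, 0.0]]) + rng.normal(0, 0.01, P.shape)
labels, _ = em_segment(P, Q)
print("background labels:", np.bincount(labels[:200], minlength=2))
print("object labels:    ", np.bincount(labels[200:], minlength=2))
```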