Sunday, September 09, 2012

Lab meeting Sep 12th 2012 (Chih-Chung): Global motion planning under uncertain motion, sensing, and environment map

[LINK]

Presented by Chih-Chung

From Autonomous Robots, Volume 33, No. 3, 2012, pp. 255-272

Authors: Hanna Kurniawati · Tirthankar Bandyopadhyay · Nicholas M. Patrikalakis

Abstract:
Uncertainty in motion planning is often caused by three main sources: motion error, sensing error, and an imperfect environment map. Despite the significant effect of all three sources of uncertainty on motion planning problems, most planners take into account only one, or at most two, of them. We propose a new motion planner, called Guided Cluster Sampling (GCS), that takes into account all three sources of uncertainty for robots with active sensing capabilities. GCS uses the Partially Observable Markov Decision Process (POMDP) framework and the point-based POMDP approach. Although point-based POMDPs have shown impressive progress over the past few years, they perform poorly when the environment map is imperfect. This poor performance is due to the extremely high-dimensional state space, which translates into an extremely large belief space B.

We alleviate this problem by constructing a more suitable sampling distribution, based on the observations that when the robot has active sensing capability, B can be partitioned into a collection of much smaller sub-spaces, and an optimal policy can often be generated by sufficient sampling of a small subset of the collection. Utilizing these observations, GCS samples B in two stages: a sub-space is sampled from the collection, and then a belief is sampled from that sub-space.

It uses information from the set of sampled sub-spaces and sampled beliefs to guide subsequent sampling. Simulation results on marine robotics scenarios suggest that GCS can generate reasonable policies for motion planning problems with uncertain motion, sensing, and environment map that are unsolvable by the best point-based POMDPs today. Furthermore, GCS handles POMDPs with continuous state, action, and observation spaces. We show that for a class of POMDPs that often occur in robot motion planning, given enough time, GCS converges to the optimal policy.

To the best of our knowledge, this is the first convergence result for point-based POMDPs with continuous action spaces.
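
A rough sketch of the two-stage sampling idea described in the abstract, in Python. All of the names here (BeliefSubspace, two_stage_sample, update_guidance) and the simple weight-update rule are my own illustrative assumptions, not the authors' GCS implementation; the paper's actual sampler and guidance heuristics are considerably more involved.

import random

class BeliefSubspace:
    """One element of a partition of the belief space B
    (e.g. the beliefs consistent with one map hypothesis)."""
    def __init__(self, name, sample_belief):
        self.name = name
        self.sample_belief = sample_belief  # callable returning a belief from this sub-space
        self.weight = 1.0                   # sampling weight, updated from observed improvement

def two_stage_sample(subspaces):
    """Stage 1: pick a sub-space with probability proportional to its weight.
    Stage 2: sample a belief from the chosen sub-space."""
    total = sum(s.weight for s in subspaces)
    r, acc = random.uniform(0.0, total), 0.0
    for s in subspaces:
        acc += s.weight
        if r <= acc:
            return s, s.sample_belief()
    return subspaces[-1], subspaces[-1].sample_belief()

def update_guidance(subspace, value_improvement):
    """Use information from already-sampled beliefs to guide subsequent sampling:
    sub-spaces whose samples improved the policy get picked more often."""
    subspace.weight += max(0.0, value_improvement)

# Hypothetical usage: two map hypotheses, each with its own belief sampler.
subspaces = [
    BeliefSubspace("map-hypothesis-A", lambda: {"x": random.gauss(0, 1)}),
    BeliefSubspace("map-hypothesis-B", lambda: {"x": random.gauss(5, 1)}),
]
chosen, belief = two_stage_sample(subspaces)
update_guidance(chosen, value_improvement=0.3)

The point the abstract emphasizes is that sampling over a small, well-chosen collection of sub-spaces keeps the effective belief space manageable, even though B itself is enormous when the environment map is uncertain.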
