Yang Gu
Computer Science Department
Carnegie Mellon University
Robots need to track objects. We consider tasks in which robots act on the very target they visually track. Tracking efficiency depends critically on the accuracy of the motion model and of the sensory information, and the motion model of the target becomes particularly complex when multiple agents act on a mobile target. We assume that the tracked object is actuated by a team of agents, composed of robots and possibly humans. Robots know their own actions, and team members collaborate according to coordination plans and communicated information. The thesis shows that using a previously known or learned action model of the single robot or of the team members improves the efficiency of tracking.
First, we introduce and implement a novel team-driven motion tracking approach. Team-driven motion tracking is a tracking paradigm defined by a set of principles for incorporating hierarchical prior knowledge into the construction of the motion model. We illustrate a possible set of behavior levels within the Segway soccer domain that corresponds to this abstract motion-modeling decomposition.
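As a rough sketch only (the level and behavior names below are illustrative, not taken from the thesis), the hierarchical prior knowledge can be pictured as a stack of behavior levels, each of which narrows the set of plausible motion models for the tracked ball:

```python
# Illustrative sketch of a behavior hierarchy; names are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class BehaviorLevel:
    name: str             # e.g. "play", "tactic", "skill"
    behaviors: List[str]  # behaviors available at this level

# A Segway-soccer-style decomposition: the active play constrains the tactics
# a robot may run, and each tactic is built from low-level skills, each of
# which implies a different ball motion model for the tracker.
hierarchy = [
    BehaviorLevel("play",   ["attack", "defend"]),
    BehaviorLevel("tactic", ["get_ball", "pass_ball", "shoot"]),
    BehaviorLevel("skill",  ["approach", "grab", "dribble", "kick"]),
]

for level in hierarchy:
    print(level.name, "->", level.behaviors)
```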
Second, we introduce a principled approach for incorporating models of the robot-object interaction into the tracking algorithm to effectively improve the tracker's performance. We present the integration of a single-robot behavioral model, expressed in terms of skills and tactics with multiple actions, into our dynamic Bayesian probabilistic tracking algorithm.
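To make the idea concrete, here is a minimal particle-filter sketch (not the thesis implementation; the models and parameters are assumptions) in which the prediction step is conditioned on the robot's own known action, such as kicking the ball:

```python
# Minimal sketch: action-conditioned particle propagation.
import numpy as np

def free_ball_model(x, dt):
    """Ball rolls freely with friction; state is [px, py, vx, vy]."""
    px, py, vx, vy = x
    friction = 0.95
    return np.array([px + vx * dt, py + vy * dt, vx * friction, vy * friction])

def kick_model(x, dt, kick_speed=2.0):
    """Hypothetical actuation model: a kick drives the ball forward."""
    px, py, vx, vy = x
    return np.array([px + kick_speed * dt, py, kick_speed, 0.0])

ACTION_MODELS = {"none": free_ball_model, "kick": kick_model}

def propagate(particles, action, dt, noise_std=0.05, rng=np.random):
    """Move every particle through the motion model implied by the action."""
    model = ACTION_MODELS.get(action, free_ball_model)
    moved = np.array([model(p, dt) for p in particles])
    return moved + rng.normal(0.0, noise_std, moved.shape)

# The robot knows it just executed a kick, so the tracker predicts ball
# motion with the kick actuation model instead of the free-ball model.
particles = np.random.normal([0.0, 0.0, 0.0, 0.0], 0.1, size=(200, 4))
particles = propagate(particles, action="kick", dt=0.1)
```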
Third, we extend the approach to multiple motion models corresponding to known multi-robot coordination plans or derived from multi-robot communication. We evaluate the resulting informed tracking approach empirically, both in simulation and in a controlled Segway soccer task. The input from these single-robot and multi-robot behavioral sources allows a robot to track mobile targets with dynamic trajectories far more effectively.
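One way to picture the multi-model case (again a hedged sketch under assumed models, not the thesis algorithm) is to let each particle sample its motion model from a plan- or communication-informed prior over the teammate's possible behaviors:

```python
# Sketch: plan-informed mixture of motion models; parameters are illustrative.
import numpy as np

def pass_model(x, dt):
    """Teammate passes the ball toward a (hypothetical) target point."""
    target = np.array([3.0, 1.0])
    direction = target - x[:2]
    velocity = 1.5 * direction / (np.linalg.norm(direction) + 1e-9)
    return np.concatenate([x[:2] + velocity * dt, velocity])

def hold_model(x, dt):
    """Teammate keeps possession: the ball stays near its current position."""
    return np.array([x[0], x[1], 0.0, 0.0])

def propagate_multi_model(particles, models, plan_prior, dt, rng=np.random):
    """Propagate each particle with a model drawn from the plan prior."""
    choices = rng.choice(len(models), size=len(particles), p=plan_prior)
    moved = np.array([models[c](p, dt) for c, p in zip(choices, particles)])
    return moved + rng.normal(0.0, 0.05, moved.shape)

# Communication suggests the teammate is probably passing (0.7) rather than
# holding (0.3), so most particles follow the pass motion model.
particles = np.random.normal([0.0, 0.0, 0.0, 0.0], 0.1, size=(200, 4))
particles = propagate_multi_model(particles, [pass_model, hold_model],
                                  plan_prior=[0.7, 0.3], dt=0.1)
```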
Fourth, we present a parameter learning algorithm for learning actuation models. We describe the parametric system model and the parameters of the actuation model that need to be learned. Following the KLD-sampling algorithm as applied to tracking, we adapt the number of modeling particles while learning the unknown parameters. This reduces the computation time of both learning and state estimation by using significantly fewer particles on average. We show the effectiveness of the learning in simulated experiments: the tracker that uses the learned actuation model achieves improved tracking performance.
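For reference, the KLD-sampling bound (Fox, 2003) that drives this kind of particle-count adaptation can be computed as below; this sketch uses SciPy and omits the thesis-specific parameter-learning updates:

```python
# Sketch of the KLD-sampling particle-count bound.
import numpy as np
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Particles needed so that, with probability 1 - delta, the KL divergence
    between the particle estimate and the true posterior stays below epsilon,
    given k histogram bins with support."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)          # upper (1 - delta) quantile of N(0, 1)
    a = 2.0 / (9.0 * (k - 1))
    return int(np.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z) ** 3))

# The more concentrated the belief over the actuation parameters (fewer
# occupied bins), the fewer particles are required.
print(kld_sample_size(k=5))     # a focused belief needs few particles
print(kld_sample_size(k=100))   # a diffuse belief needs many more
```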
These contributions demonstrate that an agent's object tracking ability can be effectively improved using tactics, plays, communication, and learned action models in the presence of multiple agents acting on a mobile object. The introduced tracking algorithms are shown to be effective in a number of simulated experiments and controlled Segway robot soccer tasks. The team-driven motion tracking framework is demonstrated empirically across a wide range of settings of increasing complexity.