Robot Perception and Learning
This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Wednesday, January 21, 2015
Lab Meeting Jan 22nd, 2015 (ChihChung): Image matching under large viewpoint changes and occlusions
Previously, I proposed a two-point homography RANSAC approach to handle large viewpoint changes. In this lab meeting, I will demonstrate the refined approach, which removes the motion constraints of the previous version and shows more promising matching performance. The algorithm is tested on matching hand-held camera images against Google Street View images.
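The post does not spell out the two-point formulation, but as background, a standard four-point homography RANSAC over putative matches can be sketched as follows (a minimal NumPy illustration; the function names, iteration count, and threshold are my own choices, not from the talk):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point pairs via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=3.0, rng=None):
    """Classic 4-point RANSAC over putative matches src[i] <-> dst[i]."""
    rng = rng or np.random.default_rng(0)
    n = len(src)
    best_H, best_inliers = None, np.zeros(n, dtype=bool)
    src_h = np.hstack([src, np.ones((n, 1))])  # homogeneous coordinates
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

The appeal of a two-point variant is combinatorial: fewer points per sample means far fewer iterations are needed at a given outlier rate, at the cost of a constrained motion model.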
Wednesday, November 19, 2014
Lab Meeting November 20th, 2014 (ChihChung): Worldwide Pose Estimation using 3D Point Clouds
Abstract:
We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.
Link
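The "bidirectional matching of image features with 3D points" mentioned in the abstract is, at its core, mutual nearest-neighbor filtering between image descriptors and descriptors attached to 3D points. Here is a minimal sketch (my own simplification, not the authors' code; real systems use approximate nearest-neighbor search rather than dense distance matrices):

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Keep only matches where a's nearest neighbor in b points back to a.

    desc_a: (n, d) image feature descriptors
    desc_b: (m, d) descriptors attached to 3D points
    Returns a list of (i, j) index pairs.
    """
    # Pairwise squared Euclidean distances (fine for a toy-sized problem).
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    a_to_b = d2.argmin(axis=1)   # best 3D point for each image feature
    b_to_a = d2.argmin(axis=0)   # best image feature for each 3D point
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

Requiring the match to be mutual discards many of the ambiguous correspondences that would otherwise overwhelm RANSAC at city scale.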
Thursday, November 06, 2014
Lab Meeting November 7th, 2014 (Jeff): Multiple Target Tracking using Recursive RANSAC
Title: Multiple Target Tracking using Recursive RANSAC
Authors: Peter C. Niedfeldt and Randal W. Beard
Abstract:
Estimating the states of multiple dynamic targets is difficult due to noisy and spurious measurements, missed detections, and the interaction between multiple maneuvering targets. In this paper a novel algorithm, which we call the recursive random sample consensus (R-RANSAC) algorithm, is presented to robustly estimate the states of an unknown number of dynamic targets. R-RANSAC was previously developed to estimate the parameters of multiple static signals when measurements are received sequentially in time. The R-RANSAC algorithm proposed in this paper reformulates our previous work to track dynamic targets using a Kalman filter. Simulation results using synthetic data are included to compare R-RANSAC to the GM-PHD filter.
American Control Conference (ACC), 2014
Link: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6859273&tag=1
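As a rough illustration of the R-RANSAC idea, the sketch below maintains a bank of Kalman filters and lets each new measurement either update the nearest gating track or spawn a new hypothesis. This is a drastically simplified 1D caricature (gating by raw residual instead of the paper's inlier-set machinery, and with no track pruning), written only to convey the structure:

```python
import numpy as np

class Track:
    """Constant-velocity Kalman filter for one hypothesized 1D target."""
    def __init__(self, z, dt=1.0):
        self.x = np.array([z, 0.0])            # state: [position, velocity]
        self.P = np.diag([1.0, 1.0])
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])
        self.Q = 0.01 * np.eye(2)              # process noise (illustrative)
        self.R = np.array([[0.5]])             # measurement noise (illustrative)
        self.hits = 1                          # measurements absorbed so far

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        self.hits += 1

def r_ransac_step(tracks, measurements, gate=3.0):
    """One scan: gate each measurement to a track or spawn a new hypothesis."""
    for t in tracks:
        t.predict()
    for z in measurements:
        residuals = [abs(z - t.x[0]) for t in tracks]
        if residuals and min(residuals) < gate:
            tracks[int(np.argmin(residuals))].update(z)
        else:
            tracks.append(Track(z))   # RANSAC-style new hypothesis
    return tracks
```

Tracks that keep accumulating hits behave like RANSAC consensus sets: well-supported hypotheses survive, while spurious ones simply stop being updated.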
Monday, October 20, 2014
Lab Meeting, October 23, 2014, Jim
I will present my previous work on imitation learning, describing what we have done and what we have learned. I will then present the proposed idea for resolving the remaining issues in that work.
Wednesday, October 15, 2014
Lab Meeting, October 16, 2014 (Channing): Modeling and Learning Synergy for Team Formation with Heterogeneous Agents
Title:
Modeling and Learning Synergy for Team Formation with Heterogeneous Agents
AAMAS '12 Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1 Pages 365-374
Authors:
Somchaya Liemhetcharat and Manuela Veloso
Abstract:
The performance of a team at a task depends critically on the composition of its members. There is a notion of synergy in human teams that represents how well teams work together, and we are interested in modeling synergy in multi-agent teams. We focus on the problem of team formation, i.e., selecting a subset of a group of agents in order to perform a task, where each agent has its own capabilities, and the performance of a team of agents depends on the individual agent capabilities as well as the synergistic effects among the agents. We formally define synergy and how it can be computed using a synergy graph, where the distance between two agents in the graph correlates with how well they work together. We contribute a learning algorithm that learns a synergy graph from observations of the performance of subsets of the agents, and show that our learning algorithm is capable of learning good synergy graphs without prior knowledge of the interactions of the agents or their capabilities. We also contribute an algorithm to solve the team formation problem using the learned synergy graph, and experimentally show that the team formed by our algorithm outperforms a competing algorithm.
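To make the synergy-graph idea concrete, here is a toy sketch in which pairwise synergy is the agents' combined capability discounted by their graph distance, and team formation is brute-force subset search. The scoring function is my own illustrative choice, not the paper's exact model (the paper learns the graph and uses richer capability distributions):

```python
from collections import deque
from itertools import combinations

def shortest_path_len(adj, a, b):
    """BFS hop count between agents a and b in the synergy graph."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def team_value(adj, capability, team):
    """Mean pairwise synergy: closer agents combine capabilities more fully."""
    pairs = list(combinations(team, 2))
    total = sum((capability[a] + capability[b]) / shortest_path_len(adj, a, b)
                for a, b in pairs)
    return total / len(pairs)

def best_team(adj, capability, size):
    """Brute-force team formation: score every size-k subset of the agents."""
    return max(combinations(capability, size),
               key=lambda team: team_value(adj, capability, team))
```

Even this caricature exhibits the paper's central observation: the highest-capability individuals are not necessarily the best team if they sit far apart in the synergy graph.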
Wednesday, October 01, 2014
Lab Meeting, October 2, 2014 (Yun-Jun Shen): Multi-modal and Multi-spectral Registration for Natural Images
Title: Multi-modal and Multi-spectral Registration for Natural Images
Authors: Xiaoyong Shen, Li Xu, Qi Zhang, and Jiaya Jia
Abstract:
Images now come in different forms – color, near-infrared, depth, etc. – due to the development of special and powerful cameras in computer vision and computational photography. Their cross-modal correspondence establishment is however left behind. We address this challenging dense matching problem considering structure variation possibly existing in these image sets and introduce new model and solution. Our main contribution includes designing the descriptor named robust selective normalized cross correlation (RSNCC) to establish dense pixel correspondence in input images and proposing its mathematical parameterization to make optimization tractable. A computationally robust framework including global and local matching phases is also established. We build a multi-modal dataset including natural images with labeled sparse correspondence. Our method will benefit image and vision applications that require accurate image alignment.
In: Computer Vision–ECCV 2014
Link: http://www.cse.cuhk.edu.hk/leojia/projects/multimodal/papers/multispectral_registration.pdf
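RSNCC builds on normalized cross correlation, whose invariance to affine intensity changes is what makes cross-modal patch comparison possible at all. A minimal NCC sketch is below (the robust-selection part that gives RSNCC its name is omitted; this is only the underlying building block):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross correlation of two equally sized patches.

    Subtracting the mean and dividing by the norm makes the score
    invariant to affine intensity changes (gain and bias), which is
    what lets NCC-style descriptors compare patches across modalities
    such as RGB vs. near-infrared.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

The score lies in [-1, 1]; structure variation between modalities can flip its sign locally, which is the failure mode the paper's robust selection is designed to absorb.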
Tuesday, September 23, 2014
Lab Meeting September 25th, 2014 (Bang-Cheng Wang): Strategies for Adjusting the ZMP Reference Trajectory for Maintaining Balance in Humanoid Walking
Title: Strategies for Adjusting the ZMP Reference Trajectory for Maintaining Balance in Humanoid Walking
Authors: Koichi Nishiwaki and Satoshi Kagami
Abstract:
The present paper addresses strategies of changing the reference trajectories of the future ZMP that are used for online repetitive walking pattern generation. Walking pattern generation operates with a cycle of 20 [ms], and the reference ZMP trajectory is adjusted according to the current actual motion status in order to maintain the current balance. Three different strategies are considered for adjusting the ZMP. The first strategy is to change the reference ZMP inside the sole area. The second strategy is to change the position of the next step, and the third strategy is to change the duration of the current step. The manner in which these changes affect the current balance and how to combine the three strategies are discussed. The proposed methods are implemented as part of an online walking control system with short cycle pattern generation and are evaluated using the HRP-2 full-sized humanoid robot.
2010 IEEE International Conference on Robotics and Automation
Link: http://ieeexplore.ieee.org/xpl/abstractMultimedia.jsp?arnumber=5510002
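For readers unfamiliar with the ZMP: under the linear inverted pendulum model it can be computed from the center-of-mass trajectory as p = x - (z_c / g) * x_ddot, and the paper's first strategy amounts to asking whether this point can be kept inside the sole area. A toy sketch (parameter values are illustrative, not HRP-2's):

```python
import numpy as np

def zmp_from_com(com, z_c=0.8, g=9.81, dt=0.02):
    """ZMP of a linear inverted pendulum: p = x - (z_c / g) * x_ddot.

    com: 1D array of CoM positions sampled every dt
         (20 ms, matching the pattern-generation cycle in the paper).
    """
    acc = np.gradient(np.gradient(com, dt), dt)  # numerical x_ddot
    return com - (z_c / g) * acc

def inside_support(zmp, lo, hi):
    """First strategy's feasibility check: ZMP stays inside the sole area."""
    return bool(np.all((zmp >= lo) & (zmp <= hi)))
```

When this check fails, the remaining two strategies, moving the next footstep or stretching the current step's duration, change the support region or the timing instead of the reference ZMP itself.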