This blog is maintained by the Robot Perception and Learning lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Monday, October 20, 2014
Lab Meeting, October 23, 2014, Jim
I will present my previous work on imitation learning, describing what we have done and learned. I will then present a proposed approach for addressing the remaining issues in that work.
Wednesday, October 15, 2014
Lab Meeting, October 16, 2014 (Channing): Modeling and Learning Synergy for Team Formation with Heterogeneous Agents
Title:
Modeling and Learning Synergy for Team Formation with Heterogeneous Agents
AAMAS '12 Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1 Pages 365-374
Authors:
Somchaya Liemhetcharat and Manuela Veloso
Abstract:
The performance of a team at a task depends critically on the composition of its members. There is a notion of synergy in human teams that represents how well teams work together, and we are interested in modeling synergy in multi-agent teams. We focus on the problem of team formation, i.e., selecting a subset of a group of agents in order to perform a task, where each agent has its own capabilities, and the performance of a team of agents depends on the individual agent capabilities as well as the synergistic effects among the agents. We formally define synergy and how it can be computed using a synergy graph, where the distance between two agents in the graph correlates with how well they work together. We contribute a learning algorithm that learns a synergy graph from observations of the performance of subsets of the agents, and show that our learning algorithm is capable of learning good synergy graphs without prior knowledge of the interactions of the agents or their capabilities. We also contribute an algorithm to solve the team formation problem using the learned synergy graph, and experimentally show that the team formed by our algorithm outperforms a competing algorithm.
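To make the synergy-graph idea concrete, here is a minimal sketch (not the authors' implementation): agents are nodes, pairwise synergy decays with graph distance, and team formation picks the subset with the highest expected value. The 1/distance weighting, scalar capabilities (the paper uses Gaussian capability models), and all names below are illustrative assumptions.

from collections import deque
from itertools import combinations


def shortest_distance(graph, a, b):
    """BFS distance between agents a and b in an unweighted synergy graph."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in graph[node]:
            if nbr == b:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return float("inf")


def team_value(graph, capability, team):
    """Pairwise capabilities discounted by graph distance (illustrative 1/d weighting)."""
    pairs = list(combinations(team, 2))
    return sum((capability[a] + capability[b]) / shortest_distance(graph, a, b)
               for a, b in pairs) / max(len(pairs), 1)


def form_team(graph, capability, size):
    """Brute-force team formation: return the subset of agents with the highest value."""
    return max(combinations(graph, size), key=lambda t: team_value(graph, capability, t))


# Toy example: A and B are adjacent (high synergy), D is far from A and B.
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
capability = {"A": 1.0, "B": 0.8, "C": 0.6, "D": 0.9}
print(form_team(graph, capability, size=2))  # ('A', 'B'): close in the graph, strong pair

The paper's contribution is learning the graph structure from observed team performances rather than assuming it; the brute-force search above is only workable for small agent sets, which is why the authors propose a dedicated team-formation algorithm.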
Wednesday, October 01, 2014
Lab Meeting, October 2, 2014 (Yun-Jun Shen): Multi-modal and Multi-spectral Registration for Natural Images
Title: Multi-modal and Multi-spectral Registration for Natural Images
Authors: Xiaoyong Shen, Li Xu, Qi Zhang, and Jiaya Jia
Abstract:
Images now come in different forms – color, near-infrared, depth, etc. – due to the development of special and powerful cameras in computer vision and computational photography. Their cross-modal correspondence establishment is however left behind. We address this challenging dense matching problem considering structure variation possibly existing in these image sets and introduce new model and solution. Our main contribution includes designing the descriptor named robust selective normalized cross correlation (RSNCC) to establish dense pixel correspondence in input images and proposing its mathematical parameterization to make optimization tractable. A computationally robust framework including global and local matching phases is also established. We build a multi-modal dataset including natural images with labeled sparse correspondence. Our method will benefit image and vision applications that require accurate image alignment.
In: Computer Vision–ECCV 2014
Link: http://www.cse.cuhk.edu.hk/leojia/projects/multimodal/papers/multispectral_registration.pdf
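For intuition, the sketch below shows the plain normalized cross correlation (NCC) that the paper's RSNCC descriptor builds on; it is not the robust selective variant itself, and the patch data and function names are illustrative. NCC's invariance to affine intensity changes is what makes it attractive for matching across modalities.

import numpy as np


def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross correlation between two equally sized patches, in [-1, 1]."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))


# Toy example: an 8x8 gradient patch still matches under a brightness/contrast
# change, and anti-correlates with its inverted version.
patch = np.tile(np.arange(8), (8, 1))
print(ncc(patch, 2.0 * patch + 30))  # ~1.0: invariant to affine intensity changes
print(ncc(patch, -patch))            # ~-1.0: structure reversed

RSNCC extends this measure to tolerate structure variation and outliers across modalities and is embedded in the paper's global-plus-local matching framework.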