Dear all,
I'm Tai-Liang Chen (Nelson). It's so nice that I can join this great team. :)
Since we will be partners and work together in the future, how about having a party this weekend?
I suggest we have dinner at a good restaurant near NTU.
BTW, my MSN is tailionchen@hotmail.com, and my cell phone is 0968-019382.
I hope everyone will leave their MSN or cell number here; it's a good way to get to know each other.
This blog is maintained by the Robot Perception and Learning lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers that are capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Tuesday, May 31, 2005
Paper: SLAM
Ryan Eustice, Hanumant Singh, and John Leonard
Exactly Sparse Delayed-State Filters
ICRA 2005
Best student paper award
The paper is available here.
Theory: Interaction modelling
Dr. Nihat Ay has published several papers on mathematical frameworks for stochastic interaction. Some of them are available here.
Paper: Localization using laptop, PDA, or cell-phone
J. Letchner, D. Fox, and A. LaMarca.
Large-Scale Localization from Wireless Signal Strength
AAAI-05
The paper is available here.
Paper: Activity Recognition
L. Liao, D. Fox, and H. Kautz.
Location-Based Activity Recognition using Relational Markov Networks
IJCAI-05
The paper is available here.
Paper: Relational Object Maps
B. Limketkai, L. Liao, and D. Fox.
Relational Object Maps for Mobile Robots
IJCAI-05
The paper is available here.
Monday, May 30, 2005
Hi, I am Vincent!
Hello everybody.
I'm Chun-Wei(Vincent) Chen.
I'm responsible for the object recognition part of our lab.
Glad to know you guys.
Wish we could have a happy lab life.
Vincent
Hi, I'm Shao-Wen.
Hi all,
I'm Shao-Wen. I'm very interested in robotics
and glad to work together with you guys.
'Any' is the name I usually use on the internet.
Hi! I am Eric.
Dear all,
My name is Pei-Han Lee(李沛翰). I'm happy I can work with everybody in this lab.
By the way, hope we can have Sony AIBO in the lab. Ha~ Ha~
Collaboration
Jim asked:
Are we going to collaborate on our works soon, or after we each have quite completed our own parts? Are we going to have one big project, or several? Do you have specific requirements in mind, or will we discuss it together?
My response:
You will need a platform (a robot) to implement and verify your ideas and algorithms. We will work together on creating our robots, which will be our first big project. In the early stage of this project, we only need to establish basic abilities such as data collection, remote control, and emergency stop. The members can then use the robots to collect their datasets and run experiments. Once your algorithms are mature enough, we will integrate them into the robots to automate them. Of course we will discuss this together.
Regarding your research/thesis topic, you have to define your own problem and find a way to solve it. I will discuss this issue with you individually. In principle, you will work on your own topic while also working with the other members as a team.
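To make the early-stage plan concrete, here is a minimal sketch of what those basic platform abilities (remote control, emergency stop, data collection) might look like as a software interface. All names here are hypothetical; the real interface will depend on the hardware we build.

```python
from dataclasses import dataclass, field


@dataclass
class RobotPlatform:
    """Hypothetical minimal platform interface for the early stage:
    remote control, latching emergency stop, and data logging."""
    estopped: bool = False
    log: list = field(default_factory=list)

    def drive(self, linear: float, angular: float) -> bool:
        """Remote-control command; refused once the e-stop has fired."""
        if self.estopped:
            return False
        self.log.append(("cmd", linear, angular))
        return True

    def emergency_stop(self) -> None:
        """Latching e-stop: blocks all motion until explicitly reset."""
        self.estopped = True
        self.log.append(("estop",))

    def record(self, sensor: str, reading) -> None:
        """Data collection: append a sensor reading to the on-board log."""
        self.log.append(("data", sensor, reading))


# Usage: a member drives the robot and collects data until the e-stop fires.
robot = RobotPlatform()
robot.drive(0.5, 0.0)
robot.record("laser", [1.2, 1.3, 1.1])
robot.emergency_stop()
assert robot.drive(0.5, 0.0) is False  # e-stop overrides remote control
```

The point of the sketch is the priority ordering: the e-stop must override everything else, and data logging should work even while the robot is stopped.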
Robotics for Safe Driving
The web site, http://www.ivsource.net/, provides the latest news about intelligent vehicle technology. You can sign up for their free mailing list to get notices about new content.
Indoor platform
Shao-Wen is surveying the electric wheelchairs available in Taiwan. He suggested that we could select an electric scooter, which is more like a bike, with a built-in headlight, turn signals, and a rotatable seat. It might be helpful in system design. He will show us more details later.
Your suggestions are welcome.
I'm Jim..
Hi all~
I'm Jim (林大慶) of NTU-CSIE.
I'm interested in intelligent vehicles & automatic mapping.
btw, this blog renders fine in my Firefox, but not in my Konqueror browser (long lines don't wrap).
The See-Think-Act Loop
Perception, cognition, and action are the three fundamental components of robotics. A robot needs to see the scene, think and reason about it, and then decide on an action to achieve its goal. Agents (robots and human beings) run this see-think-act loop iteratively.
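The loop can be sketched in a few lines of Python. The `see`, `think`, and `act` functions below are hypothetical stand-ins for whatever perception, reasoning, and control modules a real robot would use; here the "world" is just a position on a line.

```python
def see(world):
    """Perception: observe the scene (here, the signed distance to the goal)."""
    return world["goal"] - world["position"]


def think(observation):
    """Cognition: reason about the observation and decide on an action."""
    if observation > 0:
        return +1  # step toward the goal
    if observation < 0:
        return -1  # step back toward the goal
    return 0       # goal reached: stop


def act(world, action):
    """Action: apply the chosen action to the world."""
    world["position"] += action


# Run the see-think-act loop iteratively until the goal is achieved.
world = {"position": 0, "goal": 3}
for _ in range(10):
    action = think(see(world))
    if action == 0:
        break
    act(world, action)

assert world["position"] == world["goal"]
```

The structure, not the toy task, is the point: each iteration closes the loop from sensing through reasoning to acting, and the agent's behavior emerges from repeating it.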