Takeshi Takahashi
Master's thesis
Robotics Institute, Carnegie Mellon University,
May, 2007
Abstract
Robot localization in outdoor environments is a challenging problem because of unstructured terrain. Ladars that are not mounted horizontally are useful for detecting obstacles, but they are not suitable for localization algorithms designed for indoor robots with horizontally fixed ladars: the data obtained from tilted ladars are 3D, while those from non-tilted ladars are 2D. We present a 2D localization approach for these non-horizontally mounted ladars. The algorithm combines 2D particle filter localization with a 3D perception system. We localize the vehicle by comparing a local map against a previously known map, where both maps are created by converting the 3D data into 2D. Experimental results show that our approach is able to exploit the benefits of 3D data and 2D maps to efficiently overcome the problems of outdoor environments.
See the complete thesis.
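For readers unfamiliar with the approach, below is a minimal sketch of a single 2D particle filter localization cycle in the spirit of the abstract: a local map derived from the 3D ladar data is compared against the previously known map to weight each pose hypothesis. All array shapes, parameter names, and the simple mean-occupancy matching score are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def particle_filter_update(particles, weights, odometry, local_points, known_map,
                           motion_noise=(0.05, 0.05, 0.02)):
    """One predict/update cycle of 2D particle filter localization (a sketch).

    particles:    (N, 3) array of [x, y, theta] pose hypotheses
    odometry:     [dx, dy, dtheta] motion since the last update
    local_points: (M, 2) occupied points of the local 2D map, in the robot frame,
                  assumed to come from projecting the 3D ladar data to 2D
    known_map:    callable (x, y) -> occupancy in [0, 1] for the prior 2D map
    """
    n = len(particles)

    # Predict: apply odometry plus Gaussian noise to every particle.
    particles = particles + np.asarray(odometry) + np.random.randn(n, 3) * motion_noise

    # Update: weight each particle by how well the local map, transformed into
    # the world frame at the particle's pose, agrees with the known map.
    for i, (x, y, theta) in enumerate(particles):
        c, s = np.cos(theta), np.sin(theta)
        world = local_points @ np.array([[c, s], [-s, c]]) + [x, y]
        weights[i] *= np.mean([known_map(px, py) for px, py in world]) + 1e-9
    weights /= weights.sum()

    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```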
This blog is maintained by the Robot Perception and Learning lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers that are capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Monday, December 31, 2007
Sunday, December 30, 2007
Georgia Tech PhD thesis: Acoustical Awareness for Intelligent Robotic Action
Eric Martinson,
Acoustical Awareness for Intelligent Robotic Action,
PhD Thesis, College of Computing,
Georgia Institute of Technology,
Nov. 2007
With the growth of successes in pattern recognition and signal processing, mobile robot applications today are increasingly equipping their hardware with microphones to improve the set of available sensory information. However, if the robot, and therefore the microphone, ends up in a poor location acoustically, then the data will remain noisy and potentially useless for accomplishing the required task. This is compounded by the fact that there are many bad acoustic locations through which a robot is likely to pass, and so the results from auditory sensors often remain poor for much of the task.
The movement of the robot, though, can also be an important tool for overcoming these problems, a tool that has not been exploited in the traditional signal processing community. Robots are not limited to a single location as are traditionally placed microphones, nor are they powerless over where they will be moved, as are wearable computers. If a better location is available for performing its task, a robot can navigate to that location under its own power. Furthermore, when deciding where to move, robots can develop complex models of the environment. Using an array of sensors, a mobile robot can build models of sound flow through an area, picking from those models the paths most likely to improve performance of an acoustic application.
In this dissertation, we address the question of how to exploit robotic movement. Using common sensors, we present a collection of tools for gathering information about the auditory scene and incorporating that information into a general framework for acoustical awareness. Thus equipped, robots can make intelligent decisions regarding control strategies to enhance their performance on the underlying acoustic application.
The full thesis.
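As a rough illustration of the idea of moving to an acoustically better location, here is a minimal sketch that picks the reachable cell of a sampled noise map with the best expected signal-to-noise ratio under a simple free-field falloff assumption. The grid representation, names, and the SNR criterion are assumptions for illustration, not the models developed in the thesis.

```python
import numpy as np

def best_listening_cell(noise_map, source_pos, reachable, ref_level=1.0):
    """Pick the reachable grid cell with the highest expected SNR.

    noise_map:  2D array of ambient noise power estimates, one per cell,
                e.g. built from microphone samples taken while driving around.
    source_pos: (row, col) of the sound source of interest.
    reachable:  boolean mask of cells the robot can actually navigate to.
    """
    rows, cols = np.indices(noise_map.shape)
    # Signal power falls off with squared distance from the source
    # (free-field assumption; a real sound-flow model would be richer).
    d2 = (rows - source_pos[0]) ** 2 + (cols - source_pos[1]) ** 2 + 1.0
    signal = ref_level / d2
    snr = 10.0 * np.log10(signal / (noise_map + 1e-12))
    snr[~reachable] = -np.inf
    return np.unravel_index(np.argmax(snr), snr.shape)
```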
Friday, December 28, 2007
Door safety system
Because the computer that contains the database of our card data was reinstalled, I have to rebuild the database (I will make more backups this time). Please contact me if your card record is not in the new database. (If you are not sure whether your record is in the database, just try it out on the door safety system.)
Sorry for the inconvenience.
Wednesday, December 19, 2007
Team MIT Urban Challenge
This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprised of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control.
These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kinodynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios.
LINK
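The lane-following result mentions pure-pursuit control; for reference, the standard pure-pursuit steering computation for a bicycle model looks roughly like the sketch below (this is the textbook form, not Team MIT's implementation).

```python
import math

def pure_pursuit_steering(goal_x, goal_y, lookahead, wheelbase):
    """Standard pure-pursuit steering toward a lookahead point.

    goal_x, goal_y: lookahead point on the desired path, in the vehicle frame
                    (x forward, y left), at distance `lookahead` from the rear axle.
    Returns the front-wheel steering angle for a bicycle model with the
    given wheelbase.
    """
    # Curvature of the arc through the vehicle and the lookahead point.
    curvature = 2.0 * goal_y / (lookahead ** 2)
    return math.atan(wheelbase * curvature)
```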
Tuesday, December 18, 2007
Lab Meeting December 18th, 2007 (Leo): Augmented State Tracking
I will present some simulation results of augmented state tracking.
Monday, December 17, 2007
Lab meeting December 18th: The Autonomous City Explorer Project: Aims and System Overview
Abstract: As robots are gradually leaving highly structured factory environments and moving into human populated environments, they need to possess more complex cognitive abilities. Not only do they have to operate efficiently and safely in natural populated environments, but also be able to achieve higher levels of cooperation and interaction with humans. The Autonomous City Explorer (ACE) project envisions to create a robot that will autonomously navigate in an unstructured urban environment and find its way through interaction with humans. To achieve this, research results from the fields of autonomous navigation, path planning, environment modeling, and human-robot interaction are combined. In this paper a novel hardware platform is introduced, a system overview is given, the research foci of ACE are highlighted, approaches to the occurring challenges are proposed and analyzed, and finally some first results are presented.
[link]http://www.csie.ntu.edu.tw/~b91501097/04399411.pdf
Lab Meeting December 18th, 2007 (ZhenYu): Spherical Catadioptric Arrays: Construction, Multi-View Geometry, and Calibration
Title: Spherical Catadioptric Arrays: Construction, Multi-View Geometry, and Calibration
Authors: Douglas Lanman, Daniel Crispell, Megan Wachs, Gabriel Taubin
Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)
Abstract:
This paper introduces a novel imaging system composed of an array of spherical mirrors and a single high-resolution digital camera. We describe the mechanical design and construction of a prototype, analyze the geometry of image formation, present a tailored calibration algorithm, and discuss the effect that design decisions had on the calibration routine. This system is presented as a unique platform for the development of efficient multi-view imaging algorithms which exploit the combined properties of camera arrays and non-central projection catadioptric systems. Initial target applications include data acquisition for image-based rendering and 3D scene reconstruction. The main advantages of the proposed system include: a relatively simple calibration procedure, a wide field of view, and a single imaging sensor which eliminates the need for color calibration and guarantees time synchronization.
[Link]
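The key geometric ingredient of such a catadioptric system is tracing a camera ray off a mirror sphere, which yields the non-central projection the abstract refers to. Below is a minimal sketch of that reflection step, assuming a pinhole camera at the origin; it is generic geometry, not the paper's calibration routine.

```python
import numpy as np

def reflect_off_sphere(ray_dir, center, radius):
    """Reflect a camera viewing ray off a mirror sphere.

    ray_dir: direction of the viewing ray from a pinhole camera at the origin.
    center, radius: mirror sphere parameters in the camera frame.
    Returns (point, direction): where the ray hits the sphere and the
    direction of the reflected ray, or None if the ray misses the sphere.
    """
    d = np.asarray(ray_dir, float) / np.linalg.norm(ray_dir)
    c = np.asarray(center, float)
    # Solve |t*d - c|^2 = r^2 for the nearest intersection t.
    b = d @ c
    disc = b * b - (c @ c - radius ** 2)
    if disc < 0:
        return None
    t = b - np.sqrt(disc)          # nearer of the two intersections
    p = t * d
    n = (p - c) / radius           # outward surface normal
    r = d - 2.0 * (d @ n) * n      # mirror reflection of the incoming ray
    return p, r
```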
Lab Meeting December 18th, 2007 (Atwood): Maximum Entropy Model and Conditional Random Field
I will talk about the relation between Maximum Entropy Model and Conditional Random Field, and my recent experiments.
Abstract:
In this chapter, we will describe a statistical model that conforms to the maximum entropy principle (we will call it the maximum entropy model, or ME model in short) [68, 69]. Through mathematical derivations, we will show that the maximum entropy model is a kind of exponential model, and is a close sibling of the Gibbs distribution described in Chap. 6. An essential difference between the two models is that the former is a discriminative model, while the latter is a generative model. Through a model complexity analysis, we will show why discriminative models are generally superior to generative models in terms of data modeling power. We will also describe the Conditional Random Field (CRF), one of the latest discriminative models in the literature, and prove that CRF is equivalent to the maximum entropy model.
Fulltext
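For reference, the exponential form the chapter derives, and the linear-chain CRF it relates to, can be written as follows (standard notation with features f_k, weights lambda_k, and partition function Z; not copied from the chapter):

```latex
% Maximum entropy (log-linear) model over a single label y given input x:
p(y \mid x) \;=\; \frac{1}{Z(x)} \exp\!\Big(\sum_{k} \lambda_k f_k(x, y)\Big),
\qquad
Z(x) \;=\; \sum_{y'} \exp\!\Big(\sum_{k} \lambda_k f_k(x, y')\Big).

% Linear-chain CRF over a label sequence y_{1:T}: the same exponential form,
% with features defined on neighbouring labels, which is why the CRF can be
% viewed as a maximum entropy model over sequences:
p(y_{1:T} \mid x) \;=\; \frac{1}{Z(x)}
\exp\!\Big(\sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_{t-1}, y_t, x, t)\Big).
```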
Lab Meeting December 18th, 2007 (Jeff): Progress report
I will show some tests of my recent work.
I will also point out some problems and limitations of EKF-SLAM.
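As background for the limitations above, here is a minimal sketch of the EKF-SLAM state bookkeeping that causes the well-known quadratic cost: the covariance matrix grows with every landmark, so each measurement update touches O(n^2) entries. The code is illustrative only and omits the cross-correlation terms a real landmark initialization would add.

```python
import numpy as np

# In EKF-SLAM the state stacks the robot pose with every landmark, so the
# covariance is a dense (3 + 2n) x (3 + 2n) matrix and each update must
# touch all of it.
def add_landmark(mean, cov, lm_xy, lm_cov):
    """Augment the EKF-SLAM state with one new landmark (illustrative only;
    cross-covariances with the existing state are omitted for brevity)."""
    mean = np.concatenate([mean, lm_xy])
    n = cov.shape[0]
    new_cov = np.zeros((n + 2, n + 2))
    new_cov[:n, :n] = cov
    new_cov[n:, n:] = lm_cov          # initial landmark uncertainty
    return mean, new_cov

mean, cov = np.zeros(3), np.eye(3) * 0.01
for _ in range(100):                  # the map grows linearly...
    mean, cov = add_landmark(mean, cov, np.random.rand(2), np.eye(2))
print(cov.shape)                      # ...the covariance grows quadratically: (203, 203)
```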
Tuesday, December 11, 2007
Lab Meeting 11 December (Der-Yeuan): Registration of Colored 3D Point Clouds with a Kernel-based Extension to the Normal Distributions Transform
Abstract
We present a new algorithm for scan registration of colored 3D point data which is an extension to the Normal Distributions Transform (NDT). The probabilistic approach of NDT is extended to a color-aware registration algorithm by modeling the point distributions as Gaussian mixture models in color space. We discuss different point cloud registration techniques, as well as alternative variants of the proposed algorithm. Results showing improved robustness of the proposed method using real-world data acquired with a mobile robot and a time-of-flight camera are presented.
Authors: Benjamin Huhle, Martin Magnusson, Achim Lilienthal, Wolfgang Straßer
Reference on NDT: http://citeseer.ist.psu.edu/biber03normal.html
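For background, a minimal sketch of the plain NDT scoring idea that the paper extends: reference points are binned into cells, each cell is summarized by a Gaussian, and a candidate transform is scored by evaluating the transformed scan under those Gaussians. The cell size, thresholds, and helper names are assumptions; the paper's color-aware Gaussian mixture extension is not reproduced here.

```python
import numpy as np

def build_ndt_cells(points, cell_size=1.0):
    """Fit a Gaussian (mean, covariance) to the reference points in each grid cell."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key in set(map(tuple, keys)):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) >= 5:  # need enough points for a stable covariance
            cells[key] = (pts.mean(axis=0),
                          np.cov(pts.T) + 1e-6 * np.eye(points.shape[1]))
    return cells

def ndt_score(scan, cells, cell_size=1.0):
    """Sum of Gaussian likelihood terms for a (already transformed) scan."""
    score = 0.0
    for p in scan:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key in cells:
            mu, cov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score
```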
Thursday, December 06, 2007
FRC Seminar - December 12 - Autonomous Peat Moss Harvesting
FRC Seminar - Autonomous Peat Moss Harvesting
Carl Wellington
NREC Commercialization Specialist
National Robotics Engineering Center
Carnegie Mellon University
Wednesday, December 12th
Noon
NSH 1109
Pizza will be served
Abstract
This presentation will describe recent work with John Deere to deploy a team of three autonomous tractors for coordinated peat moss harvesting at a Canadian farm. We provided the perception system that estimates the location of the dumping pile and detects ditches and other obstacles. These systems were deployed for three months of testing and successfully harvested and deposited several fields of peat moss autonomously.
After discussing this application and our long term partnership with John Deere, I will describe our sensor pod and perception system, including a Markov random field ground estimation algorithm used for pile estimation. I'll end with some lessons learned and a discussion of the challenges in making a perception system that was deployed for months of outdoor testing without us present.
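As a rough idea of what an MRF-style ground estimate looks like, here is a minimal sketch that pulls each grid cell toward its measured minimum height (data term) and toward its neighbours (smoothness prior). It is illustrative only and is not NREC's algorithm; edge wrap-around and proper probabilistic inference are ignored for brevity.

```python
import numpy as np

def mrf_ground_estimate(min_heights, smoothness=0.5, iters=50):
    """Iteratively estimate a smooth ground surface from per-cell minimum heights.

    min_heights: 2D grid of the lowest measured height in each cell (NaN if empty).
    Each iteration pulls every cell toward the average of its 4 neighbours
    (smoothness prior) and toward its own measurement (data term).
    """
    ground = np.where(np.isnan(min_heights), 0.0, min_heights).astype(float)
    data_weight = np.where(np.isnan(min_heights), 0.0, 1.0)
    for _ in range(iters):
        # 4-neighbour average (np.roll wraps at the borders; ignored for brevity).
        neighbours = (np.roll(ground, 1, 0) + np.roll(ground, -1, 0) +
                      np.roll(ground, 1, 1) + np.roll(ground, -1, 1)) / 4.0
        ground = (data_weight * np.nan_to_num(min_heights) +
                  smoothness * neighbours) / (data_weight + smoothness)
    return ground
```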
Speaker Bio
Carl Wellington is a researcher at the National Robotics Engineering Center of Carnegie Mellon University's Robotics Institute. His current focus is on perception for autonomous ground vehicles and includes project work with John Deere and Darpa's UPI Crusher program. He received his PhD from Carnegie Mellon's Robotics Institute in 2005 and his BS in Engineering from Swarthmore College in 1999.
Saturday, December 01, 2007
Door safety system
We can start operating the system now, as almost every member is registered.
Things to note:
1. Press the red button at the top of the door to enable/disable the door safety lock.
2. Keep the door closed and the safety lock on at all times. (While the safety lock is in use, the original physical door lock is not needed; remember to leave the physical lock disabled.)
3. The last person who leaves the lab needs to enable the physical door lock as before.
(4. The first person who enters the lab needs to remember to use both their key and card. If this is too annoying, maybe we can disable the safety lock whenever condition 3 applies.)
WIRED MAGAZINE -- Getting a Grip
Building the Ultimate Robotic Hand
A 6-foot-tall, one-armed robot named Stair 1.0 balances on a modified Segway platform in the doorway of a Stanford University conference room. It has an arm, cameras and laser scanners for eyes, and a tangle of electrical intestines stuffed into its base.
...
To do real work in our offices and homes, to fetch our staplers or clean up our rooms, robots are going to have to master their hands. They'll need the kind of "hand-eye" coordination that enables them to identify targets, guide their mechanical mitts toward them, and then manipulate the objects deftly.
...
But the next generation, Stair 2.0, will actually analyze its own actions. The next Stair will look for the object in its hand and measure the force its fingers are applying to determine whether it's holding anything. It will plan an action, execute it, and observe the result, completing a feedback loop. And it will keep going through the loop until it succeeds at its task.
...
For detail: Link
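The plan-execute-observe loop described for Stair 2.0 has roughly this structure; the callables below are placeholders for the robot's planning, actuation, and sensing routines, not a real STAIR API.

```python
def grasp_with_feedback(target, plan_grasp, execute, object_held, max_attempts=5):
    """Plan-execute-observe loop sketched from the article's description.

    plan_grasp, execute, and object_held are callables supplied by the robot's
    software stack; they are placeholders here, not an actual API.
    """
    for _ in range(max_attempts):
        grasp = plan_grasp(target)      # plan an action
        execute(grasp)                  # execute it
        if object_held(target):         # observe the result: is anything held?
            return True                 # success closes the loop
    return False
```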