Home sensors, long-distance health monitors and other gadgets help seniors remain independent.
May 2006
RI-MAN isn’t your average caregiver. The pale-green, 220-pound robot is a mass of wiring, metal and computer chips. It was created in Japan as an eventual high-tech alternative to costly home-health services and nursing-home care.
Although you can’t order your own RI-MAN or other home-care robot yet, you can buy many other assistive-technology devices that enable older adults with various ailments to continue to live in their own homes. Such devices include home sensors that monitor a person’s day-to-day activities and special goggles that help the visually impaired to see. These products are part of tech companies’ response to the new demographics: a rising number of seniors, families scattered around the globe and grown children with full-time careers who care for elderly parents. Here are some examples of what’s available now.
See the full article.
This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Saturday, December 30, 2006
Thursday, December 28, 2006
Lab meeting 28 Dec, 2006 (Jim): Unified Inverse Depth Parametrization for Monocular SLAM
Unified Inverse Depth Parametrization for Monocular SLAM
Montiel et al., RSS 2006
PDF
A.J.Davison's website
Abstract:
Recent work has shown that the probabilistic SLAM approach of explicit uncertainty propagation can succeed in permitting repeatable 3D real-time localization and mapping even in the ‘pure vision’ domain of a single agile camera with no extra sensing. An issue which has caused difficulty in monocular SLAM however is the initialization of features, since information from multiple images acquired during motion must be combined to achieve accurate depth estimates. This has led algorithms to deviate from the desirable Gaussian uncertainty representation of the EKF and related probabilistic filters during special initialization steps.
In this paper we present a new unified parametrization for point features within monocular SLAM which permits efficient and accurate representation of uncertainty during undelayed initialisation and beyond, all within the standard EKF (Extended Kalman Filter). The key concept is direct parametrization of inverse depth, where there is a high degree of linearity. Importantly, our parametrization can cope with features which are so far from the camera that they present little parallax during motion, maintaining sufficient representative uncertainty that these points retain the opportunity to ‘come in’ from infinity if the camera makes larger movements. We demonstrate the parametrization using real image sequences of large-scale indoor and outdoor scenes.
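The key move of the paper can be sketched concretely. Below is a minimal numpy illustration of the six-parameter encoding (anchor camera position, ray angles, inverse depth); the function names and angle convention are ours for illustration, not the paper's exact definitions:

```python
import numpy as np

def unit_ray(theta, phi):
    # Direction vector for azimuth theta and elevation phi
    # (one common convention; the paper defines its own).
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def inverse_depth_to_euclidean(y):
    """Convert (x0, y0, z0, theta, phi, rho) to a 3D point.

    The feature is anchored at the camera centre (x0, y0, z0) where it was
    first observed; rho = 1/depth along the ray (theta, phi).  As rho -> 0
    the point recedes toward infinity while every parameter stays finite,
    which is what lets low-parallax features sit in a standard EKF state.
    """
    x0, y0, z0, theta, phi, rho = y
    return np.array([x0, y0, z0]) + (1.0 / rho) * unit_ray(theta, phi)

# A feature 10 m straight ahead of a camera at the origin (rho = 1/10):
point = inverse_depth_to_euclidean(np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.1]))
```

The payoff claimed in the abstract is that measurement equations are close to linear in rho even at very low parallax, so the EKF's Gaussian assumption can be used from the first observation onward with no special delayed-initialization step.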
Wednesday, December 27, 2006
Lab meeting 28 Dec, 2006 (Any): Sonar Sensor Interpretation
Title: Sonar Interpretation Learned from Laser Data
Authors: S. Enderle, G. Kraetzschmar, S. Sablatnog and G. Palm
From: Third European Workshop on Advanced Mobile Robots (Eurobot '99), 1999
Links: [Paper 1][Paper 2][Paper 3]
Abstract:
Sensor interpretation in mobile robots often involves an inverse sensor model, which generates hypotheses on specific aspects of the robot's environment based on current sensor data. Building inverse sensor models for sonar sensor assemblies is a particularly difficult problem that has received much attention in past years. A common solution is to train neural networks using supervised learning. However, large amounts of training data are typically needed, consisting e.g. of scans of recorded sonar data which are labeled with manually constructed teacher maps. Obtaining these training data is an error-prone and time-consuming process. We suggest that it can be avoided if an additional sensor like a laser scanner is also available which can act as the feeding signal. We have successfully trained inverse sensor models for sonar interpretation using laser scan data. In this paper, we describe the procedure we used and the results we obtained.
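The training setup the abstract describes, laser occupancy labels supervising a learned sonar model, can be sketched with synthetic stand-in data. The feature choice (how far a map cell lies from the sonar's echo range), the thresholds, and the plain logistic-regression learner below are all illustrative assumptions, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: for each map cell covered by a sonar cone, record
# the gap between the cell's distance and the sonar's measured range, and
# take the cell's 0/1 occupancy label from a co-registered laser scan --
# the laser acts as the teacher, so no hand-built maps are needed.
echo_gap = rng.uniform(0.0, 0.5, 500)          # metres from the echo distance
occupied = (echo_gap < 0.1).astype(float)      # laser label: occupied near the echo

X = np.column_stack([echo_gap, np.ones(500)])
w = np.zeros(2)
for _ in range(3000):                          # logistic regression by gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - occupied) / len(occupied)

def p_occupied(gap):
    """Learned inverse sensor model: occupancy belief vs. distance from echo."""
    return 1.0 / (1.0 + np.exp(-(w[0] * gap + w[1])))
```

The learned model is monotone: cells near the echo distance get higher occupancy belief than cells far from it, which is the qualitative shape one expects an inverse sonar model to acquire from laser supervision.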
Lab meeting 28 Dec, 2006 (Leo): Square Root SAM
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing
Frank Dellaert
Robotics: Science and Systems, 2005
Abstract:
Solving the SLAM problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. We investigate smoothing approaches as a viable alternative to extended Kalman filter-based solutions to the problem. In particular, we look at approaches that factorize either the associated information matrix or the measurement matrix into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact, they can be used in either batch or incremental mode, are better equipped to deal with non-linear process and measurement models, and yield the entire robot trajectory, at lower cost. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem.
In this paper we present the theory underlying these methods, an interpretation of factorization in terms of the graphical model associated with the SLAM problem, and simulation results that underscore the potential of these methods for use in practice.
[Link]
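The factorization idea is easy to see on a toy linear problem. The sketch below poses a tiny 1-D SLAM instance as least squares and solves it by Cholesky-factoring the information matrix A^T A; the measurements are invented for illustration, and a real implementation would linearize nonlinear models and use sparse factorization with a good column ordering:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy 1-D SLAM: poses x0..x2 and one landmark l, with a prior, odometry,
# and landmark-range measurements stacked into one linear system A x = b.
# State order: [x0, x1, x2, l]
A = np.array([
    [1.,  0.,  0., 0.],   # prior:     x0      = 0
    [-1., 1.,  0., 0.],   # odometry:  x1 - x0 = 1
    [0., -1.,  1., 0.],   # odometry:  x2 - x1 = 1
    [-1., 0.,  0., 1.],   # landmark:  l  - x0 = 2.1
    [0.,  0., -1., 1.],   # landmark:  l  - x2 = 0.0
])
b = np.array([0., 1., 1., 2.1, 0.0])

# Square-root smoothing: factorize the information matrix A^T A = R^T R,
# then recover the whole trajectory and map by back-substitution -- exact,
# no marginalization of past poses as in the EKF.
info = A.T @ A
factor = cho_factor(info)
x = cho_solve(factor, A.T @ b)
```

The triangular factor plays the role of the square-root information matrix: once it is computed, the full state comes back via two triangular solves, and with the ordering heuristics the abstract mentions the factor stays sparse on large problems.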
Thursday, December 14, 2006
Lab meeting 15 Dec, 2006 (Casey): Estimating 3D Hand Pose from a Cluttered Image
Title: Estimating 3D Hand Pose from a Cluttered Image
Authors: Vassilis Athitsos and Stan Sclaroff
(CVPR 2003)
Abstract:
A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this clutter-tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images.
Paper download: [Link]
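The quantity the paper approximates, the directed chamfer distance between binary edge images, can be computed exactly with one distance transform. The paper's contribution is a Euclidean embedding that approximates it cheaply over a large database; the sketch below shows only the underlying distance, on a tiny hand-made example:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_chamfer(edges_a, edges_b):
    """Mean distance from each edge pixel of A to the nearest edge pixel of B.

    distance_transform_edt on ~edges_b gives, per pixel, the distance to the
    nearest True pixel of edges_b, so scoring A costs one lookup per edge pixel.
    """
    dt_b = distance_transform_edt(~edges_b)   # distance to nearest B edge
    return dt_b[edges_a].mean()

a = np.zeros((5, 5), dtype=bool); a[2, 1] = True   # one edge pixel at (2, 1)
b = np.zeros((5, 5), dtype=bool); b[2, 3] = True   # one edge pixel at (2, 3)
d = directed_chamfer(a, b)                          # 2.0 for this pair
```

Computed naively, this distance must be evaluated against every database image; embedding edge images into a fixed-dimensional vector space lets nearest-neighbor search stand in for most of those evaluations.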
Wednesday, December 13, 2006
Lab meeting 15 Dec, 2006 (YuChun): Modeling Affect in Socially Interactive Robots
Author:
Rachel Gockley, Reid Simmons, and Jodi Forlizzi
Proc. of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), September, 2006.
Abstract:
Humans use expressions of emotion in a very social manner, to convey messages such as “I'm happy to see you” or “I want to be comforted,” and people's long-term relationships depend heavily on shared emotional experiences. We believe that for robots to interact naturally with humans in social situations they should also be able to express emotions in both short-term and long-term relationships. To this end, we have developed an affective model for social robots. This generative model attempts to create natural, human-like affect and includes distinctions between immediate emotional responses, the overall mood of the robot, and long-term attitudes toward each visitor to the robot. This paper presents the general affect model as well as particular details of our implementation of the model on one robot, the Roboceptionist.
[Link]
Friday, December 08, 2006
[Thesis Oral] A Market-Based Framework for Tightly-Coupled Planned Coordination in Multirobot Teams
Author:
Nidhi Kalra
Robotics Institute
Carnegie Mellon University
Abstract:
This thesis explores the coordination challenges posed by real-world multirobot domains that require planned tight coordination between teammates throughout execution. These domains involve solving a multi-agent planning problem in which the actions of robots are tightly coupled. Because of uncertainty in the environment and the team, they also require persistent tight coordination between teammates throughout execution.
This thesis proposes an approach to these problems in which the complexity and strength of the coordination adapt to the difficulty of the problem. Our approach, called Hoplites, is a market-based framework that selectively injects pockets of complex coordination into a primarily distributed system by enabling robots to purchase each other's participation in tightly-coupled plans over the market. We discuss how it is widely applicable to real-world problems because it is general, computationally feasible, scalable, operates under uncertainty, and improves solutions with new information. Experiments show that our approach significantly outperforms existing coordination methods.
Tuesday, December 05, 2006
Lab meeting 8 Dec, 2006 (Chihao): Particle filtering algorithms for tracking an acoustic source in a reverberant environment
Author:
Ward, D.B., Lehmann, E.A., and Williamson, R.C.
Dept. of Electr. & Electron. Eng., Imperial Coll. London, UK
From: IEEE Transactions on Speech and Audio Processing
Abstract:
Traditional acoustic source localization algorithms attempt to find the current location of the acoustic source using data collected at an array of sensors at the current time only. In the presence of strong multipath, these traditional algorithms often erroneously locate a multipath reflection rather than the true source location. A recently proposed approach that appears promising in overcoming this drawback of traditional algorithms, is a state-space approach using particle filtering. In this paper we formulate a general framework for tracking an acoustic source using particle filters. We discuss four specific algorithms that fit within this framework, and demonstrate their performance using both simulated reverberant data and data recorded in a moderately reverberant office room (with a measured reverberation time of 0.39 s). The results indicate that the proposed family of algorithms are able to accurately track a moving source in a moderately reverberant room.
[Link]
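The bootstrap particle filter at the core of this family of algorithms can be sketched generically. A real acoustic tracker weights particles by microphone-array evidence (for instance, steered-beamformer energy at each candidate position); to stay self-contained, this sketch substitutes a direct noisy position measurement, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, z, motion_std=0.05, meas_std=0.2):
    """One predict / update / resample cycle of a bootstrap particle filter.

    particles: (N, 2) candidate source positions; z: a noisy observation of
    the source position standing in for the acoustic likelihood.
    """
    particles = particles + rng.normal(0, motion_std, particles.shape)  # predict
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)               # update
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)         # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-1, 1, (500, 2))          # uniform prior over the room
weights = np.full(500, 1.0 / 500)
for z in [np.array([0.2, 0.2]), np.array([0.25, 0.2]), np.array([0.3, 0.2])]:
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean(axis=0)                 # tracked source position
```

Because the filter carries a full posterior over positions rather than a single current estimate, a spurious multipath peak in one frame gets down-weighted by the motion model instead of capturing the track, which is the advantage over frame-by-frame localization that the abstract claims.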
Monday, December 04, 2006
Lab meeting 8 Dec, 2006 (AShin): Learning and Inferring Transportation Routines
Authors: L. Liao, D. Fox, and H. Kautz
Proc. of the National Conference on Artificial Intelligence (AAAI-04)
Outstanding Paper Award
Abstract:
This paper introduces a hierarchical Markov model that can learn and infer a user's daily movements through the community. The model uses multiple levels of abstraction in order to bridge the gap between raw GPS sensor measurements and high level information such as a user's mode of transportation or her goal. We apply Rao-Blackwellised particle filters for efficient inference both at the low level and at the higher levels of the hierarchy. Significant locations such as goals or locations where the user frequently changes mode of transportation are learned from GPS data logs without requiring any manual labeling. We show how to detect abnormal behaviors (e.g., taking a wrong bus) by concurrently tracking the user's activities with a trained and a prior model. Experiments show that our model is able to accurately predict the goals of a person and to recognize situations in which the user performs unknown activities.
[Link]
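A toy discrete analogue of the mode-of-transportation layer shows the infer-the-mode-from-motion idea. The paper uses Rao-Blackwellised particle filters over a much richer hierarchy learned from GPS logs; the modes, speed bins, and probabilities below are invented purely for illustration:

```python
import numpy as np

# Hidden modes emit quantized GPS speeds (slow, medium, fast); a plain
# HMM forward pass tracks the belief over modes as speeds arrive.
modes = ["walk", "bus", "car"]
T = np.array([[0.80, 0.15, 0.05],   # P(next mode | current mode): modes persist
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
E = np.array([[0.90, 0.09, 0.01],   # P(speed bin | mode): walking is slow,
              [0.20, 0.60, 0.20],   # buses mixed, cars mostly fast
              [0.05, 0.35, 0.60]])
obs = [0, 1, 2, 2]                  # observed bins: slow, medium, fast, fast

belief = np.array([1/3, 1/3, 1/3])  # uniform prior over modes
for o in obs:
    belief = belief @ T * E[:, o]   # predict, then weight by the observation
    belief /= belief.sum()

inferred_mode = modes[int(np.argmax(belief))]
```

Sustained fast motion pulls the belief from "walk" toward "car" despite the sticky transition model, which mirrors how the full system recognizes a mode change from raw movement data alone.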
Saturday, December 02, 2006
No Pilot, No Problem?
[original link]
The promise is fantastic: new generations of remote-controlled aircraft could soon be flying in civilian airspace, performing all sorts of useful tasks. The reality is that a lack of radio frequencies to control the planes and serious concerns over their safety are going to keep them grounded for years to come.
Surprisingly, given the commercial hopes it has for civil unmanned aerial vehicles (UAVs), the aviation industry has failed to obtain the radio frequencies it needs to control them - and it will be 2011 before it can even begin to lobby for space on the radio spectrum. What's more, none of the world's aviation authorities will allow civil UAVs to fly in their airspace without a reliable system for avoiding other aircraft - and the industry has not yet even begun developing such a system. Experts say this could take up to seven years.
Dedicated frequencies are handed out at the International Telecommunications Union's World Radiocommunications Conference, but no one in the UAV industry had applied for any new frequencies. If UAVs are to mingle safely with other civilian aircraft, the industry needs to develop a safe, standardised collision avoidance system. This is complicated because aviation regulators demand that if UAVs are to have access to civil airspace, they must be "equivalent" in every way to regular planes. The problem for now is that aviation regulators have yet to define precisely what they mean by "equivalent", so UAV makers are not yet willing to commit themselves to developing collision-avoidance technology. "A crewless aircraft on a collision course must behave as if it had a pilot on board."
On the brighter side, last week the UN's International Civil Aviation Organization said its navigation experts would meet in early 2007 to consider regulations for UAVs in civil airspace.
However, it will be meaningless unless the industry can obtain the necessary frequencies to control the planes and feed images and other sensor data back to base, says Bowker. "The lack of robust, secure radio spectrum is a show-stopper."