Craig Boutilier, Department of Computer Science, University of Toronto
WHEN: 10:30am Wed., Nov. 30
ABSTRACT: Preference elicitation is generally required when making or recommending decisions on behalf of users whose utility function is not known with certainty. Although one can engage in elicitation until a utility function is perfectly known, in practice, this is infeasible. This talk tackles this problem in constraint-based optimization.
I will first describe a graphical model for utility representation and the issues associated with elicitation in this model. I will then discuss two methods for optimization with imprecise utility information: a Bayesian approach in which uncertainty about the utility function is quantified probabilistically, and a distribution-free minimax regret model. Finally, I will describe several heuristic strategies for elicitation.
This talk describes joint work with Darius Braziunas, Relu Patrascu, Pascal Poupart, and Dale Schuurmans.
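To make the minimax regret idea above concrete, here is a minimal sketch (an illustration of the general criterion, not code from the talk): outcomes have additive utilities over features, each feature weight is known only to lie in an interval, and we recommend the outcome whose worst-case regret, against any rival outcome and any feasible weight vector, is smallest.

```python
# Minimal minimax-regret sketch; the outcomes, features, and bounds are made up.
import numpy as np

def pairwise_max_regret(x_a, x_b, w_lo, w_hi):
    """Max over feasible weights of u(x_b; w) - u(x_a; w); separable for box bounds."""
    diff = x_b - x_a
    return np.sum(np.maximum(w_lo * diff, w_hi * diff))

def minimax_regret_choice(X, w_lo, w_hi):
    """X: (n_outcomes, n_features). Return the index of the minimax-regret outcome."""
    n = len(X)
    max_regret = np.array([
        max(pairwise_max_regret(X[a], X[b], w_lo, w_hi) for b in range(n) if b != a)
        for a in range(n)
    ])
    return int(np.argmin(max_regret)), max_regret

# three hypothetical outcomes described by two features, with interval weight bounds
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]])
w_lo, w_hi = np.array([0.2, 0.1]), np.array([0.9, 0.8])
choice, regrets = minimax_regret_choice(X, w_lo, w_hi)
print(choice, regrets)   # the compromise outcome (index 2) has the smallest max regret
```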
SPEAKER BIO: Craig Boutilier received his Ph.D. in Computer Science (1992) from the University of Toronto, Canada. He is Professor and Chair of the Department of Computer Science at the University of Toronto. He was previously an Associate Professor at the University of British Columbia and a consulting professor at Stanford University, and he has served on the Technical Advisory Board of CombineNet, Inc. since 2001.
Dr. Boutilier's research interests span a wide range of topics, with a focus on decision making under uncertainty. He has been awarded the Izaak Walton Killam Research Fellowship and an IBM Faculty Award, and he has also received the Killam Teaching Award.
This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Wednesday, November 30, 2005
CMU talk: Learning Image Manifolds != Manifold Learning
Robert Pless
Washington University in St. Louis
Monday, December 5, 2005
Abstract
This talk will detail my explorations in applying Manifold Learning techniques to real problems in image processing. Initial experiments with natural image sets (What is the intrinsic dimension of a Charlie Chaplin video clip? Do cardio-pulmonary MR images have a natural 2D parameterization?) illuminate several limitations of existing algorithms. First, Euclidean (sum-of-squared pixel intensity difference) distance is usually a poor choice of image distance function for natural images. Second, many natural image manifolds have a cyclic topology and thus cannot be cleanly embedded into a Euclidean space. Third, natural data sets often include unlabeled examples from multiple, intersecting low-dimensional manifolds.
I will talk about several heuristic (and occasionally well-founded) algorithms for choosing effective local image distance measures, finding minimal parameterizations for cyclic manifolds, and simultaneously clustering and parameterizing data from multiple intersecting manifolds. These have been brought together in an end-to-end application which automatically learns the 2D manifold structure of (ungated, free-breathing) cardiac MRI images of a patient, and uses that manifold structure to regularize the segmentation of the left ventricle simultaneously in all frames.
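For reference, a minimal sketch (not the speaker's code) of the standard pipeline whose limitations the abstract discusses: an Isomap-style embedding over an image set with a pluggable pairwise distance, so the sum-of-squared-differences baseline can be swapped for a better local measure; the image data and parameters below are made up.

```python
# Isomap-style embedding with a pluggable image distance (toy data, assumed parameters).
import numpy as np
from scipy.sparse.csgraph import shortest_path

def ssd_distance(a, b):
    """Euclidean (sum-of-squared pixel difference) distance, the baseline the talk
    argues is often a poor choice for natural images."""
    d = a.astype(float) - b.astype(float)
    return np.sqrt(np.sum(d * d))

def isomap_embed(images, k=6, dim=2, metric=ssd_distance):
    n = len(images)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = metric(images[i], images[j])
    # k-nearest-neighbour graph; geodesic distances approximate distances on the manifold
    G = np.full((n, n), np.inf)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:
            G[i, j] = G[j, i] = D[i, j]
    geo = shortest_path(G, method="D")
    geo[np.isinf(geo)] = geo[np.isfinite(geo)].max()   # crude guard for disconnected graphs
    # classical MDS on the geodesic distances
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (geo ** 2) @ H
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# stand-in data so the sketch runs; frames sampled from a cyclic camera rotation would
# come back as a closed curve in 2D, which is exactly the cyclic-topology issue above
frames = [np.random.rand(16, 16) for _ in range(40)]
print(isomap_embed(frames).shape)   # (40, 2)
```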
Short Bio
Robert Pless is an Assistant Professor of Computer Science, and Assistant Director of the Center for Security Technologies, at Washington University. His research interests focus on video processing: motion estimation for video surveillance and manifold learning for applications in biomedical imaging. He received a BS from Cornell University in 1994 and a PhD from the University of Maryland in 2000, and was chairman of the IEEE OMNIVIS workshop in 2003.
CMU talk: Human-Robot Systems for Planetary Exploration
Title: Human-Robot Systems for Planetary Exploration
Speaker: Salvatore Domenick Desiano, Research Scientist, Intelligent Systems Division (QSS Group, Inc), NASA Ames Research Center
Date: Thursday, December 1
Abstract:
Planetary robots will be used in many contexts -- Martian and Lunar, alone and with humans, for construction and for scientific exploration, to name a few. The Intelligent Robotics Group at the NASA Ames Research Center develops cross-cutting capabilities that enable robots to perform autonomously in all of these situations.
In this talk, I will focus on the results of the Collaborative Design Systems FY05 demonstration, performed in September. This was the largest demonstration of integrated robotic systems ever carried out at NASA Ames. The demonstration included visual target tracking, autonomous multi-SCIP (Single Cycle Instrument Placement), constraint-based temporal planning, human-robot collaboration, spoken dialog interfaces, multi-agent systems, and 3D visualization tools.
In addition to the results of this specific demonstration, I will briefly present some of the open research problems that our group is interested in working on or collaborating on. I will also provide some inside perspective on the current state of NASA's robotics programs and funding sources.
Speaker Bio:
Salvatore Domenick Desiano is a robotics research scientist at the NASA Ames Research Center. As a member of the Intelligent Robotics Group, he leads the K-9 Rover Team of the CDS project, the most elaborate combination of human and robot planetary exploration ever demonstrated at NASA Ames. His research focuses on developing fundamental navigation capabilities for mobile robots, and he works extensively with the NASA Office of Education. He has also served as the Integration Lead for the Personal Satellite Assistant project. Salvatore is currently on leave from his doctoral studies at the Robotics Institute and will return to the program in early 2006.
Tuesday, November 29, 2005
My talk this Wednesday
Paper:
Multi-Planar Projection by Fixed-Center Pan-Tilt Projectors
Ikuhisa Mitsugami, Norimichi Ukita, Masatsugu Kidode
Abstract:
We describe a new steerable projector, whose projection center precisely coincides with its rotation center, which we call a “fixed-center pan-tilt (FC-PT) projector.” This mechanism allows it to be set up to display graphics precisely on planes in the environment more easily than other steerable projectors; wherever we would like to display graphics, all we have to do is place the FC-PT projector in the environment and direct it at the corners of the planes whose 2D sizes have been measured. Moreover, as the FC-PT projector can automatically recognize whether each plane is connected to others, it can display visual information that lies across the boundary of two planes, in a similar way to a paper poster folded along the planes.
Link:
Multi-Planar Projection by Fixed-Center Pan-Tilt Projectors
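The geometric step implied by the abstract, warping graphics onto a plane once the projector has been aimed at its corners, comes down to estimating a 2D homography. The sketch below is a generic direct linear transform (DLT) estimate with made-up corner coordinates, not code from the paper.

```python
# Estimate a homography from four point correspondences (standard DLT; toy numbers).
import numpy as np

def homography_dlt(src, dst):
    """src, dst: (4, 2) corresponding points; returns 3x3 H mapping src -> dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# hypothetical numbers: a 40cm x 30cm plane and the projector coordinates of its corners
graphic_corners = np.array([[0, 0], [40, 0], [40, 30], [0, 30]], float)
observed_corners = np.array([[102, 88], [411, 95], [405, 322], [99, 310]], float)
H = homography_dlt(graphic_corners, observed_corners)
print(warp_point(H, (20, 15)))   # centre of the graphic in projector coordinates
```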
Sunday, November 27, 2005
Paper: The Smart Wheelchair Component System
Richard Simpson, PhD, ATP; Edmund LoPresti, PhD; Steve Hayashi, PhD; Illah Nourbakhsh, PhD; David Miller, PhD
Journal of Rehabilitation Research & Development (JRRD) Volume 41, Number 3B, Pages 429–442 May/June 2004
Abstract—While the needs of many individuals with disabilities can be satisfied with power wheelchairs, some members of the disabled community find it difficult or impossible to operate a standard power wheelchair. To accommodate this population, several researchers have used technologies originally developed for mobile robots to create "smart wheelchairs" that reduce the physical, perceptual, and cognitive skills necessary to operate a power wheelchair. We are developing a Smart Wheelchair Component System (SWCS) that can be added to a variety of commercial power wheelchairs with minimal modification. This paper describes the design of a prototype of the SWCS, which has been evaluated on wheelchairs from four different manufacturers. [PDF]
Wednesday, November 23, 2005
What's New @ IEEE in Signal Processing, November 2005
-- PROTOTYPE COMBINES THERAPEUTIC AND 3-D IMAGING CAPABILITY
High-frequency ultrasound waves may allow physicians to both visualize the heart's interior in three dimensions and selectively destroy heart tissue with heat to correct arrhythmias, according to engineers at Duke University who are developing the technology. Building on previous work, the Duke team has created dual-function ultrasound probes that use tiny cables, as many as two hundred of them in a three-millimeter catheter. To destroy aberrant tissue in the heart, physicians currently use electrodes that must touch the target tissue and are guided by x-rays, which do not provide sharp images of soft tissue. The Duke engineers say their prototype can destroy target tissue without touching it, and is guided by much clearer three-dimensional imaging. The work is described in two research papers published last month in the journals "IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control" and "Ultrasonic Imaging." Read more
-- USING ULTRASOUND TECHNOLOGY TO LOOK INSIDE CONCRETE
Researchers at Cambridge Ultrasonics, in conjunction with UK firm Sonatest, have developed an ultrasound sensor that can "see" inside concrete. The sensor works by firing sound waves from up to six different transducers and then registering the returning echoes. A visual map of the inside of the concrete is then displayed as a three-dimensional image. The system, still in the testing stage, was designed to monitor concrete structures for interior corrosion, such as cracks and fissures, particularly in a building's tendons, which act as the skeletons of structures. But the sensor is also of particular interest to police organizations, which could use the device to locate corpses buried in concrete. Bodies buried in concrete break down and leave voids which the sensor would record. Cambridge Ultrasonics is also working on a monitoring system which could be attached to structures to provide regular feedback on corrosion. Read more
What's New @ IEEE in Wireless, November 2005
-- PLANNING SMART BUILDINGS TO AID FIRST RESPONDERS
The National Institute of Standards and Technology (NIST) says it is working with the building industry, public safety officials and information technologists to study how "intelligent" building systems can be used by firefighters, police and other first responders to assess emergency conditions in real-time. NIST is developing standards for various types of communication networks (including wireless networks) to transmit real-time building sensor information on mechanical systems, elevators, lighting, security and fire systems, occupant locations, and temperature and smoke conditions to first responders. According to NIST, the network information would include floor plans and live data from motion, heat, biochemical and other sensors and video cameras. Read more
In related first-responder news, the article "Service-Based Computing on MANETs: Enabling Dynamic Interoperability of First Responders" can be found in the current issue of IEEE Intelligent Systems magazine.
Sunday, November 20, 2005
CMU talk: Consistent Segmentation for Optical Flow Estimation
Larry Zitnick
When: Monday, November 21, 3:30 p.m.- 4:45 p.m.
Abstract:
The computation of optical flow in the presence of large displacements and occlusion boundaries is a difficult problem, both in terms of accuracy and computational efficiency. Recently, work in stereo vision has shown promising results using image segmentation to constrain the matching process. Unfortunately, these same segmentation approaches are highly inefficient for optical flow estimation, due to the increased search space (2D vs. 1D) needed for optical flow. We propose a new approach that simultaneously computes a consistent segmentation across images while estimating optical flow. This approach leads to a computationally efficient algorithm while producing accurate results. In addition, we'll present results in video interpolation and exaggerated motion blur using the computed flow fields.
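For context on the search-space point above, here is a brute-force block-matching sketch (a naive baseline, not the speaker's method) that makes the 2D-versus-1D cost explicit; the image sizes and window parameters are arbitrary.

```python
# Dense flow by exhaustive SSD matching; note the nested 2D displacement search.
import numpy as np

def block_matching_flow(im0, im1, patch=7, search=6):
    h, w = im0.shape
    r = patch // 2
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = im0[y - r:y + r + 1, x - r:x + r + 1]
            best, best_uv = np.inf, (0.0, 0.0)
            for dy in range(-search, search + 1):        # 2D search: rows ...
                for dx in range(-search, search + 1):    # ... and columns (rectified stereo needs only 1D)
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        cand = im1[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        cost = np.sum((ref - cand) ** 2)
                        if cost < best:
                            best, best_uv = cost, (dx, dy)
            flow[y, x] = best_uv
    return flow

im0 = np.random.rand(32, 32)
im1 = np.roll(im0, shift=(2, 3), axis=(0, 1))      # a known translation for a sanity check
print(block_matching_flow(im0, im1)[16, 16])       # approximately [3. 2.] away from the wrap-around
```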
MIT talk: Vision-based robotics: Representation, Mapping and Exploration
Speaker: Robert Sim, University of British Columbia
Date: Tuesday, November 22 2005
Autonomous mobile robot systems have an important role to play in a wide variety of application domains. A key component for autonomy is the capability to explore an unknown environment and construct a representation that a robotic agent can use to localize, navigate, and reason about the world. In this talk I will present results on the automatic construction of visual representations. First, the Visual Map representation will be introduced as a method for modelling the visual structure of the world. Second, I will present a flexible architecture for robust real-time vision-based mapping of an unknown environment. Finally, I will conclude with a discussion of recent progress on the problem of autonomous robotic exploration, and illustrate issues in the problem of developing robotic explorers that are naturally curious about their environment.
The Visual Map framework is an approach to representing the visual world that enables a robot to learn models of the salient visual features of an environment. A key component of this representation is the ability to learn mappings between camera pose and image-domain features without imposing a priori assumptions about the structure of the environment, or the optical characteristics of the visual sensor. These mappings can be employed as generative models in a Bayesian framework for solving the robot localization problem, as well as for visual servoing and path planning.
The second part of this talk demonstrates an architecture for performing simultaneous localization and mapping with vision. The main goal of our work is to facilitate robust large-scale mapping in real time using vision. We employ a Rao-Blackwellised particle filter for managing uncertainty and examine a variety of robust proposal distributions, as well as the run-time and scaling characteristics of our architecture.
The latter part of this talk builds on representation and mapping to address robotic exploration. In order to acquire a representation of the world, a robot must first acquire data. From an information-theoretic point of view, this problem involves moving through the world so as to maximize the information that can be gained from what is observed along the robot's trajectory. However, computing the optimal trajectory is complicated by several factors, including the presence of noise, the time horizon over which the robot plans, the specific objective function that is optimized, and the robot's choice of sensor. I will present several results in this area that lead to the development of robust robotic systems that can plan over the long term and successfully demonstrate an emergent sense of curiosity.
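A toy version of that information-theoretic objective, under assumptions that are mine rather than the speaker's: an occupancy grid of independent Bernoulli cells and an idealized sensor that perfectly resolves every cell within its radius, so the expected gain of a viewpoint is simply the entropy currently inside its footprint.

```python
# Greedy entropy-based viewpoint selection on a toy occupancy grid.
import numpy as np

def cell_entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_gain(occ_prob, pose, radius=3):
    """Entropy (bits) inside the sensor footprint = information gained if the
    idealized sensor observes those cells perfectly."""
    h, w = occ_prob.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (ys - pose[0]) ** 2 + (xs - pose[1]) ** 2 <= radius ** 2
    return cell_entropy(occ_prob[mask]).sum()

def best_viewpoint(occ_prob, candidates, radius=3):
    gains = [expected_gain(occ_prob, c, radius) for c in candidates]
    return candidates[int(np.argmax(gains))], max(gains)

occ = np.full((20, 20), 0.5)        # completely unknown map
occ[:10, :10] = 0.02                # an already-explored, mostly-free corner
pose, gain = best_viewpoint(occ, [(5, 5), (5, 15), (15, 15)])
print(pose, round(gain, 1))         # a viewpoint over unexplored cells wins
```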
MIT PhD Defense: A Unified Information Theoretic Framework for Pair- and Group-wise Registration of Medical Images
Speaker: Lilla Zollei, MIT CSAIL Vision Research Group
Date: Tuesday, November 22 2005
Time: 10:30AM to 12:00PM
Location: Star Seminar Room, 32-D463, Stata Center
Host: Prof. Eric Grimson, Head, EECS Dept.; MIT CSAIL Vision Research Group
Abstract:
The field of medical image analysis has been growing rapidly for the past two decades. Besides significant growth in computational power, scanner performance, and storage facilities, this acceleration is partially due to an unprecedented increase in the number of data sets accessible to researchers. Medical experts traditionally rely on manual comparisons of images, but the abundance of information now available makes this task increasingly difficult. Such a challenge calls for more automation in processing the images.
In order to carry out any sort of comparison between multiple medical images, one frequently needs to identify the proper correspondence between them. This step allows us to follow the changes that happen to anatomy throughout a time interval, to identify differences between individuals, or to acquire complementary information from different data modalities. Registration achieves such correspondences. In this dissertation we focus on the unified analysis and characterization of statistical registration approaches.
First we formulate and interpret a select group of pair-wise registration methods in the context of a unified statistical and information theoretic framework. This clarifies the implicit assumptions of each method and yields a better understanding of their relative strengths and weaknesses. This guides us to a new registration algorithm that incorporates the advantages of the previously described methods. Next we extend the unified formulation with an analysis of group-wise registration algorithms that align a population as opposed to pairs of data sets. Finally, we present our group-wise registration framework, stochastic congealing. The algorithm runs in a simultaneous fashion, with every member of the population approaching the central tendency of the collection at the same time. It eliminates the need for selecting a particular reference frame a priori, resulting in an unbiased estimate of a digital template. Our algorithm adopts an information theoretic objective function which is optimized via a gradient-based stochastic approximation process embedded in a multi-resolution setting. We demonstrate the accuracy and performance characteristics of stochastic congealing via experiments on both synthetic and real images.
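For intuition about the congealing idea, here is a heavily simplified sketch (mine, not the thesis algorithm): a population of 1D signals is aligned by integer shifts chosen to greedily reduce the summed entropy of the sample stacks, so no single member serves as the reference. The stochastic congealing framework above does the same in spirit, but with an information-theoretic objective optimized by stochastic gradient approximation in a multi-resolution setting.

```python
# Toy congealing: align 1D signals by integer shifts that minimize stack entropy.
import numpy as np

def stack_entropy(signals, bins=8):
    """Sum over positions of the entropy of the values stacked at that position."""
    total = 0.0
    for col in np.asarray(signals).T:
        hist, _ = np.histogram(col, bins=bins, range=(0.0, 1.0))
        p = hist[hist > 0] / hist.sum()
        total -= np.sum(p * np.log(p))
    return total

def congeal(signals, max_shift=5, sweeps=5):
    signals = [np.asarray(s, float) for s in signals]
    shifts = [0] * len(signals)
    for _ in range(sweeps):
        for i in range(len(signals)):            # every member moves toward the group
            best_s, best_e = shifts[i], np.inf
            for s in range(-max_shift, max_shift + 1):
                trial = [np.roll(sig, shifts[j] if j != i else s)
                         for j, sig in enumerate(signals)]
                e = stack_entropy(trial)
                if e < best_e:
                    best_s, best_e = s, e
            shifts[i] = best_s
    return shifts

base = np.zeros(32); base[12:20] = 1.0                    # a shared template
population = [np.roll(base, k) for k in (-3, 0, 2, 4)]    # misaligned copies
print(congeal(population))                                # shifts that bring the copies to a common offset
```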
11/23 lab meeting
Dear all,
This is the paper I'll present.
Modeling the Static and the Dynamic Parts of the Environment to Improve Sensor-based Navigation
ICRA 2005
Monday, November 14, 2005
11/16 lab meeting
Dear all,
This is the paper I'll present.
Simultaneous Localization and Mapping with Detection and Tracking of Moving Objects
C.-C. Wang and C. Thorpe. In IEEE International Conference on Robotics and Automation (ICRA'02), May 2002.
Saturday, November 12, 2005
CMU talk: Automatic Filters for the Detection of Coherent Structure in Spatiotemporal Systems
Cosma Shalizi
November 29, 2005
Abstract: Current methods for identifying coherent structures in spatially-extended systems rely on prior information about the form which those structures take. This talk describes two new approaches to automatically filter the changing configurations of spatial dynamical systems and extract coherent structures. One, local sensitivity filtering, gauges the ability of locally-applied perturbations to produce large-scale changes in the system configuration. The other, local statistical complexity filtering, calculates the amount of information needed for optimal prediction of the system's behavior in the vicinity of a given point. By examining the changing spatiotemporal distributions of these quantities, we can find the coherent structures in a variety of pattern-forming systems, without needing to guess or postulate the form of that structure. The results are at least comparable to those obtained with older techniques based on formal language theory or the statistical-mechanical theory of order parameters. Paper URL: http://arxiv.org/abs/nlin.CG/0508001
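A toy rendition of the first of those two filters, with every modeling choice being an assumption of mine rather than the paper's: the system is elementary cellular automaton rule 110, a perturbation is a single flipped cell, and sensitivity is measured by how many cells differ after a fixed horizon.

```python
# Local sensitivity filtering, toy version: perturb one cell and measure the spread.
import numpy as np

def step_rule110(state):
    """One synchronous update of elementary CA rule 110 with periodic boundaries."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = 4 * left + 2 * state + right
    rule = np.array([0, 1, 1, 1, 0, 1, 1, 0])   # rule 110 lookup table, index = left-center-right
    return rule[neighborhood]

def run(state, steps):
    for _ in range(steps):
        state = step_rule110(state)
    return state

def local_sensitivity(state, horizon=16):
    """For each site, flip that one cell and count how many cells differ at the horizon."""
    base = run(state, horizon)
    sens = np.zeros(len(state), int)
    for i in range(len(state)):
        perturbed = state.copy()
        perturbed[i] ^= 1
        sens[i] = np.sum(run(perturbed, horizon) != base)
    return sens

rng = np.random.default_rng(0)
config = rng.integers(0, 2, size=128)
print(local_sensitivity(config)[:16])   # large values flag sites whose perturbation spreads widely
```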
CMU talk: Scalable Inference in Hierarchical Models of the Neocortex
Tom Dean
November 21, 2005
Title: Scalable Inference in Hierarchical Models of the Neocortex
Abstract:
Borrowing insights from computational neuroscience, we present a class of generative models well suited to modeling perceptual processes and an algorithm for learning their parameters that promises to scale to learning very large models. The models are hierarchical, composed of multiple levels, and allow input only at the lowest level, the base of the hierarchy. Connections within a level are generally local and may or may not be directed. Connections between levels are directed and generally do not span multiple levels. The learning algorithm falls within the general family of expectation maximization algorithms. Parameter estimation proceeds level-by-level starting with components in the lowest level and moving up the hierarchy. Having learned the parameters for the components in a given level, those parameters are fixed and needn't be revisited for the purposes of learning. These parameters do, however, play an important role in learning the parameters for higher-level components by helping to generate the samples used in subsequent parameter estimation. Within levels, learning is decomposed into many local subproblems suggesting a straightforward parallel implementation. The inference required for learning is carried out by local message passing and the arrangement of connections within the underlying networks is designed to facilitate this method of inference. Learning is unsupervised but can be easily adapted to accommodate labeled data. In addition to describing several variants of the basic algorithm, we present preliminary experimental results demonstrating the pattern-recognition capabilities of our approach and some of the characteristics of the approximations that the algorithms produce.
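A heavily simplified, hypothetical stand-in for the level-by-level training scheme described above: fit an EM-based mixture at the lowest level, freeze it, and fit a second-level mixture on the responsibilities it produces. The models in the talk are far richer (lateral connections, local message passing), but the "train a level, fix it, move up" control flow is the same.

```python
# Level-by-level learning sketch with off-the-shelf EM (GaussianMixture); toy data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(m, 0.3, size=(250, 8)) for m in (0.0, 1.0)])  # toy "sensory" input

# Level 1: learn the low-level components with EM, then freeze them
level1 = GaussianMixture(n_components=4, random_state=0).fit(X)
Z = level1.predict_proba(X)          # level-1 responsibilities become the level-2 input

# Level 2: trained afterwards, seeing the fixed level-1 parameters only through Z
level2 = GaussianMixture(n_components=2, random_state=0).fit(Z)
print(level2.predict(Z)[:10])        # high-level assignments
```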
Stanford Talk: Rethinking State, Action, and Reward in Reinforcement Learning
Satinder Singh
November 7, 2005, 4:15PM
Abstract
Over the last decade and more, there has been rapid theoretical and empirical progress in reinforcement learning (RL) using the well-established formalisms of Markov decision processes (MDPs) and partially observable MDPs or POMDPs. At the core of these formalisms are particular formulations of the elemental notions of state, action, and reward that have served the field of RL so well. In this talk, I will describe recent progress in rethinking these basic elements to take the field beyond (PO)MDPs. In particular, I will briefly describe older work on flexible notions of actions called options, briefly describe some recent work on intrinsic rather than extrinsic rewards, and then spend the bulk of my time on recent work on predictive representations of state. I will conclude by arguing that taken together these advances point the way for RL to address the many challenges of building an artificial intelligence.
About the Speaker
Satinder Singh is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. His main research interest is in the old-fashioned goal of Artificial Intelligence, that of building autonomous agents that can learn to be broadly competent in complex, dynamic, and uncertain environments. The field of reinforcement learning (RL) has focused on this goal, and accordingly his deepest contributions are in RL.
MIT talk: Visual Recognition: From Generative to Discriminative Models
Speaker: Pietro Perona, Caltech
Date: Monday, November 14 2005
We can easily recognize objects and properties of the world by looking. If machines had this ability they could be much more intelligent and useful. I will present a taxonomy of visual recognition, review the state of the art and discuss a number of fascinating open problems.
Pietro Perona studies the computational aspects of vision; his current focus is visual recognition. He has published on applications of PDEs to image segmentation, human texture perception and segmentation, dynamic vision, grouping, perception of human motion, learning and recognition of object categories, categorization of scenes in human vision, human perception of 3D shape, interaction of attention and recognition. Perona is Professor of Electrical Engineering and of Computation and Neural Systems at the California Institute of Technology (Caltech). He is the Director of the National Science Foundation Engineering Research Center in Neuromorphic Systems Engineering at Caltech.
MIT talk: A robust layered control system for a mobile robot
Speaker: Rodney Brooks , MIT
Date: Tuesday, November 15 2005
Rod will present a historical perspective on robot control, planning, and intelligence, and discuss his influential, trend-changing paper "A robust layered control system for a mobile robot," published in IEEE Transactions on Robotics and Automation, 2(1), pages 14-23, April 1986.
The paper is available to download here:
http://people.csail.mit.edu/brooks/papers/AIM-864.pdf
Friday, November 11, 2005
CMU talk: Fast Inference and Learning in Large-State-Space HMMs
Speaker: Sajid Siddiqi, CMU
http://www.ri.cmu.edu/people/siddiqi_sajid.html
Date: November 14
Abstract:
For Hidden Markov Models (HMMs) with fully connected transition models, the three fundamental problems of evaluating the likelihood of an observation sequence, estimating an optimal state sequence for the observations, and learning the model parameters, all have quadratic time complexity in the number of states. We introduce a novel class of non-sparse Markov transition matrices called Dense-Mostly-Constant (DMC) transition matrices that allow us to derive new algorithms for solving the basic HMM problems in sub-quadratic time. We describe the DMC HMM model and algorithms and attempt to convey some intuition for their usage. Empirical results for these algorithms show dramatic speedups for all three problems. In terms of accuracy, the DMC model yields strong results and outperforms the baseline algorithms even in domains known to violate the DMC assumption.
Fast Inference and Learning in Large-State-Space HMMs
S. Siddiqi and A. Moore
Proceedings of the 22nd International Conference on Machine Learning (ICML), August 2005. [Paper]
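The complexity claim can be illustrated with a short forward-pass sketch, re-derived here rather than taken from the paper: if row i of the transition matrix has K freely parameterized entries and every other entry equals a per-row constant c[i], one forward step costs O(NK) instead of O(N^2).

```python
# Forward algorithm for a Dense-Mostly-Constant (DMC) transition matrix; toy model.
import numpy as np

def dmc_forward(pi, dense, c, obs_lik):
    """pi: (N,) initial distribution; dense: dict (i, j) -> A[i, j] for the dense entries;
    c: (N,) per-row constant for all other entries; obs_lik: (T, N) likelihoods b_j(o_t).
    Returns log P(o_1..o_T)."""
    alpha = pi * obs_lik[0]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, obs_lik.shape[0]):
        base = np.dot(alpha, c)                   # as if every row were purely constant: O(N)
        nxt = np.full_like(alpha, base)
        for (i, j), a_ij in dense.items():        # corrections for the N*K dense entries: O(NK)
            nxt[j] += alpha[i] * (a_ij - c[i])
        nxt *= obs_lik[t]
        loglik += np.log(nxt.sum())
        alpha = nxt / nxt.sum()
    return loglik

# hypothetical model: 4 states, one dense entry per row (a 0.7 self-transition),
# the remaining 0.3 probability spread uniformly over the other states
N = 4
c = np.full(N, 0.3 / (N - 1))
dense = {(i, i): 0.7 for i in range(N)}
obs_lik = np.random.rand(10, N)
print(dmc_forward(np.full(N, 1.0 / N), dense, c, obs_lik))
```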
Thursday, November 10, 2005
New @ IEEE in Communications, November 2005
1. SOCIAL NETWORKING LAYS FOUNDATION FOR INNOVATIVE DESIGNS
The current special issue of IEEE Internet Computing (v. 9, no. 5) explores how ideas from social networking can propel creative designs in the communications technology field. To help people use communication technologies to understand and manage their social networks more effectively, the issue's guest editors have selected three articles that position social networks and social networking in terms of relationships among individuals. One author focuses on the networked ego of the e-mail user to present sociograms as end-user visualizations of connections between individuals who have been co-addressed on e-mail messages. The second article examines social isolation and depression in elderly individuals and uses social networking and computing technologies to help reduce these feelings and provide health feedback displays. The third article connects physical place, mobile technologies, and social networks into the P3 framework, a system which helps designers determine appropriate geographic context clues for specific social interactions. The guest editors' introduction, along with a sidebar entitled "Resources on Social Networks, Social Networking, and Social Analysis," is available to all readers online: the link
5. FUTURE SPACE MISSIONS MAY LINK MULTIPLE PLATFORMS IN SENSOR NETWORKS
Missions to other planets and moons may one day use combined space, aerial, and ground vehicles to deploy sensors and a communications network more robust and adaptable than current one-vehicle missions, researchers say. The new concept would ensure that the failure of one instrument or vehicle would not doom a mission, say scientists from the California Institute of Technology, the University of Arizona, and the U.S. Geological Survey. The researchers propose multi-tiered robotic space missions that link orbiting spacecraft, blimps, and balloons with ground robots, all of which will carry instruments that can communicate and interact with instruments on the other platforms to exploit local weather and geographic conditions.
Read more: the link.
14. PAPER SUBMISSIONS TO INFORMATION FUSION CONFERENCE DUE MID-JANUARY
Papers to the 9th International Conference on Information Fusion should be submitted by 15 January 2006. The conference, sponsored by the IEEE Aerospace & Electronic Systems Society, seeks papers on advancements and applications in information fusion, particularly those with special emphasis on non-traditional topics. Some areas of interest include foundational tools, algorithmic developments, technological advancements and applications. The conference will take place in Florence, Italy, next June. For more details, visit: the link.
New @ IEEE for Students, November 2005
7. IEEE MAGAZINE EXAMINES HUMAN-MACHINE COMMUNICATION
Speech technology gets the special-focus treatment in the current issue of IEEE Signal Processing Magazine (v. 22, no. 5). The issue contains nine articles around the theme of "Speech Technology in Human-Machine Communication," along with an introduction by the guest editors, who write that "the full potential of speech technology still remains to be uncovered." The table of contents and abstracts for all papers in the current issue can be found in the IEEE Xplore digital library, where subscribers may also access the full text of the articles: the link.
10. NEPTUNE RISING: IEEE SPECTRUM REPORTS
Scientists studying the world's oceans are limited to short trips during times of the year when weather conditions are most favorable, and the underwater instruments they leave behind lack the power and bandwidth to deliver much useful information. But this month, construction begins on an Internet-connected undersea observatory covering hundreds of thousands of square kilometers of sea floor. When the project, called the North-East Pacific Time Series Undersea Networked Experiments (NEPTUNE), is completed in 2007, instruments such as hydrophones, current sensors, high-definition video cameras and even robotic crawlers will deliver data around the clock.
IEEE Spectrum has more: the link.
Wednesday, November 09, 2005
Jim's presentation today
An Application of Markov Random Fields to Range Sensing by Diebel and Thrun, NIPS 2005.
An Introduction to the Conjugate Gradient Method Without the Agonizing Pain by Jonathan Richard Shewchuk. <-- I have not read it yet.
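Since the second reference is about exactly this, here is a bare-bones conjugate gradient solver (a standard textbook routine, not code from either paper): it solves Ax = b for symmetric positive-definite A, which is the kind of linear system a quadratic MRF energy minimization reduces to.

```python
# Plain conjugate gradient for A x = b with symmetric positive-definite A.
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=None):
    x = np.zeros(len(b)) if x0 is None else x0.copy()
    r = b - A @ x                    # residual
    d = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ad = A @ d
        alpha = rs / (d @ Ad)        # exact line search along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d    # next direction, A-conjugate to the previous ones
        rs = rs_new
    return x

# sanity check on a random symmetric positive-definite system
M = np.random.rand(50, 50)
A = M @ M.T + 50 * np.eye(50)
b = np.random.rand(50)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
```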
CMU LTI talk: Natural Language Processing in Bioinformatics: Uncovering Semantic Relations
Speaker: Barbara Rosario, University of California, Berkeley
TITLE: Natural Language Processing in Bioinformatics: Uncovering Semantic Relations
ABSTRACT: Current-generation search engines provide a glimpse of the kinds of activities that can be catalyzed by intelligent processing of large-scale document corpora. Further progress in this area will require the tools of statistical natural language processing, including tools for automatic extraction of propositional information from text. This presentation will explore several lines of research on one of the core problems that arise in this domain---the identification of semantic relations between constituents in sentences. First, I will discuss the problem of identifying relationships between two-word noun compounds (to characterize, for example, the treatment-for-disease relationship between the words of "migraine treatment" versus the method-of-treatment relationship between the words of "aerosol treatment".) Second, I'll describe my work in the area of Information Extraction, in particular the problem of identifying semantic entities such as "treatment" and "disease" from biomedical text. Finally, I will present my recent work on the problem of predicting protein-protein interactions from biological text. A major impediment to such work is the acquisition of appropriately labeled training data; for my experiments I have identified a database that serves as a proxy for training data. In each of these cases I will describe the statistical machine learning methods---both generative and discriminative---used to tackle these tasks.
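A toy illustration of the generative-versus-discriminative contrast mentioned in the abstract (not the speaker's models or data): classify the relation expressed by a noun compound from bag-of-words features, once with naive Bayes and once with logistic regression; the compounds and labels below are invented.

```python
# Generative (naive Bayes) vs. discriminative (logistic regression) relation classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# invented training examples: noun compound -> relation label
compounds = ["migraine treatment", "headache remedy", "flu therapy",
             "aerosol treatment", "laser therapy", "injection remedy"]
labels = ["treatment-for-disease", "treatment-for-disease", "treatment-for-disease",
          "method-of-treatment", "method-of-treatment", "method-of-treatment"]

vec = CountVectorizer()
X = vec.fit_transform(compounds)

generative = MultinomialNB().fit(X, labels)
discriminative = LogisticRegression(max_iter=1000).fit(X, labels)

test = vec.transform(["aerosol remedy"])
print(generative.predict(test)[0], discriminative.predict(test)[0])
```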
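As a rough illustration of the generative-versus-discriminative contrast mentioned in the abstract, the sketch below trains a Naive Bayes model and a logistic regression model to assign a relation label to noun compounds. The toy examples, features, and relation names are invented for illustration only; they are not the speaker's data or methods.

# Hedged sketch: a generative (Naive Bayes) versus a discriminative
# (logistic regression) classifier for noun-compound relation labeling.
# The tiny dataset and label set below are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

compounds = ["migraine treatment", "aerosol treatment",
             "headache therapy", "laser therapy"]
labels = ["treatment-for-disease", "method-of-treatment",
          "treatment-for-disease", "method-of-treatment"]

vec = CountVectorizer()                   # bag-of-words features over the two nouns
X = vec.fit_transform(compounds)

generative = MultinomialNB().fit(X, labels)            # models P(words | relation)
discriminative = LogisticRegression().fit(X, labels)   # models P(relation | words)

test = vec.transform(["steroid treatment"])
print(generative.predict(test), discriminative.predict(test))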
Tuesday, November 08, 2005
CMU RI talk: Social robots, social development, and social disorders
Brian Scassellati
Department of Computer Science
Yale University
Nov.11.2005
Social robots recognize and respond to human social cues with appropriate behaviors. These robots are unique tools in the study of human social development, and have the potential to play a critical role in the diagnosis and treatment of social disorders such as autism.
In the first part of this talk, I present four vignettes on what the practicality of constructing social robots has taught us about human social development. These vignettes cover topics of perceptual development (vocal prosody), sensorimotor development (declarative and imperative pointing), linguistic development (learning pronouns), and cognitive development (self-other discrimination).
The second half will focus on the application of social robots to the diagnosis and therapy of autism. Autism is a pervasive developmental disorder that is characterized by social and communicative impairments. Based on three years of integration and immersion with a clinical research group which performs more than 130 diagnostic evaluations of children for autism per year, I will discuss how social robots will impact the ways in which we diagnose, treat, and understand autism.
Speaker Biography
Brian Scassellati is an assistant professor of Computer Science at Yale University. His research focuses on the construction of humanoid robots that interact with people using natural social cues. These robots are used both to evaluate models of how infants acquire social skills and to assist in the diagnosis and quantification of disorders of social development (such as autism). He is an associate editor of the International Journal of Humanoid Robotics and the program chair for the upcoming 6th International Conference on Development and Learning. In 2003, he was awarded an NSF CAREER award.
Saturday, November 05, 2005
CMU talk: Snake Robots and Stuff that Makes them Go
Speaker: Howie Choset, Associate Professor, Robotics Institute, Carnegie Mellon University
Date: Thursday, November 10
Snake robots, formally called hyper-redundant mechanisms, are highly articulated devices that can use their many internal degrees of freedom to thread through tightly packed volumes accessing locations that people and machinery otherwise cannot. Moreover, the internal degrees of freedom of hyper-redundant mechanisms give them the ability to achieve different forms of mobility, including crawling, climbing and swimming.
The many degrees of freedom that furnish these robots with their benefits also provide their greatest challenges: mechanism design, control, systems integration and power. This talk discusses my group's work in addressing these challenges and overviews future work. It also summarizes some of the applications for snake robots in which my group is active; these applications include urban search and rescue, minimally invasive surgery, inspection of wings, and site characterization of buried tanks.
CNN: MIT maps wireless users across campus
Friday, November 4, 2005; Posted: 9:54 a.m. EST (14:54 GMT)
CAMBRIDGE, Massachusetts (AP) -- In another time and place, college students wondering whether the campus cafe has any free seats, or their favorite corner of the library is occupied, would have to risk hoofing it over there.
But for today's student at the Massachusetts Institute of Technology, that kind of information is all just a click away.
MIT's newly upgraded wireless network -- extended this month to cover the entire school -- doesn't merely get you online in study halls, stairwells or any other spot on the 9.4 million square foot campus.
It also provides information on exactly how many people are logged on at any given location at any given time. It even reveals a user's identity if the individual has opted to make that data public.
MIT researchers did this by developing electronic maps that track across campus, day and night, the devices people use to connect to the network, whether they're laptops, wireless PDAs or even Wi-Fi equipped cell phones.
The maps were unveiled this week at the MIT Museum, where they are projected onto large Plexiglas rectangles that hang from the ceiling. They are also available online to network users, the data time-stamped and saved for up to 12 hours.
Red splotches on one map show the highest concentration of wireless users on campus. On another map, yellow dots with names written above them identify individual users, who pop up in different places depending where they're logged in.
"With these maps, you can see down to the room on campus how many people are logged on," said Carlo Ratti, director of the school's SENSEable City Laboratory, which created the maps. "You can even watch someone go from room to room if they have a handheld device that's connected."
Researchers use log files from the university's Internet service provider to construct the maps. The files indicate the number of users connected to each of MIT's more than 2,800 access points. The map that can pinpoint locations in rooms is 3-D, so researchers can even distinguish connectivity in multistoried buildings.
"Laptops and Wi-Fi are creating a revolutionary change in the way people work," Ratti said. The maps aim to "visualize these changes by monitoring the traffic on the wireless network and showing how people move around campus."
Some of the results so far aren't terribly surprising for students at the vanguard of tech innovation.
The maps show, for example, that the bulk of wireless users late at night and very early in the morning are logged on from their dorms. During the day, the higher concentration of users shifts to classrooms.
But researchers also found that study labs that once bustled with students are now nearly empty as people, no longer tethered to a phone line or network cable, move to cafes and nearby lounges, where food and comfy chairs are more inviting.
Researchers say this data can be used to better understand how wireless technology is changing campus life, and what that means for planning spaces and administering services.
The question has become, Ratti said, "If I can work anywhere, where do I want to work?"
"Many cities, including Philadelphia, are planning to go wireless. Something like our study will help them understand usage patterns and where best to invest," said researcher Andres Sevtsuk.
Sevtsuk likened the mapping project to a real-time census.
"Instead of waiting every year or every 10 years for data, you have new information every 15 minutes or so about the population of the campus," he said.
While every device connected to the campus network via Wi-Fi is visible on the constantly refreshed electronic maps, the identity of the users is confidential unless they volunteer to make it public.
Those students, faculty and staff who opt in are essentially agreeing to let others track them.
"This raises some serious privacy issues," Ratti said. "But where better than to work these concerns out but on a research campus?"
Rich Pell, a 21-year-old electrical engineering senior from Spartanburg, South Carolina, was less than enthusiastic about the new system's potential for people monitoring. He predicted not many fellow students would opt into that.
"I wouldn't want all my friends and professors tracking me all the time. I like my privacy," he said.
"I can't think of anyone who would think that's a good idea. Everyone wants to be out of contact now and then."
MIT talk: The subjective nature of straight lines: shortest paths for mobile Robots
Speaker: Matthew T. Mason, Director, Robotics Institute, CMU
Date: Tuesday, November 8 2005
Abstract:
One way to define a straight line for a mobile robot is to put a bound on the robot's velocity, and then solve for the time-optimal paths using Pontryagin's maximum principle. Different types of mobile robots yield different solutions, corresponding to different notions of straight lines and distance. The resulting robot-specific metrics are useful for motion planning.
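To make the idea concrete, here is a hedged sketch of how a velocity bound induces a robot-specific notion of distance: for an omnidirectional robot the cost between two poses is just Euclidean distance divided by its speed bound, while a simple turn-drive-turn maneuver gives an upper bound on the cost for a differential-drive robot. The maneuver and the speed bounds are illustrative assumptions, not the time-optimal solutions derived in the talk.

# Hedged sketch: different velocity bounds induce different "straight lines".
# V_MAX and W_MAX are assumed bounds; turn-drive-turn is only an upper bound
# on the true time-optimal cost for a differential-drive robot.
import math

V_MAX = 1.0    # m/s, assumed translational speed bound
W_MAX = 1.0    # rad/s, assumed rotational speed bound

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def omni_cost(p, q):
    """Time for a velocity-bounded omnidirectional robot: straight-line dash."""
    return math.hypot(q[0] - p[0], q[1] - p[1]) / V_MAX

def diff_drive_cost(p, q):
    """Turn toward the goal, drive straight, then turn to the goal heading."""
    px, py, ptheta = p
    qx, qy, qtheta = q
    heading = math.atan2(qy - py, qx - px)
    turn1 = abs(wrap(heading - ptheta))
    drive = math.hypot(qx - px, qy - py)
    turn2 = abs(wrap(qtheta - heading))
    return (turn1 + turn2) / W_MAX + drive / V_MAX

start, goal = (0.0, 0.0, 0.0), (2.0, 2.0, math.pi / 2)
print(omni_cost(start, goal), diff_drive_cost(start, goal))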
Friday, November 04, 2005
CMU talk: Who Am I If a Robot Can Do My Job?
"Who Am I If a Robot Can Do My Job?
Identity’s Impact on Pre-Implementation Sensemaking and Subsequent Use of New Technology"
Pamela Hinds
November 09
Abstract
This talk will focus on research that I’ve been doing with Rosanne Siino and others on how people make sense of robots in the work environment. Based on an ethnographic study of the introduction of an autonomous mobile robot into a community hospital, we argue that sensemaking begins prior to the implementation of new technology, as actors learn about and prepare for the arrival of a technology. Using data collected during a community hospital’s pre-implementation of an autonomous mobile robot, we propose that the sensemaking process triggered by a technology’s anticipated introduction into an organization can commit people to certain understandings of the technology that impact its subsequent use. During the pre-implementation phase, individuals make sense of the technology by drawing on cognitive frames related to self- and organizational identities. Individuals take public actions during sensemaking, subsequently justifying those actions, with justifications leading to the actions’ repetition - a cycle that lays the seeds for the reinforcement, transformation and creation of structures. I will discuss the implications of this process for technology design, adoption and use within organizations.
Pamela J. Hinds is an Associate Professor with the Center on Work, Technology, & Organization in the Department of Management Science & Engineering, Stanford University. She conducts research on the effects of technology on groups. Much of her research has focused on the dynamics of geographically distributed work teams, particularly those spanning national boundaries. Most recently, Pamela has been conducting research on professional service robots in the work environment, examining how people make sense of them and how they affect work practices. She serves on the editorial board of Organization Science and is co-editor with Sara Kiesler of the book Distributed Work (MIT Press). Her research has appeared in journals such as Organization Science, Research in Organizational Behavior, Human-Computer Interaction, Journal of Applied Psychology, Journal of Experimental Psychology: Applied, and Organizational Behavior and Human Decision Processes.
Thursday, November 03, 2005
MIT talk: Representations and Algorithms for Monitoring Dynamic Systems
Speaker: Avi Pfeffer , Harvard University
Date: Thursday, November 3 2005
Continually monitoring the state of a dynamic system is an important problem for artificial intelligence. Dynamic Bayesian networks (DBNs) provide for compact representation of probabilistic dynamic models. However the monitoring task is extremely difficult even for well-factored DBNs. Therefore approximate monitoring algorithms are needed. One family of approximate monitoring algorithms is based on the idea of factoring the joint distribution over the state of the system into a product of distributions over factors consisting of subsets of variables. Factoring relies on the notion of weak interaction between subsystems. We identify a new notion of weak interaction called separability, and show that it leads to the property that, in order to compute the factor distributions at one point in time, only the factored distributions at the previous time point are needed. We also define an approximate form of separability. We show that separability and approximate separability lead to very good approximations for the monitoring task.
Unfortunately, sometimes the factoring approach is computationally infeasible. An alternative approach to approximate monitoring is particle filtering (PF), in which the joint distribution over the state of the system is approximated by a set of samples, or particles. In high dimensional spaces, the variance of PF is high and too many particles are required to provide good performance. We improve the performance of PF by introducing factoring, maintaining particles over factors instead of the global state space. This has the effect of reducing the variance of PF and so reducing its error. Maintaining factored particles also allows us to improve PF by looking ahead to future evidence before deciding which particles to propagate, thus leading to much better accuracy.
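For reference, the sketch below is a minimal bootstrap particle filter of the kind the abstract contrasts with. The 1-D random-walk transition model, noise levels, and observation sequence are invented for illustration; the factored-particle and look-ahead variants described in the talk would build on this propagate-weight-resample skeleton.

# Minimal bootstrap particle filter sketch for monitoring a hidden state.
# The toy 1-D random-walk model and noise levels are assumptions made for
# illustration, not the systems studied in the talk.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                       # number of particles
PROC_STD, OBS_STD = 0.5, 1.0   # assumed process / observation noise

def pf_step(particles, observation):
    # 1. Propagate each particle through the transition model.
    particles = particles + rng.normal(0.0, PROC_STD, size=N)
    # 2. Weight particles by the observation likelihood.
    weights = np.exp(-0.5 * ((observation - particles) / OBS_STD) ** 2)
    weights /= weights.sum()
    # 3. Resample to concentrate particles in high-probability regions.
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx]

particles = rng.normal(0.0, 1.0, size=N)   # prior over the initial state
for z in [0.2, 0.7, 1.1, 1.6]:             # a made-up observation sequence
    particles = pf_step(particles, z)
    print(f"state estimate after z={z}: {particles.mean():.2f}")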