Moving the technology toward these so-called “social robots” are researchers in a variety of disciplines engaged in the growing field of human-robot interaction (HRI). To explore some of the challenges in realizing the potential of HRI, Microsoft Research launched the “Robots Among Us” request for proposals (RFP) last October with the bold declaration, “The robots are coming!”
Eight winners will receive a share of more than US$500,000 awarded under the program. The winning research proposals were selected from 74 submissions by academic researchers in 24 countries. The projects explore a broad range of devices, technologies and functions as robots begin to work with and alongside human beings.
- “Snackbot: A Service Robot,” Jodi Forlizzi and Sara Kiesler, Carnegie Mellon University. Snackbot will roam the halls of two large office buildings at Carnegie Mellon University, selling (or in some cases, giving away) snacks and performing other services. Microsoft’s grant will help the team link its current robot prototype to the Web, e-mail, instant messaging and mobile services. The group will also deploy the robot in a field study to understand the uptake of robotic products and services.
- “Human-Robot-Human Interface for an Autonomous Vehicle in Challenging Environments,” Ioannis Rekleitis and Gregory Dudek, McGill University, Canada. Utilizing Microsoft Robotics Studio, this group will build an interface for controlling a robot that operates both on land and underwater, as well as a visualization tool for interpreting its visual feedback. The work will also create a new method for communicating with AQUA when a direct link to a controlling console is not available.
- “Personal Digital Interfaces for Intelligent Wheelchairs,” Nicholas Roy, Massachusetts Institute of Technology. Using a Windows Mobile PDA outfitted with a remote microphone and speech processor, this group will create a single, flexible point of interaction for controlling wheelchairs. The project will address human-robot interaction challenges arising from how the spatial context of the interaction varies with the location of the wheelchair, the hand-held device and the resident. This project is part of an ongoing collaboration with a specialized care residence in Boston.
- “Human-Robot Interaction to Monitor Climate Change via Networked Robotic Observatories,” Dezhen Song, Texas A&M University, and Ken Goldberg, University of California, Berkeley. This team will develop a new Human-TeleRobot system that engages the public in documenting the effects of climate change on natural environments and wildlife, and provides a testbed for the study of human-robot interaction. To facilitate this, a new type of human-robot system will be built to allow anyone with a browser to participate in viewing and collecting data via the Internet. The interface will combine telerobotic cameras and sensors with a competitive game in which “players” score points by taking photos and classifying the photos of others.
- “FaceBots: Robots Utilizing and Publishing Social Information in Facebook,” Nikolaos Mavridis and Tamer Rabie, United Arab Emirates University. The system Mavridis and Rabie will develop is expected to achieve two significant novelties: it will arguably be the first robot truly embedded in a social web, and the first that can purposefully exploit and create social information available online. It is also expected to provide empirical support for their main hypothesis: that the formation of shared episodic memories within a social web can lead to more meaningful long-term human-robot relationships.
- “Multi-Touch Human-Robot Interaction for Disaster Response,” Holly Yanco, University of Massachusetts. This group wants to create a common computing platform that can interact with many different information systems, with personnel of differing backgrounds and expertise, and with robots deployed for a variety of tasks in the event of a disaster. The proposed research intends to bridge these technological gaps through collaborative tabletop multi-touch displays such as the Microsoft Surface. The group will develop an interface between the multi-touch display and Microsoft Robotics Studio, creating a multi-robot interface that lets command staff monitor and interact with all of the robots deployed at a disaster response.
- “Survivor Buddy: A Web-Enabled Robot as a Social Medium for Trapped Victims,” Robin Murphy, University of South Florida. The main focus of this group is assisting humans who will be dependent on a robot for long periods of time. One function is to provide two-way audio communication between the survivor and emergency response personnel. Other ideas are being studied, such as playing therapeutic music with a beat designed to regulate heartbeat or breathing. The idea is that a web-enabled, multimedia robot allows: 1) the survivor to take some control over the situation and find a soothing activity while waiting for extrication; and 2) responders to support and influence the victim’s state of mind.
- “Prosody Recognition for Human-Robot Interaction,” Brian Scassellati, Yale University. This group will build a novel prosody-recognition algorithm for release as a component for Microsoft Robotics Studio. Vocal prosody is the information in one’s tone of voice that conveys affect, and it is a critical aspect of human-human interaction. To move beyond direct control of robots toward autonomous social interaction between humans and robots, robots must be able to construct models of human affect by indirect, social means.
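Pitch is one of the core signals a prosody recognizer tracks. As a rough illustration (a toy sketch, not the Yale group’s algorithm; the function name and parameters here are invented for the example), the snippet below estimates the fundamental frequency of a short audio frame by autocorrelation, the kind of low-level feature a prosody model might build on:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) of one frame via autocorrelation.

    A toy illustration of a single prosody feature; real prosody
    recognition also tracks energy, timing and pitch contours over time.
    """
    n = len(samples)
    # Only search lags that correspond to the plausible pitch range.
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), n // 2)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max):
        # Correlation of the signal with a delayed copy of itself;
        # it peaks when the delay matches the pitch period.
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Usage on a synthetic 220 Hz tone (no real audio required).
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(2048)]
pitch = estimate_pitch(tone, sr)
```

A deployed system would run this frame by frame and feed the resulting pitch contour, together with loudness and timing features, into a classifier that maps them to affect categories.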