From left, graduate students Ashutosh Saxena and Morgan Quigley and Assistant Professor Andrew Ng were part of a large effort to develop a robot that can see an unfamiliar object and ascertain the best spot to grasp it.
Stanford scientists plan to build a robot capable of performing everyday tasks, such as unloading the dishwasher. By programming the robot with "intelligent" software that enables it to pick up objects it has never seen before, the scientists are one step closer to creating a real-life Rosie, the robot maid from The Jetsons cartoon show.
"Within a decade we hope to develop the technology that will make it useful to put a robot in every home and office," said Andrew Ng, an assistant professor of computer science who is leading the wireless Stanford Artificial Intelligence Robot (STAIR) project.
"Imagine you are having a dinner party at home and having your robot come in and tidy up your living room, finding the cups that your guests left behind your couch, picking up and putting away your trash and loading the dishwasher," Ng said.
Cleaning up a living room after a party is just one of four challenges the project has set out to have a robot tackle. The other three are fetching a person or object from an office upon verbal request, showing guests around a dynamic environment and assembling an IKEA bookshelf using multiple tools.
Developing a single robot that can solve all these problems takes a small army of about 30 students and 10 computer science professors—Gary Bradski, Dan Jurafsky, Oussama Khatib, Daphne Koller, Jean-Claude Latombe, Chris Manning, Ng, Nils Nilsson, Kenneth Salisbury and Sebastian Thrun.
From Shakey to Stanley and beyond
Stanford has a history of leading the field of artificial intelligence. In 1966, scientists at the Stanford Research Institute built Shakey, the first robot to combine problem solving, movement and perception. Flakey, a robot that could wander independently, followed. In 2005, Stanford engineers won the Defense Advanced Research Projects Agency (DARPA) Grand Challenge with Stanley, a robot Volkswagen that autonomously drove 132 miles through a desert course.
The ultimate aim for artificial intelligence is to build a robot that can create and execute plans to achieve a goal. "The last serious attempt to do something like this was in 1966 with the Shakey project led by Nils Nilsson," Ng said. "This is a project in Shakey's tradition, done with 2006 technology instead of 1966 AI technology."
To succeed, the scientists will need to unite fragmented research areas of artificial intelligence including speech processing, navigation, manipulation, planning, reasoning, machine learning and vision. "There are these disparate AI technologies and we'll bring them all together in one project," Ng said.
The real challenge lies in making a robot independent. Industrial robots can follow precise scripts to the point of balancing a spinning top on a blade, he said, but problems arise when a robot is asked to perform a new task. "Balancing a spinning top on the edge of a sword is a solved problem, but picking up an unfamiliar cup is an unsolved problem," Ng explained.
His team recently designed an algorithm that allowed STAIR to recognize familiar features in different objects and select the right grasp to pick them up. The robot was trained in a computer-generated environment to pick up five items—a cup, pencil, brick, book and martini glass. The algorithm locates the best place for the robot to grasp an object, such as a cup's handle or a pencil's midpoint. "The robot takes a few pictures, reasons about the 3-D shape of the object to compute the best grasp location, and reaches out and grasps the object," Ng said.
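In machine-learning terms, that pipeline amounts to scoring candidate image locations with a classifier trained on rendered examples of good grasp points and then reaching for the highest-scoring spot. The sketch below is only a minimal illustration of that idea, not the STAIR team's code: the toy patch features, the random stand-in for the synthetic training set and every name in it are assumptions made for clarity.

```python
# Minimal sketch of learning to predict grasp points from image patches.
# All features, data and names here are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def patch_features(patch):
    """Toy features for a small grayscale patch: mean intensity, variance,
    and simple horizontal/vertical gradient energies."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.var(), np.abs(gx).mean(), np.abs(gy).mean()])

# Stand-in for rendered training images: random patches labeled 1 if they
# were sampled around an annotated grasp point (e.g. a cup handle), else 0.
patches = rng.random((500, 16, 16))
labels = rng.integers(0, 2, size=500)          # placeholder annotations
X = np.stack([patch_features(p) for p in patches])

clf = LogisticRegression().fit(X, labels)      # grasp-point classifier

# At test time, score every candidate patch in a new image and pick the
# location the classifier rates as most "graspable".
test_patches = rng.random((50, 16, 16))
scores = clf.predict_proba(np.stack([patch_features(p) for p in test_patches]))[:, 1]
best = int(np.argmax(scores))
print(f"best candidate patch: {best}, grasp score: {scores[best]:.2f}")
```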
In tests with real objects, the robotic arm picked up items similar to those on which it trained, such as cups and books, as well as unfamiliar objects including keys, screwdrivers and rolls of duct tape. To grasp a roll of duct tape, the robot employs an algorithm that evaluates the image against all of its prior grasping strategies. "The roll of duct tape looks a little like a cup handle and also a little bit like a book," Ng said. The program computes the best place to grasp based on a combination of all the robot's prior experiences and tells the arm where to go. "It would be a hybrid, or a combination of all the different grasping strategies that it has learned before," Ng said.
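The "hybrid" strategy Ng describes can be pictured as a weighted blend of the grasp points suggested by each learned object category, with the weights set by how much the novel object resembles each category. The snippet below is a speculative illustration of that blending step; the per-class predictors, similarity measure and numbers are invented for the example and do not come from the project.

```python
# Illustrative blending of grasp strategies learned from known object
# classes to handle a novel object (e.g. a roll of duct tape).
import numpy as np

# Hypothetical per-class grasp predictors: each maps an object's feature
# vector to a preferred grasp point in the image (pixel coordinates).
def cup_strategy(f):    return np.array([120.0, 80.0])   # aim near a handle-like rim
def book_strategy(f):   return np.array([100.0, 150.0])  # aim at a flat edge
def pencil_strategy(f): return np.array([90.0, 90.0])    # aim at the midpoint

strategies = {"cup": cup_strategy, "book": book_strategy, "pencil": pencil_strategy}

# Hypothetical class prototypes in the same feature space as the novel object.
prototypes = {"cup": np.array([0.8, 0.3]), "book": np.array([0.2, 0.9]),
              "pencil": np.array([0.1, 0.1])}

def blended_grasp(features):
    """Weight each learned strategy by how much the novel object resembles
    that class, then average the suggested grasp points."""
    sims = {c: np.exp(-np.linalg.norm(features - p)) for c, p in prototypes.items()}
    total = sum(sims.values())
    weights = {c: s / total for c, s in sims.items()}
    point = sum(weights[c] * strategies[c](features) for c in strategies)
    return point, weights

duct_tape_features = np.array([0.6, 0.5])   # a bit like a cup, a bit like a book
point, weights = blended_grasp(duct_tape_features)
print("blend weights:", {c: round(w, 2) for c, w in weights.items()})
print("proposed grasp point (px):", point.round(1))
```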
The word "robot" originates from a Slavic word meaning "toil," and robots may soon reduce the amount of drudgery in our daily lives. "I think if we can have a robot intelligent enough to do these things, that will free up vast amounts of human time and enable us to go to higher goals," Ng said.
Funding for the project has come from the National Science Foundation, DARPA and industrial technology companies Intel, Honda, Ricoh and Google.
Brian D. Lee is a science writing intern with the Stanford News Service.