Robots to the rescue

Robots equipped with navigational aids could one day do dangerous jobs for soldiers and help the emergency services to save lives. Ben Sampson reports
Robots are useful to have around, especially for any dull, dirty or dangerous jobs we humans don't want to do. In industry they perform many repetitive tasks better than humans ever could, although their usefulness in areas outside industry has always been held back by technological limitations.
But autonomous robots that can look after themselves are finally promising to take on the kinds of roles previously reserved for science fiction. Research is taking off, in recognition of the large number of potential uses for autonomous robots: search and rescue, plant monitoring, automotive driving aids, and even space exploration.
The Demonstration of Robot Autonomy project, or DORA, is funded by the Ministry of Defence through its Defence Technology Centre programme. It aims to produce an autonomous robot that can enter a room, explore it, construct a structural representation of it, and then leave. DORA shares the basic aim of the MoD's recent Grand Challenge competition, which encouraged inventors to develop robots that could be sent into dangerous situations instead of soldiers. But because the DORA robot operates indoors, it has no GPS signal to navigate by.
The DORA robot has to navigate using information derived solely from the environment in front of it. Chris Harris, principal consultant on the project from Roke Manor Research, devised the "structure from motion" image processing technology that enables the robot to do just that. Harris has impressive credentials: he was part of the team that gave us number plate recognition, and he also developed the Hawk-Eye ball-tracking system used to referee cricket and tennis matches.
Surprisingly, DORA does not use an expensive suite of sensors, but just one black-and-white video camera. "By seeing the world move as the camera moves around we can understand the motion of the vehicle and the structure of the world, where the floor is, where the obstacles are, and where the ceiling is," he explains.
The software uses a "point cloud", a set of points located in 3D space, rendered as colour-coded pixels that indicate height. It also extracts local patches from the video image, wherever there is a distinctive distribution of bright and dark pixels, to help it detect edges, corners and obstacles. "These are tracked and put in the structure from motion algorithm to determine where the robot is and how it is moving," says Harris.
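Harris is best known in computer vision for the corner detector that bears his name, and the patch extraction he describes works along broadly similar lines. As a purely illustrative sketch, not DORA's actual code, the Harris response picks out pixels where the image intensity varies strongly in two directions at once, which is what makes a patch trackable from frame to frame:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def harris_response(gray, k=0.05, window=5):
        # Harris corner response for a greyscale frame (2D float array).
        iy, ix = np.gradient(gray.astype(float))      # intensity gradients
        # Structure tensor entries, averaged over a local window
        ixx = uniform_filter(ix * ix, window)
        iyy = uniform_filter(iy * iy, window)
        ixy = uniform_filter(ix * iy, window)
        det = ixx * iyy - ixy * ixy
        trace = ixx + iyy
        return det - k * trace * trace                # large positive = corner-like

    # Keep the strongest responses as candidate patches to track
    gray = np.random.rand(120, 160)                   # stand-in for a video frame
    resp = harris_response(gray)
    corners = np.argwhere(resp > 0.01 * resp.max())   # (row, col) positions

Patches with a strong response can be matched between successive frames, and the resulting tracks are what a structure from motion algorithm consumes.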
Using this information, the robot is able to build up a 3D plan of the room, tell where obstacles are, and determine a free path as it travels. Essentially, Harris explains, the robot judges distance in the same way that humans do as they move. "When you look at the world you see it in 3D, but it's not in 3D when it hits your eyes. The three dimensions are all constructed in your head, by similar sorts of processes which turn the 2D into 3D in DORA," he says.
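The geometric core of that 2D-to-3D step is triangulation: once the same point has been tracked across two frames and the camera's motion between them is estimated, the point's 3D position falls out of a small linear system. The following is a minimal sketch of the standard linear method, not Roke's own implementation, and it assumes the two camera poses are given as 3x4 projection matrices:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Recover a 3D point from its pixel positions x1, x2 (each (u, v))
        # in two views with known 3x4 projection matrices P1 and P2.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)        # least-squares solution of A @ X = 0
        X = vt[-1]
        return X[:3] / X[3]                # homogeneous -> ordinary coordinates

Repeat this for every tracked patch and the result is the kind of colour-coded point cloud Harris describes.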
But the team did not attempt to replicate "higher vertebrate" visual processing, because of its overwhelming complexity and its differences from machine vision. For example, our eyes run continuously, whereas cameras operate frame by frame. Most of us also see in stereo, which makes judging distance easier. Two-camera stereo vision is the next step for the Roke team, which is currently developing a "man portable" stereo system, mounted on a helmet, to support manned exploration and mapping activities.
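Stereo helps because, with two calibrated cameras, depth comes from a single measurement rather than from motion over time: a point's horizontal shift between the left and right images, its disparity, fixes its distance. As an illustrative example of the underlying geometry (the numbers below are made up, not from the Roke system):

    def stereo_depth(disparity_px, focal_px, baseline_m):
        # Depth from a calibrated stereo pair: Z = f * B / d, where f is the
        # focal length in pixels, B the camera separation and d the disparity.
        return focal_px * baseline_m / disparity_px

    # A feature shifted 20 pixels between cameras 10cm apart (focal length 800px)
    print(stereo_depth(20, 800, 0.10))    # -> 4.0 metres away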
Harris says he would also like to improve the robot's mobility. It is built around a hobbyist remote-control platform, and can only climb the height of a pencil. Another limitation being worked on is its ability to track moving objects, something it cannot yet do accurately. "The real proof of the pudding is controlling the vehicle. If you can do that without bumping into things and getting lost you really have done the job," he says.
A robot reliable enough to be used in real-life situations will take between three and five years to develop, he adds. MoD funding is in place for the next five years, so the technology could be in the field with the army, saving lives, within that time.
Another research initiative aims to develop search and rescue robots to assist fire and emergency services. The EU-funded Viewfinder and Guardian projects are being run from Sheffield Hallam University by Dr Jacques Penders, in collaboration with other universities across Europe. The Viewfinder robot will be remotely controlled, while Guardian will be autonomous. The robots must be able to scout ahead of firemen in a warehouse, assessing structural integrity and detecting toxic chemicals and sources of smoke.
The robots, which are 30cm in diameter and 20cm high, move in swarms and are linked together wirelessly to form a communications network. They are fitted with infrared and ultrasound sensors and video cameras. The first Guardian robots were tested in a warehouse in Catalonia, Spain, last month. According to Penders, the robots "worked a little bit".
"The Guardian robots have to do the normal navigation, but the smoke also adds to the challenge because of the restricted view. Cameras and infrared sensors are not suitable," says Penders.
To cope with the smoke, the researchers have devised technology that enables the robots to work out their positions relative to each other using proximity sensors and the wireless communications network. Although this is the most reliable method available, its accuracy is poor, says Penders, with errors of 50 to 100cm typical. The aim is to develop a hierarchy of information sources that will let the robots navigate according to the environmental conditions.
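The positioning problem the Guardian robots face is essentially trilateration: each robot measures its range to a few neighbours whose positions are known and solves for its own. A minimal sketch of the standard least-squares version follows; it is illustrative only, since the project's own method is not published in this detail:

    import numpy as np

    def multilaterate(anchors, dists):
        # Least-squares position from noisy ranges to known neighbour positions.
        # anchors: (n, 2) array of neighbour (x, y) coordinates, n >= 3
        # dists:   (n,) measured distances to each neighbour
        # Subtracting the first range equation from the rest linearises the system.
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (dists[0] ** 2 - dists[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Three neighbours at assumed positions; range noise of roughly 30cm
    # produces position errors comparable to those Penders cites.
    anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
    true_pos = np.array([1.5, 2.0])
    noisy = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.3, 3)
    print(multilaterate(anchors, noisy))  # near (1.5, 2.0), off by tens of cm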
Penders says it will be five years before the robots start saving lives.