A camera that can shoot around corners has been developed by US scientists.
The prototype uses an ultra-short high-intensity burst of laser light to illuminate a scene.
The device constructs a basic image of its surroundings - including objects hidden around the corner - by collecting the tiny amounts of light that bounce around the scene.
The Massachusetts Institute of Technology team believe it has uses in search and rescue and robot vision.
"It's like having x-ray vision without the x-rays," said Professor Ramesh Raskar, head of the Camera Culture group at the MIT Media Lab and one of the team behind the system.
"But we're going around the problem rather than going through it."
Professor Shree Nayar of Columbia University, an expert in light scattering and computer vision, was very complimentary about the work and said it was a new and "very interesting research direction".
"What is not entirely clear is what complexities of invisible scenes are computable at this point," he told BBC News.
"They have not yet shown recovery of an entire [real-world] scene, for instance."
Flash trick

Professor Raskar said that when he started research on the camera three years ago, senior people told him it was "impossible".
However, working with several students, he is turning the idea into a reality.
The heart of the room-sized camera is a femtosecond laser, a high-intensity light source which can fire ultra-short bursts of laser light that last just one quadrillionth of a second (that's 0.000000000000001 seconds).
Such lasers are more commonly used by chemists to image reactions at the atomic or molecular scale.
For the femtosecond transient imaging system, as the camera is known, the laser is used to fire a pulse of light onto a scene.
The light particles scatter and reflect off all surfaces including the walls and the floor.
If there is a corner, some of the light will be reflected around it. It will then continue to bounce around the scene, reflecting off objects - or people - hidden around the bend.
Some of these particles will again be reflected back around the corner to the camera's sensor.
Here, the work is all about timing.
Following the initial pulse of laser light, the camera's shutter remains closed to stop its precise sensors being overwhelmed by the first high-intensity reflections.
This method - known as "time-gating" - is commonly used by cameras in military surveillance aircraft to peer through dense foliage.
In these systems, the shutter remains closed until after the first reflections off the tops of the trees. It then opens to collect reflections from hidden vehicles or machinery beneath the canopy.
Similarly, the experimental camera shutter opens once the first reflected light has passed, allowing it to mop up the ever-decreasing amounts of reflected light - or "echoes" as Prof Raskar calls them - from the scene.
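As a rough illustration of the idea (a sketch, not the MIT team's own code), the snippet below simulates time-gating: samples that arrive before a chosen gate-open time are discarded, and only the later, weaker echoes are kept. The timings, intensities and the time_gate helper are invented for the example.

```python
# Minimal, hypothetical sketch of time-gating: keep only light samples
# that arrive after the gate opens, discarding the strong first bounce.
# All timings and data are invented for illustration only.
import numpy as np

def time_gate(arrival_times_ns, intensities, gate_open_ns):
    """Return only the samples recorded after the shutter (gate) opens."""
    arrival_times_ns = np.asarray(arrival_times_ns, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    keep = arrival_times_ns >= gate_open_ns  # drop the early, high-intensity reflections
    return arrival_times_ns[keep], intensities[keep]

# Example: the direct reflection at ~1 ns would swamp the sensor;
# the later, fainter "echoes" from around the corner are what we keep.
times = [1.0, 2.4, 3.1, 4.7]       # arrival times in nanoseconds (illustrative)
signal = [900.0, 5.0, 2.5, 1.2]    # arbitrary intensity units (illustrative)
late_times, late_signal = time_gate(times, signal, gate_open_ns=2.0)
print(late_times, late_signal)     # -> [2.4 3.1 4.7] [5.  2.5 1.2]
```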
Unlike a standard camera, which just measures the intensity and position of the light particles as they hit the sensor, the experimental set-up also measures the arrival time of the particles at each pixel.
This is the central idea used in so-called "time-of-flight cameras" or Lidar (Light Detection And Ranging) that can map objects in the "line of sight" of the camera.
Lidar is commonly used in military applications and has been put to use by Google's Street View cars to create 3D models of buildings.
Professor Raskar calls his set-up a "time-of-flight camera on steroids".
Both use the speed of light and the arrival time of each particle to calculate the so-called "path length" - or distance travelled - of the light.
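As a simple worked illustration of that calculation (a sketch using the basic relation distance = speed of light x time, not the team's code), the snippet below converts per-pixel arrival times into path lengths; the example times and the path_lengths helper are purely illustrative.

```python
# Hypothetical sketch: convert per-pixel photon arrival times into
# path lengths using distance = speed of light x time.
import numpy as np

C = 299_792_458.0  # speed of light in metres per second

def path_lengths(arrival_times_s):
    """Distance travelled (metres) by the light recorded at each pixel."""
    return C * np.asarray(arrival_times_s, dtype=float)

# A photon detected 10 nanoseconds after the pulse has travelled about 3 metres.
print(path_lengths([10e-9, 20e-9]))  # -> [2.99792458 5.99584916]
```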
To build a picture of a scene, the experimental set-up must repeat the process of firing the laser and collecting the reflections several times. Each pulse is fired at a slightly different point and takes just billionths of a second to complete.
"We need to do it at least a dozen times," said Professor Raskar. "But the more the better."
It then uses complex algorithms - similar to those used in medical CAT scans - to construct a probable 3D model of the surrounding area, including objects that may be hidden around the corner.
"In the same way that a CAT scan can reveal what is inside the body by taking multiple photographs using an x-ray source in different positions, we can recover what is beyond the line of sight by shining the laser at different points on a reflective surface," he said.
Look ahead

At the moment, the set-up only works in controlled laboratory conditions and can get confused by complex scenes.
"It looks like they are very far from handling regular scenes," said Prof Nayar.
In everyday situations, he said, the system may compute "multiple solutions" for an image, largely because it relied on such small amounts of light and it was therefore difficult to extrapolate the exact path of the particle as it bounced around a room.
"However, it's a very interesting first step," he said.
It would now be interesting to see how far the idea could be pushed, he added.
Professor Raskar said there are "lots of interesting things you can do with it.
"You could generate a map before you go into a dangerous place like a building fire, or a robotic car could use the system to compute the path it should take around a corner before it takes it."
However, he said, the team initially aim to use the system to build an advanced endoscope.
"It's an easy application to target," he said. "It's a nice, dark environment."
If the team get good results from their trials, he said, they could have a working endoscope prototype within two years.
"That would be something that is room-sized," he said. "Building something portable could take longer."
Additional reporting in video by Matthew Danzico.