3D Camera Will Have 12,616 Lenses and Could Fundamentally Change Photography

By Chris Hogg     Mar 23, 2008 in Technology
Your camera has saved all those precious baby pictures, soccer games, graduations and weddings. But they're flat and two-dimensional. If your digital camera saw the world through thousands of lenses, you could have saved incredible 3D memories.
Digital Journal -- Researchers at Stanford University are working on a new camera technology that could add a new dimension into your life. Well, at least into your photography.
Modern day cameras take fantastic images, but they're never more than a flat print-out or a two-dimensional image on a computer screen. Many film-makers have dabbled with two cameras, or a camera with two lenses, in order to reproduce a three-dimensional image. But what would that same image look like with thousands of lenses from a miniature camera? That is exactly what a few Stanford researchers are thinking about.
The prototype camera shoots regular 2D images, but it also creates a "depth map" that remembers distances from the camera to every object in your photo.
Led by electrical engineering Professor Abbas El Gamal, Stanford electronics researchers are developing this super 3D camera built around a "multi-aperture image sensor." This is a story of science meeting art in a geeky, brilliant and beautiful way: They shrank the pixels on the sensors to 0.7 microns (much smaller than in a standard digital camera), grouped them in arrays of 256 pixels each, and will place a lens on top of each array. So what does the highly technical explanation mean for you? Three dimensions.
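The arithmetic behind those numbers is easy to check. A minimal sketch (assuming each 256-pixel array is a 16-by-16 square, which the release does not spell out):

```python
# Each array of 256 pixels sits under its own tiny lens, so the number of
# mini-cameras times the pixels per array gives the chip's total pixel count.
pixels_per_array = 16 * 16          # 256 pixels under each micro-lens (assumed 16x16)
mini_cameras = 12616                # figure quoted for the prototype chip
total_pixels = mini_cameras * pixels_per_array
print(total_pixels)                 # roughly 3.2 million pixels -- the "3 megapixel" chip
```

The product comes to about 3.2 million pixels, which lines up with the "3 megapixel" prototype the researchers describe.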
"It's like having a lot of cameras on a single chip," Keith Fife, a graduate student working with El Gamal, said in a news release. If researchers can get the prototype 3 megapixel chip to work, it would give them 12,616 cameras in one.
As the news release reads: "Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes."
Researchers also say a depth-information camera would open up new possibilities for 3D modelling of buildings, biological imaging, 3D printing and more. The technology could be big for other industries like robotics, where robots with better spatial vision than humans could take on jobs unimaginable right now. The cameras could also be made small enough to fit in cellphones.
Philip Wong, Abbas El Gamal and Keith Fife are developing a digital camera that sees the world through thousands of tiny lenses, providing an electronic “depth map” containing the distance from the camera to every object in the picture. - Photo by L.A. Cicero, courtesy Stanford
For the more technical-minded reader who wants to know how exactly this works:
The main lens (also known as the objective lens) of an ordinary digital camera focuses its image directly on the camera's image sensor, which records the photo. The objective lens of the multi-aperture camera, on the other hand, focuses its image about 40 microns (a micron is a millionth of a meter) above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's mini-cameras, producing overlapping views, each from a slightly different perspective, just as the left eye of a human sees things differently than the right eye.
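Those overlapping views are what make depth recoverable: the small shift of a point between two neighbouring mini-cameras (the disparity) reveals how far away it is, by the same triangulation that lets a pair of human eyes judge distance. A hypothetical sketch of that relationship (the baseline and disparity figures below are illustrative, not from the Stanford paper):

```python
# Classic stereo triangulation: depth = (baseline * focal_length) / disparity.
# The closer an object is, the more it appears to shift between two views.
def depth_from_disparity(baseline_um, focal_um, disparity_um):
    """Distance to a point seen by two neighbouring mini-cameras (same units in, same units out)."""
    return baseline_um * focal_um / disparity_um

# Illustrative values only: arrays ~11.2 microns apart, image focused 40 microns
# above the sensor, a point shifted half a micron between the two views.
print(depth_from_disparity(11.2, 40.0, 0.5))
```

Note how a smaller disparity maps to a larger distance, which is why distant objects are the hardest to range precisely.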
The technology is also great for average consumers who might not care as much about how the 3D imaging works: the detailed depth map is invisible in the 2D photo, but it is stored along with the image data. So when you look at the image on screen, it can still look like a regular photo. But if you want to see it in 3D, all the data is there, and software would let you view the same image in three dimensions.
"You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."
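Fife's "show only the objects at this distance" idea is simple once every pixel carries its own distance. A hypothetical toy sketch (the function and the four-pixel "photo" are illustrative, not from the researchers' software):

```python
# With a per-pixel depth map stored alongside the photo, pixels outside a
# chosen depth band can simply be wiped away, leaving only one "slice" of the scene.
def isolate_depth(pixels, depth_map, target, tolerance):
    """Keep pixels whose recorded distance is within tolerance of target; blank the rest."""
    return [px if abs(d - target) <= tolerance else None
            for px, d in zip(pixels, depth_map)]

photo  = ["sky", "tree", "face", "wall"]   # toy 4-pixel image
depths = [1000.0, 5.0, 1.5, 3.0]           # metres, one distance per pixel
print(isolate_depth(photo, depths, target=1.5, tolerance=0.5))
# only the pixel recorded at ~1.5 m survives
```

The same depth map supports the reverse operation too: keep everything except a chosen band, which is what "wipe away everything else" amounts to.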
Researchers say their multi-aperture sensor has key advantages: it is small, it doesn't require lasers or bulky camera equipment, it isn't complex, and it reproduces colour accurately.
Researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip and believe it or not, the finished product might even cost less than existing digital cameras. How? Because the camera's main lens is not as important anymore. "We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.
The camera would also destroy the barriers of conventional photography, as everything in sight (near or far) would be in focus. Furthermore, it would give photographers the ability to selectively put certain areas of an image out of focus after the fact using software on a PC.
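After-the-fact selective focus follows the same logic: since the stored depth map says how far each pixel is from a chosen focal plane, software can blur each pixel in proportion to that distance. A minimal hypothetical sketch (the linear blur model and its `strength` parameter are illustrative assumptions, not the researchers' method):

```python
# Synthetic depth-of-field: pixels at the chosen focal depth stay sharp
# (blur radius 0), and blur grows with distance from that plane.
def blur_radius(depth, focal_depth, strength=2.0):
    """Blur radius for one pixel; 0 means the pixel is left sharp."""
    return strength * abs(depth - focal_depth)

depths = [1.5, 3.0, 10.0]   # metres, read from the stored depth map
print([blur_radius(d, focal_depth=1.5) for d in depths])  # [0.0, 3.0, 17.0]
```

Real refocusing software would apply a spatially varying blur kernel sized by this radius; the point of the sketch is just that the decision is driven entirely by the recorded depth, not by anything done at capture time.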
The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers. More detail can be found in the university's news release.