
Researchers Develop Genuine 3D Camera

timothy posted more than 3 years ago | from the now-in-stereo dept.

Input Devices 96

cylonlover writes "Cameras that can shoot 3D images are nothing new, but they don't really capture three-dimensional moments at all — they actually record images in stereoscopic format, using two 2D images to create the illusion of depth. These photos and videos certainly offer a departure from their conventional two-dimensional counterparts, but if you shift your viewpoint, the picture remains the same. Researchers from École Polytechnique Fédérale de Lausanne (EPFL) hope to change all that with the development of a strange-looking camera that snaps 360 degrees of simultaneous images and then reconstructs the images in 3D."


Isn't this what... (0)

Anonymous Coward | more than 3 years ago | (#34521780)

...Street View does?

Quick question (1)

korielgraculus (591914) | more than 3 years ago | (#34521790)

Does anyone know if the Microsoft Kinect is classified as a "true" 3d camera under these criteria?

Re:Quick question (0)

Osgeld (1900440) | more than 3 years ago | (#34521804)

since it only processes depth in 1 plane I would say no

Re:Quick question (1)

rsmith-mac (639075) | more than 3 years ago | (#34521812)

Negative. The Kinect combines a regular camera and an IR range camera. The IR range camera can figure out depth based on IR returns, but it can't see anything from any additional angles, making it just as fixed as stereoscopy.

Re:Quick question (4, Insightful)

marcansoft (727665) | more than 3 years ago | (#34521820)

Nor can the camera in the article. They keep talking about "being able to see the scene from any point", but that's a load of bullshit. All they've done is combined a 360 camera array (what Street View does) with stereoscopic vision (what regular 2-camera 3D does) to get a 360 view with depth information. So now you can look around in a scene in 3D, but you can't change your position. The camera still sees the scene only from one viewpoint, it's just that it has a full hemispherical field of view plus depth/3D info. Cool? Yes, but hardly a breakthrough, and definitely nothing like what they claim it does.

If the camera can't see something because it is obscured by another object, then it can't see it, period. The camera has a bit more info due to the use of multiple lenses somewhat offset from each other, but that's just like regular stereoscopic vision, and your viewpoint is still severely limited. You can do a better job of 3D scene reconstruction with three or four Kinects in different places than with this, since then you can actually obtain several perspectives simultaneously and merge them into a more complete 3D model of the scene.
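The depth information the parent describes — whether from a two-camera rig or this camera's offset lenses — comes from triangulating disparity. A minimal sketch, assuming a rectified stereo pair with known baseline and focal length (all numbers below are illustrative, not from the article):

```python
# Depth from stereo disparity: Z = f * B / d for a rectified pair.
# baseline_m and focal_px are assumed calibration values.

def depth_from_disparity(disparity_px: float, baseline_m: float, focal_px: float) -> float:
    """Triangulated depth of a point that shifts disparity_px pixels
    between the left and right views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point shifted 10 px between views, with a 6.5 cm baseline and a
# 700 px focal length, sits at 0.065 * 700 / 10 = 4.55 m.
print(depth_from_disparity(10.0, 0.065, 700.0))
```

Note that this only yields depth for points visible in both views — exactly the occlusion limitation the parent points out.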

Re:Quick question (4, Insightful)

marcansoft (727665) | more than 3 years ago | (#34521830)

Seriously, Slashdot can't handle the degree sign either? That's ISO-8859-1 for fuck's sake, not even one of the weirder Unicode characters.

Re:Quick question (1)

mangu (126918) | more than 3 years ago | (#34522966)

Doesn't work with HTML codes [ascii.cl] either; I tried writing it as &amp;deg; and as &amp;#176; and it didn't work with either one

Re:Quick question (1)

TheRaven64 (641858) | more than 3 years ago | (#34523694)

Slashdot, rather than using one of the existing whitelists of Unicode characters, rolls its own. This contains all characters that more than 100 posts have complained about not being supported. If you're bored and want to read something entertaining, find some of the posts explaining the rationale for the current state of Slashdot Unicode support. It's really scary to think that people who consider those reasons to be valid arguments are writing code that we're using...

Re:Quick question (0)

Anonymous Coward | more than 3 years ago | (#34521974)

Wrong, he said you could put a *couple* of these cameras into a scene to computationally render a viewpoint from any position. And actually the only reason for the second camera is to reduce occlusions. Look up "computational photography" and "light field rendering". Once you have enough information about a light field you *can* render it from any camera angle.

Re:Quick question (2)

Anachragnome (1008495) | more than 3 years ago | (#34522042)

"Nor can the camera in the article."

Nor does the article discuss how the "3D" images are to be viewed, beyond a very vague "...which is no longer limiting thanks to the 3D reconstruction."

Holograms? Those stupid-fucking glasses? A planetarium (that would actually make the most sense)?

Kinects (2)

joshharle (1957066) | more than 3 years ago | (#34522148)

I'm glad to see people calling "bullshit" on this. I'm big into developments of PhotoSynth/Bundler/PMVS and other interesting 3D photogrammetry, so this is close to my heart. I'd just like to clear up a confusion though: "You can do a better job of 3D scene reconstruction with three or four Kinects in different places[...]" Unfortunately you can't, as the Kinects use structured light reconstruction, and the IR light patterns from multiple Kinects would confuse each other.

Re:Kinects (1)

psy0rz (666238) | more than 3 years ago | (#34522364)

What if you somehow alternated the IR beams, so they're not on at the same time?

Re:Kinects (1)

Yvanhoe (564877) | more than 3 years ago | (#34522686)

They confuse each other a bit, but you can still do some things:
http://www.youtube.com/watch?v=5-w7UXCAUJE [youtube.com]

Also, the Kinect as it is today does not easily allow combinations, but it is not hard to imagine different IR frequencies being used to prevent interference, or even blinking patterns with a phase difference.

Re:Kinects (1)

marcansoft (727665) | more than 3 years ago | (#34523548)

Ah, but you see, the Kinects *know* what pattern to expect (they correlate with a known pattern) and ignore extraneous data, so in practice the interference between two or three kinects is minimal and only results in a few scattered small "holes" (that you fill in with data from the other device). I didn't think it would work at first, but, in fact, it does.

Re:Quick question (1)

houghi (78078) | more than 3 years ago | (#34522236)

So not only is the camera not genuine, it also is not a genuine story.

Re:Quick question (2)

TheLink (130905) | more than 3 years ago | (#34522360)

The camera has a bit more info due to the use of multiple lenses somewhat offset from each other, but that's just like regular stereoscopic vision, and your viewpoint is still severely limited.

It doesn't have to be like regular stereoscopic vision. The clever bit wouldn't be so much in the camera positions.

It would be in the image/scene processing: http://news.cnet.com/8301-13580_3-9793272-39.html [cnet.com]
See the video too: http://www.youtube.com/watch?v=xu31XWUxSkA [youtube.com]

Based on both videos, Adobe's tech looks more impressive to me. And they did that years before.

Re:Quick question (0)

Anonymous Coward | more than 3 years ago | (#34525054)

Actually, as they can do depth finding, that means they can do photon scatter analysis, which means it would be possible to calculate the view from multiple angles as long as the view is static (static, because they'd also have to have a "control" light source to calibrate the scatter reading). The neat thing about this is that it is then possible to include objects in the 3D space that are not line-of-sight visible from the camera's fixed location! However, I'm pretty sure this isn't what these guys have done.

Re:Quick question (1)

Sprouticus (1503545) | more than 3 years ago | (#34525318)

If you placed several of these in a stadium with high-end digital cameras you could emulate a full 3D experience. Sure, you could have hidden spots if you wished, but for entertainment and mapping it would be pretty awesome. Of course the software would have to be able to handle the input from multiple locations and correlate the data between them, but still, that would be amazing.

Re:Quick question (0)

Anonymous Coward | more than 3 years ago | (#34527870)

Also, the lens of the human eye can change its focal point by changing shape, but cameras have to use multiple static lenses to focus, so the perspective is slightly distorted, and they can only capture one focal point at a time. That's why 3D photos look like those old sprite-based 3D games; overall you can see depth, but the objects themselves look flat.

Re:Quick question (1)

LBArrettAnderson (655246) | more than 3 years ago | (#34521862)

Where do you get the idea that having a bunch of cameras looking outward from a single point will be any more effective at doing 3D than 2 cameras set up to do stereoscopy? (let's assume no other differences here)

This thing can't magically look around corners just because it's looking outward at different angles.

Re:Quick question (1)

LBArrettAnderson (655246) | more than 3 years ago | (#34521868)

(and yes, I know that this thing is using software to compute the distance of objects, but that isn't anything new. We've been able to do the same thing with two images for a long time)

Re:Quick question (1)

Anonymous Coward | more than 3 years ago | (#34521906)

Have you seen the video of two Kinects rendering a 3D object? As long as one sees it, you have some of the 3D info.

Re:Quick question (0)

Anonymous Coward | more than 3 years ago | (#34521936)

All the cameras look outward from the same point; obstructed objects stay obstructed.

The two-Kinect demo had them pointed at the same scene from different points.

The only way we know right now to make a fully 3D scene is an array of cameras pointing *inwards*, and there is nothing new to it; we've done it for ages now.

These cameras are pointed *outwards* and could not reconstruct 3D models of the scene, only the faces they see, and then it's just the classical stereoscopic trick known since the 19th century.

Re:Quick question (1)

asvravi (1236558) | more than 3 years ago | (#34522114)

The Kinect is not exactly a ranging camera; it is based on structured light processing. 3D cameras fall into three classes:
1. Stereoscopic - the most prevalent kind, meant for human vision.
2. Time of flight - true IR ranging cameras that get depth information from the time the light takes to reflect off the object. Mostly used for machine vision, since they provide true depth numerically for each pixel.
3. Structured light - these shine a patterned light out and analyze the way the pattern distorts in the captured 2D image to extract depth information at each pixel.

No. 2 and 3 are often paired with regular cameras to get a decent color image. But you are right that the Kinect cannot see from any additional angles.
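The time-of-flight class (No. 2 above) reduces to a one-line computation per pixel: depth is half the round-trip distance of the light pulse. A hedged sketch with illustrative numbers:

```python
# Time-of-flight depth: Z = c * t / 2, since the light travels to the
# object and back. The round-trip time here is an illustrative value.

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth for one pixel given its measured round-trip time."""
    return C * round_trip_s / 2.0

# A 20 ns round trip corresponds to roughly 3 m of depth.
print(tof_depth_m(20e-9))
```

The practical difficulty, of course, is measuring per-pixel times on the order of nanoseconds, which is why ToF sensors typically use modulated light and phase measurement rather than a literal stopwatch.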

Re:Quick question (3, Informative)

Rhaban (987410) | more than 3 years ago | (#34522254)

What about the 2-Kinects video where the scene was shown from the viewpoint of a non-existent camera located somewhere between the two Kinects?

Re:Quick question (1)

DragonWriter (970822) | more than 3 years ago | (#34523686)

What about the 2-Kinects video where the scene was shown from the viewpoint of a non-existent camera located somewhere between the two Kinects?

Synthesizing depth information from the differences in simultaneous stereographic images, sufficient to produce images from any point between the cameras that took them, was done in software (and available in lots of consumer packaged software, though I don't think any of it was particularly popular) long before Kinect (I first saw it ca. 1998, IIRC).

Having the processing power to do it realtime with video may be a testament to the progress of hardware, but being able to do it at all is nothing special about the Kinect or its depth-processing ability.

Re:Quick question (0)

Anonymous Coward | more than 3 years ago | (#34525162)

Synthesizing depth information from the differences in simultaneous stereographic images sufficient to produce images from any point between the cameras that took the stereographic images

That's not possible in a general sense. It's perfectly possible to have two objects that both occlude a third object from each camera's viewpoint, but wouldn't from an intermediate viewpoint, making that piece of the picture not be available. This isn't even rare.

Re:Quick question (1)

imakemusic (1164993) | more than 3 years ago | (#34522706)

Not by default. But it can be arranged [youtube.com] .

BS... (1, Insightful)

markass530 (870112) | more than 3 years ago | (#34521808)

So I RTFA and WTFV , and the asshole at the computer put on some fucking glasses, I call shenanigans..

Re:BS... (1)

Logopop (234246) | more than 3 years ago | (#34522006)

One of the major problems with today's 3D technology is that the brain of the viewer is used to a specific distance between the eyes. If the distance between the camera lenses of a 3D camera is not the same as the distance between the eyes, your brain will generate a distorted depth image (too shallow, too deep) and you might end up with a major headache (as some experience). Not to mention that tele/wide lenses cause additional problems of the same nature. To present a movie that was correct and natural looking for all viewers, you would have to render an animated movie just-in-time for each individual (or maybe a few standard widths). For non-CGI movies, I have no idea how you would do it.
I am by no means an expert in this - maybe there's somebody here who could fill in the gory scientific brain details?
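A simplified similar-triangles model (an assumption of this sketch, not something from the article) shows the distortion the parent describes: for a fixed on-screen disparity, the perceived depth depends on the viewer's own eye separation, so content tuned for one interocular distance distorts for another.

```python
# Perceived depth of a point shown with positive (uncrossed) disparity
# on a screen at screen_dist_m. All numbers are illustrative.

def perceived_depth_m(screen_disparity_m: float, eye_sep_m: float, screen_dist_m: float) -> float:
    """Similar-triangles depth: the eyes' lines of sight converge
    behind the screen at Z = e * D / (e - d)."""
    if screen_disparity_m >= eye_sep_m:
        raise ValueError("disparity at or beyond eye separation never converges")
    return eye_sep_m * screen_dist_m / (eye_sep_m - screen_disparity_m)

# The same 2 cm screen disparity at 2 m reads as ~2.89 m deep for
# 65 mm eyes but ~3.14 m deep for 55 mm eyes:
print(perceived_depth_m(0.02, 0.065, 2.0))
print(perceived_depth_m(0.02, 0.055, 2.0))
```

The mismatch between where the eyes converge and where they focus (always the screen plane) is a separate, additional source of the headaches mentioned above.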

Re:BS... (1)

DarwinSurvivor (1752106) | more than 3 years ago | (#34522058)

I don't think the video he was looking at was even rendering the red/blue stereoscopic effect when he put them on...

So instead... (2)

Aerorae (1941752) | more than 3 years ago | (#34521816)

of Stereoscopic....it's polyscopic? I dunno...this still seems like more of the same.

More of the same, just stereoscopic on steroids (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34521818)

Unless it's doing a lot of moving around, it's just stereoscopic on steroids. If it stays in a fixed position, even though it has more than two cameras, all the objects are at fixed points. Until it can accurately judge the height, width and depth of an object without faking it in reconstruction, or making an educated guess - it's just more of the same. Humans suffer from the same limitations, but they fix this by moving the viewpoint around until a coherent 3D image is constructed.

Unless you have cameras that can accurately measure objects and move in the X and Y dimensions enough to cover the entire scene and all viewpoints for a given object, you're stuck in the same position - educated guessing.

Worthless stereoscopic eyeballs (1)

Cylix (55374) | more than 3 years ago | (#34521822)

I never knew I was using such worthless vision capabilities until now.

I do hope to upgrade to the far superior bug eyes which will allow me truly see in three dimensions.

Re:Worthless stereoscopic eyeballs (1)

ultranova (717540) | more than 3 years ago | (#34522164)

I do hope to upgrade to the far superior bug eyes which will allow me truly see in three dimensions.

Wouldn't that really require a phased array?

EX-TER-MIN-ATE! (0)

Anonymous Coward | more than 3 years ago | (#34521824)

This contraption looks like a Dalek! With multiple eyes instead of the single eye-stalk, mind you, but Dalek nevertheless!

that's not 3d (1, Redundant)

catbutt (469582) | more than 3 years ago | (#34521850)

It may be 360 degree, but not 3d. It doesn't process depth any more than a traditional 2d camera, it just takes a wider angle of view.

Re:that's not 3d (0, Informative)

Anonymous Coward | more than 3 years ago | (#34521864)

Actually, if you RTFA and watched the movie, it allows for 360 degrees of stereoscopy, and multiple cameras = better depth calculation = better looking 3D

Re:that's not 3d (1)

LBArrettAnderson (655246) | more than 3 years ago | (#34521872)

It's nothing new, though. We've been doing this for ages. The only thing remotely cool about this is the fact that it has a whole bunch of cameras that can put together large panoramas.

old hat (1)

Sooner Boomer (96864) | more than 3 years ago | (#34521892)

It's nothing new, though. We've been doing this for ages.

You're right, it's nothing new. It's not even real 3D, it's "stereoscopy" to a higher degree. We were doing true 3D analysis 15 years ago: video analysis of gait and other motions on a treadmill. We used multiple cameras mounted orthogonally, and a digital mixer to combine and record onto SVHS tape w/a SMPTE time code. Post-recording analysis was done using Peak Performance (brand) software. This was way before the cinematographers and game makers started doing this. There's little new in science.

Re:that's not 3d (0)

im_thatoneguy (819432) | more than 3 years ago | (#34521880)

Yeah I can't see any difference between this and a fish-eye lens with a lat/long transform applied to it.

Two Canon 5Ds with 8mm lenses a foot apart would be considerably more effective and cover the same FOV.

What's new about this? (0)

LBArrettAnderson (655246) | more than 3 years ago | (#34521874)

The summary and article make it seem like this is some revolutionary *3D* device. It isn't. What it does to create 3D imagery has been around for a long long time (done in software, perhaps on a dedicated chip). The only newsworthy thing about this is that it can do very large panoramas.

I think... (1)

bazald (886779) | more than 3 years ago | (#34521878)

...you still have your work cut out for you, blade runner.

Lausanne is in Switzerland (3, Insightful)

martin-boundary (547041) | more than 3 years ago | (#34521898)

Whoever tagged the story "france" got it wrong. The *real* Ecole polytechnique is of course in France, but this one is in Switzerland.

No. (4, Insightful)

The Master Control P (655590) | more than 3 years ago | (#34521910)

There's only one kind of "genuine" 3D camera, and it requires very special film and either absolute stillness or high-power pulsed lasers. We call the process "holography," and if it doesn't do that, it's not a real 3D "camera."

Words mean things.

Re:No. (1)

noidentity (188756) | more than 3 years ago | (#34523438)

That's not even 3D. A true 3D "camera" would capture a sample at every point in the volume being captured. That means it would show what's inside objects too. Put another way, if I take a 3D picture of a house, it should look the same regardless of where I happen to be standing with the camera.

Re:No. (3, Interesting)

CityZen (464761) | more than 3 years ago | (#34524500)

We should use the appropriate terminology: light fields. A traditional camera captures a point sample of a light field: you see a correct image from 1 point of view. A hologram captures a 2D planar sample of a light field: you see a correct image from any perspective that looks through the film (as if it's a window). To capture a volume sample of a lightfield is not really possible (at least, not at the same instant in time), since that requires having a sensor be placed everywhere in the volume, and of course the sensor itself interferes with the light samples that it's taking.
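The two-plane parametrization usually used for light fields makes the point above concrete: each ray is indexed by where it crosses a camera plane (u, v) and an image plane (s, t), giving a 4D table L(u, v, s, t), and a new pinhole view is just a particular slice of that table. A toy discrete version (the table contents here are synthetic, purely for illustration):

```python
# Toy two-plane light field: lf[u][v][s][t] stores the radiance of one
# ray. A real renderer would interpolate between neighboring (u, v)
# camera samples rather than index exactly.

def render_pixel(lightfield, u, v, s, t):
    """Look up the radiance of a single ray."""
    return lightfield[u][v][s][t]

# 2x2 grid of camera positions, each holding a 2x2 image, with values
# encoding their own indices so lookups are easy to check:
lf = [[[[1000 * u + 100 * v + 10 * s + t for t in range(2)]
        for s in range(2)] for v in range(2)] for u in range(2)]

print(render_pixel(lf, 1, 0, 1, 1))  # ray from camera (1, 0), pixel (1, 1)
```

This also illustrates why capturing a light field through a surface suffices: once every ray crossing the plane is known, any viewpoint on one side of it can be synthesized, which is exactly the hologram-as-window analogy above.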

Re:No. (0)

Anonymous Coward | more than 3 years ago | (#34525762)

Put another way, what you want is a 4D camera that takes flat pictures of a 3D space... Right? Or some means of simulating such.

Re:No. (1)

rrohbeck (944847) | more than 3 years ago | (#34527908)

Does anybody know what the problem with true holographic cameras is? High power pulse lasers are available. What would the requirements be for a sensor to record the interference pattern?

Not Over Promising (1)

pgn674 (995941) | more than 3 years ago | (#34521948)

It sounds like this is a combination of the Kinect and the Google Street View or yellowBird [yellowbird...eality.com] camera. I had to read the article and watch the video twice, because initially it sounded like they were promising more than this could do. Turning a town or campus into a 3D model for a game sounds quite doable; you just need to move the camera around a ton as it records. As for getting a different perspective at a concert, he said you need several cameras. If you have a lot, then yeah, I can see a smooth perspective transition in real time being possible, but you would need a lot of these in the area.

already commercialized (2)

sixtuslab (1130675) | more than 3 years ago | (#34521950)

Here's a 5 cam version already commercialized http://bit.ly/aHDafK [bit.ly]

All 3D cameras are faulty (0)

Kim0 (106623) | more than 3 years ago | (#34522002)

All the 3D cameras that use an array of 2D cameras, like this one does, have a huge fault.
It is a big inefficiency and inaccuracy in acquiring the image: a large quantum mechanical loss of information, of 99% to 99.9%.

The problem can be simply stated as that the camera is 4D, because it is a 2D array of 2D cameras,
while the image is 3D. This means that the camera for a 3D picture of about 1000 pixels on a side must be 1000 times bigger than necessary, or have 1000 times longer exposure, or a combination of these.

SOLUTIONS: All solutions rely on making the camera 3D instead of 4D.

1. Take a short movie while changing focus on a 2D camera.
Thus time in the movie is the 3rd dimension, translated into different focuses.

2. Make a camera with multiple 2D sensors placed at different focal lengths, and somehow transparentified, perhaps with half silvered mirrors.
This more complex camera can shorten the exposure time significantly, if there is enough light.

A combinations of these 2 methods is often most cost efficient in money and exposure time.

These were old thoughts of mine, and I do indeed consider this to be prior art, and quite obvious as well.
I also consider it to be evidence that I am better at this stuff than almost all 3D camera engineers.

Kim0+, M.Sc. Measurement & Quantum Physics
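Solution 1 above is the well-known depth-from-focus technique: shoot a focal stack, then for each pixel keep the slice in which it is sharpest, and map that slice's focus setting to a depth. A minimal sketch, using local variance as a crude sharpness measure (the synthetic data and window size are assumptions of this example):

```python
# Depth from focus over a focal stack of 2D images (lists of lists).

def local_variance(img, x, y):
    """Variance in a clipped 3x3 window: a crude sharpness measure."""
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(len(img), y + 2))
            for i in range(max(0, x - 1), min(len(img[0]), x + 2))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def depth_from_focus(stack):
    """For each pixel, return the index of the focal slice that
    maximizes local sharpness; the index maps to a depth via the
    focus setting used for that slice."""
    h, w = len(stack[0]), len(stack[0][0])
    return [[max(range(len(stack)),
                 key=lambda k: local_variance(stack[k], x, y))
             for x in range(w)] for y in range(h)]
```

In practice this only recovers depth for textured regions (flat areas look equally blurry at every focus), which is one reason it never displaced stereo and active-ranging methods.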

Re:All 3D cameras are faulty (1)

alienzed (732782) | more than 3 years ago | (#34522022)

"2. Make a camera with multiple 2D sensors placed at different focal lengths, and somehow transparentified, perhaps with half silvered mirrors. This more complex camera can shorten the exposure time significantly, if there is enough light." You want them to create a lens designed to capture light AND be transparent? I think you missed a few physics classes back in high school.

Re:All 3D cameras are faulty (2)

chichilalescu (1647065) | more than 3 years ago | (#34522084)

he/she missed a few geometry classes too, because he/she thinks you can capture 3D with only one camera. and some other geometry classes, because he/she thinks a 2D array of 2D cameras makes a 4D camera.
simple explanation for apparent stupidity: we just fed a troll.

Re:All 3D cameras are faulty (0)

Kim0 (106623) | more than 3 years ago | (#34522268)

Hi, arrogant ignorants.

You have failed to understand anything at all.

I was writing about transparent sensors, and I see from how I wrote it that it takes an idiot to misunderstand that.

And yes, I can indeed capture 3D with only one camera. I will likely put this project on my web page some day, along the other projects.

kim.oyhus.no

Re:All 3D cameras are faulty (1)

DarwinSurvivor (1752106) | more than 3 years ago | (#34522082)

Stereoscopic images are not acquired with a 2D array of 2D cameras; they are acquired with a 1D array (a single line, one next to the other) of 2D cameras. A 2D array would be a GRID. The camera in TFA could be described as a 2D array of 2D cameras (with the array bent in the 3rd dimension to create a semi-sphere).

Also, for which use are they flawed? If you want to render a scene from any direction, then yes, you most likely need more than a 1D array of 2D cameras, however if the scene is intended to be viewed from 1 perspective only (like a movie scene) then a 1D array of 2D cameras is sufficient since that's all we HUMANS use to interpret the final images.

Re:All 3D cameras are faulty (1)

Kim0 (106623) | more than 3 years ago | (#34522284)

Stereoscopic images are not the same as 3D.
The difference is 2 pictures versus a line of pictures.

A 2D array of 2D cameras give 4D.
Here is a picture of it:

http://images.gizmag.com/hero/3d360camera.jpg [gizmag.com]

The flaw is an inefficiency of a factor of about 100.

Re:All 3D cameras are faulty (1)

DarwinSurvivor (1752106) | more than 3 years ago | (#34565142)

Dimensions cannot be calculated by simple addition or multiplication of dimension values (2D+2D=4D or 2D*2D=4D). You obviously have a very loose grasp of spatial reasoning, as 4D is 3D with TIME added, and if the cameras were taking that into account, they would be 3D (x, y, time) cameras to start with!

Are you implying that if I altered the depth of some of the cameras (thus having them in a 3D grid) we would have 5D or 6D (depending on if you are using addition or multiplication for your imaginary 4D concept)? Oh wait, the cameras are ALREADY in a 3D configuration since they exist on a SPHERE!!!

Would you care to LIST these 4 dimensions of which you speak? The only way I can see getting 4 dimensions is x (left/right), y (up/down), z (depth), time. If you are counting time then the cameras would be 3D to start with (not 2D cameras).

Yes, a stereoscopic image is not going to let you see around corners (much) but that makes NO difference unless you want to be able to have the perspective change as you move your head.

BTW, simply restating your previous comment with different grammar and adding a picture that I have already seen (in the article!) does not make your original comment any more valid.

Re:All 3D cameras are faulty (0)

Anonymous Coward | more than 3 years ago | (#34522130)

So this video wall right here, a 2D array of 2D LCDs, is 4D?!?!

That's some Quantum Physics right there.

Re:All 3D cameras are faulty (1)

Kim0 (106623) | more than 3 years ago | (#34522288)

No. A video wall has the LCDs in the same plane as the LCD planes, so that is just re-use of dimensions.

The only Quantum Physics in LCDs are their polarizing filters and the transistors. (And LEDs in the newer ones.)

Re:All 3D cameras are faulty (1)

ikkonoishi (674762) | more than 3 years ago | (#34522158)

Personally I'm just glad they finally have a 3D camera out there. I keep cutting myself on my 2D one, and once I set it down on a flat surface I can't pick it up again.

Re:All 3D cameras are faulty (0)

Anonymous Coward | more than 3 years ago | (#34522190)

Really? I have no problem putting my hands on DD's...

Re:All 3D cameras are faulty (1)

ikkonoishi (674762) | more than 3 years ago | (#34522540)

Just keep your ifs and your ofs straight, and you should be fine.

Re:All 3D cameras are faulty (1)

kegon (766647) | more than 3 years ago | (#34522192)

The problem can be simply stated as that the camera is 4D, because it is a 2D array of 2D cameras, while the image is 3D.

You cannot simply add the dimensions; it depends on how you integrate the image data together. Us people who don't know very much call this integration "3D reconstruction". "The image is 3D" - do you mean the real world is 3D? The image, as you put it, is a projection from 3D onto a 2D plane and is most definitely 2D.

Humans possess a stereoscopic vision system; each eye captures a 2D image at any moment in time. I expect you would call that a 4D vision system?

SOLUTIONS: All solutions rely on making the camera 3D instead of 4D. 1. Take a short movie while changing focus on a 2D camera.

So now you add a third dimension - time. And the concept of Depth from Focus is not your idea at all, it has been around for a very long time.

Your other idea involving "somehow" doing something cannot be considered prior art. In the grand scheme of things an invention has to be realisable.

I also consider it to be evidence that I am better at this stuff than almost all 3D camera engineers.

If I could see a smiley on the above line I wouldn't think you are an idiot.

Re:All 3D cameras are faulty (1)

Kim0 (106623) | more than 3 years ago | (#34522366)

| You cannot simply add the dimensions, it
| depends on how you integrate the image data
| together. Us people who don't know very much
| call this integration "3D reconstruction".

You misunderstood. There are 4 dimensions to the
data captured by the camera:
http://images.gizmag.com/hero/3d360camera.jpg [gizmag.com]

1. The X axis on the light sensors.
2. The Y axis on the light sensors.
3. The radius of the cameras from the top of the dome.
4. The angle of the cameras from the top of the dome.

| "The image is 3D" - do you mean the real world
| is 3D ? The image, as you put it, is a
| projection from 3D onto a 2D plane and is most
| definitely 2D.

The light reflected from a scene is 3D.
If I know the light in a 3D volume,
I can calculate it forwards and backwards in time.

The same goes for light passing through a plane
during some time. Plane is 2D, while time is 1D.

So 3D is enough to represent all information in light. 4D is therefore a waste.

| Humans possess a stereoscopic vision system,
| each eye is capturing a 2D image at any moment
| in time. I expect you would call that a 4D
| vision system ?

I would call that stereoscopic vision.
It is just 2 times 2D.

| And the concept of Depth from Focus is not
| your idea at all, it has been around for a very
| long time.

I do not believe you.
You are welcome to disprove me by showing an
example of someone who has done 3D from a
depth-change movie.

| Your other idea involving "somehow" doing
| something cannot be considered prior art.

Yes it can. I have worked enough with patents
to have seen it done.

| In the grand scheme of things an invention has
| to be realisable.

To me it is obvious that it is realisable.
I better do it. I had no idea that this concept
which I find so simple and obvious is so
incomprehensible and unfathomable to slightly
intelligent people.

Kim0+

Re:All 3D cameras are faulty (0)

Anonymous Coward | more than 3 years ago | (#34522860)

| I have worked enough with patents
to have seen it done.

Maybe like me? Wiping one's ass with them?

Re:All 3D cameras are faulty (1)

jbengt (874751) | more than 3 years ago | (#34523818)

You misunderstood.
There are 4 dimensions to the data captured by the camera:
http://images.gizmag.com/hero/3d360camera.jpg [gizmag.com] [gizmag.com]

1. The X axis on the light sensors.
2. The Y axis on the light sensors.
3. The radius of the cameras from the top of the dome.
4. The angle of the cameras from the top of the dome.

You are misunderstanding. There are only 3 dimensions to information being collected. There may be 4 dimensions needed to describe a particular array of cameras, but that does not magically create a 4D amount of information. (actually, you left out the 3 dimensions describing the direction of the focal plane and the focal length of the lens, the dimensions describing the imaging surface, the pixel arrangement, and other dimensions required to describe the camera array completely.)
Where more than two cameras image the same voxel, there is "redundant" information captured by the multiple cameras. But though the "extra" information can be used to increase the accuracy and precision of the 3D information gathered, that does not amount to an extra dimension. n*D^3 != D^4 (where n = the number of camera pairs used to stereoscopically capture 3D information from the scene.)

Re:All 3D cameras are faulty (0)

Anonymous Coward | more than 3 years ago | (#34526328)

Yes, it accounts for an extra dimension. This is called light-field photography (it can be done in a plane as well). It's incredible the amount of bullshit /. users write here. Go study more...

Re:All 3D cameras are faulty (1)

Kim0 (106623) | more than 3 years ago | (#34526522)

Finally, someone who is not an arrogant ignoramus, but instead knows something.

And I agree about the /. users.
They seem to have become more of the self righteous idiot kind. They are not useful.

Kim0+

Re:All 3D cameras are faulty (1)

Kim0 (106623) | more than 3 years ago | (#34526508)

You are confusing dimensions with parameters.
Your argument is therefore invalid.

Kim0+

Pr0n (0)

lemmis_86 (1135345) | more than 3 years ago | (#34522026)

Wohooo!! \o/

That's not 3D, its panorama (1)

bradley13 (1118935) | more than 3 years ago | (#34522126)

TFA gets it wrong, too... Sure, it may be great for immersive experiences, but it doesn't even address the question of 3D. For that, we are still stuck with holograms.

Its just a hi-res Omnidirectional camera (1)

shervinemami (1270718) | more than 3 years ago | (#34522222)

From the article & video, all I can see is a higher-resolution version of an omnidirectional camera, which is very common in mobile robots. See this list of about 50 different types: "http://www.cis.upenn.edu/~kostas/omni.html"

They keep referring to the notion of depth being used, but unless there is some big technology that they completely forgot to mention in the article & video, it just does the equivalent of pointing a camera into a bowl-shaped mirror, allowing you to see in all 360 degrees at once. e.g.: "http://en.wikipedia.org/wiki/Omnidirectional_camera"

That is quite different from saying it is truly 3D, since it is still a 2D image without depth, just wrapped around a circular shape instead of a rectangular one.

The amazing thing is... (1)

Charliemopps (1157495) | more than 3 years ago | (#34522510)

The amazing thing is... My realtor must have been a genius because when we sold our house 4 years ago he had that very same camera take a picture of our living room...

Shouldn't that be 4 \pi sr ... (1)

MacTO (1161105) | more than 3 years ago | (#34522642)

After all, we're talking 3-D and not 2-D!

Re:Shouldn't that be 4 \pi sr ... (1)

udippel (562132) | more than 3 years ago | (#34524236)

I'd mod you up if I had mod points. So I can only dump my comments mired with frustration here.
Of course, what the shit is 360 degrees here? On a plane you have 360 degrees. You can draw them on a simple exercise book from your school days. Though, those so-called scientists, being engineers, ought to know the basics of undergrad engineering: a sphere has 4 times pi. And their camera doesn't. Look at the photo of the original article, it is a hemisphere. No way to see the nadir of the 'dark' half. In principle we have a 2-dimensional array of 2-dimensional sensors. So that's stereoscopy, left-right and up-down. That is 2 pi.
Should anyone be doubtful, still: make this hypothetical experiment: place the camera hemisphere in front of the table at which you sit. Half above, half below the surface. You can see left and right, and you get a parallax for the objects on the table. And you get to see the bottom of the table when you use the cameras positioned below the table plane. All hunky dory. But this is 3D only in your phantasy. 3D would be if you could move around, and see that table from the side, from behind, from top, from bottom. How can you see objects behind your monitor as placed on that desktop, with the camera arranged in front? Yes, the article says by placing more than one camera. Fine, again. But nothing beyond what we have known for generations: Just put sufficient cameras to record any object from at least two perspectives, and you can reconstitute the three-dimensional space.
None of this is being done by said camera.
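The parent's steradian arithmetic is easy to check: the solid angle of a spherical cap of half-angle theta is 2π(1 - cos theta), so a hemisphere (theta = π/2) covers 2π sr and only a complete sphere (theta = π) reaches 4π. A minimal sketch (the function name is illustrative, not from the article):

```python
import math

def cap_solid_angle(theta):
    """Solid angle in steradians subtended by a spherical cap of half-angle theta."""
    return 2.0 * math.pi * (1.0 - math.cos(theta))

hemisphere = cap_solid_angle(math.pi / 2)  # 2*pi sr: what a hemispherical dome sees
full_sphere = cap_solid_angle(math.pi)     # 4*pi sr: a complete sphere
```

So the dome in the article covers exactly half the 4π sr that a genuinely all-seeing device would need.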

Bullshit (2)

zmooc (33175) | more than 3 years ago | (#34522728)

Bullshit. This is not genuine 3D. This is just stereo vision using a lot of cameras, demonstrated by a guy with a 'orrible French accent who talks a lot about what could be done, but in fact they do not even come close to what this other guy built in a fraction of the time using an MS Kinect: http://www.youtube.com/watch?v=5-w7UXCAUJE [youtube.com]

That video also makes it very clear why the fantasies of the French guy will never come true. At least not with that camera.

Bleh (1)

Culture20 (968837) | more than 3 years ago | (#34522750)

This isn't what I want in a 3D camera. I want to be able to spin the scene after moving from the original point. I want to see what's behind something. The cameras need to encapsulate the scene.

Though EPFL is usually good, (1)

udippel (562132) | more than 3 years ago | (#34522784)

this is bad advertising. And timothy ought not to have posted it.
As someone who has worked in stereoscopic research, I see nothing new in this 'development'. Except, of course, maybe the brute-force real-time stitching of the images. The idea of arranging a multitude of cameras on a half-sphere has grown a beard over decades.
Worse, there is not much of a difference between a traditional '3D' view (which isn't, actually, 3D) and this arrangement. A quarter century ago some chaps had a somewhat functional setup with 6 cameras and 5 perspectives. These days, thanks to advances in computers, we can calculate any intermediate 2D perspective with parallax. 'And what?' is the only comment I could give.
And most relevant, probably: it doesn't address the most pressing question. How to project it; how to reconstruct a (calculated) 3-dimensional object in view-space (dunno if this word exists, but it is the best construction I could think of)? And don't come and tell me to use a similar setup of projectors, because that wouldn't work. Much ado about nothing. The good prof is eventually just hoping for tenure by advertising this thingy.
I do agree, the stitching algorithm could be new, more powerful, more precise. But then, the public wouldn't actually be impressed sufficiently.
No, EPFL, you didn't do much of a service to yourself with this clip, alas.

Re:Though EPFL is usually good, (1)

Animats (122034) | more than 3 years ago | (#34523778)

this is bad advertisement. And timothy ought not have posted it.

Right. It's 3D from stereoscopy, not a depth camera. The baseline between imagers is small, so it won't have much depth resolution for distant objects. Note that while the video shows outdoor scenes, it doesn't show depth information for them.

Now the Advanced Scientific Concepts flash LIDAR [advancedsc...ncepts.com] is a real time of flight depth camera, a 128 x 128 pixel LIDAR. With a real time of flight device, depth resolution stays constant with distance, as with radar. This device tends to be equipped for a narrow field of view and long range, because it's sold to the military. But that's not inherent in the technology. A similar device, but with mechanical scanning, is the Velodyne laser scanner [velodyne.com] , which almost everybody in the 2006 Grand Challenge used.

I've met the people behind both systems. The ASC device is potentially mass-producible at moderate cost, but the company is focused on DoD applications and hasn't pursued that. It requires custom IC fabrication, which is cheap when you make millions and extremely expensive when you make hundreds. The Velodyne thing is a better version of the impressive but fragile Team Dad laser scanner from the 2005 Grand Challenge. It's a spinning array of little LIDAR units. Both cost around $100,000, due to the tiny market.

Re:Though EPFL is usually good, (1)

benthurston27 (1220268) | more than 3 years ago | (#34532534)

What if you had a room with a bunch of objects and the camera from the article somewhere in the middle, and then you surrounded the room with a hemisphere of honeycombed mirrors, so that your 360-degree camera in the center could look at the images in the mirrors to see the scene from every angle, and somehow use software to reconstruct all the information?

I have a real one at work. (1)

trout007 (975317) | more than 3 years ago | (#34523196)

In engineering we use laser scanners that use a laser as a rangefinder to find out how far from the camera each pixel is. You then shoot from different perspectives to build a 3D scene that you can move around in. http://www.faro.com/3dimager/videos/ [faro.com]
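The scan-and-register workflow described above reduces to a spherical-to-Cartesian conversion per range sample, plus moving each scan into a common scene frame. A toy sketch, assuming pure translation between scanner positions (rotation between scans is omitted for brevity, and the function names are made up):

```python
import math

def to_cartesian(r, azimuth, elevation):
    """Convert one laser range sample (spherical coordinates, radians) to x, y, z."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)

def merge_scan(samples, scanner_position):
    """Place one scan's points into a shared frame by shifting them by the
    scanner's position, so scans from several perspectives form one cloud."""
    ox, oy, oz = scanner_position
    points = []
    for r, az, el in samples:
        x, y, z = to_cartesian(r, az, el)
        points.append((x + ox, y + oy, z + oz))
    return points
```

A real registration pipeline also solves for the rotation between scanner poses (e.g. via ICP), but the translation-only version shows why shots from multiple perspectives combine into a scene you can move around in.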

Those roly things on the front of my head... (1)

KRL (664739) | more than 3 years ago | (#34523248)

"... they actually record images in stereoscopic format, using two 2D images to create the illusion of depth" My eyes work the same way.... dammit... if I want to see around something I have to actually move my damn head!

Matrix (1)

gmuslera (3436) | more than 3 years ago | (#34523510)

This is inverse 3D... instead of seeing what we want from all possible points of view, we get what is around the camera, in a plain view.

I prefer the approach of the cameras surrounding the action in the first Trinity scene in Matrix.

360 Degrees? (1)

denshao2 (1515775) | more than 3 years ago | (#34523614)

Someone is still thinking in 2D.

Apples and Oranges? (0)

Anonymous Coward | more than 3 years ago | (#34523882)

I'm curious - isn't this an apples-to-oranges comparison? The camera is looking in all directions (i.e. a panoramic view), but it isn't looking at the object from all directions (i.e. I still can't see something hiding behind the object). How does it "reconstruct the images in 3D"?

3 Flying Cameras (1)

Doc Ruby (173196) | more than 3 years ago | (#34523894)

It seems to me that 3 flying cameras surrounding the scene should be able to capture practically any scene in 3D (4D, really, including time - though time might be a fractional dimension since it seems to move only forward).

3d porn (1)

kuleiana (629890) | more than 3 years ago | (#34524082)

3D Porn! Need I say more? Anyway now that I got your attention I remember sketching a camera like this 15 years ago and learning *then* that it was nothing new... I heard that Disney's panoramic movie at Disneyland LA used something like this, just in analog and without the processing. True 3d processing is like what you see at http://2d3.com/ [2d3.com] ...

meh (0)

Anonymous Coward | more than 3 years ago | (#34524088)

So let me get this straight...they invented a holograph??? wow.. I was making these (holograms mostly) back in high school. Does this give me prior art to sue them, make a million dollars, and buy drugs and whores?

Re:meh (0)

Anonymous Coward | more than 3 years ago | (#34524278)

Does this give me prior art to sue them, make a million dollars, and buy drugs and whores?

Fscking keep the drugs, if you want. I'll take the whores

old news (1)

nikkipolya (718326) | more than 3 years ago | (#34525270)

This news is 2 weeks old. I saw this very same video on YouTube while I was searching for 3D cameras some two weeks ago. I was just curious as to what format the 3D cameras, if there are any, were storing their images in...

I have been waiting for 3D cameras to arrive for a long time now. I was imagining something that would shoot some kind of lasers, maybe, into the scene, capture the depth of the objects that I point at, and reconstruct the scene in 3D. I don't think the camera in this video is portable enough. Meaning, it won't go far. It will most likely be like fusion in the teacup.

This has already been done more elegantly. (1)

sugarmatic (232216) | more than 3 years ago | (#34525524)

This was done years ago with two 360 degree panorama cams. The two spherical panoramic images gave reasonably convincing 3D imaging for a field of view of at least 160 degrees. This means you could dart your eyes anywhere in this field of view with acceptable results. To render subjects outside of this field of view, you had to reformat the subject at a particular area to match the two images well enough to allow viewing. This allowed viewing out to a FOV of nearly 180 degrees. Two pics. A bit of simple software to transform the spherical image to a panorama. Worked with video and stills.

Nice hobby project here, but the old way worked better in many ways. Lots of references on the web.

360 yes, 3D no (1)

junglebeast (1497399) | more than 3 years ago | (#34525830)

Because all the camera focal points are approximately at the same location, the images can be stitched together in software to create a full hemispherical view. This is essentially the same type of snapshot that is used in Google street view, and allows you to look in different directions.

Changing position, however, requires depth information. Because the focal points are not EXACTLY at the same location, it is theoretically possible to estimate depth, although in practice the precision of those depth estimates is sensitive to image resolution and to the ratio between baseline and depth.

Even if each individual camera were something like 14 megapixels, the accuracy of depth estimation would be so terrible as to be completely useless for anything more than a few feet away from the device. The authors neglect to mention this critical shortfall in their demonstration...
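That sensitivity follows from the standard triangulation relation Z = f*B/d: differentiating gives a depth error of roughly Z^2 * dd / (f * B), quadratic in distance. A rough sketch with assumed numbers (a ~3 cm baseline between neighbouring lenses, a ~2000-pixel focal length, half a pixel of disparity error; none of these figures come from the article):

```python
def depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.5):
    """Approximate stereo depth uncertainty: dZ ~= Z^2 * dd / (f * B)."""
    return z_m * z_m * disparity_err_px / (focal_px * baseline_m)

# Assumed numbers: 3 cm baseline, 2000 px focal length, 0.5 px disparity error.
for z_m in (0.5, 2.0, 10.0):
    print("%.1f m -> +/- %.3f m" % (z_m, depth_error(z_m, baseline_m=0.03, focal_px=2000)))
```

With those assumptions the uncertainty is about 2 mm at half a metre but roughly 0.8 m at 10 m, consistent with the claim that depth from such a small baseline is useless beyond a few feet.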

old tech (0)

Anonymous Coward | more than 3 years ago | (#34526138)

Point Grey Research has been doing this for years and years with their Ladybug spherical cameras (eg. http://www.ptgrey.com/products/ladybug3/Ladybug3_360_video_camera.asp [ptgrey.com] ). They have a much more innovative many-camera unit, too - it is a PCI-E device (weird, but high throughput!) with a square array: http://www.ptgrey.com/products/profusion25/index.asp [ptgrey.com]
