
Computer Vision Tech Grabs Humans In Real-Time 3D

CmdrTaco posted more than 4 years ago | from the my-soul-is-in-there dept.

Input Devices 110

Tinkle writes "Toshiba's R&D Labs in Cambridge, UK has developed a system capable of real-time 3D modeling of the human face and body — using a simple set of three different colored lights. Simple it may be, but the results are impressive. Commercial applications for computer vision technology look set to be huge, according to Professor Roberto Cipolla. On the horizon: cheap and easy digitization of everyday objects for e-commerce, plus gesture-based interfaces — a la Natal — and in-car safety systems. Ultimately, even driverless cars. 'This is going to be the decade of computer vision,' predicts Cipolla."


oic (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31674714)

Driverless cars huh? Not sure how safe I feel about that ;>

Re:oic (2, Insightful)

amRadioHed (463061) | more than 4 years ago | (#31674902)

I would feel much safer. Drivers are the cause of most crashes. If they can be replaced with something more reliable it would be a huge improvement.

Re:oic (5, Insightful)

WrongSizeGlass (838941) | more than 4 years ago | (#31675096)

I would feel much safer. Drivers are the cause of most crashes. If they can be replaced with something more reliable it would be a huge improvement.

Let's ask Toyota owners how they feel about 'driverless cars'. All it takes is one small problem, or even an incompatible system amongst the many manufacturers (keep in mind that odds are they won't all be running Linux).

This reminds me of Itchy & Scratchy Land [wikipedia.org] and its inspiration, Westworld [wikipedia.org]. What could possibly go wrong?

Re:oic (1)

amRadioHed (463061) | more than 4 years ago | (#31675498)

Sure, ask anyone involved in an accident and the cause of their accident will be what matters most to them. But what percentage of all accidents do the recent problems with Toyotas comprise? I didn't say there were no accidents due to car failures, but the fact is that even with Toyota's problems, drivers are still responsible for more accidents and deaths than anything else.

Get your statistics out of the way of fearmongering (1)

HeckRuler (1369601) | more than 4 years ago | (#31675784)

What? Blasphemy! This has been on the NEWS! Everyone knows that if it's newsworthy then it MUST be a gigantic issue and pertinent to everyone.

Re:Get your statistics out of the way of fearmongering (1)

Onymous Coward (97719) | more than 4 years ago | (#31677226)

Ah, enlightening rhetoric.

Yeah, it's interesting how the news gives us a distorted perception of the world, especially as we tend to focus on awful things (and thus, as consumers, demand them and encourage them disproportionately).

Maybe the world isn't as evil as the news paints it?

Re:oic (0)

Anonymous Coward | more than 4 years ago | (#31676924)

Except the uptick in Toyota accelerator problems is largely due to media exposure. Every other manufacturer had the same problems at the same low reporting rate until the story exploded. Plus, the facts surrounding the recent 90+ MPH "out of control" Prius point overwhelmingly to a hoax for media attention and legal awards. I wouldn't be surprised if most or ALL of the publicized instances in the last few months were hoaxes.

Re:oic (0)

Anonymous Coward | more than 4 years ago | (#31678994)

Actually, per unit sold, VWs and Audis top the list for unexplained acceleration. But they aren't really a threat to the US manufacturers because their market share is so much smaller than Toyota's. So we'll just ignore them.

Re:oic (1)

MobyDisk (75490) | more than 4 years ago | (#31676932)

I agree with your suggestion: ask Toyota owners. Go even further than that. Take a survey.

Compare the number of Toyotas that have failed because of the mysterious acceleration problem, to the number of cars that have failed because of problems with the human driver.

Re:oic (1)

pegasustonans (589396) | more than 4 years ago | (#31678010)

Let's ask Toyota owners how they feel about 'driverless cars'. All it takes is one small problem, or even an incompatible system amongst the many manufacturers (keep in mind that odds are they won't all be running Linux).

Drivers confusing the gas pedal with the brake isn't a small problem. It's quite a large one.

Fixing the troublesome component (i.e. eliminating the human driver) would likely reduce accidents quite a lot.

Of course, the accidents that did occur would be sensationalised, but hopefully people would realise the increase in safety is worth it.

Re:oic (0)

Anonymous Coward | more than 3 years ago | (#31679700)

Yup. If only people would finally realize that we know what's best for them.

Re:oic (1)

geekoid (135745) | more than 4 years ago | (#31678114)

Properly done, not much.

Did you know most new jets take off, fly to a destination, and then land on their own?
Automated car systems are coming, and they will be fine. The current state of the market will dictate two things:
1) Slow adoption: one piece at a time, then the coupling of a few pieces, and so on.

2) The public won't tolerate unsafe vehicles.

Re:oic (1)

AmberBlackCat (829689) | more than 4 years ago | (#31678564)

You should go ahead and do that. Just find a Toyota owner and ask if they're worried, or if they're having any problems with their car. Because, as I said before [slashdot.org], it's the non-Toyota owners who are offering the majority of the FUD.

Re:oic (1)

Restil (31903) | more than 4 years ago | (#31679206)

It's like any other obscure car problem. Assuming there is an ACTUAL problem, despite the failure of efforts to find it, it will probably not affect more than 100-150 vehicles over the lifetime of all of the products that have the potentially faulty system. It's enough to justify a recall, but on the other hand, it's probably a lower rate of occurrence than random chance would otherwise provide. Even if there IS a glaring problem, most Toyota owners will never experience it.

The problem now is that Toyota has a PR problem, and they're going to have to find a way to solve it. Either discover a technical flaw... ANY flaw... and fix it, or prove beyond the shadow of a doubt to a skeptical public that none exists.

-Restil

Re:oic (0)

Anonymous Coward | more than 4 years ago | (#31679062)

Let's ask Toyota owners how they feel about 'driverless cars'.

I would ask one of those Toyota drivers how they feel about that, but they wouldn't be able to hear me. They're all too old. [washingtonexaminer.com]

Don't drivers cause all crashes (2, Insightful)

jweller13 (1148823) | more than 4 years ago | (#31675630)

I would venture to say that drivers cause 100% of car driving accidents.

Re:Don't drivers cause all crashes (0)

Anonymous Coward | more than 4 years ago | (#31675932)

The ultimate cause of car wrecks is freedom. People should not be allowed to leave their homes, period. That is the cause of all wrecks.

Re:Don't drivers cause all crashes (1)

hufman (1670590) | more than 4 years ago | (#31679218)

There are other causes of car accidents, such as when wildlife forgets to look both ways before crossing in front of heavy traffic. Also, icy roads occasionally cause problems. But yes, most problems are because the drivers are allowed out of their houses.

Re:oic (1)

shadowrat (1069614) | more than 4 years ago | (#31675792)

Drivers cause the most crashes now. That might not be the same when the proportion of driverless to driven cars is a little higher.

Re:oic (1)

shadowrat (1069614) | more than 4 years ago | (#31675620)

It's totally safe as long as they only drive in dark rooms illuminated by red, green, and blue lights at fixed positions.

Re:oic (1)

dwiget001 (1073738) | more than 4 years ago | (#31675934)

Well, in the different areas of the U.S. where I have driven (San Francisco, CA; Denver, CO; Chicago, IL; Los Angeles, CA; Phoenix, AZ; Tampa, FL; and others), I would say that driverless cars would be as safe (if not safer) than cars with drivers in some of those areas.

Of course, it kind of depends on the driving conditions (rush hour, driving rain storms, blizzards, thick fog, etc.).

Skip to the chase (5, Funny)

Locke2005 (849178) | more than 4 years ago | (#31674726)

What implications does this development have for the pornography industry?


Re:Skip to the chase (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31674996)

Until they can construct 3D models AND animate them convincingly I don't think there are any implications. Having viewers download a 3D model to admire on their 2D display doesn't seem to offer much advantage over photos or video. Loading the model in an editor and applying different clothing or performing a virtual boob-job maybe?

Re:Skip to the chase (1)

Locke2005 (849178) | more than 4 years ago | (#31677124)

I take it you haven't seen Avatar. Using motion capture and performance capture, we should now have no problem capturing the "acting" in real time and transferring it to a 3D model that doesn't have huge pimples on its ass, bad teeth, and breast augmentation scars. And of course, the assertion that "size doesn't matter" will now be universally true, and Ron Jeremy can finally retire!

Re:Skip to the chase (0)

Anonymous Coward | more than 4 years ago | (#31678202)

Sure, they could capture the performance. But is there really a point to doing that rather than just filming it? I guess your point is, you would like to see porn stars with blue skin?

Re:Skip to the chase (1)

Locke2005 (849178) | more than 4 years ago | (#31678282)

No, I'd like to see stars that don't look like syphilitic crack whores. This would also allow them to use much older actors to play younger roles (voices don't change much over time), so Linda Lovelace could get back into the business.

Re:Skip to the chase (1)

Bigjeff5 (1143585) | more than 4 years ago | (#31679290)

I take it neither of you read about what all went into making the big blue aliens in Avatar believable.

After motion and performance capture, and after running very refined automated scripts to tweak the movements and expressions, the Na'vi were squarely in the middle of the "uncanny valley": the closer a model gets to human-like expression without being quite right, the creepier and less realistic it feels.

It took thousands of hours of hand-tweaking the expressions and body movements to pull those models out of the uncanny valley and make them believable. The current state of the art does not address this yet, so you'll still have to put in thousands of man-hours to get that last bit of realism that makes 3D characters believable. The less human they look, the easier it is, so expect it to take some time before we see 3D human models that are just as believable as the real thing.

Re:Skip to the chase (0)

Anonymous Coward | more than 4 years ago | (#31675250)

Well since you don't have to hold the steering wheel anymore...

The obligatory response: (2, Funny)

internic (453511) | more than 4 years ago | (#31674788)

There are four lights!

Re:The obligatory response: (2, Interesting)

HTH NE1 (675604) | more than 4 years ago | (#31675054)

This is how the Martians see us.

Re:The obligatory response: (3, Informative)

HTH NE1 (675604) | more than 4 years ago | (#31675856)

This is how the Martians see us.

Overrated? You're making me feel old, and I wasn't even born yet.

It's a reference to the RGB eyes of the Martians in the 1953 movie version of The War of the Worlds. The tri-segmented eyes in the movie emitted red, green, and blue light, illuminating the subject and allowing the cyclopean Martians to see in 3D, just as a single-lens camera can derive 3D information using this method now. Otherwise, as depicted with Futurama's Leela, a cyclops would have no depth perception.

Of course, the amount of depth perception would depend on the spread of the lights, so even the Martians' sense of depth would be limited, though not non-existent.

Re:The obligatory response: (1)

Penguinisto (415985) | more than 4 years ago | (#31675328)

Err, it's only obligatory to Star Trek (specifically, TNG) fans.

(damn - I'm not really a Trek fan but I actually know that. double-damn!)

Re:The obligatory response: (1)

Bugamn (1769722) | more than 4 years ago | (#31675804)

Well, the plot of that episode references 1984. Anyway, weren't there five lights?

Re:The obligatory response: (1)

internic (453511) | more than 4 years ago | (#31677636)

In the episode there were four lights, but the interrogator claimed there were five. I figured claiming there were three is just as good. :-)

Obvious applications in rapid prototyping. (4, Interesting)

Anonymous Coward | more than 4 years ago | (#31674796)

Right now, 3D camera technology to scan a hand-made prototype into commercial CAD software revolves around a scanning laser, special cameras, and a turntable.

If you combined this technology with other image-mapping software and used 3 or 4 fixed cameras with overlapping FOVs, you would be able to simply set your source model on a table, turn on the lights, take a picture, and be done.

I would SOO love to have a FOSS implementation of this modeling software.

(I sculpt, and being able to make a large physical object, scan it, then send the digital model to a rapid prototype house and get a miniature size made from the digital version would be VERY handy.)

Re:Obvious applications in rapid prototyping. (2, Insightful)

ircmaxell (1117387) | more than 4 years ago | (#31674858)

What I would find interesting is if they could make RGB lights that flash each color for only a tiny fraction of a second, so that to the average person the light looks white, but to the camera (which would need to be fast to read that much change) it appears as a single color for that frame. That way, you could have a system like this in a normal room and record a 3D model of the room at all times (think of a security camera, but one that takes a 3D image instead of a 2D one)... It seems cool so far; let's see if it matures...

Re:Obvious applications in rapid prototyping. (2, Insightful)

amRadioHed (463061) | more than 4 years ago | (#31674942)

Or even better would be a system that uses infrared or some other wavelength that we can't see.

Re:Obvious applications in rapid prototyping. (1)

Tekfactory (937086) | more than 4 years ago | (#31675140)

Which is easier: separating and stitching together 3 different-colored frames, each taken at a different time by a high-speed camera, or 3 synchronized streams of video of the same subject taken from 3 regular-speed cameras with different color lens filters on them?

Re:Obvious applications in rapid prototyping. (1)

HTH NE1 (675604) | more than 4 years ago | (#31675316)

But you don't need to flash the subject three times or use a 3x rate camera. You can continuously light the subject with the three colors and separate the colors in each frame in the computer, much like how Photoshop lets you manipulate the red, green, and blue channels of a digital photo. The 3D effect comes from the three lights being in different, predetermined positions (three axes of a cube converging on the subject). You get the full 3D effect at a normal framerate without increasing the amount of data captured.
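
For illustration, a minimal Python sketch of that per-frame channel separation, assuming an ordinary RGB camera and OpenCV (the file name and rig are hypothetical, not details from TFA):

    import cv2  # assumes OpenCV; any image library with channel access works

    # Each frame is lit simultaneously by a red, a green, and a blue lamp at
    # known positions; splitting the color channels in software yields three
    # single-light images, with no strobing or high-speed camera required.
    frame = cv2.imread("subject.png")    # hypothetical input frame
    blue, green, red = cv2.split(frame)  # OpenCV stores channels in B, G, R order

    # Each channel now approximates a grayscale photo of the subject as lit by
    # only one of the three lamps, ready for the shape-from-shading math.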

Re:Obvious applications in rapid prototyping. (1)

ircmaxell (1117387) | more than 4 years ago | (#31675592)

Well, the frame being analyzed would need to comprise one color from each light source. Considering that we'd notice a flicker in the light if it switched at anything less than 30 Hz, you'd need a camera that could record at more than that frame rate (say 60 Hz or 120 Hz). So the easiest way (in my mind) would be to either flicker each light source between one color and white at something like 120 Hz (synchronized, of course), or simply "rotate" the colors between the 3 lights (so every 1/120 of a second, light 1 would take the color of light 2, light 2 of light 3, and light 3 of light 1). Sure, you'd need some pretty decent synchronization to make it all work flawlessly, but you could do that by flashing all the lights white at the completion of one cycle (so light 1's cycle would be RBGW, light 2's BGRW, and light 3's GRBW), then just detect the white flash in the camera, and you know which color is where by the number of frames since the white flash.

The whole point of this would be that the lights could appear white to the human eye (and hence double as normal lighting in a well-designed room), while still providing the separated colors necessary for this technique to work.
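
As a toy illustration of that synchronization scheme (the color cycles and frame counting are as proposed above; everything else is invented), a Python decoder might look like:

    # Each light cycles through three colors plus a shared white "sync" frame.
    # Writing the cycles starting from the white flash makes decoding trivial.
    CYCLES = {
        1: ["W", "R", "B", "G"],
        2: ["W", "B", "G", "R"],
        3: ["W", "G", "R", "B"],
    }

    def colors_at(frames_since_white: int) -> dict:
        """Map each light to the color it is showing, given frames elapsed
        since the last detected white flash (frame 0 is the flash itself)."""
        phase = frames_since_white % 4
        return {light: cycle[phase] for light, cycle in CYCLES.items()}

    # Two frames after the flash: light 1 is blue, light 2 green, light 3 red.
    assert colors_at(2) == {1: "B", 2: "G", 3: "R"}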

Re:Obvious applications in rapid prototyping. (2, Informative)

HTH NE1 (675604) | more than 4 years ago | (#31678498)

The whole point of this would be that the lights could appear white to the human eye (and hence double as normal lighting in a well-designed room), while still providing the separated colors necessary for this technique to work.

The positions you need to put the colored lights in for the math to work properly are not the same positions one uses to properly light a subject being recorded. You'll produce an environment where the subject is overly lit and you'll have to resort to virtual lighting to properly illuminate the 3D model in post. And if you're going to have to do it in post, why bother with the expensive strobing and high-speed videography?

This will be used in a controlled mocap-like environment, but without the ping-pong balls and spandex. That alone is enough of a technical advancement that your talent won't mind the colored lights. They won't even need to apply makeup. Hairstyling, though, looks to still be an issue. You may need skullcaps and CG hair for a while yet.

Re:Obvious applications in rapid prototyping. (1)

Tinctorius (1529849) | more than 4 years ago | (#31676278)

That's a great idea if you want to mocap a tonic-clonic seizure.

Re:Obvious applications in rapid prototyping. (1)

itsthebin (725864) | more than 4 years ago | (#31679524)

Security cameras with this capability would improve the accuracy of automatic facial recognition.

Re:Obvious applications in rapid prototyping. (1)

HTH NE1 (675604) | more than 3 years ago | (#31679702)

The first generation though would only be used in discotheques.

Re:Obvious applications in rapid prototyping. (0)

Anonymous Coward | more than 4 years ago | (#31675006)

Yeah, but they're a bit late. Project Natal was presented commercially last year.

Re:Obvious applications in rapid prototyping. (1)

ircmaxell (1117387) | more than 4 years ago | (#31675118)

Well, from what I understood about Natal, it was object recognition and tracking. This is about full-blown modeling, the difference being that this can create a full 3D representation of an object in the computer, down to the details, whereas recognition and tracking would only let you figure out what you were looking at and watch it as it moves (so it could tell the difference between a book and a person, but not whether or not you were smiling)...

Re:Obvious applications in rapid prototyping. (1)

religious freak (1005821) | more than 4 years ago | (#31675516)

Maybe this is a stupid question, but where the hell is Natal? Microsoft does a great job of demoing some innovative tech, but I never see it in the marketplace. I'm not an anti-MS troll (I honestly don't give a shit), but seriously, where are Surface, Photosynth, Natal, etc., etc.?

If they're around, Microsoft doesn't do much of a job of putting them in front of people, IMO.

Sculpting / Rapid Prototyping solutions (1)

dj245 (732906) | more than 4 years ago | (#31676874)

I suggest a Faro Laser ScanArm [faro.com] or possibly a Faro Laser Scanner. Both can turn a hand-made model into a 3d drawing. The equipment is fairly expensive (~$100k), but you can hire firms that have the equipment to scan your stuff. The company we use charges about $200 an hour, but depending on what you're scanning, this might be really cheap. The Faro technology is fantastic, but since the market is not so large, the prices are high. The equipment is also precision-made and durable enough to survive industrial environments, both of which probably increase the cost dramatically.

Re:Sculpting / Rapid Prototyping solutions (1)

Zerth (26112) | more than 4 years ago | (#31677310)

Or just get a laser level, a webcam, and some substantially cheaper software [david-laserscanner.com]. Or use MeshLab, with some more effort.

Re:Sculpting / Rapid Prototyping solutions (0)

Anonymous Coward | more than 4 years ago | (#31680644)

I am well aware of what the current technologies are, and how expensive they are.

ALL of them involve a laser that gets incrementally dragged across the surface of the object, with the output being a cloud of points, and yes, there ARE specialist firms that deal with it.

The point I was trying to make was that this technology would make a simpler approach available and would greatly reduce the cost of entry into the digital prototyping market. It might not get the same resolution as a traditional scanner, but almost everyone can afford a high-end digital camera and some LEDs.

This whole technology looks like a simple ray-trace algorithm (single bounce), coupled with selective output from a CCD and some preprogrammed static variables (angles of the LED cones of exposure, distance between LEDs, distances of LEDs to the surface of the object, and distance of the camera from the object, for instance). All of that could be incorporated into a digital camera's firmware.

The most expensive part of the setup would be the software licensing.
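
To make the single-bounce idea concrete, here is a hedged Python sketch of the forward (Lambertian) model with the rig geometry baked in as constants; the LED positions are illustrative assumptions, not the calibration of any real system:

    import numpy as np

    # Hypothetical positions of the three LEDs relative to the subject (meters).
    LED_POSITIONS = np.array([
        [1.0, 0.0, 1.0],    # red LED
        [-0.5, 0.87, 1.0],  # green LED
        [-0.5, -0.87, 1.0], # blue LED
    ])

    def predicted_rgb(point: np.ndarray, normal: np.ndarray) -> np.ndarray:
        """Predicted R, G, B intensity at a surface point: one diffuse
        (Lambertian) bounce per LED, i.e. the cosine of the angle between
        the surface normal and the direction to each light, clamped at 0."""
        dirs = LED_POSITIONS - point
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        return np.clip(dirs @ normal, 0.0, None)

Inverting this model per pixel (three measured intensities, three unknowns in the normal) is what recovers the shape.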

Re:Obvious applications in rapid prototyping. (1)

Bigjeff5 (1143585) | more than 4 years ago | (#31679388)

You'd need a different set of colors for each camera, or you'd get cross-contamination between cameras. It would be better to just spin the object. The other option is to use specific wavelengths and filter the incoming light for each camera accordingly.

Windows xp FTW (0)

Anonymous Coward | more than 4 years ago | (#31674850)

n/t

Implications (1)

keithjr (1091829) | more than 4 years ago | (#31674906)

I hate to be a downer, as I'm often fascinated by computer vision technology, but aren't there some very negative potential applications here? The UK is basically coated in CCTV cameras at this point, and our phones can broadcast GPS data to telcos (whom we KNOW are happy to hand over data to the NSA if they ask kindly). Isn't fully-automated human tracking the third element of the surveillance state trifecta?

Re:Implications (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31675016)

And with face-recognition and 3D-mapping, soon you'll get a ticket via snail-mail.

"Dead Keith Jr.,

in the last three months we have noticed that you have gained 15% in body mass. Please report to the gym immediately or your health care benefits will be suspended."

Re:Implications (5, Funny)

oldspewey (1303305) | more than 4 years ago | (#31675078)

If the letter is in fact addressed "Dead Keith Jr.," then shouldn't it say something more along the lines of "in the last three months we have noticed that you have turned ghoulish grey and started to stink like hell. Please stay the fuck in the ground and stop disturbing your former friends and neighbours."

Re:Implications (2, Funny)

Tekfactory (937086) | more than 4 years ago | (#31675184)

"Dead Keith Jr.,

in the last three months we have noticed that you have gained 15% in body mass. Please report to the gym immediately or your health care benefits will be suspended."

From the greeting, I'd think his health benefits were already suspended.

I guess that bodies really DO bloat a little after death.

err... scale (1)

malp (108885) | more than 4 years ago | (#31676504)

Could you just use a scale?

Re:Implications (1)

oldspewey (1303305) | more than 4 years ago | (#31675034)

The key is to not let them stop at a trifecta. We the people need to add a 4th element: renegade tracking and recording of "the watchers."

Once everybody is subjected to the same rules and consequences, the idea of a surveillance society seems a lot less scary.

Re:Implications (0)

Anonymous Coward | more than 4 years ago | (#31675958)

Good idea. How to implement?

Re:Implications (2, Funny)

WrongSizeGlass (838941) | more than 4 years ago | (#31675138)

I hate to be a downer, as I'm often fascinated by computer vision technology, but aren't there some very negative potential applications here?

You mean like how this will affect a bunch of epileptic kids walking down the street on a school field trip?

Re:Implications (1)

Yvan256 (722131) | more than 4 years ago | (#31676586)

Oh, it's gonna be a trip alright. Give them some room.

(Score: 5, Dark humour)

Colors (2, Interesting)

Conspiracy_Of_Doves (236787) | more than 4 years ago | (#31674950)

Instead of red, green, and blue, could they use three different frequencies in the infrared range? Then they could also take photographs in normal visible light and wrap them around the model.

Re:Colors (3, Informative)

HTH NE1 (675604) | more than 4 years ago | (#31675496)

You'd need a custom CCD that's sensitive to each of those frequencies, as well as a method of storing the image that preserves the intensities of each component. And if you want a color full-motion 3D model, that CCD would need to be sensitive to six frequencies (the 3D sampling set plus RGB) all at once. Fitting in all those different sensors will enlarge your CCD, or else you'll lose resolution.

Re:Colors (1)

Conspiracy_Of_Doves (236787) | more than 4 years ago | (#31675584)

Soo... expensive but not impossible.

Re:Colors (1)

HTH NE1 (675604) | more than 4 years ago | (#31678184)

Easier for a still-life photo. Hard and expensive enough for full motion that you'd tend to just skin the model with a known texture instead. And still not usable in an uncontrolled (particularly outdoor) environment, or in overlapping environments.

There comes a point where the expense of the R&D outweighs the usefulness of the end product; the ability to profit from the result is one consideration. TFA's solution is lucky in that it can be done inexpensively with consumer hardware, a rigid light rig, and a solved application of mathematics.

But they do throw a lot of money into motion capture special effects for motion pictures. There'd be a lot of money to be made by the person who can first develop a system that can provide a virtual camera flying through a live event with full color. And you'd get the bonus of being able to substitute one environment for another, live.

What am I saying, "first develop"? The first to patent it will make the money regardless of his ability to realize the invention. Whosoever actually succeeds in creating it will get sued for infringing the patent.

THIS is going to be the decade (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31675052)

( 'This is going to be the decade of computer vision,' predicts Cipolla. )

where Twitter creates democracy and freedom around the world [youtube.com] .

Yours In Perm,
K. Trout

Interference (2, Insightful)

acheron12 (1268924) | more than 4 years ago | (#31675092)

Since this requires shining lights on the object to be digitized from particular angles, two or more independent vision systems (e.g. in driverless cars) would probably interfere with each other.

Re:Interference (1)

Pollardito (781263) | more than 4 years ago | (#31676398)

The article, the summary, and the links to the article in the summary are all a bit confusing. There are two different 3D modeling processes being demonstrated in the article. One uses a camera and a turntable to model objects, and the other uses one camera and an RGB lighting system. The second is what they propose to use for visualizing people:

When it comes to capturing the raw shape of the human body and face in real-time the multiview stereo system is no good - humans move and expressions are, by nature, mobile. However, pictured above is another 3D modelling technology developed at Toshiba's labs that has been designed to capture the human body and face moving in real-time - yet is still faithful to every individual lump and bump.

and the first seems to be what they're proposing to use for driverless cars (though they give no details about how a setup that uses a turntable would be transferable to that situation):

Another use could be putting video cameras into cars and using the system as a driver aid - by reconstructing road scenes as the car travels along to help with driver safety and parking, and ultimately enable driverless cars.

but then they also make a statement about using a camera inside the car to watch the driver for signs of sleep, something that I assume the second method would be better for (but they don't even talk about that setup until the next page):

An in-car computer vision system could detect when a driver hasn't seen a car stopped in front of it, or when they are in danger of falling asleep, according to Cipolla. "It can look at you when you're driving and see if you're blinking and falling asleep, so warn you. It can actually look outside the lanes and see your driving is very erratic, you seem to be crossing over frequently and correcting sharply - a very strong sign you're about to fall asleep," he says.
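
As a toy illustration of the lane-drift heuristic Cipolla describes (not Toshiba's actual system; the time window and thresholds are invented), the logic might be sketched in Python as:

    from collections import deque

    class DrowsinessMonitor:
        """Warn when lane crossings followed by sharp corrections cluster
        within a short time window, the pattern described as a sign of sleep."""
        def __init__(self, window_s=60.0, max_events=4):
            self.events = deque()        # timestamps of crossing + sharp correction
            self.window_s = window_s
            self.max_events = max_events

        def report(self, t, crossed_lane, steering_rate):
            """Return True if the driver should be warned at time t (seconds).
            steering_rate is the steering angular velocity (rad/s, assumed)."""
            if crossed_lane and abs(steering_rate) > 0.5:  # invented threshold
                self.events.append(t)
            while self.events and t - self.events[0] > self.window_s:
                self.events.popleft()
            return len(self.events) >= self.max_events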

Impressive (1)

wilcley (1183323) | more than 4 years ago | (#31675098)

It looks like the Toshiba group accomplishes with one camera what these guys [ted.com] did with dozens.

Re:Impressive (1)

marcosdumay (620877) | more than 4 years ago | (#31678600)

OK, but the multi-camera one can work under any lighting.

John Bell, err, Bill Joy was right!!! (2, Insightful)

Anonymous Coward | more than 4 years ago | (#31675122)

Just improve skynet's target acquisition algorithms, why don't you?!!!

All your face are belong to us (3, Insightful)

moteyalpha (1228680) | more than 4 years ago | (#31675124)

That is a fantastic leap in thinking!
I am wondering if this technique could be used with the spectrum of stars to identify the 3 dimensional structure of distant galaxies and clouds of gas?

Re:All your face are belong to us (1)

ircmaxell (1117387) | more than 4 years ago | (#31675458)

Well, doubtful. The way this works is that different colors of light are positioned at different angles, and the camera captures the resulting colors based on color mixing. The computer can deduce the surface orientation at any one point by looking at the color reflected from it. Then, once you have all the orientations, you can join the neighboring pixels into a "map" and use the orientation changes to recover depth (hence how it's able to deduce depth from a 2D image). So for it to work, you'd need to know the exact position of each colored light source (not something that's available when looking at a light source such as a galaxy)...
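
For the curious, this is essentially textbook photometric stereo. A minimal Python sketch under the usual assumptions (a matte, Lambertian surface and three calibrated distant lights; the light directions are illustrative, not Toshiba's actual rig):

    import numpy as np

    # Direction vectors toward the red, green, and blue lights (calibrated).
    L = np.array([
        [0.6, 0.0, 0.8],
        [-0.3, 0.5, 0.8],
        [-0.3, -0.5, 0.8],
    ])
    L_inv = np.linalg.inv(L)

    def normals_from_rgb(image):
        """image: H x W x 3 float array of R, G, B intensities.
        Each pixel gives three shading equations, rgb = albedo * (L @ n),
        solved per pixel for the surface normal n."""
        n = image @ L_inv.T                           # un-normalized normals
        norm = np.linalg.norm(n, axis=2, keepdims=True)
        return n / np.clip(norm, 1e-8, None)          # discarded norm = albedo

Integrating the resulting normal field (for example, by solving a Poisson equation) then yields the depth map.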

Re:All your face are belong to us (1)

moteyalpha (1228680) | more than 4 years ago | (#31675736)

I will explain more. A sun has a spectrum based on its position in the sequence [wikipedia.org], and each sun then acts as a different light source; the spectral data from the different reflections could be combined to produce a 3D model of the galaxy. I think that I could devise a Python script for that from 2D images which have spectral data.

Re:All your face are belong to us (1)

HTH NE1 (675604) | more than 4 years ago | (#31675668)

I am wondering if this technique could be used with the spectrum of stars to identify the 3 dimensional structure of distant galaxies and clouds of gas?

Only with crude beings does this work, not luminous matter.

And you'll have too many stars with overlapping spectra to have effective chroma isolation for mapping non-stellar matter, let alone the problem of first mapping out all the light sources contributing to its illumination.

Re:All your face are belong to us (1)

moteyalpha (1228680) | more than 4 years ago | (#31675954)

You are right; that would be like taking an encrypted signal from two satellites and merging them, using the relativistic velocity of the satellites and the frequency shift as the signals pass into the gravity well, to determine my position on the surface of a sphere.
I know I shouldn't dream of new things, but if I could do that, I would call it GPS.

Re:All your face are belong to us (1)

HTH NE1 (675604) | more than 4 years ago | (#31677946)

At least your satellites are in known and regular positions and produce signals readily separable from each other. The natural distribution of stars in the Universe is not so conveniently arranged, and their photons are not nearly so distinguishable after they've been reflected off an object of unknown topology.

Consider that the method described in TFA only works for a single photographer in a controlled environment. Don't let the blackness of space fool you: there's a lot of light pollution out there emitted by billions and billions of stars.

In fact, though, by noting when supernovae occurred in the past, we know when the light from a nova will reach a nebula, and when its reflection from that nebula will be seen by us, and this has already been used to map out some structures. At the scales and distances involved, the wavefront of the nova appears to move across those structures very slowly, giving a lot of time to observe and map the overall supermacroscopic structure, even from just a single flare of light. And then comes the compression wave of interstellar hydrogen, altering the structure we see in the backscatter of that light.

Perhaps you should dig into the records to find when light from multiple supernovae in the past will arrive simultaneously somewhere else and hope there's sufficient matter there to reflect back to us within your lifetime. With luck, perhaps it'll illuminate an otherwise dark nebula that will appear to us in the shape of some gigantic symbolic gesture of the hand of God telling us to fuck off.

Re:All your face are belong to us (1)

moteyalpha (1228680) | more than 4 years ago | (#31678436)

I wish I had known when I was younger that complicated things can't be achieved; it would have saved me and my friends a lot of time. What you are describing is like ray tracing, and that is quite impossible, I now know, since you have informed me.
Blender [blender.org]
As far as finding the hand of the Ceiling Cat, that is obvious in the wonderful lulz that illuminate us.
I know what you mean about the stars; every night I look up and they wander about like fireflies with no obvious pattern.
If these techniques were already employed at other wavelengths [nao.ac.jp], then I might refute your statement, but alas, I have no proof.

Re:All your face are belong to us (1)

HTH NE1 (675604) | more than 4 years ago | (#31678958)

Well, ray tracing is easy once you know the position and direction of every photon. In natural practice, there's a bit of uncertainty regarding that, but you might be able to fudge it at astronomical scales. Ray-traced images of a terrestrial nature have always seemed artificial to me, as if the environment depicted were in a perfect vacuum.

But what if it could be applied instead at an atomic scale, using charged particles to control the simultaneous emissions of photons of certain wavelengths from fixed positions at proteins to determine how they are folding rather than modeling them?

(You are very adept at conveying sarcasm in a text forum. I bow to your skill. Of course, one role of the skeptic is to urge the (more) creative thinkers to prove him wrong. I hope I have not given you any offense.)

Promise (1)

sonicmerlin (1505111) | more than 4 years ago | (#31675268)

This has such incredible promise for the low-cost development of modern day games. Animation still presents a problem of course.

Re:Promise (1)

snooo53 (663796) | more than 4 years ago | (#31676784)

The thing is, I'm not sure a glorified 3D scanner is going to help all that much. It would be cool for, say, digitizing the layout of your home or a model object, and being able to do so cheaply, but what I think would have a bigger impact is software intelligent enough to separate those objects out from the environment. Being able to recognize that, say, a flower or a person's arm is bendable, but a chair isn't. Or being able to recognize that the bottle of soda under a bright refrigerator light is the same bottle of soda that was just pulled out of a dark grocery bag. These are very difficult computer vision problems that our brains are well adapted to handle, and we are only scratching the surface of being able to describe how that works.

Real Avatars and computer gaming (3, Interesting)

NonSenseAgency (1759800) | more than 4 years ago | (#31675366)

One of the uses mentioned in the article was that this would enable gamers to upload realistic portrayals of themselves into computer games as their avatars. Unfortunately (or perhaps fortunately for some of us), real virtual life isn't anything like Neal Stephenson's novel "Snow Crash". Most gamers, unlike the hero Hiro Protagonist (pun intended), do not want to look like themselves at all. They are bigger, or meaner, or better looking, or, in the case of all too many, not even the same gender. What seems far more likely is a market springing up in avatars made from recordings of real people. And that raises a whole new question: who owns your avatar? Intellectual property rights just took a huge twist.

And... (1)

kenp2002 (545495) | more than 4 years ago | (#31675434)

In further news 20 million CAPTCHA drones in 3rd world countries rioted at the prospect of being replaced by advances in computer vision which will render captcha technology useless...

Mick (1)

michaelmalak (91262) | more than 4 years ago | (#31675612)

Did anyone else, having read just the headline, think this was about Mick Jagger?

Artificial Intelligence is around the corner (1)

CrazyJim1 (809850) | more than 4 years ago | (#31675706)

Real world to 3D models is a core component of AI. The AI needs to see its world before it can make decisions inside it. Imagine Quake: you can make a bot to play inside it because you have all the data in the game. Now, if you wanted to make a "fetch me a beer" bot, the thing would have to know what your house looked like to navigate the path.

Obviously you'll also need to write software that "identifies" the 3D objects you're looking at, and that will take some work, but it isn't impossible using pattern recognition.

I have a small page on how I think AI will come about [goodnewsjim.com]

Re:Artificial Intelligence is around the corner (0)

Anonymous Coward | more than 4 years ago | (#31678552)

Real world to 3D models is a core component of AI.

Nonsense. You don't have 3D models in your head, and neither does AI.

The AI needs to see its world before it can make decisions inside it.

It doesn't need 3D models for that.

Now, if you wanted to make a "fetch me a beer" bot, the thing would have to know what your house looked like to navigate the path.

Again, nonsense. You don't use a 3D model in order to fetch a beer and neither does a robot.

Re:Artificial Intelligence is around the corner (1)

CrazyJim1 (809850) | more than 4 years ago | (#31679220)

Nonsense. You don't have 3D models in your head, and neither does AI.

You know where things are in relation to yourself. You know what things look like when you've looked at them before. This is the same data.

I like this... (0)

Anonymous Coward | more than 4 years ago | (#31675758)

For the possible use in making avatars for games. That alone makes it a great technology. You could have the computer record your facial expressions and apply them to in-game emotes too. It would make for excellent MMORPG characters.

The article started with faces; how hard are gestures? (1)

physburn (1095481) | more than 4 years ago | (#31676648)

I would have thought gesture recognition would be relatively easy: just track the position of the hands, which should be the nearest objects (a rough sketch of the idea follows below). But it's taking some time. Certainly, having a computer recognise basic hand movements and run scripts accordingly would be a great timesaver. On the subject, when will Windows get a proper scripting language, like Rexx was on OS/2 and the Amiga?

---

Computer Vision [feeddistiller.com] Feed @ Feed Distiller [feeddistiller.com]
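
Picking up the "hands are the nearest object" idea above, a rough Python sketch (the 10 cm slab and the depth-map source are assumptions, not anything from TFA):

    import numpy as np

    def track_hand(depth):
        """depth: H x W array of distances in meters (e.g. from the RGB-light
        reconstruction). Keep only the slab of pixels nearest the camera and
        return the (row, col) centroid of that blob as a crude hand position."""
        near = depth.min()
        mask = depth < near + 0.10      # pixels within 10 cm of the nearest point
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return None
        return rows.mean(), cols.mean()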

Re:The article started with faces; how hard are gestures? (1)

RJFerret (1279530) | more than 4 years ago | (#31680894)

On the subject, when will Windows get a proper scripting language, like Rexx was on OS/2 and the Amiga?

OMG, off topic but I SO miss ARexx...

The closest I've found is AutoHotkey, which has a whole scripting language and can interact with the UI of different software. It's not as useful as having Rexx ports in applications, but it opens up many capabilities (the typo auto-corrector alone is worth the download).

One step closer to... (1)

D3 (31029) | more than 4 years ago | (#31677196)

The Running Man!

Re:One step closer to... (1)

HTH NE1 (675604) | more than 3 years ago | (#31679824)

Or Looker.

"Hi, I'm Cindy. I'm the perfect female type, 18 to 25. I'm here to sell for you. Hi, I'm Cindy. I'm the perfect female type, 18 to 25. I'm here to sell for you. Hi, I'm Cindy...."

Old principle, but better! (0)

Anonymous Coward | more than 4 years ago | (#31677230)

This seems like a photometric stereo technique using colored light, which has been done since at least the early '90s.
Other applications:
http://mi.eng.cam.ac.uk/research/projects/VideoNormals/

But their take seems to make it work very well on the entire head, including hair, so across a wide range of reflectance behavior. That has been the challenge for this technique. Good work, and a proof of concept of how far this technique can go!

Uh Oh (1)

spaceman375 (780812) | more than 4 years ago | (#31678012)

Looks around...
Red light source - check
Yellow light source - check
Green light source - check
Other colors from monitor ......

I think I'll go polish my tinfoil hat.

/// Oooo, Shiny

Martian eyes (0)

Anonymous Coward | more than 4 years ago | (#31679356)

And that's new technology? No, simply old Martian tech.

At last, they are filtering a bit of technology out of Area 51's hijacked UFO.

http://www.war-ofthe-worlds.co.uk/images/war_worlds_pal_10_x.jpg ;D

Yeah! (0)

Anonymous Coward | more than 4 years ago | (#31680596)

This is the year of the 3D-vision-enhanced Linux desktop!...... yep
