
Cheap 3D Motion Sensing System Developed At MIT

timothy posted more than 4 years ago | from the let-the-ractives-begin dept.

Input Devices 60

Al writes "Researchers at the MIT Media Lab have created a cheaper way to track physical motion that could prove useful for movie special effects. Normally an actor needs to wear special light-reflecting markers, tracked by numerous high-speed cameras placed around a specially lit set. The new system, called Second Skin, instead relies on tiny photosensors embedded in clothes that record movement by picking up patterns of infrared light emitted by inexpensive projectors that can be mounted in ceilings or even outdoors. The whole system costs less than $1,000 to build, and the researchers have developed a version that vibrates to guide a person's arm movements. Watch a video of Second Skin in action."


Tracking fidelity (3, Interesting)

Anenome (1250374) | more than 4 years ago | (#27777161)

The tracking fidelity from the video seems low. For movie work you need a very smooth input, otherwise you end up spending a lot of money to smooth out the positional data which has the side-effect of making it look more artificial and robot-like.

What I do like is the use of projected patterns to track individual dots, that's pretty clever. But it seems like this won't be the final solution. Ultimately we're going to need to perfect a micro-GPS system, and that has many more applications than just use as movement-capture for movie production.

Re:Tracking fidelity (4, Informative)

Anonymous Coward | more than 4 years ago | (#27777539)

The video on the SecondSkin web site says it captures 5000 frames per second. I think the slowness you perceived in the feedback video was due to the feedback software, not the capture system.

Re:Tracking fidelity (1)

citizenr (871508) | more than 4 years ago | (#27779421)

The video on the SecondSkin web site says it captures 5000 frames per second. I think the slowness you perceived in the feedback video was due to the feedback software, not the capture system.

How can it capture at 5000fps when the projectors that give it a point of reference work at only 1000fps? Besides, it's jerky.

Re:Tracking fidelity (1)

Anenome (1250374) | more than 4 years ago | (#27780167)

Mmm, frames per second and fidelity are two different things, much like performing 5,000 calculations per second is one thing and whether those are floating-point or integer calculations is another. It's like you're telling me it's a video camera that shoots 5,000 FPS and it's shiny, and I'm looking at the features tag that says it only captures at 1 megapixel resolution. We want, we need, more resolution. Especially when it comes to mo-cap for feature films, where the slightest jitter is extremely noticeable.

If the system's granularity is too high then I can never produce smooth motion no matter how many times a second you want to capture a frame. Since their system is based on a visual-detection rig, the granularity will be based on the resolution of the video-cameras along with how far from the targets they are. This will give a base granularity. If the granularity is a centimeter, that's really bad. If it's millimeters, that's a lot less noticeable, but still not good enough to not be noticed by the eye. We are capable of extremely subtle movements, and low granularity is necessary so that it doesn't look like you've moved when you haven't. Such as an eye locked on a target, it appears to be perfectly still, but if the granularity is too high it would perform tiny pops that would definitely look weird (not that this system is for tracking eyes, but that's an example). While the monitor did seem to be skipping frames, it also seemed to show a low granularity. The hand was moving in a way that wasn't being smoothly tracked by the rig, that's my concern.

I still say that a positional system based on mini-transponders to form a micro-GPS system is going to be innately more accurate due to both ease and precision of the triangulation calculations -- and it will be extensible to a multitude of other systems, such as shipment tracking, RFIDs, cellphones, and a whole host of other applications.

Re:Tracking fidelity (2, Insightful)

Jay L (74152) | more than 4 years ago | (#27780297)

If the system's granularity is too high then I can never produce smooth motion no matter how many times a second you want to capture a frame.

Is video too complex to allow the sort of math we do on audio? In the audio realm, most ADCs are natively 1-bit converters with a ridiculously high sampling rate (MHz). That turns out to be mathematically equivalent to, say, 24-bit audio at 192KHz.

But audio's a single waveform, and video's a collection of pixels, so I guess it's all different.

Re:Tracking fidelity (1)

SlashWombat (1227578) | more than 5 years ago | (#27782973)

Audio "1 bit" converters are Delta-Sigma converters, and work internally at very much higher clock rates than the configured sample rate would imply (16 bit accuracy converter clocks in the 10's of MHz for 32..48 kilobit sample rates). Video needs to be sampled at much higher rates. Good old PAL/NTSC generally is sampled at approx 12.5 to 13.5 MHz minimum, and often much faster. The Sigma-delta converter for this would need to run in the GHz range to provide 8..10 bits of accuracy per pixel. This would consume a LOT of power!
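A back-of-envelope version of that rate argument (my own illustration, assuming a first-order sigma-delta modulator, which gains roughly 1.5 effective bits per octave of oversampling; real converters use higher-order modulators and extra margin, so this only shows the order of magnitude):

```python
# Rough sizing of a 1-bit sigma-delta converter's clock rate, assuming
# first-order noise shaping (~1.5 effective bits gained per octave of
# oversampling). Illustrative numbers only.

def min_clock(sample_rate_hz, target_bits, native_bits=1, bits_per_octave=1.5):
    """Minimum modulator clock to reach target_bits at the given sample rate."""
    extra_bits = target_bits - native_bits
    oversampling = 2 ** (extra_bits / bits_per_octave)
    return sample_rate_hz * oversampling

# Audio: 48 kHz at 16 bits lands in the tens of MHz, as stated above.
print(min_clock(48e3, 16) / 1e6)    # ~49 MHz

# Video: a 13.5 MHz pixel rate at 9 bits already needs hundreds of MHz;
# higher-resolution targets push it toward the GHz range.
print(min_clock(13.5e6, 9) / 1e6)   # ~544 MHz
```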

Re:Tracking fidelity (1)

Jay L (74152) | more than 5 years ago | (#27785363)

The Sigma-delta converter for this would need to run in the GHz range to provide 8..10 bits of accuracy per pixel

Oh... duh. This is why I'm no good at math. Thanks for the explanation.

... for movie special effects (0, Troll)

petershank (463008) | more than 4 years ago | (#27777195)

because goodness knows in these troubling times, our society needs to concentrate our technological progress into the betterment of movie special effects, and a better cost structure for producers of action blockbusters.

Re:... for movie special effects (1)

Burkin (1534829) | more than 4 years ago | (#27777219)

I'm an action blockbuster producer you insensitive clod!

Re:... for movie special effects (1)

edlinfan (1131341) | more than 4 years ago | (#27777235)

Hey, watching movies is *fun*.

There's nothing like a good film to (temporarily) take your mind off reality.

Re:... for movie special effects (-1, Troll)

Anonymous Coward | more than 4 years ago | (#27777281)

Yes let's all watch movies to take our mind off of all the "reality" we experience at /.

Re:... for movie special effects (1)

relguj9 (1313593) | more than 4 years ago | (#27778781)

Hey, watching movies is *fun*.

There's nothing like a good film to (temporarily) take your mind off reality.

Eh, but what is reality?

More appropriate, I think, rather than saying we escape reality, would be to say that living vicariously through movies is entertaining. We generate work through our own desire for entertainment and luxury; otherwise most of us would be out of work, cuz all we really need to survive is food and shelter, and we're so efficient at providing those that only a few workers can sustain very many people. Entertainment and enjoyment of life brings meaning to it aside from mere subsistence.

What I mean to say is, everything is reality. And the OP is a jerk troll! /randombanter

Re:... for movie special effects (1)

MobileTatsu-NJG (946591) | more than 4 years ago | (#27777449)

because goodness knows in these troubling times, our society needs to concentrate our technological progress into the betterment of movie special effects, and a better cost structure for producers of action blockbusters.

Yeah, you wouldn't want people spending tons of money on frivolous things during an economic downturn.

Re:... for movie special effects (1)

The Gaytriot (1254048) | more than 4 years ago | (#27777891)

Amen! We should all be suffering! What is wrong with all these people with their fun games and entertainment, don't they know we're in a recession?

Re:... for movie special effects (1)

OzeBuddha (459435) | more than 5 years ago | (#27785427)

Speaking of movie special effects.. anyone else see a striking similarity to the ractives in Neal Stephenson's book Diamond Age? The next step is sensors permanently embedded under the skin..

Re:... for movie special effects (1)

OzeBuddha (459435) | more than 5 years ago | (#27786485)

Hence "from the let-the-ractives-begin dept" d'oh

Second Skin? Unfortunate name... (2)

rts008 (812749) | more than 4 years ago | (#27777319)

When I saw the name of this, I immediately thought of Second Life.

Second Skin takes over Second Life!

Oh, the humanity! [or lack of...]

I bet the pr0n industry could have fun with this...

Re:Second Skin? Unfortunate name... (1)

pete-classic (75983) | more than 4 years ago | (#27777731)

Huh. The first thing I thought of was Crown Skin Less Skin condoms. Which are already very popular in the porn industry.

-Peter

Re:Second Skin? Unfortunate name... (1)

Rananar (135600) | more than 4 years ago | (#27779593)

Already been done [secondskinlabs.com] .

Combine with VR Cave! (1)

StCredZero (169093) | more than 4 years ago | (#27781061)

Combine Second Life and Second Skin with virtual reality "cave" technology [digitalcon...oducer.com] and you have a low rent holodeck. Use it to interpret gestures like the Wii does, and yes, you have a revolution in cybersex and interactive pr0n.

I say it's a buy! Someone is going to make many millions on this. (Especially if they invent a Bluetooth API for optional teledildonics [wired.com] .)

WiiHD? (2)

cwrinn (1282510) | more than 4 years ago | (#27777325)

Wii HD suit?

Re:WeeeeeeHD? (1)

FatdogHaiku (978357) | more than 4 years ago | (#27780933)

...relies on tiny photosensors embedded in clothes... ...and the researchers have developed a version that vibrates...

Someone will work the system into porn and THEN we'll have a video game that is REALLY addictive!

No more "balls in your face" jokes (3, Funny)

Drakkenmensch (1255800) | more than 4 years ago | (#27777397)

If the suit used to capture motion is not the standard black suit covered in little ping pong balls anymore, it's gonna make DVD "making of" extra features a lot less entertaining to watch.

Re:No more "balls in your face" jokes (2, Funny)

GodfatherofSoul (174979) | more than 4 years ago | (#27779385)

"Balls in your face"? What sort of DVDs are we talking about?

Wow, thanks MIT (-1, Troll)

Anonymous Coward | more than 4 years ago | (#27777411)

Because on a 100 million dollar movie budget, I'm sure the motion sensing has a huge impact. When's the last time MIT did anything relevant? The Apollo missions?

Re:Wow, thanks MIT (1)

The Friendly Strange (1228202) | more than 4 years ago | (#27777971)

It's not about the Institute, it's about the people. Each and every one of them has made significant contributions to their respective fields: astronautics, semiconductors, etc. You may not know now what can be done with these technologies, but in the long run they will be vital for new processes you can benefit from. Just wait and see...

Re:Wow, thanks MIT (1)

Peganthyrus (713645) | more than 4 years ago | (#27780149)

It'll sure have a huge impact on movies being made by five friends with whatever effects they and their buddies can put together! Hack together your own mo-cap studio for a couple thousand, and the amount of stuff you can do goes way up.

Also:

For the movie industry, this potentially means that motion tracking can be done on a regular set, which would save production time and let the actors work in a natural setting. "These elaborate systems get in the way of trying to shoot these films," says Steve Sullivan, the senior technology officer at Lucasfilm's Industrial Light & Magic (ILM). "A lot of people see motion tracking as being a solved problem, but I think there's much more we can do to make it more accessible to a range of people and less in the way."

WHY IR?! (0)

Anonymous Coward | more than 4 years ago | (#27777421)

Why use IR when it has proven time and time again to fail in 3D uses?
You can block IR with incredible ease, so much so that it probably happens a million times every day in IT by accident.

Give me some radio / wi-fi / wimax / bluetooth / similar.
Well, Bluetooth might not be that good either.

Re:WHY IR?! (1, Offtopic)

Burkin (1534829) | more than 4 years ago | (#27777467)

How exactly are radio/wifi/wimax/bluetooth at all relevant in relation to a motion capture camera?

Re:WHY IR?! (0)

Anonymous Coward | more than 4 years ago | (#27777633)

All of them are electromagnetic waves.
The method used by them is inefficient.

Re:WHY IR?! (1)

MobileTatsu-NJG (946591) | more than 4 years ago | (#27777717)

How exactly are radio/wifi/wimax/bluetooth at all relevant in relation to a motion capture camera?

It's not. It sounds like he's confusing the use of IR with an IRDA port on a laptop. (BTW: His question wasn't off-topic. He asked an interesting question.)

IR is used to illuminate the balls on the mocap suit so that the cameras in the volume see little else but bright white specks to track. They use IR in particular because they can make those balls really bright for the volume, but still retain normal lighting on the stage. Besides not requiring actors to act in the dark, they also do this so they can have regular video cameras capture the action for reference.

What he might have meant is using a technology where instead of looking at where sensors are from the outside, use sensors that transmit where they are via radio or something. There is some sense in that, provided the technology exists. I saw demos years ago of a suit sort of like that. I don't know if it transmitted translational data (as opposed to just rotational data...), but even if it did, there was a nice big cable coming from the actor to a computer somewhere. Useless for stunt work. Anyway, he might be thinking of that, but you cannot really tell from the way he positioned his rant. But, yeah, stunt work is done quite often with mocap and once you start putting wires and blinkie on actors you start losing flexibility.

His complaint, as it is stated, is akin to bitching about cars using petroleum when nuclear technology has proven to be a lot better. If you're really vague with the details and throw practicality out of consideration you can make a compelling-sounding rant.

Re:WHY IR?! (0)

Anonymous Coward | more than 4 years ago | (#27778853)

I never had any problems reading it.

It is quite simple, really.

IR sucks because it can get blocked easily. (an example, placing an arm over the balls on one side of the body)
There are better methods of transmitting positions with radio transmitters, as you brought up.
It doesn't need any fancy stuff, it is basically "local GPS".

Radio transmitters wouldn't even need to be visible, either.
The problem with the IR methods used now is that there are little balls all over said actor.

Radio transmitter suits would be many times better overall for getting accurate position data of body parts with very little effort compared to IR.
Currently (as far as I can remember), IR mocap requires a load of cameras all over the place to account for you covering some balls during mocap.
With radio, a triangle is all that is needed.

I honestly don't know why IR is chosen over radio.

Re:WHY IR?! (1)

MobileTatsu-NJG (946591) | more than 4 years ago | (#27779119)

I honestly don't know why IR is chosen over radio.

It bogs down the actors with either cables or batteries. Also, radio limits you to a smaller volume and requires such a high frequency that it ends up being line-of-sight anyway.

In short: it's inferior to IR and optical capture in real-world scenarios.

Re:WHY IR?! (0)

Anonymous Coward | more than 4 years ago | (#27780323)

Mobile Phones and GPS would like a word.
Both have been working for years without many problems.

Yes, the person you are replying to only mentioned radio, but I believe they really meant those frequencies used for communication outside of those that need line-of-sight.

Re:WHY IR?! (1)

MobileTatsu-NJG (946591) | more than 4 years ago | (#27780859)

Mobile Phones and GPS would like a word.
Both have been working for years without many problems.

You don't strap 40 cell phones to a human and expect them to, at 60fps, provide accurate position and rotation data down to the millimeter. That's about like saying "we landed a man on the moon several decades ago, there's no reason we can't get people to Mars."

It's a more difficult problem than it looks.

Re:WHY IR?! (1)

detachable_halo (1519547) | more than 4 years ago | (#27779835)

Surely with today's technology it shouldn't be difficult to build an internally-recording suit that doesn't require a tethered connection. Cell phones prove that GPS technology doesn't require anything too bulky. Why not adapt the idea a bit, and make a suit covered in small sensors that record relative positional data from static transmitters?

I envision establishing a "box" of eight transmitters (that many isn't technically necessary, but might provide more accuracy and error-correction; my initial thought is literally a rectangular configuration with a transmitter at each corner, but the formation need not be that rigid) which is only limited in size and shape by the sensitivity (reception range) of the sensors. Once the transmitters are on, the sensors begin recording relative positions and storing them to distributed flash memory hubs; more hubs = smaller individual storage capacities needed, shorter transmission distances, and possibly smaller footprints. All that needs to be stored would be transmitter ID, sensor ID, relative distance, and a timestamp. The data can be uploaded to a server later, along with the relative positions of the transmitters, and the data points can be calculated and compared to build accurate locations at specific times, and plot out the total motion capture. With a little modification, radio transmission from the hubs could enable real-time uploading to a processing server.

Advantages I see to this: transportable (motion capture can be done on location, instead of set stages/rooms), untethered, scalable (use as many sensors as you like). Maybe more that I can't think of right now.

Disadvantages: processing-intensive to convert data points to motion plot.

Unknowns: power consumption of the suit, and how best to provide the power capacity it needs for sustained recording.

Anyone see any major flaws in my admittedly hasty design?
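The core math step of the posted design can be sketched (my own toy code, not the poster's; the corner layout and function names are illustrative): given ranges from one sensor to four fixed transmitters, subtracting one sphere equation from the others leaves a small linear system for the sensor's 3D position.

```python
# Sketch of the "local GPS" step the post describes: recover a sensor's 3D
# position from measured ranges to four transmitters at known positions.
# Linearize by subtracting the first sphere equation, then solve the 3x3
# system with Cramer's rule. Illustrative code, not the poster's design.
import math

def trilaterate(tx, d):
    """tx: four known (x, y, z) transmitter positions; d: measured ranges."""
    (x0, y0, z0), d0 = tx[0], d[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(tx[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
        b.append(d0*d0 - di*di + xi*xi - x0*x0
                 + yi*yi - y0*y0 + zi*zi - z0*z0)
    det = lambda m: (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
                   - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
                   + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    D = det(A)
    out = []
    for j in range(3):           # Cramer's rule: replace column j with b
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        out.append(det(M) / D)
    return tuple(out)

# Transmitters at four corners of a 5 m box, sensor at (1.0, 2.0, 0.5):
tx = [(0, 0, 0), (5, 0, 0), (0, 5, 0), (0, 0, 5)]
d = [math.dist((1.0, 2.0, 0.5), t) for t in tx]
print(trilaterate(tx, d))  # recovers (1.0, 2.0, 0.5)
```

With noisy ranges and more than four transmitters this becomes a least-squares fit rather than an exact solve, which is where the "processing-intensive" disadvantage the post lists comes in.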

Re:WHY IR?! (1)

HTH NE1 (675604) | more than 4 years ago | (#27780521)

What he might have meant is using a technology where instead of looking at where sensors are from the outside, use sensors that transmit where they are via radio or something. There is some sense in that, provided the technology exists. I saw demos years ago of a suit sort of like that. I don't know if it transmitted translational data (as opposed to just rotational data...), but even if it did, there was a nice big cable coming from the actor to a computer somewhere.

Just get a spandex suit with tiny RFID tags embedded in it and build an array of receivers in the studio in fixed positions and do the equivalent of GPS triangulation on the smaller scale. Record the data and do the math later to whatever accuracy you need (you're not locating yourself on Earth so civilian GPS hardware limitations don't apply). Meanwhile, your actor is able to wear normal costumes on top of the spandex suit and you'll be able to augment his performance on a more practical set. Maybe even shooting on location.

And no heavy cables coming out your pants' leg.

Re:WHY IR?! (1)

MobileTatsu-NJG (946591) | more than 4 years ago | (#27780897)

Make it work in a 70' square volume with 18 actors moving in it and patent it.

Re:WHY IR?! (1)

HTH NE1 (675604) | more than 4 years ago | (#27781101)

I gave up on the idea of patenting it. It's too obvious. All the barriers to doing it are easily discoverable, and the solutions are too few, and thus too obvious, to survive a patent challenge (such as coming up with a powerful enough signal to drive the RFID tags without violating FCC regs and without interfering with the reception of their signals; getting them all to uniquely identify themselves both by identity and by elapsed time from the driving signal, which contains its own clock signal; optimum placement of receivers is a matter of mathematics). It comes down to a matter of experimentation, which I have neither the opportunity nor the inclination to do myself.

And besides, I've already stated enough about it in a public forum to make it impossible for anyone who hasn't already beaten me to the idea to patent it. My postings on the subject are now prior art against anyone trying to bar anyone else from doing it. (Some of you were close to coming up with it too, just getting hung up on batteries and cables instead of going for RFID tags.)

Re:WHY IR?! (1)

MobileTatsu-NJG (946591) | more than 4 years ago | (#27781207)

What I'm trying to say is 'easier said than done'. The IR technology has gone quite a ways. I think they'd all love to have something that's just as capable without occlusion problems or expensive solutions. Really, though, that many actors in that big of space with a minimum of 40 markers each, that's a tall order no matter which technology you use.

Don't get too excited yet (1)

OrthodonticJake (624565) | more than 4 years ago | (#27777555)

I actually presented a poster next to Ramesh Raskar at CHI earlier this month. While a very interesting project, he seemed to indicate that it was still very much a work in progress.

We know what they were thinking! (1)

SlipperHat (1185737) | more than 4 years ago | (#27777605)

researchers have developed a version that vibrates to guide a person's arm movements.

One word: autopilot.

(Ironically, my captcha was "females")

The problem is... (1)

El Cabri (13930) | more than 4 years ago | (#27777611)

That since most of the cost resides in doing something useful with the data (actually producing the images), the time and talent of the people that are _in_ the suits, etc., the producers really don't give a frak whether their motion capture system costs $1,000, $15,000 or even $100,000. What they want is something that is proven to work, that technicians are familiar with, and that you can readily rent by the hour along with the facility it's located in. So thank you Media Lab for another useless gadget.

Re:The problem is...no problem after all. (1)

MarkvW (1037596) | more than 4 years ago | (#27778079)

Parent's attention is fixed on the existing moviemaking structure and is not directed to alternative distribution and creation channels. Those alternative channels are the wave of the future. The cheaper production gets, the more opportunity we'll all have for a greater array of diverse movies.

Someday a truly independent movie is going to hit it big via reasonably independent internet distribution. That will change everything. Technology like this only makes that day closer to reality!

I say hurrah!

Re:The problem is... (2, Informative)

mshannon78660 (1030880) | more than 4 years ago | (#27778191)

Actually, if you RTFA, you'll see that they already address this. One of the difficulties with current systems is that you have to go to the system to do the motion capture. This new system could potentially be used on set - which would be very attractive in situations where live-action and CG are mixed.

Re:The problem is... (3, Insightful)

Dutch Gun (899105) | more than 4 years ago | (#27778527)

There are many small and medium sized game development houses who would love an inexpensive motion capture system in order to capture data for things like in-game cut-scenes. And to them, yes, it makes a pretty big difference whether a system cost $1000 vs $100,000. Having to rent a studio by the hour is also pretty damned expensive.

Besides which, it seems foolish to offhandedly dismiss new technology such as this before it's even had a chance to develop into a useful product.

Re:The problem is... (1)

Anenome (1250374) | more than 4 years ago | (#27780243)

Perhaps, but you're thinking small time, here. If the price of a good-enough Mo-Cap system got down to $1,000, do you know what that means??? That means that there would certainly be some hobbyists taking this home and experimenting with it. When that happens lots of fun things can result.

Like a WiiMote! (4, Interesting)

DdJ (10790) | more than 4 years ago | (#27777675)

What's interesting to me is, this is almost exactly how the WiiMote works so cheaply!

A lot of people assume that the Wii's sensor bar actually senses, and that it can tell where the WiiMote is. But that ain't so. The sensor bar is just a pair of IR emitters. The front of the WiiMote is an IR camera. The thing you hold in your hand is looking at the external IR sources and using those to try and figure out where it is, and then telling that to the base system, almost exactly as is described in this article.

It's like someone said "hey, let's do motion capture by gluing WiiMotes all over a person's body!".
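A rough sketch of that inside-out scheme (my own toy code and numbers; the sign convention and the 1024x768 camera resolution are assumptions): the handheld camera sees the two sensor-bar IR dots, infers where it is pointing from their midpoint, and infers its roll from the angle between them.

```python
# Toy sketch of pointer tracking from the handheld side, as described above:
# two IR dots seen by the remote's camera give a pointing position (midpoint)
# and a roll angle. The 1024x768 camera resolution and the x-flip convention
# are assumptions for illustration.
import math

CAM_W, CAM_H = 1024, 768  # assumed IR camera resolution

def pointer_from_dots(dot_a, dot_b):
    """Return (px, py) in [0, 1] screen coordinates plus roll in radians."""
    mx = (dot_a[0] + dot_b[0]) / 2
    my = (dot_a[1] + dot_b[1]) / 2
    # The dots drift left in the camera image as you point right, so flip x.
    px = 1.0 - mx / CAM_W
    py = my / CAM_H
    roll = math.atan2(dot_b[1] - dot_a[1], dot_b[0] - dot_a[0])
    return px, py, roll

# Dots symmetric about the camera center, level with each other:
print(pointer_from_dots((400, 384), (624, 384)))  # (0.5, 0.5, 0.0)
```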

Re:Like a WiiMote! (1)

ageforce_ (719072) | more than 4 years ago | (#27777875)

The big difference is that the Wiimote needs to have an IR camera. In the presented method the receptors are cheap infrared sensors; the position is calculated by decoding the patterns the projectors send.
A similar technique has been used to calibrate the image of a projector to a surface. Here is a video: http://blog.makezine.com/archive/2008/04/automatic_projector_calib.html [makezine.com]

Re:Like a WiiMote! (0)

Anonymous Coward | more than 4 years ago | (#27777979)

Actually, I don't think it's the same type of system.

The video showed the images being projected. Although these aren't scanned, if you looked at the "scan lines" of these images, it looks like they're projecting progressively higher-frequency square waves into space. When looked at in composite (by the photosensor), you basically get a coordinate along the latitudinal axis of the projector from this. The pattern coming out of the photosensor is going to be unique for each given radial from the projector.

It's an interesting idea, but I think it suffers from too many drawbacks to be useful.

First, each photosensor needs to send its data back to somewhere. That's a lot of data coming from whatever is wearing this thing. Camera-based systems allow the subject to be passive and can track a bunch of targets.

Second, the temporal resolution is limited to the flash rate of the projectors and is inversely related to the spatial resolution. Higher spatial resolution requires more progressive images to be displayed per "capture interval" which means capture intervals will be spaced out more.

Re:Like a WiiMote! (1)

redJag (662818) | more than 4 years ago | (#27779375)

Well the sensor bar doesn't "sense", but the Wiimote definitely does. I realize your post doesn't contradict this, but it also implies that all Wii motion-sensing is done with IR and that isn't the case. The Wiimote has an accelerometer that can detect movement on 3 axes. The IR camera is used for detecting where on the screen it is pointing. http://en.wikipedia.org/wiki/Wiimote#Sensing [wikipedia.org]

Wiimote: Motion vs. Position (1)

jonaskoelker (922170) | more than 5 years ago | (#27783793)

the Wiimote definitely [senses]. [...] [Parent] also implies that all Wii motion-sensing is done with IR and that isn't the case. The Wiimote has an accelerometer that can detect movement on 3 axes.

Keeping up with "your post doesn't contradict this", I want to add:

The accelerometers sense differential data (motion), whereas the IR camera senses static data (direction towards IR light).

If you assume that there are only two infrared sources out in the world (at either end of the sensor bar) and they don't move, you can use your camera reading to infer your angle in the horizontal plane as long as you can see the infrared sources. Using that, plus the strength of gravity at different points on the wiimote, you can compute its three-dimensional angle at any time.

If you knew your position at time t0 and all motion afterwards (but no IR camera information), you could in theory compute your position at all later times; in practice, due to the relatively low resolution (1 byte per accelerometer per (ISTR) 100hz sample), this doesn't work so well, so you need the IR camera.
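To put a number on why that dead reckoning fails in practice (my own illustrative figures: an 8-bit accelerometer spanning plus or minus 4 g, sampled at 100 Hz, with an uncorrected bias of half a quantization step):

```python
# Illustration of accelerometer drift, with assumed numbers: an 8-bit
# sensor spanning +/-4 g, sampled at 100 Hz. An uncorrected bias of just
# half a quantization step, double-integrated, swamps the position estimate
# within seconds -- which is why an absolute reference like the IR camera
# is needed.

RATE = 100                      # samples per second (assumed)
FULL_SCALE = 4 * 9.81           # +/-4 g mapped onto 8 bits (assumed)
STEP = 2 * FULL_SCALE / 255     # acceleration per quantization step, m/s^2

def position_drift(bias, seconds):
    """Double-integrate a constant acceleration error of `bias` m/s^2."""
    dt = 1.0 / RATE
    v = x = 0.0
    for _ in range(int(seconds * RATE)):
        v += bias * dt
        x += v * dt
    return x

print(position_drift(STEP / 2, 10.0))  # several metres of error after 10 s
```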

Missing the point? (1)

Robyrt (1305217) | more than 4 years ago | (#27779223)

I thought the most valuable part of motion capture data was the actor's face, as it's the most difficult to simulate in CG. This is a neat system, especially for the price, but it doesn't provide the best feature of the original.

System looks flawed (1)

GodfatherofSoul (174979) | more than 4 years ago | (#27779417)

This system will probably be used on athletes, ninjas and commandos. From the video, it obviously only works on an arm without any muscle tone.

Here's how it works (1)

heroine (1220) | more than 4 years ago | (#27780413)

It relies on cycling a repeating pattern from every projector 500 times/sec. Every pixel in the pattern encodes a unique symbol by the colors & the changes in the colors over time. By sensing what symbol hits each sensor, you know what pixel from the projector is hitting the sensor & what position on the projector's XY plane the sensor is in. If you know the XY plane position from 2 projectors, you can triangulate the sensor's 3D position, but projectors with enough resolution & bandwidth to do the job are expensive. $1000 would be for very low resolution.
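A toy version of that encoding (my own simplification: plain binary stripes along one projector axis; the real system's symbol alphabet, color changes and 500/sec cycle differ): the projector flashes one bit-plane per frame, the single-pixel photosensor records bright or dark per frame, and the observed sequence spells out which projector column the sensor sits under.

```python
# Toy structured-light position code, simplified from the scheme described
# above: each projector frame is one bit-plane of the column index, so a
# photosensor's bright/dark sequence over a full cycle identifies its column.

BITS = 10            # 2**10 = 1024 projector columns -> 10 frames per cycle

def frames_for_column(col):
    """Bright/dark sequence a sensor under projector column `col` observes."""
    return [(col >> k) & 1 for k in reversed(range(BITS))]

def decode(observed):
    """Recover the projector column from the observed bit sequence."""
    col = 0
    for bit in observed:
        col = (col << 1) | bit
    return col

print(decode(frames_for_column(637)))  # 637: round-trips exactly
```

Running the same code against a second projector at a different angle gives a second plane coordinate, and the two together triangulate the sensor in 3D, as the post says.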

Keeping the projectors focused (1)

heroine (1220) | more than 4 years ago | (#27780479)

So how do they keep the projected patterns in focus as the actor moves towards & away from the projectors? What if you want to track a close actor & a distant actor simultaneously? Those projected patterns aren't going to be in focus & the sensors won't know where they are.

Gray code patterns (1)

janwedekind (778872) | more than 4 years ago | (#27780967)

They seem to use Gray code [wikipedia.org] sequences (only one bit differs between two neighbouring codes). Johnny Chung Lee (the Wiimote Whiteboard guy) already demonstrated the use of structured light and optical fibers [johnnylee.net] in his thesis. He used it to rapidly locate projection surfaces.
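For reference, the standard binary-to-Gray conversion those sequences are built from (the textbook construction, not code from the project):

```python
# Standard reflected Gray code conversion. Adjacent Gray codes differ in
# exactly one bit, which is what makes the projected patterns robust: a
# sensor caught on a stripe boundary mis-reads its position by at most one
# step, never by half the frame.

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Neighbouring positions differ in a single bit, and the code round-trips:
for i in range(255):
    assert bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
assert all(from_gray(to_gray(i)) == i for i in range(1024))
```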

Using sensors and gyros... (1)

pbrandao (850680) | more than 5 years ago | (#27823327)

There are commercial products (MVN from Xsens [xsens.com] (former Moven)) that use inertial sensors and gyros to derive the motion. One of the advertised uses is the movie/digital effects industry.

Don't know about the real performance of the technology, but the idea in itself seems to enable some freedom (no need for indoor studios, less expensive).