
Improved Image Quality For HMDs Like Oculus Rift

Unknown Lamer posted about 6 months ago | from the barrel-roll dept.

Graphics 55

An anonymous reader writes "The combination of smartphone panels with relatively cheap, lightweight lenses enabled affordable wide-angle head-mounted displays like the Oculus Rift. However, these optics introduce distortions when viewing the image through the HMD. So far these have been compensated for in software using post-processing pixel shaders that warp the image, but doing so causes a perceptible loss of image quality (sharpness). Now researchers from Intel have found a way around this loss by sampling differently during rendering, potentially increasing the image quality of all current and future HMDs with a wide field of view." Rather than applying barrel distortion to the final raster image, the researchers warp the scene geometry during rasterization. However, the technique currently requires ray tracing, so it's computationally expensive. Note that a vertex transformation can be used instead (with tessellation to improve the approximation), but the results are of variable quality.
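(For readers curious about the mechanics: the pixel-shader compensation amounts to a per-pixel resampling of the rendered frame. A minimal NumPy sketch, assuming an illustrative radial model with made-up coefficients rather than any real headset's calibration:)

```python
import numpy as np

def barrel_warp(image, k1=0.22, k2=0.24):
    """Warp an image with the radial model r' = r * (1 + k1*r^2 + k2*r^4).

    k1/k2 are illustrative, not the Rift's calibrated values.  For each
    *output* pixel we compute where it samples the source image; the
    nearest-neighbour lookup here stands in for the bilinear fetch a real
    shader would do (that filtering is exactly the blur being discussed).
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # normalized coordinates in [-1, 1] around the lens centre
    u = (xs - w / 2) / (w / 2)
    v = (ys - h / 2) / (h / 2)
    r2 = u * u + v * v
    scale = 1 + k1 * r2 + k2 * r2 * r2
    # sample farther out -> the rendered frame is squeezed toward the
    # centre, producing the pre-distorted ("barrel") output
    src_x = np.clip((u * scale + 1) * w / 2, 0, w - 1).astype(int)
    src_y = np.clip((v * scale + 1) * h / 2, 0, h - 1).astype(int)
    return image[src_y, src_x]
```

A GPU implementation would sample bilinearly instead of nearest-neighbour; that interpolation is where the sharpness loss the researchers target comes from.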

55 comments

Way over my head (0)

Anonymous Coward | about 6 months ago | (#45212993)

This is way over my head, but isn't rendering at a higher resolution, so that the final result looks good despite the warping, better than ditching all the work in existing 3D engines and using ray tracing?

Re:Way over my head (1)

Anonymous Coward | about 6 months ago | (#45213071)

How does that sell more Intel CPUs?

Re:Way over my head (0)

Anonymous Coward | about 6 months ago | (#45216067)

I'm not sure if it's relevant, but yes, many demos for the OR will render at a higher resolution than the OR supports. When the image is downsampled, it naturally gets anti-aliased, and so it looks much nicer on the OR's low-res screen.
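Concretely, that downsampling step is just a box filter; a minimal sketch (assuming a grayscale image and an integer scale factor):

```python
import numpy as np

def downsample(img, factor=2):
    """Box-filter downsample: average each factor x factor block of pixels.

    Rendering at factor-times the display resolution and averaging down is
    the supersampling described above: hard edges get intermediate values
    instead of aliased stair-steps.
    """
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```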

Better for Multi-Monitor Gaming (4, Interesting)

Beardydog (716221) | about 6 months ago | (#45213129)

Oculus Rift is one of the greatest products ever, and Ima let you finish, but this is even better for multi-monitor gaming.

At least Oculus Rift had identified and addressed the problem of distortion, even though their solution loses image quality. Multi-monitor gaming has been garbage for a decade because everyone seems content with horrific distortion at large FOVs.

I know, it's all a matter of screen placement and eye positioning. That's dumb. I want a wrap-around image. I want to aim a projector at each of three walls and have the result make sense.

If you've tried Fisheye Quake, you know it's hell on your system and still doesn't look great. If this technique performs at all well, everyone needs to start shipping support for it, and they need to start yesterday.

Re:Better for Multi-Monitor Gaming (0)

Anonymous Coward | about 6 months ago | (#45214887)

Correct multi-monitor gaming today has only two use cases:

1) Games that can show dual-display information, like RTS games where you have two different locations on separate displays, or the minimap/satellite view on one and the game itself on the other.

2) Local co-op, where friends can socialize (really!) and play in front of the same computer with two sets of keyboard/mouse or joypads, each getting their own view on their own display. So player A = display 1 and player B = display 2.

All the other things like "surround gaming" or "extended FOV" are terrible ideas.

Re:Better for Multi-Monitor Gaming (0)

Anonymous Coward | about 6 months ago | (#45215763)

You could get away with it in some sort of airplane or vehicle simulator where there are side or extra views, assuming the user can position their screens appropriately. At least then the split between monitors wouldn't be as much of an issue, since it would correspond to an obstructed part of the view between the vehicle's windows.

Re:Better for Multi-Monitor Gaming (1)

bill_mcgonigle (4333) | about 6 months ago | (#45216081)

There was a Mac game called A10-Attack which had multiple views. We'd put three NuBus cards in a Mac IIci and have front and side window views. It was pretty fun, and we didn't even think about being cooked by being inside the CRT zone. The screens *were* supposed to be windows, so the illusion didn't need to worry about the gap between screens.

Goodness, that was eighteen years ago.

Distortions (1)

sexconker (1179573) | about 6 months ago | (#45213139)

How about you use better lenses so you don't have distortions in the first place?
Stop trying to fudge shit in software. It doesn't work with projectors with shitty lenses, telescopes with shitty lenses, or camera phones with shitty lenses.
If you want a good image use good optics. Yes, that means added cost and weight. Deal with it.

Re:Distortions (2)

Baloroth (2370816) | about 6 months ago | (#45213289)

Yes, that means added cost and weight. Deal with it.

Sure, if you don't mind producing a headset that no one wants to buy because it's too expensive, no one wants to use because it's too heavy, and no one wants to support because of the first two things. Or, in other words, if you want to be a complete and utter failure.

Re:Distortions (1)

sexconker (1179573) | about 5 months ago | (#45224375)

Yes, that means added cost and weight. Deal with it.

Sure, if you don't mind producing a headset that no one wants to buy because it's too expensive, no one wants to use because it's too heavy, and no one wants to support because of the first two things. Or, in other words, if you want to be a complete and utter failure.

Except the people who are interested in these headsets (I am not among them) all excrete into their panties at the mere mention of the Oculus Rift. The only thing they complain about is the image quality. Fixing that should be their priority, even if it makes the units slightly heavier or slightly more expensive.

Re:Distortions (2)

Beardydog (716221) | about 6 months ago | (#45213665)

It's not a lens problem. The lenses are trying to correct for the fact that current games render 3D images meant for display on flat surfaces. The lens is there to distort the image and make it wrap around your eyes, but the portion of the image you're wrapping is distorted and lacking detail even before the lens smears it across your peripheral vision. This is a method for making the initial image much better and more data-rich, so that less aggressive smearing is necessary and the pre-smear image has more data in it to begin with.

Re:Distortions (1)

r_jensen11 (598210) | about 6 months ago | (#45213759)

It's not a lens problem. The lenses are trying to correct for the fact that current games render 3D images meant for display on flat surfaces. The lens is there to distort the image and make it wrap around your eyes, but the portion of the image you're wrapping is distorted and lacking detail even before the lens smears it across your peripheral vision. This is a method for making the initial image much better and more data-rich, so that less aggressive smearing is necessary and the pre-smear image has more data in it to begin with.

Wouldn't the next-step solution be to use curved OLED screens and develop rendering engines which take into account the spherical nature of the monitors?

Re:Distortions (0)

Anonymous Coward | about 6 months ago | (#45216383)

The problem is that none of the traditional polygon rendering algorithms work here. Rendering Bézier surfaces directly is so CPU/GPU-intensive that all usable algorithms are about splitting the surface into smaller linear pieces.
Essentially you have to rethink the computer graphics theory that the last ten years of rendering are built upon.
Or switch over to ray tracing/casting/marching, since those aren't tied to a linear projection.

So yes, the next-step solution is indeed to develop rendering engines that take spherical nature of the monitors into account. One such step in the development would be what this article is about.

Re:Distortions (1)

Guspaz (556486) | about 6 months ago | (#45213985)

Except it's not just the image that is distorted and needs correcting, it's also the focal plane. And to those of us with vision problems like myopia, the curved focal plane of the Oculus Rift means only part of the image will be in focus. You can't correct for focus in software.

Tessellate polygons, then warp in vertex shader? (1)

flowerp (512865) | about 6 months ago | (#45213145)

Hi,

Can't one just subdivide (tessellate) polygons that appear relatively large in screen space so that they consist of many small polygons of a few pixels each? This would allow doing the barrel distortion entirely in the vertex shader, with no ray tracing required. The challenge is performing the dynamic tessellation without constantly re-uploading geometry (vertex buffers) to the GPU.
Newer APIs like DirectX 11 support this kind of dynamic tessellation in hardware.
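The idea can be sketched outside a real shader. A toy Python version with an illustrative distortion coefficient: midpoint-subdivide screen-space triangles, then radially warp each vertex the way a vertex shader would:

```python
import numpy as np

def subdivide(tri):
    """Split one triangle into four by inserting edge midpoints
    (the same refinement a hardware tessellator performs)."""
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def warp_vertex(p, k1=0.22):
    """Barrel-distort a vertex in normalized screen space, as a vertex
    shader would.  Edges between warped vertices stay straight, which is
    why coarse meshes approximate the curve poorly and need subdividing."""
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (1 + k1 * r2)

tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
level1 = subdivide(tri)                              # 4 triangles
level2 = [t for s in level1 for t in subdivide(s)]   # 16 triangles
warped = [[warp_vertex(p) for p in t] for t in level2]
```

The finer the subdivision, the closer the piecewise-linear result gets to the true curved warp, which is the trade-off against ray tracing mentioned in the reply below.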

Re:Tessellate polygons, then warp in vertex shader (1)

Nemyst (1383049) | about 6 months ago | (#45213497)

That's the last line of TFS. Tessellating the mesh so that triangles are approximately the size of a pixel is nearly as costly as raytracing, though.

Curved Displays (0)

Anonymous Coward | about 6 months ago | (#45213293)

Isn't this the ideal use case for Samsung and LG's new bendable displays? Why use software to do the bending at all?

Re:Curved Displays (1)

Guspaz (556486) | about 6 months ago | (#45214825)

The bendable displays can only be bent in one direction at a time. Think of it as the shape of the screen being half a cylinder, when what you'd need to do what you describe is half a sphere.

Eye strain (0)

Anonymous Coward | about 6 months ago | (#45213329)

They will need to supply a large bottle of aspirin with every one of these things.
And I can't wait to see the lawsuits that are going to result.

Though the repetitive stress disorder cases from smartphone keypressing will probably happen first.

Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45213539)

From TFA (emphasis added):

Consumer wide-angle HMDs use relatively cheap lenses to bring the screen into focus for the human eye and enable the wide field of view. The drawback is that there are spatial distortions and chromatic aberrations. Luckily, those can be compensated for in software; in the Oculus SDK, for example, this is done in a post-processing pixel shader. The default implementations warp the image in a barrel-distorted way and use bilinear texture access to do so. The result is that the processed image gets blurred.

My first thought was "Why not use better quality lenses?" Sure, they'd be more expensive, but there is an expense involved in the software having to correct Every Single Frame. Why not fix it once, at the source, and obviate the need for continuous real-time updates?

But my next thought was more positive. I wear glasses (near-sighted and have astigmatism, too). It would be *so* nice if there were a way to correct for that in software so I could wear VR goggles without my glasses. I'd imagine a learning session where certain scenes (e.g. grids) are displayed and the system would apply software corrections under my control until it looked good to me. That could be saved as a profile and then be loaded whenever I used their VR display.

The next step, of course, would be to find a way to leverage this "training" into my next eye exam!

Re:Corrective lenses adaptation? (1)

ciderbrew (1860166) | about 6 months ago | (#45213671)

Having a screen display your prescription so you don't have to wear lenses or glasses? Is that even possible?

Re:Corrective lenses adaptation? (1)

Arkh89 (2870391) | about 6 months ago | (#45213809)

For the first thought: it is extremely hard (nearly impossible, depending on your specifications) to make good-quality optics with few elements that are simultaneously (almost) free from distortion, chromatic aberration, and other aberrations (to keep it simple, blur).

For the second: it might not be possible. The problem with our eyes is that they image a perfect point not as a point but as a small blur. Imaging quality is best when the size of this blur is kept minimal (it's more complicated in practice, but this is a reasonable approximation). The blur arises because rays passing through a lens at different positions do not experience the same delays along their paths. If you want to correct for that, you have to present a pre-corrected light wave to the cornea, and the main problem is that this pre-correction cannot be achieved by a regular display, which produces real images rather than complex imaging, where the phase represents the delay of each point of the light wave (called the wavefront).

Re:Corrective lenses adaptation? (1)

Arkh89 (2870391) | about 6 months ago | (#45213833)

My bad, the cornea is not the eye's lens, shame on me...

Re:Corrective lenses adaptation? (0)

Anonymous Coward | about 6 months ago | (#45214379)

My bad, the cornea is not the eye's lens, shame on me...

The cornea is part of the eye's lens system. The cornea actually does about 2/3 of the focusing; the other 1/3 comes from the posterior lens behind the pupil.

Re:Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45215245)

Thank you for your thoughtful reply! It's been decades since my last physics class that dealt with optics at all, and we never got into the various distortions and aberrations. It was all in terms of a theoretically perfect lens (or two), plus a bit on reflection and refraction. Your explanation of the wavefront delays made perfect sense - I think I see it now (pun intended!).

Re:Corrective lenses adaptation? (2)

lxs (131946) | about 6 months ago | (#45213863)

I recommend an optics course as your very first step because no distortion of the image on the screen will correct for the failure of your eyes to form a sharp image. What you want could possibly be done with liquid lens technology, but it will take decades for that to be anywhere close to affordable for the large lenses needed in this application.

Re:Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45215327)

It's been decades since my last physics course that dealt with optics at all, so a course dealing specifically with optics is not a bad idea. We only touched upon ideal lenses, reflection, and refraction; we never touched aberrations or distortions.

Liquid lenses would be nice. Bifocals are a pain. Full-lens near and far vision would be wonderful!

Re:Corrective lenses adaptation? (1)

wonkey_monkey (2592601) | about 6 months ago | (#45213931)

My first thought was "Why not use better quality lenses?" Sure, they'd be more expensive, but there is an expense involved in the software having to correct Every Single Frame. Why not fix it once, at the source, and obviate the need for continuous real-time updates?

Because the corrections are needed no matter how good your lenses are: it's a remapping of pixels so those pixels appear in the correct place in your field of vision, and it has nothing to do with compensating for low-quality lenses.

I'd imagine a learning session where certain scenes (e.g. grids) are displayed and the system would apply software corrections under my control until it looked good to me.

Not possible, I'm afraid. Whatever the screen puts out is going to get blurred by your astigmatism and short-sightedness, and only a physical lens can pre-correct it for your eyes.

Re:Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45215429)

Because the corrections are needed no matter how good your lenses are - it's a remapping of pixels so those pixels appear in the correct place in your plane of vision, and doesn't have anything to do with compensating for low quality lens

Yes, I get that now. The corners of a flat display are farther away than the center, and the pixels there subtend a smaller arc at the eye than those at the center do. Duh!

I'd imagine a learning session where certain scenes (e.g. grids) are displayed and the system would apply software corrections under my control until it looked good to me.

Not possible, I'm afraid. Whatever the screen puts out is going to get blurred by your astigmatism and short-sightedness, and only a physical lens can pre-correct it for your eyes.

Duh. There is no "pre-blurring" or correction that would result in the right image appearing on my eye. My eye's focus is off and no matter what comes in, it will still land on the wrong place to make a clear image. My bad. Thanks for the explanation!

Re:Corrective lenses adaptation? (1)

Nemyst (1383049) | about 6 months ago | (#45214077)

Sadly, you can't apply that much correction in software. Warping can be done to a certain extent, but you cannot fix chromatic aberrations, which are inherent to any wide-angle lens, and other such optical effects. Even quality lenses would not eliminate everything and will still cause uneven pixel density across your field of view.

The Oculus Rift, like most VR head gear, is based off two small screens, one per eye. Those screens are rectangular and that's it. If you output a rectangular image, the image, once warped through lenses, will not have the same discretization: some pixels will be much larger than others. This cannot change even with higher quality lenses, and not distorting the image would completely eliminate the immersion and appeal. What happens instead is that the software outputs pre-warped images to mostly correct the problem, giving more pixels to certain areas of the image so that once warped everything is more or less equally dense. This comes at the cost of software preprocessing and not fully utilizing the available pixels on the screen (the OR's projected images aren't rectangular, so there are "wasted" pixels in the corners).
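The uneven density is easy to quantify for a simple radial warp. A sketch assuming the illustrative model r' = r(1 + k1*r^2), where k1 is made up rather than a real headset calibration:

```python
import numpy as np

# For a radial warp r' = r * (1 + k1*r^2), the local magnification along
# the radius is the derivative dr'/dr = 1 + 3*k1*r^2.  With k1 > 0, a
# pixel at the edge of the view covers (1 + 3*k1) times as much of the
# warped image as one at the centre -- the uneven pixel density the
# comment above describes.
k1 = 0.22
r = np.linspace(0.0, 1.0, 5)        # 0 = lens centre, 1 = edge of view
magnification = 1 + 3 * k1 * r ** 2
```

Pre-warping in software redistributes rendered pixels so that, after the lens, this magnification comes out roughly uniform.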

There was an article on another technology which holds a lot of promise, I think. It was demoed this summer by NVIDIA and is based off the principle of a lightfield. Instead of outputting two flat planes, the system outputs a higher dimensional image (can be 4D or 5D depending on the tech, I forget) which is used by a series of layered OLED displays to reproduce not only binocular vision, but also different depths. This allows for all sorts of nice new things, such as correcting vision in software from your prescription, or giving the eyes the ability to focus on different depths of the image. This is different from current technology, which only uses the brain's ability to interpret two images rendered from slightly different points as a 3D space. It should also help with the headaches and sickness people are getting from current 3D glasses and VR. The disadvantages are numerous, though: it takes a lot more computing power (we're talking about adding extra dimensions and fundamentally changing the rendering pipeline), it takes a lot more pixels to produce a small resolution image (even 1080p screens don't actually produce 1080p, a lot of the resolution is used to provide the depth) and it's a lot more expensive overall. Yet, I think it holds a lot of promise and I hope to see it in the future.

Despite having been tinkered with for decades, VR is still very much in its infancy. I think we'll see rapid evolution in the next decade or two as technology catches up with the dreams of people regarding virtual reality.

Re:Corrective lenses adaptation? (1)

marcansoft (727665) | about 6 months ago | (#45215309)

You can correct for chromatic aberration in software, to a varying degree. You can approximate it (so the aberration is ~1/3 of what it would normally be, by aligning the centers of the primary colors) for arbitrary inputs, e.g. a photograph captured with an imperfect lens (image editing software can do this). You can do it on the output side with perfect accuracy if you're displaying an image using three monochromatic light sources (e.g. a laser display), since the three wavelengths involved would then be distorted by three discrete amounts that are perfectly correctable. For RGB panels like LCDs and OLED displays the primaries aren't monochromatic, but they are more concentrated around the dominant wavelength than a natural light source with a uniform frequency distribution, so you get a result that's somewhere in between. This is what the Rift does to correct for chromatic aberration in software.
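A minimal sketch of that channel re-alignment idea, with made-up per-channel factors (not the Rift's actual coefficients):

```python
import numpy as np

def correct_chromatic(rgb, k=(0.996, 1.000, 1.004)):
    """Re-align colour channels by scaling each around the lens centre.

    A lens refracts red, green, and blue by slightly different amounts, so
    the channels land at different radii.  Scaling each channel's sampling
    coordinates by its own factor (the k values here are illustrative)
    lines the dominant wavelengths back up -- an approximate fix, as the
    parent notes, since real display primaries aren't monochromatic.
    """
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    out = np.empty_like(rgb)
    for c, kc in enumerate(k):
        sx = np.clip((xs - w / 2) * kc + w / 2, 0, w - 1).astype(int)
        sy = np.clip((ys - h / 2) * kc + h / 2, 0, h - 1).astype(int)
        out[..., c] = rgb[sy, sx, c]
    return out
```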

Uneven pixel density is only a problem if the pixel density at the sparsest point is too low. Today's displays already exceed visual acuity when viewed at a reasonable distance (e.g. a Nexus 10 or an iPad with a Retina display at a normal operation distance), though of course that is without covering a large fraction of the FOV. Give it a few more years and it'll only get better - once we have 8K phone-sized displays this will probably be a non-issue.

Re:Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45215857)

Sadly, you can't apply that much correction in software. Warping can be done to a certain extent, but you cannot fix chromatic aberrations, which are inherent to any wide-angle lens, and other such optical effects. Even quality lenses would not eliminate everything and will still cause uneven pixel density across your field of view.

My bad. I focused on the word "cheap" in "relatively cheap lenses", and assumed that was the cause of (at least some) of the problems they were trying to fix in software.

The Oculus Rift, like most VR head gear, is based off two small screens, one per eye. Those screens are rectangular and that's it. If you output a rectangular image, the image, once warped through lenses, will not have the same discretization: some pixels will be much larger than others. This cannot change even with higher quality lenses, and not distorting the image would completely eliminate the immersion and appeal. What happens instead is that the software outputs pre-warped images to mostly correct the problem, giving more pixels to certain areas of the image so that once warped everything is more or less equally dense. This comes at the cost of software preprocessing and not fully utilizing the available pixels on the screen (the OR's projected images aren't rectangular, so there are "wasted" pixels in the corners).

Thanks for that clear explanation! I get it, now. Pixels in the corners are further away from the eye than pixels in the center, so they look smaller, and the image looks distorted. They need to pre-distort the sizes of the pixels on the display so that they all look the same size when they get to the eye.

There was an article on another technology which holds a lot of promise, I think. It was demoed this summer by NVIDIA and is based off the principle of a lightfield. Instead of outputting two flat planes, the system outputs a higher dimensional image (can be 4D or 5D depending on the tech, I forget) which is used by a series of layered OLED displays to reproduce not only binocular vision, but also different depths. This allows for all sorts of nice new things, such as correcting vision in software from your prescription, or giving the eyes the ability to focus on different depths of the image. This is different from current technology, which only uses the brain's ability to interpret two images rendered from slightly different points as a 3D space. It should also help with the headaches and sickness people are getting from current 3D glasses and VR. The disadvantages are numerous, though: it takes a lot more computing power (we're talking about adding extra dimensions and fundamentally changing the rendering pipeline), it takes a lot more pixels to produce a small resolution image (even 1080p screens don't actually produce 1080p, a lot of the resolution is used to provide the depth) and it's a lot more expensive overall. Yet, I think it holds a lot of promise and I hope to see it in the future.

I had not heard of that. Given from what I've read that there are already tremendous problems with getting displays to refresh quickly enough to avoid problems when one's head moves, I can imagine that this is, indeed, a long ways off. Still, it's something to look forward to. Thanks for the look into the future!

Despite having been tinkered with for decades, VR is still very much in its infancy. I think we'll see rapid evolution in the next decade or two as technology catches up with the dreams of people regarding virtual reality.

I share your hope to someday see this. I remember playing Lunar Lander on an ASR33 teletype dialed into a PDP-8 back in the time of Star Trek's original series. So much of what I thought of as science fiction, then, has come to reality. We've come so far and I can't wait to see what else the future has in store!

The point of Rift is COTS (1)

DrYak (748999) | about 6 months ago | (#45214403)

My first thought was "Why not use better quality lenses?" Sure, they'd be more expensive, but there is an expense involved in the software having to correct Every Single Frame.

The idea behind the Rift is to produce an HMD that does NOT cost $1,300 to build.
It does this by using cheap off-the-shelf parts.

Whereas things like the "eMagin Z800 3D Visor [wikipedia.org]" use special-purpose display units (OLEDs) and need very special, complex optics so that the virtual display appears square (at 60%, it had one of the widest fields of view of its time), Oculus went:
- Fuck this expensive shit, let's use the same Retina-level display as any other smartphone on the market and throw a rather simple lens at it. Okay, maybe filling the field of view through a simple lens will cause distortions, but that's nothing that can't be compensated for. And gamers already have expensive, powerful graphics cards; running a shader on one to correct the rendered frame is very cheap and barely introduces any noticeable slowdown.

The Oculus doesn't use expensive optics *on purpose*, so its price has one zero fewer than anything else.

TFA's method is slow because they analyse what distortion should be applied to the actual game world's geometry so that it renders pre-distorted. (That way the distortion is pixel-perfect: each pixel in the output is a rendered pixel. The current "fast" method simply distorts an already-rendered frame, so post-distortion rendered pixels don't map to display pixels.) And they do it with ray tracing because that's the easiest to test, even if it is the slowest.
As the summary says, in-game this could be done with tessellation and a geometry shader.

(And it has been done in part in the past: Fisheye Quake does a similar kind of distortion [strlen.com], too.)

So, in the end, a small contribution from shaders still beats expensive, complex optics; for now they model it with a ray tracer because that's easier to study.

I wear glasses (near-sighted and have astigmatism, too). It would be *so* nice if there were a way to correct for that in software so I could wear VR goggles without my glasses.

The bad news is that your problem can't be fixed in software. You're near-sighted with astigmatism, meaning your eye fails to focus the picture (and can't focus it to a single point at all, actually). Software fixes are for distortion, where the eye is capable of focusing on a pixel but gets the wrong pixel in that position.
The type of eyesight problem that *could* be fixed in software is an eye-alignment problem, where one of the eyes isn't able to point in the correct direction and thus gets a "shifted" view, giving a doubled picture. That kind of problem is fixed with "prismatic" lenses, and could be fixed here by shifting the image on the Rift in the opposite direction.

The good news is:
Read the first paragraph again: the Rift uses plain, simple, cheap lenses. Just swap the lenses for another set of cheap lenses adapted to your near-sightedness and voilà, glasses-free 3D.

Re:The point of Rift is COTS (1)

martyb (196687) | about 6 months ago | (#45216075)

The idea behind the Rift is to produce an HMD that does NOT cost $1,300 to build. It does this by using cheap off-the-shelf parts. [brevity snip]

Thanks! Early adopters are willing to pay more for something, but it helps to get the price down as soon as possible to build sales volume. They've put their money where they get the best bang for the buck, and lenses are not it.

The bad news is that your problem can't be fixed in software. You're near-sighted with astigmatism, meaning your eye fails to focus the picture (and can't focus it to a single point at all, actually). Software fixes are for distortion, where the eye is capable of focusing on a pixel but gets the wrong pixel in that position. The type of eyesight problem that *could* be fixed in software is an eye-alignment problem, where one of the eyes isn't able to point in the correct direction and thus gets a "shifted" view, giving a doubled picture. That kind of problem is fixed with "prismatic" lenses, and could be fixed here by shifting the image on the Rift in the opposite direction.

Failure to focus correctly... got it, thanks!

The good news is: Read the first paragraph again: the Rift uses plain, simple, cheap lenses. Just swap the lenses for another set of cheap lenses adapted to your near-sightedness and voilà, glasses-free 3D.

That makes sense. I'd assumed that it would be difficult to fit the OR over my glasses. Your idea of using a different lens gives me hope. On the other hand, since I am near-sighted and the astigmatism isn't that bad, I could probably wear it without my glasses. Those who are far-sighted, though, have no such luck, and the idea of replacement lenses makes perfect sense!

Re:The point of Rift is COTS (0)

Anonymous Coward | about 6 months ago | (#45216685)

The Rift prototype does come with three sets of lenses. If it goes mainstream, it should be possible to get these adjusted by your optician. Unfortunately many guys in the forums are saying their opticians don't want to deal with it, for various reasons, most of them stupid (yeah yeah, people from the past).

Re:The point of Rift is COTS (0)

Anonymous Coward | about 6 months ago | (#45220587)

I believe the OR already comes with different lens types in the existing dev kit. It was also one of the first questions they addressed when they began, since a lot of their staff also wear glasses.

Re:The point of Rift is COTS (1)

Immerman (2627577) | about 6 months ago | (#45219377)

> (And has been done in part in the past: Fish-eye Quake does similar kind of distortions [strlen.com], too)

Actually it looks like Fisheye Quake uses an approach somewhat more similar to the Rift's than to Intel's. Basically it renders "normally" to multiple 90° FOV views that make up the faces of a cubic environment map, then uses a lookup table to sample those images and create the smooth distortion of a fisheye lens. It uses multiple rendering frustums and a lookup table rather than pixel shaders to apply the post-rasterization distortion, but I suspect it shares similar weaknesses when it comes to subpixel detail loss.
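The cube-map half of that scheme fits in a few lines. A toy Python version assuming an equidistant fisheye model (Fisheye Quake's exact mapping may differ):

```python
import numpy as np

def fisheye_direction(u, v, fov=np.pi):
    """Map a normalized image point (u, v in [-1, 1]) to a view direction
    using the equidistant fisheye model: the angle from the view axis
    grows linearly with the point's radius."""
    r = np.hypot(u, v)
    theta = r * fov / 2
    phi = np.arctan2(v, u)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def cube_face(direction):
    """Pick which 90-degree cube-map face a view direction falls on --
    the first half of the lookup described above (the second half would
    index a precomputed table into that face's rendered image)."""
    d = np.asarray(direction, dtype=float)
    axis = int(np.argmax(np.abs(d)))     # dominant axis chooses the face
    sign = '+' if d[axis] >= 0 else '-'
    return sign + 'xyz'[axis]
```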

Hmm, fish-eye rendering to VR glasses... that could be very interesting... or nauseating.

Re:Corrective lenses adaptation? (1)

gl4ss (559668) | about 6 months ago | (#45214969)

I don't think that kind of correction can be done in software. correction for eyes pointing at wrong place yes, focus no.

Anyhow, the Oculus comes with a couple of lenses for people with different eyesight, and I'd imagine the consumer model will as well.

I didn't think lens quality was the problem with the Oculus, though. The problem with the Rift right now, rather than the blue/red shift from the lens, is that the resolution is rather small and stretched over a large area (that the FOV is large is a plus, though), so you see the pixels (it's like playing at 320x200 all over again).

that being said, oculus kicks ass. so fucking much. it is the future of gaming - and movies too. 3d movies are shit unless viewed with such a thing.

(I used to have -6 glasses but got them lasered before the Rift arrived... worth every cent, since even expensive lenses had such annoying blueshift)

Re:Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45216243)

I don't think that kind of correction can be done in software. Correction for eyes pointing at the wrong place, yes; focus, no.

Yes, I can see that now. I was kind of hoping that *I* was misunderstanding something and that it was, indeed, possible.

Anyhow, the Oculus comes with a couple of lenses for people with different eyesight, and I'd imagine the consumer model will as well.

I didn't know that! Thanks!

I didn't think lens quality was the problem with the Oculus, though. The problem with the Rift right now, rather than the blue/red shift from the lens, is that the resolution is rather small and stretched over a large area (that the FOV is large is a plus, though), so you see the pixels (it's like playing at 320x200 all over again).

We HAVE come a long way from the old CGA graphics! But it seems like it's still a long way until we have totally immersive displays. Time will tell. Thanks for the reply!

Re:Corrective lenses adaptation? (1)

mcgrew (92797) | about 5 months ago | (#45227139)

I wear glasses (near-sighted and have astigmatism, too). It would be *so* nice if there were a way to correct for that in software so I could wear VR goggles without my glasses.

If you have $15,000 you can have your vision fixed completely, but it involves surgery, and each eye costs about $7k. It's a mechanical replacement for your eyes' natural focusing lenses. If you get the surgery, you not only won't need glasses (it cures both myopia and astigmatism), you also won't need reading glasses when you get old, because age-related presbyopia is caused by the natural lens getting stiff, and that lens is removed during the surgery.

It's called a CrystaLens, and insurance will pay all but $1000 for each eye, provided you have cataracts. You can get the surgery without cataracts but insurance won't cover it. If you have access to steroid eyedrops, that will give you cataracts.

I have one in my left eye. Best $1000 I ever spent (cataract came from prescription drops for a painful infection). I'm looking forward to getting a cataract in the other eye now... resistance is futile.

Re:Corrective lenses adaptation? (1)

martyb (196687) | about 6 months ago | (#45255233)

Thanks for the reply! No cataracts (yet).

As a consumer, I generally avoid release 1.0 of anything. I realize you mentioned CrystaLens and not Lasik (spelling?) laser surgery. A relative who had Lasik done reported nighttime halos from oncoming car headlamps. I'll let others get the bugs out of the process and find out what long[er]-term consequences may arise. My eyes are well-corrected with conventional glasses, so I can afford to wait.

Had not heard of this, but will keep it in mind for when the need arises. Your experience is helpful and I appreciate your passing it along!

Re:Corrective lenses adaptation? (1)

mcgrew (92797) | about 6 months ago | (#45261111)

There's a little lens flare in the CrystaLens, too, but not enough to matter. Oh, and you spelled "lasik" right.

As a developer who has done this decades ago (0)

Anonymous Coward | about 6 months ago | (#45213689)

There have always been those who have advocated doing the warp before the fragment shading but it is simply a bad idea with minimal benefits.

You can always oversample the render-to-texture if image quality is a problem, and you get the added bonus of additional antialiasing. The reality is that most 3D graphics sample beyond the Nyquist frequency, and this is the root of the problem with resampling. Texture hardware filtering on an oversampled render is a good solution.
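A minimal sketch of that oversample-then-filter idea in Python: render (here, just supply) the image at higher resolution, then sample it through the barrel-distortion warp with bilinear filtering. The distortion coefficients and the plain bilinear filter are placeholder assumptions, not any SDK's actual values; a real implementation would do this on the GPU with hardware texture filtering.

```python
def barrel(u, v, k1=0.22, k2=0.24):
    """Radial barrel distortion around the image centre (u, v in [-1, 1]).
    The k1/k2 coefficients are made-up placeholders."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale

def bilinear(img, x, y):
    """Bilinearly filtered lookup into a row-major grid of floats,
    clamping coordinates to the image edge."""
    h, w = len(img), len(img[0])
    x = min(max(x, 0.0), w - 1.000001)
    y = min(max(y, 0.0), h - 1.000001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def warp(src, out_w, out_h):
    """Resample an oversampled source image through the barrel warp.
    src should be larger than out_w x out_h, so the filtered fetch
    averages several source texels instead of dropping detail."""
    out = []
    h, w = len(src), len(src[0])
    for py in range(out_h):
        row = []
        for px in range(out_w):
            u = 2.0 * px / (out_w - 1) - 1.0
            v = 2.0 * py / (out_h - 1) - 1.0
            du, dv = barrel(u, v)
            # Map distorted coords back into source pixel space
            sx = (du + 1.0) * 0.5 * (w - 1)
            sy = (dv + 1.0) * 0.5 * (h - 1)
            row.append(bilinear(src, sx, sy))
        out.append(row)
    return out
```

The key point is that the information lost to the warp's non-uniform sampling density is partly recovered because each output pixel draws on a denser source grid.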

Raytracing is not rasterization, and non linear rasterization is not easily optimized in hardware. It is a phenomenally bad idea.

Other approaches have used subdivision geometry, but that suffers from performance issues and piecewise approximation.

Re:As a developer who has done this decades ago (0)

Anonymous Coward | about 6 months ago | (#45213861)

That's what I was thinking too. Adaptively over-sample in the areas that need extra detail. Also, if there are multiple LODs available in the geometry, step up to the higher ones in those areas too. Then do the distortion and down-sampling before the final output.

Re:As a developer who has done this decades ago (1)

wonkey_monkey (2592601) | about 6 months ago | (#45213947)

I've already posted, but I would have given you my last mod point. If nothing else, perhaps this reply will bring your post to the attention of someone who can mod it up (and mod me out of sight).

Oooookkkkkkk. (1)

Pino Grigio (2232472) | about 6 months ago | (#45214247)

So they've taken something with variable quality and improved it by implementing a solution that itself has variable quality with the downside that it's horrendously expensive to render.

Brilliant.

Re:Oooookkkkkkk. (0)

Anonymous Coward | about 6 months ago | (#45215987)

It is brilliant if you are Intel and looking for ways to sell more graphics processors.

Do these work for people with glasses? (1)

swb (14022) | about 6 months ago | (#45216191)

...Or worse, with bifocals?

When something like this becomes available, it'd be awesome even if it were only used for watching a movie on an airplane, but I worry it would be worthless for people who have an eyeglasses prescription.

Re:Do these work for people with glasses? (0)

Anonymous Coward | about 6 months ago | (#45220603)

They've already made the rift to work with glasses.

https://support.oculusvr.com/entries/24796006-Can-I-wear-glasses-while-using-the-Oculus-Rift-developer-kit-
http://www.youtube.com/watch?v=QaWRBM7GrEU

There's your problem right there... (0)

Anonymous Coward | about 6 months ago | (#45216891)

"relatively cheap and light-weight lenses"

Use better optics in the first place.


Already out of date tech (0)

Anonymous Coward | about 6 months ago | (#45219021)

This is a much better system in my opinion.

http://www.kickstarter.com/projects/technicalillusions/castar-the-most-versatile-ar-and-vr-system

Something better than 3D glasses. (0)

Anonymous Coward | about 6 months ago | (#45219141)

I think this new approach to 3D, VR and AR is going to make the article subject obsolete.
http://www.kickstarter.com/projects/technicalillusions/castar-the-most-versatile-ar-and-vr-system
