
FPS Benchmarks No More? New Methods Reveal Deeper GPU Issues

Soulskill posted about 3 years ago | from the frames-per-stutter dept.


crookedvulture writes "Graphics hardware reviews have long used frames per second to measure performance. The thing is, an awful lot of frames are generated in a single second. Calculating the FPS can mask brief moments of perceptible stuttering that only a closer inspection of individual frame times can quantify. This article explores the subject in much greater detail. Along the way, it also effectively illustrates the 'micro-stuttering' attributed to multi-GPU solutions like SLI and CrossFire. AMD and Nvidia both concede that stuttering is a real problem for modern graphics hardware, and benchmarking methods may need to change to properly take it into account."


I used my GPU card (-1)

Anonymous Coward | about 3 years ago | (#37351036)

to render this First Post

Re:I used my GPU card (0)

webmistressrachel (903577) | about 3 years ago | (#37351380)

Well played Sir. I used my overclocked Radeon HD 3650 to post this (almost as quick) reply...

Re:I used my GPU card (0, Flamebait)

Anonymous Coward | about 3 years ago | (#37351750)

You should've stuck with nVidia...

Re:I used my GPU card (2)

nitehawk214 (222219) | about 3 years ago | (#37351848)

Well played Sir. I used my overclocked Radeon HD 3650 to post this (almost as quick) reply...

by Anonymous Coward on Friday September 09, @10:10AM
by webmistressrachel (903577) on Friday September 09, @10:39AM

19 minutes? Sounds about right.

Re:I used my GPU card (1)

BitZtream (692029) | about 3 years ago | (#37352844)

Yeah, or 29 minutes... depending on whether you want correct math or not. Hope that's not what you're using that GPU for :/

Re:I used my GPU card (1)

UnknowingFool (672806) | about 3 years ago | (#37353056)

I see someone is still using an old Pentium I. *ducks*

Re:I used my GPU card (1)

webmistressrachel (903577) | about 3 years ago | (#37355372)

That was my point... whoosh...

LIVE AND LET DIE !! (-1)

Anonymous Coward | about 3 years ago | (#37351048)

Because it is

09/10/11

or as we like to say

11/09/10

time/frame (1)

cheaphomemadeacid (881971) | about 3 years ago | (#37351118)

why not use time/frame min, max and avg values alongside fps?

Re: time/frame (1)

TenDollarMan (1307733) | about 3 years ago | (#37351146)

what about the standard deviation of the duration between frames?

Re: time/frame (3, Informative)

lgw (121541) | about 3 years ago | (#37351250)

The Crysis test loop measures the slowest frame, starting with the second loop (to avoid measuring disk performance). That "minimum FPS" number is what I personally use to benchmark graphics cards - it has always been the speed through the slow-to-render part of the map that matters.

Re: time/frame (1)

X0563511 (793323) | about 3 years ago | (#37353494)

This would work, though you should make sure to cut the first few seconds off. In just about every graphics benchmark I've ever seen, the initial few frames stutter madly as the process starts before the GPU has finished loading everything into VRAM.
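A minimal sketch of that warm-up-trimming idea in Python; the frame-time log and cutoff here are hypothetical:

```python
# Derive a "minimum FPS" figure while ignoring warm-up frames.
# Hypothetical frame times, in milliseconds.
frame_times_ms = [95.0, 60.2, 17.1, 16.8, 16.9, 17.0, 33.5, 16.7, 16.9]

WARMUP_FRAMES = 3  # drop the first frames (VRAM uploads, shader compiles)
steady = frame_times_ms[WARMUP_FRAMES:]

worst_ms = max(steady)
print(f"slowest steady-state frame: {worst_ms:.1f} ms "
      f"(~{1000.0 / worst_ms:.0f} FPS minimum)")
```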

Re: time/frame (1)

lgw (121541) | about 3 years ago | (#37354402)

Yes, the Crysis benchmark ignores the entire first test loop. (Didn't I just say that?)

Re: time/frame (1)

X0563511 (793323) | about 3 years ago | (#37354484)

You did... somehow my brain skipped right over that. Apologies.

Re: time/frame (1)

GameboyRMH (1153867) | about 3 years ago | (#37354864)

I notice Crysis is a game that suffers strongly from the problem the article talks about. It runs smoothly most of the time and then occasionally, it'll bog down for a small fraction of a second, apparently skipping a few frames.

Also some games seem to slow down no matter what computer you run them on. GTA3, GTA:SA and NFS: Undercover all do this if there are too many cars nearby. You can fiddle with the graphics settings all you want, same behavior.

Re: time/frame (0)

Anonymous Coward | about 3 years ago | (#37351326)

You mean, exactly as TFA suggests? Although as it goes on to explain, that doesn't give the full picture either. A single slow frame in a five minute session isn't really a big deal, so the max value isn't that useful. However, a long frame every 30 seconds would be infuriating and probably unplayable, but on a powerful card might still have a lower avg value than a worse but more consistent card that would be more enjoyable to play on.

Re: time/frame (1)

Anonymous Freak (16973) | about 3 years ago | (#37354490)

This.

min and max ms between frames, mean, and standard deviation.
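For what it's worth, those four summary numbers are a few lines of Python over any frame-time log (the values here are made up):

```python
import statistics

# Hypothetical per-frame times in milliseconds (e.g. from a FRAPS dump).
frame_times_ms = [16.7, 16.9, 17.2, 41.3, 16.8, 17.0, 16.6, 35.9, 16.9]

print(f"min:   {min(frame_times_ms):5.1f} ms")
print(f"max:   {max(frame_times_ms):5.1f} ms")
print(f"mean:  {statistics.mean(frame_times_ms):5.1f} ms")
print(f"stdev: {statistics.stdev(frame_times_ms):5.1f} ms")
```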

Egad (1)

davidbrit2 (775091) | about 3 years ago | (#37351132)

Twelve pages of graphs and data? Couldn't he just have said "standard deviations and percentiles" and be done with it?

Re:Egad (2)

jhoegl (638955) | about 3 years ago | (#37351648)

Someone has never written a term paper :)

Re:Egad (1)

gknoy (899301) | about 3 years ago | (#37352596)

Or perhaps the author has no background or experience with statistics, and wouldn't know what is meaningful about one or the other.

Re:Egad (1)

Taty'sEyes (2373326) | about 3 years ago | (#37352420)

So first, I have not read TFA, but there are at least two different types of people: those that take in data through visual presentation and those that prefer text and numbers.
When I was an engineer, I presented everything to my management in text and numbers that "any idiot" could quickly understand. Just a couple of slides with the essential bullet points and some "standard deviations and percentiles". I never knew why my managers ended up getting frustrated, bored, or began working "on other projects" during my presentations (which, frankly, would piss me off).
Then I would sit there and suffer through the next presenter's slide deck, some "manager type" with a 54-page kaleidoscopic, nightmarish explosion of pies and bars, and have to crunch visual data into easily understandable numbers. "It takes 32 slides to say we're going to miss delivery by three days?"
Anyway, I would look around at my managers and see that they all paid attention and were involved during the presentation. "I'm never going to get promoted because they hate me!"
The problem with my presentations was how the data was being presented to different people in the room. People that gravitate toward management (those that make business decisions and rule your fate) are better with being presented images. They seem to need the information broken into visual chunks. It is very difficult for them to take data and turn it into something visual. That is extra work.
I, on the other hand, can take in data or a few facts and quickly turn those into charts "in my head"; without thinking about it, the data forms an image.
So I had to learn that if I wanted people to pay attention, I needed to consider the audience's style of data acquisition and present in the way most comfortable for them, not the way that made sense to me. Once I learned this skill I was invited to cross over to the dark side.

Re:Egad (1)

bryan1945 (301828) | about 3 years ago | (#37355414)

It's called "make it shiny." As an engineer/tech guy myself, watching PP presentations by manager types make me want to either barf or put my head through the table. They violate basically every single rule of graphical presentation. Yet "it's shiny," so everyone is happy.

Is a multi-GPU problem. (2)

Tei (520358) | about 3 years ago | (#37351144)

Our eyes detect 'deltas' better than 'speeds', so if the odd-numbered frames have a shorter delay than the others, our eyes will detect it. But this only affects setups with multiple GPUs. And it's easy to fix: just calculate the delta of the latest frame and force the same delta, maybe using a buffer. This is not a problem once it has been detected. It may need some minor changes to engines, but that's all. IMHO.

Re:Is a multi-GPU problem. (1)

BitZtream (692029) | about 3 years ago | (#37351170)

Right up until some disk IO causes your last frame to be 2 seconds long; now every frame in the future is forced to 2 seconds between updates! Awesome for the win.

It's only slightly (and by slightly, I mean a lot) more complicated than you think it is.

Re:Is a multi-GPU problem. (0)

Anonymous Coward | about 3 years ago | (#37351318)

Ok, so if the delay exceeds a limit, just drop the update, for instance. Anyway, these are easy fixes, which depend on the implementation, sure, but still.

I don't think the other user was saying "it's trivial", but "it can be fixed now that we are aware".

Re:Is a multi-GPU problem. (1)

X0563511 (793323) | about 3 years ago | (#37353854)

Outlier filtering is hardly complicated.
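For illustration, a sketch of one possible outlier filter for frame pacing, assuming you pace against a recent median and refuse to let a single huge hitch drag the target with it; the threshold factor is arbitrary:

```python
import statistics
from collections import deque

def paced_delta(history: deque, new_delta_ms: float, k: float = 3.0) -> float:
    """Return a display delta that follows the recent median; a single huge
    hitch (disk I/O, shader compile) is presented at the median pace instead
    of being propagated into the pacing history."""
    if history:
        median = statistics.median(history)
        if new_delta_ms > k * median:
            return median  # outlier: clamp to the recent median
    history.append(new_delta_ms)
    return new_delta_ms

recent = deque(maxlen=30)
for delta in [16.7, 16.9, 17.1, 2000.0, 16.8]:  # hypothetical deltas in ms
    print(f"raw {delta:7.1f} ms -> paced {paced_delta(recent, delta):6.1f} ms")
```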

A smarter version of vsync (1)

tepples (727027) | about 3 years ago | (#37351376)

That sort of sounds like the solution presented on page 11 [techreport.com]: "More intriguing is another possibility Nalasco mentioned: a 'smarter' version of vsync that presumably controls frame flips with an eye toward ensuring a user perception of fluid motion."

Re:Is a multi-GPU problem. (1)

PitaBred (632671) | about 3 years ago | (#37351764)

That's the weird thing... on Starcraft 2 (from the article), the jitter was more pronounced on the single chips than in the multi-GPU configurations. It's not that simple ;)

Re:Is a multi-GPU problem. (1)

Khyber (864651) | about 3 years ago | (#37354770)

implying a single GPU is even a single processing core any longer

Re:Is a multi-GPU problem. (1)

RMingin (985478) | about 3 years ago | (#37352598)

Actually, if you read TFA, it's not only multi-GPU setups doing it. Also, the 'solution' you describe has been used by Nvidia since the GeForce 8 era. They call it 'frame metering', and it's not a perfect solution either.

Consistency (1)

mfh (56) | about 3 years ago | (#37351152)

tl;dr: benchmarks ignore consistency in their measurements and are therefore nonscientific marketing devices.

Feel of a given fps value (4, Informative)

tepples (727027) | about 3 years ago | (#37351160)

An oversimplification in the article:

After all, your average geek tends to know that movies happen at 24 FPS

Movies happen at a motion-blurred 24 fps. Video games could use an accumulation buffer (or whatever they call it in newer versions of OpenGL and Direct3D) to simulate motion blur, but I don't know if any do. (A toy sketch of the accumulation idea follows this comment.)

and television at 30 FPS

Due to interlacing, TV is either 24 fps, when a show is filmed and telecined, or a hybrid between 30 and 60 fps, when a show is shot live or on video. Interlaced video can be thought of as having two frame rates in a single image: parts in motion run at 60 fps and half vertical resolution, while parts not in motion run at 30 fps and full resolution. It's up to the deinterlacer in the receiver's DSP to find which parts are which using various field-to-field correlation heuristics.

and any PC gamer who has done any tuning probably has a sense of how different frame rates "feel" in action.

Because of the lack of motion blur, 24 game fps doesn't feel like 24 movie fps. And because of interlacing, TV feels a lot more like 60 game fps than 30.
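Here is a toy, pure-NumPy stand-in for the accumulation-buffer idea (nothing here is a real graphics API): render several sub-frame samples per displayed frame and average them, so a fast-moving object smears the way it does on film.

```python
import numpy as np

def render_subframe(t: float) -> np.ndarray:
    """Stand-in renderer: a single bright 'object' whose x position moves with t."""
    img = np.zeros((1, 16))
    img[0, int(t * 15) % 16] = 1.0
    return img

def motion_blurred_frame(t0: float, dt: float, samples: int = 8) -> np.ndarray:
    # Accumulate several renders spread across the frame's exposure interval,
    # then average: the moving object becomes a smear instead of a hard dot.
    acc = sum(render_subframe(t0 + dt * i / samples) for i in range(samples))
    return acc / samples

print(np.round(motion_blurred_frame(0.0, 0.5), 2))
```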

Re:Feel of a given fps value (1)

Tr3vin (1220548) | about 3 years ago | (#37351424)

Movies happen at a motion-blurred 24 fps. Video games could use an accumulation buffer (or whatever they call it in newer versions of OpenGL and Direct3D) to simulate motion blur, but I don't know if any do.

The Force Unleashed II uses a technique similar to this, and there are probably others that do, too. It renders frames at about 30 frames a second but updates the screen at a stable 60. It uses an interpolating motion blur to make the gameplay feel nice and smooth. This allows for more geometry to be drawn while still providing a "good" user experience. It is slightly different from a simple accumulation blur, since they predict the motion when doing the blur instead of simply blurring previous frames.

Re:Feel of a given fps value (1)

TheLink (130905) | about 3 years ago | (#37351612)

Movies happen at a motion blurred 24 fps AND I think that sucks. On the "big screen" I can visibly see the stuff "rippling" down at 24 fps especially on scenery pans.

The 24 fps rate is not because the human eye can't see faster than that (it can), it's a compromise due to technology limitations nearly 100 years ago.

Except for "special effects" I prefer that the stuff be updated much faster and let our eyes do the motion blurring. And I don't really like blurred special effects.

I dislike movie scenes where stuff is out of focus or motion blurred just because the director thinks it's cool.

When I watch the real world, everything that I look at usually is in focus, because my eyes will focus on it. In many movies the directors like blurry stuff, and as a result I get eye strain when my eyes try to focus on the parts of the movie that are blurry.

When you look at a fast moving object it's sharp (unless it's moving so fast that your eyes can't track it).

Re:Feel of a given fps value (0)

Anonymous Coward | about 3 years ago | (#37352218)

You're describing the effect of your brain trying to make sense of technical limitations.

Your brain has a processing pipeline that puts digital graphics hardware to shame. The steps, roughly, are as follows:

1) Edge definition
2) Object region identification (non-memory-bound, just define the regions visible in the current update)
3) Motion comparison (comparing one state to the next to determine motion, which requires, basically, a framebuffer and some minimal memory-bound processing)
4) Color
5) Object identification and other higher-level processing (all slow memory-bound processing happens here)

The fun thing is, after step 2, each region is given its own framerate. That framerate is 13 FPS per object, asynchronous. So all of the "your brain only sees X frames per second" people are right, but they're so far off the mark it's not even funny. Each high-contrast edge in your field of vision is potentially a boundary across which your brain maintains separate framerates. The FPS number is locked to 13, but it's asynchronous, so they may be getting updates from the optic nerve data at different times.

In GPU terms, the equivalent would be if each logical object defined by the software (not the GPU itself) were given its own rendering thread and locked to 13 frames per second. Oh, and everything gets timecode-matched to stereo audio streams. And it all runs with only about 80ms lag.

Computers just aren't there yet. And until they are, the problems you describe will persist.

tl;dr - Your lazy-ass brain is a signal processing powerhouse. Put it to use and read the full post.

Re:Feel of a given fps value (1)

grumbel (592662) | about 3 years ago | (#37351644)

to simulate motion blur, but I don't know if any do.

Pretty much all modern games use motion blur. Some of course use it better than others, and it is often not quite as high quality as one might want it to be, but motion blur itself is almost everywhere these days (most noticeable when swinging the camera around in a third-person shooter).

Re:Feel of a given fps value (0)

Anonymous Coward | about 3 years ago | (#37354274)

Usually that is not the game but the lousy LCD monitor technology!

Re:Feel of a given fps value (1)

Anonymous Coward | about 3 years ago | (#37351822)

It's way more than motion blur that makes movies tolerable at 24 fps - motion blur is just one sign of the fundamental difference between recording and animation. If an animation runs at 60 fps, you are getting information about 60 frames in one second. This is not true for video - each of the 24 frames is actually an average over many millions of instants (look up the Planck time on Google - the underlying "fps" is actually far more than a million). The universe runs at a tremendous number of fps, and the 24 frames you see in a video are based on the information from all of those frames. So you get information way beyond 24 fps, even though you are only seeing 24 frames. So video has a lower frame rate, but far more information than animation can ever give you. It's like anti-aliasing in time instead of space, except instead of sampling just a few extra pixels, you are sampling a huge number of extra pixels. Motion blur can emulate anti-aliasing-in-time just like blurring can emulate the usual anti-aliasing in space, but in fact anti-aliasing is better than mere blurring, because anti-aliasing adds information to each frame while blurring removes information from each scene.

Re:Feel of a given fps value (0)

Anonymous Coward | about 3 years ago | (#37352548)

I wouldn't say it's "more than motion blur". You just described what motion blur actually is. Video games are simulating motion blur, just like they simulate shadows and simulate shading. The fact that they call it "shadowing" when they're talking about the algorithms and their use in games doesn't mean you need to describe real-world shadows as "full-object atomic-scale light occlusion". We all know that one is real and one is an approximation. It's enough to say, "real-world motion blur carries more data than simulated motion blur, because it includes information that would occur between frames in a game or film."

Re:Feel of a given fps value (1)

Hatta (162192) | about 3 years ago | (#37351872)

Did Voodoo cards apply any sort of motion blur? I've long noticed this stuttering, e.g. when circle strafing around a corner it's pretty easy to see that the corner doesn't move smoothly. It even happens when running very non-demanding games (prboom, openarena, UT, etc) at 75hz under vsync with modern hardware under any OS.

The only hardware I've ever used where I didn't see this effect is my Voodoo 2 setup. Those things are smooth as butter, as long as you don't throw too many polygons at them.

Re:Feel of a given fps value (1)

gumbi west (610122) | about 3 years ago | (#37352564)

Again, an over simplification.

Movies are shot at 24 fps with a 1/180 second exposure time. When they make the exposure time longer, you notice, and it gets hard to watch. Listen to the commentary by Joss Whedon on Serenity for more, but the scene on the Reaver planet are shot at 1/120 and are a little unsettling. Things like panning include blur. The scene in Saving Private Ryan on the beach are shot at 1/60 and are noticeably difficult to watch: when the camera moves, everything blurs. When something flies by, you get 1/2 of its motion recorded.

Re:Feel of a given fps value (1)

johanatan (1159309) | about 3 years ago | (#37354810)

Since when is 'the scene' plural? Once I would've ignored, but twice seems intentional.

Re:Feel of a given fps value (1)

DeadboltX (751907) | about 3 years ago | (#37353102)

The biggest thing your example misses is that there is a big difference between WATCHING a video game with visually imperceptible stuttering and PLAYING a video game with visually imperceptible stuttering. The latter leaves the gamer confused about why their controls are suddenly unresponsive.

Re:Feel of a given fps value (1)

X0563511 (793323) | about 3 years ago | (#37354024)

I'm guessing you've not played any multithreaded games then?

The controls are plenty responsive. The visual feedback is not.

Ahh, complexity... (1)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#37351206)

This will certainly make benchmarking a bit more complex. One hopes that the gamers like going back to stats class.

You'll need the FPS value, as before (ideally with a worst-case FPS reported); but you'll also want a measure of the deviation of every frame's draw time from the average draw time. And likely a measure of how the atypically bad frames are distributed (i.e. 5 seconds of super-low framerate during some sort of loading is annoying; twenty 25-millisecond frames scattered throughout action-heavy areas is really annoying...). A sketch of that second measure follows below.

It would also be interesting to see what this does to the (traditionally poor) reputation of the sucker-edition cards that get loaded up with relatively huge amounts of slow memory in order to make them seem like a good deal (i.e. if 2GB of GDDR5 is the lunatic fringe, and 512MB of GDDR5 is the solid-value-gamer special, you'll see cards with 1GB of DDR2/3, marketed to the unsuspecting as alternatives to the solid-value line. Their average framerates are usually pretty tepid, because DDR is slow; but they honestly do have a lot of it, so they needn't hit the PCIe bus to load something from system RAM as often...)
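A sketch of that distribution measure, with made-up logs: record the timestamps of frames that exceed a badness threshold and see whether they cluster (one loading hitch) or scatter (constant hitching):

```python
def slow_frame_stamps(frame_times_ms, threshold_ms=25.0):
    """Timestamps (seconds into the run) of frames slower than the threshold."""
    t = 0.0
    stamps = []
    for ft in frame_times_ms:
        if ft > threshold_ms:
            stamps.append(t)
        t += ft / 1000.0
    return stamps

# Clustered: one loading hitch. Scattered: small hitches all through the run.
clustered = [16.7] * 100 + [250.0] * 20 + [16.7] * 100
scattered = ([16.7] * 10 + [40.0]) * 20

for name, log in (("clustered", clustered), ("scattered", scattered)):
    stamps = slow_frame_stamps(log)
    print(f"{name}: {len(stamps)} slow frames between "
          f"{stamps[0]:.1f}s and {stamps[-1]:.1f}s")
```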

Re:Ahh, complexity... (1)

Tasha26 (1613349) | about 3 years ago | (#37351428)

Well, we definitely need a measure of output... Maybe I'm daft, but how about FPS achieved for X million textured triangles or polygons [with shadow]?

Re:Ahh, complexity... (1)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#37351772)

I suspect that such a simplified benchmark would work just fine, but it would be of interest to relatively few people. Most users of graphics cards either don't care at all and have integrated graphics; or don't care about theoretical performance but do care about sniping n00bs in Medal of Halo 3; or are GPU compute users, who have their own quite specific demands.

Even in the realm of game benchmarking, you can see some pretty dramatic differences, between engines, in how Nvidia's approach or ATI's approach stacks up for a given generation of cards. It has been a while since relatively abstract tests of generic capabilities have been able to provide much insight into how a card will stack up in what you want it to do...

The point of FPS-benchmarks (1)

Anonymous Coward | about 3 years ago | (#37351208)

The point of doing an FPS benchmark is to reveal how the graphics card performs in the games that most people play. People don't care about theoretical performance. They just want to know if it can run the new cool game with the good graphics. Either a game renders fast enough, or it is so slow that you can't turn on the special effects that make the game look really good. It's all about the game. The other stuff is not so important.

Re:The point of FPS-benchmarks (1)

tepples (727027) | about 3 years ago | (#37351396)

The point of doing an FPS benchmark is to reveal how the graphics card performs in the games that most people play.

What about people who don't play a lot of first-person shooters? Some of these benchmarks measure only fps in FPS, not fps in other genres.

Either a game renders fast enough, or it is so slow that you can't turn on the special effects that make the game look really good.

The point of the article is that an isolated frame that takes too long to render can jerk one out of being absorbed in the effects.

Re:The point of FPS-benchmarks (1)

BitZtream (692029) | about 3 years ago | (#37352956)

Wrong.

The point of an FPS benchmark is to get a higher number than some guy on a forum so you can brag about it.

FPS has never been about how well a system can render a scene, because it's a shitty measure of it. It ignores quality, complexity, accuracy, and stuttering. (Why is this supposed to be new? GPU stuttering is less noticeable than disk IO causing jitter, and NOW the framerate no longer matters? That's just dumb.)

There is nothing 'new' about the frame rate being a shitty method of rating performance, well, except this slashvertisement.

What fraction of GPUs are used for graphics? (1)

JoshuaZ (1134087) | about 3 years ago | (#37351254)

At this point a large use of GPUs seems to be for processes where they are more efficient than CPU. The most obvious is vector processing. If one is doing heavy computational work then the standard benchmarks seem fine. What fraction of the GPU market is for actual graphical use?

Re:What fraction of GPUs are used for graphics? (0)

Anonymous Coward | about 3 years ago | (#37351630)

Is that a serious question? Because the answer is about 99% (or I guess 99/100 since you asked for a fraction). GPGPU processing is pretty cool and interesting, but it's still a tiny tiny sliver of the multi billion dollar a year GPU industry.

New benchmark: percentage of frames rendered at (1)

obarthelemy (160321) | about 3 years ago | (#37351364)

less than 60 fps.

kthxbye.
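Stated precisely, that metric is just the fraction of frames whose render time exceeds the ~16.7 ms budget of a 60Hz refresh; a sketch with a hypothetical log:

```python
frame_times_ms = [15.2, 16.1, 30.4, 15.9, 16.3, 45.0, 15.8, 16.0]

BUDGET_MS = 1000.0 / 60.0  # one 60 Hz refresh interval
late = sum(1 for ft in frame_times_ms if ft > BUDGET_MS)
print(f"{100.0 * late / len(frame_times_ms):.1f}% of frames missed 60 fps")
```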

Re:New benchmark: percentage of frames rendered at (1)

tepples (727027) | about 3 years ago | (#37351418)

Which isn't unlike the benchmark that the article uses: 99th percentile frame rendering time. You want this to be under 16 ms in order to keep a consistent 60 fps.

Re:New benchmark: percentage of frames rendered at (1)

xouumalperxe (815707) | about 3 years ago | (#37351976)

You completely missed the point. The 99th percentile frame rendering time gives you a reasonable approximation of 1/fps. What we REALLY want to know about is those few frames that fall above the 99th percentile -- those are the ones that cause stuttering.

Re:New benchmark: percentage of frames rendered at (1)

AvitarX (172628) | about 3 years ago | (#37354654)

I agree. I am not sure whether the 99th percentile works or not, but it still allows for a bad frame every other second.

More useful would be the average of that worst 1%, or cranking it up to the 99.9th percentile (a bad frame every twenty seconds), or longer, to some acceptable rate.

Even if the 99th percentile works now, it's ripe for abuse by the manufacturers if it becomes a common metric. (I imagine there could be trade-offs in a driver where one terrible frame every 2 seconds slips under the metric, rather than one merely-OK frame every second with all the rest great.)
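A sketch comparing the variants discussed in this subthread on synthetic data: the article's 99th-percentile frame time, the 99.9th percentile suggested above, and the mean of the worst 1%:

```python
import random
import statistics

random.seed(1)
# Synthetic log: mostly ~16.7 ms frames with an exponential tail...
frames = [16.7 + random.expovariate(1 / 2.0) for _ in range(10_000)]
for i in range(0, len(frames), 500):
    frames[i] += 80.0  # ...plus a periodic severe hitch (0.2% of frames)

def percentile(sorted_vals, p):
    """Crude nearest-rank percentile; fine for an illustration."""
    return sorted_vals[min(int(p * len(sorted_vals)), len(sorted_vals) - 1)]

s = sorted(frames)
worst_1pct = s[int(0.99 * len(s)):]
print(f"99th percentile:   {percentile(s, 0.99):6.1f} ms")
print(f"99.9th percentile: {percentile(s, 0.999):6.1f} ms")
print(f"mean of worst 1%:  {statistics.mean(worst_1pct):6.1f} ms")
```

Because the injected hitches affect only 0.2% of frames, the 99th percentile barely notices them while the 99.9th percentile and the worst-1% mean both do, which is exactly the trade-off being argued here.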

60 FPS (1)

sproketboy (608031) | about 3 years ago | (#37351388)

Don't you only need 60 FPS to have the illusion of animation anyway?

Re:60 FPS (1)

omnichad (1198475) | about 3 years ago | (#37351454)

Just for the illusion of motion? That's around 12fps, which is what cartoons use. 60fps is the start of fairly smooth motion, but 120hz is in uncanny valley territory for me. Too smooth, but not smooth enough.

Re:60 FPS (2)

Ferzerp (83619) | about 3 years ago | (#37351734)

Are you sure about this, or are you basing it on experience with really poor TV frame interpolation? That's not 120Hz. That's more like 30Hz with lots of fake frames.

Re:60 FPS (1)

omnichad (1198475) | about 3 years ago | (#37351812)

You have a point on the fake frames. I don't have anything capable of generating 120Hz as source material, or for displaying it. So in-store demo is really all I have. I suppose I'd have to watch the 120Hz interpolation in slow motion to see if the artifacts are the cause. Maybe it's because too much motion blur is missing after the interpolation (more than should be for that amount of interpolation).

Re:60 FPS (0)

Anonymous Coward | about 3 years ago | (#37352714)

>60Hz doesn't exist. They strobe the backlight at 120Hz while they play back 60fps or 30fps video; the strobe trick makes motion stutter! Yes, they are purposely introducing stutter into the video. LCD has always been blurry; TFT displays lowered it, but it still exists today. The way to get rid of a smooth blurring motion is to stutter it, using a strobe light to slice that >60Hz blur into 120Hz frames, even though they are the SAME IMAGE. If you strobe at higher rates, at some point the rate will be too high and it'll appear to blur again; the trick stops working. (I am skeptical the 240Hz TVs are doing any good over a 60; it's probably marketing BS, like 4G cell service from AT&T.)

Your conscious mind doesn't perceive much higher than 60fps, which is why subliminal images are around 70fps (technical logistics aside; 1/64 I believe was the lowest effective rate). However, you are analog: you don't see every frame the same way for the same fraction of time, and each person differs as well. The way your eye chemistry works makes it hard to flash a high-contrast image in without some "blurring", which retains the image long enough to lower the max frame rate your eyes can detect.

Something sliding at a higher rate can be noticed vs. a slower rate, though even then I doubt it's much beyond 60fps. I have had LCD TFT displays next to CRTs for years; the LCD never reached 60fps, and I don't think it completely reached 30fps. A slight blur exists at a higher rate; it's real-world, i.e. analog: the LCD takes time to shift, so even if it makes it at 30fps there is a slight change visible next to a CRT. Also, even if somebody can't consciously see it, their subconscious may be making them feel odd about it, especially when you give them a clear frame of reference. The subconscious is a big, integral part of your perception; it is usually calculating rates of motion of things you perceive. This can be proven with one-frame flashes of an object, asking the user which way it is moving, then inserting higher-rate frames they can't see to give it a direction of motion: they'll guess the direction better even though they didn't see it move, or they'll think they saw it move when it is highly unlikely they consciously did.

Re:60 FPS (1)

squizzar (1031726) | about 3 years ago | (#37352998)

We got an LG 3D TV at work which has LG's TruMotion. At first glance, with low-FPS material, this does 'smooth' the image a lot: the low-FPS stuff has noticeable flicker, almost like a cinema screen, while TruMotion looks more 'solid'. After a few seconds you see how they've done it. It's almost like MPEG motion detection: you will see certain blocks that move across the screen. Quite often this is effective, but some material confuses it. In one sequence following a bicycle down the street, the gravel on the pavement appears to start moving with the bicycle; clearly the noisy material has confused the algorithm, which has interpreted it as a moving patch and predicted some motion to generate extra frames. Occasionally pieces of things (including one guy's head) get attached to something and start moving with it. Hard to spot if you aren't looking for it, but very weird when you do.

Re:60 FPS (0)

Anonymous Coward | about 3 years ago | (#37353116)

I do have equipment that can generate and display the aforementioned 120fps - a GTX 260 and a 120Hz monitor (for 3DVision use) and TF2. I can say that things are noticeably smoother above 100fps, but the minute you hit an effect-heavy area the effect is lost, as the framerate drops horribly for small fractions of a second. However, even without a 120fps source it's still nice - even simple things like your mouse cursor seem faster, and ghosting is pretty much nonexistent.
I think I once recorded a TF2 demo at 120fps... took ages to render, but it did look butter-smooth when played back... though it *felt* a lot slower than when I originally played it, so there could have been an issue somewhere...

-RobbieThe1st

Obviously, no one read TFA (4, Informative)

HBI (604924) | about 3 years ago | (#37351390)

On the last page, in the last paragraph, he indicates that all of the data you just read through is shit and probably invalid. Turns out he was measuring the wrong place in the pipeline - before rendering - and what he measured doesn't track with the actual user experience.

I'd like my 5 minutes back, please.

Re:Obviously, no one read TFA (1)

tonywong (96839) | about 3 years ago | (#37351628)

Not only that, but he couldn't perceive the microstuttering most of the time. That article should have been sent back for better research.

Re:Obviously, no one read TFA (1)

Kjella (173770) | about 3 years ago | (#37351710)

Actually he measured the rendering just fine. But what nVidia told him is that they're doing some timing magic before they display it in SLI setups, which is currently not possible to measure with FRAPS or any of the other standard FPS tools. So right now you would need to get a high-speed camera to snap pictures of the screen to know what the user sees. But regardless of that they can't get rid of all the stuttering that easily, the slowest frame still takes much longer to render than the average. Also this means the latency is actually at times higher than the 1/FPS rate should suggest. Very interesting stuff, even though these numbers are suspect. Another good reason to just get a single card solution, which are getting awfully fast anyway.

Re:Obviously, no one read TFA (4, Informative)

sanosuke001 (640243) | about 3 years ago | (#37351714)

If it makes you feel better, I write 3D applications, and our software has stuttering issues when loading new texture data (very large texture sets, so it's a tradeoff we accept for the most part). It is a problem, and taking an average over time for FPS is mostly bullshit. I actually do some per-frame render time benchmarks when I'm developing, as it's more useful when trying to test consistency.
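That per-frame instrumentation can be as simple as wrapping the draw call with a monotonic clock; render_frame here is a placeholder, not anyone's real engine:

```python
import time

def render_frame():
    time.sleep(0.016)  # placeholder for the real draw call

frame_log_ms = []
for _ in range(10):
    t0 = time.perf_counter()
    render_frame()
    frame_log_ms.append((time.perf_counter() - t0) * 1000.0)

print("per-frame render times:", [f"{ms:.1f}" for ms in frame_log_ms])
```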

Re:Obviously, no one read TFA (1)

Bensam123 (1340765) | about 3 years ago | (#37352396)

If I'm reading the same spot, that's talking about latency between the call when the game wants to draw it and when it's actually rendered, in response to a technique Nvidia is using to smooth out FPS, not variation/jitter in the FPS, which is what a majority of the article is talking about.

I'd like your five minutes and mine for taking the time to read your response too.

Re:Obviously, no one read TFA (1)

desdinova 216 (2000908) | about 3 years ago | (#37353746)

Why would they? This is /., after all.

Disk IO (1)

Nanosphere (1867972) | about 3 years ago | (#37351392)

Most of the time when I notice any stuttering is also the same time my hard drive lights up. Usually either the game or some background service decides to flood the disk with IO requests. In a few instances I've even had Windows become completely unresponsive until whatever disk operation is running completes. It doesn't matter how much RAM I have. I haven't purchased any SSDs yet, but I'm sure they help a lot to alleviate the problem. The question is: is this a fault in how programs or the operating system handle secondary storage? Why should a disk-intensive operation halt the rest of my OS, especially when the entire OS could fit in RAM?

Re:Disk IO (1)

warchildx (1695278) | about 3 years ago | (#37351620)

1) Pagefile/swapping - the OS thinks it is from the '90s and tries to do memory swapping (no matter how much RAM you have).

2) Disk controller drivers run in kernel mode; any rise in disk queue length (pending requests) hangs the system. Arggh.

Re:Disk IO (1)

BitZtream (692029) | about 3 years ago | (#37353030)

I'm sorry, what OS are you using to which these things don't apply? OSX, Linux, and Windows are all exactly the same here; they all do these things (well, except for hanging the system, unless of course we're hanging because we're waiting on something to be paged, so the app we're running appears frozen), and are expected to, for obvious reasons. Well, clearly not obvious to you.

So again, what OS are you using that those things don't apply to? QNX or BeOS?

Re:Disk IO (1)

Gaygirlie (1657131) | about 3 years ago | (#37352102)

Why should a disk-intensive operation halt the rest of my OS, especially when the entire OS could fit in RAM?

It depends on the devices in question, but often in desktop computers the CPU requests data from storage, the storage replies with the data, then the CPU transfers the data to memory, and goes back to step 1, ie. the CPU is constantly busy transferring data or waiting for the storage device to submit the data. If the storage device is slow to reply then the CPU is just waiting on it and can't do anything else in the meantime.

DMA is a method for taking the CPU out of the equation, allowing the device to transfer data to memory directly without keeping the CPU busy. But this still doesn't work for all cases, and the system might have to fall back to a non-DMA transfer. It should not be common anymore these days, but you never know.

Then there's various kinds of overhead involved, like e.g. if the file you're accessing is compressed then it obviously needs to be decompressed, and decompressing consumes CPU time. Magnetic media also poses the limitation of the heads inside the device having to move around in order to read files, and if you're reading multiple files at the same time the heads have to constantly keep moving back-and-forth slowing the transfer down by huge amounts. Similarly, the OS itself is likely programmed somewhat inefficiently if it can't run all of its stuff on one core and run the file transfer on another.

Point is, there's plenty of reasons for that.

The question is: is this a fault in how programs or the operating system handle secondary storage?

Both.

Tbh, I think an OS should provide applications with improved read/write methods where the application can supply a function for doing compression, decompression, obfuscation or whatever to the read/write method, plus a timeframe within which the application deems the operation should finish; it would then be up to the OS to optimize the actual operation within hardware limits so as to fit inside the timeframe. This way the application wouldn't need to care about hardware details, e.g. whether it's writing to magnetic or flash storage, but the OS would know what size blocks to write and how many at a time. And if the application wants the OS to write 1 gigabyte of data to a file and gives the OS 30 seconds to do it in, the OS can then stretch the write operation out so that it takes the full 30 seconds but consumes the least amount of CPU resources while doing it, thus leaving more for other applications and resulting in a better user experience.
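A user-space approximation of what this comment asks the OS to do, pacing a large write across a caller-supplied time budget by chunking and sleeping; the path and sizes are made up, and a real OS-level version would schedule the I/O far more intelligently:

```python
import time

def budgeted_write(path: str, data: bytes, seconds: float, chunk=1 << 20):
    """Write `data` in chunks, pacing so the whole operation takes about
    `seconds` instead of saturating the disk in one burst."""
    n_chunks = max(1, (len(data) + chunk - 1) // chunk)
    pause = seconds / n_chunks
    with open(path, "wb") as f:
        for i in range(n_chunks):
            f.write(data[i * chunk:(i + 1) * chunk])
            f.flush()
            time.sleep(pause)

# Hypothetical usage: spread an 8 MB write over roughly 2 seconds.
budgeted_write("/tmp/example.bin", b"x" * (8 << 20), seconds=2.0)
```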

Re:Disk IO (1)

RobbieThe1st (1977364) | about 3 years ago | (#37353312)

Erm, isn't that what we already do, at least on Linux? I mean, the application simply uses read/write and possibly seek against an arbitrary filename - it's up to the OS and FS to handle actually getting or saving the data. Which means that the application doesn't know or care whether the file's on spinning media, a slow flash drive, SSD or cached in system ram!
The only thing that may not be optimal is that we have the sync() function, which usually won't return until the data has actually been saved to disk, though it's possible to change that functionality at the risk of data loss or corruption if the PC dies unexpectedly.

-RobbieThe1st

Re:Disk IO (1)

Gaygirlie (1657131) | about 3 years ago | (#37353450)

Erm, isn't that what we already do, at least on Linux? I mean, the application simply uses read/write and possibly seek against an arbitrary filename - it's up to the OS and FS to handle actually getting or saving the data. Which means that the application doesn't know or care whether the file's on spinning media, a slow flash drive, SSD or cached in system ram!

No, that's not the same thing. Using the regular Linux/Windows/etc. file API reads/writes the data as fast as possible from/to the media and completely ignores system resources; nor does it actually try to optimize reads/writes in a way that causes the least amount of stutter on the rest of the system. That's left to the applications themselves to handle, and applications rarely have access to all the hardware details and knowledge needed to optimize their operation.

ugh (1)

kelemvor4 (1980226) | about 3 years ago | (#37351416)

As if the issue of micro-stuttering hasn't already been covered in great detail numerous times in the past. I ran SLI for a while; if it's a problem, then features like vsync can help. If you are only running one GPU, like 99.9% of folks out there, then you don't need to waste your time on this article.
FWIW, FPS is still a fine benchmark. Like any benchmark, it only tells part of the story. That's why you use tools like 3DMark that run a battery of benchmarks to aggregate a rating, and then measure actual performance in games/applications. Review sites seem to have caught on to this, say, 15 years ago?

SLI/Xfire have never been good (0)

Anonymous Coward | about 3 years ago | (#37351504)

Undersupported, underutilized, full of bugs, and overall just a colossal waste of money. Not to mention how much more heat that means your case has to deal with.

Re:SLI/Xfire have never been good (1)

0123456 (636235) | about 3 years ago | (#37351646)

Yeah, having written multi-GPU drivers I'm amazed any of this stuff ever works, let alone actually improves performance. Back in our day there were a ton of things that game developers could do that would cut the performance to a tenth or less of what a single card would achieve due to triggering massive amounts of inter-card communication.

Re:SLI/Xfire have never been good (1)

Hatta (162192) | about 3 years ago | (#37352114)

Voodoo 2 SLI is as good as advertised. Twice the FPS, higher resolution support, and it "just works". No stability problems, and the cards work fine with passive cooling.

I'd agree that what nVidia did with SLI has all been garbage though.

Re:SLI/Xfire have never been good (1)

0123456 (636235) | about 3 years ago | (#37352216)

Voodoo 2 SLI is as good as advertised. Twice the FPS, higher resolution support, and it "just works".

That's because the Voodoo-2 was a cut-down piece of crap even in its heyday. You literally couldn't do anything with it that would make SLI difficult.

Most people I knew in graphics and gaming at the time hated 3dfx for crippling the industry with its refusal to add any features that weren't required to render Quake fast.

Re:SLI/Xfire have never been good (1)

Hatta (162192) | about 3 years ago | (#37353272)

The Voodoo 2 still has the prettiest output of any graphics card I've ever seen. It may not have all the features you want, but what it has it does really well.

You have a point though. 3dfx didn't do much to hold onto its market leader position and paid the price.

What about Display factor? (0)

Anonymous Coward | about 3 years ago | (#37351688)

And I mean just that. Displays. Monitors. That $500 Dell UltraSharp with the IPS panel that has a 4ms response time. That is a factor in all this, isn't it?

Re:What about Display factor? (1)

Computershack (1143409) | about 3 years ago | (#37351740)

How dare you throw the performance of the bottlenecking bit of hardware into this? That's just unfair. Next you'll be blaming hard drives for level loading times.

Re:What about Display factor? (1)

BitZtream (692029) | about 3 years ago | (#37353112)

No, because it's consistent, every frame, at vsync, and it doesn't hold back the rendering process.

Poor presentation, but good content... (1)

geogob (569250) | about 3 years ago | (#37351698)

I find the content and discussion very interesting. For me, this was something obvious because of my line of work, but I can imagine that most people reading (and writing) GPU reviews had no clue whatsoever about this.

As much as I find the content interesting, its presentation is awful. Although it is interesting to present some figures on a frame-count basis, most of the overview figures should be on an equivalent time base, allowing a proper comparison of the test sequences. I'd have shown one frame-based graphic to explain what was going on and then used this frame-based scale only for the "zooms" illustrating specific features or effects.

Also, the author probably never heard of histograms and/or distributions. Nor of variance, standard deviation, etc.

Traditional benchmarks are limited anyways (1)

Gaygirlie (1657131) | about 3 years ago | (#37351796)

I've always wondered about the insistence of various parties on using FPS as The Benchmarking Standard[TM] while ignoring all the things that contribute to or limit the FPS achieved. A real benchmark should at least keep track of CPU usage, GPU usage, bus bandwidth usage (i.e. whether, for example, the GPU is idling a lot of the time because the bus can't keep up) and memory bandwidth usage. Then it would be much easier to find bottlenecks and make proper comparisons, by ensuring that only the item being benchmarked is the bottleneck and no other part of the equation.

Then again, I am not aware of a single benchmarking suite (or website, for that matter!) actually caring about bus saturation or providing meaningful information, only FPS numbers or some other inflated score to shake e-peens at.

No great loss. Benchmarks are BS. (1)

thePuck77 (1311533) | about 3 years ago | (#37351890)

The only way I have ever been able to test what real performance will be like in a given game or rendering in a given program is to play that game or render in that program. Even built-in benchmarks like in HL2 don't seem to take gameplay into account well enough. While (at best) benchmarks can be a help in deciding what to buy in a very general way, I have learned to be skeptical and trust my experience only. Even framerate monitors in games often don't reflect the smoothness of the experience of the game. Rift would show around 30-40 FPS, WoW would show 75-100, yet Rift would seem to feel far smoother.

Inverse measure is what we want (1)

jensend (71114) | about 3 years ago | (#37351896)

I'm glad somebody started looking at ms per frame instead of frames per second. Since what we really care about for game performance is whether frames are rendered quickly enough to give satisfactory reaction times etc, using frames per second is misleading.

Another example where the same thing happens is fuel consumption: we keep talking about miles per gallon, but what we primarily care about is the fuel consumed in our driving, not the driving we can do on a given amount of fuel, so this is misleading. To use wikipedia's example, people would be surprised to realize that the move from 15mpg to 19mpg (saving 1.4 gallons per 100 miles) has a much bigger environmental and economic impact than the move from 34mpg to 44mpg (saving 2/3 of a gallon per 100 miles).

Similarly, moving from 24 fps to 32 fps has a bigger impact on the illusion of motion, fluidity, and response times than moving from 40 fps to 60 fps (10.4 ms difference vs 8.3 ms difference in time between frames). I think everyone should have been using ms per frame all along.

(note: yes, I already said this on their forum, I just think it should be repeated here)
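The frame-rate half of that argument is two lines of arithmetic:

```python
def ms(fps):  # frame time in milliseconds for a given frame rate
    return 1000.0 / fps

print(f"24 -> 32 fps saves {ms(24) - ms(32):.1f} ms per frame")  # ~10.4 ms
print(f"40 -> 60 fps saves {ms(40) - ms(60):.1f} ms per frame")  # ~8.3 ms
```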

Re:Inverse measure is what we want (1)

uigrad_2000 (398500) | about 3 years ago | (#37352836)

Changing from frames/sec to msec/frame doesn't fix the problem at all.

I was playing Minecraft the other day with a buggy mod installed, and was getting 240 fps, but choppy performance. Sometimes I'd get 1 second spikes, and the fps monitor would change to show something like 30fps before creeping back up to 240. If you convert that to msec/frame, those numbers still look really good.

Other games that run at 50fps looked much better than this buggy minecraft mod. Taking the inverse (as you suggest) doesn't help. Reporting the time for the worst case frame would be the simplest way to show how choppy my experience was. A new metric entirely is needed to show the difference here.

Re:Inverse measure is what we want (1)

jensend (71114) | about 3 years ago | (#37353118)

Of course just looking at an average of either quantity isn't going to be sufficient- you need to look at the distribution of values, not just the mean. (Looking at the 99th percentile, as they did in the article, is a start.) But at least this way we're looking at the distribution of the right numbers.

Re:Inverse measure is what we want (0)

Anonymous Coward | about 3 years ago | (#37354300)

Yes, time measurements are what we need to measure and run stats on, since they correspond to a physical quantity. FPS is a time-average synthetic measure. For example there is no concept of an instantaneous FPS measurement...you must average the times over a finite time interval to get it.

As with all inverse measurements, they have a pole at 0 which throws off calculations. On my Canadian car, the L/100km measurement should go to infinity as the car's speed approaches 0. What I notice is that it rises as my speed decreases, but as I slow to a stop it's artificially filtered to make it go down to 0; this was done to match people's intuitive understanding of fuel consumption (if I'm stopped, I shouldn't be using any fuel). MPG ratings, while mathematically equivalent to L/100km, show 0 mpg at 0 speed and so avoid this problem.

Also, inverse measurements behave non-linearly near the pole, as is obvious when looking at a graph of the y=1/x function. At high values of x, the slope of the curve changes very little. So if the average frame time increased from 30ms to 30,000ms, the FPS would vary only within the range of about 33 FPS to 0. Similarly, as it dropped from 30ms to around 8ms, the FPS number would balloon from about 33 to 125. Such nonlinearities make it difficult to process FPS data; therefore, while you can compute the mean frame time and the standard deviation, you can't run any such stats on the FPS figures.

Lighten up people (0)

Anonymous Coward | about 3 years ago | (#37351912)

Some very snooty replies here. For me this was one of the most interesting articles I've read all year, and a nice change from all the identical articles I've read about graphics card benchmarking.

why not just histogram the data? (0)

Anonymous Coward | about 3 years ago | (#37352050)

He could just provide a histogram of frame times. It would show the number of long frame times, and it would show jitter (the histogram would have two bumps).
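A text-mode sketch of such a histogram; the alternating frame times are synthetic micro-stutter, and the two bumps show up immediately:

```python
from collections import Counter

# Synthetic micro-stutter: frames alternate between ~10 ms and ~23 ms.
frame_times_ms = [10.1, 23.2, 9.8, 22.9, 10.3, 23.5, 9.9, 23.1, 10.0, 22.8] * 20

bins = Counter(int(ft // 2) * 2 for ft in frame_times_ms)  # 2 ms buckets
for lo in sorted(bins):
    print(f"{lo:3d}-{lo + 2:<3d} ms | {'#' * (bins[lo] // 5)}")
```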

Pretty sure (1)

h4x0t (1245872) | about 3 years ago | (#37352912)

They already show Min Frame Rate next to average on Tom's Hardware....

Suggested test fails (0)

Anonymous Coward | about 3 years ago | (#37353422)

Quote from TFA:

Put two computers side by side, one with a 60Hz display and the other with a 120Hz display. Go to the Windows desktop and drag a window around the screen on each. Wonder in amazement as the 120Hz display produces an easily observable higher fluidity in the animation.

I have a 60Hz monitor and a 120Hz monitor side by side on this computer. There was no perceptible difference in window dragging; it looked the same on both monitors. I bet the author just had his 120Hz monitor connected to a better GPU than his 60Hz monitor.

Re:Suggested test fails (1)

Mprx (82435) | about 3 years ago | (#37353956)

Do you use a wireless mouse? Some of them update only at about 60Hz, so you'll not see the benefit in window dragging. And even with a wired mouse, the default 100Hz (Linux) or 125Hz (Windows) mouse update rate isn't fast enough to guarantee an update every frame on a 120Hz monitor, so you'll need to change mousepoll (Linux) or install a hacked hidusbf with a test certificate (Windows). It's also possible your window compositing system is capped at 60fps. IIRC, Compiz does that unless you change some hidden setting, and maybe Aero does too. I don't use compositing.

The difference between 60Hz and 120Hz is extremely obvious if you're actually updating every frame. I refused to use an LCD until 120Hz LCDs were available, and even that's just barely adequate. A 200Hz CRT is clearly smoother.

Re:Suggested test fails (1)

Mprx (82435) | about 3 years ago | (#37354450)

And don't forget that all OSs default to running 120Hz monitors at 60Hz. You have to change them to 120Hz manually. Windows even sometimes resets them back to 60Hz when you change some other display setting.

FPS vs refresh rate (1)

ryanw (131814) | about 3 years ago | (#37354302)

Why would anyone need a framerate faster than the refresh rate of the display you're using?

I've never understood why anyone would push a graphics card faster than the refresh rate of the display you're using. Why not just cap it off at the max refresh rate and let the card take more time rendering each frame.....

It seems as though there should be some sort of "dynamic rendering" option. You want the framerate to match the refresh rate of the monitor, so why can't the rendering engine decide what to spend more or less time on?

For instance, there are the core objects and lights and maps that make up the main scene, then from there there's particle engines, reflections, additional shading, etc. If the card has the capability to do 500 fps, I'd rather it focus on making a REALLY AMAZING 90Hz or 120Hz (or whatever my refresh rate is)....

And the flip side is true as well. If I'm playing a game, I'd rather it keep up with the monitor's refresh rate than paint a pretty picture. It doesn't make sense for it to render a beautiful scene while I'm getting whomped on.

The rendering engine for video games should dynamically choose what to render based on what your computer is capable of. All special effects and anti-aliasing and everything should be turned on when it starts up... and it should scale back the unnecessary items as it can't keep up... and throughout the game, one room might have different settings on than another depending on everything going on.
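A sketch of that feedback loop with a made-up quality knob and a placeholder renderer; real engines tune many knobs (LOD, particle counts, shadow resolution), not one scalar:

```python
import random

TARGET_MS = 1000.0 / 60.0  # match the display's refresh rate
quality = 1.0               # made-up detail knob, 0.1 (ugly) .. 2.0 (lavish)

def render(quality: float) -> float:
    """Placeholder renderer: more detail costs more frame time, plus noise."""
    return quality * 14.0 + random.uniform(-2.0, 2.0)

random.seed(0)
for frame in range(120):
    cost_ms = render(quality)
    if cost_ms > TARGET_MS:          # missed the budget: scale effects back
        quality = max(0.1, quality * 0.95)
    elif cost_ms < 0.8 * TARGET_MS:  # lots of headroom: spend it on detail
        quality = min(2.0, quality * 1.02)

print(f"settled at quality {quality:.2f}")
```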
