
Hardware Based XRender Slower than Software Rendering?

michael posted more than 11 years ago | from the unsolved-mysteries dept.

Enlightenment 297

Neon Spiral Injector writes "Rasterman of Enlightenment fame has finally updated the news page of his personal site. It seems that the behind-the-scenes work for E is coming along. He is investigating rendering backends for Evas. The default backend is a software renderer written by Raster. Trying to gain a little more speed, he ported it to the XRender extension, only to find that it became 20-50 times slower on his NVidia card. He has placed some sample code on that same news page for people to try, to see whether this also happens on other setups."


Hmm.. (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#6710369)

Maybe the guy should learn how to code, huh?

Re:Hmm.. (-1, Troll)

Anonymous Coward | more than 11 years ago | (#6710383)

and spell

Re:Hmm.. (1)

dustwun (662589) | more than 11 years ago | (#6710679)

How this got modded as Interesting and not Flamebait is beyond me. As far as Raster's coding skill goes, let's see you write something akin to Enlightenment, or Edje, or Evas, or any number of other apps and libraries. Now, in true open source spirit, did you even install his test app and submit results to help make anything better?

Wait a second... (-1, Flamebait)

Sp4c3 C4d3t (607082) | more than 11 years ago | (#6710372)

Wasn't this guy preaching the death of the Linux desktop not very long ago?

Re:Wait a second... (0)

Anonymous Coward | more than 11 years ago | (#6710392)

And what does that have to do with the questions and issues raised in the story?

Re:Wait a second... (1)

Sp4c3 C4d3t (607082) | more than 11 years ago | (#6710417)

The fact that he's working on it again. A sudden change of heart, apparently. I can remember Slashdotters posting hate about this guy because he was claiming the Linux desktop was dead; claims that E had failed and that this was his way of admitting it.

Re:Wait a second... (0)

Anonymous Coward | more than 11 years ago | (#6710687)

He's been working on it constantly. Never stopped.

Yeah, well... (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#6710519)

*N?X is dead! There! I said it!

Some Suggestions for Rasterman (-1, Troll)

monsieur (583846) | more than 11 years ago | (#6710374)

Dear Raster Guy,

This is a fantastic article. I just have a few suggestions on how to clean it up.

First off is the introduction. It's too wordy. You dance around your thesis like a barefoot child on a griddle. Try being more direct, more concise. Get right to your point, then move on.

Now I'm not terribly familiar with the scientific method, but I think your experiment isn't described fully enough for the audience to gain any insight from it. First of all, you haven't given us much of a hypothesis to go on. What is it you really hope to gain from your testing? We have a vague idea of the goal, but it's hard to see the true motivation that's pushing you towards such a goal.

Your depiction of the experiment itself is more than adequate. I don't really have any complaints here. It actually reminds me of an experiment I devised in my youth. Heh, that was a real mess. I remember it like it was yesterday.

It was the summer of 1873. The country was industrializing and the West was still being settled. For a young lad in rural Kansas, such as myself, life was a little more interesting with the railways making far-off cities more accessible to the common folk. My parents had been planning a trip to Knoxville for some time now, to visit relatives. My father sold a few of his cattle and scraped together enough money for us to make the journey. Mom, pop, my two brothers Anthony and Skeet, and my sister Juliana, all got on the Southern Express, headed east for St. Louis and on to Knoxville.

It had never occurred to me just how boring a train ride could be. My brothers ignored me, as usual, and my sister was fawned over by my mother constantly. With my dad sleeping most of the time, I was left to entertain myself.

The train was very crowded. I'd never seen so many people crammed into one place outside of church. Some of them weren't farm-folk, neither. There were a couple men and ladies in fine dress clothes, probably city dwellers. A few men were even worse dressed than us, probably miners or something. One of them had been staring at me for almost half an hour. I went over and talked to him.

"Hey mister. This your first time on train? It is for me!"
"Nooo, I ride trainss hic all the time."
"You okay mister? You smell funny."
"Heh heh ... at's because i haven't WASHed in threeeee days, son."
"My mom says to take a bath every day or the devil will eat my soul!"
"Well, now, ain't that precious ... you want a drink?"

That man introduced me to alcohol, my future, and my undoing. That man's moonshine set me on a long road to endless sorrow and pain. My drinking problem escalated rapidly. Upon arriving in Knoxville, I had already completed five twelve-step programs. None of them worked.

Fast forward to 2173. Shortly after my 900th birthday, I will go out for a binge with my friends Jesus Christ and Karl Marx. The three of us have been buddies for longer than I can remember. We will often get together to swap stories, talk about girlfriends, that sort of thing. We will drink, of course -- always heavily and always grain alcohol. Jesus never has any problem with the stuff, of course, but Karl and I can only down so much before we go blind and vomit our intestines out on the bar. Jesus can really work miracles, though. That guy will always have us patched up by morning.

Anyways, Jesus and Karl will be having a heated discussion about the relative merits of kittens and puppies.

"Kittens are fuzzy, and God is fuzzy, therefore kittens are better," Jesus will argue.
"Yes, but puppies grow into dogs, and dogs work in packs for greater efficiency, and to the benefit of dogs everywhere," will be Marx's counter.
"Kittens are really soft, and God is soft, therefore kittens are better," Jesus will reply.
"Dogunism has no place for your purring and your pawing and your meowing! The barking class will not stand for the placidity of the feline aristocracy!"
"Fuck you!"
"Go eat a dick, you Semitic has-been!"
"U R GAY!"

These sorts of conversations won't be uncommon between those two. I often stay out of such exchanges, but I will decide to be involved in this one.

"Hey guys, less screaming, more drinking." I will hope that they actually listen to me.
"Listen you bloody cocksucker, I'm the son of God, and you fuckin' know that ain't no bullshit. You best shut your faggoty little hole, or I'll flood your shitty asshole of a planet." Jesus, in rare form, will continue: "I need some whiskey."

That would shut me up for the rest of the evening, which won't be much longer, since Karl will have already found his way to the floor. He'll lie there in a pool of his own urine, his blouse coated in yellowish-orange vomit. Jesus will bow his head and recite a short prayer. He's always praying for something, that loon. Who's he praying to, anyways? Himself? We'll drag Marx back to his commune. The two of us will then retire to our gay nudist colony.

Many people ask me what it's like being gay and naked. I tell them it's a pain in the ass in the warmer months and pain all over in the colder ones. Most people don't ask me any questions after that.

Which brings us back to today, right now, and this article. What questions are you trying to answer here? The hypothesis is a good start, but what details are you looking for? It might be a good idea to do a little Q&A section along with your observations. I understand it's hard to list any observations before the experiment has had time to run, but I'd advise you to update your article from time to time, as appropriate.

Your conclusion also needs some work. I felt like I was reading absolute fluff. You dance around the real issues and never resolve anything you'd set out in the introduction. Try rewriting this and focusing on what's in your hypothesis. That might help.

Re:Some Suggestions for Rasterman (0, Offtopic)

SubjunctiveSam (669606) | more than 11 years ago | (#6710404)

Sorry, but that was the funniest thing I've read all month.

Re:Some Suggestions for Rasterman (0)

Anonymous Coward | more than 11 years ago | (#6710450)

This is too long. Can someone give me a summary?

Re:Some Suggestions for Rasterman (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#6710461)

Exhibit #6710374 is a classic example of why 'Funny' mods no longer give karma.

C'mon guys give him an 'Underrated' or something.

Re:Some Suggestions for Rasterman (1)

Rooktoven (263454) | more than 11 years ago | (#6710490)

I actually have a mod point and I'm not going to spend it here. I'd give an underrated if it wasn't so long. I know that's the joke and parts _are_ funny, but more shock value than witty commentary, IMO.

Hey and at least this comment will make someone browse at -1.

Some Suggestions for Rasterman and windbags. (-1, Troll)

ratfynk (456467) | more than 11 years ago | (#6710492)

/* Critical acclaim is not an issue when you write about rendering on software. Try to think like a programmer, not an editor. There is a big difference between coders and normal humans. Therefore your scientific methods sometimes do not compile.*/

#exclude (wordoctors.h)
#exclude (critics.h)

/* however you can #include just about anything
 * that you can reasonably use before you are
 * forced to int main() */

Re:Some Suggestions for Rasterman (0, Offtopic)

HBI (604924) | more than 11 years ago | (#6710534)

Dude, you have read some P.J. O'Rourke I see!

Nice mixing of his themes.

Re:Some Suggestions for Rasterman (0)

Anonymous Coward | more than 11 years ago | (#6710535)

This is fuckin' hilarious.

Re:Some Suggestions for Rasterman (0)

Anonymous Coward | more than 11 years ago | (#6710633)

You're a fucking asshole.

M$ is evil! (-1, Troll)

Anonymous Coward | more than 11 years ago | (#6710381)

This is just another evil attempt by Micro$oft to monopolize the industry.

are the drivers installed? (3, Funny)

efishta (644037) | more than 11 years ago | (#6710387)

Last time I checked, all graphics cards need drivers to enable their acceleration.

Re:are the drivers installed? (1)

dustwun (662589) | more than 11 years ago | (#6710688)

Okay, this guy is the driving force behind the Enlightenment window manager. He's written several libraries and other X-related apps in fairly common usage. This level of troubleshooting is just a bit of an insult, don't you think?

New Nvidia card! (-1, Troll)

Flingles (698457) | more than 11 years ago | (#6710389)

The new NVIDIA Crapx 5300 lets you use hardware rendering. And it does it all 30-50x slower than software! Buy now from Only $2 and a cookie.

2D acceleration using OpenGL? (5, Interesting)

gloth (180149) | more than 11 years ago | (#6710390)

He didn't really get too far into that, but it would be interesting to see how feasible it is to do all the 2D rendering using OpenGL, encapsulated by some layer, like his Evas.

Has anyone done that? Any interesting results? One would think that there's a lot of potential here...

One word: (4, Informative)

i_am_nitrogen (524475) | more than 11 years ago | (#6710441)


IrisGL or OpenGL (I think OpenGL is based on IrisGL, so Irix probably now uses OpenGL) is used extensively in Irix, for both 2D and 3D.

Re:One word: (0)

Anonymous Coward | more than 11 years ago | (#6710484)

Huh? There's nothing OpenGL-powered in Irix's X windowing interface (known as 4DWM), as far as I can tell.

Can OpenGL be used for 2D? Yah; but it's a ton slower than more specialized methods.

Re:One word: (3, Informative)

Krach42 (227798) | more than 11 years ago | (#6710680)

You forgot about an even more common example... QUARTZ! Apple's OS X does all rendering through Quartz (as PDFs), which when accelerated by OpenGL is called Quartz Extreme.

Re:2D acceleration using OpenGL? (4, Interesting)

Animats (122034) | more than 11 years ago | (#6710554)

That's technically viable, and I've worked with some widget toolkits for Windows that render everything through OpenGL. On modern graphics hardware, this has good performance. After all, the hardware can draw complex scenes at the full refresh rate; drawing some flat bitmaps through the 3D engine isn't too tough.

One problem is that multi-window OpenGL doesn't work that well. Game-oriented graphics boards don't have good support for per-window unsynchronized buffer swapping, so you tend to get one window redraw per frame time under Windows. (How well does Linux do with this?) Try running a few OpenGL apps that don't stress the graphics hardware at the same time. Do they slow down?

One of the neater ways to do graphics is to use Flash for 2D and OpenGL for 3D. Quite a number of games work that way internally. The Flash rendering engine typically isn't Macromedia's, but Macromedia authoring tools are used. This gives the user interface designers great power without having to program.

Re:2D acceleration using OpenGL? (4, Informative)

Rabid Penguin (17580) | more than 11 years ago | (#6710597)

Yes, and yes. :-)

The current version of Evas is actually the second iteration. The first version had a backend written for OpenGL, which performed quite well for large drawing areas, but was sluggish with many small areas (bad for window managers). The software engine easily outperformed in those cases, and will be used for the resulting window manager's border drawing.

For now, there is not an OpenGL engine in Evas, because of time constraints. E has a relatively small active development team atm, so it's difficult to say when someone will get around to adding the OpenGL engine. There should be one eventually, all nicely encapsulated except for a couple setup functions.

2D acceleration using OpenGL?-Pipe power. (0)

Anonymous Coward | more than 11 years ago | (#6710660)

Careful design of both the pixel and vertex programs, and intelligent scene decomposition. I.e., look at what's presently on your screen, and ask: "What's the minimum I can send to get this?" Same with changes (caching can help here).

Also you can get the 2D part to help out. I'm currently researching what other parts (if any) I can use to help out in this task.

Re:2D acceleration using OpenGL? (0)

Anonymous Coward | more than 11 years ago | (#6710622)

Apple's Quartz Extreme (or whatever it's called) uses OpenGL to provide some 2D acceleration.

The prat in the hat is back (-1, Troll)

Anonymous Coward | more than 11 years ago | (#6710395)

Aw gawd...

The prat is back.


The damndest thing. (4, Informative)

Raven42rac (448205) | more than 11 years ago | (#6710398)

I have used both ATI and NVIDIA (and 3dfx, and Matrox, but staying relevant). Generally the NVIDIA cards I have owned have been vastly outperformed by the ATI cards right off the bat, without tweakage. (This is under Linux, mind you.) Even with tweakage, in my experience, you rarely get the full potential from your card.

Re:The damndest thing. (1)

efishta (644037) | more than 11 years ago | (#6710405)

That's why every new "generation" of drivers from NVIDIA helps a bit in the performance department. The 20.xx series helped out dramatically for the GeForce3 series, and so on with the GF4 and GF5; each new generation of drivers helps out more than the previous one. Otherwise, there wouldn't be a new driver released every week, which to me shows the dedication they have for their products. Some other companies release drivers every few months or so (like ATI, which has a dismal driver-quality record, though much improved recently).

Re:The damndest thing. (-1, Troll)

Eric Destiny (255168) | more than 11 years ago | (#6710448)

I think you missed the part where he said "This is under Linux, mind you." Pay attention, cockstain.

Re:The damndest thing. (1)

Raven42rac (448205) | more than 11 years ago | (#6710460)

They have been getting better, I will give them that. One of the reasons ATI's drivers are better is because the 9800 driver is a sexed-up 9700 driver, so they have had a head start, which isn't honestly a bad thing.

But is your recollection from your impression (0)

Anonymous Coward | more than 11 years ago | (#6710646)

or from benchmarks? In case you haven't been keeping up with the latest news, NVIDIA has been shown to be an uber cheater going back a long way.

accelerated? (3, Interesting)

Spy Hunter (317220) | more than 11 years ago | (#6710409)

Is XRender really accelerated? I thought that most Render operations were still unaccelerated on most video cards, and how and if they could be accelerated was still an open question. Maybe the real problem here is Render's software rendering code?

Alpha Channeling by 2010 guaranteed! (1)

HanzoSan (251665) | more than 11 years ago | (#6710442)

This is when I predict Xrender will be complete and Linux will be set to compete with OSX and Windows Longhorn in terms of rendering.

I suggest that Rasterman just forget about Xrender and use directfb or opengl.

Re:accelerated? (3, Interesting)

saikatguha266 (688325) | more than 11 years ago | (#6710642)

The NVidia drivers say something about Render Acceleration, as someone already pointed out. However, there is definitely some glitch somewhere. I tried the benchmark with RenderAccel both turned off and on, on my GeForce 3 with the 4496 drivers, and perceived no significant difference in the tests except for test 1 (11s for no accel, 2.5s for accel, 0.62s for imlib2). The rest of the tests sucked for the driver (11s, 215s, 183s, 356s for tests 2 to 5 -- both with and without render accel, as opposed to 0.21s, 4.5s, 2.7s, 5.8s for imlib2).

I use Xinerama with the secondary display on an ATI 98 Pro (Yay for college tuitions). One thing I did notice was that even in render-accelerated mode, if I drag the window to the middle, straddling the screen split, the images display on both sides (though ATI's side is scaled down even at the same resolution, for some reason). However, if I use a GL application (glxgears, mplayer -vo gl2, etc.) then straddling the screen only gives half a display, on the GeForce board. So in this case, X is either not using XRender because of the NVidia drivers, or is picking the lower of the capabilities of the video cards, or is doing something in the middle causing the GeForce and ATI displays to differ.

I wonder if there is any way to explicitly force X to use the hardware for XRender as you can do with GL.

Re:accelerated? (1)

whereiswaldo (459052) | more than 11 years ago | (#6710764)

Obviously, something is wrong here. Hardware rendering should always be faster than software rendering, if the hardware is being used properly.

In the stuff I've done, I'd guess a factor of 4 increase in speed at least.

Where is Keith Packard? (0, Troll)

HanzoSan (251665) | more than 11 years ago | (#6710423)

Keith, please explain this! This shouldn't happen.

Also how many more years will it take before Linux can compete with OSX? 5 more years? Maybe 10? We have forever and a day you know.

Re:Where is Keith Packard? (1)

KentoNET (465732) | more than 11 years ago | (#6710482)

Err...Keith Packard ditched XFree86 to start his own fork, xwin [] ...

And you're right... it does take longer to get complicated stuff done in free-software land than it does when you've got who knows how many guys being paid to do it.

Keith IS being paid. (3, Insightful)

HanzoSan (251665) | more than 11 years ago | (#6710499)

Also, the only reason it's taking so long is because they won't fork. There are millions of developers who Red Hat, SuSE, Lindows, etc. would love to pay to develop XRender. You think Keith Packard is the only developer in the world qualified to do this? No he's not, and neither is Carl Worth, but until there is a fork, everything goes through this core group of developers who decide everything.

It's a management issue more so than a lack of developers or a lack of money. Believe me, if Transgaming can get money, XFree could get about 10x that amount; Mandrake has 15,000 subscribers paying $60 a year or something.

This isn't about money, and it's not about a lack of programmers; it's about management. The developers argue and fight over stupid stuff on mailing lists, and the two developers working on XRender seem overworked because they are doing so many other projects.

It's more complicated than it seems.

XWin is not an official fork; at least, I was told it wasn't a fork, that it was more of a threat of a fork. I am wishing and hoping they DO fork and then accept money somehow, so we can pay developers to write this very important code.

Re:Keith IS being paid. (0)

Anonymous Coward | more than 11 years ago | (#6710518)

In defense of that guy: you're the one who brought up Keith Packard. And also, XFree is not a Linux project, it's multiplatform. Distribution money has nothing to do with what they make.

Re:Keith IS being paid. (2)

KentoNET (465732) | more than 11 years ago | (#6710526)

XWin is forking Xlib (pretty much the heart of XFree86) through their own Xr and XCB (and a few other) projects. Check their site again; there are already CVS pservers up with code.

Re:Keith IS being paid. (2, Interesting)

HanzoSan (251665) | more than 11 years ago | (#6710609)

Interesting, but how can we fund them? They don't accept donations; they don't have a way for someone like me, who doesn't have the skills to develop XRender, to pay people who do.

Two people on XRender is why it's taking so long.

Re:Where is Keith Packard? (0)

Anonymous Coward | more than 11 years ago | (#6710545)

mod up +2 informative.

no i am not the parent poster, just a concerned /.er

Putting the "wine" back in whining. (0)

Anonymous Coward | more than 11 years ago | (#6710487)

"Also how many more years will it take before Linux can compete with OSX? 5 more years? Maybe 10? We have forever and a day you know."

Why? Are you in a rush to be somewhere?

Packard Bell (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#6710434)

I have a Packard Bell 486 that can FUCK YOU IN THE ASS. That's right folks, it's so queer, it can actually FUCK YOU IN THE ASS!!

Re:Packard Bell (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#6710583)

klerck is that you?

duh (2, Interesting)

SHEENmaster (581283) | more than 11 years ago | (#6710437)

Graphics cards work quickly because they cut every corner that can possibly be cut. It makes sense that they would run general software slower.

I'm more interested in using them for specific calculations. Imagine if one of these things was accidentally imbued with the ability to factor gigantic numbers. The AGP slot is just an excuse to keep us from Beowulfing them over PCI-X.

Graphics cards and computation (5, Interesting)

Amit J. Patel (14049) | more than 11 years ago | (#6710504)

There has been some work on using graphics cards for computation [] . The tough part is figuring out how to rephrase your algorithm in terms of what the GPU can handle. You'd expect matrix math [] to work out but people have tried to implement more interesting algorithms too. :-)

- Amit []

Re:Graphics cards and computation (1)

cybermace5 (446439) | more than 11 years ago | (#6710598)

Back in school, a lot of discussion was being thrown around about using video cards to process the RC5 crack. Only thing was, the graphics processor may not have been all that much faster than the computer's main processor. It would have depended on how close the graphics optimizations were to the code-cracking algorithm.

Re: Graphics cards and computation (4, Informative)

Black Parrot (19622) | more than 11 years ago | (#6710662)

> There has been some work on using graphics cards for computation. The tough part is figuring out how to rephrase your algorithm in terms of what the GPU can handle.

Isn't there a lot of sloth involved in reading your results back as well?

Meanwhile, users of GCC can exploit whatever multimedia SIMD instructions their processor supports by telling the compiler you want to use them. For x86 see this [] and this [] ; for other architectures start here [] . (Notice the GCC version in the URL; the supported options sometimes change between versions, so you should look in a version of the GCC Manual that matches what you're actually using.)

I confess I haven't benchmarked these options, but in theory they should boost the performance of some kinds of number-crunching algorithms.

BTW, Linux users can find what multimedia extensions their CPU supports with cat /proc/cpuinfo, even from a user account. Look for multimedia support in the flags list at the end of the cpuinfo. Lots of those extensions only support integers or low-precision FP numbers, but IIRC SSE2 should be good for high-precision FP operations. Use Google to find out what your extensions are good for.

And post us back if you do some benchmarking, or find some good ones on the Web.

Re: duh (1)

Black Parrot (19622) | more than 11 years ago | (#6710618)

> I'm more interested in using them for specific calculations. Imagine if one of these things was accidentally embued with the ability to factor gigantic numbers.

Maybe that's the breakthrough that will allow us to factor large primes!

Not enough details (5, Informative)

bobtodd (189451) | more than 11 years ago | (#6710447)

Raster doesn't say whether he had 'Option "RenderAccel" "True"' enabled, which you must do on NVIDIA cards if you want XRender acceleration.

Here is the entry from the driver README:
Option "RenderAccel" "boolean" Enable or disable hardware acceleration of the RENDER extension. THIS OPTION IS EXPERIMENTAL. ENABLE IT AT YOUR OWN RISK. There is no correctness test suite for the RENDER extension so NVIDIA can not verify that RENDER acceleration works correctly. Default: hardware acceleration of the RENDER extension is disabled.

Following that option, this one is noted:

Option "NoRenderExtension" "boolean" Disable the RENDER extension. Other than recompiling the X-server, XFree86 doesn't seem to have another way of disabling this. Fortunatly, we can control this from the driver so we export this option. This is useful in depth 8 where RENDER would normally steal most of the default colormap. Default: RENDER is offered when possible.
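
For reference, a minimal XF86Config device section with the experimental option enabled might look like this (the Identifier string and surrounding layout are arbitrary examples; only the Option line comes from the README quoted above):

```
Section "Device"
    Identifier "nv0"
    Driver     "nvidia"
    Option     "RenderAccel" "true"
EndSection
```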

Re:Not enough details (3, Interesting)

madmarcel (610409) | more than 11 years ago | (#6710478)

When I enabled that setting on my Linux box (Red Hat, latest version of X, and an NVIDIA GeForce 4200), I got weird glitches all over the screen, most notably in the window borders and wherever windows or menus overlapped other things on the screen. There was an increase in speed, however. As you might expect, I disabled it after about 15 minutes. Ugh. I'll have another look at it when it's been fixed :D

Re:Not enough details (1)

molarmass192 (608071) | more than 11 years ago | (#6710502)

Has NVIDIA worked the kinks out of this yet? I remember some bad mojo about this option with OpenOffice that makes me hesitant to re-enable it. I'm still on the 4363 release of the drivers; haven't installed the 4496 ones yet.

Re:Not enough details (1)

bobtodd (189451) | more than 11 years ago | (#6710533)

You're absolutely right, and the other guy too. I have it disabled myself (running the 4496 drivers from the Portage tree; there's a patch for Linux 2.6 in there). I was more pointing out that that would be a relevant factor in the tests he was running. I didn't even notice much of a speedup from enabling it on my FX 5200, but I sure noticed the bugs. Mostly Gecko-based browser crashes for me.

But not to turn this into or anything.

Re:Not enough details (1)

bobtodd (189451) | more than 11 years ago | (#6710515)

Forgot to suggest: perhaps some Cg shaders could speed up the types of operations Raster is looking for in E 0.17? Not that I know much about Cg, but I've read some of the docs for it, and for stuff like the kind of candy the next version will have, the shader languages appear pretty much designed to do this on the hardware.

Basically, 2D vertex programs for the widgets' dimensions, fragment programs for effects on the contents of the widget shaders. Am I smoking crack here or what? Obviously it's not a solution that will cover everyone, but I get the impression the next E is some while off anyhow, by which time many more people may have shader-enabled GPUs.

(Gotta love the lack of post editing.)

Yawn (-1)

Anonymous Coward | more than 11 years ago | (#6710452)

I still don't see why Loonix has to use such a dated client/server setup. The XFree system is too damn slow, and too damn old to be used seriously as a desktop system. If you don't come up with something better, Lunix will continue to be a server-only OS, and a poor one at that.

Re:Yawn (0)

Anonymous Coward | more than 11 years ago | (#6710576)

a "poor" "server-only OS"... unlike IIS, which is clearly the bees testicles?


I agree that X is slow, sucks, etc. etc., but fuck man - get your arguments sorted out. Say that there are no apps, blah blah blah; that's more convincing than the shit you just spouted about it being a poor server OS.

Re:Yawn (2, Interesting)

OrangeTide (124937) | more than 11 years ago | (#6710669)

client/server setup is a superior way of designing a windowing environment.

X11 uses Unix sockets (or, optionally, slower and less secure TCP) and shared memory.

Win32 uses shared memory and messaging.

Mac OS X... I don't know for certain; I hope it uses Mach kernel messages.

QNX Photon uses qnx kernel messages and shared memory.

The real difference is the layer at which the windowing system exists. In the case of X11, Mac OS X, and Photon, the windowing system is just another process.

In Win32 it's a kernel thread (as far as I know). But still, you're sending messages from one place to another and constructing windows based on them.

Client/Server is the natural way to build a multi-application graphical environment.

Of course, there are "fake" environments which amount to an embedded video driver and some library to draw widgets. (Most DOS GUI apps are like this.)

An important truth about X (4, Funny)

frovingslosh (582462) | more than 11 years ago | (#6710464)

It may be big and bloated, but at least it's slow.

Re:An important truth about X (3, Informative)

OrangeTide (124937) | more than 11 years ago | (#6710712)

X is small and fast (at least XFree86 [] is). When you look at how much virtual memory it has mapped in (using 'ps', for example), you are also seeing the amount of memory mapped in for the video frame buffer. Have a 32MB video card? Well, at *least* 32MB of your virtual address space isn't mapping into system RAM; it's mapped into video RAM.

Also, with any application, the code space doesn't take system RAM in the same sense as data space does. Normally you map in pages of memory that point straight to the I/O device the executable exists on (this is called mmap [] ). You only have a few pages of system memory actually in use, for the areas of the program that are currently executing or have executed recently. It's pretty easy to draw an analogy between this and swap memory, except this is a lot simpler to implement in a kernel.

I've built mini systems where XFree86 and Linux and a handful of fun apps ran in 4MB of RAM. For a diskless system, you would want to use something like XIP (eXecute-In-Place); that way you don't have to go crazy loading applications into system RAM or have funny mmap things that try to cache memory. (If it's all in RAM disk, why are you caching RAM with more RAM? :)

Also check out the AgendaVR3 PDA [] . I own one of these gizmos. The company is basically out of business, but their PDAs definitely ran XFree86 and a ton of apps with only 8MB of flash and 8MB of RAM.

Of course, if XFree86 is still too big for you, there is always The MGR Window System [] . This fun program is designed to basically let you run multiple shells on the same screen in a graphical way, each one with its own font size if you want. It looks like monochrome X11, but it's a lot smaller. It also works over both telnet and ssh quite transparently (all the GUI stuff is encoded in vt100-like escape codes). You can even do real graphics with it; look at this big screen shot [] if you don't believe me. It's also open source, which is good, because it probably hasn't been used on Linux since kernel version 1.2. Have fun tinkering with it. :)

is this the man who said that "Windows has won"? (3, Insightful)

rxed (634882) | more than 11 years ago | (#6710476)

Is this the same person who some time ago said that: "Windows has won. Face it. The market is not driven by a technically superior kernel, or an OS that avoids its crashes a few times a day. Users don't (mostly) care. They just reboot and get on with it. They want apps. If the apps they want and like aren't there, it's a lose-lose. Windows has the apps. Linux does not. Its life on the desktop is limited to nice areas (video production, though Mac is very strong and with a UNIX core now will probably end up ruling the roost). The only place you are likely to see Linux is the embedded space." Slashdot article is also available here: l?tid=106

Thats a myth. (2, Insightful)

HanzoSan (251665) | more than 11 years ago | (#6710483)

What Apps can I not run under Linux?

My browser works, most of my games work, Photoshop works, Microsoft word works,

Do your research, Wine, Transgaming, Crossoveroffice

Re:Thats a myth. (0)

Anonymous Coward | more than 11 years ago | (#6710527)

"Works" is relative. Lots of applications will run under Wine but still exhibit quirky behavior.

Re:Thats a myth. (1)

halo1982 (679554) | more than 11 years ago | (#6710591)

What Apps can I not run under Linux?

My browser works, most of my games work, Photoshop works, Microsoft word works,

Do your research, Wine, Transgaming, Crossoveroffice

Yes, true. And Joe User doesn't want to bother (or know how) to install, recompile, etc. just to get his apps to work, when he can just install the program and run it on Windows with only a few crashes/reboots. Sad but true. I think you may be forgetting that not everyone is a /.er

Anyway though, that's totally off topic. I just had to get that in.

Thats why theres Lindows with ClickNRun (1)

HanzoSan (251665) | more than 11 years ago | (#6710643)

When was the last time you used Linux? In 2000? It's changed a LOT since 2000.

Re:Thats why theres Lindows with ClickNRun (0, Troll)

arkanes (521690) | more than 11 years ago | (#6710705)

The parent is correct. You are not. It's not optimal, or even fair, but it's true. Arguing that "you can have everything you want" under Linux won't get you far, because it's clearly false.

Re:Thats why theres Lindows with ClickNRun (1)

HanzoSan (251665) | more than 11 years ago | (#6710771)

That is not the argument. The fact is, 90% of users use the same few applications, and those applications work under Linux.

These are the facts, not opinion; there is nothing to debate. People do not use the apps which don't run under Wine, and all the apps which people do use run under Wine.

Re:Thats a myth. (0)

Anonymous Coward | more than 11 years ago | (#6710604)

You just completely missed the point.

Re:Thats a myth. (1)

OrangeTide (124937) | more than 11 years ago | (#6710770)

Every single app that I would want to run is already available and runs under Linux natively. For example:

mozilla, neverwinter nights(w/ expansion pack), gcc, gdb, make, gnuplot, bc, gimp [] , icebreaker [] , valgrind [] , electric fence [] , Crossfire [] , LyX [] , angband [] , Nethack (falcon's eye) [] , vim [] , XFree86, pekwm [] and netpbm [] .

There are few apps that I run that are not on that list. Really, if you think about it, on any computer system the top 90% of the apps you run could probably be counted on one hand.

But I'm one of those unusual people who has his laser printer working in Linux and only has a windows box to test the software I write. I compile the windows version on Linux of course. (using these scripts [] to build the cross compiler).

Re:is this the man who said that "Windows has won" (0)

Anonymous Coward | more than 11 years ago | (#6710491)

Yes, it is the same guy.
Just because he thinks that, market-wise, Linux doesn't have a chance of taking over the desktop doesn't mean he doesn't want to have a badass WM / suite of libs for himself (and anyone else who wants to use it).

Re:is this the man who said that "Windows has won" (0)

Anonymous Coward | more than 11 years ago | (#6710524)

Here's the rasterman article at linux and main [] where he does indeed say what you have posted. Your post sounds like flamebait, but upon reading the links, you're right.

Rendering backends for Evas??? (3, Funny)

JFMulder (59706) | more than 11 years ago | (#6710512)

What, so now they've got rendering backends in Evangelions?

Raster's on holiday (5, Informative)

Rabid Penguin (17580) | more than 11 years ago | (#6710532)

Normally, he would answer some questions or comments posted about something he has written, but he will be out of town for at least a few days.

I highly doubt he meant for this to get widespread exposure beyond developers of Enlightenment or X. Since it has, this is a good opportunity. I'll make this clear for anyone that didn't catch it: raster WANTS XRENDER TO BE FASTER! If there is a way to alter the configuration or to recode the benchmark to make that happen, he wants to know about it.

Rather than posting questions about his configuration (which he can't answer right now), grab the benchmarks that he put up and get better results.

Now back to your regularly scheduled trolling...

Re:Raster's on holiday (1)

bobtodd (189451) | more than 11 years ago | (#6710631)

If the past is any guide, a number of Slashdotters are awaiting E 0.17 with at least some level of hope that Raster will come through with something that rocks once again. In that light, he'd be silly not to expect his first news post in months to get some attention.

I still have some version of E .16 installed, and use it from time to time. But then, I switch between window managers like a little girl dressing up her dolly.

I also think it's a little... unusual to expect anyone to go and tell him about configurations; I know the guy primarily as a graphics-oriented programmer, whom I would expect to know his drivers already. Do you like being told how to suck eggs?

Re:Raster's on holiday (1)

Rabid Penguin (17580) | more than 11 years ago | (#6710681)

Maybe he did expect it, he didn't mention one way or another before he left. "Some attention" is much different than making it on slashdot, which I think would fall more under the "ass-load of attention" category. :-)

You're right, he is a graphics oriented programmer, and he does know his stuff quite well. My comment regarding configurations was addressing numerous posters asking "did he enable feature X?", "is he using kernel Y?", or "did he install the right driver?", etc.

The point was, he made his testing methods available, if people take issue with them they are free to do something about it. He wants to be proven wrong so he has a good reason to add an Xrender backend to Evas.

Lessons from the ancient (4, Interesting)

Empiric (675968) | more than 11 years ago | (#6710542)

There's an example from back in the 80's that still probably serves as a good engineering reference for people working on hardware/software driver issues.

In those days of yore (only in the computer industry can one refer to something 20 years ago as "yore"...) there was the Commodore 64. It retains its place as a pioneering home computer in that it offered very good (for the time) graphics and sound capability, and an amazing 64K of RAM, in an inexpensive unit. But then came its bastard son...

The 1541 floppy disk drive. It became the storage option for home users once they became infuriated enough with the capabilities of cassette-tape storage to pony up for a real medium. Unfortunately, the 1541 was slow. Unbelievably slow. Slow enough to think, just maybe, there were little dwarven people in your serial interface cable running your bits back and forth by hand.

Now, a unique attribute of the 1541 drive was that it had its own 6502 processor and firmware. Plausibly, having in effect a "disk-drive coprocessor" would accelerate your data transfer. It did not. Not remotely. Running through a disassembly of the 6502 firmware revealed endless, meandering code providing what would appear, on the surface, to be a pretty straightforward piece of functionality: send data bits over the data pin and handshake over the handshake signal pin.

As the market forces of installed base and demand for faster speed imposed themselves, solutions to the 1541 speed problem were found by third-party companies. Software was released which performed such functions as loading from disk and backing up floppies at speeds many, many times faster than the 1541's base hardware and firmware could offer.

The top of this particular speed-enhancement heap was a nice strategy involving utilizing both the Commodore 64's and the 1541's processors, and the serial connection, optimally. Literally optimally. Assembly routines were written to run on both the 64 and the 1541 sides to exactly synchronize the sending and receiving of bits on a clock-cycle-by-clock-cycle basis. Taking advantage of the fact that both 6502s were running at 1 MHz, the 1541's code would start blasting the data across the serial line to the corresponding 64 code, which would pull it off the serial bus within a 3-clock-cycle window (you could not write the two routines to be any more in sync than a couple of 6502 instructions). This method used no handshaking whatsoever for large blocks of data being sent from the drive to the computer, and so, in an added speed coup, the handshaking line was also used for data, doubling the effective speed.

The 1541 still seems pertinent as an example of a computer function that one would probably think would best be done primarily on a software level (running on the Commodore 64), but was engineered instead to utilize a more-hardware approach (on the 1541), only to be rescued by better software to utilize the hardware (on both).

There's probably still a few design lessons from the "ancient" 1541, for both the hardware and the software guys.

Re:Lessons from the ancient (2, Interesting)

red floyd (220712) | more than 11 years ago | (#6710605)

The other classic example was the original PC-AT MFM controller.

IIRC, they originally tried DMA (slave mode -- the only thing available then), and in general it was faster to pump the data out by hand.

Re:Lessons from the ancient (4, Insightful)

The Vulture (248871) | more than 11 years ago | (#6710607)

The 1541 drive itself was actually quite fast, reading an entire sector in much less than a second (if you set the job code directly in the drive). It was the serial transfer that was super slow (as you stated).

Unfortunately, the fast loaders' assumption that the CPU and the drive both ran at exactly the same speed caused problems. The PAL version of the C64 ran at a different speed (a bit slower, I believe), making fast loaders either NTSC- or PAL-specific (although there may have been one or two that actually took the clock speed into consideration). The same fault meant that fast loaders sometimes didn't work with some variants of the drives (different CPUs, all supposedly 6502-compatible, but not necessarily so).

Additionally, because these fast loaders required exact timing, something had to be done about the VIC-II (interrupts from it would cause the 6510 in the C64 to lose its timing). Usually the screen was blanked (basically turning off the VIC-II), or at the least, sprites were turned off (sprites, by the way, while nice, were a PITA because they disrupted everything, including raster timing).

Commodore did screw things up... They had four (or was it six?) connectors on each end of the cable; they could have made it at least quasi-parallel rather than serial with handshaking. Unfortunately, they only hooked up two: CLK (handshaking clock) and DATA (for the data bit). However, seeing as the 1541 was the same hardware mechanism as the 1540 (its predecessor for the VIC-20) and contained most of the same software (you could use a "user" command to change the speed on the VIC-20), they couldn't just go out and change the design. I almost get the feeling that they took the serial bus from the VIC-20 and put it in the C64, figuring that they'd be able to use the 1540 drive. Then at the last minute they realized it wouldn't work, so they made the 1541, as well as a ROM upgrade to make the 1540 work with the C64.

While getting rid of the handshaking and transferring an extra bit over that line made sense then, with modern computers I wouldn't trust it. There are too many components from too many manufacturers, and I really like my MP3 and pr0n collections too much to lose them to one corrupted bit.

-- Joe

Unfair comparison (3, Informative)

Anonymous Coward | more than 11 years ago | (#6710548)

The numbers being reported for this benchmark are at best questionable -- yeah, like that's new. The imlib image is composed off-screen and then rendered at the last moment to the display. The Xrender non-offscreen version has the penalty of having to update the physical display so frequently. If you make imlib2 render the image to the screen on *every* draw, you end up getting results very similar to the Xrender on-screen case. Now, the fact that the Xrender off-screen case is so poor *is* a concern.

I for one (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#6710566)

welcome our new software rendering overlords.

damn this joke isnt funny...

In soviet russia our new sofware overlords render YOU

and natalie portman can software render my hot grits, you insensitive clod!

now a beowulf cluster of such posts STILL arent funny.

wishing the moderators would stop smoking crack...

mr AC...

nVidia Linux woes (4, Informative)

bleachboy (156070) | more than 11 years ago | (#6710570)

I have an nVidia GeForce2 Ultra, and recently upgraded my kernel to 2.5.75. It caused my X graphics to become unbelievably slow -- like 2400-baud-modem slow when doing a directory listing or anything where text was scrolling. Downgrading to 2.4.21-ac4 (ac4 needed for some Adaptec drivers), it was back to fast again. Further, my favorite 3D shooter was about 60 fps faster with the 2.4 kernel. The kernels were compiled identically, or at least as identically as you can get with 2.4 vs 2.5. Here are a few tips I can offer to the nVidia users out there:
  • In case you don't know, nVidia provides official (but woefully non-GPL) drivers [] . They also have a message board [] which I found to be quite informative at times.
  • Compile your kernel with MTRR support. It will speed things up a great deal.
  • Compile your kernel without AGPGART support. The nVidia driver(s) are faster.
  • If you want to try the nVidia driver with a 2.5 kernel, you'll need a patch [] .
  • If you have an nForce chipset, make sure to add "mem=nopentium" to your kernel boot parameters, or else your system will be incredibly unstable. Better yet, ditch your nForce chipset (I did) since the Linux support totally blows, at least for now. Give your old nForce chipset to your wife, girlfriend, mother, Windows box, or whatever.

Re:nVidia Linux woes (1)

equiraptor (562961) | more than 11 years ago | (#6710610)

I really appreciate the insinuation that women can't do Linux. Maybe the ones you know can't, but many of us can.

Well, yes (2, Interesting)

reynaert (264437) | more than 11 years ago | (#6710572)

As far as I know, only the Matrox G400 card has good hardware Render acceleration. NVidia's support is still experimental and rather poor. Render itself is still considered experimental, and speed is not yet considered very important. Fully accelerated support is planned for XFree86 5.

similar experience (1)

sporkboy (22212) | more than 11 years ago | (#6710648)

This reminds me of the experience of WindowFX, a 3D transparency/animation tool made by Stardock. It included hardware 'acceleration' as a settable option, but on most cards it was anything but an acceleration, running at 1 fps.

The exceptions were the G400, then the Radeon, and only very recently (on Windows) the GeForce. It's entirely an issue of how well the drivers are implemented, and since many of these 2D acceleration functions aren't widely used, they're often overlooked in favor of the (traditionally) common case. I'd expect that NVidia hasn't been lobbied as heavily to make its Linux drivers support these functions; it took months for Stardock to lobby them to alter the Windows drivers to do the same.

I guess the answer is that I'm not surprised to hear something like this, and there is hope even if it's small hope that it will get better.

Re:similar experience (1)

Krach42 (227798) | more than 11 years ago | (#6710751)

Yeah, nvidia sucks for DirectFB also... I actually saw rasterman on #directfb a lot, and talked to him while I was developing the blitting for the nvidia driver for DirectFB.

The odd thing about DirectFB is that only Matrox has decent acceleration for it. It's pretty frustrating in my opinion. I like DirectFB a lot, but it just doesn't have the support.

And for all those who say it only takes "participation", I can call BULL on that. I was at one point one of the most educated people on how the nvidia drivers work (outside of nvidia, of course), and I didn't know how to accomplish anything besides boxes, lines, and blits... and that's pitiful. But that's all that's available.


It takes time to talk to hardware (3, Interesting)

garyebickford (222422) | more than 11 years ago | (#6710617)

I worked on 2D & 3D libs a while back for a graphics company. Among the biggest problems at the time was that each different output device had its own feature set, implemented slightly differently. Every designer had their own ideas of what would be 'cool' in their graphics engine, which tended to follow the latest progress in the field.

General-purpose graphics libraries such as ours ended up spending more time dealing with the 'cool' features than those features saved. For example, if a plotter had a 2D perspective transform built in, was it better to do the 3D projection ourselves and feed it vectors it could draw untransformed, or to map the 3D in such a way as to let the plotter's 2D processing help out? This might require pre-computing sample data.

Also, since the plotter had 2D transforms, we had to do a lot more work, including reading the plotter's status and inverting the plotter's transform matrix to make sure the resulting output didn't end up outside the plotter's viewport.

A code analysis found that over 90% of the code and 90% of the processing time was spent preventing and dealing with input errors and handling compatibility issues.

Nowadays, it's harder in many ways, with a wide variety of hardware-based texturing and other rendering: do we do the lighting model ourselves, or let the hardware do it? It may depend on whether we're going for speed and 'looks' or for photometric correctness.

Show of Hands (1)

sharkey (16670) | more than 11 years ago | (#6710627)

Anybody else read that as "XBender"?

I actually downloaded and ran his benchmark (3, Interesting)

LightStruk (228264) | more than 11 years ago | (#6710645)

and I noticed something strange. For those of you who can't or won't try Rasterman's benchmark yourself, the program runs six different tests, each of which uses a different scaling technique. Each of the six tests is run on the three different test platforms: XRender onscreen, XRender offscreen, and Imlib2. Imlib2 is also written by Rasterman, and is part of Enlightenment.

Here are the test scores from one of the rounds -

*** ROUND 3 ***

Test: Test Xrender doing 2* smooth scaled Over blends
Time: 196.868 sec.

Test: Test Xrender (offscreen) doing 2* smooth scaled Over blends
Time: 196.347 sec.

Test: Test Imlib2 doing 2* smooth scaled Over blends
Time: 6.434 sec.

Now for the strange thing. For the first platform, I watched as the program drew the enlightenment logo thousands of times in the test window, as you would expect. For the second test, it took about the same amount of time, but drew offscreen, again, as the test's name would indicate. However, for the imlib2 test, it also didn't draw anything in the test window.
I got the impression (perhaps wrongly?) that Imlib2 would actually draw to the screen as well. Since it doesn't change the screen, I have no way of telling if imlib2 is doing any drawing at all.

So, I'm digging into the benchmark's code... I'll let you guys know what I find.

Re:I actually downloaded and ran his benchmark (1)

saikatguha266 (688325) | more than 11 years ago | (#6710696)

I noticed that as well. I commented out test 1, since with RenderAccel it was on par with imlib, but test 2 was slow. For test 2, I moved the imlib test to the top, followed by xrender and then xrender offscreen.

Surprise: I think I see something I shouldn't be seeing. After imlib claims it is done, I suspend the app, and in my window I see 4 unequally sized rectangles slightly covering each other. On top is a black-green diagonal gradient (northeast); under it is a light blue-dark blue NW gradient; under the blue is a black-red southward gradient; and covering that is a longish red-yellow eastward gradient.

Methinks imlib had a brainfart... or the graphics didn't load or something. Can someone confirm this?

Re:I actually downloaded and ran his benchmark (2, Informative)

saikatguha266 (688325) | more than 11 years ago | (#6710706)

Whoops. Mod me down on that last one. The image I described was the opaque image being used as a background, and the buffering threw off the printf vs. X sync, I'm guessing. On closer examination, imlib does seem to work, but it doesn't display anything while it's doing stuff -- only the final image.

The results are not obviously broken (5, Insightful)

asnare (530666) | more than 11 years ago | (#6710666)

A lot of people are questioning the results claimed by Rasterman; however, try downloading the thing and running it for yourself. I see the same trend that Rasterman claims when I do.

My system: Athlon 800, nVidia 2-GTS.
Drivers: nVidia driver, 1.0.4363 (Gentoo)
Kernel: 2.4.20-r6 (Gentoo)
X11: XFree86 4.3.0

I've checked and:

  1. agpgart is being used;
  2. XF86 option "RenderAccel" is on.

The benchmark consists of rendering an alphablended bitmap to the screen repeatedly using Render extension (on- and off-screen) and imlib2. Various scaling modes are also tried.

When there's no scaling involved, the hardware Render extension wins; it's over twice as fast. That's only the first round of tests though. The rest of the rounds all involve scaling (half- and double-size, various antialiasing modes). For these, imlib2 walks all over the Render extension; we're talking three and a half minutes versus 6 seconds in one of the rounds; the rest are similar.

I'm not posting the exact figures since the benchmark isn't scientific and worrying about exact numbers isn't the point; the trend is undeniable. Things like agpgart versus nVidia's internal AGP driver should not account for the wide gap.

Given that at least one of the rounds in the benchmark shows the Render extension winning, I'm going to take a stab at explaining the results by suggesting that the hardware is probably performing the scaling operations each and every time, while imlib2 caches the results (or something). The results suggest that scaling the image once and then reverting to non-scaling blitting would improve at least some of the rounds. That fix is too easy, however: while it helps an application that knows it's going to repeatedly blit the same scaled bitmap, not all applications know this a priori.

- Andrew

Render Bench (4, Informative)

AstroDrabb (534369) | more than 11 years ago | (#6710671)

I just ran the render bench from the link. The results are pretty amazing.
Available XRENDER filters:
Set up...
Test: Test Xrender doing non-scaled Over blends
Time: 22.842 sec.
Test: Test Imlib2 doing non-scaled Over blends
Time: 0.501 sec.

Test: Test Xrender doing 1/2 scaled Over blends
Time: 11.438 sec.
Test: Test Imlib2 doing 1/2 scaled Over blends
Time: 0.188 sec.

Test: Test Xrender doing 2* smooth scaled Over blends
Time: 225.476 sec.
Test: Test Imlib2 doing 2* smooth scaled Over blends
Time: 3.963 sec.

Heh (-1, Offtopic)

MagPulse (316) | more than 11 years ago | (#6710691)

I remember the days when I thought I'd just wait for Enlightenment v1.0. Years later, Enlightenment followers tell me, 0.16 isn't an alpha version or even a beta, it's VERSION SIXTEEN, man.

I've experienced this myself. (4, Insightful)

Anonymous Coward | more than 11 years ago | (#6710743)

The problem is in *sending* the graphics commands to the hardware. If you're manually sending quads one at a time, I found that for 16x16 squares on screen, it's faster to do it in software than on a GeForce 2 (that was what I had at the time -- this was a few years back). Think about it:

== Hardware ==

Vertex coordinates, texture coordinates and primitive types are DMA'd to the video card. The video card finds the texture and loads all the information into its registers. It then executes triangle setup, then the triangle fill operation -- twice (because it's drawing a quad).

== Software ==

Source texture is copied by the CPU to hardware memory, line by line.

Actual peak fill rate in software will be lower than in hardware, but if your code is structured correctly (textures in the right format, etc.) there's no setup. The hardware latency loses out to the speed of your CPU's cache; the software copy has about the same complexity as making the calls to the graphics card. :)

The trick is to *batch* your commands. Sending several hundred primitives to the hardware at the same time will blow software away, especially as the area to be filled increases. Well... most of the time; it really depends on what you're doing.