
Software Rendering Engine GPU-Accelerated By WebCL

Soulskill posted about 10 months ago | from the expanding-options dept.

Graphics

Phopojijo writes "OpenGL and DirectX have been the dominant real-time graphics APIs for decades. Both are catalogs of functions which convert geometry into images using predetermined mathematical algorithms (scanline rendering, triangles, etc.). Software rendering engines calculate colour values directly from the fundamental math. Reliance on OpenGL and DirectX could diminish when GPUs are utilized as general 'large batches of math' solvers which software rendering engines offload to. Developers would then be able to choose the algorithms that best suit their project, even natively in web browsers with the upcoming WebCL."
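To make the "directly from the fundamental math" idea concrete, here is a minimal sketch (illustrative only, not from the article): a software renderer that computes every pixel analytically, with no triangle pipeline involved. The same per-pixel function is what a WebCL/OpenCL port would run as a kernel, one work-item per pixel; the scene and all names here are assumptions.

<ecode>
// Sketch: shade an analytic sphere per pixel, no geometry submitted anywhere.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

uint32_t shade(float u, float v) {              // u, v in [-1, 1]
    float r2 = u * u + v * v;
    if (r2 > 1.0f) return 0xFF202020;           // background
    float z   = std::sqrt(1.0f - r2);           // sphere surface height
    float lit = std::max(0.0f, 0.3f * u + 0.5f * v + 0.8f * z); // ~dot(N, L)
    uint8_t c = static_cast<uint8_t>(255.0f * lit);
    return 0xFF000000u | (c << 16) | (c << 8) | c;
}

std::vector<uint32_t> render(int w, int h) {
    std::vector<uint32_t> fb(w * h);
    for (int y = 0; y < h; ++y)                 // this double loop is exactly
        for (int x = 0; x < w; ++x)             // what a GPU would parallelize
            fb[y * w + x] = shade(2.0f * x / w - 1.0f, 2.0f * y / h - 1.0f);
    return fb;
}
</ecode>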


84 comments

Software-rendered API wrappers through OpenCL (2)

Hsien-Ko (1090623) | about 10 months ago | (#45017711)

...is something I'd really like to see -- especially one that does Glide and the dither+undither characteristic of the first three Voodoo cards.

I know there's MAME/MESS but I don't think they do the infamous filtering.

Re:Software-rendered API wrappers through OpenCL (1)

bored (40072) | about 10 months ago | (#45018209)

This sounds like the perfect application for a shader against a whole screen texture. See https://en.wikibooks.org/wiki/OpenGL_Programming/Post-Processing [wikibooks.org]

Or is there something I'm missing? After all, there are a number of Glide wrappers already, such as nGlide [zeus-software.com] and Glidos [glidos.net].
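For what the dither+undither might look like: below is a rough CPU approximation (a sketch, not how the actual Voodoo silicon did it) of a 4x4 ordered dither down to RGB565 followed by a small neighbourhood filter on scan-out. The real cards' filter taps differed, and all values here are illustrative assumptions; ported to a full-screen fragment shader, this is the post-processing approach suggested above.

<ecode>
#include <algorithm>
#include <cstdint>

static const int kBayer4[4][4] = {
    { 0,  8,  2, 10}, {12,  4, 14,  6},
    { 3, 11,  1,  9}, {15,  7, 13,  5}};

// Dither 8:8:8 down to 5:6:5 with a position-dependent bias before truncation.
uint16_t dither565(int x, int y, int r, int g, int b) {
    int d = kBayer4[y & 3][x & 3];              // threshold 0..15
    r = std::min(255, r + d / 2);               // 5-bit channels: step 8
    g = std::min(255, g + d / 4);               // 6-bit channel: step 4
    b = std::min(255, b + d / 2);
    return uint16_t((r >> 3) << 11 | (g >> 2) << 5 | (b >> 3));
}

// "Undither": expand 565 back to 888 and average each pixel with its
// horizontal neighbours, smoothing the dither pattern like a DAC filter would.
void filterRow(const uint16_t* in, uint32_t* out, int w) {
    auto expand = [](uint16_t p, int& r, int& g, int& b) {
        r = (p >> 11) << 3; g = ((p >> 5) & 63) << 2; b = (p & 31) << 3;
    };
    for (int x = 0; x < w; ++x) {
        int r = 0, g = 0, b = 0, n = 0;
        for (int dx = -1; dx <= 1; ++dx)
            if (x + dx >= 0 && x + dx < w) {
                int pr, pg, pb; expand(in[x + dx], pr, pg, pb);
                r += pr; g += pg; b += pb; ++n;
            }
        out[x] = 0xFF000000u | (r / n) << 16 | (g / n) << 8 | (b / n);
    }
}
</ecode>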

Re:Software-rendered API wrappers through OpenCL (1)

Rockoon (1252108) | about 10 months ago | (#45019009)

Or is there something I'm missing?

Yeah, the whole API and software rendering thing.

The advantage of early APIs like Glide was that they were much lower level than OpenGL/D3D, allowing for very efficient hardware-assisted software rendering.

As a simple for-instance: in the transition from those lower-level APIs and software rendering to the higher-level APIs and fully-hardware rendering, things like landscapes suddenly used polygons, and their datasets ballooned enormously. There was nearly an entire decade of the fixed-function pipeline where the quite-negative artifacts of that transition, such as LOD popping and disconnected seams on landscapes, were commonplace, and even to this day it's difficult to get a landscape renderer correct at LOD transitions (most engines have just pushed the LOD transitions further away and covered the ground with instanced foliage to hide the problem.)

The advantage of the software rendering ray-surfing methods [flipcode.com] was that there weren't any LOD transitions with regards to landscapes if you didn't want there to be. Hell, they were rendering seamless and popless landscapes on 80386s in the demo scene, but these days it's the norm for there to be pops and seams.
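For readers who haven't seen it, a sketch of that ray-surfing style of heightmap rendering (the Comanche/demo-scene approach; the map accessors and constants are hypothetical): one ray per screen column marches across a height/colour map and draws vertical slivers front to back, so there are no meshes and no LOD transitions to pop.

<ecode>
#include <cmath>
#include <cstdint>

const int W = 640, H = 400, MAPMASK = 1023;     // 1024x1024 wrapping map
extern uint8_t  heightMap[1024 * 1024];         // assumed to exist
extern uint32_t colorMap[1024 * 1024];

void renderColumn(uint32_t* fb, int x, float camX, float camY,
                  float camH, float angle, float horizon) {
    float fov  = 3.14159f / 2.0f;
    float rayA = angle + fov * (0.5f - float(x) / W);
    float dx = std::cos(rayA), dy = std::sin(rayA);
    int yTop = H;                               // highest pixel drawn so far
    for (float dist = 1.0f; dist < 1000.0f; dist += dist * 0.01f) {
        int mx = int(camX + dx * dist) & MAPMASK;
        int my = int(camY + dy * dist) & MAPMASK;
        float h = heightMap[my * 1024 + mx];
        int sy = int(horizon + (camH - h) / dist * 240.0f); // project height
        if (sy < 0) sy = 0;
        for (int y = sy; y < yTop; ++y)         // fill only newly visible rows
            fb[y * W + x] = colorMap[my * 1024 + mx];
        if (sy < yTop) yTop = sy;
        if (yTop == 0) break;                   // column fully covered
    }
}
</ecode>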

Re:Software-rendered API wrappers through OpenCL (1)

bored (40072) | about 10 months ago | (#45019425)

Oh, I understand the reasons for wanting a software rendering engine. But the OP was looking for a "function" that is already available (AFAIK), since most of the fixed-function behavior of early 3D APIs/GPUs (like Glide) is now programmable and can be simulated with the more generic pipelines in modern GPUs. In fact, there are attempts at writing full-blown ray tracers just by (mis)using GLSL.

But, more to the point, I'm not even sure GPUs are necessary for much game-related drawing. I recently wrote a basic little 3D engine as a test case for an SSE4/intrinsics C++ wrapper library I cooked up while bored. The number of polygons I could transform and texture-map using just a couple of cores and SSE was pretty astonishing. Makes me think a lot of the 3D "isometric" games are wasting their time doing everything in full 3D only to draw it in basically 2D.
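A minimal sketch of the kind of SIMD loop being described (not the poster's actual library): a 4x4 matrix-vector transform over a vertex array using SSE intrinsics, with the matrix stored as four columns.

<ecode>
#include <xmmintrin.h>

// Transform `count` xyzw vertices by a column-major 4x4 matrix: out = M * v.
void transform(const float mCols[16],           // 4 columns of 4 floats
               const float* in, float* out, int count) {
    __m128 c0 = _mm_loadu_ps(mCols + 0);
    __m128 c1 = _mm_loadu_ps(mCols + 4);
    __m128 c2 = _mm_loadu_ps(mCols + 8);
    __m128 c3 = _mm_loadu_ps(mCols + 12);
    for (int i = 0; i < count; ++i, in += 4, out += 4) {
        __m128 x = _mm_set1_ps(in[0]), y = _mm_set1_ps(in[1]);
        __m128 z = _mm_set1_ps(in[2]), w = _mm_set1_ps(in[3]);
        __m128 r = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(c0, x), _mm_mul_ps(c1, y)),
            _mm_add_ps(_mm_mul_ps(c2, z), _mm_mul_ps(c3, w)));
        _mm_storeu_ps(out, r);
    }
}
</ecode>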

Re:Software-rendered API wrappers through OpenCL (1)

gl4ss (559668) | about 10 months ago | (#45020559)

I think people are glorifying glide in this thread a biiiiiit toooo muuuch.

glide drew triangles. what the voodoo did in hw was draw horizontal lines, z-buffered, textured and shaded. what this enabled was that people with software engines could just pop their trifiller to use that and away they went. now if the engine used something fancier, like modelling the ground as a surface (like the game outcast), then there was no fucking way to use a 3dfx voodoo to accelerate that.

I am not aware of a SINGLE GAME that did with glide what people in this thread claim. in fact, using glide straight off cut you off from pushing video from ram to complement things rendered on the voodoo.

in short, I don't understand why someone would refer to this (going back to pure cpu style, but accelerated with cl) as going back to glide, when glide had nothing of the sort since it tied you to triangles with non-dynamic textures. you could use opengl in the same fashion as soon as the minigl port for quake was out..

Re:Software-rendered API wrappers through OpenCL (0)

Anonymous Coward | about 10 months ago | (#45021241)

I think the original poster was asking for something that emulated the 16-bit dithered color to 24-bit filtered output on the voodoo, I guess to make old games look good. Really, there isn't anything the voodoo could do that hasn't been possible to do with an emulator for a number of years.

Re:Software-rendered API wrappers through OpenCL (1)

gl4ss (559668) | about 10 months ago | (#45023071)

but that was the suck. 24 bit is better. I think you would need to use 16bit throughout for the effect though, to get alpha to mess up as badly as on voodoo (which is why you can't just slap a shader on the framebuffer to turn it into 16bit; it wouldn't look the same as if the whole scene was rendered with 16bit+dither from the start)

Re:Software-rendered API wrappers through OpenCL (0)

Anonymous Coward | about 10 months ago | (#45028057)

which is why you can't just slap a shader on the framebuffer to turn it into 16bit; it wouldn't look the same as if the whole scene was rendered with 16bit+dither from the start

I think the argument was that the final output was 24-bit even if the computation was done at 16, so there was some marginal quality improvement from smoothing color transitions. If you're concerned about errors in the 16-bit computation, the shader could downsample to 14 bits or something before interpolating back to 24 bits.

Frankly, I suspect you could make some of those games look even better just by adding a little noise. Something like a simulated film grain. Basically dirty up the video a little to hide some of the rendering problems.
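The film-grain idea is nearly a one-liner per pixel. A hedged sketch follows; the strength value and RNG choice are arbitrary assumptions:

<ecode>
#include <algorithm>
#include <cstdint>
#include <random>

// Add small zero-mean noise to each pixel before display to mask banding
// and dither patterns; `strength` is the +/- range in 8-bit units.
void addGrain(uint32_t* fb, int count, int strength /* e.g. 8 */) {
    std::minstd_rand rng(12345);
    std::uniform_int_distribution<int> noise(-strength, strength);
    for (int i = 0; i < count; ++i) {
        int n = noise(rng);                     // same offset for r, g, b
        uint32_t p = fb[i];
        auto ch = [n](uint32_t v) {
            return uint32_t(std::clamp(int(v) + n, 0, 255));
        };
        fb[i] = (p & 0xFF000000u) | ch((p >> 16) & 255) << 16 |
                ch((p >> 8) & 255) << 8 | ch(p & 255);
    }
}
</ecode>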

STAAAAAHP! (5, Insightful)

girlintraining (1395911) | about 10 months ago | (#45017713)

Developers would then be able to choose the algorithms that best suit their project, even natively in web browsers with the upcoming WebCL."

If web browsers were people, that statement would have caused a mass suicide. Guys, stop trying to turn the browser into a platform. It introduces so many layers of complexity and security issues that it's a miracle anyone has any trust or faith in the internet at all. It's getting to the point where the only way to safely browse the net is to shove the entire browser into a virtual machine... and even that only manages to protect your own computer, to say nothing of your online activities, credentials, life, etc.

We need to be making browsers simpler, not more complex. Feature bloat is making these things a leper colony inside your computer... a cesspool of malware and vulnerability. Don't add to it by coming up with some new way for developers to directly access the hardware of your computer because you're too fucking lazy to write an app to do whatever it is, and want to cram it into the browser instead. You're just encouraging them.

Seriously, we need a 12 step program for these "web 2.0" people.

Re:STAAAAAHP! (1)

lance_of_the_apes (2300548) | about 10 months ago | (#45017805)

+1

Hell, make it +10.

Re: STAAAAAHP! (1)

eyegone (644831) | about 10 months ago | (#45019259)

Make it +10,000.

Re:STAAAAAHP! (2)

fragfoo (2018548) | about 10 months ago | (#45017889)

Developers would then be able to choose the algorithms that best suit their project, even natively in web browsers with the upcoming WebCL."

If web browsers were people, that statement would have caused a mass suicide. Guys, stop trying to turn the browser into a platform. It introduces so many layers of complexity and security issues that it's a miracle anyone has any trust or faith in the internet at all. It's getting to the point where the only way to safely browse the net is to shove the entire browser into a virtual machine... and even that only manages to protect your own computer, to say nothing of your online activities, credentials, life, etc.

We need to be making browsers simpler, not more complex. Feature bloat is making these things a leper colony inside your computer... a cesspool of malware and vulnerability. Don't add to it by coming up with some new way for developers to directly access the hardware of your computer because you're too fucking lazy to write an app to do whatever it is, and want to cram it into the browser instead. You're just encouraging them.

Seriously, we need a 12 step program for these "web 2.0" people.

Browser-based apps are not done because developers are "too fucking lazy to write an app to do whatever", but because it costs more money to do it for multiple platforms, including mobile ones, and because web apps just work without the need to install anything (the app itself, a JRE, whatever) or to have some kind of user privileges.

Re:STAAAAAHP! (1)

NoNonAlphaCharsHere (2201864) | about 10 months ago | (#45018557)

Seriously, we need a 12 step program for these "web 2.0" people.

I can think of 5:

1. Line 'em up.
2. Shoot 'em.
3. Bulldoze 'em into a ditch.
4. Cover with lime.
5. Repeat.

Missing steps.... (1)

rts008 (812749) | about 10 months ago | (#45020007)

5. Repeat

Well, that actually accounts for eight of the twelve steps.

I suggest some modding:
1. round up politicians and remaining lawyers
2. add them to the "web 2.0" crowd
3. line them up
4. Shoot 'em
5. cover with lime
6. bulldoze 'em into a ditch
7. repeat (you can never be too careful here)

And there's your 12 step program!

Re:STAAAAAHP! (0)

Anonymous Coward | about 10 months ago | (#45023279)

...but not so lazy that they write CSS for each desktop browser and a mobile version of the site for each mobile browser....

Writing an interactive web app to work across browsers is the same dance as writing a desktop app to work across platforms.

I develop a complex engineering simulation tool in Java SE. The only platform-specific code is for the license protection system, which relies on native libraries for each targeted platform.

The UI looks and works the same on each platform. No, it does not look like a native app on any platform, but it does look modern and not ugly.

I have written some web apps as well, in PHP and Ruby on Rails. I had to spend more time getting consistency between browsers for these apps than I would have spent getting a Java app to work and look the same on different operating systems.

Re:STAAAAAHP! (1)

fragfoo (2018548) | about 10 months ago | (#45030583)

Browser consistency, desktop and mobile included, is obtained by properly using a JavaScript framework that handles it for you, plus something like Bootstrap for the CSS.

Re:STAAAAAHP! (1)

Lennie (16154) | about 10 months ago | (#45033701)

There are millions of web developers and only a couple hundred thousand 'native app' developers for iOS and Android, who charge a lot more money.

Really, development speed and knowledge of native platforms is an important factor. If you only need to know one platform and can reuse code, that translates to less time, less required knowledge of native platforms, and thus less cost.

Less cost, that's what this is about. Businesses like less cost.

Re:STAAAAAHP! (4, Interesting)

Phopojijo (1603961) | about 10 months ago | (#45017901)

Actually, I look at web browsers as an art platform. It is programmed by a set of open standards, which gives any person or organization the tools to build support for the content that is relevant to society. A video game, designed in web standards, could be preserved for centuries by whoever deems it culturally relevant.

For once, we have a gaming platform (besides Linux and BSD) which allows genuine, timeless art. If the W3C, or an industry body like them, creates an equivalent pseudo-native app platform... then great. For now, the web is the best we have.

Re:STAAAAAHP! (1)

chuckinator (2409512) | about 10 months ago | (#45018023)

POSIX has been around for a while. ISO/IEC also publish open standards for C and C++.

Re:STAAAAAHP! (0)

Anonymous Coward | about 10 months ago | (#45018489)

From the POSIX spec:

The following areas are outside the scope of POSIX.1-2008:

  • Graphics interfaces

Most games have graphics.

Re:STAAAAAHP! (0)

Anonymous Coward | about 10 months ago | (#45019069)

None of the games chuckinator likes do. He's an expert on Hunt the Wumpus, and Hunt the Wumpus will never die! Not least because I'm working on an HTML5 version I intend to bring to a web browser near you.

Make that two: POSIX and X11 (1)

tepples (727027) | about 10 months ago | (#45019097)

Then that makes two open specs that must be implemented: POSIX and X11. But you have a point that right now the only notable environments that focus on implementing these specs are desktop Linux and the free *BSDs. OS X, while based in part on FreeBSD, is not a free *BSD and no longer includes XQuartz as a standard feature [apple.com] .

Re:Make that two: POSIX and X11 (0)

Anonymous Coward | about 10 months ago | (#45019677)

You expect high efficiency programming on... X11?

I like you. You think big.

Re:STAAAAAHP! (0)

girlintraining (1395911) | about 10 months ago | (#45020897)

It is programmed by a set of open standards, which gives any person or organization the tools to build support for the content that is relevant to society.

I think you need to lay off the koolaid man.

JPEG [wikipedia.org] and GIF [wikipedia.org] both have licensing issues; they are not free. The intended replacement for these, PNG, hasn't seen widespread adoption, can't do animations, but has no licensing issues. In fact, if you take a walk down a list of all the multimedia technologies commonly used on the web -- MP3, MPEG4, h.264, AAC, surround sound -- you will find yourself in a veritable desert when it comes to truly free standards. The standards may be 'open', that is, published... but a loose coalition of companies controls all the key pieces.

In fact, the W3C, responsible for the HTML, XML, CSS, etc. standards, only two days ago [w3.org] decided to make documents about those standards available under an 'open source' license... optionally. Oh, but it gets worse. They're planning on introducing DRM and proprietary tech directly into the HTML5 standard; while you could conceivably use free technology, only the proprietary codec that will eventually be chosen for audio and video is guaranteed to be available on any HTML5-compliant browser.

A video game, designed in web standards, could be preserved for centuries by whoever deems it culturally relevant.

Copyright, though, means that they won't even be able to consider preservation for centuries. Many software licenses don't allow for backup copies... and it is highly doubtful you'll be able to keep the DVD you purchased it on alive for the next "150 years plus the life of the author", along with the equipment to read it, and some kind of adapter technology, so that when the copyright finally expires it might actually have a shot at being preserved.

If the W3C, or an industry body like them, creates an equivalent pseudo-native app platform... then great. For now, the web is the best we have.

What we have is a dystopian nightmare world of competing proprietary technologies, very few free/open alternatives, and billions being poured into the working groups responsible for setting these standards to ensure that every company involved gets a slice of the pie... meaning that by the time they've gotten fat off their fill, you, the artist, will be starving. Again. Because before you can create art, you'd better pay a licensing fee, publishing fee, content storage fee, protection fee (for the DRM to protect your art, of course), and ongoing monthly fees to your ISP.

No. Sorry, "the best we have" is not the web... it's stone tablets. Because those are about the only things someone doesn't have a patent on.

Re:STAAAAAHP! (1)

ameen.ross (2498000) | about 10 months ago | (#45023005)

JPEG [wikipedia.org] and GIF [wikipedia.org] both have licensing issues; they are not free.

Are you kidding me? The patents for GIF expired long ago [freesoftwaremagazine.com]. As for JPEG, that's as much a "living standard" as HTML5 is. It's worth researching further, but I'd think the older parts of JPEG aren't too problematic [pcworld.com].

The intended replacement for these, PNG, hasn't seen widespread adoption, can't do animations, but has no licensing issues.

PNG has never been and never will be an intended replacement for JPEG, as PNG is lossless and JPEG is (mostly) lossy. And in what way hasn't PNG seen widespread adoption? It is the dominant lossless image format and is used absolutely everywhere. PNG can do animations too (though that's not supported anywhere meaningful), but WebM makes more sense for that. Don't like it? Use good ol' GIF then. A replacement for JPEG would be WebP [andrewmunsell.com].

Re:STAAAAAHP! (2, Insightful)

Anonymous Coward | about 10 months ago | (#45017961)

Yeah, they really missed the boat not freezing computing in the 70s. These kids and wanting features and interesting applications and useful computing are a bunch of assholes. They should be forced to do things MY preferred way because my opinion is the only one that matters.

Fucking slashtards.

Re:STAAAAAHP! (1)

X0563511 (793323) | about 10 months ago | (#45017975)

... or if your users are too fucking lazy to install an app. Too bad, make them do it.

Re:STAAAAAHP! (0)

Anonymous Coward | about 10 months ago | (#45018079)

... or if your customers value their time. Too bad, make them go to the competition.

ftfy

Re:STAAAAAHP! (1)

CastrTroy (595695) | about 10 months ago | (#45018223)

I'm not sure that's the best solution either. With the current desktop offerings, all applications run with the full permissions of the user. Things are a little bit better on the mobile side: at least with Android I can see which permissions an app has, and by default apps are very limited in what they can do. With Windows/Linux, any application I run can go and delete my entire home folder, or send it all out to some site on the web, or wreak all kinds of havoc. Currently, running in a web browser is the only pseudo-sandbox that exists for desktop systems. I'd much rather run a web app from some random company than install some application on my computer.

Re:STAAAAAHP! (1)

lister king of smeg (2481612) | about 10 months ago | (#45018763)

I'm not sure that's the best solution either. With the current desktop offerings, all applications run with the full permissions of the user. Things are a little bit better on the mobile side: at least with Android I can see which permissions an app has, and by default apps are very limited in what they can do. With Windows/Linux, any application I run can go and delete my entire home folder, or send it all out to some site on the web, or wreak all kinds of havoc. Currently, running in a web browser is the only pseudo-sandbox that exists for desktop systems. I'd much rather run a web app from some random company than install some application on my computer.

On Linux, isn't that what AppArmor or SELinux is supposed to help mitigate?

Rejected apps (1)

tepples (727027) | about 10 months ago | (#45019135)

Or if your users happen to have chosen a platform that gives the operating system publisher veto power over apps, and the operating system publisher has chosen to exercise this power over your app. For example, see Bob's Game or any story about rejection from Apple's App Store.

Re:Rejected apps (1)

X0563511 (793323) | about 10 months ago | (#45024583)

So sad for the users (and developers who write for it).

Re:Rejected apps (1)

tepples (727027) | about 10 months ago | (#45024763)

Would you prefer to be able to read Slashdot through the web but have to buy an app (and a different brand of computer to run the app) in order to post? That's what it'd be like.

Re:Rejected apps (1)

X0563511 (793323) | about 10 months ago | (#45026185)

The difference is that Slashdot is a website, not an application.

Line between website and application (1)

tepples (727027) | about 10 months ago | (#45026855)

If you make a distinction [wikipedia.org], you need to explain the difference [logicallyfallacious.com]. Where does a website end and an application begin? Slashdot and other web boards are essentially web-based workalikes of an NNTP user agent, and that's certainly an application. I had assumed that the line was that one reads a "website" but posts using an "application". If it lies elsewhere, where do you draw the line?

Re:STAAAAAHP! (0)

Anonymous Coward | about 10 months ago | (#45020371)

Okay, on the desktop, you now give proprietary applications complete control over everything. Okay, almost everything -- it used to be everything before WinXP and OSX, but even under modern NT-kernel-based Windows or Darwin-kernel-based Mac OS, you still face the fact that the default mode for applications is "complete access to the user's account with no explanation whatsoever". If you don't know why this is an absolutely horrible thing to do, well... let's just say that we've been trying to get our folks to stop clicking on every "download this app that steals your credit card in exchange for free smileys" banner ad. And it's still an issue.

So we should never encourage people to install unsandboxed proprietary software, ever. That means we need sandboxing. Let's look at the options:

1. Windows Store apps - would be nice, except the entire Win32 API is sandboxed away and replaced with WinRT, which forces applications into a windowing policy that at best could be described as a "poor man's Xmonad". On the desktop, this is a very disruptive windowing mechanism, given that existing apps don't get to participate in this windowing policy at all. Also, the aforementioned windowing mechanism is the primary reason why desktop users are staying far, far away from Windows 8, which is the only platform that can accept Windows Store apps.
2. Mac App Store - gives you actual desktop access and the sandboxing limitations aren't too restrictive as to keep your app from working. However most of your desktop customers aren't using Macs to begin with.
3. The web browser - Most users have one regardless of platform and applications are fully sandboxed. There's a fairly small amount of things you need to worry about regarding input handling (XSS, XSRF, and SQLi to name a few) but these security issues affect *your* application, not other things on the user's computer. Capabilities across common browsers are good enough to satisfy most developers' vision for their application. The only main issue is offline support, but most desktop OSes are used with an active internet connection good enough to send small amounts of text data back and forth every few seconds.

Naturally, the web browser comes out ahead as the least vulnerable platform to host an application on while still reaching a large swath of customers. Yes, okay, maybe it deviates from Tim Berners-Lee's vision of an SGML-based hypertext document format, but telling desktop users to install an app is akin to telling them that it's okay to wear a shirt with their social security number on it.

Re:STAAAAAHP! (1)

X0563511 (793323) | about 10 months ago | (#45024575)

Only if your OS is gimped. There is such a thing as Mandatory Access Control. You should look it up; we even have working implementations.

Re:STAAAAAHP! (2, Insightful)

Anonymous Coward | about 10 months ago | (#45017987)

Guys, stop trying to turn the browser into a platform.

No. Or "too late", rather.

these "web 2.0" people

These "web 2.0" people are going to continue to ignore you until what you espouse is as obviously stupid to everyone as it is to them. I'm not sure it isn't already.

Re:STAAAAAHP! (1)

chuckinator (2409512) | about 10 months ago | (#45017991)

The silver lining in this is that it lets us dazzle the suits with just how fast our C++ apps are in comparison to the mess that the web stack has turned into.

Re:STAAAAAHP! (2)

Phopojijo (1603961) | about 10 months ago | (#45018125)

It's getting much closer. Most asm.js demos show C++-compiled-into-JavaScript running at only half the performance of native C++ (and getting faster). That's the difference between 30fps and 60fps if all the code were JavaScript. WebCL, on the other hand, runs at almost exactly OpenCL speeds... so for GPU-accelerated apps (depending on whether JavaScript or WebCL is your primary bottleneck) you could get almost native performance.

SmallPtGPU, from the testing I did a while ago, seems to run at almost the same speed whether driven by WebCL via JavaScript or OpenCL via C++.

Re:STAAAAAHP! (1)

Anonymous Coward | about 10 months ago | (#45018395)

only half the performance of native C++

Google's Native Client sandbox only suffers a few percent performance degradation relative to native software. "Half" is comparatively awful.

Re:STAAAAAHP! (1)

Lennie (16154) | about 10 months ago | (#45033719)

Half is faster than most scripting languages. Look it up: a lot of scripting languages are 100 times slower than native.

JavaScript is the fastest generally-used scripting language, after (or similar to) Lua. And Lua was optimized to be an embedded language from the start; nobody really expected JavaScript would go this far, so it wasn't designed for that.

I think it's kinda cool how far people have been able to push it.

Re:STAAAAAHP! (1)

chuckinator (2409512) | about 10 months ago | (#45018407)

Handwave it all you want, but there's a huge difference between 30fps and 60fps, and "almost there" isn't good enough when you're still chasing frame rates that were common in native C++ applications in the late 90s and the customer wants your build yesterday.

Re:STAAAAAHP! (1)

narcc (412956) | about 10 months ago | (#45020003)

Meh, who cares?

Remember VB6? (Oh, the horror!) VB6 apps were slower and bigger than the equivalent written in VC++.

Do you know why it was so popular?

Because the "horrible" performance was good enough for most applications. A good developer using VB6 cut his development time significantly (hours vs days, days vs. weeks). A beginner could actually get something to work in a reasonable amount of time. That's powerful.

The web as a platform has its own set of advantages that, for many applications, more than offset the speed issue. Your application is near-effortlessly cross-platform and painless to deploy. I'd even argue development time is a significant advantage over C++.

Your talk about frame rates makes me think you're focused on games. Well, it's good enough there as well -- even on mobile -- if you don't mind your game looking about Wii-quality, that is. For most people, and most games, this is perfectly acceptable.

So, yes, "almost there" really is "good enough". No one is "chasing frame rates" that were "common in the late 90's" -- that doesn't even make sense; the web can easily handle "common in the late 90's" without breaking a sweat. You seem to think that if it's not good enough to run a modern AAA title at 60fps, it's worthless as a platform. That's ridiculous.

Re:STAAAAAHP! (1)

doti (966971) | about 10 months ago | (#45025727)

"good developer using VB6"

hahaha

Re:STAAAAAHP! (1)

narcc (412956) | about 10 months ago | (#45028873)

You're an idiot.

Re:STAAAAAHP! (1)

Lennie (16154) | about 10 months ago | (#45033733)

If you think people use JavaScript to create the frames in high-speed games, you are being silly.

They use native-like typed arrays and WebGL when they make JavaScript/HTML games, basically offloading most of the work to native code.

JavaScript/HTML5/the open web platform/whatever you want to call it is just a collection of APIs to talk to native code.

Only the application-specific code will be written in JavaScript.

Re:STAAAAAHP! (3, Insightful)

blahplusplus (757119) | about 10 months ago | (#45018211)

" stop trying to turn the browser into a platform."

The reason they are doing this is the big push by major industries for more DRM, although current DRM is ineffective against more technically inclined people. They want to eventually be able to encrypt and split up programs and data, tying them to the server -- just like how Diablo 3 held part of the program hostage across the internet and you had to constantly 'get permission' to continue playing the game.

If you think big companies are not looking at what the game industry and others are doing to lock down apps, then you haven't been paying attention.

Re:STAAAAAHP! (1)

Anonymous Coward | about 10 months ago | (#45019153)

For the sake of all our sanity, could you please learn the distinctions between full stops (which you may know as periods... lol), commas and semi-colons before you next post? It will make your posts look less like Spot the Dog, and make people reading them hurt less.

Thank you.

Re:STAAAAAHP! (1)

buchner.johannes (1139593) | about 10 months ago | (#45019773)

The reason they are doing this is the big push by major industries for more DRM.

I think you are mixing up cause and effect. Major industries would like to continue using desktop apps or even "Trusted Computing". But the web is a platform, so they try to force their DRM onto it.

Re:STAAAAAHP! (2)

TomGreenhaw (929233) | about 10 months ago | (#45018231)

I respectfully disagree. Whether anybody likes it or not, when JavaScript support was added to browsers (a really long time ago) the browser became a platform. Great web-based apps are harder to do than native apps, but well-designed web-based apps work on all platforms and do not require client-side installation or support. The cost of distribution and maintenance of web-based apps is dramatically lower, and centralized code management makes change management much more effective, which can increase quality. As for security and safety, I think having no data stored on a client and all data stored on centralized servers managed by professionals is a tremendous advantage in every respect.

What's a better cross-platform platform? (1)

tepples (727027) | about 10 months ago | (#45019175)

Guys, stop trying to turn the browser into a platform.

Then what's a better platform for developers who want to reach users of Windows, OS X, desktop Linux, Android, iOS, Windows RT, Windows Phone, and the game consoles? Making a program work on more than one platform requires severe modifications, sometimes including translation of every line of code into a different programming language. Windows Phone 7 and Xbox Live Indie Games, for example, couldn't run anything but verifiably type-safe .NET CF CIL. And all except the first four require permission from the operating system publisher before your code will even run, and said permission is not guaranteed.

Re:STAAAAAHP! (0)

Anonymous Coward | about 10 months ago | (#45023023)

It is the other way around.

If we replace the 1000+ HTML and CSS features with, say, 100 WebCL commands, and let library programmers implement HTML and CSS on top of those 100 commands, then we improve on simplicity and hence on security.

Missing the point? (1)

Russ1642 (1087959) | about 10 months ago | (#45017831)

I thought the point of GPUs was to offload not only the rendering of 3D graphics but also the algorithms. Game developers don't want to have to program primary rendering algorithms with every game they create. Do they? Am I missing something?

Re:Missing the point? (0)

Anonymous Coward | about 10 months ago | (#45017947)

Game developers don't want to have to program primary rendering algorithms with every game they create. Do they? Am I missing something?

They might want to tweak something to get a unique effect for their game. Other than that, developers of all kinds use software libraries to implement common tasks. We could implement everything from scratch every time, but it turns out that doesn't make sense: you don't write a new OS just to implement a clock app when other people have already written OSes. We build on code that is already written.

Libraries are copyrighted (1)

tepples (727027) | about 10 months ago | (#45019519)

Other than that, developers of all kinds use software libraries to implement common tasks.

The problem comes when licensing such libraries becomes cost prohibitive or requires the developer to give up the keys to his own kingdom.

Re:Missing the point? (2)

Phopojijo (1603961) | about 10 months ago | (#45018007)

Some want to use the same algorithms OpenGL and DirectX do... and those APIs are still there for them.

Some do not. A good example is Epic Games, who in 2008 predicted "100% of the rendering code" for Unreal Engine 4 would be programmed directly for the GPU. The next year they found the cost prohibitive, so they stuck with DirectX and OpenGL at least for a while longer. Especially for big production houses, if there is a bug or a quirk in the rendering code, it would be nice to be able to fix the problem directly rather than hack in a workaround.

Re:Missing the point? (1)

chuckinator (2409512) | about 10 months ago | (#45018081)

Since OpenGL 3.0, the traditional OpenGL rendering pipeline has (supposedly) been implemented using shaders under the hood. Also, check out the OpenGL Mathematics (GLM) library for the matrix operations in userspace that used to live at the driver level.
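As a sketch of what those userspace matrix operations look like with GLM (the uniform name, camera values, and GL loader setup are placeholder assumptions): build the matrices the fixed-function stack used to own, then hand the product to your own shader.

<ecode>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <GL/gl.h>   // glUniformMatrix4fv; extension-loader setup omitted

// Replace glMatrixMode/glFrustum/gluLookAt with plain math in your code.
void setMvp(GLint mvpLocation, float aspect) {
    glm::mat4 proj  = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
    glm::mat4 view  = glm::lookAt(glm::vec3(0, 2, 5),   // eye
                                  glm::vec3(0, 0, 0),   // target
                                  glm::vec3(0, 1, 0));  // up
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f),
                                  glm::vec3(0, 1, 0));
    glm::mat4 mvp = proj * view * model;
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
}
</ecode>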

Re:Missing the point? (1)

Jonner (189691) | about 10 months ago | (#45021403)

I thought the point of GPU's was to not only offload the rendering of 3D graphics but also the algorithms. Game developers don't want to have to program primary rendering algorithms with every game they create. Do they? Am I missing something?

Yes, you are missing something. The point of GPUs is to efficiently calculate pixel values to show on the screen. Specific algorithms can be implemented in hardware or software, and GPU hardware has been moving toward exposing more generic functionality for years, which WebCL can make available to JavaScript code. It's the game engine, or the libraries used by the game engine, that worry about the low-level details of how to talk to the GPU, whether that happens via OpenGL, WebGL, Direct3D, WebCL or something else.

Security (2)

damaki (997243) | about 10 months ago | (#45017929)

Great. I'd sure like my GPU, with its mad low-level optimizations and surely ugly code, to be exposed to unsigned code from random sources. Java applets are far too secure; let's get to the lowest level!

Parse error? (0)

Anonymous Coward | about 10 months ago | (#45018101)

Anyone care to post an alternate summary? I'm not dumb [1], but fuck if I can understand one coherent idea in that summary.

Sounds like a good infoworld article.

1. http://en.wikipedia.org/wiki/Illusory_superiority

Re:Parse error? (1)

gl4ss (559668) | about 10 months ago | (#45018751)

remember when dos game engines used just whatever math (voxels, fake shit, raytracing against a 2d map..) the coder could get to run fast enough to create the graphics? to not be constrained by triangles?

the "article" is about "hey, wouldn't it be cool to do that again, just with gpu's?". the summary makes it sound as if the dude had done something cool with that idea.

the video in the article shows a shaded triangle. fail, waste of time. and it sort of ignores that programmable shaders exist for exactly this purpose..

Who posted this (0)

Anonymous Coward | about 10 months ago | (#45018149)

There is no demonstration of anything.

Also, I tried to watch the video, but I've been on the internet enough to pick out a horsefucker in a crowd.

And that is a horsefucker.

summary has weird language (2)

Musc (10581) | about 10 months ago | (#45018157)

The terminology in the summary is confusing and wrong.

First of all, software rendering vs. hardware rendering isn't the same as scanline rendering vs. "rendering from the underlying math", which I assume is a bad attempt at a layman's description of raytracing. You can have a scanline triangle renderer in software, and you can have a raytracer in hardware. It is true that most GPUs are built for scanline rendering and not raytracing, but plenty of raytracers have been written that run on GPUs.

Second, if your renderer runs on the GPU using OpenCL, then it is not a software renderer; it is a hardware renderer, perhaps with a little more of the work done in programmable shaders and a little less done on the fixed-function hardware. What they meant to say was that you can program your own hardware renderer using CL kernels, rather than rely on the ever-shrinking fixed-function hardware that the triangle pipeline normally uses.

The only fixed-function capabilities in a modern GPU are texture filtering and rasterization. The vertex processing, lighting, and shading are all programmable.

There is something interesting and new here, which is that maybe sometime in the future the programmable hardware will be good enough that the fixed-function stuff can be done away with completely.

Re:summary has weird language (1)

Bengie (1121981) | about 10 months ago | (#45018555)

Upcoming GPUs support protected memory, C++, and preemptive multitasking. A GPU is just a type of CPU. You will actually be able to pass a pointer from the CPU to the GPU and not have to translate it; the GPU will work with it natively.

Re:summary has weird language (1)

Phopojijo (1603961) | about 10 months ago | (#45019769)

Actually, the demo doesn't raytrace. In this demo "scene" (one triangle) it uses barycentric coordinates to determine whether a pixel is inside or outside the triangle. If it is inside, it shades it with one of two functions; these derive red, green, and blue from how far the pixel is from a vertex compared to the distance between that vertex and the center of the opposite edge (the animated function also has a time component). If it is outside the triangle, the pixel is skipped.

The specific algorithm is somewhat irrelevant (although it is actually pretty efficient for very large triangles). The point is that GPUs are not limited to scanline triangles passed by a graphics API anymore.
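A rough reconstruction of the scheme as described (a sketch, not the demo's actual source): barycentric weights from edge functions, a sign test for inside/outside, and a shade derived from the weights, each of which is 1 at its vertex and 0 on the opposite edge.

<ecode>
#include <cstdint>

struct V2 { float x, y; };

static float edge(V2 a, V2 b, V2 p) {          // signed doubled triangle area
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

uint32_t shadePixel(V2 a, V2 b, V2 c, V2 p) {
    float area = edge(a, b, c);
    float w0 = edge(b, c, p) / area;            // weight of vertex a
    float w1 = edge(c, a, p) / area;            // weight of vertex b
    float w2 = edge(a, b, p) / area;            // weight of vertex c
    if (w0 < 0 || w1 < 0 || w2 < 0) return 0;   // outside: skip the pixel
    // Map the three weights straight to RGB for a shaded triangle.
    return 0xFF000000u | uint32_t(w0 * 255) << 16 |
           uint32_t(w1 * 255) << 8 | uint32_t(w2 * 255);
}
</ecode>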

Great if you have real broadband (1)

sl4shd0rk (755837) | about 10 months ago | (#45018163)

Web-based content is fine for every urban dweller out there with a 3Mb+ pipe, but there are a lot of people who barely have 1Mb DSL. Ever try to do a 60MB Steam update on one of those sucky lines?

Re:Great if you have real broadband (1)

tepples (727027) | about 10 months ago | (#45019541)

60 MB at 1 Mbps takes 8 minutes. Let's say 10 minutes for overhead. Schedule it to run while you sleep.

Re:Great if you have real broadband (1)

Phopojijo (1603961) | about 10 months ago | (#45020459)

"Perpetual Motion Engine" can operate on the FILE protocol. You can point the web browser to a web page located on your hard drive (or a USB thumb drive) and it will work.

It can be run from a website over HTTP, but does not need to be. Heck, you could even burn it to a DVD and double-click the index.html file in it.

I/O Bandwidth (3, Interesting)

Mr. Sketch (111112) | about 10 months ago | (#45018233)

Many 3D engines are carefully tuned to the limited bandwidth to the GPU, which provides just enough headroom per frame to transfer the necessary geometry/textures/etc. for that frame. The results, of course, stay on the GPU and are just output to the frame buffer. Now, in addition to that existing overhead, the engine writer would have to transfer the results/frame buffer back to the CPU to process and generate an image, which is then passed back to the GPU to be displayed? Or am I missing something?

While I'm sure it would allow customized algorithms, they would have to be rather unique not to be handled by the current state of geometry/vertex/fragment shaders. Are they thinking of some sort of non-triangular geometry?

Maybe there is a way to send the result of the math directly to the frame buffer while it's on the GPU?

Re:I/O Bandwidth (2)

Phopojijo (1603961) | about 10 months ago | (#45018349)

Only if you want it to! You can share resources between OpenCL and OpenGL without passing through the CPU.

Now, of course, you may wish to (for example: copy to APU memory, run physics, copy to GPU memory, render)... but the programmer needs to explicitly queue a memory-move command to do so. If the programmer doesn't move the content... it stays wherever it is.

Re:I/O Bandwidth (1)

markjhood2003 (779923) | about 10 months ago | (#45019441)

While I'm sure it would allow customized algorithms, they would have to be rather unique not to be handled by the current state of geometry/vertex/fragment shaders. Are they thinking of some sort of non-triangular geometry?

TFA mentions voxel rendering for Minecraft-type applications. Although volume rendering can be achieved with traditional hardware-accelerated surface primitives, there are many algorithms that are more naturally described and implemented using data structures that don't translate so easily to hardware-accelerated primitives.

Constructive solid geometry, vector-based graphics, and ray tracing are also not such a nice fit for the OpenGL and DirectX APIs. You don't always want to tessellate geometry that has an analytic expression, such as conics, rational quadratics, b-splines, and NURBS, so a more software-oriented approach can provide better renderings of those types of mathematical objects.

The challenge here is that graphics primitives that APIs such as OpenGL provide are of course those that the hardware can most readily accelerate. If you don't use primitives and operations that can be massively parallel then you may not get much use out of the hardware.

Re:I/O Bandwidth (0)

Anonymous Coward | about 10 months ago | (#45020369)

Yes, you're missing something. OpenGL and OpenCL can share all the data that resides in GPU memory, so the GPU can do all the real work by itself. In a graphically intensive OpenCL app, the CPU is just there to supervise.

The usual process is something like this:

  • The CPU instructs the GPU to set aside some memory for buffers, and may provide data to initialise those buffers.
  • The CPU provides 'kernel' objects (programs!) that will modify that data, and a 'queue' describing the order in which they're to run.
  • Then the GPU executes everything in the queue at high speed. All data is held in GPU memory and read/written in situ.
  • Sometimes the CPU has to tune some parameters, but it rarely needs to read or write any significant amount of data, so I/O is negligible.
  • If the CPU needs to know anything about what the GPU has done, it can query specific windows within those buffers, so only small amounts of data ever need to be transferred back to the CPU.
  • One or more of those GPU-resident buffers is of an 'image' class, and that's where you've been storing all the results of your fancy rendering algorithms. It can be further processed on the GPU, displayed directly, or used as a texture map in any OpenGL display routine.

This whole setup works astonishingly well, and can be used for rendering all sorts of fun stuff. Smoke, clouds, oceans, blobs, smoothly curvy things, nicer lighting & shadows, nicer bokeh & motion blur, finer textures, fractals, reflections, film grain, lens flares, you name it.
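In minimal OpenCL host code, that flow looks roughly like this (platform/device setup and buffer initialisation elided; the kernel is a stand-in, and all names are illustrative):

<ecode>
#include <CL/cl.h>
#include <vector>

static const char* kSrc = R"(
__kernel void step(__global float* data) {
    size_t i = get_global_id(0);
    data[i] = data[i] * 0.5f + 1.0f;   // stand-in "simulation" update
})";

std::vector<float> runSteps(cl_context ctx, cl_command_queue q,
                            cl_device_id dev, size_t n, int steps) {
    cl_int err;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                n * sizeof(float), nullptr, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &dev, "", nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "step", &err);
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    for (int s = 0; s < steps; ++s)             // data stays GPU-resident
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr,
                               0, nullptr, nullptr);
    std::vector<float> window(64);              // read back a small window only
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, window.size() * sizeof(float),
                        window.data(), 0, nullptr, nullptr);
    clReleaseKernel(k); clReleaseProgram(prog); clReleaseMemObject(buf);
    return window;
}
</ecode>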

I don't see the need to complicate it by forcing it into a browser, though. If it ends up simplifying something someday, that's great, but unless you're actually browsing for games & rendering methods & scientific visualisations, it seems kinda redundant.

Huh? (0)

Anonymous Coward | about 10 months ago | (#45018373)

This is stupid. It's still an API. It doesn't matter if it's OpenGL, DirectX, or proprietary software in the app. It's still an API that talks to the hardware.

and they can't use a 2nd/3rd/4th core why? (1)

logicassasin (318009) | about 10 months ago | (#45018537)

I don't get it. Using a video card to render without the video card? Since so many machines on the market have 2+ cores, why not write a software renderer that will sit on one of the extra cores?

Re:and they can't use a 2nd/3rd/4th core why? (1)

godrik (1287354) | about 10 months ago | (#45018597)

Cores or processors are typically not that fast at doing graphical rendering. GPUs are typically much more efficient at that task. (Hey, that's what they were built for.)

Re:and they can't use a 2nd/3rd/4th core why? (1)

Phopojijo (1603961) | about 10 months ago | (#45019323)

Because a GeForce Titan has about 2700 cores and about 4.5 teraflops of performance.

But yes, even CPUs have OpenCL drivers (albeit Intel's is buggy as heck for the time being), so you could even select your CPU as your "graphics processor" and it would run... just slowly.

Software Rendering Engine GPU-Accelerated (0)

Anonymous Coward | about 10 months ago | (#45018547)

Software rendering xor GPU acceleration.

GPU accelerated software rendering is just rendering.

Reinventing the GPU API (1)

Anonymous Coward | about 10 months ago | (#45018711)

Reliance on OpenGL and DirectX could diminish when GPUs are utilized as general 'large batches of math' solvers which software rendering engines offload to.

GPUs have never not been general 'large batches of math' solvers. It's just that games have never required a very large amount of math to be done apart from rendering 3D graphics. Hell, they still don't. Most of the time this kind of stuff is actively avoided, or faked when it cannot be avoided. Why? Because there are better things you can do with the development budget than try to bring a science project to life.

"Good" code doesn't pay as well as "fast" code. (and since it's a little ambiguous, the "fast" means development time, not run time..)

Fast talker (0)

Anonymous Coward | about 10 months ago | (#45019261)

Holy shit, that guy talks so fast it's like holding fast-forward or something.

Oh great. (0)

Anonymous Coward | about 10 months ago | (#45019713)

Websites will turn even slower, like they did after the JavaScript engine speedups, and now everyone will have to upgrade their browser as well as their hardware to keep up with the webmonkeys' latest.

Computers keep becoming ever better room heaters, but they're not accomplishing more, because the tasks they're set cost so much more effort to achieve essentially the same as yesteryear. But at least it's that much more shiny! W000000t!

Article has some seriously wrong information (0)

Anonymous Coward | about 10 months ago | (#45021611)

I would encourage everyone to also look at the comment on the article from someone who is actually a graphics programmer. This guy has some really whacked ideas in his head about how GPUs and drivers actually operate. GPUs and drivers haven't rasterized triangles into scanlines in nearly a decade at this point -- it's far more efficient now to simply throw the parallelism they already have at it, and do a massively parallel point-in-polygon test over the triangle's screen-space bounding box.

This guy is obviously not a rendering engineer, and he is obviously not a GPU programmer. I guarantee you this guy's "brilliant ideas" have been thought of many times over by people at AMD and NVidia whose entire line of work is renderers and GPUs, and I would take anything he has to say on the topic with several truckloads of salt.

Comedy captcha: "Critics"

A really poor and pointless article (0)

Anonymous Coward | about 10 months ago | (#45022687)

The guy blabbers hedonistically about GPUs and engines like he invented them in their (not so distant) infancy. It's a Reader's Digest of the obvious and the trivial. Another wannabe beardy infant with really nothing to say.

Not worth publishing on /.; maybe he's a friend of someone on the staff. If so, fire them both, as literally as possible.

R

Browser side drm? (0)

Anonymous Coward | about 10 months ago | (#45026107)

Web browsers are quickly becoming bloated and vulnerable. Not only that, how long will it be before we see browser-side DRM creep onto the scene? And if you choose to use a different browser, or even create your own, what's to stop the content provider from only allowing access from specific browsers?
