
Intel Supports OpenGL ES 3.0 On Linux Before Windows

Unknown Lamer posted about a year ago | from the hot-to-the-touch dept.


An anonymous reader writes "The Khronos Group has published the first products that are officially conformant to OpenGL ES 3.0. On that list are Intel's Ivy Bridge processors with integrated graphics, which support OpenGL ES 3.0 on the open-source Linux Mesa stack. This is the best timing yet for Intel's open-source team in supporting a new OpenGL standard: the spec is just six months old, whereas it took years for them to support OpenGL ES 2.0. There's also no conformant OpenGL ES 3.0 Intel driver for Windows yet. Intel also had a faster turnaround than NVIDIA and AMD; the only other hardware on the list comes from Qualcomm and PowerVR. OpenGL ES 3.0 works with Intel Ivy Bridge when using the Linux 3.6 kernel and the soon-to-be-out Mesa 9.1." Phoronix ran a rundown back in August of what OpenGL ES 3.0 brings.


first? (-1)

Anonymous Coward | about a year ago | (#42886089)

should linux fanboys run around shouting FIRST?

ES is the key word. (5, Insightful)

Anonymous Coward | about a year ago | (#42886115)

OpenGL ES is a cut-down version of OpenGL aimed at mobile and embedded. Windows has never supported any version of it, and probably won't anytime soon.

So to see Linux get it "first" is completely unsurprising.

It's like saying Linux supported the EXT3 filesystem before Windows. So?

Re:ES is the key word. (4, Funny)

Zero__Kelvin (151819) | about a year ago | (#42886737)

I was unaware that there is no Windows for portable or embedded devices. Perhaps Microsoft should start working on a mobile phone OS!

Re:ES is the key word. (1)

Desler (1608317) | about a year ago | (#42886789)

It uses Direct3D. Also, the story was referring to desktop Windows.

Re:ES is the key word. (1)

ikaruga (2725453) | about a year ago | (#42890981)

Also, Windows Mobile and RT only run on ARM (AFAIK), while there are a few Intel Android phones, and Ubuntu phones are expected on both architectures as well. Linux actually has a need for support of new OpenGL ES versions.

Re:ES is the key word. (1, Informative)

hairyfeet (841228) | about a year ago | (#42887677)

That would be like asking why the software that runs on an airplane doesn't run on your cellphone. Windows Embedded, just like Linux embedded, runs a stripped down kernel and a MUCH thinner OS, because when you are dealing with an embedded system you just aren't gonna be doing as much as with a full OS; they are designed for specific functions.

Re:ES is the key word. (1)

drinkypoo (153816) | about a year ago | (#42892331)

Windows Embedded, just like Linux Embedded, permits you to select how much of the operating system you want to include.

Before Windows Embedded, there was Windows CE, the less said about which the better.

Your lack of Linux understanding is astounding (1)

Zero__Kelvin (151819) | about a year ago | (#42895211)

",just like Linux embedded, runs a stripped down kernel "

On modern embedded systems the Linux kernel is the same. In the old days, or in cases where Linux is used on processors without VMM support, the kernel might be substantially different. Today most embedded systems use an x86 or ARM architecture. They don't use a "stripped down" kernel. They use the same kernel, configured at build time to use the features they want. The same is true on the desktop. The only difference that you may be mistakenly referring to as "stripped down" is that for an embedded system you only build and include the drivers for the hardware you will run on, since you know that in advance. For a desktop distribution's kernel they build and include many more modules, since they will be loaded and used on some hardware and not on other hardware. The details go on from there, but suffice it to say that you don't grasp any of them, and to be certain an embedded system using an NVIDIA, AMD, or Intel chipset will use the same driver as a desktop system.

Re:ES is the key word. (0)

Anonymous Coward | about a year ago | (#42890557)

Direct3D has feature levels that cover embedded: one API, with an expanding function base that can be queried in real time. Embedded GPUs sit around DX level 9.4, with some of the newest ones achieving DX 11.

Re:ES is the key word. (0)

Anonymous Coward | about a year ago | (#42888673)

Depends on what you're defining as "cut down"...

ES 1.1 is essentially the fixed-function pipeline from OpenGL 1.4 and before, sans the immediate-mode calls (something that's been deprecated FOREVER...).
ES 2.X is essentially the shader-driven pipeline from OpenGL 2.x and after, sans the fixed-function path and the immediate-mode calls.

If you're doing OpenGL the way you're really supposed to be using it, it's amazingly easy to get your code to work properly on an ES implementation, and there are fixed-function wrappers for ES 2.X so you can mix and match pretty easily.
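
As a rough illustration of that porting point, here is a minimal sketch in plain C (ES 2.0 headers; context creation and shader setup omitted, and the attribute location is an assumption) contrasting the immediate-mode path that ES drops with the vertex-attribute path that works on both desktop GL and ES:

    #include <GLES2/gl2.h>

    /* Assumes a current ES 2.0 (or modern desktop GL) context and a linked
     * shader program whose position attribute is bound to location 0.
     * The old glBegin()/glVertex2f()/glEnd() immediate-mode calls simply do
     * not exist in ES; this attribute-array form is the one that ports
     * cleanly between desktop GL and ES. */
    void draw_triangle_portable(void)
    {
        static const GLfloat verts[] = {
            -0.5f, -0.5f,
             0.5f, -0.5f,
             0.0f,  0.5f,
        };
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, verts);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }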

Re:ES is the key word. (1)

Anonymous Coward | about a year ago | (#42888871)

Except that ES 2.0 (2.X? 2.0 was directly followed by 3.0) supports very few features, even compared to various desktop GL 2.x implementations. Want to render to a depth buffer (for shadow mapping)? That's not supported by the core spec; you need an extension. Want multiple render targets in your FBO? Same thing. Even the GLSL is limited: officially, the fragment shader only supports loops with a compile-time fixed iteration count (some implementations relax this slightly, though). Not to mention that the GLSL in ES 2.0 is pretty much an ancient dialect, not really comparable to modern GLSL.

(All of this is not really surprising, considering that ES 2.0 itself is pretty old at this point.)

And if you're doing OpenGL 4.x style, you'll notice that 4.x has deprecated rendering from VBOs without going through a VAO.

So, yeah, ES 3.0 is starting to resemble a modern desktop OpenGL. Just too bad it's not supported by any device you'd want to run it on. Hint: the Intel HD 4000 that's mentioned here *already* supports desktop OpenGL 4.0 (Windows), 3.2 (Mac OS) and 3.1 (Linux). So, technically you can already get a "better" GL from the same hardware.
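
For concreteness on the ES 2.0 shadow-mapping point above, here is a rough sketch of the extension dance required (the relevant extension is GL_OES_depth_texture; the helper name and filtering parameters are illustrative):

    #include <string.h>
    #include <GLES2/gl2.h>

    /* Creates a depth texture usable as an FBO attachment for shadow mapping,
     * or returns 0 if the ES 2.0 implementation lacks GL_OES_depth_texture.
     * Assumes a current ES 2.0 context. */
    GLuint create_shadow_depth_texture(GLsizei w, GLsizei h)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (ext == NULL || strstr(ext, "GL_OES_depth_texture") == NULL)
            return 0;  /* core ES 2.0 alone cannot do this */

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        /* The extension allows DEPTH_COMPONENT data as UNSIGNED_SHORT or UNSIGNED_INT. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
        return tex;
    }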

Re:ES is the key word. (0)

Anonymous Coward | about a year ago | (#42891889)

Deja vu?

Why? (0)

Anonymous Coward | about a year ago | (#42886119)

What's the point of having OpenGL ES when you can run full OpenGL on the desktop?

Re:Why? (3, Informative)

Anonymous Coward | about a year ago | (#42886215)

The only things that OpenGL provides that ES doesn't are the big, ugly, slow things which are useful for certain kinds of graphic design and industrial apps, but are completely useless for high-performance games. You're really not missing much, and in general if you're using things which are not provided by OpenGL ES to write apps where the real-time user experience counts, you are doing it wrong.

Re:Why? (3, Informative)

edwdig (47888) | about a year ago | (#42886821)

I went from OpenGL 1.x over to OpenGL ES, so I don't know most of what modern OpenGL can do. But one glaring weakness is that OpenGL ES doesn't support drawing quads, only triangles. Yeah, the GPU processes a quad as two triangles internally, but if the API supports quads, there's less vertex data to generate and pass to the GPU. You can somewhat make up for it with indexed drawing (glDrawElements), which indexes into the vertex list, but in a lot of cases (especially for 2D scenes) it's still less efficient than if you had quads.
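
For reference, a rough sketch of that indexed approach in plain C (ES 2.0 headers; the function name and the 4-vertices-per-quad packing are assumptions for illustration):

    #include <GLES2/gl2.h>

    /* Draws `count` quads packed 4 vertices apiece (strip order: bottom-left,
     * bottom-right, top-left, top-right) into the already-bound vertex
     * attributes.  Each quad becomes two indexed triangles, so only 4 vertices
     * per quad cross the bus instead of 6.  `indices` must have room for
     * 6 * count entries.  Assumes a current ES 2.0 context. */
    void draw_quads_indexed(GLushort *indices, GLsizei count)
    {
        GLsizei i;
        for (i = 0; i < count; ++i) {
            GLushort base = (GLushort)(4 * i);
            indices[6 * i + 0] = base;
            indices[6 * i + 1] = base + 1;
            indices[6 * i + 2] = base + 2;
            indices[6 * i + 3] = base + 2;
            indices[6 * i + 4] = base + 1;
            indices[6 * i + 5] = base + 3;
        }
        glDrawElements(GL_TRIANGLES, 6 * count, GL_UNSIGNED_SHORT, indices);
    }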

Re:Why? (2)

foijord (2840567) | about a year ago | (#42887217)

Just curious, how would quads be more efficient than say, triangle strips for drawing quads?

Re:Why? (2)

k3vlar (979024) | about a year ago | (#42887437)

He explained it in the comment: "there's less vertex data to generate and pass to the GPU"

Re:Why? (1)

foijord (2840567) | about a year ago | (#42887625)

He explained it in the comment: "there's less vertex data to generate and pass to the GPU"

This is false. If you're drawing a quad, you pass 4 vertices. If you draw 2 triangles forming a quad, you also pass 4 vertices (using a triangle strip and index buffer). The index buffer is not updated every frame, just once.

Re:Why? (1)

Khyber (864651) | about a year ago | (#42887817)

No, you draw 6 vertices. The triangles are rendered independently.

Re:Why? (3, Informative)

tibman (623933) | about a year ago | (#42888401)

In my experience that will show an artifact, like an odd line between triangles where there shouldn't be. It's been a while since I've worked in straight GL, but you should reuse the vert to prevent that. Even if the verts are in the exact same position, it won't matter.

Re:Why? (1)

exomondo (1725132) | about a year ago | (#42888513)

No, you draw 6 vertices.

You don't draw vertices at all, and if you're using triangle strips you only need 4 vertices to create a quad out of 2 triangles.

Re:Why? (0)

Anonymous Coward | about a year ago | (#42888599)

In such a short comment you essentially show that you don't know how either quads or triangle strips are rendered.

Re:Why? (1, Informative)

Khyber (864651) | about a year ago | (#42893191)

Depends on the rendering method.

In one comment you manage to demonstrate that you've never worked on an SGI machine before.

Re:Why? (1)

Anonymous Coward | about a year ago | (#42896095)

What SGI machine supports modern OpenGL with vertex shaders but no vertex caching? If you don't need vertex shaders, then you don't need to do things the newer way with later versions of OpenGL anyway. The drivers will just translate everything you do.

And SGI machines have split quads up into triangles for a long time now. It is faster than trying to check for coplanarity, and better than dealing with artifacts that some pure quad rendering methods produce if the points are not coplanar (or even some depth-related problems that can come up). So even they bailed on doing things that way some time ago, before shaders and OpenGL 2.0 were even reached.

There are reasons quads went from being part of how the renderer worked, to a convenience for the application writer, to ultimately a method that should be avoided. I do remember IrisGL, and while it was innovative for the time in some ways, things have moved on a lot since then.

Re:Why? (1)

edwdig (47888) | about a year ago | (#42888531)

The index buffer is not updated every frame, just once.

That's not always true. Sometimes it's more efficient to just stick the vertex data directly into the vertex buffer as your frame is generated, then do your draw order sorting at the end by rewriting the index buffer.

Re:Why? (2)

edwdig (47888) | about a year ago | (#42887593)

Triangle strips require everything in one draw call to be connected. If you want to draw N quads, you have to make N draw calls, passing 4 vertices each time. There's a significant amount of overhead involved in a draw call, so this is slow. With quad support, to draw N quads you can just make a big array of 4N vertices and process it all in one draw call.

Re:Why? (1)

foijord (2840567) | about a year ago | (#42887847)

Here's a trick you can do to draw all your 2-triangle quads in one draw call: pass 4 identical center vertices for each quad, plus a second per-vertex attribute giving that vertex's corner of the quad: (-1, -1), (1, -1), etc. Then displace the vertex according to the corner attribute in the vertex shader. There's more data to pass, but I assume you're not passing the data every frame. If that's the case, don't. If you're moving the quads, just pass the position.
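
A rough sketch of the vertex-shader side of that trick, written as GLSL ES 1.00 embedded in a C string (the attribute and uniform names are made up for illustration):

    /* Every vertex of a quad carries the same center position plus a
     * per-vertex corner offset in {-1, 1} space; the shader expands the quad
     * on the GPU, so many quads can share a single draw call. */
    static const char *quad_vertex_shader =
        "attribute vec2 a_center;    \n"  /* same value for all 4 vertices of a quad */
        "attribute vec2 a_corner;    \n"  /* (-1,-1), (1,-1), (-1,1) or (1,1)        */
        "uniform   vec2 u_half_size; \n"  /* half the quad's width/height            */
        "uniform   mat4 u_mvp;       \n"
        "void main() {               \n"
        "    vec2 pos = a_center + a_corner * u_half_size;\n"
        "    gl_Position = u_mvp * vec4(pos, 0.0, 1.0);   \n"
        "}                           \n";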

Re:Why? (0)

Anonymous Coward | about a year ago | (#42888781)

You can use triangle strips to make one call too, via degenerate triangles, which get thrown out by the video card anyway and so don't represent more work. So triangle strips can be used for disjoint quads, with the same 4N vertex data. The only added thing needed would be the index array. Depending on what you are doing, and if you are using a vertex shader well, you could come out using a lot less data for the same quads. In the not-too-distant future for OpenGL ES, or on full OpenGL now, you can do better than all of that with just a geometry shader anyway.
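
For reference, a sketch of the degenerate-triangle join when batching disjoint quads into a single strip, in plain C (ES 2.0 headers; the function name and the 4-vertex strip-ordered packing are assumptions):

    #include <GLES2/gl2.h>

    /* Stitches `count` disjoint quads (4 strip-ordered vertices each) into one
     * triangle strip by repeating the last index of one quad and the first
     * index of the next.  The repeated indices form zero-area (degenerate)
     * triangles that the GPU discards, so the whole batch still goes out in a
     * single draw call.  `indices` needs room for 6 * count - 2 entries.
     * Assumes a current ES 2.0 context with vertex attributes already bound. */
    void draw_quads_as_one_strip(GLushort *indices, GLsizei count)
    {
        GLsizei i, n = 0;
        for (i = 0; i < count; ++i) {
            GLushort base = (GLushort)(4 * i);
            if (i > 0) {
                indices[n++] = base - 1;  /* repeat previous quad's last vertex */
                indices[n++] = base;      /* repeat this quad's first vertex    */
            }
            indices[n++] = base;
            indices[n++] = base + 1;
            indices[n++] = base + 2;
            indices[n++] = base + 3;
        }
        glDrawElements(GL_TRIANGLE_STRIP, n, GL_UNSIGNED_SHORT, indices);
    }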

Re:Why? (0)

Anonymous Coward | about a year ago | (#42888785)

Uh... if it's internally processing the things, the GPU pipeline actually takes more of a hit than your having to decompose the quads into two tris and push the two tris instead; there's a REASON that they quit supporting that construct. You're bitching because you want to be lazy and not think through your polygon specifications to the pipeline. Not to mention that you can still specify quads if you're specifying a tessellation shader...

Re:Why? (0)

Anonymous Coward | about a year ago | (#42886827)

You mean things like pixel buffers, without which it's impossible to have fast texture uploads?
Of course you can do it the "GL ES way": not with GL ES features, but with some vendor-specific, crappy EGL hacks.

Re:Why? (4, Insightful)

Guspaz (556486) | about a year ago | (#42886941)

On the other hand, if you're not the one writing the apps, it can be infuriating to use a system that supports only OpenGL ES. Last time I tried to use Ubuntu on a system with only OpenGL ES support, I discovered that OpenGL ES basically meant "no graphics acceleration", because nothing in the repository supported it; everything wanted OpenGL.

That's probably changed since then (it was a few years ago), but it was pretty frustrating at the time, especially since the GPU itself was rated for full OpenGL, it was only that PowerVR charged extra for that driver and TI didn't want to license it.

Re:Why? (1)

DrXym (126579) | about a year ago | (#42887129)

Not true. Full-blown OpenGL supports geometry and tessellation shaders, for example, and loosens up various restrictions or limitations of ES, e.g. drawing quads instead of screwing around triangulating everything. And while the fixed-function pipeline is deprecated, it's still useful for just knocking something out, and far simpler than screwing around trying to compile, link and use a shader which does a matrix transformation and little else.

Re:Why? (2)

exomondo (1725132) | about a year ago | (#42888555)

The only things that OpenGL provides that ES doesn't are the big, ugly, slow things which are useful for certain kinds of graphic design and industrial apps, but are completely useless for high-performance games.

Like geometry shaders?

Re:Why? (0)

Anonymous Coward | about a year ago | (#42888911)

Or render-to-depth-texture for shadow mapping in ES 2.0. Fortunately that was added in ES 3.0.

Re:Why? (0)

Anonymous Coward | about a year ago | (#42891165)

OpenGL 4 manual pages (297 functions):
http://www.opengl.org/sdk/docs/man4/ [opengl.org]
OpenGL ES 2 manual pages (109 functions):
http://www.khronos.org/opengles/sdk/docs/man/ [khronos.org]

Apart from the number of functions:
* Many OpenGL ES functions support fewer options than the equivalent OpenGL functions (e.g. the ES version of glTexImage2D supports 5 formats, the real version supports 81 formats)
* There are another 124 OpenGL 1.x functions which are deprecated but still available in the compatibility profile; those aren't listed in the OpenGL 4 manual pages.
* The ES version of the shading language is similarly "cut down" compared to that in desktop OpenGL.
* ES has much lower minimum implementation limits (OpenGL 3 and later require support for 1024x1024 textures; OpenGL ES only requires 64x64).

Re:Why? (1)

mikael (484) | about a year ago | (#42888685)

You can do things like augmented reality on a smartphone. Use the built-in camera to take a live video stream of a particular location, the MEMS gyroscope and tilt sensors to determine the orientation of the system and GPS to determine the latitude and longitude. Combine this information together and render 3D information on top of this view. Maybe it's a terrain map, geological layers, the direction to the nearest public bar, train station, police station or A&E.

http://www.youtube.com/watch?v=gWrDaYP5w58 [youtube.com]

Re:Why? (0)

Anonymous Coward | about a year ago | (#42891557)

What does any of that have to do with the DESKTOP?

You can do it on the DESKTOP too. (0)

Anonymous Coward | about a year ago | (#42893885)

Didn't you know that?

OpenGL ES can be used to do the same things on the desktop.

WebGL uses OpenGL ES. It's on the desktop too.

Look, just because Microsoft aren't "world leaders" doesn't mean you have to diss OpenGL ES.

Not too surprising... (3, Interesting)

Junta (36770) | about a year ago | (#42886197)

Beating out nVidia and AMD is marginally surprising, but I'd figure OpenGL ES to be the lowest-priority 3D interface to implement for Windows, behind Direct3D and OpenGL. MS' attempt at targeting the lower-end market still emphasizes Direct3D, with OpenGL on Windows mostly only mattering for the occasional game engine and engineering application. OpenGL ES on Windows, I'm thinking, appeals to a very, very small slice of potentially interested parties.

Re:Not too surprising... (1)

UnknowingFool (672806) | about a year ago | (#42886537)

Well, it depends on what the average consumer needs from their PC. If it is not gaming (for which a consumer would buy a discrete card anyway), most consumers need only some graphics for web surfing and the like. With the built-in graphics of Ivy Bridge, there is enough GPU power for the average consumer. Why would this average consumer need Direct3D for YouTube?

Re:Not too surprising... (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#42886629)

Well, it depends on what the average consumer needs from their PC. If it is not gaming (for which a consumer would buy a discrete card anyway), most consumers need only some graphics for web surfing and the like. With the built-in graphics of Ivy Bridge, there is enough GPU power for the average consumer. Why would this average consumer need Direct3D for YouTube?

To say that they 'need' it would be a gross overstatement; but if they are doing their casual youtubing on a relatively recent wintel, they'll be using it anyway [msdn.com]...

Re:Not too surprising... (1)

Hal_Porter (817932) | about a year ago | (#42889953)

Why would this average consumer need Direct3D for YouTube?

Most consumers need DXVA, i.e. DirectX Video Acceleration. On a netbook, DXVA means you can play YouTube videos with low CPU usage, which is particularly important on that class of hardware.

On my 1015PX - which is the second netbook I've bought so Intel have had two chances to get it right - it still doesn't work.

http://www.notebookcheck.net/Intel-Graphics-Media-Accelerator-3150.23264.0.html [notebookcheck.net]

According to Intel, the GMA 3150 can help the CPU decode MPEG2 videos. The DXVAChecker shows hooks for MPEG2 (VLD, MoComp, A, and C) up to 1920x1080. Therefore, the performance of the N450 and N470 with GMA 3150 is currently not sufficient to watch H.264 encoded HD videos with a higher resolution than 720p. HD flash videos (e.g. from youtube) are also not running fluently on the Atom CPUs.

It supports MPEG2. I don't think I've ever played an MPEG2 video on this machine, and it could probably decode them fine in software anyway. It doesn't support H.264, which is absolutely ubiquitous and used by YouTube. An Atom N570 can decode H.264 in software, but only with some effort (high CPU usage, fans at high speed, high power usage) at high resolutions.

On my i5-based notebook, which is powerful enough to decode HD H.264 in software without breaking a sweat, H.264 is decoded by the GPU.

I think the problem is that Intel already has most of the netbook chipset market. So getting HD YouTube videos to work would just cannibalize the market for higher-end i5/i7 machines. Still, it seems odd that they revved the chipset and didn't fix the most obvious limitation.

Snore (0)

Anonymous Coward | about a year ago | (#42886471)

And zero people beyond Linux fanbois actually care.

Who cares? (0)

Desler (1608317) | about a year ago | (#42886499)

And zero fucks were actually given.

You need a hug. (-1)

Anonymous Coward | about a year ago | (#42886535)

From a woman.

Re:You need a hug. (-1)

Anonymous Coward | about a year ago | (#42886541)

Your mom gave me enough last night.

Assuming you're old enough (0)

Anonymous Coward | about a year ago | (#42886907)

Assuming you're old enough to have sex legally, that is.

Re:Who cares? (0)

Anonymous Coward | about a year ago | (#42886803)

Those of us who actually use OpenGL ES care. This is because with each advancement we can implement more and more cool stuff on the hardware you want it in. Of course, if you don't like playing games, using smartphones, or using modern GUIs, then advances in graphics technology specifications don't really matter. You can go hide in your cave and pretend like anything not on the Xbox doesn't matter now.

Re:Who cares? (1)

Desler (1608317) | about a year ago | (#42886859)

You use OpenGL ES on desktop Windows for what exactly?

Re:Who cares? (2)

DrXym (126579) | about a year ago | (#42887233)

I use it on the desktop for Android development because it's a pain in the arse to develop OpenGL ES at the best of times. Development turnaround is a lot faster than uploading to a device and discovering the shader is broken because of a syntax error.

Re:Who cares? (1)

Desler (1608317) | about a year ago | (#42887321)

You use a wrapper, not ES directly.

Re:Who cares? (1)

DrXym (126579) | about a year ago | (#42887795)

I'm calling ES directly through JOGL bindings and the GLES2 profile. I don't care if the driver is doing it over OpenGL, DirectX or directly. As far as I'm concerned it's ES, and that's the primary thing for me. Makes it vastly easier to develop code, sparing any actual Android work until things are beginning to take shape.

Re:Who cares? (1)

hairyfeet (841228) | about a year ago | (#42889221)

Uhhh... isn't that running in a VM anyway? By that logic one could say Windows supports EXT3 since a VM of Linux runs EXT3.

Re:Who cares? (1)

DrXym (126579) | about a year ago | (#42889675)

I'm not using the Android SDK VM for OpenGL ES 2.0 work. I'm using Java and JOGL on the desktop to develop the rendering code in a test harness. The JOGL and Android bindings are close enough that I can write two backends and an abstraction layer and have 99% of the rendering code common to either. I can turn stuff around probably 5x faster too, without uploading to a device.

The Android SDK VM is slow enough at the best of times and the OpenGL software emulation is abysmally slow. It's real devices or the test harness, not the VM.

Re:Who cares? (1)

foijord (2840567) | about a year ago | (#42887257)

Portability. You could write a game engine that would run pretty much everywhere.

Re:Who cares? (1)

Desler (1608317) | about a year ago | (#42887339)

I was asking for what they specifically used it for. Not what someone could theoretically do. Also, ES is only supported through translation wrappers or emulators on Windows anyway.

Re:Who cares? (1)

foijord (2840567) | about a year ago | (#42887501)

There will be a windows driver soon, I assume. I also assume NVIDIA and AMD will provide drivers for windows. Asking what specifically you would use OpenGL ES 3.0 on windows for is like asking specifically what you would use OpenGL on windows for. It's for portable 3D graphics. OpenGL ES 3.0 looks like it will be the most portable version yet.

And when Windows gets it, Desler will like it. (0)

Anonymous Coward | about a year ago | (#42894343)

Because the only reason why this troll is going "so what?" is because it isn't on Windows.

No other reason at all.

Re: Who cares? (0)

Anonymous Coward | about a year ago | (#42887269)

Who cares? who cares??!!? Now I can masturbate to an OpenGL ES rendered rotating teapot smug with the knowledge that Winblows users are watching inferior direct3d rendered porn!

Re: Who cares? (1)

Desler (1608317) | about a year ago | (#42887349)

Or one could just use the EGL or WGL wrappers for AMD or NVIDIA GPUs respectively. Wouldn't make this submission's title any less stupid.

Very very poor article (5, Informative)

Anonymous Coward | about a year ago | (#42886573)

On Windows, the GPU is driven by either DirectX or OpenGL. Native OpenGL ES drivers for Windows are ONLY needed for cross-platform development where applications destined for mobile devices are built and tested on Windows first.

Now, this being so, the usual way to offer ES on the desktop is via EMULATION LAYERS that take ES calls and pass them on to the full blown OpenGL driver. So long as full OpenGL is a superset of ES (which is mostly the case), this method works fine.

The situation is different on Linux. Why? Because traditionally, Linux has terrible graphics drivers from AMD, Nvidia AND Intel. Full-blown OpenGL contains tons of utterly useless garbage, and supporting it all is more effort than it's worth on Linux. OpenGL ES is a chance to start over. OpenGL ES 2.0 is already good enough for ports of most AAA games (with a few rendering options turned off). OpenGL ES 3.0 will be an almost perfect alternative to DirectX and full-blown OpenGL.

OpenGL ES 2.0/3.0 is getting first class driver support on Linux class systems because of Android and iOS. OpenGL ES 3.0 will be the future standard GPU API for the vast majority of computers manufactured. However, on Windows, there is no reason to expect DirectX and full blown OpenGL to be displaced. As I've said, OpenGL ES apps can easily be ported to systems with decent OpenGL drivers.

Intel is focusing on ES because, frankly, its drivers and GPU hardware have been terrible. It is their ONLY chance to start over and attempt to gain traction in the marketplace. On the Windows desktop, Intel is about to be wiped out by the new class of AMD fusion (CPU and GPU) parts that will power the new consoles. AMD is light-years ahead of Intel with integrated graphics, GPU driver support on Windows, and high speed memory buses with uniform memory addressing for fused CPU+GPU devices.

Inside Intel, senior management have convinced themselves (falsely) that they can compete with ARM in low power mobile devices. This is despite the fact that 'Ivybridge' (their first FinFET device) was a disaster as an ultra low power architecture, and their coming design, Haswell, needs a die size 5-10 times its ARM equivalent. The Intel tax alone ensures that Intel could never win in this market. Worse again is the fact that Intel needs massive margins per CPU to simply keep the company going.

PS Intel's products are so stinky, Apple is about to drop Intel altogether, and Microsoft's new tablet, Surface Pro 2, is going to use an AMD fusion part.

Re:Very very poor article (0)

Anonymous Coward | about a year ago | (#42886767)

Just waiting for multi-threading support for OpenGL. It currently only supports one context per app, but threads behind the scenes. Optimally it allows multiple contexts per app and does no threading behind the scenes.

Re:Very very poor article (0)

Anonymous Coward | about a year ago | (#42886893)

Why? I don't really get this fascination with multi-threading when you have to wait for a lock on a common device (screen) anyway.

Re:Very very poor article (0)

Anonymous Coward | about a year ago | (#42887187)

Modern GPUs can be fed both asynchronously and concurrently, so threading can help a lot. Civ5 is one of the few games to support threading: non-threaded drivers showed 6 out of 12 cores under load at around 20 fps, while enabling threading in the drivers allowed all 12 cores to be used and the fps almost doubled to 40.

When there are lots of objects to be rendered, the single biggest bottleneck is how many commands you can issue to the GPU, and that is almost entirely dictated by the number of cores you have. If you want lots of objects, you need to thread. Civ5 is a great example of lots of objects; so is EVE Online with blob warfare, but that engine does not support threading.

ABSOLUTELY FALSE. (1)

Anonymous Coward | about a year ago | (#42887005)

OpenGL, unlike DirectX pre-11, doesn't lock your structures to a thread. Any thread from the same process can access OpenGL data, so it supports multithreading.

DX forbade multithreading until 11. OpenGL never did.
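
For what it's worth, a sketch of how that cross-thread sharing is usually set up in practice: shared contexts (EGL shown here; the helper name is illustrative). Two contexts are created against the same display and config, with the second naming the first as its share context, so objects such as textures and buffers created on a worker thread become visible to the rendering thread. Each context may still be current on only one thread at a time.

    #include <EGL/egl.h>

    /* Creates an ES 2.0 context that shares objects (textures, buffers,
     * shaders) with `main_ctx`, e.g. for a background resource-upload thread.
     * The worker thread then binds it with eglMakeCurrent(). */
    EGLContext create_shared_worker_context(EGLDisplay dpy, EGLConfig cfg,
                                            EGLContext main_ctx)
    {
        static const EGLint attribs[] = {
            EGL_CONTEXT_CLIENT_VERSION, 2,
            EGL_NONE
        };
        /* The third argument names the context to share objects with. */
        return eglCreateContext(dpy, cfg, main_ctx, attribs);
    }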

Re:ABSOLUTELY FALSE. (0)

Anonymous Coward | about a year ago | (#42887123)

OpenGL only supports one thread sending commands to the GPU at a time. DX11 supports deferred contexts, which allow threads to write to their own command queues, let the GPU drivers read them concurrently, and just about remove context switching.

Currently, one of the biggest bottlenecks is the ability of the CPU to send commands to the GPU. A modern 3 GHz CPU has an effective cap on how many instructions it can send, but threading allows an almost linear increase in the number of commands that can be sent.

In nearly every game, my GPU is around 10% load and my CPU is around 10% load, and my FPS tanks when lots of objects enter the screen. If my system is effectively "idle", then why is my performance so bad? Ohh, plenty of research says we need to thread this stuff.

By the only interpretation that could be true (0)

Anonymous Coward | about a year ago | (#42887295)

By the only interpretation that could be true, DX doesn't allow sending multiple commands to the GPU at a time either.

OpenGL was used and designed for UNIX and similar Big Workstations (SGI et al). Lots of CPUs. Intended to produce graphics alone.

Why the HELL do you think they would not have allowed more than one CPU thread to update their graphics system???

Your apocryphal claim is bullshit. Hell, it's not even true for single-threaded DX9 or 10. Some games are CPU bound, some games are GPU bound.

Re:ABSOLUTELY FALSE. (1)

Marxdot (2699183) | about a year ago | (#42890281)

let the GPU drivers read them concurrently

There is currently no such thing.

and just about remove context switching

Again, there is no such thing. The command buffer of the GPU is a shared resource that can only serve one logical core at a time.

Ohh, plenty of research says we need to thread this stuff.

What should we be "threading"? Obviously not context access, which is a gimmick. It would make sense for software rendering, but not for shared access to a single resource.

Please show us this research.

Re:Very very poor article (1)

Marxdot (2699183) | about a year ago | (#42890009)

Since there is only one interface to the GPU (and thus only one command pipeline being used to the best of its ability for each context), multi-threaded access to OpenGL contexts only serves to slow rendering down.

So multithreading DX11 will slow things down? (0)

Anonymous Coward | about a year ago | (#42894063)

Your claim doesn't pass the sniff test, old boy.

Not to mention that updating the scene doesn't have to be single threaded: you can have one thread for each discrete object and each thread updates the location of its object in the scene.

Re:Very very poor article (5, Insightful)

Zero__Kelvin (151819) | about a year ago | (#42886807)

" Linux has terrible graphics drivers from AMD, Nvidia AND Intel."

Since you don't use Linux, or don't know how to configure it properly, you should refrain from speaking as though you do. NVIDIA and Intel have great Linux drivers. I cannot speak for AMD, since I haven't used them in years, but you seem to confuse the Open Source NVIDIA driver (nouveau) with the proprietary drivers, which work awesome and allow full use of the GPU through CUDA. Intel's Open Source driver is also quite good.

Re:Very very poor article (1)

fat_mike (71855) | about a year ago | (#42889707)

Try running X-Plane on the same system using Windows 7 and then any Linux distribution, and tell me how awesome the open source drivers are compared to Windows.

Re:Very very poor article (0)

Anonymous Coward | about a year ago | (#42894167)

Apparently you cannot read, since the post you replied to clearly says "you seem to confuse the Open Source NVIDIA driver (nouveau) with the proprietary drivers, which work awesome and allow full use of the GPU through CUDA."

You are obviously confused.

Re:Very very poor article (1)

Zero__Kelvin (151819) | about a year ago | (#42895227)

First of all, X-Plane is designed for OS X, but can be made to run on Windows and Linux. Second of all, there is no problem with doing it if, as I clearly stated, you use the proprietary driver.

Re:Very very poor article (1)

Randle_Revar (229304) | about a year ago | (#42891971)

I wouldn't use Nvidia anyway, but if I did, I sure wouldn't use the binary pile of shit.

Quality goes kinda like this:
fglrx < Nvidia < Nouveau < r300g/r600g < i965

Re:Very very poor article (1)

drinkypoo (153816) | about a year ago | (#42892341)

I wouldn't use Nvidia anyway, but if I did, I sure wouldn't use the binary pile of shit.

Unless you wanted 3d acceleration that worked. Or XVBA that works. Or, basically, anything else that works beyond framebuffer and basic OpenGL.

Re:Very very poor article (1)

Zero__Kelvin (151819) | about a year ago | (#42895255)

Well, as a guy who never has used it, and never will use it, you are certainly in a position to counter my claim as a guy who does in fact use it even as I type this post.

Re:Very very poor article (1)

Guspaz (556486) | about a year ago | (#42887119)

Microsoft going AMD Fusion would be surprising, since the AMD Fusion chips that can compete with Haswell from a power-usage standpoint make the Atom look high-performance by comparison.

Re:Very very poor article (1)

hairyfeet (841228) | about a year ago | (#42889331)

You are forgetting how much people want their mobile devices to do everything their PCs will do, and AMD does have a pretty big advantage when it comes to GPUs. It was a big enough advantage that not one, not two, but three of the next-gen consoles are using their GPUs, and two are using their APUs, so having lots of multimedia power is a big plus, and Intel is still behind in that area. I'm frankly shocked Intel hasn't bought Nvidia by now; it would give them the graphics expertise they could really use.

Re:Very very poor article (1)

Guspaz (556486) | about a year ago | (#42890003)

The PS4's use of an APU is an unsubstantiated rumour at this point, the Wii U definitely doesn't use an AMD CPU at all (it's a PowerPC for Pete's sake), and the XBox 720's rumoured to have a GPU that's more than double the size of the biggest one they're shipping today (as far as I can tell), indicating a more traditional CPU/GPU architecture...

Re:Very very poor article (1)

Guspaz (556486) | about a year ago | (#42890011)

Sorry, I should clarify, the GPU is more than double the size of the GPU in any APU.

Re:Very very poor article (1)

foijord (2840567) | about a year ago | (#42887557)

On Windows, the GPU is driven by either DirectX or OpenGL. Native OpenGL ES drivers for Windows are ONLY needed for cross-platform development where applications destined for mobile devices are built and tested on Windows first.

That's a really good reason to have native OpenGL ES drivers, IMHO. But why would you create an app on Windows and release it on any platform except Windows?

Dream on (1)

Kjella (173770) | about a year ago | (#42887969)

On the Windows desktop, Intel is about to be wiped out by the new class of AMD fusion (CPU and GPU) parts that will power the new consoles. AMD is light-years ahead of Intel with integrated graphics, GPU driver support on Windows, and high speed memory buses with uniform memory addressing for fused CPU+GPU devices.

Yet despite their allegedly superior technology, last year was an unmitigated disaster: a steady decline from Q1 to Q4, losing over 30% in revenue, and where they had a gross margin of 46% in Q4 2011, in Q4 2012 it was down to 15%. They're cutting R&D: in 2011 they spent 1453 million, in 2012 1354 million, and if they spend the whole of 2013 at Q4 2012 levels it will be 1252 million. Yes, hopefully the PS4/Xbox 720 will give AMD a much-needed cash infusion, but their technology is not selling at all, so maybe they should stop bleeding market share before "wiping out" anything. Also, AMD just recently posted a new graphics roadmap [hardcoreware.net] for 2013; I'll give you the summary: no new desktop graphics cards until Q4 at the earliest.

Inside Intel, senior management have convinced themselves (falsely) that they can compete with ARM in low power mobile devices. This is despite the fact that 'Ivybridge' (their first FinFET device) was a disaster as an ultra low power architecture, and their coming design, Haswell, needs a die size 5-10 times its ARM equivalent. The Intel tax alone ensures that Intel could never win in this market. Worse again is the fact that Intel needs massive margins per CPU to simply keep the company going.

While Intel may have a tough time battling ARM on the low power front, AMD is totally lost. All their x86 CPUs burn more power than an equivalent Intel part, and they're dividing their resources between ARM and x86; try pitting an 18W Brazos 2.0 against a 17W i7 ULV. They're totally not in the same price class, but they are in the same wattage range. Intel will milk the market of high end desktop/workstation/server chips that AMD has pretty much abandoned and use that as its war chest against ARM. Whether they'll win is another matter, but they have billions and billions to spend on it; right now, based on market cap, Intel could buy AMD for about 2% of their stock. Not that Intel would want to, since they'd become a true monopolist in x86 space, but in the all-out battle with ARM they might end up a casualty caught in the crossfire.

AMD are buggered by Windows & Games single thr (0)

Anonymous Coward | about a year ago | (#42888689)

Because there haven't been any games using more than a couple of threads to any extent that makes a damn difference (DX9, 10 and the initial release of 11), the scalability of the multi-core AMD system compared to that of Intel has been of no use.

But the simpler (and stupid) Intel multicore system wasn't the bottleneck and, along with fewer cores being pumped faster, resulted in Intel being better bang per buck on games.

Just barely now are DX games going to *start* to be multi-CPU dependent, which would normally allow AMD's better technology to make a difference. Problem is, though, that MS's marketing killbot strategy (demanding that 11b, which finally allows DX to be multi-threaded, be ONLY allowed on Win8) will ensure that stays true for the next three years or more. And that will likely see AMD screwed whilst Intel works out how the hell to make a sensible multi-core system that isn't a bloody silly bottleneck.

Re:Dream on (1)

dbIII (701233) | about a year ago | (#42890381)

Yet despite their allegedly superior technology, last year was an unmitigated disaster

Sales of personal computers are not doing well due to economic conditions and Intel has a tight grip on Dell etc, as well as closing the price gap that made AMD look like vastly better value for money in earlier years. It doesn't mean they are doomed.

the market of high end desktop/workstation/server chips that AMD has pretty much abandoned

Excuse me? Where did you get that from? There's a 32-core Opteron coming out that can go on 4-socket boards, while Intel is on ten cores with no better clock speed for the multi-way CPUs and a vast price difference. You are confusing the middle to low end with the high end.

Re:Dream on (1)

drinkypoo (153816) | about a year ago | (#42892363)

While Intel may have a tough time battling ARM on the low power front, AMD is totally lost.

Intel has already proven they can't battle ARM on the low power front. Even when they made ARM processors themselves they were higher-power than everyone else's. Literally. But your point about AMD power consumption is well-taken. Not since Geode have they managed to be impressive in that department.

Re:Very very poor article (0)

Anonymous Coward | about a year ago | (#42888025)

Intel about to be blown out of the water by AMD? Not in this reality. This statement is so full of shit, it's safe to assume the rest of your post is too.

AMD's integrated CPU/GPU combos have been nothing more than a massive disappointment at every turn, and there is nothing to suggest the next gen will be anything different. Furthermore, Intel surpasses whatever AMD has been putting out whenever they release a new generation, and Intel does not even seem to be putting much effort into their integrated GPU development.

Furthermore, do you know what other console was originally supposed to ship with an AMD CPU? The original Xbox. Yeah. Intel scooped that deal handily. What makes you think they won't attempt it again? And probably succeed? What do you bet that MS and Sony aren't just baiting Intel in an effort to leverage a better deal?

And???? (0)

who_stole_my_kidneys (1956012) | about a year ago | (#42886577)

So you're saying support for OpenGL came out faster on an open-source operating system (Linux) than it did on Windows? Shocking!

Re:And???? (2)

Desler (1608317) | about a year ago | (#42886625)

No, this is about OpenGL ES, which is basically irrelevant on desktop Windows.

Re:And???? (1)

petermgreen (876956) | about a year ago | (#42888665)

And it's basically irrelevant on x86/x64 Linux too. AFAICT, when software supports ES it generally has a compile-time switch between regular OpenGL and OpenGL ES. Pretty much all GPUs seen in x86 systems support regular OpenGL, while only a subset of them support ES, so the sane thing for a distro to do is to use regular OpenGL for the x86/x64 builds of their software and only build for ES on architectures where ES-only GPUs are common.

Re:And???? (0)

Anonymous Coward | about a year ago | (#42888969)

Precisely.

Add to that the fact that the Linux drivers for the Intel HD 4000 mentioned here have already had support for desktop OpenGL 3.1 for some time, and this is pretty much irrelevant. (What's more annoying is that the HD 4000 apparently supports OpenGL 4.0 on Windows.)

Windows and opengl ES (0)

Anonymous Coward | about a year ago | (#42886659)

In other words, the sky is blue: they don't support any version of ES on Windows natively. It's wrapped and passed through if you're designing programs on Windows, which makes sense.

Intel is playing both sides of the street. (0)

Anonymous Coward | about a year ago | (#42887031)

On one hand they gave Microsoft a carrot by not releasing c compile specs, giving Windows a one-time exclusive on a specialized release of the Atom. Now they are giving Microsoft the stick by making sure that the future of gaming and advanced entertainment devices using the Linux kernel with their processors is not affected.

Further to the point, they are simply making sure that there is a door open for them in the future to get some of the high-end entertainment device business back from ARM and AMD.

They realize that an open software approach to device firmware is much more manufacturer-friendly than anything Microsoft has come up with in its locked-down Windows Embedded firmware kernels.

This is a fairly reasonable approach when dealing with the juggernaut from Redmond.

Re:Intel is playing both sides of the street. (1)

Desler (1608317) | about a year ago | (#42887113)

Except that OpenGL ES has zero relevance on desktop Windows. No one uses it. Kinda blows your whole theory out of the water.

Re:Intel is playing both sides of the street. (0)

Anonymous Coward | about a year ago | (#42887551)

Well, WebGL is based on OpenGL ES. I'm sure you've heard of this web craze that started lately...

Well it's about effing time (0)

Anonymous Coward | about a year ago | (#42888515)

WTF took so long, anyway?
