
Intel Releases 5,000 Pages of Open-Source Haswell Documentation

samzenpus posted about 4 months ago | from the pages-for-the-people dept.


An anonymous reader writes "Intel has closed out 2013 by publishing 5,000 pages of new GPU documentation about their latest generation 'Haswell' graphics hardware. The new documentation complements their longstanding open-source Linux graphics driver, which has supported Haswell HD / Iris Graphics since last year. The new documentation covers the hardware registers and special information for 3D, video acceleration, performance counters, and GPGPU programming."

111 comments

I received them earlier ... (4, Funny)

Anonymous Coward | about 4 months ago | (#45819741)

... thanks Snowden!

is it complete? (5, Funny)

waddgodd (34934) | about 4 months ago | (#45819797)

Does it include APIs for the NSA backdoors?

Re:is it complete? (0)

gnasher719 (869701) | about 4 months ago | (#45819875)

Does it include APIs for the NSA backdoors?

Bloody idiot.

Re:is it complete? (2, Insightful)

RightSaidFred99 (874576) | about 4 months ago | (#45820219)

God, I know. We get it - the NSA is spying on us. All these "NSA turr hurr!" jokes remind me of the "Scary Movie"/"Epic Movie" type pseudo-spoofs. They seem to think making references to other movies, alone, makes it funny. "Turr hurr, Gandalf is breakdancing turr hurr!". Same level of infantile humor.

Re:is it complete? (-1, Flamebait)

TheGratefulNet (143330) | about 4 months ago | (#45820483)

did we touch a nerve, somehow, with you?

why does this bother you? tell us. sounds like you have something you want to say.

Re:is it complete? (0)

Anonymous Coward | about 4 months ago | (#45821005)

I think he said it. But it falls on deaf ears - he's talking to an audience that still mindlessly throws in a "beowulf" joke or whatever. Unfortunately there's a lot of folks here that don't really understand humor because of cognitive deficiencies, but they observe other people using the same words and producing the same response with the given stimulus, so they repeat it. If machine AI ever becomes a reality, the first joke it makes will probably involve chair throwing or hot grits.

Re:is it complete? (1)

dyingtolive (1393037) | about 4 months ago | (#45821345)

he's talking to an audience that still mindlessly throws in a "beowulf" joke or whatever.

Then Hrothgar departed, his earl-throng attending him,
Folk-lord of Scyldings, forth from the building;
The war-chieftain wished then Wealhtheow to look for,
The queen for a bedmate.

?

Re:is it complete? (0)

Anonymous Coward | about 4 months ago | (#45821295)

did he touch a nerve, somehow, with you?
 
why does it bother you? tell us. sounds like you have something you want to say....
 
Naw, you don't have anything to say. You're just another Slashtard who wants to sound insightful while never really saying anything.
 
Keep sucking that melodramatic nut sack, bitch.

Re:is it complete? (1)

Anonymous Coward | about 4 months ago | (#45821203)

Doesn't matter either way. Most of the open source fanboys out there talk a good game, but most don't know shit. I bet that out of everyone reading this, maybe 1 in 100 could make any sense of what the documents contain and about 1 in 10,000 will actually do something with the information.

Slashdot is just a fanboy paradise of flame baiting and a bunch of would-be nerds tugging on each other's dicks. Why do you think people don't talk about real tech around here anymore?

Re:is it complete? (0)

Anonymous Coward | about 4 months ago | (#45823221)

Doesn't matter either way. Most of the open source fanboys out there talk a good game, but most don't know shit. I bet that out of everyone reading this, maybe 1 in 100 could make any sense of what the documents contain and about 1 in 10,000 will actually do something with the information.

That would be the entire point of open source. Strength in numbers. Those who are smart enough will use these documents to their advantage by writing/fixing drivers; a portion of the rest will report bugs to the first group, helping themselves and everyone else. The remaining users will sit back and wait for someone else to report the bug (possibly because they don't know what/how to report), and hopefully in the end we are all better off.

Open source people are often fanboys because they all understand that the more users that can be attracted to these projects the better the projects will become. The better the projects become the more users will join creating a positive feedback loop.

Re:is it complete? (1)

unixisc (2429386) | about 4 months ago | (#45826665)

Only issue w/ that theory is that, by that logic, every FOSS project that ever came up should be a rip-roaring success like MS Office or Angry Birds, and no project should ever have a shortage of volunteers.

In reality, few people can afford to work for $0.00 - particularly the ones w/ the skills to be of any use to these projects.

Re:is it complete? (0)

Anonymous Coward | about 4 months ago | (#45821421)

Sounds like someone is a little butt hurt that they bought into the whole "hope and change" thing.

Re:is it complete? (0)

Anonymous Coward | about 4 months ago | (#45822197)

bool Bloody_idiot(Who& who, Watches w, TheWatchers& tw); //FTFY

Re:is it complete? (4, Funny)

TheGratefulNet (143330) | about 4 months ago | (#45819911)

what a relief to find that the arguments (of the api calls) only need rot13 applied twice in succession. it's not at all cpu-intensive, so that's a relief.

alarmingly, the result is returned in plaintext. waiting for nsa api v1.1 for the fix to that one.

Huh? Open Source? (-1)

Anonymous Coward | about 4 months ago | (#45819905)

Can I see the source code for the documentation please?

Re:Huh? Open Source? (0, Interesting)

Anonymous Coward | about 4 months ago | (#45820211)

Can I see the source code for the documentation please?

Sure. It's written in TeX. Like most open source code, you can't really understand it but at least you have the freedom to modify it.

Re:Huh? Open Source? (0)

Anonymous Coward | about 4 months ago | (#45822947)

The irony is that it's not written in TeX. So unlike open source you get nothing, not even the pittance of being able to see and examine the source document.

Miles ahead of AMD and Nvidia (0)

Anonymous Coward | about 4 months ago | (#45819941)

Scooped!

Dear Nvidia... (5, Insightful)

KazW (1136177) | about 4 months ago | (#45819951)

Please take notice, this is how to support GPU hardware correctly.

Re:Dear Nvidia... (1)

aliquis (678370) | about 4 months ago | (#45820083)

What I wonder is what really makes it harder / impossible for Nvidia or whomever to do it but works for Intel? If anything.

Re:Dear Nvidia... (5, Interesting)

bill_mcgonigle (4333) | about 4 months ago | (#45820123)

What I wonder is what really makes it harder / impossible for Nvidia or whomever to do it but works for Intel? If anything.

The standard rumor is that they all violate bogus patents rampantly, and only by keeping their code secret (and possibly backdoored) can they stay afloat in the face of the patent trolls.

A deep cynic might claim that Intel can survive more of these trolls than nVidia could, so this could be a competitive move. IIRC Intel and nVidia had a cross-licensing deal that involved Intel staying out of the discrete market - maybe that's due to expire soon.

Re: Dear Nvidia... (1)

Anonymous Coward | about 4 months ago | (#45820561)

Discrete is dead. A 5-year-old video card can run most modern games. Unless there's a breakthrough like ray tracing, the majority won't bother.

Re: Dear Nvidia... (1)

Miamicanes (730264) | about 4 months ago | (#45821061)

Yeah, yeah. Discrete is dead, and IGP is good enough now... as long as you don't want hardware-accelerated realtime raytracing:

(drool)

  http://www.siliconarts.co.kr/gpu-ip [siliconarts.co.kr]

(/drool)

When I can have Windows or Linux with full eye candy and zero performance hit (vs Win2k or something XFCE-like) at 2560x1920@240fps, I'll accept current GPUs as "good enough".

When I can open a PDF document on my phone or tablet and effortlessly fling through it without any perceptible lag waiting for the fonts to render, current GPUs will be "good enough". Newsflash: 80% of the reason ebooks suck so miserably is that current hardware CAN'T effortlessly render them in realtime.

OK, the last one is a bit unfair, because the blame for current shit pdf-rendering performance lies mostly at the feet of the videocard industry, for throwing away everything it learned and developed relating to 2D acceleration in the mad rush to cheap 3D. Basically, they took 3D GPUs developed for strap-on videocards, grafted on enough extra silicon to let them stand alone as their own self-hosted minimalist frame buffer, and called it a day.

Re: Dear Nvidia... (1)

BlackHawk-666 (560896) | about 4 months ago | (#45821247)

(drool)

  http://www.siliconarts.co.kr/gpu-ip [siliconarts.co.kr]

(/drool)

When I can have Windows or Linux with full eye candy and zero performance hit (vs Win2k or something XFCE-like) at 2560x1920@240fps, I'll accept current GPUs as "good enough".

I checked out that link and it looked like I was stepping back into the 90s. That image on the home page looks like it's a 256 colour GIF! Where's the specular mapping? Everything in those shots looks dead, like a bad phong highlighted raytrace.

Re: Dear Nvidia... (0)

Anonymous Coward | about 4 months ago | (#45822285)

I checked out that link and it looked like I was stepping back into the 90s. That image on the home page looks like it's a 256 colour GIF! Where's the specular mapping? Everything in those shots looks dead, like a bad phong highlighted raytrace.

If you had read the text too instead of just looking at the pictures, you might have noticed the mention of "real time" and "mobile devices". Where do you come from then, 2050?

Re: Dear Nvidia... (1)

Ford Prefect (8777) | about 4 months ago | (#45823235)

I checked out that link and it looked like I was stepping back into the 90s. That image on the home page looks like it's a 256 colour GIF! Where's the specular mapping? Everything in those shots looks dead, like a bad phong highlighted raytrace.

There's much more impressive stuff going on with path tracing on conventional GPUs [youtube.com] - something that, at least for me, is making a definite case for ungodly improvements in processing power for GPU hardware.

Re: Dear Nvidia... (1)

smash (1351) | about 4 months ago | (#45824365)

Pretty much. I predict that within 3-5 years the discrete video card market will be much like the discrete sound card market is now. Very small and rarely purchased except for niche users. Nvidia should be scared, but then the writing has been on the wall for a good 5 years now already.

Re: Dear Nvidia... (1)

Anonymous Coward | about 4 months ago | (#45825063)

Don't forget GPGPU. Discrete AMD cards still slaughter anything short of a Xeon Phi for GPGPU.

Re:Dear Nvidia... (2)

gnasher719 (869701) | about 4 months ago | (#45821025)

The standard rumor is that they all violate bogus patents rampantly, and only by keeping their code secret (and possibly backdoored) can they stay afloat in the face of the patent trolls.

The other reason would be that if NVidia engineers could read the documentation needed to write ATI drivers and vice versa, they would figure out some clever ideas that their competitor had and reproduce them. If you make more and more powerful cards, you always need new clever ideas for how to turn more transistors into more speed. You run into bottlenecks that weren't bottlenecks when you had a quarter of the transistors. Since Intel integrated graphics is quite a bit behind in that respect, Intel is probably doing clever things that ATI and NVidia would have been glad to know four years ago.

Re:Dear Nvidia... (3, Insightful)

tlhIngan (30335) | about 4 months ago | (#45823565)

The other reason would be that if NVidia engineers could read the documentation needed to write ATI drivers and vice versa, they would figure out some clever ideas that their competitor had and reproduce them. If you make more and more powerful cards, you always need new clever ideas for how to turn more transistors into more speed. You run into bottlenecks that weren't bottlenecks when you had a quarter of the transistors. Since Intel integrated graphics is quite a bit behind in that respect, Intel is probably doing clever things that ATI and NVidia would have been glad to know four years ago.

The other thing is, well, the documentation simply doesn't exist. Having worked with quite a few ASIC vendors, inside and out, I can tell you the modern SoC is heavily undocumented. For a good chunk of the blocks, unless they're external IP (from, say, Synopsys, Cadence or ARM), there IS no documentation other than a register list.

Yes, the documentation may just be a register list. Nothing to tell you how you should drive the hardware, whether the registers have to be programmed in a certain way, or whether there are tricks and other things that are important to know. Or even how some data structures need to be laid out in memory for the hardware (you may get a brief layout, but that's it). And my favorite: how the hardware reacts to an error. Sometimes it just plain locks up, requiring a full reset. Other times it sets some strange error bit that must be cleared in order to resume functionality - if you're lucky enough to know how to clear the error.

Oh, and how does a software developer find out such information? Easy, they ask the ASIC designer how they envisioned the thing to work. Sometimes they're helpful and tell you exactly what you need to know, but they can be terse and expect you to figure out their thinking. And sometimes what you want to do LOOKS possible from the registers, only to find out that no, you just cannot program the bits that way and no, you shouldn't try.

That's probably why Intel took so long to get the documentation out - Haswell has been out since what, April? And only now do they release the documentation? Well, it's probably because Intel was pulling the whole whack together from random designer notes, software designs, etc., then bringing it all together and editing it (Intel creates some of the best documentation around).

For the most part, nVidia and AMD probably DON'T have much in the way of documentation on the latest GPUs. Intel does, and only because they spend time and effort doing it - but Intel's also huge and has a lot of cash to have a bunch of engineers sitting around writing documentation exclusively.

If you think OSS people hate writing documentation, it's no better in commercial development - again, the project usually doesn't allow for much documentation to be created. And for things like GPUs, which change frequently, by the time the documents are written they're obsolete three times over.

Re:Dear Nvidia... (1)

Anonymous Coward | about 4 months ago | (#45824381)

They get the ideas from analyzing the bottlenecks on their own GPUs by running performance analysis and using instrumentation like timers and event counters. Whatever FIFO is filling up first, which cache is always swapping the most, which bus is running at full capacity, how to avoid wasted unused reads and writes. It's non-stop. Every new feature adds new methods.

Re:Dear Nvidia... (1)

Dragonslicer (991472) | about 4 months ago | (#45821449)

The standard rumor is that they all violate bogus patents rampantly, and only by keeping their code secret (and possibly backdoored) can they stay afloat in the face of the patent trolls.

That rumor would persist only because most people don't know anything about how lawsuits work. You don't need to see someone's source code before filing a lawsuit; you get to see their source code after you file the lawsuit.

there's more to it (0)

Anonymous Coward | about 4 months ago | (#45823261)

They may attract patent trolls, they may have third-party IP under NDA, or they may have stuff they keep secret instead of patenting, to avoid telling others how it works. Or they may just be paranoid, with a legal department that needs to justify its existence.

Re:Dear Nvidia... (1)

mikael (484) | about 4 months ago | (#45824331)

It's not the code that is kept secret from the patent trolls; executable code can always be disassembled, and precompiled shaders can be disassembled too - usually there's even a free disassembler thrown in with most development kits. The secret bits are the comments: explanations of techniques, todos, for-the-future notes, "optimize this", "that will be done in the next cycle". It's enough to mention two buzzwords together in one line, and the patent trolls will be jumping up and down shouting patent violation and wanting payment. It will be up to the company to disprove that violation. Just think of pairs of keywords: "shadows" and "stencils", or "floating-point" and "image-buffers".

Re:Dear Nvidia... (4, Interesting)

jedidiah (1196) | about 4 months ago | (#45820329)

I'm still waiting for Intel drivers that are on par with their Nvidia counterpart.

Despite all of the noise made about Intel's cooperation, this is the first time we've actually had full disclosure from them. Prior to today, what they offered was incomplete. It was all empty promises, despite all of the rhetoric from the political purists about how Intel does things better.

Someday, this might lead to a proper driver. Although Intel hardware will probably still be just as lame then.

Re:Dear Nvidia... (0)

Anonymous Coward | about 4 months ago | (#45820639)

I'm still waiting for Intel drivers that are on par with their Nvidia counterpart.

Despite all of the noise made about Intel's cooperation, this is the first time we've actually had full disclosure from them. Prior to today, what they offered was incomplete. It was all empty promises, despite all of the rhetoric from the political purists about how Intel does things better.

Someday, this might lead to a proper driver. Although Intel hardware will probably still be just as lame then.

Is that windows or linux drivers you're referring to?

Intel has published tons of documentation before this, what was incomplete about them, and how is this time different?

Re:Dear Nvidia... (0)

Anonymous Coward | about 4 months ago | (#45820335)

Allegedly, Nvidia has many video cards for which the difference between the high-priced and low-priced model is not hardware, but drivers.

Re:Dear Nvidia... (2)

gigaherz (2653757) | about 4 months ago | (#45820489)

I doubt that. The firmware, maybe, but probably not drivers. Normally the difference between high-priced and low-priced models is that the low-priced models have some internal fuses blown, so that some of the cores are disabled. Sometimes those cores were defective, other times they disable cores just to meet the demand. It could be that they disable the cores with firmware instead of fuses, and somehow the drivers could reenable cores in the latter cases, but my guess is that the people who give the orders simply think of their precious architecture details as information that needs to be kept secret, in case the competition gets too many ideas from those details.

Re:Dear Nvidia... (1)

Rob Y. (110975) | about 4 months ago | (#45820671)

If that firmware is loaded by the driver, then it's essentially the same thing, isn't it?

Re:Dear Nvidia... (1)

gigaherz (2653757) | about 4 months ago | (#45821167)

Not necessarily: if all they were concerned about was hacking around model "locks", they could still release the documentation on the "public" registers, protocols, etc. and still require a special undocumented binary blob to be transferred into the gpu at boot.

Re:Dear Nvidia... (1)

aliquis (678370) | about 4 months ago | (#45821765)

At least earlier you flashed the firmware onto the card, I think that's the case still as people have been tinkering with GeForce 600-series cards trying to make them similar to the 700-series ones as far as clocking goes.

Re:Dear Nvidia... (1)

citizenr (871508) | about 4 months ago | (#45822871)

I doubt that. The firmware, maybe, but probably not drivers. Normally the difference between high-priced and low-priced models is that the low-priced models have some internal fuses blown, so that some of the cores are disabled. Sometimes those cores were defective, other times they disable cores just to meet the demand. It could be that they disable the cores with firmware instead of fuses, and somehow the drivers could reenable cores in the latter cases, but my guess is that the people who give the orders simply think of their precious architecture details as information that needs to be kept secret, in case the competition gets too many ideas from those details.

drivers and/or jumpers
http://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/ [eevblog.com]

Re:Dear Nvidia... (-1, Troll)

Luckyo (1726890) | about 4 months ago | (#45820451)

The fact that Intel is about half a decade or more behind Nvidia in GPU tech; Nvidia revealing its internals to this extent would allow the competition to catch up much faster than it otherwise would.

Re:Dear Nvidia... (0)

Anonymous Coward | about 4 months ago | (#45820813)

That's BS, at least in regards to laptop graphics solutions - the only market where Intel graphics matter. Latest Intel Iris Pro can easily compete with mid-range discrete solutions from Nvidia.

Re:Dear Nvidia... (2)

Luckyo (1726890) | about 4 months ago | (#45821381)

The market disagrees. The vast majority of PCs sold, both desktop and laptop, run Intel's integrated graphics. Most of them are the older GMA chipsets, but quite a few desktops nowadays run on Intel's HD GPUs that come with the CPU.

Re:Dear Nvidia... (2)

tibman (623933) | about 4 months ago | (#45822241)

That's a trap. The mobo comes with integrated intel, yes. But in most cases the end user also has a discrete card. You can guess which one is actually used.

Re:Dear Nvidia... (1)

rescendent (870007) | about 4 months ago | (#45822523)

That's a trap. The mobo comes with integrated intel, yes. But in most cases the end user also has a discrete card. You can guess which one is actually used.

Depends. If it's WebGL in a browser, Nvidia locks you to Intel: Option to select the preferred graphics processor is greyed out for IE, Chrome, and Firefox. [custhelp.com] and https://www.scirra.com/blog/ashley/7/nvidia-hobbles-webgl-performance-on-laptops [scirra.com]

Re:Dear Nvidia... (0)

Anonymous Coward | about 4 months ago | (#45825763)

"Most"? Certainly not in the low-budget market/segment. Why do you think both Intel and AMD invested so heavily in CPU-integrated GPUs/APUs?
In Brazil one actually has very few options with discrete GPUs when buying branded desktops or notebooks.

Re:Dear Nvidia... (1)

Rockoon (1252108) | about 4 months ago | (#45821481)

Latest Intel Iris Pro can easily compete with mid-range discrete solutions from Nvidia.

..and the eDRAM they bolted onto the chip to make that possible only adds $200+ to the price of the CPU.. what a bargain!

How the Intel fanboys can think this is bragging material is beyond me...

Re:Dear Nvidia... (1)

smash (1351) | about 4 months ago | (#45824595)

We're in a transitional period - Broadwell should be very interesting. The eDRAM is also usable by the CPU portion, and don't forget the discrete GPU + CPU combo consumes 1.5-2x as much power.

Re:Dear Nvidia... (1)

symbolset (646467) | about 4 months ago | (#45824677)

Interestingly, Intel includes the better GPU in the upmarket CPUs, where it will go unused because those platforms get a discrete GPU, and the inferior ones in the down-market CPUs, where they will be front and center driving the display. That never made sense to me.

Re:Dear Nvidia... (1)

smash (1351) | about 4 months ago | (#45824565)

Performance per watt, NVidia isn't even close. I'm just waiting for multi-socket portables - you could stick 2x intel CPU/GPUs in a machine in the same thermal/power envelope as CPU + Discrete GPU, and have a lot less complexity with GPU switching for power purposes. Plus twice as many CPU threads.

Re:Dear Nvidia... (1)

Luckyo (1726890) | about 4 months ago | (#45824843)

That's nice.

We're still talking about huge market of gaming GPUs, not the tiny market of compute GPUs.

Re:Dear Nvidia... (1)

smash (1351) | about 4 months ago | (#45825443)

You mean like the biggest gaming market, the mobile gaming market, where integrated GPU is the norm?

Re:Dear Nvidia... (1)

Luckyo (1726890) | about 4 months ago | (#45825937)

I see what you did there. We're all talking about nvidia and intel, and you decided that the discussion doesn't really matter at all and went on to talk about PowerVR.

Since we're on the level of being pants-on-head stupid, let's talk about pants. I'm pretty sure they're worn on the legs!

Re:Dear Nvidia... (0)

Anonymous Coward | about 4 months ago | (#45821209)

Intel is selling you a CPU with the GPU along for the ride, while nVidia is selling a dedicated GPU. Half of the nVidia product is tied to the software and exposing this would be detrimental to the company.

Code. (3, Insightful)

ledow (319597) | about 4 months ago | (#45820179)

Documentation is ONE PART. It says what the design was supposed to be like.

Then you have errata and variations - when some of the hardware doesn't correspond to the documentation and acts differently.

Then you have examples - where someone shows you how to, e.g. draw a simple triangle using the documented opcodes and all of the boilerplate and set up necessary.

And then you have actual working code. Where you give away, for example, a complete implementation that conforms to a higher, standardised API and issues instructions to the hardware to perform those actions.

Out of all of those, documentation is the easiest thing to do. You can just say (for example, I just flicked through a PDF from that site) that instruction X transposes a matrix. That gives no idea of performance, whether that's the recommended way, what it contends with, how it works, whether the Intel drivers use it themselves, whether it's a legacy function, or whether it has huge constraints on its use.

Without some code, it's all just fancy tech sheets. Sure, better than nothing, but a long way from actual co-operation. I'm not saying Intel don't co-operate in other areas, but documentation like this? That's the "quick reference" stuff for when your thousands of lines of existing example code don't act like you expect when you tweak them and you look up what that operand is supposed to do and how.

Put a hardware driver author in front of a documentation pack and a compiler, and tell him to write a driver, and he'll tell you to fuck off.

Put a hardware driver author in front of many working examples of device, with debug-level access, with example source (that he can't just copy due to licensing), errata, a direct line to cooperative hardware engineers AND this documentation and he'll start.

This is why I've never been that bothered by documentation releases, or even unmaintained source-drops. Supposedly Broadcom did something similar for the RPi's graphics chips. I think we're still waiting on anything that's not a binary driver there. And we have this sort of stuff for some ancient 3D graphics cards - it's just not as easy as reading it all and then sitting down to write a driver.

Intel, nVidia, ATI: Give us drivers with code that have no reliance on "black box" information/code, and we'll be happy. Until then, it's just lip-service. And you know that. That's why you don't release this kind of stuff for graphics chips, and nor does anyone else. Because you can drop this in someone's lap and years later STILL end up being pestered to the ends of the earth for an open-source driver (or assistance to help write one) because it doesn't exist.

Code is a lot more than writing things to perform a protocol described in the documentation. If only it were that easy.

Re:Code. (1)

ledow (319597) | about 4 months ago | (#45820345)

To cite a simple TL;DR analogy:

Documentation is like giving someone a dictionary to a foreign language they don't know.

Getting a working driver is like asking them to write the laws of the country in that language, and give a speech to inspire the majority of people who can understand it.

Re:Code. (0)

Anonymous Coward | about 4 months ago | (#45820359)

Fucking hell mate, for years the free/open source community has been saying "give us the docs and WE'LL write the code". No need to be quite so churlish.

If they are providing documented hardware, that's a long way down the path - plenty of people at Intel work on Linux, so this is very promising.

Re:Code. (5, Insightful)

Anonymous Coward | about 4 months ago | (#45820439)

You make a good point, however you are incorrect. As an author of a handful of drivers, and contributor to a handful more - we like specs. If you are incapable of taking an RFC or spec and outputting a working driver, then you aren't quite the programmer you think you are. Specs are often all a driver author ever receives, and you should be able to produce working code with nothing else. It's *nice* if the vendor sends best practices or additional notes about deprecation of certain methods...but it's not the norm, and should not be expected. Being a good developer means being able to benchmark methods described in specs and determine what performs best, and when that applies.

Re:Code. (1)

skids (119237) | about 4 months ago | (#45820683)

This. I'm not especially prolific or talented, but even I generally tend to write code directly from the spec.

(GP)

Then you have examples - where someone shows you how to, e.g. draw a simple triangle using the documented opcodes and all of the boilerplate and set up necessary.

These are usually pretty useless, involve horrible paradigms only used by crazy people - like a bunch of access macros for some sublanguage-of-the-week that the authors thought was chic - and don't yield any information beyond what the documentation says.

However, the GP's point about documentation like "opcode foo: does foo" stands. That said, if the documentation did tell me how efficient a given operation was, I'd take that into account, but I wouldn't necessarily treat it as gospel. A good amount of writing code is testing.

Re:Code. (1)

Anonymous Coward | about 4 months ago | (#45821139)

I write drivers for a living. All I want is good specs.
Have you seen the quality of the code written by the hardware design guys?

Re:Code. (3, Insightful)

NixieBunny (859050) | about 4 months ago | (#45821237)

Tee hee. I have a fine counterexample.
About 15 years ago, my company (a producer of VMEbus and CompactPCI boards) designed a video module. We used a Trident mobile graphics chip. Unfortunately, we were attempting to use it with a PowerPC, not an x86 CPU. We had the big user manual for the chip, but when we programmed all the registers according to the published configuration info, it refused to initialize.
We were then given the BIOS object code from the factory (they wouldn't share the x86 assembly source code). We disassembled it. It was such a tangled web of spaghetti that we never did figure out how to get the part initialized, and the factory app engineering team was unable to tell us how to do so either.
We eventually dumped the part and used an Intel part with C source code available. It worked just fine.


Re:Code. (5, Informative)

Anonymous Coward | about 4 months ago | (#45820539)

lots of blah blah blaa

The Haswell GPU driver source code has been in the upstream kernel and userspace parts for maybe a year now.

Re:Code. (1)

saigon_from_europe (741782) | about 4 months ago | (#45820585)

Yes, it would be nice if we could get the entire stack - documentation, working code, test examples, free support accounts, testing hardware, source repository access, Intel engineers paid to work on our favorite project, board of directors meeting memos and our own Santa - but that is not going to happen. Documentation and some support is probably all the community will get, but that should be enough. The community has usually had to work with a lot less and was still capable of making useful code.

Re:Code. (2)

pentagramrex (1125875) | about 4 months ago | (#45821009)

Documentation is ONE PART. It says what the design was supposed to be like.

Then you have errata and variations - when some of the hardware doesn't correspond to the documentation and acts differently.

Then you have examples - where someone shows you how to, e.g., draw a simple triangle using the documented opcodes and all of the boilerplate and setup necessary.

And then you have actual working code. Where you give away, for example, a complete implementation that conforms to a higher, standardised API and issues instructions to the hardware to perform those actions.

Out of all of those, documentation is the easiest thing to do. You can just (for example, just flicked through a PDF from that site) say that instruction X transposes a matrix. No idea of performance, whether that's the recommended way, what it contends with, how it works, whether the Intel drivers use that themselves, whether it's a legacy function, whether it has huge constraints on its use.

Without some code, it's all just fancy tech sheets. Sure, better than nothing, but a long way from actual co-operation. I'm not saying Intel don't co-operate in other areas, but documentation like this? That's the "quick reference" stuff for when your thousands of lines of existing example code don't act like you expect when you tweak them and you look up what that operand is supposed to do and how.

Put a hardware driver author in front of a documentation pack and a compiler, and tell him to write a driver, and he'll tell you to fuck off.

Put a hardware driver author in front of many working examples of device, with debug-level access, with example source (that he can't just copy due to licensing), errata, a direct line to cooperative hardware engineers AND this documentation and he'll start.

This is why I've never been that bothered by documentation releases, or even unmaintained source-drops. Supposedly Broadcom did something similar for the RPi's graphics chips. I think we're still waiting on anything that's not a binary driver there. And we have this sort of stuff for some ancient 3D graphics cards - it's just not as easy as reading it all and then sitting down to write a driver.

Intel, nVidia, ATI: Give us drivers with code that have no reliance on "black box" information/code, and we'll be happy. Until then, it's just lip-service. And you know that. That's why you don't release this kind of stuff for graphics chips, and nor does anyone else. Because you can drop this in someone's lap and years later STILL end up being pestered to the ends of the earth for an open-source driver (or assistance to help write one) because it doesn't exist.

Code is a lot more than writing things to perform a protocol described in the documentation. If only it were that easy.

It's been a few years since I worked in hardware, but even when you build it for a commercial company, not open source, you don't get the things you (the poster) think are important.

In the past I found AMD far more helpful than Intel, and I was building high-end workstations then (Motorola and MIPS based, but lots of AMD chips). MIPS and AMD at least gave you the documentation for free if they thought you were serious: that was LOTS of thick books. Sometimes they helped you figure things out if they didn't work properly.

If you are in a small company, even when you buy a reference design kit you don't get any help. You have to work it out for yourself, even if there are some errata sheets. No properly working drivers for the reference design. No source.

These days I'm glad I have a little company in China making our boards; while they're a bit clumsy, they are very fast to fix things. Things get broken just as quickly.

I'm even more glad that there are fewer compiler bugs than I had to deal with on Microsoft compilers, or GNU on the obscure ARM architectures I tore my hair out over.

Luckily I have a lot of hair.

Rob.

Re:Code. (1)

mikael (484) | about 4 months ago | (#45824675)

Put a hardware driver author in front of a documentation pack and a compiler, and tell him to write a driver, and he'll tell you to fuck off.

Put a hardware driver author in front of many working examples of device, with debug-level access, with example source (that he can't just copy due to licensing), errata, a direct line to cooperative hardware engineers AND this documentation and he'll start.

You don't want a hardware driver author, you want a technical writer. They will have the experience to draw pretty diagrams and organize and lay out information clearly. A hardware engineer would just give you a list of register blocks, what each register did, and expected values. That gives you a more or less horizontal view of the hardware level: what pixel formats the texture, color, depth and stencil attachments can theoretically use. Then it's during testing that you find that some oddball reversed eleven-and-a-half-bit texture format doesn't work with some prime-number-bit-sized framebuffer format, because nobody thought they would ever be used together, until somebody decided to port Tetchy Squirrels to their smartwatch.

Re:Code. (2)

Kjella (173770) | about 4 months ago | (#45821163)

Put a hardware driver author in front of a documentation pack and a compiler, and tell him to write a driver, and he'll tell you to fuck off.

Put a hardware driver author in front of many working examples of device, with debug-level access, with example source (that he can't just copy due to licensing), errata, a direct line to cooperative hardware engineers AND this documentation and he'll start.

So in short you're saying the open source community is a bunch of liars, and nVidia is right when they say "Well, if we gave you X you'd ask for Y and Z until we're really doing all the work anyway, so why give you the little finger when you'll chew off our arm? If you're going to leave it all to us anyway, use the binary." Sure, there are hardware bugs, and maybe undocumented ones too because the closed source driver never tripped them, but this is like saying you can't program an x86 processor because Intel might decide that 2+2 = 3.99999999999978. You're just looking for an excuse to throw in the towel. For example, you could start by writing the code to take Mesa from OpenGL 3.1 to OpenGL 4.4; it's all work, no hardware documentation required. There's tons of work that could be done, and the same is true for drivers as well. The documentation is there, the manpower is not. Stop pretending it's Intel, nVidia and AMD's fault that the open source drivers are lagging far behind the blobs.

Re:Code. (1)

scamper_22 (1073470) | about 4 months ago | (#45822249)

Unfortunately, this is another example where software developers are not a profession.

One could argue that Intel isn't going to spend the time and money to do all those things. They require manpower to fund, and so Intel is not likely to do them.

Microsoft was/is one of the better companies in this regard and they mainly did it because you were locked into their platform and APIs. Their whole money making scheme was based around software being made for their platform.

But ultimately, this gap is what differentiates a profession from a regular job. A profession mandates certain behavior and standards. It's not all about money or efficiency.

Yes, as a professional, you might tell someone to fck off if they just handed you a bare spec and a compiler. Just like a lawyer might tell you to fck off if you handed them some contract scribbled in pen by your cousin on a napkin.

There was a time this was done more in certain fields. I have co-workers who used to work for some of the older telecoms. They complain about the amount of documentation and verification they used to have to provide internally. But that is what made things professional to some extent.

But we're not professionals. We might like to act like them, and some of us manage to get away with it because we really are that valuable. But in the end, we're just worker bees. We suck it up and deal with the lack of professionalism. We will hack something to work. We will do our own performance tests and verification instead of demanding the professionals on the other end do their job. Most of all, we'll pride ourselves on being able to make it all work in the end and mock anyone who has better expectations of the field as not being able to hack it.

Re:Code. (1)

scdeimos (632778) | about 4 months ago | (#45823185)

Put a hardware driver author in front of a documentation pack and a compiler, and tell him to write a driver, and he'll tell you to fuck off.

My how things have changed. I remember being handed register documentation for StarPort 16-, 32- and 64-port serial cards and being asked to write FOSSIL drivers for them. And I had to supply my own compiler and logic analyser.

Re:Code. (1)

vux984 (928602) | about 4 months ago | (#45823723)

I remember being handed register documentation for StarPort 16-, 32- and 64-port serial cards and being asked to write FOSSIL drivers for them. And I had to supply my own compiler and logic analyser

I remember being asked to waste my time on things too, to reinvent wheels that had already been invented, and to reverse engineer things whose workings we were presumably already supposed to know.

GP is right: you say 'fuck off' when you get handed a task like that. Maybe it doesn't go anywhere, and you really do have to reverse engineer how the bloody thing works to move forward. But more often than not, it's a more efficient use of time and money to get someone from the supplier to come and tell us what we need to know or otherwise assist, even if we have to pay them, because it's just a waste of our time figuring it out for ourselves from scratch.

Re:Code. (1)

radarskiy (2874255) | about 4 months ago | (#45824661)

No, *you* say fuck off. The rest of us are competent.

Re:Code. (1)

vux984 (928602) | about 4 months ago | (#45824945)

The rest of us are competent.

Just because I can reinvent the wheel doesn't mean I should waste my time doing it.

Being competent doesn't make it any less a waste of time. If a month of development time can be eliminated by simply talking to an engineer who built the X and knows it, then only an idiot would brag about how he wasted weeks figuring it out for himself... because, you know, "competence".

I can waste 10 weeks too, if I have to. Sometimes that IS the only way. But unlike you, I seem to be able to recognize that it's a waste of time and that there are often better alternatives.

You seem determined to treat how much time you waste reinventing and rediscovering what other people already know as a measure of your competence.

Yes, reverse engineering (which is essentially what this is) requires competence. But reverse engineering is usually the least efficient way of getting to the result, and anyone competent does it only as a last resort.

Re:Code. (1)

radarskiy (2874255) | about 4 months ago | (#45825657)

"If a month of development time can be eliminated by simply talking to an engineer who built the X "
If you are waiting until the X is built, you've already lost the market.

Re:Code. (1)

radarskiy (2874255) | about 4 months ago | (#45824637)

"Out of all of those, documentation is the easiest thing to do."

If documentation is so easy, why is there so little of it and why is it so bad?

Re:Code. (0)

Anonymous Coward | about 4 months ago | (#45825837)

So they just released 5,000 pages to describe a few registers... there's probably a little code and some of those other things in there too.

Other implications for manufacturers. (1)

deviated_prevert (1146403) | about 4 months ago | (#45820407)

This release makes it completely possible for manufacturers to easily create completely supported devices without the need for the Windows OS. Given the long-term stability and versatility of GNU and OSS, there is no reason why manufacturers could not jump ship. The average joe does not care which OS is on a device. It is only those who hold Windows as something akin to a religious experience on their computing device who bleat away against OSS operating systems.

Chromebooks, Android devices, even the iPad have proven them wrong. The consumer is not hooked on Windows; in fact, the most common question I hear from most people who consider a laptop with something other than Windows on it is whether or not they need antivirus. Sorry Microsoft, until you shed the consumer's perception that all of your devices must have AV and anti-malware software, you are vulnerable as a software and hardware company.

With Intel opening up completely on their key hardware, it is now entirely possible for Windows to become irrelevant, and this could happen almost overnight if manufacturers cooperated and released an OSS operating system highly tuned exclusively for their devices, completely independent of Microsoft. Either that, or Intel is just blowing smoke again, LOL.

Remember how pissed Microsoft was when Intel released a C compiler that made the Linux kernel outperform the NT kernel on big iron? And all the manufacturers started to seriously consider Linux as an alternative to NT on servers. Well, this release of key documentation brings the war down to the consumer level, and if Microsoft does not act fast to quell things with patent extortion through the Rockstar Consortium, the resulting drop in Windows market share at the consumer end could hurt Redmond's bottom line this time. So I predict that within 2 years Microsoft will make more money off patent extortion than off sales of the Windows operating system, the same way they already make more money from the sale of Android phones and tablets than they make from sales of Windows Phones!

We will see if the scare tactics of the men in black showing up with lawyers whenever a manufacturer releases a device with something other than Windows work for them the same way they did with Android.

So? (0, Troll)

Anonymous Coward | about 4 months ago | (#45820623)

So Intel gfx hardware has always been, and always will be, garbage.

Want to cripple a good machine? Use Intel for the gfx.

Thank you, Intel. . . (1)

wardred (602136) | about 4 months ago | (#45820979)

Intel, thank you for your continued strong support of Linux. Already, if 3D performance wasn't an issue on one of my machines, having an embedded Intel video controller was a plus. As your GPU performance continues to grow, and I see you continuing to support Linux, there are fewer use cases where I feel I need a discrete video card. The release of this documentation indicates to me that you intend to continue your support of Linux, and I appreciate it.

how fast is the Haswell compare to AMD Hawaii (0)

Anonymous Coward | about 4 months ago | (#45821363)

All the docs you could ever want - but how does it compare to AMD's fastest GPU?
And why would I want to spend time on this vs. an AMD GPU?

Re:how fast is the Haswell compare to AMD Hawaii (1)

viperidaenz (2515578) | about 4 months ago | (#45821687)

Because it uses much, much less power?
Good luck putting a 300 watt GPU in a tablet.

those id10ts @ slashdot (1)

fredan (54788) | about 4 months ago | (#45821999)

Dear samzenpus,

Please don't include any link directly to the documentation.

Regards

The Community.

Re:those id10ts @ slashdot (1)

jones_supa (887896) | about 4 months ago | (#45824715)

What? Should it be behind multiple hoops in some obscure place so you could have your elitist feeling? ;)

Great - now fix my graphics! (0)

Anonymous Coward | about 4 months ago | (#45822207)

Maybe someone will now be able to fix the driver issue with my new PC that does not output HDMI. This is a huge issue that is impacting a lot of people: https://communities.intel.com/message/202922

If Intel with BILLIONS can't make decent drivers.. (-1)

Anonymous Coward | about 4 months ago | (#45822801)

There is a laughable fallacy that 'open source' software is the best place to create drivers for the most complex hardware machines used in computers. In reality, open-source GPU drivers STINK, or I should say, really really really stink. And apparently the fact they sometimes work at all, slowly, and with every possible bug, should be regarded as a 'good' thing.

Intel only does this as a publicity stunt to impress the very stupid, and it does so because Intel is a very distant THIRD in the real GPU market for x86 computers. Intel GPU hardware is crap- with poor features and horrible general compatibility with games. This despite the fact that Intel has bought multiple GPU hardware companies across the ages, and spent more ATTEMPTING to build a decent GPU than the entire R+D spend of ATI and Nvidia combined, across the entire history of both companies.

Yes, today, Intel integrated GPUs are in many (most?) Intel CPUs sold, and yes, the 2D side of Intel's drivers is largely good enough for simple Windows usage. Intel's hardware GPU architecture is largely stabilised, meaning that Intel should at least be able to support the main OSes that use its GPU without issue. Linux, Android, OSX, Windows- Intel's own drivers here should be infinitely better than anything any number of enthusiasts can cobble together from the documentation mentioned.

ATI has EVERY major mains-powered console under its belt (Wii U, PS4, Xbox One), and its GCN GPU architecture is going to dominate the marketplace. Nvidia has the need and skill to compete with ATI, at least in the PC space. No one has need for Intel whatsoever. If a user happens to own a device using the crappy Intel GPU, that user is clearly already satisfied with extremely substandard GPU functionality, and the need for 'weird' home-made drivers is the last thing on that user's mind.

When Intel does these documentation dumps, it is literally because the company is in such a horrid place that it has nothing to lose. Anyway, now that the industry is giving up (at last) on the twin atrocities of DirectX and OpenGL, and moving to a hardware-centric driver model, Intel's situation is about to get a whole lot worse.

Re:If Intel with BILLIONS can't make decent driver (1)

deviated_prevert (1146403) | about 4 months ago | (#45823053)

Caution: FUD attack. Here we go again with "OSS can never be as good at GPU acceleration." BOLLOCKS AND FUD. Tell that to my laptop, which is running an old Radeon mobile chip and Google Earth 7 in OpenGL mode on Linux faster than my newer DirectX Win7 machine with the same release cycle of Google Earth. STOP SPREADING BULLSHIT AND FUD about Linux GPU optimizations, period. WE KNOW BETTER.

Smart (-1)

Anonymous Coward | about 4 months ago | (#45822893)

This is Intel signalling that they intend to stay relevant in a post-Windows world.
