
Intel Open-Sources Broadwell GPU Driver & Indicates Major Silicon Changes

timothy posted about a year ago | from the come-on-inside dept.

An anonymous reader writes "Intel shipped open-source Broadwell graphics driver support for Linux this weekend. While building upon the existing Intel Linux GPU driver, the kernel driver changes are significant in size for Broadwell. Code comments from Intel indicate that these processors shipping in 2014 will have "some of the biggest changes we've seen on the execution and memory management side of the GPU" and "dwarf any other silicon iteration during my tenure, and certainly can compete with the likes of the gen3->gen4 changes." Come next year, Intel may be better able to take on AMD and NVIDIA discrete graphics solutions."

Intel is keeping pace (4, Informative)

Calibax (151875) | about a year ago | (#45320759)

It's not like AMD, nVidia, PowerVR, etc. are standing still. Every year brings better graphics, and Intel needs to keep pace.

But since they came late to the game, they have a patent minefield in front of them.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45320797)

Turnabout is fair play. After all, what did Intel do to AMD when they tried to break into the CPU market?

Re:Intel is keeping pace (2, Insightful)

shentino (1139071) | about a year ago | (#45321005)

An eye for an eye leaves the whole world blind.

Also, two wrongs don't make a right.

Re:Intel is keeping pace (4, Funny)

Jmc23 (2353706) | about a year ago | (#45321509)

An eye for an eye leaves the whole world blind.

Wouldn't that just turn everybody into pirates?

Also, two wrongs don't make a right.

3 lefts do!

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45322011)

"But three lefts make a right. What's your point?"

-- Tasha Mack, The Game (TV Series)

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45324129)

An eye for an eye leaves the whole world blind.

Also, two wrongs don't make a right.

And good guys wear black. Or at least Chuck Norris did.

Re:Intel is keeping pace (1)

Bengie (1121981) | about 10 months ago | (#45357795)

I would rather have the entire world blind than have only assholes able to see. The latter would leave the bad people in a superior position.

Re:Intel is keeping pace (1)

moozoo (1308855) | about a year ago | (#45320977)

they have a patent minefield in front of them.

nVidia and Intel have a patent agreement: Intel licensed nVidia's technology for $1.5 billion (over six years, I think). AMD and Intel have patent agreements with regard to CPU technologies, and some of those would apply to graphics (interconnects, memory, etc.). I doubt Intel and AMD would have trouble coming to an agreement on graphics patents.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45321013)

I'm pretty sure everyone has cross-licensed everyone's patents for CPU/GPU tech; otherwise we'd be seeing lawsuits every year.

Intel used to license the PowerVR stuff for the pathetic onboard video before they rolled their own.

Intel's onboard video solution has been catching up; it can now run 10-year-old games no problem, but it's still terrible at 5-year-old games.

The "standard" should be whether Saints Row IV and Final Fantasy XIV can run at 60fps at 1920x1080 with all settings turned on. Saints Row runs fine at 30fps or so with all the settings flipped on on a mid-level video card, whereas FFXIV gets maybe 15fps with everything flipped on.

Here's a fun fact... Windows 7's GPU compositing lets you use the dedicated GPU for the game but still send the output to the onboard GPU. There's a very minor frame rate loss, but it means you don't need Lucid Virtu.

Re:Intel is keeping pace (2, Insightful)

SScorpio (595836) | about a year ago | (#45321063)

Why 1080p @ 60fps? Both the PS4 and Xbone will only hit 30fps in the majority of games at 1080p. If Intel can bring its onboard graphics to parity with the new consoles that are just coming out, it will have eaten into AMD's APU lead, since Intel already crushes AMD when it comes to CPU performance.

Re:Intel is keeping pace (-1)

Anonymous Coward | about a year ago | (#45321541)

That's disgusting, my NES was 60fps (actually it was 50fps because it was PAL)!

We need to send a clear message to these faggots that 30fps is not acceptable, by not buying their disgusting games.

Re:Intel is keeping pace (3, Insightful)

citizenr (871508) | about a year ago | (#45322303)

Why 1080p @60fps? Both the PS4 and Xbone will only be 30fps at the majority of games at 1080p.

Not true. In fact both consoles will target 50-60Hz + vsync, but the X180 will do 720p (so last gen) while the PS4 will do 900-1080p (due to a better GPU).

Re:Intel is keeping pace (2)

jones_supa (887896) | about a year ago | (#45321125)

Intel used to licence the PowerVR stuff for the pathetic onboard video before they rolled their own.

That is not true. Some (not all) Atom platforms use PowerVR stuff; that's all. Intel has rolled plenty of in-house GPUs both before and after those.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45322387)

Intel did not roll their own stogie; they bought chips and tech from others.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45323491)

before they rolled their own.

what a stupid metaphor for something that clearly didn't need one.

Re:Intel is keeping pace (1)

morgauxo (974071) | about a year ago | (#45325491)

Of course they do. Why do you think a good graphics card costs so damn much?!?!

Re:Intel is keeping pace (2)

mobby_6kl (668092) | about a year ago | (#45321093)

Sure, but for now AMD and Nvidia seem to be happy rebadging previous-gen chips with new names and calling it a day. 2014 is almost here and still nobody knows anything about Maxwell, which was already supposed to be shipping by this point. With huge per generation improvements and a significant process advantage, Intel could really put the hurt on them in the lower end of the market, which is the majority of it.

Re:Intel is keeping pace (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#45321747)

No need to use the past tense. Even among the obviously gamer/enthusiast-slanted systems represented in the Steam Hardware Survey [steampowered.com], they place surprisingly well. Among people who don't care, buying a discrete video card went away some time ago, and Intel gets a default win on anything non-AMD.

Not a terribly thrilling market to dominate; but you make it up in volume, I imagine.

Re:Intel is keeping pace (1)

Anonymous Coward | about a year ago | (#45321187)

My last 4 builds used only Intel CPUs, mostly because of the HD integrated GPUs.

Yes, they are late, but they are the only ones not requiring me to have a binary blob in my system. Granted, my 3D perf is lousy...

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45323903)

Honestly, the AMD open-source drivers are fairly good IMHO, at least if you go with the previous generation.
And you do get VDPAU support, which enjoys broader application support than Intel's VAAPI.

Re:Intel is keeping pace (1)

smash (1351) | about a year ago | (#45321537)

Eventually though, GPUs become "good enough" until software catches up. I suspect we're getting close to that point now.

Re:Intel is keeping pace (2)

TheRaven64 (641858) | about a year ago | (#45323325)

There's likely to be a long plateau. Past a certain resolution, there's no visible difference and so you have a maximum size for a frame buffer. Past another threshold, there's no benefit in increasing geometry complexity because every polygon is smaller than a pixel in the final image, so you don't see any difference. No one cares about turning up the detail settings to maximum if they can't see that it's better. Then there are some small improvements, such as stereoscopic displays, but they just double the frame buffer size and do nothing much to the geometry complexity or lighting.

Rendering to a volumetric display (something that you can look inside, like a tank, or something that fills the entire room with a projection) massively increases the size of the frame buffer and the bandwidth required. Currently, the bandwidth for such displays is most of the problem. Even at a fairly low resolution of 1024x1024x1024 with 24-bit colour, you're talking roughly 80GB/s just to get 25 frames per second, which makes the 10Gb/s of Thunderbolt look somewhat anaemic.
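
A quick sanity check on that figure, as a minimal sketch in Python (assuming a raw, uncompressed frame of 3 bytes per voxel and no protocol overhead):

<ecode>
# Back-of-the-envelope bandwidth for a 1024^3 volumetric display at 24-bit colour,
# using only the numbers from the comment above.
voxels = 1024 ** 3              # 1,073,741,824 voxels per frame
bytes_per_voxel = 3             # 24-bit colour
frame_bytes = voxels * bytes_per_voxel
fps = 25
bandwidth = frame_bytes * fps   # bytes per second

print(f"{frame_bytes / 2**30:.1f} GiB per frame")   # 3.0 GiB
print(f"{bandwidth / 1e9:.0f} GB/s at {fps} fps")   # ~81 GB/s
</ecode>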

Re:Intel is keeping pace (1)

smash (1351) | about a year ago | (#45324759)

Yeah, i'm not counting 3d holographic/volumetric type display. Presumably that will be the next generation of device, once we're "done" with the current 2d display method of rendering and some new display tech comes out.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45321805)

Except that, relative to Intel, they are. AMD and nVidia have both made only fairly minor performance gains recently, generally while keeping power consumption the same. Meanwhile, Intel has been gaining performance hand over fist (the jump to Sandy Bridge, for example, approximately quadrupled performance, and the jump to Haswell tripled it), all while lowering power consumption.

Intel very much is gaining.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45322431)

Wrong in so many ways, it's not even funny.

Re:Intel is keeping pace (1)

smash (1351) | about a year ago | (#45324771)

Not really, it's not. Intel have been making massive performance improvements in relative terms (2-4x previous generations for a couple of generations now), whereas Nvidia and AMD are making far smaller leaps in percentage terms. Yes, they're still a long way ahead, but the gap is closing gradually. And of course mobile (and thus performance PER WATT) is becoming a lot more important.

Re:Intel is keeping pace (0)

Anonymous Coward | about a year ago | (#45326997)

Yes, it is.
Ivy Bridge's HD4000 was a 7W IGP with 16 16-wide EUs.
Haswell's HD4600 is a 7W IGP with 20 16-wide EUs at roughly the same clocks, and it benches about 25% faster than the HD4000.
So performance/EU/clock stayed about the same, power efficiency increased ~25%, and Intel decided to spend that power budget on making the GPU bigger.

GT3e, a.k.a. Iris Pro 5200, is the only Haswell IGP that's anywhere close to 3x HD4000 performance (it benches more like 2.5x). How did they do that? 40 EUs and a 15W TDP.
Yep, they made it twice as big.

Now look at the % of die occupied by GT3 on a 4-core Haswell.

How many more times do you think they can pull that stunt until they have a GPU with a few CPU cores stuck onto a side?
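
For readers who want the arithmetic spelled out, here is a minimal sketch using only the figures quoted in this comment (the EU counts, per-GPU power estimates, and benchmark ratios are the commenter's, not official Intel specifications):

<ecode>
# Scaling arithmetic from the comment above; all inputs are the commenter's estimates.
hd4000 = {"eus": 16, "tdp_w": 7, "perf": 1.00}   # baseline (Ivy Bridge GT2)
hd4600 = {"eus": 20, "tdp_w": 7, "perf": 1.25}   # "benches about 25% faster"
gt3e   = {"eus": 40, "tdp_w": 15, "perf": 2.50}  # Iris Pro 5200, "more like 2.5x"

for name, gpu in [("HD4600", hd4600), ("Iris Pro 5200", gt3e)]:
    per_eu = (gpu["perf"] / gpu["eus"]) / (hd4000["perf"] / hd4000["eus"])
    per_watt = (gpu["perf"] / gpu["tdp_w"]) / (hd4000["perf"] / hd4000["tdp_w"])
    print(f"{name}: {per_eu:.2f}x perf/EU, {per_watt:.2f}x perf/W vs HD4000")

# HD4600:        1.00x perf/EU, 1.25x perf/W -> the gain came from adding EUs
# Iris Pro 5200: 1.00x perf/EU, 1.17x perf/W -> ~2.5x perf mostly from 2.5x the EUs and 2x the power
</ecode>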

Re:Intel is keeping pace (1)

smash (1351) | about a year ago | (#45332599)

How many more times do you think they can pull that stunt until they have a GPU with a few CPU cores stuck onto a side?

What's to say this isn't the plan?

Whether it is called a GPU or a CPU, if Intel makes it and ships it, I don't think they particularly care.

Re:Intel is keeping pace (2)

UnknownSoldier (67820) | about a year ago | (#45322639)

Intel has never been competitive with discrete GPUs from nVidia, AMD.

Oblig.: http://www.dvhardware.net/news/nvidia_intel_insides_gpu_santa.jpg [dvhardware.net]

It seems like they are competitive now (1)

SuperKendall (25149) | about a year ago | (#45322725)

Intel has never been competitive with discrete GPUs from nVidia, AMD.

In doing research on the new Macbook Pros, it sure looked like the performance of the Intel Iris Pro shipping in the Macbook Pro 15" is pretty competitive with the nVidia 650M.

Re:It seems like they are competitive now (2)

FlyHelicopters (1540845) | about a year ago | (#45322893)

Maybe, but that is comparing low power notebook chips. Try comparing it in desktops and the picture changes by quite a bit. Of course, it is also worth considering that a modern Haswell CPU uses 84W of power while a modern AMD GPU uses 300W of power. If you gave Intel 300W of power to work with, I'm sure they could come up with something impressive.

Re:It seems like they are competitive now (2)

TheRaven64 (641858) | about a year ago | (#45323331)

I don't know if you've noticed, but desktops are a niche market now. We're almost five years past the point where laptop sales passed desktop sales, and that trend hasn't changed. Laptop parts are where the high volumes are and that's where the big profits come from. Ask SGI some time how focussing on the high end and ignoring the mass market works as a strategy in the GPU business.

Re:It seems like they are competitive now (1)

FlyHelicopters (1540845) | about a year ago | (#45326025)

You are of course correct that desktop sales have slowed, but the question becomes, "how many people are just upgrading their desktops rather than replacing them?"

You can't really upgrade a notebook; my desktop case, however, is more than 6 years old and has seen everything replaced in that time frame, including the power supply (I needed more power and more PCI-E connectors). That doesn't count as a desktop sale, but I sure spent thousands of dollars on parts.

In any case, for high-end graphics, you end up with high-powered desktop graphics cards. For GPU processing, you also have high-powered desktop cards, because they can compute far faster than anything in a notebook.

Again, comparing what is possible with a sub-50W Intel GPU (since some of that 84W goes to the CPU) against the 300W that a desktop card can pull isn't really reasonable. It remains a fair point that if you gave Intel a 300W power budget just for the GPU side, they could probably come up with something very impressive.

Re:It seems like they are competitive now (1)

TheRaven64 (641858) | about a year ago | (#45326353)

You are of course correct that desktop sales have slowed, but the question becomes, "how many people are just upgrading their desktops rather than replacing them?"

No it doesn't. Desktop upgrades were always a tiny niche for hobbyists.

Re:It seems like they are competitive now (1)

FlyHelicopters (1540845) | about a year ago | (#45326983)

For a "tiny niche", it sure seems like there are a lot of companies selling hardware for it. NewEgg has become quite a large business largely based on selling hardware to such people.

Re:It seems like they are competitive now (2)

TheRaven64 (641858) | about a year ago | (#45327189)

NewEgg also sells assembled machines. Want to take a guess at what proportion of their total sales each make up? If you don't believe me, go and look up the numbers. Last ones I read, well under 5% of all computers sold ever received an after-market upgrade, and most of those were just RAM upgrades.

When you're talking about a market with a billion or so sales every year, it's not surprising that there are companies that do well catering to a fraction of a percent of the total market, but that doesn't mean that they're statistically relevant to the overall shape of the market.

Re:It seems like they are competitive now (0)

Anonymous Coward | about a year ago | (#45332029)

Never underestimate the ability of the enthusiast market to convince itself that it is the entire industry.

Re:It seems like they are competitive now (1)

SuperKendall (25149) | about a year ago | (#45327257)

Maybe, but that is comparing low power notebook chips.

Since that's the real market now, though, I think that's where it should be compared - and the other question is: can it reasonably run a high-performance game? It meets that metric now as well.

For me, that's really what I mean by "competitive": if I'm given a choice between an Intel Iris Pro and some discrete GPU, I won't necessarily pick the discrete part. That is true right now; you can get decent enough performance out of the Intel chipset that even a gamer would not automatically opt to spend more for a GPU.

Re:It seems like they are competitive now (1)

EdZ (755139) | about a year ago | (#45323377)

When your flagship GPU is about level with a mid-range mobile part from a year and a half ago...

Re:It seems like they are competitive now (1)

smash (1351) | about a year ago | (#45324791)

If you're referring to the Iris GT3e - the GT3e is a mobile part itself. And it's part of the CPU, so it is competitive with the GT650M at around HALF THE POWER. In other words, if Intel were to enable it for multi-socket SMP, they could have a dual quad-core system with two GT3e Iris Pro GPUs in a similar thermal/power envelope to the previous model machines with Ivy Bridge + Nvidia GT650M.

Meh (0, Flamebait)

Anonymous Coward | about a year ago | (#45320767)

Don't care. With Intel I have to pay extra for their giant ad campaign.

Is AMD always the fastest? No. Is AMD always the best value per dollar? Yes.

Re:Meh (1, Insightful)

pellik (193063) | about a year ago | (#45321775)

For some time now, unless you are buying a $30 three-generation-old CPU, Intel is as good as or often better than AMD in performance per dollar. Doubly so if you're buying to overclock.

Re:Meh (3, Insightful)

Rockoon (1252108) | about a year ago | (#45322735)

Sure, if you ignore either the price or the performance [cpubenchmark.net] you can imagine your statement to be true.

Re:Meh (2)

Rockoon (1252108) | about a year ago | (#45323917)

Just to clarify, breaking up the list into $20 segments:

$80..$100: The highest benchmark score for Intel is 3781 with the G3430, and the highest benchmark score for AMD is 4353 with the AMD A8-5600K.
$100..$120: The highest benchmark score for Intel is 4399 with the i3-3225, and the highest benchmark score for AMD is 6401 with the FX-6300.
$120..$140: The highest benchmark score for Intel is 4928 with the i3-4130, and the highest benchmark score for AMD is 6609 with the FX-8120.
$140..$160: The highest benchmark score for Intel is 4831 with the i3-3250, and the highest benchmark score for AMD is 8134 with the FX-8320.
$160..$180: The highest benchmark score for Intel is 6202 with the i5-3350P. AMD has no parts in this price segment but still wins using any of the previous 3.
$180..$200: The highest benchmark score for Intel is 7018 with the i5-4570, and the highest benchmark score for AMD is 9082 with the FX-8350.

Intel "wins" most of the remaining segments by default like it did the $160..$180 segment, but doesn't surpass the $180..$200 winner in performance until you spend $264.99 on the Xeon E3-1240 V2.

So the facts are that AMD not only continues to win the performance-per-dollar comparison, they are still completely dominating it. Sure, if you are going to spend $300+ just on a CPU then Intel won't let you down, but it takes someone very bad at math to claim that Intel is even close to competing on performance per dollar. BOOM! HEADSHOT
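
Reducing the same figures to a single ratio makes the claim easier to eyeball; the sketch below uses the top of each $20 bracket as a stand-in price, since exact street prices aren't given above:

<ecode>
# Points-per-dollar sketch; scores are the PassMark figures quoted above,
# prices are approximated by the upper end of each bracket.
parts = [
    ("Intel G3430",   100, 3781), ("AMD A8-5600K", 100, 4353),
    ("Intel i3-3225", 120, 4399), ("AMD FX-6300",  120, 6401),
    ("Intel i3-4130", 140, 4928), ("AMD FX-8120",  140, 6609),
    ("Intel i3-3250", 160, 4831), ("AMD FX-8320",  160, 8134),
    ("Intel i5-4570", 200, 7018), ("AMD FX-8350",  200, 9082),
]
for name, price, score in parts:
    print(f"{name:>14}: {score / price:5.1f} points per dollar")
</ecode>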

Re:Meh (0)

Anonymous Coward | about a year ago | (#45324809)

To be fair, CPU Mark is one benchmark. AMD's high core counts make them ideal for many applications, but older software or applications not suited to a multi-core design may benefit more from other designs.

Re:Meh (1)

smash (1351) | about a year ago | (#45324819)

Which benchmark?

Re:Meh (1)

KingMotley (944240) | about a year ago | (#45326831)

Of course, you could then look at this benchmark, which is more indicative of what people actually run: http://www.cpubenchmark.net/singleThread.html [cpubenchmark.net]

$80..$100: The highest benchmark score for Intel is 1,988 with the G3430, and the highest benchmark score for AMD is 1,385 with the AMD A8-5600K.
$100..$120: The highest benchmark score for Intel is 1,797 with the i3-3240 (the G3430 is the better buy), and the highest benchmark score for AMD is 1,526 with the FX-4350.
$120..$140: The highest benchmark score for Intel is 1,859 with the i3-3245, and the highest benchmark score for AMD is so far down the list I got bored.
The same trend continues in every bracket.

Re:Meh (1)

Rockoon (1252108) | about a year ago | (#45329721)

Of course, you could then look at this benchmark is more indicative of what people actually run

Word processors, web browsing, etc.: that's what you were thinking about, right?

I prefer to only consider what people actually wait for. You are of course right that most things people do on a computer are single-threaded, but nearly all of those very same things aren't waited for by users, because computers were more than fast enough for those tasks a decade ago. More performance has little to no benefit at all on those tasks.

and the highest benchmark score for AMD is so far down the list I got bored.

You got bored before looking at the first AMD part in the price range? Really?

"I see your true colors shining through"

(A) Don't be a dishonest fuck about even small details (like you "getting bored"), especially so obviously, because nobody will believe the shit you are being honest about when you are so obviously dishonest elsewhere. We know that you are being a dishonest fuck because you then went on with "same trend continues in every bracket" -- really, you all of a sudden weren't bored anymore, but amazingly also didn't provide data? That's twice now that you were being a dishonest fuck.

(B) Calling your data into question, I examined the data you provided in the same sentence about the Intel parts: you are wrong about which $120..$140 Intel part is the best in that price range for single-threaded score, based on the very link you provided. There is at least one faster Intel part in the price range, but then I "got bored", so there might be more that you missed.

Sloppy sloppy sloppy, and dishonest, dishonest, dishonest. Why should anyone pay attention to you?

P.S.: Learn how to use a fucking spreadsheet (cut and paste the whole page, fool). Then you avoid being sloppy, don't need to be dishonest, and don't have to guess about brackets that you made assertive statements about while being "too bored" to actually put yourself in a position where you would know what you were talking about.

Re:Meh (0)

Anonymous Coward | about a year ago | (#45332243)

Oh waaaah, somebody called poor widdle Rockoon out on his pet benchmark not being very representative of the real world. Congrats dude, you found something which scores a ton of crappy cores highly even though most consumers don't run workloads like that. Great, that lets you indulge in AMD fanwankery. Go wank some more, ok? But do it in private. Don't post.

I'm so sorry you were offended that somebody got "bored", but the stench of non-objectivity is coming from inside your house, not KingMotley's. You're the one who just wrote a giant ad hominem post attacking KM not for the quality of his arguments, but because you don't like the cut of his jib.

Also, cpubenchmark.net is terrible in so many ways. It's laughable that you linked their "value" chart in the first place. The top scorer is a 2008 AMD Turion, and it scores that high because cpubenchmark.net scraped a single retailer offering that old out of production piece of shit for $8.95. But it's not just the price data which is garbage, it's the performance data too. It's just a database of user provided PassMark numbers, so there's no objective repeatable methodology, plenty of opportunity for fanboys of any stripe to distort the results by submitting fake reports, and so on. Also, it's a single proprietary synthetic benchmark app from a commercial operation, and these are almost always terrible. Using PassMark as the sole figure of merit (or even any figure of merit) is really, really dumb. The numbers at cpubenchmark.net are basically noise.

AMD fanboyism has become seriously pathetic. Cherry pick benchmarks, get hurt and offended when people point out they're not representative, whine at the world at large for their intransigence in not choosing AMD.

The OSS community needs to think about some things (-1)

Anonymous Coward | about a year ago | (#45320779)

Intel, Google, HP, Red Hat, and IBM are all substantial contributors to open source software. They are also major providers of technology to the federal government's Intelligence wing and the greater Department of Defense. If we pride ourselves on socializing the ability to leverage powerful technologies for the greater good, then why are we allowing these groups to help us with one hand and smear feces on us with the other? I'm deeply troubled by how much of the OSS community has lately turned a blind eye to this serious issue.

Re:The OSS community needs to think about some thi (0)

Anonymous Coward | about a year ago | (#45320839)

The OSS community doesn't behave that way. I believe you're referring to the LEECH community. (Leeching, Egotistical, and Envious Conglomeration of Hatred)

Re:The OSS community needs to think about some thi (0)

Anonymous Coward | about a year ago | (#45320843)

If the OSS community decides to ignore every company that sells product to the US government, what will the community work on? If you exclude Intel, AMD, nVidia, Apple, HP, Dell there will be little CPU/Graphics stuff to work on. If you exclude Microsoft, Adobe, Red Hat (actually all of Linux), Apple there will be few software products to write.

Re: The OSS community needs to think about some th (-1)

Anonymous Coward | about a year ago | (#45320875)

It's not about selling to the US government; it's the Departments of Defense, State, and Justice that are the issue. If Dell sells 400 servers to the Dept of Education to help improve efficiency, I have no qualms with that, but if they sell 10 to the DoD to help monitor communications or run drone sites, I have a major issue with that. Taking the work of people who never intended it to be used for evil and doing just that should be something we are all up in arms over.

Re:The OSS community needs to think about some thi (-1)

Anonymous Coward | about a year ago | (#45320851)

A drug lord decides to start donating money to charity. Does that make up for the atrocities they commit in the name of profit? No. Does it still do good in the world? Yes. And the contributions of those corporations can be audited by anyone.

Re: The OSS community needs to think about some th (-1)

Anonymous Coward | about a year ago | (#45320883)

So the ends justify the means. So the NSA's behavior is OK as long as a life is saved.

Re: The OSS community needs to think about some th (1)

Anonymous Coward | about a year ago | (#45321173)

Wow. You're spectacularly bad at reading comprehension. ("Spectacularly" means "really, really", and "reading comprehension" means "understanding what you read". "Understanding" means, like, "getting it".)

Meh (2)

Sable Drakon (831800) | about a year ago | (#45320847)

For low-end and some mid-range stuff, sure. But Intel is never going to be able to get above that so long as nVidia and AMD keep cranking out new components year after year. All Intel should be striving for is decent 4K@60 support, making sure multi-monitor systems don't break, and compositing that works as intended.

Competition is more than performance (2)

tuppe666 (904118) | about a year ago | (#45320963)

For low and some mid-range stuff, sure. But Intel is never going to be able to get above that so long as nVidia and AMD keep cranking out new components year after year

Personally I love the thought (and so do the market and manufacturers) of getting a more powerful, fanless, cheap discreet GPU supported by reliable first-party open source developers. That would give me a massive boost over my current APU performance. In reality it's only a few specialists (albeit more newsworthy ones) that really buy into the high end anyway.

Re:Competition is more than performance (1, Insightful)

Sable Drakon (831800) | about a year ago | (#45321043)

And what Intel is offering right now with Haswell is a massive improvement over any APU that AMD has produced to date. Which shouldn't be much of a surprise considering AMD's constant spectacular disappointments. But you're never going to get a cheap fanless discrete GPU with the power to hold up against a GTX660 or better hardware.

Re:Competition is more than performance (2)

Kjella (173770) | about a year ago | (#45322553)

But you're never going to get a cheap fanless discrete GPU with the power to hold up against a GTX660 or better hardware.

Never? Like how cheap fanless GPUs from 2013 don't beat the crap out of any high-end graphics card from 1998 never? Don't go there. But yes, for any given level of technology you can always do more with a 250W+ dGPU power budget than with a <10W fanless thing. But do gamers need it? From the Steam hardware survey, 98.5% of users play at 1920x1200 or below, and 16% of them are on Intel graphics. Not every game is a Crysis; many of them simply play well on any recent dGPU but suck just a little bit too much on integrated graphics. That's the market Intel's after: if most of your games play decently at 1080p on medium settings, it's "good enough" for many. Sure, you could invest in a $200-600 graphics card and bump that up to ultra/enthusiast level, but most people won't bother.

Re:Competition is more than performance (2, Informative)

Anonymous Coward | about a year ago | (#45321209)

discrete = separate

discreet = quiet, unobtrusive

Re:Competition is more than performance (1)

prionic6 (858109) | about a year ago | (#45334225)

I like discreet GPUs...

Re: Competition is more than performance (1)

BESTouff (531293) | about a year ago | (#45323117)

mod parent up !

Re:Meh (0)

Anonymous Coward | about a year ago | (#45321221)

never

Never bet against integration. Consoles have already shown the way; the PS4 will have 8GB of GDDR5 and use it for system RAM. That's integration. Do that with a PeeCee and you get the benefit of Intel's superior fabs making GPUs, combined with high-speed RAM in abundance, at parity with (as in equivalent to) what discrete parts get.

Discrete GPUs have an expiration date because integration always wins in microelectronics; time is on Intel's side. Between now and then we'll just have to suffer with excellent video codecs and low-cost, fully open source Intel GPUs that can't play Crysis 15.

Darn. o_O

Re:Meh (1)

smash (1351) | about a year ago | (#45321579)

Exactly. Just look at what happened to discrete math co-processors, discrete sound cards, discrete network cards, etc. It's only a matter of time. Using the CPU to do stuff in software is always going to be more flexible, and as the above poster mentions, it's only a matter of time before the CPU is fast enough that most people aren't willing to pay extra for discrete GPUs. I'm betting on another 1-2 generations before this is the case. Haswell is pretty close for many people already - especially in portables, and portables are out-selling desktops these days.

Re:Meh (2, Interesting)

Anonymous Coward | about a year ago | (#45321841)

Discrete math coprocessors are actually the interesting case, because they were integrated, and then un-integrated again. We just re-named them "GPUs" (that is, after all, all a GPU is: a very parallel vector maths processor with a tiny bit of rasterisation hardware tacked onto it). That said, yes, I fully expect that integration of GPUs is only going to continue.

Re:Meh (0)

Anonymous Coward | about a year ago | (#45332337)

Somebody please mod parent back down, because it's not true at all. The original discrete math coprocessors were not very much like GPUs. Especially not like the original late-1990s GPUs.

Discrete math coprocessor: usually very tightly coupled to the CPU, to the extent that FP math instructions appeared to be ordinary instructions to the programmer, FP registers were thread context just like the integer registers, and so forth.

1990s 3D accelerator: a peripheral device which did math, but did so in a fixed-function manner. You'd use the CPU to shove vertex and texture data at it (through PIO, not instructions which looked native) and it would do a fixed mathematical transformation on that data (triangle rasterization with texturing and perhaps some blending modes). You could not use them to perform arbitrary calculations.

Modern 3D accelerator: a peripheral device which works somewhat like the 1990s 3D accelerator except that now much of the fixed function 3D pipeline has been replaced by miniature programmable CPUs. Now you shove vertex and texture data plus "shader programs" at it. You can now use them for fairly arbitrary computational tasks, but they still don't work very much like a discrete math coprocessor used to. Instead of writing code that's fully integrated into your application at the machine instruction level, you're writing tiny compute-only threads for a fundamentally different type of computer.

Re:Meh (1)

FlyHelicopters (1540845) | about a year ago | (#45322911)

While I never want to say "never", it will be a while. The fact is that rendering life-like graphics in real time at 60fps across 8+ million pixels takes a ton of processing power, far more than you can fit into a small 84W CPU+GPU.

Re:Meh (2)

DMiax (915735) | about a year ago | (#45323427)

Having the GPU integrated into the same chip as the CPU is not the same as emulating it.

Re:Meh (1)

smash (1351) | about a year ago | (#45321557)

Nah, if Intel can ramp up fast enough in the next couple of years they will reach "good enough" status and software won't demand better for a few years. The average user doesn't have a 4K display (even on Steam - which is skewed towards gamers - the most common resolution is either 1680x1050 or 1920x1080 at the moment).

Re:Meh (1)

bemymonkey (1244086) | about a year ago | (#45322849)

I wouldn't be surprised if compositing on 4K@60FPS works just fine already, provided the machine has display outputs that support the resolution and refresh rate. 2560x1600 via DisplayPort, for instance, was already available on Core2Duo laptops with 4500MHD graphics (that's... 4 generations before Haswell)...

Looks like even Ivy Bridge's HD4000 supports 4K: http://ultrabooknews.com/2012/10/31/new-intel-hd-3000-and-hd-4000-drivers-support-for-windows-8-4k-ultra-hd-opengl-4-0-and-more/ [ultrabooknews.com]

Re:Meh (2)

TheRaven64 (641858) | about a year ago | (#45323373)

I'm sure Intel is deeply disappointed to only have 60% of the GPU market. The board and shareholders must be crying all the way to the bank.

The problem with your line of reasoning is that it's exactly what SGI said in the mid '90s: that other companies were welcome to the low-end commodity GPU market, and SGI would keep the profitable high-end graphical workstation market. Unfortunately for them, the cheaper parts kept improving and gradually passed a lot of people's thresholds for 'good enough'. Intel sells 4 GPUs for every one that nVidia sells and 3 for every one that AMD sells. That gives them a lot of money to spend on R&D.

Another relevant object lesson is FireWire vs USB. FireWire was better by almost every objective measure, except that it was a discrete part that added $1 to the cost of a motherboard, whereas USB came for free with the southbridge chip. For most people, the comparatively slow speed and high CPU usage of USB were still good enough. FireWire was relegated to a niche. FireWire was still faster than USB (in practice, if not on paper), and FireWire 800 was a lot faster, but by then the number of boards shipping with FireWire was small, so it lost on economies of scale and that $1 became closer to $5 for the smaller production runs. No one had to choose between FireWire and USB; they chose between USB alone or USB plus FireWire, and for most users the extra cost of adding FireWire wasn't worth it. The same choice is happening today: do you want an Intel GPU, or an Intel GPU and an nVidia GPU? If the former is good enough, then why would you bother with both? In either case, Intel gets some money for their R&D department to spend on the next generation; nVidia only does if you opt for both.
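
As a quick consistency check on the two figures in this comment (unit share only, using the stated 4:1 and 3:1 shipment ratios and ignoring every other vendor):

<ecode>
# Implied Intel unit share if Intel ships 4 GPUs per nVidia GPU and 3 per AMD GPU.
intel = 12.0             # arbitrary unit chosen so both ratios divide evenly
nvidia = intel / 4
amd = intel / 3
share = intel / (intel + nvidia + amd)
print(f"Implied Intel share: {share:.0%}")   # ~63%, in line with the ~60% figure above
</ecode>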

Re:Meh (1)

smash (1351) | about a year ago | (#45324887)

Seriously, I'd LOVE to see Intel enable multi-socket for Broadwell mobile CPUs. Can you imagine - 2x quad-core CPUs, 2x integrated Intel graphics (some variant of SLI or similar GPU load sharing). 8 cores. 16 threads. ~70-90W TDP. You could stick that shit in a laptop - when running on battery, just turn off a socket. When running on AC, it would fly.

what about high speed video ram channels? (0)

Joe_Dragon (2206452) | about a year ago | (#45320903)

What about high-speed video RAM channels? Wait, no, we have to use the same slow pool of slower system RAM.

Re:what about high speed video ram channels? (1)

Anonymous Coward | about a year ago | (#45321503)

That's exactly what Crystal Well (a.k.a. Iris Pro) is: 128 MiB of very fast RAM with latency about half that of DRAM.

Re:what about high speed video ram channels? (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#45321837)

That's exactly what Crystal Well (a.k.a. Iris Pro) is: 128 MiB of very fast RAM with latency about half that of DRAM.

In typical 'Intel - because we can' product differentiation, they've unfortunately gone and made that bit tricky to get: apparently, only their R-series Haswells, the BGA ones, have the eDRAM on the desktop. On the mobile side, it's reserved for the highest-end i7s; I don't know if any of those are LGAs.

I don't doubt that it makes economic sense; but Intel is continuing their annoying policy of deliberately having no ideal low-to-midrange part: if you go for a lower-performance CPU, as a price-sensitive buyer would, they simply don't offer a GPU that isn't merely phoning it in. If you buy a screamingly expensive model, you get their fastest GPUs, which are OK but not great (and, with some CPU-heavy exceptions, quite possibly not good enough for the people who buy $500+ i7s).

It probably isn't in their interest; but if they actually were looking to put a nail in the coffin of the low-end discrete GPU market, they'd offer at least one i3 or i5 with full GPU punch.

One change I want to see (4, Insightful)

msobkow (48369) | about a year ago | (#45321765)

There is only one change I'd like to see made sooner rather than later:

Stop using my main memory as a video buffer!!!

The main reason I opt for discrete graphics solutions is not because of the performance of the graphics, but the lack of main memory throughput degradation. I build boxes to compute, not sling graphics.

Re:One change I want to see (0)

Anonymous Coward | about a year ago | (#45322721)

No wonder Intel limits their two-memory-channel offerings to 4 cores. They might as well increase the thread count and limit it to 2 cores for the main-memory frame buffer scenarios, if the contention is large enough.

Re:One change I want to see (1)

Rockoon (1252108) | about a year ago | (#45322753)

So you want to waste your video memory when you aren't using the GPU?

Re:One change I want to see (1)

msobkow (48369) | about a year ago | (#45323413)

YES!!!

Re:One change I want to see (0)

Anonymous Coward | about a year ago | (#45324425)

Bad idea, pretty much in line with Windows pre-Vista not using your memory so you had high "Free Memory" numbers.

The reason is that it's better to have 50 Gbps unified bandwidth than 25+25 Gbps for CPU and GPU. Your compute box can't take advantage of the fact that your GPU memory bus is rather idle since that's an entirely different bus. On new AMD designs, however, an idle GPU means extra bandwidth for the CPU on the shared bus.

Of course, historically the benefit of adding a discrete card was that you also added extra memory and extra bandwidth, and more is generally better. But that's not blank slate thinking, that's incremental to an existing setup.

Re:One change I want to see (1)

smash (1351) | about a year ago | (#45324925)

Better option: just increase main memory bandwidth, and then everything gets better.

Re:One change I want to see (1)

Anonymous Coward | about a year ago | (#45325419)

Yeah, let's get expensive dual-ported GDDR5 for main memory. Good idea.

Or, you know, we can make the hard part expensive, and the easy part 16 gigabytes...

Re:One change I want to see (0)

Anonymous Coward | about a year ago | (#45326959)

Like it or not, the PS4 is to have unified memory for the CPU and GPU - 8GB worth of GDDR5... They give it plenty of bandwidth to do so.
The beauty of this is not having to copy data/textures between the two systems over slow PCIe.

Re:One change I want to see (1)

alexo (9335) | about a year ago | (#45331595)

There is only one change I'd like to see made sooner rather than later:

Stop using my main memory as a video buffer!!!

The main reason I opt for discrete graphics solutions is not because of the performance of the graphics, but the lack of main memory throughput degradation. I build boxes to compute, not sling graphics.

Once you start thinking of the GPU as a math coprocessor (that incidentally also slings graphics very well), your views on the subject may change.

intel gma (0)

Anonymous Coward | about a year ago | (#45321817)

Woo!!! Who wants a 740? Wake me when they manage to outrun an 8800GT.

Re:intel gma (0)

Anonymous Coward | about a year ago | (#45322119)

*poke* Wake up already.

The HD2000 series from 3 generations ago already beat your 8800GT. The current Iris 5200 sits between a GeForce 9800 and a GeForce 280 in terms of performance.

Re:intel gma (2)

thesupraman (179040) | about a year ago | (#45322755)

You really REALLY need to look at what you are testing.

We are heavy users of OpenGL, and care critically about its performance. And from that point of view, you are very VERY wrong. All current Intel GMA implementations (even the super-rare super-cache based ones) are terribly, terribly slow compared even to an old 8800GT. We are talking significantly less than half the performance in many more advanced uses.

Yes, they can flat-shade a limited number of polys quite well, and even do a little multitexturing, but it's not 2001 any more... we expect a little more these days.

Hit them with a few more advanced techniques and they really hit the wall, fast. Not quite as fast as earlier GMAs, of course, but certainly not comparable to real hardware.

Re:intel gma (2)

FlyHelicopters (1540845) | about a year ago | (#45322931)

In fairness, everyone likes to compare Intel's GPU that has to fit into the CPU die and use perhaps 15W of power against an AMD or nVidia GPU that can use 150W or more of power. There is just no comparison. Give Intel 150W of power to play with and I'm sure they could do something interesting with it.

Re:intel gma (1)

Sockatume (732728) | about a year ago | (#45323639)

Having owned both I'd rate the HD3000 below an nVidia 7600GT in practice. It'll really move on older Source stuff but it starts to struggle even on HL2E1 unless you turn off a lot of the shinies. That said, being able to get a playable framerate and reasonably authentic looks out of modern games is a huge leap for laptop performance. I'm really impressed with the new era of laptop components from AMD and Intel alike.

Re:intel gma (1)

unrtst (777550) | about a year ago | (#45333451)

The HD2000 series from 3 generations ago already beat your 8800GT. The current Iris 5200 sits between a GeForce 9800 and a GeForce 280 in terms of performance.

You sure? Got any benchmark comparisons? I'm honestly curious, because the comparisons I've seen don't jibe. For example:
http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107-7.html [tomshardware.com]

That page shows the HD 2000 on par with cards like the Nvidia FX 5800 or ATI X1400.
The HD 3000 is shown around the Nvidia 6600 GT or ATI X1600 PRO.
The HD 4000 is shown around the Nvidia 6800 GT or 7600 GT, or the ATI X800 XT or HD 3650.
The HD 2500, HD 4200, HD 4400, HD 4600, HD 5000, Iris 5100, and Iris 5200 are not listed there.

I know that's not a benchmark, and I was curious, so I looked up some random PassMark G3D scores:
PassMark - G3D Mark
4255 Radeon 7870
4116 GeForce GTX 660
1677 Radeon 5770 (came with mac pro in 2012)
1572 GeForce GT 750M (what's in macbook pro 15" now)
1288 GeForce 640
922 Intel Iris Pro 5200
757 GeForce 8800 GT
718 GeForce 9800 GT
711 Radeon HD 5570
654 GeForce GT 240
632 Radeon HD 2900 PRO
628 Intel Iris 5100
606 GeForce 8800 GTS
599 Intel HD 4600
598 Intel HD 5200
544 Radeon HD 4670
515 Intel HD 5000
490 Intel HD 4400
487 GeForce GT 335M
477 Intel HD 4600
476 Radeon HD 7550M
461 Intel HD 4000
306 Intel HD 3000
216 GeForce 7900 GS
208 Radeon HD 7340

The Iris Pro 5200 looks alright, but it's far from common (the MacBook Pro is the only line I can find with one in a laptop), and I really doubt an HD 2000 is going to compare at all with an 8800GT. The above benchmark isn't the greatest, but it should get the ballpark right.
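
To put the parent post's claim in perspective, here is a minimal sketch computing a few ratios straight from the PassMark numbers listed above (the scores are as quoted; PassMark's methodological caveats still apply):

<ecode>
# Relative G3D scores from the list above, with the GeForce 8800 GT as the baseline.
scores = {
    "GeForce 8800 GT": 757,
    "Intel Iris Pro 5200": 922,
    "Intel HD 4000": 461,
    "Intel HD 3000": 306,
}
baseline = scores["GeForce 8800 GT"]
for name, score in scores.items():
    print(f"{name:>20}: {score / baseline:.2f}x the 8800 GT")

# Only the Iris Pro 5200 clears the 8800 GT (~1.22x); the HD 3000 manages ~0.40x,
# which is why the "HD2000 beats an 8800GT" claim looks doubtful.
</ecode>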

Thank God for this news... (1)

hackus (159037) | about a year ago | (#45322287)

I am so sick and tired of crap graphics on LINUX that it isn't funny.

A fully open source solution from Intel and perhaps AMD would absolutely destroy Nvidia in the LINUX space.

I like the efforts AMD has made so far, and I applaud them for it, but it took them way too long.

If Intel can come out with a better GPU, MESA would be able to achieve OpenGL 4+ compatibility much faster.

Nvidia mostly, and to some extent AMD, have destroyed LINUX's ability to get onto the desktop.

Part of this I think is due to board collusion between Nvidia and Microsoft.

I would be a happy camper if I awakened the next morning and Microsoft and Nvidia's stock was officially DELISTED.

-Hack

Re:Thank God for this news... (2)

FlyHelicopters (1540845) | about a year ago | (#45322925)

Since Linux is about 2% of the total installed OS market, and since the percentage of THAT market that plays games is limited, I doubt that Intel and AMD care very much about it. Linux on the desktop is a fantasy; I recall hearing "year of the Linux desktop" back in 1994, and it still hasn't happened...

Re:Thank God for this news... (1)

spire3661 (1038968) | about a year ago | (#45322963)

You are right, Linux desktop is a dead end. Linux APPLIANCES are going to be huge.

Re:Thank God for this news... (2)

FlyHelicopters (1540845) | about a year ago | (#45323001)

Maybe, but once it moves from "Linux desktop computer that you can do anything with" to "Linux appliance that you can't make any unapproved changes to", the difference becomes academic.

Re:Thank God for this news... (0)

Anonymous Coward | about a year ago | (#45326115)

Except that many Android and all Chrome devices are rootable/have developer mode so you can run what you like on them. No sign of this changing.

Re:Thank God for this news... (1)

spire3661 (1038968) | about a year ago | (#45329417)

I don't think you understand how fast the floor is rising. The ONLY people that can get away with locking hardware in the consumer space right now are companies that enjoy government monopoly protection (patent, copyright, telecom/cable).

Re:Thank God for this news... (1)

FlyHelicopters (1540845) | about a year ago | (#45329863)

I think you overestimate the number of people who care. Apple has one of the most closed ecosystems in the world, but clearly it doesn't matter. Android and Windows provide plenty of competition, but millions still line up to buy completely locked down devices from Apple.

Re:Thank God for this news... (1)

spire3661 (1038968) | about a year ago | (#45330435)

It's not about caring. It's about providing a compelling product. Linux appliances will be compelling products, either by providing a cheaper entry or more liberty, both of which increase value. It has nothing to do with 'caring'. Provide a superior product in all ways, and the people will come. Save your Betamax argument; it wasn't superior in all ways.

Re:Thank God for this news... (1)

FlyHelicopters (1540845) | about a year ago | (#45330537)

Will they? Are you expecting an open standard Linux box that you can modify? Are you expecting the likes of Comcast and Time Warner to ever allow that?

What use will the box have? What would you do with it?

In front of my main TV, I have a Roku 3, a PS3, a DirecTV box, and a Wii U. None of those are open, and none of the companies behind them really want them to be. Maybe Roku might be the closest, but the services on it are as closed as closed gets.

Maybe you should let me know what these Linux appliances will be, because I'm not seeing it.
