
NVIDIA CEO Unveils Volta Graphics, Tegra Roadmap, GRID VCA Virtualized Rendering

Soulskill posted about a year ago | from the also-a-new-kitchen-sink-chip dept.

Graphics 57

MojoKid writes "NVIDIA CEO Jen-Hsun Huang kicked off this year's GPU Technology Conference with his customary opening keynote. The focus of Jen-Hsun's presentation was on unveiling a new GPU core code named 'Volta' that will employ stacked DRAM for over 1TB/s of memory bandwidth, as well as updates to NVIDIA's Tegra roadmap and a new remote rendering appliance called 'GRID VCA.' On the mobile side, Tegra's next generation 'Logan' architecture will feature a Kepler-based GPU and support CUDA 5 and OpenGL 4.3. Logan will offer up to 3X the compute performance of current solutions and be demoed later this year, with full production starting early next year. For big iron, NVIDIA's GRID VCA (Visual Computing Appliance) is a new 4U system based on NVIDIA GRID remote rendering technologies. The GRID hypervisor supports 16 virtual machines (1 per GPU) and each system will feature 8-Core Xeon CPUs, 192GB or 384GB of RAM, and 4 or 8 GRID boards, each with two Kepler-class GPUs, for up to 16 GPUs per system. Jen-Hsun demo'd a MacBook Pro remotely running a number of applications on GRID, like 3D StudioMax and Solidworks, which aren't even available for Mac OS X natively."

57 comments

Huh? (0)

SpaceMonkies (2868125) | about a year ago | (#43219323)

Ok I know this is Slashdot, but I could barely comprehend the summary. Is this a good thing?!

Re:Huh? (1)

thesupraman (179040) | about a year ago | (#43219365)

Yes, it probably is a good thing, since it means it is a little less like the astroturfing, general-interest approach of some articles these days.
If you have much interest in this area, you'll find the summary is, well, quite a good summary.

Have a nice day.

Re:Huh? (2)

Tailhook (98486) | about a year ago | (#43219891)

I noted that the story is shaded. A new GPU core with a novel memory solution, GeForce coming to mobile hardware, and some of the baddest visual compute hardware in the world, and yet it's not worthy of at least the same exposure as the previous freetard story or the following YRO story.

Yay for mainstreaming.

Re:Huh? (0)

Anonymous Coward | about a year ago | (#43219381)

It's relatively easy to understand for me; easier still if you know what GRID is.

Re:Huh? (1)

Xenkar (580240) | about a year ago | (#43219411)

From what I read, they'll be moving the DRAM off the circuit board and putting it on top of the GPU. This will probably mean that fewer layers will be required on the circuit board, which lowers the cost of manufacturing. It will probably also mean a better performance-per-watt ratio.

So yes, this is a good thing. I just wish we could return to the days when a video card was solely powered by the slot it is placed in.

Re:Huh? (1)

jones_supa (887896) | about a year ago | (#43219665)

So yes, this is a good thing. I just wish we could return to the days when a video card was solely powered by the slot it is placed in.

NVIDIA constantly releases cards that are powered by the slot alone. Some of them can be as fast as a previous-generation card that required extra power connectors.

Re:Huh? (0)

Anonymous Coward | about a year ago | (#43220343)

The Radeon 7750 seems to be the best card you can get that is powered from the slot itself right now. You can also get it in low profile if you have the need.

Re:Huh? (1)

Molochi (555357) | about a year ago | (#43221079)

Others have said it. I'll repeat it. You don't HAVE to buy multiple cards that need to run on aux power from a 1000W PSU. You just WANT to. I can appreciate the desire to run something like Skyrim on 9 uberHD monitors at max settings, but if it's that good a game you'll probably enjoy it with a single 42" 1080p display and a $90 video card.

Re:Huh? (1)

Molochi (555357) | about a year ago | (#43221035)

Your UID is far too high to be confused by this summary. You should have posted with the title "Old News" and then at least linked a speculative summary from another tech site.

I, on the other hand, am old and crusty enough to complain about marketbabble terminology and RTFA... wait a sec... OK, I just scanned it. Looks like they're claiming that Volta will have ~3 times the memory bandwidth of their current top offerings. So yeah, that's a good thing, as it would mean that what takes three cards to run a triple-monitor setup comfortably today could run on one card "tomorrow".

Also they've got this, which sounds like a bargain.

"Pricing for GRID VCA systems will range from $24,900 for a base single-CPU (8 threads) / 8 GPU / 192GB configuration (4GB frame buffers for each GPU) to $39,900 for fully-loaded dual-8-core CPU (32 threads) / 16GPU / 384GG setup. Licensing for the GRID client will be $2,400 annually for the base configuration or $4,800 annually for the max configuration"

Volta (2)

GiganticLyingMouth (1691940) | about a year ago | (#43219655)

This Volta sounds pretty exciting. DRAM bandwidth is commonly a limiting factor in GPGPU applications, so if it can reach 1TB/s, it'll be more than 3x faster for memory-bound kernels than the current high-end scientific computing cards (e.g. the Tesla K20). With that said, I'm a bit apprehensive about how much it'll cost; Tesla K20s currently cost over $2k per card...

Re:Volta (2)

godrik (1287354) | about a year ago | (#43220087)

1TB/s of memory bandwidth is indeed impressive. I work on quite a few memory-intensive kernels (graph algorithms) on accelerators (GPU, Xeon Phi), and bandwidth is a significant bottleneck. Kepler did not bring a significant bandwidth improvement over Fermi, and Xeon Phi is in the same area. But 1TB/s seems tremendous. I am impatient to get my hands (or my ssh) on one of these.

I do not understand ... (1)

Taco Cowboy (5327) | about a year ago | (#43221533)

1TB/s of memory bandwidth is indeed impressive

I do not understand why everybody and their great-grandmother's dog are drooling all over and going goo-goo-gaa-gaa over the "memory bandwidth" thing.

Even if that rig is dedicated to massive game-playing, what portion of the time does the GPGPU need to tap the full strength of the 1TB/s memory bandwidth?

Furthermore, the average rig wouldn't even spend 0.1% of its time hitting the 1TB/s threshold.

Which means, 99.9% of the time the GPGPU can get by with lower memory bandwidth requirements

Remember, 1TB/s of NOPs is 1TB/s of NOPs.

Re:I do not understand ... (1)

godrik (1287354) | about a year ago | (#43223169)

Well, if you do not need massive bandwidth, you do not. Personally, I do not use GPUs for graphics; I use them for sparse computations (multiplying sparse matrices or traversing graphs). In these computations the main bottleneck is memory bandwidth. So if the memory bandwidth increases by a factor of 3, I will see an immediate performance improvement of at least 50%, and potentially a factor of three once the kernels are optimized for the new architecture.
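
To make that reasoning concrete, here is a rough roofline-style sketch; the peak-FLOPS, bandwidth, and arithmetic-intensity numbers are illustrative assumptions (the intensity is in the ballpark of a sparse matrix-vector product), not specs for any particular card:

<ecode>
# Roofline model: a kernel's attainable throughput is capped either by the
# compute peak or by (memory bandwidth x arithmetic intensity).
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

peak = 1200.0        # GFLOPS, double precision -- illustrative round number
intensity = 0.25     # flops per byte moved -- typical order of magnitude for SpMV

for bw in (250.0, 750.0):   # GB/s: a current card vs. a hypothetical 3x part
    print(f"{bw:.0f} GB/s -> {attainable_gflops(peak, bw, intensity):.1f} GFLOPS attainable")
# At this low intensity the kernel never reaches the compute roof, so tripling
# the bandwidth triples the ceiling on performance.
</ecode>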

Re:I do not understand ... (1)

GiganticLyingMouth (1691940) | about a year ago | (#43226269)

This is a major improvement for GPGPU, not game playing. Memory throughput is often the bottleneck in applications, as computational throughput improvements have greatly outstripped memory throughput improvements. To give you an idea of the importance of memory bandwidth: if you have a GPU with a peak arithmetic throughput of 1170 GFLOPS (this is how much a Tesla K20 gets for double precision floating point) performing FMA (fused multiply-add, so 2 floating point operations on 3 operands), then to sustain that level of throughput you would need roughly 13 TB/s*** of memory throughput (this is assuming 8-byte operands and that each of the 3 operands of the FMA is unique). Of course you can't reach those levels with global memory, but any sort of improvement helps.

*** required memory throughput = 1170 * 10^9 ops/s * (24 bytes / 2 ops) = 14040 * 10^9 bytes/s ~= 13 TiB/s
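
The same arithmetic spelled out in a short Python sketch (it assumes the stated worst case: every FMA streams three unique 8-byte operands from DRAM, with no cache or register reuse):

<ecode>
# How much DRAM traffic it takes to keep 1170 DP GFLOPS of FMAs fed.
peak_flops = 1170e9     # double-precision FLOP/s (2 FLOPs per FMA)
bytes_per_fma = 3 * 8   # three unique 8-byte operands per FMA
flops_per_fma = 2       # multiply + add

required = peak_flops * bytes_per_fma / flops_per_fma   # bytes per second
print(f"{required / 1e12:.1f} TB/s ({required / 2**40:.1f} TiB/s)")
# -> 14.0 TB/s (12.8 TiB/s); even 1 TB/s only covers a small fraction of that.
</ecode>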

Overwhelmed by the rate of change. (1)

Circlotron (764156) | about a year ago | (#43219733)

Staggering specifications, but maybe several years from now it'll be commonplace on a $50 smartphone. One year after that it'll be in kerbside hard rubbish collections. Sigh...

Re:Overwhelmed by the rate of change. (1)

viperidaenz (2515578) | about a year ago | (#43220171)

No, several years from now it will be what they're talking about here. The timeline for this is 2015 and later.

Client/Mainframe (1)

ArchieBunker (132337) | about a year ago | (#43219807)

So we're back to the heavy mainframe and thin client topology now?

Doesn't sound too good (5, Interesting)

Anonymous Coward | about a year ago | (#43219827)

Nvidia has had solid success, but the future is looking ever more troubling. The exotic ultra-high end toys that Nvidia promotes (expensive racks of stuff) didn't help keep Sun or Silicon Graphics afloat either.

Nvidia's important markets are discrete GPUs for desktop and notebook PCs and its ARM SoC tablet/ARMbook parts.

-The desktop GPUs. Nvidia is held hostage by TSMC's ability to fabricate better chips (on smaller processes). Nvidia itself issued a white paper predicting that the costs of moving to a new process would soon overwhelm the advantages it offers over staying with the previous one (for high-end GPU chips). In fairness, this pessimism was driven by TSMC's horrific incompetence at the 28nm node. Nvidia's talk of a future GPU with exotic stacked DRAM is very troubling indeed, since companies usually only focus on such bizarre idiocy (like holographic optical storage) when traditional solutions are failing them. Building special chips is insanely expensive, especially when you consider that ordinary DRAM is rapidly getting cheaper and faster. As Google proves, commodity hardware solutions beat specialised ones.

-The mobile PC GPU. Nvidia was forced out of the PC motherboard chipset biz by Intel and AMD. Now Intel and AMD are racing to build APUs (combined CPUs and GPUs) with enough grunt for most mobile PC users. Nvidia chose to start making ARM parts over creating its own x86 CPU, so the APU is not an option for Nvidia. The logic of an OEM choosing to add Nvidia GPUs to mobile devices is declining rapidly. Nvidia can only compete at the ultra-high end. Maybe the stacked DRAM is a play for this market.

-The Tegra ARM SoC. Tegra has proven a real problem for Nvidia, again because of TSMC's inability to deliver. However, Nvidia also faces a problem over exactly what type of ARM parts are currently needed by the market. Phone parts need to be very low power- something Nvidia struggles to master. Tablet parts need a balance between cost, power and performance- there is no current 'desktop' market outside the Chromebook (yeah, I know that's a notebook). The Chinese ARM SoC companies are coming along at a terrifying pace.

Nvidia has stated that it will place modern PC GPU cores in the next Tegra (5) although Nvidia frequently uses such terms dishonestly. Logan would be around the end of 2014, and would require Android to have gone fully notebook/desktop by that time to have a decent marketplace for the expensive Tegra 5. Even so, Samsung and Qualcomm would be looking to smash them, and PowerVR is seeking to crush Nvidia's GPU advantage. Nvidia would need a win from someone like Apple, if Apple gives up designing its own chips.

In the background is the emerging giant, AMD. AMD's past failures mean too many people do not understand the nature of AMD's threat to Intel and Nvidia. AMD has a 100% record of design wins in new forward-thinking products in the PC space. This includes all the new consoles, and the first decent tablets coming from MS later this year. Unlike Nvidia, AMD can make its parts in multiple fabs. AMD also owns the last great x86 CPU core- the Jaguar. AMD is leading the HSA initiative, and can switch to using ARM cores when that proves useful.

Sane analysis would project a merger between Intel and Nvidia as the best option for both companies, but this has been discussed many times in the past and failed because Nvidia refuses to 'bend the knee'. Alone, Nvidia is now far too limited in what it can produce. The server-side cloud rendering products have proven fatal to many a previous company. High-end scientific supercomputing is a niche that can be exploited, but a niche that would wither Nvidia considerably.

Shouldn't Nvidia have expected to have become another Qualcomm by now? Even though Nvidia makes few things, it still spreads itself too thin, and focuses on too many bluesky gimmick concepts. 3D glasses, PhysX and Project SHIELD get Nvidia noticed, but then Nvidia seemingly starts to believe its own publicity. It doesn't help that Nvidia is sitting back as the PC market declines - eroding one of the key sources of its income. The excitement is about to be the new consoles from Sony and MS, and Nvidia has no part in this.

Re:Doesn't sound too good (4, Interesting)

viperidaenz (2515578) | about a year ago | (#43220155)

NVidia can't make an x86 CPU/APU/whatever. It took over a decade of court battles between AMD and Intel to settle their shit. They now have a deal where they share each others patents. NVidia has nothing to share, good luck getting a good price on the licenses.

NVidia was forced out of the chipset market because every new CPU needs a new chipset, and it became very expensive for them to keep developing new chips. There's also pretty much nothing left in them: no memory controller, no integrated video. That's all on the CPU now. Where is the value proposition for an NVidia chipset? They make video hardware. All that is left on a north/south bridge is a bunch of SATA controllers and other peripherals no one really cares about.

Stacked DRAM isn't actually new. It's known as "Package on Package". The traditional benefits are smaller size and less board space and traces required. The positive side effect is very small electrical paths and the ability to have a lot of them densely packed.

Re:Doesn't sound too good (1)

DigiShaman (671371) | about a year ago | (#43220581)

Obviously the engineers at nVidia know what they're doing. But could someone please explain to me how they plan on dealing with heat dissipation with stacked DRAM modules?

Don't the current high-end cards have the heat sink directly cool the modules via an intermediary thermal pad? I'm guessing it's either not an issue, or these chips run a lot cooler than they used to. Perhaps they plan on increasing the word length instead of clocking them high (wide and slow vs. narrow and fast)?

Not an expert on this... (1)

Molochi (555357) | about a year ago | (#43221151)

However, I do recall a development from a few years back that effectively placed something like heatpipes inside the layers of the chip, allowing heat to be pushed out to the surface. TBH I wonder what level of fragility our CPUs are running on...

Side note (1)

Molochi (555357) | about a year ago | (#43221199)

What you were describing sounds like what Intel did to produce a high-bandwidth chip (the Pentium 4) when their Pentium 3 failed to scale against the original AMD Athlon. That would seem to indicate they've finally hit the wall CPUs ran into 10 years ago.

Re:Doesn't sound too good (4, Informative)

rsmith-mac (639075) | about a year ago | (#43221155)

Respectfully, I don't know why this was modded up. There's a lot of bad information in here.

On the one hand, you're right that NVIDIA can't get into the x86 CPU market; Intel keeps that under lock and key. NVIDIA does have things to share (a lot of important graphics IP), but it wouldn't be enough to get Intel to part with an x86 license (NVIDIA has tried that before).

However you're completely off base on the rest. Cost has nothing to do with why NVIDIA is out of the Intel chipset business. NVIDIA's chipset business was profitable to the very end. The problem was that on the Intel side of things NVIDIA only had a license for the AGTL+ front side bus, but not the newer DMI or QPI buses [arstechnica.com] that Intel started using with the Nehalem generation of CPUs. Without a license for those buses, NVIDIA couldn't make chipsets for newer Intel CPUs, and that effectively ended their chipset business (AMD's meager x86 sales were not enough to sustain a 3rd party business).

NVIDIA and Intel actually went to court over that and more; Intel eventually settled by giving NVIDIA over a billion dollars. You are right though that there's not much to chipsets these days, and if NVIDIA was still in the business they likely would have exited it with Sandy Bridge.

As for stacked DRAM: that is very, very different from PoP RAM. PoP uses traditional BGA balls to connect DRAM to a controller [wikimedia.org], with the contacts for the RAM being along the outside rim of the organic substrate that holds the controller proper. Stacked DRAM uses through-silicon vias: they're literally going straight down/up through the layers of silicon to make the connection. The difference, besides the massive gulf in manufacturing difficulty, is that PoP doesn't lend itself to wide memory buses (you have all those solder balls and need space on the rim of the controller for them), while stacked DRAM will allow for wide memory buses since you can connect directly to the controller. The end result in both cases is that the RAM is on the same package as the controller, but their respective complexity and performance are massively different.

Re:Doesn't sound too good (1)

viperidaenz (2515578) | about a year ago | (#43226479)

My bad, stacked DRAM isn't PoP; it's that thing Intel and Micron did in 2011 and called Hybrid Memory Cube, with the prototype reaching 1Tbps.

Re:Doesn't sound too good (1)

mrchaotica (681592) | about a year ago | (#43225119)

NVidia can't make an x86 CPU/APU/whatever. It took over a decade of court battles between AMD and Intel to settle their shit. They now have a deal where they share each others patents. NVidia has nothing to share, good luck getting a good price on the licenses.

NVidia could buy VIA...

Re:Doesn't sound too good (1)

viperidaenz (2515578) | about a year ago | (#43225643)

VIA can't compete with AMD, let alone Intel.
They were good at the low-power end around 10 years ago, but now they lag behind even there.

Re:Doesn't sound too good (1)

mrchaotica (681592) | about a year ago | (#43228075)

It was asserted that NVidia needs patent licenses to build x86 CPUs. VIA builds x86 CPUs; therefore VIA must have the patent licenses. If NVidia bought VIA, then NVidia would have the patent licenses and be able to build x86 CPUs.

NVidia would still have to catch up, but competing would at least become legally possible.

(Unless VIA only has licenses for old x86 technology, which would explain why they've lagged so far behind...)

Re:Doesn't sound too good (1)

viperidaenz (2515578) | about a year ago | (#43228375)

VIA has access to all the technology; they implement SSSE3, SSE4.1 and x86-64 in their latest quad-core processors. I think the problem is that it's not exactly easy to rival the performance of Intel CPUs, or even AMD ones. None of their chips have ever gone above 2GHz.

VIA has some cross-licensing with Intel and an agreement that is about to lapse. They don't make chipsets for Intel any more because the 2003 agreement only gave them those patents for 4 years. They also need to pay Intel royalties - VIA had 3 patents, Intel had 24.

Re:Doesn't sound too good (0)

Anonymous Coward | 1 year,29 days | (#43241113)

The downside, especially on a GPU, is heat. Lots and lots of heat but nowhere to take it out of the package.

Re:Doesn't sound too good (0)

Anonymous Coward | about a year ago | (#43220329)

Worth pointing out is that Nvidia is the only company to have significant issues with TSMC's 28nm node. While there were some things that needed working out overall, causing the stoppage of production, most customers got reasonable yields on their chips from the start, AMD included. Nvidia, however, has a history of trouble when moving to a new node - consider the abysmal yields on Fermi at 40nm, when - again - no one else had much trouble. They'd be quite right to predict more troubles if they can't figure out how to make a proper node transition.

Re:Doesn't sound too good (0)

Anonymous Coward | about a year ago | (#43221257)

I don't know, but maybe they were a little more QA-sensitive after all those 8-series cards killed themselves from crappy soldering - or was it the chip heating up and cooling off too quickly? I mean, they're basically contracting all their hardware out to a third party that says "oh, this'll work!", and when the technique fucks up you have to explain it at a shareholders meeting.

Re:Doesn't sound too good (1)

tyrione (134248) | about a year ago | (#43220373)

Agreed on all points about AMD. Their HSA initiative and the direction of GCN with FX and GPGPU designs, with their APUs tying them together while embracing ARM64 hybrids, make their future enormous.

Re:Doesn't sound too good (1)

sandytaru (1158959) | about a year ago | (#43220499)

There's also the point of diminishing returns from the consumer side. I upgraded my video card for Christmas. The bottleneck for my PC's performance is not my video card, and it probably won't be until my system is ready to be completely redone again in three years. It used to be that when the video card was the limiting factor for better performance in games, you had an incentive to upgrade on a yearly basis. Now, I'd need a new motherboard and processor to improve the performance of my games, because the several-hundred-dollar video card is already pushing the rest of the system to the max. And I'm still only going to see a performance increase of a few FPS in the handful of games I play that can actually take full advantage of the video card as it is.

There will always be the few thousand enthusiasts who will buy a new video card every year because they can afford it and because it's today's equivalent of wearing a digital Armani suit, but you can't build a sustainable business model off those folks alone.

Re:Doesn't sound too good (1)

aztracker1 (702135) | about a year ago | (#43220621)

I think high-density displays (4K+) are coming, and that will need a lot more GPU horsepower - something like 4X the horsepower from the GPU.
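
Purely as pixel arithmetic (a simplification, since shading cost doesn't scale exactly linearly with pixel count), that 4X figure falls straight out:

<ecode>
# Pixel counts of common resolutions relative to 1080p.
resolutions = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K UHD": (3840, 2160),
}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels ({w * h / base:.2f}x of 1080p)")
# 4K UHD is exactly 4x the pixels of 1080p (and 1440p is 4x the pixels of 720p).
</ecode>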

Re:Doesn't sound too good (0)

Anonymous Coward | about a year ago | (#43221607)

Generally... no.
Compare benchmark results for 2560x1440 vs 1280x720 on current GPUs.

Re:Doesn't sound too good (1)

Tynin (634655) | 1 year,30 days | (#43230325)

There's also the point of diminishing returns from the consumer side...

I'm afraid that you, Sir, are discounting the electrically-priced-out hordes of BitCoin miners that would love to see more shader/stream processors added to their GPUs at all cost, in such an enormous quantity that they would forever yield an efficient stream of never-ending currency! The ASIC invasion must be met with swift and decisive victories in the GPU market! So say'th the Poor Hashers of Satoshi Nakamoto...

In The Block, We Trust.

Re:Doesn't sound too good (1)

genkernel (1761338) | about a year ago | (#43220739)

In the background is the emerging giant, AMD. AMD's past failures mean too many people do not understand the nature of AMD's threat to Intel and Nvidia. AMD has a 100% record of design wins in new forward-thinking products in the PC space.

Hrm, while I agree with a good deal of the rest of your post, how does this manage to not include the bulldozer architecture? As a largely AMD customer myself, I'm not sure I can bring myself to call that a "design win".

Re:Doesn't sound too good (1)

Blaskowicz (634489) | about a year ago | (#43222153)

Stacked DRAM is not cold fusion or holographic storage or flying cars.
What they've announced is similar to Intel's Haswell GT3e, which is a real product that runs today, awaiting commercial launch. "Silicon interposer" or "2.5D stacking" are maybe more useful terms.
It will become an industry standard; the memory bandwidth wall can't be written away the way you do. AMD APUs are really crippled by their bandwidth, for instance, and using quad-channel or GDDR5 as system memory is an expensive proposition.

Re:Doesn't sound too good (0)

Anonymous Coward | about a year ago | (#43222415)

Oh dear, AMD has gotten so desperate it's started paying for astroturfers?

They really must be struggling, I guess the end is near for them.

Re:Doesn't sound too good (1)

DarthVain (724186) | about a year ago | (#43227015)

This. Agree with everything.

Was going to mention consoles, but you did. For fun you could have linked the slashdot story the other week about nVidia "turning down" PS4 development.

nVidia makes great GPUs, no doubt about that. However, the future looks a bit grim when you start looking at the larger picture and all the challenges and forces arrayed against nVidia.

The world is full of companies that make great product that fail anyway due to other factors. A relevant example is 3dfx. I had a Voodoo3 3000 16MB back in the day. They made great video cards, however as a company they no longer exist (though ironically enough I believe some of the technology remains were bought up by nVidia).

How can GRID get past latency? (1)

BlueCoder (223005) | about a year ago | (#43219973)

Pinging Google takes 20ms from my home computer. I can see how it might be possible in twenty years with a fiber optic connection, but not in five years, and certainly not on a cell phone network. I can imagine certain programming techniques, but I'm sure most of them are already implemented just to get lag down on a standalone computer, and the rest would require games to be designed and programmed around high feedback lag. Some of the techniques I can imagine would trade extra bandwidth to make up for the lag.

Explain to me how they will defeat lag.
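
For a sense of scale, here's a rough input-to-photon budget for remote rendering at 60fps; every per-stage number below is an illustrative guess, not a GRID measurement:

<ecode>
# Rough latency budget for a remotely rendered frame. All stage times are
# assumed/illustrative values for the sake of the arithmetic.
budget_ms = {
    "client input capture + send": 2,
    "network round trip (decent home connection)": 20,
    "server renders one frame at 60fps": 17,
    "hardware video encode": 5,
    "client decode + display": 10,
}
total = sum(budget_ms.values())
frame_ms = 1000 / 60
print(f"estimated input-to-photon latency: {total} ms; "
      f"the 20 ms network hop alone exceeds one 60fps frame time ({frame_ms:.1f} ms)")
</ecode>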

Re:How can GRID get past latency? (1)

Luckyo (1726890) | about a year ago | (#43220287)

The idea seems to be more of an "NVIDIA Shield" style of remote rendering for most cases, I think. Your powerful NVIDIA-based home PC renders the game and you can play it anywhere within your house over ethernet at latencies of 1-2ms.

The GRID solution for a lot of virtual systems could be used in netcafes and big tournaments, I suppose. I agree that it's hard to imagine remote gaming at all (not just in the near future), simply because latency cannot be pushed low enough once you leave the immediate vicinity in terms of network topology.

Mobile GPU (1)

ChunderDownunder (709234) | about a year ago | (#43220083)

Nvidia ditching whatever embedded GPU Tegra currently uses parallels Intel dropping PowerVR for their latest Bay Trail Atom.

I wonder if this means the nouveau driver will be compatible with one's ARM tablet. If so, Canonical's convoluted architecture for Mir (embracing Android blobs) might be short-lived - with lima, freedreno, nouveau, and intel all targeting Xorg/Wayland - leaving PowerVR solutions as the odd one out.

Stacked DRAM (1)

viperidaenz (2515578) | about a year ago | (#43220103)

Good for saving space. Good for speeding things up. Bad for heat dissipation.

How can I increase the thermal resistance of my processor.... I know, stick a DRAM chip between it and the heat sink!

Re:Stacked DRAM (1)

Anonymous Coward | about a year ago | (#43222397)

The DRAM stack is undoubtedly alongside the GPU die, most likely on a silicon interposer (for fine-pitch/high-density routing) - a 2.5D solution, as someone else mentioned. For high-end and high-power parts, the DRAMs are not going to sit between the GPU die and the heat sink...

I note that "opencl" appears in neither article (1)

drinkypoo (153816) | about a year ago | (#43222343)

So my choices of video card come down to slow and expensive (Intel integrated - Intel CPUs cost more than AMD CPUs if you don't care about single-thread performance, and I don't, and the motherboards cost much more as well), crap drivers (AMD), or evil lock-in (nVidia).

All I want for Christmas is a graphics card company I can buy from without feeling like an asshole.
