
NVIDIA Announces Tesla K40 GPU Accelerator and IBM Partnership In Supercomputing

samzenpus posted about a year ago | from the greased-lightning dept.


MojoKid writes "The supercomputing conference SC13 kicks off this week, and Nvidia is opening its own event with the launch of a new GPU and a strategic partnership with IBM. Just as the GTX 780 Ti was the full consumer implementation of the GK110 GPU, the new K40 Tesla card is the supercomputing / HPC variant of the same core architecture. The K40 picks up additional clock headroom and implements the same variable clock speed threshold that has characterized Nvidia's consumer cards for the past year, for a significant overall boost in performance. The other major shift between Nvidia's previous-gen K20X and the new K40 is the amount of on-board RAM. The K40 packs a full 12GB and clocks it modestly higher to boot. That's important because datasets are typically limited to on-board GPU memory (at least, if you want to work with any kind of speed). Finally, IBM and Nvidia announced a partnership to combine Tesla GPUs and Power CPUs for OpenPOWER solutions. The goal is to push the new Tesla cards as workload accelerators for specific datacenter tasks. According to Nvidia's release, Tesla GPUs will ship alongside Power8 CPUs, which are currently scheduled for a mid-2014 release date. IBM's venerable architecture is expected to target a 4GHz clock speed and offer up to 12 cores with 96MB of shared L3 cache. A 12-core implementation would be capable of handling up to 96 simultaneous threads (eight per core). The two should make for a potent combination."
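As a quick aside on the on-board memory point, here is a minimal sketch (not from the submission; it assumes a CUDA-capable device and toolkit, and error checking is omitted) that reports how much memory each device actually exposes:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // totalGlobalMem is the on-board memory that bounds how large a
        // resident dataset can be without spilling over PCIe.
        printf("Device %d: %s, %.1f GB of on-board memory\n",
               dev, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```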


What about OpenCL 1.2 support? (1)

nikkipolya (718326) | about a year ago | (#45455855)

Nvidia has sidetracked OpenCL for CUDA?

Re:What about OpenCL 1.2 support? (0)

Shinobi (19308) | about a year ago | (#45455889)

All the major players are putting aside OpenCL. AMD is betting on Mantle for example.

Re:What about OpenCL 1.2 support? (2)

nikkipolya (718326) | about a year ago | (#45456019)

But Mantle is an alternative to OpenGL.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456587)

Actually, it is a (supposedly better) alternative to OpenGL/DirectX, OpenCL, and PhysX, all in one.

Re:What about OpenCL 1.2 support? (4, Informative)

fuzzyfuzzyfungus (1223518) | about a year ago | (#45456021)

"Mantle", at least according to the press puffery, is aimed at being an alternative to OpenGL/Direct3d, akin to 3DFX's old "Glide"; but for AMD gear.

CUDA vs. OpenCL seems to be an example of the ongoing battle between an entrenched and supported; but costly, proprietary implementation, vs. a somewhat patchy solution that isn't as mature; but has basically everybody except Nvidia rooting for it.

"Mantle", like 'Glide' before it, seems to be the eternal story of the cyclical move between high-performance/low-complexity(but low compatibility) minimally abstracted approaches, and highly complex, highly abstracted; but highly portable/compatible approaches. At present, since AMD is doing the GPU silicon for both consoles and a nontrivial percentage of PCs, it makes a fair amount of sense for them to offer a 'Hey, close to the metal!' solution that takes some of the heat off their drivers, makes performance on their hardware better, and so forth. If, five years from now, people are swearing at 'Mantle Wrappers' and trying to find the one magic incantation that actually causes them to emit non-broken OpenGL, though, history will say 'I told you so'.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45457489)

but has basically everybody except Nvidia rooting for it.

Linux user here. I would like to see this rooting Intel provides; as of this comment, the best I get is an open-source driver that calls abort on most OpenCL functions, and I have to use trial and error to find out which ones work.

In contrast I get great CUDA drivers, free dev tools and great documentation (for both CUDA and OpenCL) from NVIDIA.

makes performance on their hardware better

It might do so just by limiting the retarded things developers can do. Until OpenGL defined the Core profile (and for a few years after that), people still offered tutorials teaching OpenGL 1 style glBegin()/glEnd() blocks as the way to draw - the worst possible thing to teach, being both verbose and horribly slow compared to even the simplest alternatives.
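For illustration, a minimal sketch of that contrast, assuming a current GL context is already set up (and, for the buffered path, a bound VAO and shader program); the triangle data and the GLEW loader are placeholders:

```c
#include <GL/glew.h>   // any GL loader will do; GLEW is assumed here

// OpenGL 1 style: one driver call per vertex, re-sent every frame.
void draw_triangle_immediate(void) {
    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}

// Core-profile style: upload the data once into a buffer object...
GLuint make_triangle_vbo(void) {
    static const GLfloat verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(0);
    return vbo;
}

// ...then drawing is a single call against data already resident on the GPU.
void draw_triangle_buffered(void) {
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```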

Re:What about OpenCL 1.2 support? (1, Interesting)

Shinobi (19308) | about a year ago | (#45457777)

"CUDA vs. OpenCL seems to be an example of the ongoing battle between an entrenched and supported; but costly, proprietary implementation, vs. a somewhat patchy solution that isn't as mature; but has basically everybody except Nvidia rooting for it."

Wishful thinking. Intel doesn't give a crap about OpenCL; they don't even expose their GPUs for OpenCL under Linux, and as I mentioned, AMD is betting on Mantle. As for "costly", there's nothing about CUDA that is costly that isn't also costly with OpenCL.

Mantle is far more than just a Glide-like API. It covers both graphics and GPGPU, effectively replacing OpenCL on the AMD side (unfortunately, that still comes with AMD idiocy in how you access interfaces, etc... grrrr).

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45458131)

Well, there are a couple of things here that don't make sense.
Mantle is an open API, so there's actually nothing stopping Nvidia from using it - so it's not just for AMD gear (technically).
Mantle as high-performance/low-complexity/low-compatibility vs. the highly complex/highly abstracted/highly portable DirectX - wait, what? DirectX is portable?

Oh, and even though AMD may be doing the GPU on consoles, they aren't going to be getting Mantle.

I don't think anybody is going to argue that DirectX isn't a pain in the nethers, and hopefully Mantle will shake things up. The worst case is fragmenting the choice further, with OpenGL/DirectX/Mantle and some hardware not supporting all of them; the best case is that everybody comes together to work openly on a great interface - I ain't gonna hold my breath, though.

Re:What about OpenCL 1.2 support? (3, Interesting)

Jthon (595383) | about a year ago | (#45458589)

Mantle is less of an open specification than CUDA is. CUDA does have a full x86 implementation available, which is mostly slower because the CPU can't take much advantage of the massive parallelism of the GPU (not sure how this plays out with Xeon Phi).

Mantle, on the other hand, is a very low-level graphics API that basically exposes software to some low-level interactions with AMD's GPU. It's more like GLIDE than OpenCL. From what I've seen so far, it's not clear to me that Mantle will be very portable across several AMD generations. It works for the GCN-based cards out now, but who knows if it will be fast for GCN++ without a major rewrite of the application. NVIDIA could implement Mantle, but would probably have to translate so much stuff in software to make it work that you'd lose the low software overhead.

From the one or two talks I listened to, Mantle seems to basically expose the same interface the driver developers have access to and lets you go to town. This is great for the latest architecture, but now it's up to your application to evolve as the HW does. There's a whole lot of work being done in the driver to optimize for each architecture release, which allows older games that the publisher doesn't really want to support anymore to keep working and see performance boosts.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456593)

All the major players are putting aside OpenCL. AMD is betting on Mantle for example.

Here's your OpenCL answer:

http://www.prnewswire.com/news-releases/altera-brings-fpga-based-acceleration-to-ibm-power-systems-and-announces-support-for-openpower-consortium-232329231.html

Re:What about OpenCL 1.2 support? (3, Insightful)

fuzzyfuzzyfungus (1223518) | about a year ago | (#45455919)

Nvidia has sidetracked OpenCL for CUDA?

Nvidia has never much liked OpenCL. And why would they? They currently hold the high ground in GPU computing, with a proprietary API that only they can implement. I'd assume that they have some sort of 'OpenCL contingency plan', just in case the market shifts, or they ever want to sell a GPU to Apple ever again; but as of right now, supporting OpenCL would just be a "Sure, please, commodify me, I'd love that!" move.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456033)

They do sell GPUs to Apple, lots of them.

Lately Apple doesn't seem to care about OpenCL either.

Re:What about OpenCL 1.2 support? (4, Informative)

FreonTrip (694097) | about a year ago | (#45456379)

I wouldn't say that's strictly true - Mavericks implements OpenCL 1.2 support pervasively, even down to the rinky-dink Intel GPUs that can handle it.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456051)

I should clarify: at least some of their products do 'support' OpenCL, architecturally. It will work if you try it; but as a company, and in terms of their focus, ongoing development and polishing, and so on, OpenCL on Nvidia gear is slightly less passive-aggressive than Microsoft's NT POSIX compatibility services, and for similar reasons.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456625)

Actually, the Windows POSIX services are great; they are more standards-compliant than even Linux (they were made to mirror the original Unix, not Linux). The only issue is that you have to pay to use POSIX in addition to paying for the OS, while Linux is free.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456091)

...or they ever want to sell a GPU to Apple ever again...

lol [apple.com] wut? [apple.com]

Re:What about OpenCL 1.2 support? (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#45456257)

And on their newly redesigned 'performance' model [apple.com]? Sure, they currently use Nvidia for polygon pushing on their lower-end devices, for the higher-res situations where Intel won't cut it; but do you think they dropped Nvidia from their 'pro' model, despite the flak from CUDA-dependent visual effects/video workflow nuts, for nothing?

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45456333)

Don't care, not the point. Nvidia isn't hurting for Apple's business.

Re:What about OpenCL 1.2 support? (2)

Khyber (864651) | about a year ago | (#45457001)

"They currently hold the high ground in GPU computing"

And yet they still can't even get a decent fucking hashrate with CUDA, while OpenCL on AMD stomps the fuck out of them there.

AMD has essentially 'made' everything from Bitcoin to every game console this gen? What the hell is nVidia doing if they're so superior?

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45457667)

AMD has essentially 'made' everything from Bitcoin to every game console this gen? What the hell is nVidia doing if they're so superior?

Stuff that actually makes a difference in the world. Research on AMD hardware is basically unknown for GPGPU.

Re:What about OpenCL 1.2 support? (1)

Shinobi (19308) | about a year ago | (#45457831)

That's because Bitcoin mining is not something critical, AND happens to fall into the limited memory structures and computational capabilities that AMD provides. In real-world-relevant computational tasks, nVidia and CUDA dominate in ease of use, flexibility, and computational throughput. Hence why HPC shops use Nvidia and not AMD.

Hashrate is just a gimmick anyway, since if you're serious about it, you go with an FPGA kit.

Re:What about OpenCL 1.2 support? (1)

Khyber (864651) | about a year ago | (#45461317)

"That's because Bitcoin mining is not something critical,"

I guess you don't watch C-SPAN or pay attention to Bitcoin; otherwise you'd understand it's the most valuable currency on the planet right now. When a digital string of essentially randomly generated fucking numbers is worth more than PLATINUM, you'd better pay attention.

AMD makes you money. nVidia makes you broke and doesn't deliver much of use, it seems.

Re:What about OpenCL 1.2 support? (0)

Anonymous Coward | about a year ago | (#45461593)

AMD doesn't make you money - you'd better have an ASIC rig today to have a chance to get a few bitcoins.

NVidia makes big money, toiling on calculations for oil companies and universities.

Bitcoin flies above 600 USD! (-1)

Anonymous Coward | about a year ago | (#45455869)

Hope you are making money today!

Re:Bitcoin flies above 600 USD! (1)

ArcadeMan (2766669) | about a year ago | (#45456883)

Not on my nVidia 320m, I'm not!

I should have bought USB ASIC miners when they were still available for cheap after the 75 USD price crash.

So, let me get this straight here... (1, Insightful)

fuzzyfuzzyfungus (1223518) | about a year ago | (#45455927)

IBM is announcing that their hardware is "Open", in the sense that it has PCIe slots, and Nvidia is announcing that they'd be happy to sell hardware to the sort of price-insensitive customers who will be buying Power8 gear?

I'm shocked.


More to it than that... (4, Insightful)

Junta (36770) | about a year ago | (#45456441)

IBM has announced willingness to license the Power8 design in much the same way that ARM licenses their stuff to a plethora of companies. IBM has seen what ARM has accomplished at the lower end in terms of having relevance in a market that might otherwise have gone to Intel given sufficient time, and sees motivation to do that in the datacenter, where Intel has significantly diminished the POWER footprint over the years. Intel operates at obscene margins due to the strength of their ecosystem and technology, and IBM is recognizing that it needs to build a more diverse ecosystem itself if it wants to compete with Intel. That, and the runway may be very short for such an opportunity. ARM as-is is not a very useful server platform, but that gap may close quickly before IBM can move, particularly as 64-bit ARM designs become more prevalent.

For nVidia, things are a bit more than 'sure, we'll take more money'. nVidia spends a lot of resources on driver development, and without their cooperation, using their GPU accelerator solution would get nowhere; nVidia has agreed to invest the resources to actually support Power. Here, nVidia is also feeling the pressure from Intel. Phi has promised easier development for accelerated workloads as a competitor to nVidia's solutions. As yet, Phi hasn't been everything people had hoped for, but the promise of easier development today and improvements later has nVidia rightly concerned about future opportunities in that space. Partnering with a company without such ambitions gives them a way to apply pressure against a platform that clearly has its sights on closing the opportunity for GPU acceleration in HPC workloads. Besides, IBM has the resources to help give a boost in terms of software development tooling that nVidia may lack.

Re:More to it than that... (0)

Anonymous Coward | about a year ago | (#45460791)

There is no working Linux driver available for a lot of Nvidia graphics cards. Nvidia is the single worst company regarding Linux support, according to Linus Torvalds. No Linux, no Nvidia for supercomputing.

Not to mention that there are known bugs from six years ago that still haven't been fixed in their latest drivers: https://bbs.archlinux.org/viewtopic.php?id=167195&p=6

No Linux means no corporation takes the risk with Nvidia. They are simply known not to give a shit about Linux over the last 2-3 years, and THE SITUATION GETS WORSE if you look at their latest beta drivers.

"FUCK YOU, NVIDIA!" (quote from Linus Torvalds)

It seems Nvidia's developers can't write drivers, or the company doesn't give a shit about Linux. AMD's developers may be bad as well, but at least they have an open-source driver, so good programmers can help.

Even more to it than *that*... (2)

Funk_dat69 (215898) | about a year ago | (#45460811)

According to the Reg [theregister.co.uk] (page 2) Power8 is going to have some sort of memory coherence function for accelerators. Allowing the GPU to be just another first-class processor with regards to memory could be a big win, performance-wise, not to mention making it easier to program.

The latest version of CUDA (version 6) has also just added features in the same area (unified memory mgmt). Anandtech [anandtech.com] has some more info about that.

This thing will be a beast!
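For reference, a minimal sketch of the unified-memory style CUDA 6 introduces: cudaMallocManaged hands back one pointer usable from both host and device, with the runtime migrating the data (illustrative only; error checking omitted):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));  // single pointer, visible to CPU and GPU
    for (int i = 0; i < n; ++i) x[i] = 1.0f;   // host writes directly, no cudaMemcpy
    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);
    cudaDeviceSynchronize();                   // required before the host touches managed data again
    printf("x[0] = %f\n", x[0]);               // prints 2.0
    cudaFree(x);
    return 0;
}
```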

Anyone remember the Cray? (3, Interesting)

msobkow (48369) | about a year ago | (#45456175)

Ah, the good old days.... when CPUs were measured in megahertz, and instructions took multiple clocks. :D

Really, what was the Cray when it first came out? One vector processing unit. How many does this new NVidia board have? How much faster are they than the original Cray?

Re:Anyone remember the Cray? (2)

bob_super (3391281) | about a year ago | (#45456371)

People spent fewer CPU cycles getting to the moon than are wasted every day on cat videos and Facebook.
Where's my flying car?

Re:Anyone remember the Cray? (2)

semi-extrinsic (1997002) | about a year ago | (#45456801)

The combined computing power of the guidance computers on the space shuttle (IBM AP-101) is less than the computing power of a new "smartwatch". Where are my glasses that project information from the internet? Oh wait...

Re: Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45456855)

Let's make the point explicit: a logic machine in a box cannot move earth or process fuel. Computation is worthless without muscle.

Re: Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45459077)

Computation is worthless without muscle.

What?

Re:Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45456671)

Instructions still take multiple cycles; it's just that now, if an instruction to multiply two numbers takes 4 cycles, you have 4 parallel multiply "engines" on the core, so that 4 multiply operations can be done in 4 cycles (simplifying; numbers are approximate).

Re:Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45457543)

It's more than just "multiple engines". Processors do so much pipelining, prefetching, and speculative branching nowadays that, while getting all the memory gathered and prepared to execute takes many cycles, an instruction pretty much finishes on the cycle after the previous one. This pushes the number of instructions per cycle pretty close to 1 in perfect situations, and with out-of-order execution most processors can even do more than 1 instruction per clock per core. Of course, in the real world you have to context-switch to the OS every so often, branches get predicted wrong, memory barriers get called, page faults happen and whatnot, so you don't actually get 1 instruction per clock... but Intel sure seems to come as close to this as possible.

Re:Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45461935)

Actually, it is much better than 1 instruction/cycle; for Haswell i5/i7 it is 2 SIMD FMA instructions/cycle/core.

So each core can run 2 multiplications and 2 additions per cycle, in parallel, on 8 numbers each: 2*2*8 = 32 single-precision "simple" operations per cycle (1 multiplication and 1 addition per FMA). Since there are 4 cores, one CPU does 128 single-precision operations per cycle (theoretically).
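In equation form, that peak-rate arithmetic (single precision, counting each FMA as one multiply plus one add):

```latex
\underbrace{2}_{\text{FMA issues/cycle}} \times \underbrace{8}_{\text{SP lanes per 256-bit vector}} \times \underbrace{2}_{\text{flops per FMA}} = 32 \ \tfrac{\text{flops}}{\text{cycle} \cdot \text{core}},
\qquad 32 \times 4 \ \text{cores} = 128 \ \tfrac{\text{flops}}{\text{cycle}}.
```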

Re:Anyone remember the Cray? (1)

ctrl-alt-canc (977108) | about a year ago | (#45457363)

One of the first computers I used was a Cray Y-MP. Now the PC on which I am typing this post is about four times faster, but I miss the Cray. I could take a nap [piggynap.com] on top of it - try doing that on a laptop now!

Re:Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45457623)

That Cray vector processor was pipelined, cooled with liquid helium, and had a heavily optimized Fortran compiler. Compare that with your Nvidia GPU, which has 800-stage pipelines and a good few thousand cores, all running at 600 MHz.

A simple comparison is running John Conway's Game of Life, either randomized or with the R-pentomino shape.

On an Atari 800, a 1.5 MHz 6502 assembly language program would take 18 hours to run 1000 generations.
On a mid-1980s Dell PC (20 MHz 80386), an x86 assembly language version took about 90 seconds.
A late-1990s TMS340x0 (66 MHz) graphics coprocessor could run the whole 1000 generations in 30 seconds.
But on a present-day GPU, a CUDA program takes microseconds.

Another yardstick is the Mandelbrot set. That would take hours on a 6502, a few minutes on an 80386 PC, and runs in real time on a current-day GPU using double floating-point precision. GPUs have moved on to 3D fractals.

A single double-precision vectorized GPU core of today, with a full OpenGL shader instruction set, will fit in the space of a single logic gate of a 1970s 6502.
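To give a rough sense of how such a benchmark maps onto CUDA, here is a minimal sketch of a 1000-generation Game of Life run (one thread per cell on a toroidal grid; the 1024x1024 size and empty seed board are placeholders, not the benchmark setup described above, and error checking is omitted):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One generation: each thread updates one cell of a w x h toroidal grid.
__global__ void life_step(const unsigned char *in, unsigned char *out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int n = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            n += in[((y + dy + h) % h) * w + ((x + dx + w) % w)];  // toroidal wrap
        }
    unsigned char alive = in[y * w + x];
    out[y * w + x] = (n == 3 || (alive && n == 2)) ? 1 : 0;
}

int main() {
    const int w = 1024, h = 1024, gens = 1000;
    unsigned char *a, *b;
    cudaMalloc(&a, w * h);
    cudaMalloc(&b, w * h);
    cudaMemset(a, 0, w * h);   // empty board; seed a pattern here if desired
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    for (int g = 0; g < gens; ++g) {
        life_step<<<grid, block>>>(a, b, w, h);
        unsigned char *t = a; a = b; b = t;  // ping-pong between the two buffers
    }
    cudaDeviceSynchronize();
    printf("ran %d generations on a %dx%d grid\n", gens, w, h);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```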

Re:Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45459123)

Actually, the Cray was cooled in a Fomblin (later Fluorinert) bath, and it also had a very good C/Pascal compiler.

Re:Anyone remember the Cray? (0)

Anonymous Coward | about a year ago | (#45457683)

Remember? They still do good work.

Re:Anyone remember the Cray? (1)

gman003 (1693318) | about a year ago | (#45458185)

Really, what was the Cray when it first came out? One vector processing unit. How many does this new NVidia board have? How much faster are they than the original Cray?

2,880 "cores", each able to do one single-precision FMA per clock (double-precision takes three clocks for this card, but 24 clocks for most gaming GPUs). These are organized into fifteen "SMX Units", which have 192 ALUs apiece (with four schedulers and eight dispatch units). The exact clock rate is variable, as it will boost the clock speed above "normal" levels, thermal and power conditions permitting, but 1GHz is a good enough approximation. This comes out to about 1.92TFLOPS, 128GFLOPS per SMX, or (interestingly) 666 (point six repeating) MFLOPS per core.

The Cray-1 worked on arrays of up to 64 units (each 64-bits wide) at a time, and it could execute (in optimal cases) two instructions per clock. At 80MHz, that comes out to 160MFLOPS on a single core.

By that math, a single Kepler core is about four times as powerful as a Cray-1, and a full SMX is eight hundred times as powerful.

(I won't be surprised if someone corrects me on something, either miscalculating the Cray-1 or maybe even Kepler - feel free to tell me I'm wrong)
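Laying that arithmetic out explicitly (double precision at one FMA per three clocks, the assumed 1 GHz, and an FMA counted as two flops):

```latex
2880 \times \frac{2\ \text{flops}}{3\ \text{clocks}} \times 1\ \text{GHz} \approx 1.92\ \text{TFLOPS},\qquad
\frac{1.92\ \text{TFLOPS}}{15\ \text{SMX}} = 128\ \text{GFLOPS/SMX},\qquad
\frac{1.92\ \text{TFLOPS}}{2880\ \text{cores}} \approx 667\ \text{MFLOPS/core};

\frac{667\ \text{MFLOPS}}{160\ \text{MFLOPS (Cray-1)}} \approx 4.2,\qquad
\frac{128\,000\ \text{MFLOPS}}{160\ \text{MFLOPS}} = 800.
```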

Re:Anyone remember the Cray? (1)

msobkow (48369) | about a year ago | (#45459907)

Now imagine a Cray-sized cabinet stuffed with those cards.

Bwahahahahaha! Power!!!!

Speaking of the 21st Century (1)

justthinkit (954982) | about a year ago | (#45459759)

By any chance, is nVidia planning on doing an end-around on Microsoft with the graphics card hosting a full-blown operating system? 12GB of RAM gets you plenty of working space.

Re:Speaking of the 21st Century (0)

Anonymous Coward | about a year ago | (#45460083)

Someone with a lot of free time should try porting Linux to one of these cards. There is an open-source C/C++ compiler backend based on Clang/LLVM: http://llvm.org/docs/NVPTXUsage.html

I'm sure there would be a ton of issues with getting this working, and it is absolutely not for the faint of heart, but if you can boot Linux in a JavaScript emulator, someone should be able to make this happen.

DRAM bandwidth (1)

green is the enemy (3021751) | about a year ago | (#45456319)

NVIDIA seems behind AMD in moving to 512-bit wide GDDR5: this K40 still has 384-bit. Also worrying is whether significant performance improvements will really be possible beyond that point. GPU code is notorious for easily becoming DRAM bandwidth limited. Cache on the GPU is very small compared to the computing resources.

Re:DRAM bandwidth (3, Informative)

Anonymous Coward | about a year ago | (#45456725)

NVIDIA seems behind AMD in moving to 512-bit wide GDDR5: this K40 still has 384-bit.

Right now memory bus width is a die size tradeoff. NVIDIA can get GK110's memory controller up to 7Gbps (GTX 780 Ti), which on a 384-bit bus makes for 336GB/sec, but relatively speaking it's a big honking memory controller. AMD's 512-bit memory controller in Hawaii isn't designed to clock up nearly as high, topping out at 5Gbps, or 320GB/sec. But it's designed to be particularly small, smaller than even AMD's old 384-bit memory controller on Tahiti.

So despite NVIDIA's narrower bus, they actually have more available memory bandwidth than AMD does. It's not a huge difference, but it's a good reminder of the fact that there are multiple ways to pursue additional memory bandwidth.
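The arithmetic behind those figures (bandwidth = bus width in bytes x per-pin data rate):

```latex
\frac{384\ \text{bits}}{8} \times 7\ \text{Gbps} = 336\ \text{GB/s (GK110)},\qquad
\frac{512\ \text{bits}}{8} \times 5\ \text{Gbps} = 320\ \text{GB/s (Hawaii)}.
```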

Re:DRAM bandwidth (0)

Anonymous Coward | about a year ago | (#45456805)

True. The next step will probably be stacked silicon or TSVs; that way they can put memory closer to the GPU core and reach 2 or maybe even 4 times more bandwidth using similar memory technology. Even though we have insane GPU memory bandwidth nowadays, it is still the most limiting factor of GPUs, not the GFLOPS themselves.

That is, if you ignore artificially crippled FP64 performance, which should be 1/2 of FP32 performance and not 1/8 like on new AMD cards, or 1/24 of FP32 on new nVidia cards...

Integer width is not even 32-bit; it's actually 24-bit on AMD hardware. If AMD were smart and made a part with normal (1/2) FP64 performance, I would buy 14 R9 290s as soon as ASUS/Gigabyte make a proper cooling solution for it. I will not stand being cheated into paying 10 times more than gamers for the same hardware just to get normal double-precision performance in the FirePro version. 10% more is fine (after all, it has ECC memory), but 10 times more is highway robbery.

Re:DRAM bandwidth (0)

Anonymous Coward | about a year ago | (#45457115)

So get a $999 Titan. 1/3 rate FP64 (which is still the best around) and 1.3TF peak.

Want better price/perf and don't care about density and power? A $299 R280X is 1/4 rate FP64 and still a hair over 1TF peak.

You seem to be under the mistaken impression that there's some law that says there has to be half as many FP64 units as FP32 units...
290[X] aka Hawaii natively only has 1/8 rate FP64. It's a big part of how AMD managed to reduce area/shader vs. Tahiti.
Similar for GK104, it really only has 1/24 rate FP64.

Now GK110 crippled to 1/24 rate FP64 (aka GTX780/780Ti) ... that's simply market segmentation.

Integer width is not even 32-bit; it's actually 24-bit on AMD hardware.

Wrong. AMD had 32 bit integer width since the R600 aka HD3xxx series.

Re:DRAM bandwidth (0)

Anonymous Coward | about a year ago | (#45459447)

So get a $999 Titan. 1/3 rate FP64 (which is still the best around) and 1.3TF peak.

Yes, I agree 1/3 FP64 is realistic (not crippled), but it is still unfair that they are making me pay 250% of the price for uncrippled hardware (the Titan is 250% more expensive than gamer cards with a similar transistor budget / 32-bit TFLOPS potential); that is the unfairness I was talking about. (BTW, I have 4 Titans, but I would replace them and fill all my PCIe slots on 2 workstations with 14 290Xs if AMD didn't try to rob me blind.) It is not that it's expensive (it actually is expensive, but that is not the main reason); the main reason is that it is unfair of both AMD and nVidia. I don't like it when merchants are obviously trying to make me pay several times more than other people for the same product just because I want to use it to its fullest potential.

Want better price/perf and don't care about density and power? A $299 R280X is 1/4 rate FP64 and still a hair over 1TF peak.

Actually, you are 100% correct; I don't know how I missed this un-crippled jewel (OK, crippled a bit). I guess I was salivating too much over the R9 290's potential memory bandwidth (after third-party cooling solutions start running its 512-bit bus at 7 GHz+).

You seem to be under the mistaken impression that there's some law that says there has to be half as many FP64 units as FP32 units...

It's not about a law; it is about research that Intel, ARM, and several other chip companies did several years ago (I spent quite a bit of time reading the research papers and some patents). It would take less than 1% additional transistor budget (not even counting cache) to enable 2 FP32 units to work as 1 FP64 unit; no dedicated FP64 units would be needed at all, so FP64 would run at 50% of FP32. I can't really believe they would try to save 60 million transistors on a 6-billion-transistor chip and in the process cripple FP64 from 50% of FP32 down to 33% (nVidia Titan) or 25% (AMD R280X).

290[X] aka Hawaii natively only has 1/8 rate FP64. It's a big part of how AMD managed to reduce area/shader vs. Tahiti.
Similar for GK104, it really only has 1/24 rate FP64.

Now GK110 crippled to 1/24 rate FP64 (aka GTX780/780Ti) ... that's simply market segmentation.

I still believe those chips have a 1/2 to 1/3 rate and that it is just limited by burning off some resistor on the chip for the "gamer edition" parts.

Integer width is not even 32-bit; it's actually 24-bit on AMD hardware.

Wrong. AMD had 32 bit integer width since the R600 aka HD3xxx series.

You are partially right: AMD does have 32-bit integers, BUT on the GCN architecture the scalar ALU (1 per CU) supports 32-bit integers, while the vector SIMD units just emulate them using the FP32 hardware (using just the mantissa part with a fixed exponent), so they have 24-bit width. As a result, 32-bit integers work at 1/16 the speed of 24-bit ones (1 CU has 1 scalar ALU but 16 SIMD lanes).

WTF? IBM is promoting big time to ditch GPUs (0)

Anonymous Coward | about a year ago | (#45456361)

They go around campuses pitching funding and hiring graduates to get rid of GPUs and use FPGAs instead. According to what they said in the interview, they even have a large software group doing this. They claim that GPUs have no future, and now they're partnering with NVIDIA? It seems to be their quick-fix strategy until their high-level-language-to-FPGA compilers and Lime have matured.

Re:WTF? IBM is promoting big time to ditch GPUs (2)

dreamchaser (49529) | about a year ago | (#45456829)

They are pursuing both, and in fact the Power8 will support both GPU and FPGA acceleration add-ons.

Re:WTF? IBM is promoting big time to ditch GPUs (0)

Anonymous Coward | about a year ago | (#45456945)

Actually, IBM is right: FPGAs can achieve much higher performance than a CPU/GPU. Unfortunately, they are extremely hard to program for (especially "runtime reconfigurable" FPGAs), so there will always have to be some "easy" solution like C++ or CUDA/OpenCL.

Venerable? (0)

Anonymous Coward | about a year ago | (#45456859)

If Power is venerable, what is the correct term for x86?

x86 works reasonably well with today's transistor budgets, but its evolution has been layers upon layers of kludges on top of workarounds on top of band-aids (rinse and repeat) for over 3 decades. Some details (like how some instructions affect flags) are straight from the 8008 (earlier than the 8080); others, like the x87 stack, are thankfully being forgotten but are still there for backward compatibility. Even z Servers, with about 50 years of existence since the IBM 360, look clean compared with x86 from an architectural point of view.

Re: Venerable? (0)

Anonymous Coward | about a year ago | (#45456975)

To a non-tech, you appear to be correct, but I want insight. What pressures going forward would inspire the entities that influence this sort of thing to start fresh? What in the competition between Intel IBM etc causes this apparently extreme level of backward compatibility? What is stopping them from building a niche product that abandons backward compatibility or does that already exist?

Re: Venerable? (1)

epyT-R (613989) | about a year ago | (#45457713)

What pressures going forward would inspire the entities that influence this sort of thing to start fresh?

A licensing system that allows hardware vendors/users to port/recompile code to current designs? x86 has a legacy because of all that binary-only Windows software out there.

What in the competition between Intel IBM etc causes this apparently extreme level of backward compatibility?

Their customers want it so they don't have to buy new overpriced binaries every time they upgrade hardware. If they have to upgrade the software as well as the hardware, why not consider a competitor?

What is stopping them from building a niche product that abandons backward compatibility or does that already exist?

Sure, this is done from time to time, like those ARM-based Windows RT tablets, which didn't do well because they couldn't run x86 software.

Tesla GPU? (2)

ctrl-alt-canc (977108) | about a year ago | (#45457301)

Does it catch fire?

Price (0)

Anonymous Coward | about a year ago | (#45459861)

What's the price - and is there any software to help me mine bitcoin?
