
Nvidia launches fastest "parallel processor"

angry tapir (1463043) writes | more than 3 years ago

Supercomputing 1

angry tapir writes "Nvidia has announced the Tesla M2090 graphics processor, which the company calls the world's fastest "parallel processor" for high-performance computing. The M2090 is a graphics processing unit that has 512 cores and is able to perform specific math and scientific calculations up to 30 percent faster than its predecessor, the Tesla M2070 GPU, which has 448 cores. The M2090 can deliver peak performance of around 1330 gigaflops, according to the company."
Link to Original Source
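As a back-of-the-envelope check (the roughly 1.3 GHz shader clock used here is an assumption, not something stated in the summary), the quoted peak matches the usual single-precision calculation of cores times two FLOPs per fused multiply-add per cycle times clock rate:

\[ 512\ \text{cores} \times 2\,\tfrac{\text{FLOPs}}{\text{cycle}} \times 1.3\ \text{GHz} \approx 1331\ \text{GFLOPS} \]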


1 comment


The future of Supercomputing? (1)

JohnConnor (587121) | more than 3 years ago | (#36160522)

The popularity of these GPUs baffles me. They are hard to program and very limited in what they can do, not to mention the horrible transfers to main memory, yet because there is no other foreseeable technology coming in the next 5 years or so, they are becoming the standard for massively parallel programming on a budget. Every university and its dog has GPU projects, with wild performance claims, usually measuring a code they spent years optimizing for the GPU against the original code running un-optimized on one CPU thread. Yet in the real world there are very few applications of the GPU. The memory transfer bottleneck amplifies Amdahl's law.

I work in a mission-critical supercomputing center and it will be years before we adopt GPUs, because of the manpower required to convert existing code, the uncertainty of the future of the technology, the quasi vendor lock-in situation that we have now with NVidia, and the fact that vendor support is not yet where it should be. Yet I am watching this technology being slowly adopted by everyone for lack of a better alternative.

Thinking about it, these are pretty sad times we live in, in terms of supercomputing. Don't believe me? Ask the vendors what exciting new technology they have coming. They don't have any.
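To make the transfer-bottleneck point concrete, here is a minimal CUDA sketch (not from the original post) that times the host-to-device copies, a trivial element-wise kernel, and the copy back. The array size and the saxpy-style kernel are arbitrary illustrative choices; the point is simply that for work this light, the PCIe copies tend to dominate the total time.

```cuda
// Illustrative only: times host->device copies, a trivial kernel, and the
// device->host copy separately, to show how transfers can dominate.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;                    // ~16M floats (~64 MB per array)
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);   // host -> device
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);    // the actual "work"
    cudaEventRecord(t2);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);   // device -> host
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float ms_in, ms_kernel, ms_out;
    cudaEventElapsedTime(&ms_in, t0, t1);
    cudaEventElapsedTime(&ms_kernel, t1, t2);
    cudaEventElapsedTime(&ms_out, t2, t3);
    printf("copy in: %.2f ms  kernel: %.2f ms  copy out: %.2f ms\n",
           ms_in, ms_kernel, ms_out);

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

Build with nvcc in the usual way (e.g. `nvcc -O2 saxpy_timing.cu -o saxpy_timing`; the file name is just an example). On hardware of that era, the copies for a memory-bound kernel like this typically take an order of magnitude longer than the kernel itself, which is the Amdahl-style serial fraction the comment is describing: the GPU speedup only pays off when enough computation stays resident on the device between transfers.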
